To Our Valued Readers: In a CertCities.com article dated December 15, 2001, Oracle certification was ranked #2 in a list of the “10 Hottest Certifications for 2002.” This shouldn’t come as a surprise, especially when you consider the fact that the OCP program nearly tripled in size (from 30,000 to 80,000) in the last year. Oracle continues to expand its dominance in the database market, and as companies begin integrating Oracle9i systems into their IT infrastructure, you can be assured of high demand for professionals with the Oracle Certified Associate and Oracle Certified Professional certifications. Sybex is proud to have helped thousands of Oracle certification candidates prepare for the exams over the years, and we are excited about the opportunity to continue to provide professionals like you with the skills needed to succeed in the highly competitive IT industry. Our authors and editors have worked hard to ensure that the Oracle9i Study Guide you hold in your hands is comprehensive, in-depth, and pedagogically sound. We’re confident that this book will meet and exceed the demanding standards of the certification marketplace and help you, the Oracle9i certification candidate, succeed in your endeavors. Good luck in pursuit of your Oracle9i certification!
Neil Edde Associate Publisher—Certification Sybex, Inc.
Software License Agreement: Terms and Conditions The media and/or any online materials accompanying this book that are available now or in the future contain programs and/or text files (the “Software”) to be used in connection with the book. SYBEX hereby grants to you a license to use the Software, subject to the terms that follow. Your purchase, acceptance, or use of the Software will constitute your acceptance of such terms. The Software compilation is the property of SYBEX unless otherwise indicated and is protected by copyright to SYBEX or other copyright owner(s) as indicated in the media files (the “Owner(s)”). You are hereby granted a single-user license to use the Software for your personal, noncommercial use only. You may not reproduce, sell, distribute, publish, circulate, or commercially exploit the Software, or any portion thereof, without the written consent of SYBEX and the specific copyright owner(s) of any component software included on this media. In the event that the Software or components include specific license requirements or end-user agreements, statements of condition, disclaimers, limitations or warranties (“End-User License”), those End-User Licenses supersede the terms and conditions herein as to that particular Software component. Your purchase, acceptance, or use of the Software will constitute your acceptance of such End-User Licenses. By purchase, use or acceptance of the Software you further agree to comply with all export laws and regulations of the United States as such laws and regulations may exist from time to time. Software Support Components of the supplemental Software and any offers associated with them may be supported by the specific Owner(s) of that material, but they are not supported by SYBEX. Information regarding any available support may be obtained from the Owner(s) using the information provided in the appropriate read.me files or listed elsewhere on the media. Should the manufacturer(s) or other Owner(s) cease to offer support or decline to honor any offer, SYBEX bears no responsibility. This notice concerning support for the Software is provided for your information only. SYBEX is not the agent or principal of the Owner(s), and SYBEX is in no way responsible for providing any support for the Software, nor is it liable or responsible for any support provided, or not provided, by the Owner(s). Warranty SYBEX warrants the enclosed media to be free of physical defects for a period of ninety (90) days after purchase. The Software is not available from SYBEX in any other form or media than that enclosed herein or posted to www.sybex.com.
If you discover a defect in the media during this warranty period, you may obtain a replacement of identical format at no charge by sending the defective media, postage prepaid, with proof of purchase to: SYBEX Inc. Product Support Department 1151 Marina Village Parkway Alameda, CA 94501 Web: http://www.sybex.com After the 90-day period, you can obtain replacement media of identical format by sending us the defective disk, proof of purchase, and a check or money order for $10, payable to SYBEX. Disclaimer SYBEX makes no warranty or representation, either expressed or implied, with respect to the Software or its contents, quality, performance, merchantability, or fitness for a particular purpose. In no event will SYBEX, its distributors, or dealers be liable to you or any other party for direct, indirect, special, incidental, consequential, or other damages arising out of the use of or inability to use the Software or its contents even if advised of the possibility of such damage. In the event that the Software includes an online update feature, SYBEX further disclaims any obligation to provide this feature for any specific duration other than the initial posting. The exclusion of implied warranties is not permitted by some states. Therefore, the above exclusion may not apply to you. This warranty provides you with specific legal rights; there may be other rights that you may have that vary from state to state. The pricing of the book with the Software by SYBEX reflects the allocation of risk and limitations on liability contained in this agreement of Terms and Conditions. Shareware Distribution This Software may contain various programs that are distributed as shareware. Copyright laws apply to both shareware and ordinary commercial software, and the copyright Owner(s) retains all rights. If you try a shareware program and continue using it, you are expected to register it. Individual programs differ on details of trial periods, registration, and payment. Please observe the requirements stated in appropriate files. Copy Protection The Software in whole or in part may or may not be copy-protected or encrypted. However, in all cases, reselling or redistributing these files without authorization is expressly forbidden except as specifically provided for by the Owner(s) therein.
Thank you Sybex for trusting me to work on two books at the same time. I would like to thank the following wonderful people at Sybex for their support, patience, and hard work: Jeff Kellum (Development Editor) for his support, valuable comments, and getting us going; Elizabeth Campbell (Production Editor) for her patience and understanding and for making sure every piece of the book ties together and is on schedule. I know many more people from Sybex contributed to this book; I thank each one of them for their hard work and the high quality of work. I thank Pat Coleman (Editor) for her hard work. Pat, your edits made a difference in the chapters. I thank Ashok Hanumanth and Betty MacEwan for their technical review and comments. Bob, thank you for completing the chapters well ahead of schedule. It would not have been possible for me to participate in this project if my parents had not come over to the United States from India to take care of our son Joshua. I thank my parents for taking care of the baby and house for the past five months. Thank you, Shiji, for your endless support and love. Last, but not least, I thank my colleagues for their support and friendship. Thank you, Wendy, for understanding me so well and all the help you provided. Thank you all—you are the best to work with. —Biju Thomas I would like to thank all the folks at Sybex that made this a most enjoyable and rewarding experience, including Elizabeth Campbell and Jeff Kellum, who reinforced my attention to detail. Thanks go to Biju for not letting me write too many of these chapters myself. Thanks also to Pat Coleman, who filled in the gaps from my college writing courses, and to Ashok and Betty for their insightful comments and suggestions. This book wouldn’t be possible without the love and support from my family throughout the long nights and weekends when I still managed to find time to give the kids a bath and read books before bedtime. I loved every minute of it. Thanks also to my professional colleagues, both past and present, who provided me with inspiration, support, and guidance and pushed me a little further to take a risk now and then: Joe Johnson, Julie Krause, Karen Kressin, Chuck Dunbar, and that math teacher in high school, whose name eludes me at the moment, who introduced me to computers on a DEC PDP-8 with a teletype and a paper tape reader. —Bob Bryla
There is high demand for professionals in the information technology (IT) industry, and Oracle certifications are the hottest credential in the database world. You have made the right decision to pursue certification, because being Oracle certified will give you a distinct advantage in this highly competitive market. Many readers may already be familiar with Oracle and do not need an introduction to the Oracle database world. For those who aren't familiar with the company, Oracle, founded in 1977, sold the first commercial relational database and is now the world's leading database company and second-largest independent software company, with revenues of more than $10 billion, serving more than 145 countries. Oracle databases are the de facto standard for large Internet sites, and Oracle advertisers are boastful but honest when they proclaim, "The Internet Runs on Oracle." Almost all big Internet sites run Oracle databases. Oracle's penetration of the database market runs deep and is not limited to dot-com implementations. Enterprise resource planning (ERP) application suites, data warehouses, and custom applications at many companies rely on Oracle. Even during weak economic times, demand for DBA resources remains higher than demand for most other IT skills. This book is intended to help you on your exciting path toward becoming an Oracle9i Oracle Certified Associate (OCA), which is the first step on the path toward Oracle Certified Professional (OCP) and Oracle Certified Master (OCM) certification. Basic knowledge of Oracle SQL is an advantage when reading this book but is not mandatory. Using this book and a practice database, you can start learning Oracle and pass the 1Z0-031 test: Oracle9i Database: Fundamentals I.
Why Become an Oracle Certified Professional? The number one reason to become an OCP is to gain more visibility and greater access to the industry's most challenging opportunities. Oracle certification is the best way to demonstrate your knowledge and skills in Oracle database systems. The certification tests are scenario-based, which is the most effective way to assess your hands-on expertise and critical problem-solving skills.
Certification is proof of your knowledge and shows that you have the skills required to support Oracle core products. The Oracle certification program can help a company to identify proven performers who have demonstrated their skills and who can support the company’s investment in Oracle technology. It demonstrates that you have a solid understanding of your job role and the Oracle products used in that role. OCPs are among the best paid in the IT industry. Salary surveys consistently show the OCP certification to yield higher salaries than other certifications, including Microsoft, Novell, and Cisco. So, whether you are beginning a career, changing careers, securing your present position, or seeking to refine and promote your position, this book is for you!
Oracle Certifications Oracle certifications follow a track that is oriented toward a job role. There are database administration, database operator, and developer tracks. Within each track, Oracle has a three-tiered certification program:
The first tier is the Oracle Certified Associate (OCA). OCA certification typically requires you to complete two exams, the first via the Internet and the second in a proctored environment.
The next tier is the Oracle Certified Professional (OCP), which builds upon and requires an OCA certification. To obtain OCP certification, you must pass additional proctored exams.
The third and highest tier is the Oracle Certified Master (OCM). OCM certification builds upon and requires OCP certification. To achieve OCM certification, you must attend two advanced Oracle Education classroom courses (from a specific list of qualifying courses) and complete a practicum exam.
The following material will address only the database administration track, because at the time of this writing, it was the only 9i track offered by Oracle. The other tracks have 8 and 8i certifications and will undoubtedly have 9i certifications. See the Oracle website at http://www.oracle.com/education/certification for the latest information.
Oracle9i Certified Database Associate The role of the database administrator (DBA) has become a key to success in today's highly complex database systems. The best DBAs work behind the scenes, but are in the spotlight when critical issues arise. They plan, create, and maintain the database and ensure that it is available for the business. They are always watching the database for performance issues and working to prevent unscheduled downtime. The DBA's job requires a broad understanding of the Oracle database architecture and expertise in solving problems. The Oracle9i Certified Database Associate is the entry-level certification for the database administration track and is required to advance toward the more senior certification tiers. This certification requires you to pass two exams that demonstrate your knowledge of Oracle basics:
1Z0-007: Introduction to Oracle9i: SQL
1Z0-031: Oracle9i Database: Fundamentals I
The 1Z0-007 exam, Introduction to Oracle9i: SQL, is offered on the Internet. The 1Z0-031 exam, Oracle9i Database: Fundamentals I, is offered at a Sylvan Prometric facility. Oracle9i Certified Database Administrator (DBA) The OCP tier of the database administration track challenges you to demonstrate your continuing experience and knowledge of Oracle technologies. The Oracle9i Certified Database Administrator certification requires achievement of the Certified Database Associate tier, as well as passing the following two exams at a Sylvan Prometric facility:
1Z0-032: Oracle9i Database: Fundamentals II
1Z0-033: Oracle9i Database: Performance Tuning
Oracle9i Certified Master The Oracle9i Certified Master is the highest level of certification that Oracle offers. To become a certified master, you must first achieve Certified Database Administrator status, then complete two advanced instructor-led classes at an Oracle education facility, and finally pass a hands-on exam at Oracle Education. The classes and practicum exam are offered only at an Oracle education facility and may require travel. The advanced classes that will count toward your OCM requirement include the following:
Oracle9i: High Availability in an Internet Environment
Oracle9i Database: Implement Partitioning
Oracle9i: Real Application Clusters Implementation
Oracle9i: Data Warehouse Administration
Oracle9i: Advanced Replication
Oracle9i: Enterprise Manager
Passing Scores The 1Z0-031: Oracle9i Database: Fundamentals I exam consists of two sections, basic and mastery. At the time of this writing, the passing score for the basic section is 71 percent, and the passing score for the mastery section is 56 percent. Please download and read the Oracle9i Certification candidate guide before taking the exam. The basic section covers the fundamental concepts, and the mastery section covers more difficult questions, mostly based on practice and experience. You must pass both sections to pass the exam. The objectives, test scoring, number of questions, and so on are listed at http://www.oracle.com/education/certification. More Information You can find the most current information about Oracle certification at http://www.oracle.com/education/certification. Follow the Certification link and choose the track that interests you. Read the Candidate Guide for the test objectives and test contents, and keep in mind that they can change at any time without notice.
OCA/OCP Study Guides The Oracle9i database administration track certification consists of four tests: two for OCA level and two more for OCP level. Sybex offers several study guides to help you achieve this certification:
OCA/OCP: Introduction to Oracle9i™ SQL Study Guide (exam 1Z0-007: Introduction to Oracle9i: SQL)
OCA/OCP: Oracle9i™ DBA Database Fundamentals I Study Guide (exam 1Z0-031: Oracle9i Database: Fundamentals I)
OCP: Oracle9i™ DBA Fundamentals II Study Guide (exam 1Z0-032: Oracle9i Database: Fundamentals II)
OCP: Oracle9i™ DBA Performance Tuning Study Guide (exam 1Z0-033: Oracle9i Database: Performance Tuning)
Additionally, these four books are offered in a boxed set: OCP: Oracle9i™ DBA Certification Kit. Skills Required for DBA Certification To pass the certification exams, you need to master the following skills:
Write SQL SELECT statements that display data from either single or multiple tables.
Restrict, sort, aggregate, and manipulate data using both single and group functions.
Create and manage tables, views, constraints, synonyms, sequences, and indexes.
Create users and roles to control user access and maintain security.
Understand Oracle Server architecture (database and instance).
Understand the physical and logical storage of the database, and be able to manage space allocation and growth.
Manage data, including its storage, loading, and reorganization.
Manage redo logs, automatic undo, and rollback segments.
Use globalization features to choose a database character set and National Language Support (NLS) parameters.
Configure Net8 on the server side and the client side.
Use backup and recovery options.
Archive redo log files and perform hot backups.
Perform backup and recovery operations using Recovery Manager (RMAN).
Use data dictionary views and set database parameters.
Configure and use multithreaded server (MTS) and Connection Manager.
Use the tuning/diagnostics tools STATSPACK, TKPROF, and EXPLAIN PLAN.
Tune the size of data blocks, the shared pool, the buffer caches, and rollback segments.
Diagnose contention for latches, locks, and rollback segments.
Tips for Taking the OCP Exam Use the following tips to help you prepare for and pass each exam.
Each OCP test contains about 55–80 questions to be completed in 90 minutes. Answer the questions you know first so that you do not run out of time.
The answer choices for many questions on the exam look identical at first. Read the questions carefully. Do not just jump to conclusions. Be sure that you clearly understand exactly what each question asks.
Most of the test questions are scenario-based. Some scenarios contain nonessential information and exhibits. You need to be able to identify what’s important and what’s not important.
Do not leave any questions unanswered. There is no negative scoring. After selecting an answer, you can mark a difficult question or one that you’re unsure of and come back to it later.
When answering questions that you are not sure about, use a process of elimination to get rid of the obviously incorrect answers first. Doing this greatly improves your odds if you need to make an educated guess.
If you’re not sure of your answer, mark it for review and then look for other questions that might help you eliminate any incorrect answers. At the end of the test, you can go back and review the questions that you marked for review.
Where Do You Take the Exam? You take the Introduction to Oracle9i: SQL exam (1Z0-007) via the Internet. To register for an online Oracle certification exam, you will need an Internet connection of at least 33Kbps, but a 56Kbps, LAN, or broadband connection is recommended. You will also need either Internet Explorer 5 (or later) or Netscape 4.x (Oracle does not recommend Netscape 5.x or 6.x). At the time of this writing, the online 1Z0-007 exam is $90. If you do not
have a credit card to use for payment, you will need to contact Oracle to purchase a voucher. You can pay with a certification voucher, promo codes, or credit card. You can take the other exams at any of the more than 800 Sylvan Prometric Authorized Testing Centers around the world. For the location of a testing center near you, call 1-800-891-3926. Outside the United States and Canada, contact your local Sylvan Prometric Registration Center. Usually, you can take the tests in any order. To register for a proctored Oracle Certified Professional exam at a Sylvan Prometric test center, do the following:
Determine the number of the exam you want to take.
Register with Sylvan Prometric online at http://www.2test.com or, in North America, by calling 1-800-891-EXAM (800-891-3926). At this point, you will be asked to pay in advance for the exam. At the time of this writing, the exams are $125 each and must be taken within one year of payment.
When you schedule the exam, you’ll get instructions regarding all appointment and cancellation procedures, the ID requirements, and information about the testing-center location. You can schedule an exam as much as six weeks in advance or as soon as one working day before the day you want to take it. If something comes up and you need to cancel or reschedule your exam appointment, contact Sylvan Prometric at least 24 hours in advance.
What Does This Book Cover? This book covers everything you need to pass the Oracle9i Database: Fundamentals I exam. This exam is part of the Oracle9i Certified Database Associate certification tier in the database administration track. It teaches you the basics of Oracle Architecture and Administration. Each chapter begins with a list of exam objectives. Chapter 1 Discusses the new features of Oracle9i database compared with the previous versions. Chapter 2 Explains the Oracle9i architecture and its main components. Chapter 3 Discusses the various tools available to DBAs, connecting to the Oracle database, and startup/shutdown of the database.
Chapter 4 Discusses how to create a database manually as well as how to use the Database Configuration Assistant. It also discusses the Oracle data dictionary. Chapter 5 Explains the uses and contents of the control files and redo log files. Chapter 6 Discusses tablespaces and data files. The logical structure of the tablespace within the database and Oracle Managed Files are discussed. Chapter 7 Explains logical storage structures such as blocks, extents, and segments and managing undo data. Chapter 8 Discusses creating tables with the various datatypes and options available to store data. Creating and managing indexes and constraints are discussed. Chapter 9 Introduces database and data security. Setting up profiles, users, privileges, and roles is discussed. It also discusses Globalization Support. Each chapter ends with Review Questions that are specifically designed to help you retain the knowledge presented. To really nail down your skills, read and answer each question carefully.
How to Use This Book This book can provide a solid foundation for the serious effort of preparing for the OCA database administration exam track. To best benefit from this book, use the following study method:
1. Take the Assessment Test immediately following this introduction. (The answers are at the end of the test.) Carefully read over the explanations for any questions you get wrong, and note which chapters the material comes from. This information should help you plan your study strategy.
2. Study each chapter carefully, making sure that you fully understand the information and the test objectives listed at the beginning of each chapter. Pay extra close attention to any chapter related to questions you missed in the Assessment Test.
3. Complete all hands-on exercises in the chapter, referring to the chapter so that you understand the reason for each step you take. If you do not have an Oracle database available, be sure to study the examples carefully. Answer the Review Questions related to that chapter. (The answers appear at the end of each chapter, after the "Review Questions" section.)
4. Note the questions that confuse or trick you, and study those sections of the book again.
5. Before taking the exam, try your hand at the Bonus Exams included on the CD that comes with this book. The questions on these exams appear only on the CD. This will give you a complete overview of what you can expect to see on the real test.
6. Remember to use the products on the CD included with this book. The electronic flashcards and the EdgeTest exam preparation software have been specifically designed to help you study for and pass your exam. You can use the electronic flashcards on your Windows computer or on your Palm device.
To learn all the material covered in this book, you'll need to apply yourself regularly and with discipline. Try to set aside the same time period every day to study, and select a comfortable and quiet place to do so. If you work hard, you will be surprised at how quickly you learn this material. All the best!
What’s on the CD? We have worked hard to provide some really great tools to help you with your certification process. All the following tools should be loaded on your workstation when you’re studying for the test. The EdgeTest for Oracle Certified DBA Preparation Software Provided by EdgeTek Learning Systems, this test-preparation software prepares you to pass the Oracle9i Database: Fundamentals I exam. In this test, you will find all the questions from the book, plus two Bonus Exams that appear exclusively on the CD. You can take the Assessment Test, test yourself by chapter, take the Practice Exam that appears in the book or on the CD, or take an exam randomly generated from all the questions. Electronic Flashcards for PC and Palm Devices After you read the OCA/OCP: Oracle9i Database: Fundamentals I Study Guide, read the Review Questions at the end of each chapter, and study the Practice Exams included in the book and on the CD. But wait, there’s more!
Test yourself with the flashcards included on the CD. If you can get through these difficult questions and understand the answers, you’ll know that you’re ready for the exam. The flashcards include 150 questions specifically written to hit you hard and make sure you are ready for the exam. With the Review Questions, Practice Exams, and flashcards, you should be more than prepared for the exam. OCA/OCP: Oracle9i Database: Fundamentals I Study Guide in PDF Sybex is now offering the Oracle certification books on CD so you can read the book on your PC or laptop. It is in Adobe Acrobat format. Acrobat Reader 5 is also included on the CD. This will be extremely helpful to readers who fly or commute on a bus or train and don’t want to carry a book, as well as to readers who find it more comfortable reading from their computer.
How to Contact the Authors To contact Biju Thomas, you can e-mail him at [email protected] or visit his website for DBAs at http://www.bijoos.com/oracle. To contact Bob Bryla, you can e-mail him at [email protected].
About the Authors Biju Thomas is an Oracle9i certified professional with eight years of Oracle database management and application development experience. He has written articles for Oracle Magazine, Oracle Internals, and Select Magazine. He maintains a website for DBAs at http://www.bijoos.com/oracle. Bob Bryla is an Oracle9i certified professional with more than ten years of database design, database application development, and database administration experience in a variety of fields. He is currently an Internet Database Analyst and DBA at Lands' End, Inc. in Dodgeville, Wisconsin.
Assessment Test

1. Multiple ____________ can share an SGA.
A. PMON processes
B. Server processes
C. Instances
D. Databases
E. Tablespaces

2. Which component in the following list is not part of the SGA?
A. Database buffer cache
B. Library cache
C. Sort area
D. Shared pool
E. Java pool

3. Which background process updates the online redo log files with the redo log buffer entries when a COMMIT occurs in the database?
A. DBWn
B. LGWR
C. CKPT
D. CMMT

4. How do you change the status of a database to restricted availability, if the database is already up and running? (Choose the best answer.)
A. Shut down the database and start the database using STARTUP RESTRICT.
B. Use the ALTER DATABASE RESTRICT SESSIONS command.
C. Use the ALTER SYSTEM ENABLE RESTRICTED SESSION command.
D. Use the ALTER SESSION ENABLE RESTRICTED USERS command.

5. When you connect to a database by using CONNECT SCOTT/TIGER AS SYSDBA, which schema are you connected to in the database?
A. SYSTEM
B. PUBLIC
C. SYSDBA
D. SYS
E. SCOTT

6. Suppose the database is in the MOUNT state; select two statements from the options below that are correct.
A. The control file is open; the database files and redo log files are closed.
B. You can query the SGA by using dynamic views.
C. The control file, data files, and redo log files are open.
D. The control file, data files, and redo log files are all closed.

7. Which of the following clauses will affect the size of the control file when creating a database? (Choose two.)
A. MAXLOGFILES
B. LOGFILE
C. ARCHIVELOG
D. MAXDATAFILES

8. Which script creates the data dictionary tables?
A. catalog.sql
B. catproc.sql
C. sql.bsq
D. dictionary.sql

9. Which files can be multiplexed?
A. Data files
B. Parameter files
C. Redo log files
D. Alert log files

10. What happens when one of the redo members of the next group is unavailable when LGWR has finished writing the current log file?
A. Database operation will continue uninterrupted.
B. The database will hang; do an ALTER DATABASE SWITCH LOGFILE to skip the unavailable redo log.
C. The instance will be shut down.
D. LGWR will create a new redo log member, and the database will continue to be in operation.

11. When you multiplex the control file, how many control files can you have for one database?
A. Four
B. Eight
C. Twelve
D. Unlimited

12. Which initialization parameter specifies that no more than the specified number of seconds will elapse during an instance recovery? (Choose the best answer.)
A. FAST_START_IO_TARGET
B. FAST_START_MTTR_TARGET
C. LOG_CHECKPOINTS_TO_ALERT
D. CHECKPOINT_RECOVERY_TIME
E. LOG_CHECKPOINT_TIMEOUT

13. Which SQL*Plus command can you use to see whether the database is in ARCHIVELOG mode?
A. SHOW DB MODE
B. ARCHIVELOG LIST
C. ARCHIVE LOG LIST
D. LIST ARCHIVELOG

14. Which initialization parameter must be set to create a control file using OMF?
A. DB_CREATE_SPFILE
B. DB_CREATE_FILE_DEST
C. DB_CREATE_ONLINE_LOG_DEST_n
D. CONTROL_FILES

15. The following are the steps required for relocating a data file belonging to the USERS tablespace. Choose the correct order in which the steps are to be performed.
1. Copy the file /disk1/users01.dbf to /disk2/users01.dbf using an operating system command.
2. ALTER DATABASE RENAME FILE '/disk1/users01.dbf' TO '/disk2/users01.dbf'
3. ALTER TABLESPACE USERS OFFLINE
4. ALTER TABLESPACE USERS ONLINE
A. 1, 2, 3, 4
B. 3, 1, 2, 4
C. 3, 2, 1, 4
D. 4, 2, 1, 3

16. Which storage parameter is used to make sure that each extent is a multiple of the value specified?
A. MINEXTENTS
B. INITIAL
C. MINIMUM EXTENT
D. MAXEXTENTS

17. Choose two extent management options available for tablespaces.
A. Dictionary-managed
B. Data file-managed
C. Locally managed
D. Remote managed
E. System-managed

18. Which dictionary views would give you information about the total size of a tablespace? (Choose two.)
A. DBA_TABLESPACES
B. DBA_TEMP_FILES
C. DBA_DATA_FILES
D. DBA_FREE_SPACE

19. Which parameter is used to set up the directory for Oracle to create data files, if you do not specify a file name in the DATAFILE clause when creating or altering tablespaces?
A. DB_FILE_CREATE_DEST
B. DB_CREATE_FILE_DEST
C. DB_8K_CACHE_SIZE
D. USER_DUMP_DEST
E. DB_CREATE_ONLINE_LOG_DEST_1

20. Select the invalid statements from the list below regarding undo segment management. (Choose all that apply.)
A. ALTER SYSTEM SET UNDO_TABLESPACE = ROLLBACK;
B. ALTER DATABASE SET UNDO_TABLESPACE = UNDOTBS;
C. ALTER SYSTEM SET UNDO_MANAGEMENT = AUTO;
D. ALTER SYSTEM SET UNDO_MANAGEMENT = MANUAL;

21. Which statement allows specifying the parameters PCTFREE and PCTUSED?
A. CREATE TABLE
B. ALTER INDEX
C. ALTER TABLESPACE
D. All the above

22. Choose two space management parameters used to control the free space usage in a data block.
A. PCTINCREASE
B. PCTFREE
C. PCTALLOCATED
D. PCTUSED

23. Which data dictionary view would you query to see the temporary segments in a database?
A. DBA_SEGMENTS
B. V$SORT_SEGMENT
C. DBA_TEMP_SEGMENTS
D. DBA_TABLESPACES

24. The ALTER INDEX

25. Which command do you use to collect statistics for a table?
A. ALTER TABLE COMPUTE STATISTICS
B. ANALYZE TABLE COMPUTE STATISTICS
C. ALTER TABLE COLLECT STATISTICS
D. ANALYZE TABLE COLLECT STATISTICS

26. How do you prevent row migration?
A. Specify larger PCTFREE
B. Specify larger PCTUSED
C. Specify large INITIAL and NEXT sizes
D. Specify small INITRANS

27. Which data dictionary view can you query to find the primary key columns of a table?
A. DBA_TABLES
B. DBA_TAB_COLUMNS
C. DBA_IND_COLUMNS
D. DBA_CONS_COLUMNS
E. DBA_CONSTRAINTS

28. Choose three valid partitioning methods available in Oracle9i.
A. RANGE
B. BINARY
C. LIST
D. COMPOUND
E. HASH

29. If you run the ALTER SESSION SET NLS_DATE_FORMAT = 'DDMMYY' statement, which dictionary view would you query to see the value of the parameter?
A. V$SESSION_PARAMETERS
B. NLS_SESSION_PARAMETERS
C. NLS_DATABASE_PARAMETERS
D. V$SESSION

30. Which NLS parameter can be specified only as an environment variable?
A. NLS_LANGUAGE
B. NLS_LANG
C. NLS_TERRITORY
D. NLS_SORT

31. Look at the result of the following query and choose the best answer.
SELECT PROPERTY_VALUE FROM database_properties
WHERE property_name = 'DEFAULT_TEMP_TABLESPACE';

PROPERTY_VALUE
------------------------
APP_TEMP_TS
A. Newly created users in the database will be assigned APP_TEMP_TS as their temporary tablespace.
B. Newly created users in the database will be assigned APP_TEMP_TS as their temporary tablespace if the TEMPORARY TABLESPACE clause is omitted in the CREATE USER statement.
C. Newly created users in the database will be assigned APP_TEMP_TS as their temporary tablespace even if the TEMPORARY TABLESPACE clause is specified in the CREATE USER statement.
D. Newly created users in the database will be assigned APP_TEMP_TS as their default as well as temporary tablespace, if the DEFAULT TABLESPACE and TEMPORARY TABLESPACE clauses are omitted in the CREATE USER statement.
Answers to Assessment Test

1. B. The background processes and the SGA constitute an instance. An instance can have only one PMON process, but can have many server processes. An instance can only be associated with one database. See Chapter 2 for more information.

2. C. The sort area is not part of the SGA; it is part of the PGA. The sort area is allocated to the server process when required. See Chapter 2 for more information on the components of the SGA and an overview of the Oracle database architecture.

3. B. The LGWR process is responsible for writing the redo log buffer entries to the online redo log files. The LGWR process writes to the redo log files when a COMMIT occurs, when a checkpoint occurs, when the DBWn writes dirty buffers to disk, or every three seconds. To learn more about the background processes and database configuration, refer to Chapter 2.

4. C. Though answer A is correct, the more appropriate answer is C. You can use the ALTER SYSTEM command to enable or disable restricted access to the database. To learn about sessions and database startup/shutdown options, turn to Chapter 3.

5. D. When you connect to the database by using the SYSDBA privilege, you are really connecting to the SYS schema. If you use SYSOPER, you will be connected as PUBLIC. To learn more about administrator authentication methods, refer to Chapter 3.

6. A and B. When the database is in the MOUNT state, the control file is opened to get information about the data files and redo log files. You can query the SGA information by using the V$ views as soon as the instance is started, that is, in the NOMOUNT state. More information about database start-up steps is in Chapter 3.

7. A and D. The clauses MAXDATAFILES, MAXLOGFILES, MAXLOGMEMBERS, MAXINSTANCES, and MAXHISTORY affect the size of the control file. Oracle pre-allocates space in the control file for the maximums you specify. To learn more about database creation, refer to Chapter 4.

8. C. The script sql.bsq is executed automatically by the CREATE DATABASE command, and it creates the data dictionary base tables. The catalog.sql script creates the data dictionary views. To learn more about the other scripts and data dictionary, refer to Chapter 4.

9. C. Redo log files and control files can be multiplexed. There should be a minimum of two control files and two redo log members on different disks. See Chapter 4 for more information about multiplexing database files.

10. A. When one of the redo log members becomes unavailable, Oracle writes an error message in the alert log file and the database operation continues uninterrupted. When all the redo log members of a group are unavailable, the instance shuts down. For more information, see Chapter 5.

11. B. You can have a maximum of eight control files per database. It is recommended that you keep the control files on different disks. For more information, see Chapter 5.

12. B. FAST_START_MTTR_TARGET ensures that no more than the specified number of seconds will elapse until the instance recovery is complete. FAST_START_IO_TARGET and LOG_CHECKPOINT_TIMEOUT are deprecated in Oracle9i. LOG_CHECKPOINTS_TO_ALERT is TRUE if database checkpoints are logged in the alert log file. See Chapter 5.

13. C. The ARCHIVE LOG LIST command shows whether the database is in ARCHIVELOG mode, whether automatic archiving is enabled, the archival destination, and the oldest, next, and current log sequence numbers. Refer to Chapter 5.

14. C. The parameter DB_CREATE_ONLINE_LOG_DEST_n gives the directory location for the control file and redo log files. Oracle automatically generates the control filename itself. The parameter DB_CREATE_SPFILE is a nonexistent parameter. OMF uses DB_CREATE_FILE_DEST to specify the location of datafiles. The CONTROL_FILES parameter must NOT be present for OMF to automatically create the control file. See Chapter 5 for more information about maintaining the control file.

15. B. To rename a data file, you need to take the tablespace offline so that Oracle does not try to update the data file while you are renaming it. Using OS commands, copy the data file to the new location; then, using the ALTER DATABASE RENAME FILE command or the ALTER TABLESPACE RENAME FILE command, rename the file in the database's control file. To rename the file in the database, the new file should exist. Bring the tablespace online for normal database operation. For more information, refer to Chapter 6.

16. C. Use the MINIMUM EXTENT parameter to ensure that each extent is a multiple of the value specified. This parameter is useful for reducing fragmentation in the tablespace. For more information, refer to Chapter 6.

17. A, C. When the extent management options are handled through the dictionary, the tablespace is known as dictionary managed. When the extent management is done using bitmaps in the data files belonging to the tablespace, it is known as locally managed. The default is locally managed. For more information, see Chapter 6.

18. B, C. The DBA_DATA_FILES view has the size of each data file assigned to the tablespace; the total size of all the files is the size of the tablespace. Similarly, if the tablespace is locally managed and temporary, you need to query the DBA_TEMP_FILES view. For more information, refer to Chapter 6.

19. B. DB_CREATE_FILE_DEST specifies the directory to create data files and temp files. This directory is also used for control files and redo log files if the DB_CREATE_ONLINE_LOG_DEST_1 parameter is not set. For more information, refer to Chapter 6.

20. B, C, D. Choice A is the only valid statement, because the undo tablespace can have any name that follows Oracle object-naming conventions. Choice B is incorrect because undo segments are not managed with ALTER DATABASE. Choices C and D are incorrect because the UNDO_MANAGEMENT parameter cannot be changed dynamically. Undo tablespace creation and management is discussed in Chapter 7.

21. A. You can specify PCTFREE and PCTUSED for creating or altering tables or clusters. You can specify PCTFREE for indexes. Creating or altering tablespaces does not allow the specification of free space management parameters. See Chapter 7 for more information on data block space management.

22. B and D. PCTFREE and PCTUSED are the space management parameters that control space in a block. PCTFREE specifies the percentage of space that should be reserved for future updates (which can increase the length of the row), and PCTUSED specifies when Oracle can start reinserting rows to the block once PCTFREE is reached. PCTFREE and PCTUSED together cannot exceed 100. To learn about space management parameters, refer to Chapter 7.

23. A. To see all the temporary segments in the database, use the DBA_SEGMENTS view and restrict the query using SEGMENT_TYPE = 'TEMPORARY'. The V$SORT_SEGMENT view shows only the temporary segments created in TEMPORARY tablespaces. To learn about the types of segments, see Chapter 7.

24. D. To rename an index, you use the ALTER INDEX ... RENAME TO command, but you cannot combine a rename with any other index operation. When rebuilding an index, you can specify a new tablespace and new storage parameters. The index can be rebuilt in parallel, and you can specify COMPUTE STATISTICS to collect statistics. For information about indexes, refer to Chapter 8.

25. B. You use the ANALYZE command to collect statistics on a table. COMPUTE STATISTICS reads all the blocks of the table and collects the statistics. ESTIMATE STATISTICS takes a few rows as a sample and collects statistics. For information about collecting statistics and validating structure by using the ANALYZE command, refer to Chapter 8.

26. A. PCTFREE specifies the free space reserved for future updates to rows. By specifying a larger value for PCTFREE, more free space is available in each block for updates. Row migration occurs when a row is updated and there is not enough space to hold the row; Oracle then moves the entire row to a new block, leaving a pointer in the old block. For information about data block free space management, refer to Chapter 8.

27. D. The DBA_CONS_COLUMNS view has the column name and position that belongs to the constraint. To find the primary key constraint name, query the DBA_CONSTRAINTS view with CONSTRAINT_TYPE and TABLE_NAME in the WHERE clause. To learn about constraints, refer to Chapter 8.

28. A, C, E. Oracle9i has four partitioning methods available: RANGE, HASH, COMPOSITE, and LIST. In range partitioning, rows with a range of values are mapped to a partition. In hash partitioning, rows are mapped to a partition using a derived hash value. In composite partitioning, range is used for partitions, and hash is used for sub-partitions. In list partitioning, rows are mapped to partitions based on discrete column values. For information about partitioning, refer to Chapter 8.

29. B. The NLS_SESSION_PARAMETERS view shows information about the NLS parameter values that are in effect in the session. For more information, see Chapter 9.

30. B. NLS_LANG is specified as an environment variable. The parameter specifies a language, a territory, and a character set. For more information, see Chapter 9.

31. B. The query shows the tablespace name specified for the DEFAULT TEMPORARY TABLESPACE clause of the CREATE DATABASE or ALTER DATABASE statement. Prior to Oracle9i, if you omitted the TEMPORARY TABLESPACE clause in the CREATE USER statement, the SYSTEM tablespace was the default. In Oracle9i, you can define a default temporary tablespace for the database. For more information, see Chapter 9.
The Oracle9i platform picks up where Oracle8i left off. The Oracle9i database was enhanced across all major functional areas: server availability, scalability, performance, security, and manageability. Of course, as an Oracle Certified DBA candidate, you need to know about all aspects of the Oracle database, not just the new features; however, if you have a good background in previous Oracle versions, you can certainly benefit from an Oracle9i new-features overview. Although the Oracle9i platform is also enhanced in the application server and development tools areas, this chapter focuses on the new features of Oracle9i that are database-related. As with any new release of the Oracle Server, a lot of the new features replace or make obsolete features that exist in previous versions of the Oracle Server. The last section of this chapter discusses the deprecated and unsupported features in Oracle9i.
High Availability
The Oracle9i database was enhanced in a number of areas to make sure that the database is available during maintenance operations, even if those maintenance operations are occurring on the user objects currently in use. The Oracle DBA has more control over the recovery of the database in the case of instance failure, and the user has more options to re-create data even after changes or deletions have been committed.
More flexibility has been added to the import/export process, and LogMiner has been expanded to include DDL (Data Definition Language) statement support. RMAN (Recovery Manager) is more automated and more efficient; it is also easier to use with the new OEM (Oracle Enterprise Manager) interface. You can perform additional operations on Index Organized Tables (IOTs), as well as redefinition operations on regular tables without any downtime for the users of those tables. The flexibility of the SPFILE initialization file option frees the DBA from having to edit a text-based initialization file and having to wait for a shutdown and restart for the new parameter values to take effect.
Disaster Recovery In previous versions of Oracle, the DBA had to contend with a number of different parameters to strike a balance between high performance, availability, and minimal recovery time. Oracle9i introduces the new parameter FAST_START_MTTR_TARGET to allow the DBA to specify the maximum number of seconds that a crash recovery should take. Database users can implement their own style of disaster recovery by using Oracle Flashback Query. A user can essentially move back to a particular point of time in the past and view the contents of a table or tables. Using this feature, users can "undo" changes made in the past by seeing which operations led to the change and then manually re-inserting or repairing the changes to the database. This feature can also be used as a historical query tool, for example, to give a bank customer an account balance as of a particular time in the past. This flashback capability is supported by the new system package DBMS_FLASHBACK. Oracle Data Guard makes standby databases easier to use, with more robust failover features and an easy-to-use graphical interface. It essentially combines the primary and standby databases into a single "high availability" resource. Oracle's native standby database functionality (which can be managed under the Data Guard umbrella) has been enhanced to allow the primary database to be used as the new standby, instead of being discarded as in previous versions of Oracle.
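As a rough sketch of these two features in SQL*Plus, assuming a hypothetical ORDERS table, target value, and timestamp (none of which come from the text):

-- Cap crash recovery time at roughly 300 seconds
ALTER SYSTEM SET FAST_START_MTTR_TARGET = 300;

-- View a table as of a past point in time with Flashback Query
EXECUTE DBMS_FLASHBACK.ENABLE_AT_TIME(TO_TIMESTAMP('2002-01-15 09:00:00', 'YYYY-MM-DD HH24:MI:SS'));
SELECT * FROM orders;
EXECUTE DBMS_FLASHBACK.DISABLE;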
Import/Export Oracle9i contains a number of enhancements to make the Import and Export utilities more precise and efficient. Instead of having to manually recalculate table statistics after an import, the DBA can use statistics that were saved with the table during the export with the STATISTICS import parameter. This feature goes well beyond a simple “yes” or “no”: the DBA can trust the Import engine to reject the saved statistics if they are questionable and to recalculate appropriately. Another “fine-grained” enhancement to the Export utility is the ability to specify the tables to be exported by specifying the tablespace(s) that contain the tables to export. In addition to exporting all tables within a given tablespace, all indexes are exported with their corresponding tables regardless of where the index itself is stored. The new Export and Import utilities support components of the Oracle Flashback Query feature, in which parts of an export can be extracted using new flashback parameters.
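For illustration only, an export by tablespace and an import that trusts only trustworthy saved statistics might look like the following; the credentials, file name, and tablespace are placeholders:

# Export all tables (and their indexes) stored in the USERS tablespace
exp system/manager TABLESPACES=users FILE=users_ts.dmp

# Accept the saved statistics on import only if they are not questionable
imp system/manager FILE=users_ts.dmp FULL=y STATISTICS=safe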
LogMiner LogMiner, already a robust tool in previous versions of Oracle, has been significantly enhanced in Oracle9i. Unlike previous versions, the new version can support DDL statements, chained or migrated rows, and direct path inserts. Additionally, you can extract the database’s data dictionary to the redo logs and analyze the logs with LogMiner. In previous versions of LogMiner, all DDL statements were indirectly represented in the log files as several transactions against the data dictionary, making it difficult for the DBA to determine what the actual DDL statement was. Now, LogMiner will log both the DDL statement that the DBA or user typed, plus the multiple DML (Data Manipulation Language) statements run against the data dictionary. Being able to extract the dictionary to a flat file or redo logs has several advantages:
There is no performance hit against the live data dictionary, reducing dictionary contention with other transactional users of the database.
Because all the information needed is in the redo logs, the database need not be open to use LogMiner.
In a quickly changing data dictionary, the table metadata in the redo logs may not match what is currently in the live data dictionary.
A couple of other features are worthy of mention. In previous versions of LogMiner, the analysis stopped when a corrupted redo log file was encountered. In the new version of LogMiner, the SKIP_CORRUPTION option in the DBMS_LOGMNR.START_LOGMNR procedure notes and ignores the bad block(s). The other new option in this procedure is COMMITTED_DATA_ONLY. With this option enabled, any LogMiner operation will return results only from committed transactions.
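A minimal sketch of this usage follows; the archived log file name is hypothetical:

-- Extract the data dictionary to the redo logs (no flat file needed)
EXECUTE DBMS_LOGMNR_D.BUILD(OPTIONS => DBMS_LOGMNR_D.STORE_IN_REDO_LOGS);

-- Analyze a log, skipping corrupt blocks and returning only committed work
BEGIN
  DBMS_LOGMNR.ADD_LOGFILE('/oradata/arch/arch_1042.arc', DBMS_LOGMNR.NEW);
  DBMS_LOGMNR.START_LOGMNR(
    OPTIONS => DBMS_LOGMNR.DICT_FROM_REDO_LOGS
             + DBMS_LOGMNR.SKIP_CORRUPTION
             + DBMS_LOGMNR.COMMITTED_DATA_ONLY);
END;
/
SELECT sql_redo FROM v$logmnr_contents;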
Backup and Recovery The enhancements to Recovery Manager (RMAN) are numerous. They fall into three basic categories:
Persistent configuration parameters
General enhancements to backup and restore
A redesigned, easier-to-use graphical interface
RMAN now supports the CONFIGURE command, which allows the DBA to set the backup parameters persistently across backup sessions. Once all the appropriate parameters are set correctly, the DBA can do a full backup with one command, BACKUP DATABASE. The CONFIGURE command applies to many RMAN operations: backup retention policies, channel allocations, device type specifications, backup copies, and control file backups. General enhancements to RMAN include long-term backups, mirrored backups, restartable backups, and archive log backups. Long-term backups are backups that you can explicitly archive for longer than the default retention policy. Mirrored backups are an enhanced version of the duplexing option originally released in Oracle8i, with the added capability to specify different formats (destinations) for backup copies. Time savings can be realized with the new restartable backup feature of RMAN. When you restart a backup with the NOT BACKED UP option, only missing or incomplete files are backed up, based on backup time. And finally, you can now include archive logs that have not been backed up in a datafile backup, instead of or in addition to using a BACKUP ARCHIVELOG command. A significantly enhanced user interface to RMAN makes the DBA’s job even easier. All the new options available in the command line interface are also available in the GUI version of the tool.
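A short sketch of how these pieces fit together at the RMAN prompt; the retention window and time threshold are illustrative values, not recommendations from the text:

CONFIGURE RETENTION POLICY TO RECOVERY WINDOW OF 7 DAYS;
CONFIGURE DEFAULT DEVICE TYPE TO DISK;
CONFIGURE CONTROLFILE AUTOBACKUP ON;

# With persistent settings in place, a full backup is one command;
# PLUS ARCHIVELOG also backs up the archived logs in the same run
BACKUP DATABASE PLUS ARCHIVELOG;

# Restart an interrupted backup, copying only files not backed up recently
BACKUP DATABASE NOT BACKED UP SINCE TIME 'SYSDATE-1';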
Online Operations Many of the new features in Oracle9i allow online operations to proceed without interruption; in other words, access to tables and other database objects is continuously available to users even though redefinition and reorganization operations may be going on in the background. Of particular note are high availability enhancements related to Index-Organized Tables (IOTs), online reorganization of tables, and server-side parameter files (SPFILEs). In previous versions of the Oracle database, an IOT was unavailable for most reorganization and index operations. In Oracle9i, a number of operations on IOTs are allowed while the table is in use. For example, you can create and rebuild IOT secondary indexes; you can update stale logical ROWIDs; you can rebuild IOTs by using the ALTER TABLE ... MOVE option, which can not only rebuild the primary key index but also rebuild the overflow data segment. Problematic for an enterprise DBA are large tables that are heavily used around the clock and occasionally need some kind of modification or reorganization. In Oracle9i, many of the common operations that would previously have made the table unavailable can now be done "on the fly" with minimal impact to the table's users. For example, you can convert nonpartitioned tables to partitioned tables, and vice versa. You can convert IOTs to heap-based tables. You can drop non-primary key columns, add new columns, and rename columns. In addition, you can modify storage parameters for a table. Tables without a primary key or tables with user-defined data types cannot be altered in this way, however. SPFILEs enhance the online availability of databases by no longer requiring manual parameter file edits that may necessitate a restart of the database. An SPFILE is binary, not directly editable, and resides on the server. When you change SPFILE parameters with an ALTER SYSTEM command, they can be changed for the current instance only, the next restart of the instance (in other words, in the SPFILE), or both.
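As a sketch, assuming hypothetical paths and object names, the SPFILE and online IOT features look like this in SQL:

-- Create a binary server parameter file from a text initialization file
CREATE SPFILE FROM PFILE = '/oracle/admin/orcl/pfile/initorcl.ora';

-- Change a parameter for the running instance, the SPFILE, or both
ALTER SYSTEM SET shared_pool_size = 64M SCOPE = BOTH;

-- Rebuild an index-organized table and its overflow segment
-- while the table remains available to users
ALTER TABLE order_hist MOVE ONLINE OVERFLOW;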
Scalability
The scalability of the Oracle9i Server is improved in three areas:
Changes to the internal database structures to keep downtime to a minimum
Expansion of the Oracle clustering technology (Real Application Clusters) to add additional resources without changes to application programs
More flexibility in user session management to use session memory resources more efficiently
Architecture Numerous changes to the Oracle9i architecture make the Oracle database even more scalable as the enterprise grows, with little or no changes to applications or procedures. In many cases, these new features smooth the operation and maintenance of the database for the DBA. These features include global index architecture changes, metadata extraction capabilities, and tablespace block management changes along with various memory management enhancements. Global index improvements allow users and DBAs to execute DDL commands without invalidating the entire global index. This keeps the availability of the index as high as possible while at the same time making the DBA's life easier by reducing the number of steps and commands required to keep the indexes valid. Extracting the metadata from a database was a complicated task in previous versions of Oracle, involving multiple queries or doing special export/import operations. Oracle9i adds a new package called DBMS_METADATA either to browse all metadata or to extract metadata for specific database objects. The output can be in either SQL or XML format. The use of external tables in Oracle9i extends the reach of SQL SELECT statements to external files. Although there are a number of restrictions on how external tables can be accessed, external tables provide a useful way to stage intermediate tables for data warehouse ETL (extract, transform, load) operations without loading the intermediate data into the database itself. Automatic segment space management within a tablespace makes the DBA's life easier by essentially eliminating a lot of the guesswork when attempting to specify the default segment parameters in the tablespace. The free and used space is managed with bitmaps instead of free lists; tablespaces whose segment space is automatically managed must also be locally managed (that is, not managed in the data dictionary). In the area of memory management, major changes were made in Oracle9i to ease the maintenance and improve the utilization of memory in the SGA (System Global Area). In essence, SGA memory and its sub-components can
grow or shrink in response to changes in load or types of database operations being performed at the time. Memory in the SGA is now allocated in units called granules, whose size depends on the total estimated size of the SGA itself. In response to changing conditions, the DBA can dynamically change memory in each of the sub-components, such as the shared pool and buffer cache. To help the DBA in specifying an optimal buffer cache size, statistics collection can be enabled using the buffer cache advisory feature.
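A few of these features in SQL form; the object, schema, and file names below are invented for illustration:

-- Extract the DDL for a single object (SQL output; XML is also possible)
SELECT DBMS_METADATA.GET_DDL('TABLE', 'EMPLOYEES', 'HR') FROM dual;

-- A locally managed tablespace with automatic segment space management
CREATE TABLESPACE app_data
  DATAFILE '/oradata/orcl/app_data01.dbf' SIZE 100M
  EXTENT MANAGEMENT LOCAL
  SEGMENT SPACE MANAGEMENT AUTO;

-- Grow one SGA sub-component dynamically (memory is allocated in granules)
ALTER SYSTEM SET db_cache_size = 128M;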
Real Application Clusters

In a nutshell, Real Application Clusters (RAC) allows multiple instances to run against the same database. Special hardware is required to allow a group of shared disks to be accessed at a very high throughput rate by each node in the cluster. Each node in the cluster can have more than one CPU.

There are a couple of benefits to using RAC. It’s easy to add a node when the workload increases, without having to change any application code or operational procedures. Additionally, as each node is added to the cluster, the total availability of the database increases, because an instance failure on any particular node automatically initiates transparent application failover to one of the other nodes.

Cache Fusion, one of the new features included with RAC, allows data blocks to be shared between instances without the use of the shared disk resources. Retrieving a block from another instance’s cache is significantly faster than retrieving that same block from a disk subsystem.
Session Management

Oracle Shared Server, formerly known as the multithreaded server (MTS), contains many enhancements to further increase the performance and reduce the overhead of shared server connections. Changes to the connection establishment process reduce the total number of messages required to establish the connection between the client and the dispatcher. The new Common Event Model in the dispatcher handles both network and database events similarly, reducing overhead and the amount of polling required to capture event notifications.

OCI (Oracle Call Interface) connection pooling allows middle-tier products to more efficiently manage a pool of connections for an application, rather than having the middleware explicitly manage the connections to the database.
Performance

Performance gains in the Oracle9i Server are realized with new features that are highly visible to the user or application developer. Conformance to the latest SQL standards makes coding more efficient for the developer and makes the execution of this code potentially more efficient on the server side. The DBA has a new feature set to help monitor index usage, allowing the DBA to drop indexes that are used infrequently or not at all.
SQL and PL/SQL Optimization

Oracle9i complies much more closely with the SQL:1999 standards and syntax. Some of the standards now reflected in the Oracle SQL processor include enhancements to join operations, CASE expressions, FK (foreign key) and PK (primary key) caching operations, and multi-table inserts. Significant enhancements to the PL/SQL processor allow for dramatic decreases in execution time for PL/SQL procedures, especially those that do not have SQL references.

You can now explicitly specify query join types in the FROM clause, rather than in the WHERE clause. The join types supported include cross joins (Cartesian products), natural joins (equijoins), and full, left, and right outer joins.

Oracle9i expands on the CASE expression that has been available since Oracle8i. A new type of CASE expression, the searched CASE expression, operates much like an IF...THEN...ELSE construct and allows for multiple predicates within the WHEN clauses. The NULLIF and COALESCE functions operate much like “abbreviated” CASE expressions for returning and evaluating null values.

Unindexed foreign keys still require table-level share locks when an update or delete on the primary key takes place; however, the overhead is reduced and availability increased because the lock is released immediately after it is obtained. Foreign key creation is faster because Oracle9i caches the first 256 primary key values for DML statements that process at least two rows.

The new multi-table insert feature allows for easier coding and less SQL processing overhead, because all source and destination tables are specified in the same INSERT statement. You can also use this feature to easily refresh materialized views in a data warehouse environment.
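The new syntax can be sketched as follows; the table and column names here are hypothetical:

-- ANSI join syntax specified in the FROM clause
SELECT e.last_name, d.dept_name
FROM   employees e LEFT OUTER JOIN departments d
       ON e.dept_id = d.dept_id;

-- Searched CASE expression with multiple predicates
SELECT order_id,
       CASE WHEN amount < 100  THEN 'SMALL'
            WHEN amount < 1000 THEN 'MEDIUM'
            ELSE 'LARGE'
       END AS order_size
FROM   orders;

-- Multi-table insert: one pass over the source feeds two targets
INSERT ALL
  WHEN amount >= 1000 THEN INTO big_orders   VALUES (order_id, amount)
  WHEN amount <  1000 THEN INTO small_orders VALUES (order_id, amount)
SELECT order_id, amount FROM orders;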
PL/SQL execution is significantly more efficient in Oracle9i because the byte code generated in previous Oracle versions has been replaced by native C code. Additional performance gains are a result of the compiled code residing in the PGA (Program Global Area) rather than in the SGA, reducing contention in the SGA.
I/O Performance

The presence of too many indexes can negatively impact performance when you are inserting or updating rows, especially in an OLTP (Online Transaction Processing) environment. In addition, a significant amount of disk space may be wasted if these indexes are not needed. You can gather new statistics at query parse time to help identify which indexes are used during a particular query. You can alter indexes directly with the ALTER INDEX ... MONITORING USAGE clause; new data dictionary views such as V$OBJECT_USAGE indicate whether a particular index has been used in the specified time frame.

Cursor sharing, a feature introduced in Oracle8i, has been enhanced in Oracle9i. In Oracle8i, you could reuse SQL statements in the shared pool if only the literal values in the SQL statement were different. In many cases, this reuse improved memory utilization in the shared pool, but it risked some performance degradation when the values in the keyed column were skewed in terms of the histogram statistics. As a result, using only one execution plan for these queries was potentially inefficient. In Oracle9i, the execution plans are reused only if the optimizer has determined that the execution plan is independent of the literal value(s) used in the query. The parameter CURSOR_SHARING can now have the value SIMILAR in addition to the already available FORCE and EXACT values.

To make Oracle’s Cost Based Optimizer (CBO) more accurate, three new columns have been added to the PLAN_TABLE: CPU_COST, IO_COST, and TEMP_SPACE. In other words, the new Oracle9i cost model now takes into account the estimated CPU cost of the operation, the effect of caching, and the effect of using temporary segments for pre-fetching index blocks.
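A short sketch of the index-monitoring workflow follows; the index name is hypothetical:

-- Begin tracking whether the optimizer uses this index
ALTER INDEX sales_cust_idx MONITORING USAGE;

-- Later, check the results collected so far
SELECT index_name, monitoring, used, start_monitoring
FROM   v$object_usage;

-- Stop collecting usage information
ALTER INDEX sales_cust_idx NOMONITORING USAGE;

-- Share cursors whenever the optimizer decides the plan is literal-independent
ALTER SYSTEM SET cursor_sharing = SIMILAR;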
Java Enhancements

The internal Java engine has better garbage collection and native compilation. Performance has been enhanced by the use of object sharing and session pinning. Middle-tier operations have also been enhanced by internal improvements to JDBC (Java Database Connectivity) and SQLJ (SQL embedded in Java applications).
Security
Oracle9i adds a number of security-related features to make random numbers more random, make rows of a table accessible only to those who need access, and allow the DBA to more easily audit table access based on an expanded set of conditions.
Data Encryption

Oracle9i introduces a new function, GETKEY, in the package DBMS_OBFUSCATION_TOOLKIT. Provided that the encryption keys themselves are stored securely, GETKEY will generate a random number that is significantly more secure than a number generated from DBMS_RANDOM.
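A minimal PL/SQL sketch follows. Note that the exact procedure names, overloads, and seed requirements in DBMS_OBFUSCATION_TOOLKIT vary by release; the DESGETKEY overload and seed handling shown here are assumptions to verify against your documentation:

DECLARE
  v_seed RAW(80);
  v_key  RAW(16);
BEGIN
  -- Seed the key generator, then generate a DES key (assumed overload)
  v_seed := UTL_RAW.CAST_TO_RAW('replace with a long, hard-to-guess seed string');
  DBMS_OBFUSCATION_TOOLKIT.DESGETKEY(seed => v_seed, key => v_key);
  -- v_key must now be stored securely, never alongside the encrypted data
END;
/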
Label Security

Oracle9i provides Label Security, a more secure and “fine-grained” approach to controlling access to rows in a database through a special label stored in each row. This new access control method is based on Oracle’s Virtual Private Database (VPD) features and is facilitated by a new set of PL/SQL packages.
Fine-Grained Auditing

Automatic auditing of the database has been enhanced in Oracle9i. In Oracle8i, auditing could be triggered only at a very high level: access of privileges or objects. In addition, the data returned in the audit table contained only a limited set of facts, such as the username, date and time, and the object or privilege accessed. Using Fine-Grained Auditing (FGA), you can specify conditions on the rows of a table and record an entry in the audit table when those conditions are satisfied. In addition, user-defined procedures can be triggered when an audit condition is satisfied to perform additional processing, for example, to page the DBA when a particular row or set of rows is accessed in a table.
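A sketch of defining such a policy with DBMS_FGA is shown below; the schema, table, condition, and handler names are hypothetical:

BEGIN
  DBMS_FGA.ADD_POLICY(
    object_schema   => 'HR',
    object_name     => 'EMPLOYEES',
    policy_name     => 'AUDIT_EXEC_SALARY',
    audit_condition => 'SALARY > 500000',   -- record access only when this is met
    audit_column    => 'SALARY',
    handler_schema  => 'SEC',               -- optional procedure to run on a hit,
    handler_module  => 'NOTIFY_DBA');       -- for example, to page the DBA
END;
/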
Manageability

The new manageability features of Oracle9i simplify the life of the DBA by centralizing the locations where various database-related files are stored. The enhanced undo tablespace (rollback) features eliminate much of the guesswork when setting up the proper undo structures for various database scenarios. DBAs also have more control over how resources are used within a resource consumer group. All these enhancements are fully supported through a simplified and streamlined Oracle Enterprise Manager (OEM).
Oracle Managed Files (OMF)

Oracle Managed Files (OMF) provides an easy way for the DBA to manage the locations of many types of database files by using two new initialization parameters. The DBA specifies only operating system locations for certain types of files, and the Oracle Server handles the unique naming of the operating system files themselves.

The two new parameters are DB_CREATE_FILE_DEST and DB_CREATE_ONLINE_LOG_DEST_n. The parameter DB_CREATE_FILE_DEST defines the default location for storing new datafiles associated with a given tablespace as well as for temporary files. The second parameter, DB_CREATE_ONLINE_LOG_DEST_n, provides similar functionality except that this location is specified for new control files and online redo log files. You manage archived redo log files as you did in previous Oracle versions.
Using OMF for one group of files does not prevent the DBA from continuing to use the older methods for naming files in the database; both methods can coexist nicely, and you can convert to a completely OMF-based database in stages.
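For example, an OMF setup might look like the following sketch; the directory paths and tablespace name are illustrative:

-- Default locations for new datafiles and for control/redo log files
ALTER SYSTEM SET db_create_file_dest = '/u02/oradata/PROD' SCOPE=BOTH;
ALTER SYSTEM SET db_create_online_log_dest_1 = '/u03/oradata/PROD' SCOPE=BOTH;
ALTER SYSTEM SET db_create_online_log_dest_2 = '/u04/oradata/PROD' SCOPE=BOTH;

-- No DATAFILE clause needed; Oracle names and creates the file itself
CREATE TABLESPACE sales_data;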
Undo Tablespace Management

Automatic Undo Management provides yet another way to ease the administrative burden on the DBA. In previous Oracle versions, managing space for undo (rollback) segments was complex and error prone. In Oracle9i, managing this space is almost as simple as creating an undo tablespace big enough to handle the maximum number of undo entries at the busiest time of the day and letting the Oracle database handle the rest. Another “fine-grained” enhancement to undo management is the ability for the DBA to specify how long to keep undo data before it is overwritten, potentially avoiding the classic, yet dreaded, “Snapshot too old” error.

You can create the undo tablespace when you create the database, or later if migration from manual undo (rollback) management cannot be implemented immediately. Multiple undo tablespaces can exist in the database, although only one can be active at any given time.
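A minimal sketch of automatic undo management follows; the tablespace names, file path, and retention value are illustrative:

-- Parameter file entries enabling automatic undo management
-- UNDO_MANAGEMENT = AUTO
-- UNDO_TABLESPACE = undotbs1
-- UNDO_RETENTION  = 1800     (keep undo for 30 minutes to protect long queries)

-- Create an additional undo tablespace and make it the active one
CREATE UNDO TABLESPACE undotbs2
  DATAFILE '/u02/oradata/PROD/undotbs2_01.dbf' SIZE 500M;
ALTER SYSTEM SET undo_tablespace = undotbs2;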
Fine-Grained User Policy Management

User security has been strengthened with more restrictive, and therefore more secure, default values. Most of the accounts created by the Database Configuration Assistant (DBCA) are initially locked with expired passwords. The initialization parameter O7_DICTIONARY_ACCESSIBILITY now defaults to FALSE, unlike in previous versions of Oracle. As a result, only users with the SYSDBA privilege can see the contents of the data dictionary.

The Secure Application Role feature of Oracle9i extends the functionality of the Application Context feature first introduced in Oracle8i. Enabling an application role in Oracle8i required using a password as authentication for the role; this practice can be a big security problem if the password itself is breached, allowing any application to access the restricted data via the role. In Oracle9i, the role is instead enabled by calling a stored procedure, which can validate the user based on a number of criteria, such as the IP address of the user or the time of day.

The Oracle Enterprise Login Assistant (ELA) improves enterprise user security and ease of use by allowing users to have only one username and password (stored in the Oracle Internet Directory). In addition, because SSL (Secure Sockets Layer) and wallets on the client side are not required, user administration is further simplified for the DBA. Even previous versions of the Oracle client can use ELA to utilize single sign-on functionality.
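A sketch of a secure application role follows; the role, schema, and procedure names are hypothetical:

-- The role can be enabled only through the named procedure, which
-- validates criteria such as the IP address or the time of day
CREATE ROLE hr_reporting IDENTIFIED USING hr.role_check;

-- Inside hr.role_check, after the validation checks pass, the procedure
-- would enable the role with a call such as:
--   DBMS_SESSION.SET_ROLE('hr_reporting');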
Fine-Grained Resource Management

In Oracle9i, DBAs can further restrict or fine-tune resource usage by resource consumer group. The new Active Session Pool feature of the Database Resource Manager can restrict how many active sessions can exist for the users within a particular resource consumer group. If this limit is reached, a new user must wait until another user’s session has completed. The data dictionary views V$SESSION and V$RSRC_CONSUMER_GROUP have new columns that show how many consumer group sessions are waiting for resources and how long they have been waiting.

You can use the new Oracle9i feature Automatic Consumer Group Switching to switch a particular session’s consumer group on the fly. For example, you can temporarily switch long-running daytime queries to a resource group that is normally used for nightly batch jobs if the session is active for more than a particular length of time. As a result, the session is not terminated, but switched to another consumer group that has less impact on OLTP transactions during the day.

On a similar note, you can also restrict the amount of undo space used by the sessions in a given consumer group by using the new plan directive UNDO_POOL. When this limit is exceeded, no further statements (other than SELECT) are allowed for the session until other sessions within the same group release some undo space or until the DBA manually increases the undo quota allowed for the consumer group.
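The following sketch combines these directives in one DBMS_RESOURCE_MANAGER call; the plan and group names are hypothetical, and the pending-area setup and validation calls are omitted for brevity:

BEGIN
  DBMS_RESOURCE_MANAGER.CREATE_PLAN_DIRECTIVE(
    plan                => 'DAYTIME_PLAN',
    group_or_subplan    => 'AD_HOC_QUERIES',
    comment             => 'Limit ad hoc activity during the day',
    active_sess_pool_p1 => 5,         -- at most 5 concurrently active sessions
    switch_group        => 'BATCH',   -- long-running sessions switch to this group
    switch_time         => 600,       -- after 600 seconds of activity
    undo_pool           => 102400);   -- cap the group's undo usage, in KB
END;
/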
Enterprise Manager

Oracle Enterprise Manager (OEM) has had a major facelift in terms of look and feel. Instead of the four-pane format of previous versions of OEM, the new Oracle9i OEM has a simpler, two-pane master/detail layout. Many of the functions that were available by launching a separate executable in previous versions are now tightly integrated into a single OEM console. For example, the database administration functions previously available in DBA Studio are now a component of OEM.

As with previous versions of OEM, you can launch the console in standalone mode or via an Oracle Management Server (OMS). Connecting in standalone mode requires neither the middle-tier services nor the Intelligent Agent on the target databases; however, the DBA cannot use many of the advanced features of OEM. These features include Web-enabled applications, paging, backup tools, and access to events, jobs, and groups, to name a few.
Globalization Support

Previously known as National Language Support (NLS), Globalization Support is now the term that describes the features of Oracle9i that facilitate the use of the Oracle database with applications across all languages, continents, and time zones, without having to customize the application for each locale in which it is used.

The new datetime data types, such as TIMESTAMP and INTERVAL, are not only more precise than the DATE data type in previous Oracle versions, but also allow the option to store a timestamp value as non-globalized, absolute within a specified time zone, or relative to the time zone of the user retrieving the data. Along with the new TIMESTAMP and INTERVAL data types in Oracle9i are the operations and functions that support these types. “Common sense” operations between TIMESTAMP and INTERVAL are allowed, such as calculating an INTERVAL from two TIMESTAMP values or adding an INTERVAL to a TIMESTAMP. A full complement of predefined functions is included to process these types, such as DBTIMEZONE to retrieve the value of the database time zone or TO_TIMESTAMP to convert a string representation of a timestamp to the internal representation.

Oracle9i supports Unicode version 3. New functions such as COMPOSE and DECOMPOSE handle characters with diacritical marks, and other new functions such as UNISTR convert a string to a Unicode string. Finally, expanded sorting options provide new sorts as well as a fourth level of sorting that can be user-defined using Oracle’s Locale Builder.
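For example, the new datetime arithmetic and functions can be exercised directly from SQL; the literal values below are illustrative:

-- Adding an INTERVAL to a TIMESTAMP
SELECT TO_TIMESTAMP('2002-04-01 08:30:00', 'YYYY-MM-DD HH24:MI:SS')
       + INTERVAL '4' DAY AS follow_up
FROM   dual;

-- Subtracting two timestamps yields an INTERVAL
SELECT SYSTIMESTAMP - TO_TIMESTAMP('2002-01-01', 'YYYY-MM-DD') AS elapsed
FROM   dual;

-- The time zone of the database itself
SELECT DBTIMEZONE FROM dual;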
Unsupported and Deprecated Features
Many of the changes to the Oracle9i Server that enhance the performance or increase the ease of use for both administrators and users can, unfortunately, require changes to features or changes in how a task is accomplished in Oracle9i. Unsupported features are not available at all; deprecated features are still available for sites that need to spread out the conversion over a period of time and more than one release of the Oracle Server.
Some of the key changes in Oracle9i are in the areas of backup and recovery, security, initialization parameter changes, network enhancements, and Unicode datatype handling.
Backup and Recovery

The Export/Import utilities still support the INCREMENTAL option, but this option may be removed in a future release. The INCREMENTAL option backed up an entire table even if only one row in the table was changed. You can still specify individual tables in an export operation, but administrators are encouraged to use RMAN (Recovery Manager) instead to back up and recover database tables and entire databases.

Using a clone database for tablespace point-in-time recovery (TSPITR) is deprecated in Oracle9i. Using the transportable tablespace (TTS) feature is recommended instead. For the same reason, the FOR RECOVER clause of ALTER TABLESPACE ... OFFLINE is supported only for backward compatibility.
Security

The CONNECT INTERNAL and CONNECT INTERNAL/PASSWORD commands are not supported in Oracle9i. Instead, you use the syntax CONNECT / AS SYSDBA or CONNECT username/password AS SYSDBA.

Server Manager is no longer supported. Use SQL*Plus for all DBA maintenance operations. Any automated scripts that were written for use with Server Manager should run in SQL*Plus with only minor modifications.
Network

The following services previously supported under Oracle Net8 (now known as Oracle Net Services) are no longer supported in Oracle9i: NDS External Naming and Authentication, the SPX protocol, Net8 OPEN, the authentication methods Identix and SecurID, and the use of a protocol.ora file.

In addition, no new features have been added to Oracle Names, and in the future, Oracle Names will no longer be used as a centralized naming method for name resolution. Instead, administrators should use Oracle Internet Directory (OID) for name resolution. Oracle Internet Directory uses LDAP (Lightweight Directory Access Protocol) version 3 and provides a high level of scalability, security, and availability that Oracle Names cannot provide.
Initialization Parameters

The initialization parameters in the following lists have either been deprecated or are not supported in Oracle9i. The replacements for these parameters are discussed in more detail in the other chapters of this book and in the other books in the OCA/OCP series.

Deprecated Initialization Parameters
ROLLBACK_SEGMENTS
FAST_START_IO_TARGET
TRANSACTIONS_PER_ROLLBACK_SEGMENT
LOG_CHECKPOINT_INTERVAL
DB_BLOCK_BUFFERS
BUFFER_POOL_KEEP
BUFFER_POOL_RECYCLE

Unsupported Initialization Parameters
ALWAYS_ANTI_JOIN
ALWAYS_SEMI_JOIN
JOB_QUEUE_INTERVAL
OPTIMIZER_PERCENT_PARALLEL
HASH_MULTIBLOCK_IO_COUNT
DB_BLOCK_LRU_LATCHES
DB_BLOCK_MAX_DIRTY_TARGET
SORT_MULTIBLOCK_READ_COUNT
DB_FILE_DIRECT_IO_COUNT
GC_DEFER_TIME
National Character Set

Various changes have been made to simplify and unify the use of Unicode characters in an Oracle database. The Unicode datatypes NCHAR, NVARCHAR2, and NCLOB can be used only as Unicode types in Oracle9i. The national character sets for these types can be only AL16UTF16 or UTF8; the AL24UTFFSS character set has been replaced by UTF8.
Summary
The new features in Oracle9i focus on two broad areas: improvements to the “user experience” and improvements to make the life of a DBA less stressful. From the user’s point of view, the Oracle Server is available when the user needs it, and the performance improvements keep up with query processing needs, from both an OLTP and a data warehousing point of view. From the DBA’s point of view, the database is more secure, easier to manage, and easier to upgrade without noticeable changes to applications or operations.

The Oracle9i Server is more available than ever. Failover support is more robust and automatic, and the DBA’s job is more streamlined by enhancements to the GUI for these new availability features. Database users are able to be more self-sufficient with the new LogMiner and Oracle Flashback Query features, maximizing database availability while reducing the dependency on the DBA for routine historical data requests. In addition, many of the operations that previously required the database to be shut down can now be done online with minimal impact to ongoing user activity.
Many of the changes to the Oracle9i architecture facilitate both the performance and the scalability of the Oracle server. Memory management is more flexible, allowing many of the SGA memory structures to grow and shrink as the demand on the system changes during the day. Access to external data is improved by treating some external files as if they were native Oracle tables.

The coding of SQL and PL/SQL statements is more streamlined for the application developer because of higher conformance to the SQL:1999 standards. Additions to the supported join types, along with a more versatile CASE expression and the multi-table insert, make both the application developer and the Oracle server more efficient.

In terms of overall security, the Oracle9i Server provides more “fine-grained” enhancements: a better random number generator, the ability to more tightly control access to rows in a table, and a more useful and flexible means of auditing access to the database.

The Oracle9i Server is much easier to manage from a DBA perspective with the new Oracle Managed Files (OMF) and the automated undo features. Security is further enhanced by the Secure Application Role feature, which replaces a relatively weak password authentication method with a more robust and secure stored procedure methodology for enabling roles. DBAs can also control resource usage more easily by using the Active Session Pool feature to limit resource consumption by consumer group.

Globalization Support (formerly known as National Language Support) has been expanded well beyond the role of supporting different character sets. It now includes additional data types and predefined functions to support applications that will be run simultaneously in different languages and different time zones, with no changes to the application programs. Many of the new features in Oracle9i either replace or make obsolete features in previous Oracle releases.
Review Questions

1. Which new feature of Oracle9i allows users to view the contents of a table at some point in the past?

A. LogMiner
B. Import
C. Metadata Viewer
D. Oracle Flashback

2. Choose the statement below that is true regarding enhancements to shared SQL statements in the shared pool.

A. The cursor sharing feature can re-use a SQL statement even if the columns in the statement are in a different order or the GROUP BY clause is different.
B. The new columns CPU_COST, IO_COST, and TEMP_SPACE in PLAN_TABLE help the rule-based optimizer (RBO) to be more accurate.
C. Even if the only difference in SQL statements is in the literal values, the SQL statement may not be re-used if the histogram statistics are skewed for a column in the WHERE clause.
D. The CURSOR_SHARING parameter now supports the SIMILAR and DERIVED values.

3. Given the table declaration below, identify the invalid uses of timestamp datatypes in an expression or function. (Choose two.)

CREATE TABLE TRANSACTIONS
  (TRANS_ID    NUMBER,
   AMOUNT      NUMBER(10,2),
   TRANS_START TIMESTAMP,
   TRANS_END   TIMESTAMP,
   SHIP_DATE   DATE,
   EXPIRE_DATE INTERVAL DAY(0) TO SECOND(0));

A. TRANS_START - TRANS_END
B. TO_TIMESTAMP(AMOUNT, 'YY-MM-DD HH:MI:SS')
C. TRANS_START + INTERVAL '4' DAY
D. TRANS_START + SHIP_DATE

4. Which of the following operations cannot be performed online without any disruption to ongoing online transactions?

A. Dropping a user-defined column
B. Rebuilding secondary IOT indexes
C. Adding new columns to a heap-based table
D. Rebuilding a primary IOT index

5. Which of the following types of joins are now allowed in the FROM clause of a SQL statement? (Choose all that apply.)

A. Cross joins
B. Inner joins
C. Full outer joins
D. Left outer joins

6. How many panes exist in the new version of Oracle Enterprise Manager (OEM)?

A. One, with pop-up windows
B. Four, as in previous versions
C. Two, in a master/detail format
D. Two, with DBA tools in the right-hand pane

7. The DBA is importing a table and an index from a dump file that was exported from another Oracle9i database. Which options does the DBA have when using the statistics from this dump file? (Choose all that apply.)

A. Explicitly accept all statistics
B. Explicitly reject all statistics
C. Let IMPORT decide if the statistics are safe; otherwise recalculate
D. Accept statistics only for non-partitioned tables
E. Explicitly re-calculate statistics, regardless of whether the original statistics are good or bad

8. The Secure Application Role feature in Oracle9i allows a user to authenticate role privileges by doing which of the following?

A. Calling a stored procedure
B. Using OS authentication
C. Using PWFILE authentication
D. Using an encrypted role password

9. Chad normally runs queries against very small tables, but has informed the DBA that he will soon be running some queries against the data warehouse tables for the operations manager. What can the DBA do to make sure that these new queries won’t slow down OLTP operations? (Choose the best answer.)

A. The DBA can use the Active Session Pool feature to put Chad’s session on hold until another user in the same consumer group finishes their session.
B. The DBA can use the Automatic Consumer Group Switching feature to switch Chad’s consumer group to the same group as the OLTP users.
C. The DBA can use the Active Session Pool feature to suspend the session if there are too many active OLTP sessions.
D. The DBA can use the Automatic Consumer Group Switching feature to switch Chad’s consumer group to a secondary group that has a lower priority.

10. Which of the following is not an advantage of having the data dictionary in the redo logs when using LogMiner for DML and DDL activity?

A. The LogMiner activity will not impact other users’ activity against the data dictionary.
B. The LogMiner reports will be more accurate against a snapshot of the data dictionary rather than a constantly changing live data dictionary.
C. Bad blocks in one of the redo logs will not stop the LogMiner analysis with a static data dictionary.
D. The database does not need to be open to use LogMiner, since all needed information is in the redo logs.

11. Which of the new RMAN options can the DBA use to save time when a backup does not complete successfully?

A. Restart the backup with the NOT BACKED UP option.
B. Use mirrored backups to send the backup to two different device types.
C. Include the archive logs in the backup.
D. There is no alternative to a failed backup other than to restart the backup.

12. Identify the true statement regarding binary SPFILEs.

A. All changes to an SPFILE are implemented only after the instance is restarted.
B. Changes made to an SPFILE with the ALTER SYSTEM command can be made simultaneously with the change to the memory copy of the parameter.
C. An SPFILE can exist on the client side.
D. SPFILEs can be used in conjunction with a text-format PFILE.

13. Place the following block read options in order of access time, shortest to longest.

A. Block is read from a remote cache without Cache Fusion
B. Block is read from a local cache
C. Block is read from a remote cache with Cache Fusion
D. Block is read from a shared disk

14. PL/SQL execution is significantly more efficient at runtime for which of the following reasons? (Choose two.)

A. Native C code is generated for PL/SQL procedures.
B. The compiled code resides in the SGA.
C. Byte code is generated by the compiler and therefore can easily be re-used by different transactions.
D. The compiled code resides in the PGA.
Answers to Review Questions

1. D. The package DBMS_FLASHBACK allows the user to view the contents of a table or tables at a specified time in the past.

2. C. If the execution plan is independent of the literal values used in the query, it is likely that the query can be re-used. The new columns in the PLAN_TABLE assist the cost-based optimizer, not the rule-based optimizer, and the CURSOR_SHARING parameter does not have DERIVED as a possible value.

3. B, D. Any reasonable combination of date and time data types is allowed. However, date fields cannot be added together, and dollar amounts are not valid arguments to date conversion functions.

4. A. Primary keys cannot be dropped, nor can columns with user-defined data types be dropped, without making a table unavailable.

5. A, B, C, D. All the above joins can now be specified in the FROM clause. Previous versions of Oracle supported all these join types, other than the full outer join, in the WHERE clause.

6. C. The new OEM not only uses a cleaner, easier-to-use two-pane layout, it integrates all the tools previously available through DBA Studio.

7. A, B, C, E. IMPORT cannot reject statistics based on whether the table is partitioned.

8. A. The stored procedure can restrict access to the role in a number of ways, such as by date and time or by the IP address of the user requesting access to the role.

9. D. Switching to another consumer group with a lower priority will allow the query to finish while minimizing the impact on the ongoing OLTP transactions. The Active Session Pool feature controls resource usage within the same consumer group and will not necessarily reduce the contention with OLTP transactions.

10. C. Bad blocks can be ignored in LogMiner; however, this feature is independent of where LogMiner retrieves the data dictionary information.

11. A. Running RMAN with the NOT BACKED UP option backs up only the missing or incomplete files.

12. B. The changes to an SPFILE may be made at the same time the change is made to the memory copy of the parameter. SPFILEs exist only on the server side and are created using a PFILE. Once the SPFILE is activated, the PFILE is no longer needed.

13. B, C, D, A. Blocks read from a remote cache without Cache Fusion must be written to the shared disk by the remote instance before the blocks can be retrieved by the local instance.

14. A, D. The compiled code is moved to the PGA to reduce contention on the SGA; interpreted byte code is inherently less efficient to execute than native compiled C code.
Oracle Overview and Architecture

ORACLE9i FUNDAMENTALS I EXAM OBJECTIVES COVERED IN THIS CHAPTER:

Describe the Oracle architecture and its main components

Describe the structures involved in connecting a user to an Oracle instance
Exam objectives are subject to change at any time without prior notice and at Oracle’s sole discretion. Please visit Oracle’s Training and Certification website (http://www.oracle.com/ education/certification/) for the most current exam objectives listing.
The Oracle9i database is filled with many features that enhance the functionality and improve the performance of the database. It is feature-rich with objects, Java, and many Internet programming techniques. The DBA Fundamentals I exam of the OCP (Oracle Certified Professional) certification tests your knowledge of the Oracle Server architecture and the most common administration tasks. This chapter begins by discussing the components that constitute the Oracle database and the way the database functions. Administering an Oracle database requires that you know how these components interact and how to customize them to best suit your requirements.
Oracle9i Server: An Overview
The Oracle Server consists of two major components—the database and the instance. Database is a confusing term that is often used to represent different things on different platforms; the only commonality is that it is something to do with data. In Oracle, the term database represents the physical files that store data. An instance comprises the memory structures and background processes used to access data (from the physical database files). Each database should have at least one instance associated with it. It is possible for multiple instances to access a single database; this is known as the Real Application Cluster configuration.
Oracle Objective
Describe the Oracle server architecture and its main components
You use the Oracle database, which is a collection of data, to store and retrieve information. The database consists of logical structures and physical structures. Logical structures represent the components that you can see in the Oracle database (such as tables, indexes, and so on), and physical structures represent the method of storage that Oracle uses internally (the physical files). Oracle maintains the logical structure and physical structure separately, so that the logical structures can be defined identically across different hardware and operating system platforms.
Logical Storage Structures

Oracle logically divides the database into smaller units to manage, store, and retrieve data efficiently. The following paragraphs give you an overview of the logical structures; they are discussed in detail in the coming chapters.

Tablespaces At the highest level, the database is logically divided into smaller units called tablespaces. A tablespace commonly groups related logical structures together. For example, you might group data specific to an application or a function together in one or more tablespaces. This logical division helps you administer a portion of the database without affecting the rest of it. Each database should have one or more tablespaces. When you create a database, Oracle creates the SYSTEM tablespace as a minimum requirement.

Blocks A block is the smallest unit of storage in Oracle. A block is usually a multiple of the operating system block size. A data block corresponds to a specific number of bytes of storage space. The block size is based on the parameter DB_BLOCK_SIZE and is determined when the database is created.

Extents An extent is the next level of logical grouping. It is a grouping of contiguous blocks, allocated in one chunk.

Segments A segment is a set of extents allocated for a logical structure such as a table, index, cluster, and so on. Whenever you create a logical structure, Oracle allocates a segment, which contains at least one extent, which in turn has at least one block. A segment can be associated with only one tablespace. Figure 2.1 shows the relationship between tablespaces, segments, extents, and blocks.
There are four types of segments:

Data segments Store the table (or cluster) data. Every table created has a segment allocated.

Index segments Store the index data. Every index created has an index segment allocated.

Temporary segments Are created when Oracle needs a temporary work area, such as for sorting during a query, to complete execution of a SQL statement. These segments are freed when the execution completes.

Undo segments Store undo information. When you roll back changes made to the database, undo records in the undo tablespace are used to undo the changes.
Segments and other logical structures are discussed in detail in Chapter 6, “Logical and Physical Database Structures.”
A schema is a logical structure that groups the database objects. A schema is not directly related to a tablespace or to any other logical storage structure. The objects that belong to a schema can reside in different tablespaces, and a tablespace can have objects that belong to multiple schemas. Schema objects include structures such as tables, indexes, synonyms, procedures, triggers, database links, and so on.
Physical Storage Structures
The physical database structure consists of three types of physical files:
Data files
Redo log files
Control files
The purpose and contents of each type of file are explained in the following paragraphs. Figure 2.2 shows the physical structures and how the database is related to the memory structures and background processes. This figure also shows the relationship between tablespaces and data files.

Data files Data files contain all the database data. Every Oracle database should have one or more data files. Each data file is associated with one and only one tablespace. A tablespace can consist of more than one data file.

Redo log files Redo log files record all changes made to data. Every Oracle database should have two or more redo log files, because Oracle writes to the redo log files in a circular fashion. If a failure prevents a database change from being written to a data file, you can obtain the changes from the redo log files; therefore, changes are never lost. Redo logs are critical for database operation and recovery from a failure. Oracle allows you to have multiple copies of the redo log files (preferably on different disks). This feature is known as multiplexing of redo logs, a process in which Oracle treats the redo log and its copies as a group identified with an integer, known as a redo log group. Redo log files are discussed in detail in Chapter 5, “Control Files and Redo Log Files.”
Control files Every Oracle database has at least one control file. It maintains information about the physical structure of the database. The control file can be multiplexed, so that Oracle maintains multiple copies. It is critical to the database. The control file contains the database name and timestamp of database creation as well as the name and location of every data file and redo log file. Control files are discussed in detail in Chapter 5, “Control Files and Redo Log Files.”
The size of a tablespace is determined by the total size of all the data files associated with the tablespace. The size of the database is the total size of all its tablespaces.
The memory structures are used to cache application data, data dictionary information (metadata—information about the objects, logical structures, schemas, privileges, and so on—discussed in Chapter 3, “Creating a Database and Data Dictionary”), Structured Query Language (SQL) commands, PL/SQL and Java program units, transaction information, data required for execution of individual database requests, and other control information. Memory structures are allocated to the Oracle instance when the instance is started. The two major memory structures are known as the System Global Area (also called the Shared Global Area) and the Program Global Area (also called the Private Global Area or the Process Global Area). Figure 2.3 illustrates the various memory structures in Oracle.

FIGURE 2.3 Oracle memory structures
System Global Area

The System Global Area (SGA) is a shared memory area. All users of the database share the information maintained in this area. The SGA and the background processes constitute an Oracle instance. Oracle allocates memory for the SGA when an Oracle instance is started and de-allocates it when the instance is shut down. The information stored in the SGA is divided into multiple memory structures that are allocated space when the instance is started. These memory structures are dynamic in Oracle9i, but the total size cannot exceed the value specified in the initialization parameter SGA_MAX_SIZE.

Memory in the SGA is allocated in units of contiguous memory called granules. The size of a granule depends on the parameter SGA_MAX_SIZE; if the SGA size is less than 128MB, each granule is 4MB; otherwise, each granule is 16MB. A minimum of three granules are allocated for the SGA: one for the fixed part of the SGA (redo buffers, locking information, database state information), one for the buffer cache, and one for the shared pool (library cache and data dictionary cache).
The dynamic performance view V$BUFFER_POOL tracks the granules allocated for the DEFAULT, KEEP, and RECYCLE buffer pools.
The following are the components of the SGA.
Database Buffer Cache

The database (DB) buffer cache is the area of memory that caches the database data, holding blocks from the data files that have been read recently. The DB buffer cache is shared among all the users connected to the database. There are three types of buffers:

Dirty buffers Dirty buffers are the buffer blocks that need to be written to the data files. The data in these buffers has changed and has not yet been written to the disk.

Free buffers Free buffers do not contain any data or are free to be overwritten. When Oracle reads data from disk, free buffers hold this data.

Pinned buffers Pinned buffers are the buffers that are currently being accessed or explicitly retained for future use (for example, the KEEP buffer pool).
Oracle maintains two lists to manage the buffer cache. The write list (dirty buffer list) contains the buffers that are modified and need to be written to the disk (the dirty buffers). The least recently used (LRU) list contains free buffers, pinned buffers, and the dirty buffers that have not yet been moved to the write list. Consider the LRU list as a queue of blocks, in which the most recently accessed blocks are always in the front (known as the most recently used, or MRU, end of the list; the other end, where the least recently accessed blocks are, is the LRU end). The least-used blocks are thrown out of the list when new blocks are accessed and added to the list. When an Oracle process accesses a buffer, it moves the buffer to the MRU end of the list so that the most frequently accessed data is available in the buffers. When new data buffers are moved to the LRU list, they are copied to the MRU end of the list, pushing out the buffers from the LRU end. An exception to this procedure occurs when a full table is scanned; in this case, the blocks are written to the LRU end of the list. When an Oracle process requests data, it searches the data in the buffer cache, and if it finds data, the result is a cache hit. If it cannot find the data, the result is a cache miss, and data then needs to be copied from disk to the buffer. Before reading a data block into the cache, the process must first find a free buffer. The server process on behalf of the user process searches either until it finds a free buffer or until it has searched the threshold limit of buffers. If the server process finds a dirty buffer as it searches the LRU list, it moves that buffer to the write list and continues to search. When the process finds a free buffer, it reads the data block from the disk into the buffer and moves the buffer to the MRU end of the LRU list. If an Oracle server process searches the threshold limit of buffers without finding a free buffer, the process stops searching and signals the DBWn background process to write some of the dirty buffers to disk. The DBWn process and other background processes are discussed in the next section.
To allow the DBA (database administrator) to size the components of the buffer cache efficiently, Oracle9i provides the “buffer cache advisory feature” to maintain the statistics associated with different cache sizes. This feature does incur a small performance hit on both memory and CPU, however.
You use the parameter DB_CACHE_ADVICE to enable or disable statistics collection; its values can be ON, OFF, or READY. The view V$DB_CACHE_ADVICE is available for displaying the cache statistics.
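After the instance has run under a representative workload, the DBA might review the collected statistics as follows; the query against the standard block size pool (named DEFAULT) is a sketch:

-- Enable advisory statistics collection
ALTER SYSTEM SET db_cache_advice = ON;

-- Estimated physical reads for a range of candidate cache sizes
SELECT size_for_estimate, estd_physical_reads
FROM   v$db_cache_advice
WHERE  name = 'DEFAULT'
ORDER  BY size_for_estimate;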
Redo Log Buffer

The redo log buffer is a circular buffer in the SGA that holds information about the changes made to the database data. The changes are known as redo entries or change vectors and are used to redo the changes in case of a failure. Changes are made to the database through INSERT, UPDATE, DELETE, CREATE, ALTER, or DROP commands.
The parameter LOG_BUFFER determines the size of the redo log buffer cache.
Shared Pool

The shared pool portion of the SGA holds information such as SQL, PL/SQL procedures and packages, the data dictionary, locks, character set information, security attributes, and so on. The shared pool consists of the library cache and the data dictionary cache.

Library Cache The library cache contains the shared SQL areas, private SQL areas, PL/SQL procedures and packages, and control structures such as locks and library cache handles. The shared SQL area is used for maintaining recently executed SQL commands and their execution plans. Oracle divides each SQL statement that it executes into a shared SQL area and a private SQL area. When two users are executing the same SQL, the information in the shared SQL area is used for both. The shared SQL area contains the parse tree and execution plan, whereas the private SQL area contains values for the bind variables (persistent area) and runtime buffers (runtime area). Oracle creates the runtime area as the first step of an execute request. For INSERT, UPDATE, and DELETE statements, Oracle frees the runtime area after the statement has been executed. For queries, Oracle frees the runtime area only after all rows have been fetched or the query has been canceled.

Oracle processes PL/SQL program units the same way it processes SQL statements. When a PL/SQL program unit is executed, the code is moved to the shared PL/SQL area, and the individual SQL commands within the program unit are moved to the shared SQL area. Again, the shared program units are maintained in memory with an LRU algorithm. If another process requires the same program unit, Oracle can omit disk I/O and compilation, and the code that resides in memory will be executed.

The instance maintains a third area of the library cache for internal use. Various locks, latches, and other control structures reside here, and any server processes that require this information can freely access it.

Data Dictionary Cache The data dictionary is a collection of database tables and views containing metadata about the database, its structures, its privileges, and its users. Oracle accesses the data dictionary frequently during the parsing of SQL statements. The data dictionary cache holds the most recently used data dictionary information. The data dictionary cache is also known as the row cache because it holds data as rows instead of buffers (which hold entire blocks of data).
The parameter SHARED_POOL_SIZE determines the size of the shared pool and can be dynamically altered.
Large Pool

The large pool is an optional area in the SGA that the DBA can configure to provide large memory allocations for specific database operations such as an Oracle backup or restore. The large pool allows Oracle to request large memory allocations from a separate pool to prevent contention from other applications for the same memory. The large pool does not have an LRU list.
The parameter LARGE_POOL_SIZE specifies the size of the large pool and can be dynamically altered.
Java Pool

The Java pool is another optional area in the SGA that the DBA can configure to provide memory for Java operations, just as the shared pool is provided for processing SQL and PL/SQL commands.
The parameter JAVA_POOL_SIZE determines the size of the Java pool, and can be dynamically altered.
Program Global Area

The Program Global Area (PGA) is the area in memory that contains the data and process information for one process; this area is non-shared memory. The contents of the PGA depend on the server configuration. For a dedicated server configuration (one dedicated server process for each connection to the database—dedicated server and shared server configurations are discussed later in this chapter), the PGA holds stack space and session information. For shared server configurations, in which user connections are pooled through a dispatcher, the PGA contains the stack space information, and the session information is in the SGA. Stack space is the memory allocated to hold variables, arrays, and other information that belongs to the session.

A PGA is allocated for each server process and de-allocated when the process is completed. Unlike the SGA, which is shared by several processes, the PGA provides sort space, session information, stack space, and cursor information for a single server process.
Sort Area

The memory area that Oracle uses to sort data is known as the sort area, which uses memory from the PGA for a dedicated server connection. For shared server configurations, the sort area is allocated from the SGA. Shared and dedicated server configurations are discussed later in this chapter. The sort area can grow depending on the need; you use the SORT_AREA_SIZE parameter to set the maximum size. The parameter SORT_AREA_RETAINED_SIZE determines the size to which the sort area is reduced after the sort operation. The memory released from the sort area is kept with the server process; it is not released to the operating system.
If the data to be sorted does not fit into the memory area defined by SORT_AREA_SIZE, Oracle divides the data into smaller pieces that do fit and sorts these individually. These individual sorts are called runs, and the sorted data is held in the user’s temporary tablespace using temporary segments. When all the individual sorts are complete, Oracle merges these runs to produce the final result. Oracle sorts the result set if the query contains a DISTINCT, ORDER BY, or GROUP BY operator or any of the set operators (UNION, INTERSECT, MINUS).

Managing SORT_AREA_SIZE in a large enterprise environment can be challenging. The trick is trying to maximize performance without using up too many system resources. Oracle9i provides an automatic method for managing PGA memory. The two key initialization parameters used to automate PGA memory management are PGA_AGGREGATE_TARGET and WORKAREA_SIZE_POLICY. The value for PGA_AGGREGATE_TARGET specifies the total amount of memory that can be used by all server processes, and the value for WORKAREA_SIZE_POLICY is either MANUAL or AUTO.
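Enabling automatic PGA management reduces to two commands; the target size here is illustrative:

-- Give all server processes a shared PGA memory budget
ALTER SYSTEM SET pga_aggregate_target = 200M;

-- Let Oracle size individual sort and hash work areas automatically
ALTER SYSTEM SET workarea_size_policy = AUTO;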
Software Code Area

Software code areas are the portions of memory that store the code being executed. Software code areas are mostly static in size and depend on the operating system. These areas are read-only and can be shared (if the operating system allows), so multiple copies of the same code are not kept in memory. Some Oracle tools and utilities (such as SQL*Forms and SQL*Plus) can be installed as shared, but some cannot. Multiple instances of Oracle can use the same Oracle code area with different databases if they are running on the same computer.
Oracle Background Processes
A process is a mechanism used in the operating system to execute a series of tasks. Oracle starts multiple processes in the background when the instance is started; each background process is responsible for specific tasks. The following sections describe each process and its purpose. Not all the background processes are present in every instance. Figure 2.4 shows the Oracle background processes.
A user (client) process is initiated from the tool that is trying to use the Oracle database. A server process accepts a request from the user process and interacts with the Oracle database. On dedicated server systems, there will be one server process for each client connection to the database.
Database Writer (DBWn)

The purpose of the database writer process (DBWn) is to write the contents of dirty buffers to the data files. By default, Oracle starts one database writer process (DBW0) when the instance starts; for multi-user and busy systems, you can have up to nine more database writer processes (DBW1 through DBW9) to improve performance. The parameter DB_WRITER_PROCESSES determines the additional number of database writer processes to be started. The DBWn process writes the modified buffer blocks to disk, so more free buffers are available in the buffer cache. Writes are always performed in bulk to reduce disk contention; the number of blocks written in each I/O is operating system dependent. The DBWn process initiates writing to data files under these circumstances:
When the server process cannot find a clean buffer after searching the set threshold of buffers, it initiates the DBWn process to write dirty buffers to the disk, so that some buffers are freed.
When a checkpoint occurs, DBWn periodically writes buffers to disk.
When a timeout occurs.
When you change a tablespace to read-only.
When you place a tablespace offline.
When you drop or truncate a table.
When you place a tablespace in BACKUP mode.
Writes to the data file(s) are independent of the corresponding COMMIT performed in the SQL code.
Log Writer (LGWR)

The log writer process (LGWR) writes the blocks in the redo log buffer in the SGA to the online redo log files. The redo log buffer is circular: when LGWR writes log buffers to the disk, Oracle server processes can write new entries in the redo log buffer. LGWR writes the entries to the disk fast enough to ensure that room is available for the server processes to write log information. The log writer process writes the buffers to the disk under the following circumstances:
When a user transaction issues a COMMIT
When the redo log buffer is one-third full
When the DBWn process writes dirty buffers to disk
LGWR writes simultaneously to the multiplexed online redo log files. Even if one of the log files in the group is damaged, LGWR continues writing to the available file. LGWR writes to the redo logs sequentially so that transactions can be applied in order in the event of a failure.
By writing the committed transaction to the redo log files, the change to the database is never lost (that is, it can be recovered if a failure occurs).
Checkpoint (CKPT)

Checkpoints help to reduce the time required for instance recovery. A checkpoint is an event that flushes the modified data from the buffer cache to the disk and updates the control file and data files. The checkpoint process (CKPT) updates the headers of data files and control files; the DBWn process writes the actual blocks to the files. If checkpoints occur too frequently, disk contention becomes a problem with the data file updates. If checkpoints occur too infrequently, the time required to recover a failed database can be significantly longer. Checkpoints occur automatically when an online redo log file fills (a log switch). A log switch occurs when Oracle finishes writing one file and starts the next file.
System Monitor (SMON)

The system monitor process (SMON) performs instance or crash recovery at database start-up by using the online redo log files. SMON is also responsible for cleaning up temporary segments in the tablespaces that are no longer used and for coalescing the contiguous free space in the tablespaces. If any dead transactions were skipped during crash and instance recovery because of file-read or offline errors, SMON recovers them when the tablespace or file is brought back online. SMON wakes up regularly to check whether it is needed; other processes can call SMON if they detect a need for it to wake up.
SMON coalesces the contiguous free space in a tablespace only if its default PCTINCREASE value is set to a nonzero value.
Process Monitor (PMON)

The process monitor process (PMON) cleans up failed user processes and frees up all the resources used by the failed process. It resets the status of the active transaction table and removes the process ID from the list of active processes. It reclaims all resources held by the user and releases all locks on tables and rows held by the user. PMON wakes up periodically to check whether it is needed.
DBWn, LGWR, CKPT, SMON, and PMON processes are the default processes associated with all instances.
Archiver (ARCn)

When the Oracle database is running in ARCHIVELOG mode, the online redo log files are copied to another location before they are overwritten. You can use these archived log files to recover the database. When the database is in ARCHIVELOG mode, you can recover the database up to the point of failure. The archiver process (ARCn) performs the archiving function. Oracle9i can have as many as 10 ARCn processes (ARC0 through ARC9). The LGWR process starts new ARCn processes whenever the current number of ARCn processes is insufficient to handle the workload. The ARCn process is enabled only if the database is in ARCHIVELOG mode and automatic archiving is enabled (parameter LOG_ARCHIVE_START = TRUE).
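A minimal sketch of enabling archiving follows; the archive destination path is illustrative:

-- Parameter file entries enabling automatic archiving
-- LOG_ARCHIVE_START = TRUE
-- LOG_ARCHIVE_DEST_1 = 'LOCATION=/u05/arch/PROD'

-- Then place the database in ARCHIVELOG mode from SQL*Plus
STARTUP MOUNT
ALTER DATABASE ARCHIVELOG;
ALTER DATABASE OPEN;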
Recoverer (RECO)

The recoverer process (RECO) is used with distributed transactions to resolve failures. The RECO process is present only if the instance permits distributed transactions and the DISTRIBUTED_TRANSACTIONS parameter is set to a nonzero value; if this initialization parameter is zero, RECO is not created during instance start-up. This process attempts to access databases involved in in-doubt transactions and resolves the transactions. A transaction is in doubt when you change data in multiple databases and a failure occurs before you save the changes. The failure can be the result of a server crash or a network problem.
Lock (LCKn) LCKn processes (LCK0 through LCK9) are used in the Real Application Cluster environment, for inter-instance locking. The Real Application Cluster option lets you mount the same database for multiple instances.
Queue Monitor (QMNn) The queue monitor process is used for Oracle Advanced Queuing, which monitors the message queues. You can configure as many as 10 queue monitor processes (QMN0 through QMN9). Oracle Advanced Queuing provides an infrastructure for distributed applications to communicate asynchronously using messages. Oracle Advanced Queuing stores messages in queues for deferred retrieval and processing by the Oracle Server. The parameter AQ_TM_PROCESSES specifies the number of queue monitor processes.
Failure of an SNP (job queue) process or a QMN process does not cause the instance to crash; Oracle restarts the failed process. If any other background process fails, the Oracle instance fails.
Dispatcher (Dnnn) Dispatcher processes are part of the shared server architecture. They minimize the resource needs by handling multiple connections to the database using a limited number of server processes. You can create multiple dispatcher processes for a single database instance; you must create at least one dispatcher for each network protocol used with Oracle.
Shared Server (Snnn) Shared server processes provide the same functionality as the dedicated server processes, except that shared server processes are not associated with
a specific user process. You create shared server processes to manage connections to the database in a shared server configuration. The number of shared server processes that you can create ranges between the values of the parameters SHARED_SERVERS and MAX_SHARED_SERVERS.
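A hypothetical init.ora fragment for a shared server configuration follows; the values are illustrative only:

# illustrative init.ora entries for a shared server configuration
dispatchers        = "(PROTOCOL=TCP)(DISPATCHERS=3)"  # three dispatchers for TCP
shared_servers     = 5    # shared server processes started with the instance
max_shared_servers = 20   # upper limit Oracle can start on demand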
Managing and Monitoring Background Processes

Over the last couple of weeks, the users have reported that the system is slowing down. Your system hardware manager has just added a second CPU and some additional disk drives to handle the space needed by some new applications that use your database. The users always seem to get their reports, and their sessions never terminate abnormally. You suspect that one of the background processes may be at fault.

As an experienced DBA, you know that the first place you should look is the alert log. There might be a trace file in the user dump area, but the users are not reporting anything abnormal other than the increase in response time. The alert log indicates that your log switches are occurring every couple of minutes, and you're occasionally getting "Checkpoint not complete" messages. Because of the increase in the number of users, the online log files (discussed in later chapters) are filling up too fast and causing unnecessary delays for the users. You decide to automate the review of these logs in the future so you can catch these problems before the users notice them.

Keeping the background processes straight can be a challenge. Some of them exist as single background processes (for example, LGWR), while other background processes may have multiple copies for a given instance (for example, DBWn). The alert log, the process-specific logs, and the user logs are the key to detecting and preventing performance problems. Daily cron jobs or some of the newer features of Oracle Enterprise Manager (OEM) can easily automate the review of the logs and e-mail the results to the DBA or operations group. In addition, these jobs can clean up, truncate, or archive the logs if their size becomes unmanageable. Many third-party tools can also help manage these tasks, but knowing what's going on under the hood is required to use these tools effectively.
Before discussing the actual mechanism for connecting to an Oracle database, let us review the terms user process and server process. An application program (such as Pro*C) or an Oracle tool (such as SQL*Plus) starts the user process when you run the application tool. The user process may be on the same machine where the instance/database resides, or it may be initiated from a client machine in a client/server architecture.
Oracle Objective
Describe the structures involved in connecting a user to an Oracle instance
A server process gets requests from the user process and interacts with the Oracle instance to carry out the requests. On some platforms, it is possible to combine the user process and the server process (single task) to reduce system overhead if the user process and the server process are on the same machine. The server process is responsible for the following:
Parsing and executing SQL statements issued via the application or tool
Reading the data files and bringing the necessary data blocks into the shared buffer cache if the data blocks requested are not already in the SGA
Returning the results to the user process in such a way that it can understand the data
In client/server architecture, OracleNet is commonly used to communicate between the client and the server. The client process (user process) attempts to establish a connection to the database using the appropriate OracleNet driver, and then OracleNet communicates with the server and assigns a server process to fulfill the request on behalf of the user process. OracleNet has a listener on the server that constantly waits for connection requests from client machines (client and server can be on the same machine).
Depending on the architecture of the network, the distribution of the employees, the response time requirements, and the security requirements, different users may connect to the database using different methods. The next two sections describe the ways in which an Oracle user can connect to the database using either a dedicated server connection or a multithreaded (shared server) connection. For certain DBA operations, dedicated server connections are required.
Dedicated Server Configuration

In a dedicated server configuration, one server process is created for each connection request. Oracle assigns a dedicated server process to take care of the requests from the user process. The server process is terminated when the user process disconnects from the database. Even if the user process is not making any requests, the server process will be idle, waiting for a request. Refer to Figure 2.4 for a diagram of how a user process interacts with Oracle by using a dedicated server process.

The following steps detail how a dedicated server process takes the request from a user process and delivers the results (the background processes are not discussed in these steps):

1. The client application or tool initiates the user process to connect to the instance.

2. The client machine communicates the request to the server machine by using OracleNet drivers. The OracleNet listener on the server detects the request and starts a dedicated server process on behalf of the user process after verifying the username and password.

3. The user issues a SQL command.

4. The dedicated server process determines whether a similar SQL statement is in the shared SQL area. If not, it allocates a new shared SQL area for the command and stores the parse tree and execution plan. During parsing, the server process checks for syntactic correctness of the statement, checks whether the object names are valid, and checks privileges. The required information is obtained from the data dictionary cache. A PGA is created to store the private information of the process.

5. The server process looks for data blocks that need to be changed or accessed in the buffer cache. If they are not there, it reads the data files and brings the necessary blocks into the SGA.

6. The server process executes the SQL statement. If data blocks need to be changed, they are changed in the buffer cache (the DBWn process updates the data file). The change is logged in the redo log buffer.

7. The status of the request or the result is returned to the user process.
The Shared Server (Multithreaded) Configuration

If many users are connecting to the dedicated server database, there will be many server processes. Most of the time, these server processes will be idle for Online Transaction Processing (OLTP) applications. You can configure Oracle to have one server process manage multiple user processes. This configuration is known as the shared server or multithreaded configuration.

In a shared server configuration, Oracle starts a fixed number of server processes when the instance starts. These processes work in a round-robin fashion to serve requests from the user processes. The user processes connect to a dispatcher background process, which routes client requests to the next available shared server process. One dispatcher process can handle only one communication protocol; hence, there should be at least one dispatcher process for every protocol used. The shared server configuration requires all connections to use OracleNet. So, for establishing a connection to the instance using a shared server configuration, three processes are involved: an OracleNet listener process, a dispatcher process, and a shared server process.

When a user makes a request, the dispatcher places the request on the request queue; an available shared server process picks up the request from this queue. When the shared server process completes the request, it places the response on the calling dispatcher's response queue. Each dispatcher has its own response queue in the SGA. The dispatcher then returns the completed request to the appropriate user process. Figure 2.4 shows the connection using a shared server configuration and the associated processes.
The following steps detail how a shared server process takes the request from a user process and delivers the results:

1. When the instance is started, one or more shared server processes and dispatcher processes are started. The OracleNet listener is running on the server. The request and response queues are created in the SGA.

2. The client application or tool initiates the user process to connect to the instance.

3. The client machine communicates the request to the server machine by using OracleNet drivers. The OracleNet listener on the server detects the request and identifies the protocol that the user process is using. It connects the user process to one of the dispatchers for this protocol (if no dispatcher is available for the requested protocol, a dedicated server process is started by the listener).

4. The user issues a SQL command.

5. The dispatcher decodes the request and puts it into the request queue (at the tail) along with the dispatcher ID.

6. The request moves up the queue as the server processes serve the previous requests. The next available shared server process picks up the request.

7. The shared server process determines whether a similar SQL statement is in the shared SQL area. If not, it allocates a new shared SQL area for the command and stores the parse tree and execution plan. During parsing, the server process checks for syntactic correctness of the statement, validity of the object names, and privileges. The required information is obtained from the data dictionary cache. A PGA is created to store the private information of the process.

8. The server process looks for data blocks that need to be changed or accessed in the buffer cache. If they are not there, it reads the data files and brings the necessary blocks into the SGA.

9. The server process executes the SQL statement. If data blocks need to be changed, they are changed in the buffer cache (the DBWn process updates the data file). The change is logged in the redo log buffer.

10. The status of the request or the result is returned to the response queue for the dispatcher.

11. The dispatcher periodically checks the response queue. When it finds a response, it sends it to the requesting user process.
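You can observe this machinery at work through the dynamic performance views; the views below are standard, though the column lists are trimmed for illustration:

SQL> SELECT name, network, status FROM v$dispatcher;
SQL> SELECT name, status, requests FROM v$shared_server;
SQL> SELECT type, queued, totalq FROM v$queue;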
Processing SQL Statements
You use Structured Query Language (SQL) to manipulate and retrieve data in an Oracle database. Data Manipulation Language (DML) statements query or manipulate data in existing database objects. SELECT, INSERT, UPDATE, DELETE, EXPLAIN PLAN, and LOCK TABLE are DML statements; they are the most commonly used statements in the database. In this section, we'll explain how Oracle processes queries and other DML statements. We'll also discuss what happens when a user makes their changes to the database permanent (issues a COMMIT) or decides to undo the changes (issues a ROLLBACK).
This section on processing SQL statements is primarily for background information. More than likely you will not come across a question on processing SQL statements in the Oracle9i Fundamentals I exam.
SQL Parse, Execute, and Fetch Stages

SQL statements are processed in two or three stages. Each SQL statement passed to the server process from the user process goes through the parse and execute stages. In the case of queries (SELECT statements), an additional fetch stage retrieves the rows. We will look at each of these in the following sections.
Parse

Parsing is one of the first stages in processing any SQL statement. When an application or tool issues a SQL statement, it makes a parse call to Oracle, which does the following:
Checks the statement for syntax correctness and validates the table names and column names against the dictionary
Determines whether the user has privileges to execute the statement
Determines the optimal execution plan for the statement
Finds a shared SQL area for the statement
If there is an existing SQL area with the parsed representation of the statement in the library cache, Oracle uses this parsed representation and executes the statement immediately. If not, Oracle generates the parsed representation of the statement, allocates a shared SQL area for the statement in the library cache, and stores its parsed representation there. Oracle's parse operation allocates a shared SQL area for the statement, which allows the statement to be executed any number of times without repeated parsing.

Execute

Oracle executes the parsed statement in the execute stage. For UPDATE and DELETE statements, Oracle locks the rows that are affected, so that no other process can change the rows until the transaction is completed. Oracle also looks for the data blocks in the data buffer cache. If it finds them, the execution is faster; if not, Oracle has to read the data blocks from the physical data file into the buffer cache. If the statement is a SELECT or an INSERT, no existing rows need to be locked because no existing data is being changed.

Fetch

The fetch operation follows the execution of a SQL SELECT command. After the execution completes, the rows identified during the execution stage are returned to the user process. The rows are ordered (sorted) if requested by the query. The results are always in a tabular format; rows may be fetched (retrieved) one row at a time or in groups (array processing). Figure 2.5 shows the steps required in processing a SELECT query.
1. Create a cursor. The cursor may be explicit or implicit.

2. Parse the statement.

3. Define output: specify the location, type, and data type of the result set, and convert the data type if necessary. This step is required only when the results are fetched to variables.

4. Bind variables; if the statement is using any variables, Oracle should know the value for the variables.

5. See whether the statement can be run in parallel (multiple server processes working to complete the work).

6. Execute the statement.

7. Fetch the rows and inform the user that the statement execution is complete.

8. Close the cursor.
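To see the bind step (step 4) in practice, here is a minimal SQL*Plus sketch; the emp table, its columns, and the department value are illustrative and assume a demo schema:

SQL> VARIABLE dept NUMBER
SQL> EXECUTE :dept := 10
SQL> SELECT ename FROM emp WHERE deptno = :dept;

Because the statement text is identical no matter what value :dept holds, rerunning the query with a different value reuses the parsed representation already in the shared SQL area.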
Processing a COMMIT or ROLLBACK

You have seen how the server process processes a query and other DML statements. Before discussing the steps in processing a COMMIT or a ROLLBACK, let's look at an important mechanism that Oracle uses for recovery: the system change number (SCN).

When a transaction commits, Oracle assigns it a unique number that defines the database state at a precise moment in time, acting as an internal timestamp. The SCN is a serial number, unique and always increasing. The database is always recovered based on the SCN.

The SCN also provides a read-consistent view of the data. When a query reaches the execution stage, the current SCN is determined; only the blocks with an SCN less than or equal to this SCN are read. For changed blocks (with a higher SCN), data is read from the rollback segments.

The SCN is recorded in the control file, data file headers, block headers, and redo log files. The redo log file has a low SCN (the lowest change number stored in the log file) and a high SCN (the highest change number in the log file, assigned when the file is closed, before opening the next redo log file). The SCN value is stored in every data file header, which is updated whenever a checkpoint is done. The control file records the SCN for each data file that is taken offline.
Steps in Processing a COMMIT

Oracle commits a transaction when you do the following:

Issue a COMMIT command.

Execute a DDL statement, such as CREATE TABLE (an implicit commit occurs).

Exit normally from a tool such as SQL*Plus (an implicit commit occurs).
The following are the steps for processing a COMMIT, that is, making the changes to the database permanent:

1. The server process generates an SCN and assigns it to the rollback segment entry; it then marks in the rollback segment that the transaction is committed.

2. The LGWR process writes the redo log buffer entries to the online redo log files, along with the SCN.

3. The server process releases locks held on rows and tables.

4. The user is notified that the COMMIT is complete.

5. The server process marks the transaction as complete.
Oracle defers writes to the data files to reduce disk I/O. The DBWn process writes the changed blocks to the data files independent of any COMMIT. By writing the change vectors and the SCN to the redo log files, Oracle ensures that the committed changes are never lost. This mechanism is known as the fast commit: writing to the redo log files is faster than writing the changed blocks to the data files.
Steps in Processing a ROLLBACK

If the transaction has not been committed, it can be rolled back; that is, the database changes made by the session are restored to their original values. Oracle rolls back a transaction when:
You issue a ROLLBACK command.
The server process terminates abnormally.
The DBA kills the session.
The following steps are used in processing a ROLLBACK, that is, in undoing the changes to the database:

1. The server process undoes all changes made in the transaction by using the undo segment entries.

2. The server process releases all locks held on tables and rows.

3. The server process marks the transaction as complete.
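A short SQL*Plus sketch of both outcomes; the emp table and the values are illustrative:

SQL> UPDATE emp SET sal = sal * 1.10 WHERE empno = 7839;
SQL> ROLLBACK;   -- changes undone from the undo entries; locks released
SQL> UPDATE emp SET sal = sal * 1.10 WHERE empno = 7839;
SQL> COMMIT;     -- SCN assigned; LGWR flushes the redo log buffer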
Summary

This chapter introduced you to the Oracle architecture and configuration. The Oracle Server consists of a database and an instance. The database consists of the structures that store the actual data. The instance consists of the memory structures and background processes.

The database has logical structures and physical structures. Oracle maintains the logical structures and physical structures separately, so they can be managed independently of each other. The database is divided logically into multiple tablespaces. Each tablespace can have multiple segments. Each database table or index is allocated a segment. The segment consists of one or many extents. The extent is a contiguous allocation of blocks. A block is the smallest unit of storage in Oracle.

The physical database structures include data files, control files, and redo log files. The data files contain all the database data. The control file keeps information about the data files and redo log files in the database, as well as the database name, creation timestamp, and so on. The redo log files keep track of the database changes. You can use the redo log files to recover the database in case of instance or media failure.

The memory structures of an Oracle instance include the System Global Area (SGA) and Program Global Area (PGA). The SGA is shared among all the database users; the PGA is not. The SGA consists of the database buffer cache, shared pool, and redo log buffers. The database buffers cache the recently used database blocks in memory. The dirty buffers are the blocks that are changed and need to be written to the disk. The DBWn process writes these blocks to the data files. The redo log buffers keep all changes to the database; these buffers are written to the redo log files by the LGWR process. The shared pool consists of the library cache, dictionary cache, and other control structures. The library cache contains parsed SQL and PL/SQL code. The dictionary cache holds the most recently used dictionary information.

The application tool (such as SQL*Plus or Pro*C) communicates with the database by using a server process on the server side. Oracle can have dedicated server processes, whereby one server process takes requests from one user process. In a multithreaded configuration, the server processes are shared.
Parse, execute, and fetch are the major steps used in processing queries. For other DML statements (other than SELECT), the stages are parse and execute. The parse step compiles the statement in the shared pool, checks the user’s privileges, and arrives at an execution plan. In the execute step, the parsed statement is executed. During the fetch step, data is returned to the user.
Exam Essentials

Identify the three types of database files that constitute the database. Briefly describe the purpose and key differences between control files, data files, and redo log files. Describe other essential files that are needed to start up the database but are not considered a part of the database.

Explain and categorize the SGA memory structures. Identify the SGA areas along with the sub-components contained within each of these areas. Be able to place a database-related object (for example, a SQL statement or a data file block) into its appropriate SGA area.

Understand the steps involved in processing a SQL statement. Understand which server components do and do not participate in processing SQL, and understand the steps required when a DML statement is executed.

Enumerate and explain the primary (required) background processes. Identify each process with its primary purpose, its interaction with other background processes, and when the background process is active.

Understand the purpose of the PGA. List the components of the PGA, as well as understand the conditions under which PGA components are stored in the SGA.

Identify the initialization parameters related to the SGA and buffer pool sizing. Understand which parameters can be dynamically altered and which parameters are optional.
Review Questions

1. Which component is not part of the Oracle instance?

A. System Global Area
B. Process monitor
C. Control file
D. Shared pool

2. Which background process and associated database component guarantee that committed data is saved even when the changes have not been recorded in the data files?

A. DBWn and database buffer cache
B. LGWR and online redo log file
C. CKPT and control file
D. DBWn and archived redo log file

3. What is the maximum number of database writer processes allowed in an Oracle instance?

A. 1
B. 10
C. 256
D. Limit specified by an operating system parameter

4. Which background process is not started by default when you start up the Oracle instance?

A. DBWn
B. LGWR
C. CKPT
D. ARCn

5. Which of the following best describes a Real Application Cluster configuration?

A. One database, multiple instances
B. One instance, multiple databases
C. Multiple databases on multiple servers
D. Shared server process takes care of multiple user processes

6. Choose the correct hierarchy, from largest to smallest, from this list of logical database structures.

A. Database, tablespace, extent, segment, block
B. Database, tablespace, segment, extent, block
C. Database, segment, tablespace, extent, block
D. Database, extent, tablespace, segment, block

7. Which component of the SGA contains the parsed SQL code?

A. Buffer cache
B. Dictionary cache
C. Library cache
D. Parse cache

8. Julie, one of the database analysts, is complaining that her queries are taking longer and longer to complete, although they seem to produce the correct results. The DBA suspects that the buffer cache is not sized correctly and is causing delays due to data blocks not being available in memory. Which initialization parameter should the DBA use to monitor the usage of the buffer cache?

A. BUFFER_POOL_ADVICE
B. DB_CACHE_ADVICE
C. DB_CACHE_SIZE
D. SHARED_POOL_SIZE

9. Which background process is responsible for writing the dirty buffers to the database files?

A. DBWn
B. SMON
C. LGWR
D. CKPT
E. PMON

10. Which component in the SGA has the dictionary cache?

A. Buffer cache
B. Library cache
C. Shared pool
D. Program Global Area
E. Large pool

11. When a server process is terminated abnormally, which background process is responsible for releasing the locks held by the user?

A. DBWn
B. LGWR
C. SMON
D. PMON

12. What is a dirty buffer?

A. Data buffer that is being accessed
B. Data buffer that is changed but is not written to the disk
C. Data buffer that is free
D. Data buffer that is changed and written to the disk

13. If you are updating one row in a table using the ROWID in the WHERE clause (assume that the row is not already in the buffer cache), what will be the minimum amount of information read to the database buffer cache?

A. The entire table is copied to the database buffer cache.
B. The extent is copied to the database buffer cache.
C. The block is copied to the database buffer cache.
D. The row is copied to the database buffer cache.

14. What happens next when a server process is not able to find enough free buffers to copy the blocks from disk?

A. Signals the CKPT process to clean up the dirty buffers
B. Signals the SMON process to clean up the dirty buffers
C. Signals the CKPT process to initiate a checkpoint
D. Signals the DBWn process to write the dirty buffers to disk

15. Which memory structures are shared? Choose two.

A. Sort area
B. Program Global Area
C. Library cache
D. Large pool

16. Which of the following initialization parameters does NOT determine the size of the buffer cache?

A. DB_KEEP_CACHE_SIZE
B. DB_CACHE_SIZE
C. DB_BLOCK_SIZE
D. DB_RECYCLE_CACHE_SIZE

17. Which memory structure records all database changes made to the instance?

A. Database buffer
B. Dictionary cache
C. Redo log buffer
D. Library cache

18. What is the minimum number of online redo log files required in a database?

A. One
B. Two
C. Four
D. Zero

19. When are the system change numbers assigned?

A. When a transaction begins
B. When a transaction ends abnormally
C. When a checkpoint occurs
D. When a COMMIT is issued

20. Which of the following is not part of the database buffer pool?

A. KEEP
B. RECYCLE
C. LIBRARY
D. DEFAULT
Answers to Review Questions

1. C. The Oracle instance consists of memory structures and background processes. The Oracle database consists of the physical components such as data files, redo log files, and the control file. The System Global Area and shared pool are memory structures. The process monitor is a background process.

2. B. The LGWR process writes the redo log buffer entries when a COMMIT occurs. The redo log buffer holds information on the changes made to the database. The DBWn process writes dirty buffers to the data file, but it is independent of the COMMIT. The dirty buffers can be written to the disk before or after a COMMIT. Writing the committed changes to the online redo log file ensures that the changes are never lost in case of a failure.

3. B. By default, every Oracle instance has one database writer process, DBW0. Additional processes can be started by setting the initialization parameter DB_WRITER_PROCESSES (DBW1 through DBW9).

4. D. ARCn is the archiver process, which is started only when the LOG_ARCHIVE_START initialization parameter is set to TRUE. DBWn, LGWR, CKPT, SMON, and PMON are the default processes associated with all instances.

5. A. In a Real Application Cluster configuration, multiple instances (known as nodes) can mount one database. One instance can be associated with only one database. In a multithreaded configuration, one shared server process takes requests from multiple user processes.

6. B. The first level of logical database structure is the tablespace. A tablespace may have segments, segments have one or more extents, and extents have one or more contiguous blocks.

7. C. The library cache contains the parsed SQL code. If a query is executed again before it is aged out of the library cache, Oracle will use the parsed code and execution plan from the library cache. The buffer cache has data blocks that are cached. The dictionary cache caches data dictionary information. There is no SGA component named parse cache.

8. B. The parameter DB_CACHE_ADVICE can be set to ON to enable cache usage monitoring. DB_CACHE_SIZE and SHARED_POOL_SIZE are sizing parameters for SGA structures; the parameter BUFFER_POOL_ADVICE does not exist.

9. A. The DBWn process writes the dirty buffers to the data files under two circumstances: when a checkpoint occurs or when a server process cannot find a free buffer after scanning a threshold number of buffers.

10. C. The shared pool has three components: the library cache, the dictionary cache, and the control structures.

11. D. PMON, or the process monitor, is responsible for cleaning up failed user processes. It reclaims all resources held by the user and releases all locks on tables and rows held by the user.

12. B. Dirty buffers are the buffer blocks that need to be written to the data files. The data in these buffers has changed and is not yet written to the disk. A block waiting to be written to disk is on the dirty list and cannot be overwritten.

13. C. The block is the smallest unit that can be copied to the buffer cache.

14. D. To reduce disk I/O contention, the DBWn process does not write the changed buffers immediately to the disk. They are written only when the dirty buffers reach a threshold, when there are not enough free buffers available, or when a checkpoint occurs.

15. C and D. The sort area is allocated to the server process as part of the PGA. The PGA is allocated when the server process starts and is deallocated when the server process completes. The library cache and the large pool are part of the SGA and are shared. The SGA is created when the instance starts.

16. C. The parameter DB_BLOCK_SIZE does not change the size of the buffer cache. It changes only the size of each Oracle block written to and read from disk.

17. C. The redo log buffer keeps track of all changes made to the database before writing them to the redo log files. The database buffer contains the data blocks that are read from the data files and are most recently used. The dictionary cache holds the most recently used data dictionary information. The library cache holds the parsed SQL statements and PL/SQL code.

18. B. There should be at least two redo log files in a database. The LGWR process writes to the redo log files in a circular manner, so there should be at least two files.

19. D. A system change number (SCN) is assigned when the transaction is committed. The SCN is a unique number acting as an internal timestamp, used for recovery and read-consistent queries.

20. C. There is no database buffer cache named LIBRARY. The DBA can configure multiple buffer pools by using the appropriate initialization parameters for performance improvements. The KEEP buffer pool retains the data blocks in memory; they are not aged out. The RECYCLE buffer pool removes the buffers from memory as soon as they are not needed. The DEFAULT buffer pool contains the blocks that are not assigned to the other pools.

21. E. All of these SGA components are allocated in granule units. A minimum of three granules are allocated for the SGA at instance startup: one for the fixed portion of the SGA, one for the database buffer cache, and one for the shared pool.
Installing and Managing Oracle

ORACLE9i DBA FUNDAMENTALS I EXAM OBJECTIVES OFFERED IN THIS CHAPTER:

Identify common database administrative tools available to a DBA

Identify the features of the Oracle Universal Installer

Explain the benefits of Optimal Flexible Architecture (OFA)

Set up password file authentication

List the main components of the Oracle Enterprise Manager and their uses

Create and manage initialization parameter files

Configure Oracle Managed Files (OMF)

Start up and shut down an instance

Monitor the use of diagnostic files
Exam objectives are subject to change at any time without prior notice and at Oracle’s sole discretion. Please visit Oracle’s Training and Certification website (http://www.oracle.com/ education/certification/) for the most current exam objectives listing.
Oracle9i uses Java-based tools to install Oracle software and create databases. Java gives the installer the same look and feel across all platforms. The Oracle Enterprise Manager utility comes with many user-friendly database administration tools. In this chapter, you will be introduced to the features of the Oracle Universal Installer and Enterprise Manager utilities. You will also learn to use parameters and to start up and shut down an Oracle instance.
The Oracle Universal Installer
To install Oracle9i, you use the Oracle Universal Installer (OUI), a GUI-based Java tool that has the same look and functionality across all platforms. On Windows platforms, you invoke the installer by running the executable setup.exe. On Unix platforms, you invoke the installer by running the script runInstaller. Figure 3.1 shows the installation location screen when you invoke the OUI. You can install new Oracle9i products or remove installed Oracle9i products by using the OUI.
Oracle Objective
Identify the features of the Oracle Universal Installer
The OUI accepts minimal user inputs for a typical installation, and you can choose the desired products by using the custom installation. OUI supports multiple Oracle homes in case you need to install different versions of Oracle under different Oracle homes. OUI resolves the dependencies among various Oracle products automatically. OUI allows silent installation, which is especially useful for workstations that do not support a graphical interface. You can capture the response to each installer question in a response file and use the file for future installations. Installer activities and result statuses are logged into files that you can review after installation. The installer can start other Oracle tools such as the Database Configuration Assistant to create a new database or the Oracle Net Configuration Assistant to configure the listener for the database.
Oracle Enterprise Manager (OEM) is a graphical system management tool used to manage components of Oracle and to administer the databases from one session. OEM comprises a console and management utilities, a repository in which to save all the metadata information, and the actual nodes (databases and other components) that need to be managed. Figure 3.2 shows the three-tier architecture of OEM.
Oracle Objective

List the main components of the Oracle Enterprise Manager and their uses

FIGURE 3.2 The three-tier architecture of OEM
The Console

The console is a client GUI tool that provides a single point of access to all the databases, network, and management tools. The console consists of two panes, which can be seen in Figure 3.3.

FIGURE 3.3 The OEM console
The Navigator pane displays a hierarchical view of all the databases, listeners, nodes, and other services in the network and their relationships. You can drill down the branches and see the database users, roles, groups, events, and so on.

The Group branch enables you to graphically view and construct logical administrative groups of objects for more efficient management and administration. You can group objects together based on any criteria, such as department, geographical location, or function. The Group branch is especially useful for managing environments that include large numbers of databases and other services or for seeing the relative location of managed services. To create a group, you first name and register a group in the Group branch, and then you drag objects that you want to manage as a unit from the Navigator branch and drop them into the Group branch. Groups can consist of similar or dissimilar targets; for example, a group might have two database servers, a management server, and an application server. You can display these groups on top of a graphical image of your choice, such as a geographical map, a building blueprint, or an organization chart.

The Jobs branch is the user interface to the Job Scheduling System, which you can use to automate repetitive tasks at specified times on one or multiple databases. A job consists of one or more tasks. You can build dependencies within the tasks, and you can specify that certain tasks execute as a result of the outcome of another task.

The Events branch is the user interface to the Event Management System, which monitors the network for problem events. An event is made up of one or more tests that an Intelligent Agent checks against one or more of its managed services in monitoring for critical occurrences. When the Intelligent Agent detects a problem on the services, it notifies the console and the appropriate DBA based on the permissions set up. The Intelligent Agents are local to a database node and are responsible for monitoring the databases and other services on the database node.
The Management Server and Common Services

The Management Server is the middle tier between the console GUI and managed nodes. It processes all system management tasks and distributes these tasks to Intelligent Agents on the nodes. You can use multiple Management Servers to balance workload and to improve performance. The common services are the tools and systems that help the Management Server. The common services consist of the following:

Repository: The repository is a set of tables that store the information about the managed nodes and the Oracle management tools. You can create this data store in any Oracle database, but preferably on a node that does not contain a critical Oracle instance to be monitored.

Service discovery: OEM discovers all databases and listeners on a node, once the node is identified. The Intelligent Agent finds the services and reports them back to the Oracle Management Server. These discovered services are displayed in the Navigator pane of the console.

Job Scheduling System: Using the Job Scheduling System, you can schedule and execute routine or repetitive administrative tasks. You can set up the system to notify you upon completion, failure, or success of a job through e-mail or a pager.

Event Management System: The Event Management System in the OEM monitors resource problems, loss of service, shortage of disk space, or any other problem detected on the node. You can set these occurrences up as events, which the Intelligent Agent tests periodically to monitor.

Notification system: You can specify that notification about the status of jobs or events be sent to the console, via e-mail, or to a pager. You can select the notification procedures when you set up the job or event.

Paging/e-mail blackout: This feature prevents the administrator from receiving multiple e-mails or pages when a service is brought down for maintenance or for a scheduled period of downtime.

Security: Security parameters in OEM are defined for services, objects, and administrators. A super administrator is someone who creates and defines the permissions of all the repository's administrators. The super administrator can access any object and control its security parameters, including objects owned by other administrators.
DBA Tools

The DBA Tools are integrated with the OEM and help administrators with their daily routine tasks. These tools provide complete database administration using GUI tools rather than SQL*Plus. You access the tools in the left pane of the OEM console under each database instance.
Oracle Objective
Identify common database administrative tools available to a DBA
Using the DBA Tools, you can administer the following:

Instance: You can start up and shut down an instance; modify parameters; view and change memory allocations, redo logs, and archival status; view user sessions and their SQL; see the execution plan of SQL; and manage resource allocations and long-running sessions.

Schema: You can create, alter, or drop any schema object, including advanced queues and Java-stored procedures. You can clone any object.

Security: You can change the security privileges for users and roles, and you can create and alter users, roles, and profiles.

Storage: You can manage tablespaces, data files, undo segments, redo log groups, and archive logs.

SQL*Plus Worksheet: You can issue SQL statements against any database, in a graphical environment that is much easier to use than the command-line version of SQL*Plus.
Optimal Flexible Architecture (OFA)
The Optimal Flexible Architecture (OFA) is a set of guidelines specified by Oracle to better manage the Oracle software and the database. OFA enforces the following:
A consistent naming convention
Separating Oracle software from the database
Separating the Oracle software versions
Separating the data files belonging to different databases
Separating parameter files and database creation scripts from the database files and software
Separating trace files, log files, and dump files from the database and software
Oracle Objective

Explain the benefits of Optimal Flexible Architecture (OFA)
Figure 3.4 shows the software installation and database files on a Windows 2000 platform conforming to the OFA. Here the ORACLE_BASE directory is G:\oracle, which has four branches—admin, ora90, ora91,
and oradata. The ora90 and ora91 folders are software installations. If you separate the versions, upgrading the database is easy. The admin and oradata folders have subfolders for each database on the server. In Figure 3.4, oradb01 and oradb02 are two databases. Under the admin branch, for each database, are subfolders for administrative scripts (adhoc), background dump files and the alert log file (bdump), core dump files (cdump), database creation scripts (create), export files (exp), parameter files (pfile), and a user dump folder (udump). The oradata folder has the data files, redo log files, and control files belonging to the database, separated at the database level by using subfolders.

FIGURE 3.4 OFA directory structures
For performance reasons, the OFA architecture can be slightly extended to include multiple disks and to spread out the data files. Figure 3.5 shows such a layout, in which oradata01, oradata02, oradata03, oradata04, and so on can be on separate disks and can hold separate types of files (data files separate from redo log files) or different tablespaces (data tablespace separate from the index tablespace, and separate from the system tablespace).
In this section, we will discuss the privileges and authentication methods available when using the administration tools described in the previous section. You can allow administrators to connect to the database by using operating system authentication or password file authentication. For remote or local database administration, you can use either method, but you can use the operating system authentication method with remote administration only if you have a secured network connection. To use remote authentication of users through Remote Dial-In User Service (RADIUS, a standard lightweight protocol used for user authentication and authorization) with Oracle, you need Oracle9i Enterprise Edition with the Advanced Security option.
When you create a database, Oracle automatically creates two administrator login IDs, SYS and SYSTEM. The initial password for SYS is CHANGE_ON_INSTALL, and the initial password for SYSTEM is MANAGER. For security reasons, change these passwords as soon as you finish creating the database. Oracle recommends that you create at least one additional user to do the DBA tasks, rather than using the SYS or SYSTEM account. A predefined role, DBA, is created with all databases and has all database administrative privileges.
Operating System Authentication

Oracle can verify your operating system privileges and connect you to the database to perform database operations. To connect to the database by using operating system authentication, you must be a member of the OSDBA or OSOPER operating system group. On most Unix systems, this is the dba group. You can specify the name of the OSDBA and OSOPER groups when you install Oracle by using the OUI. OSDBA and OSOPER are not Oracle privileges or roles that you grant through the Oracle database; the operating system manages them.

When you connect to the database by using the OSOPER privilege (or SYSOPER privilege), you can perform STARTUP, SHUTDOWN, ALTER DATABASE [OPEN/MOUNT], ALTER DATABASE BACKUP, ARCHIVE LOG, and RECOVER, and SYSOPER includes the RESTRICTED SESSION privilege. When you connect to the database by using the OSDBA privilege (or SYSDBA privilege), you have all system privileges with ADMIN OPTION, the OSOPER role, CREATE DATABASE, and time-based recovery.

To use operating system authentication, set the REMOTE_LOGIN_PASSWORDFILE parameter to NONE, which is the default. Operating system authenticated users can connect to the database by using CONNECT / AS SYSDBA or CONNECT / AS SYSOPER. You do not need a user created in the Oracle database to use operating system authentication.

Here is an example from a Windows platform, making a local operating system authentication connection to the database to perform administration operations:

Microsoft(R) Windows DOS
(C)Copyright Microsoft Corp 1990-1999.

E:\>sqlplus /nolog

SQL*Plus: Release 9.0.1.0.1 - Production on Tue Oct 2 20:53:08 2001

(c) Copyright 2001 Oracle Corporation. All rights reserved.

SQL> connect / as sysdba
Connected.
SQL> archive log list
Database log mode              No Archive Mode
Automatic archival             Disabled
Archive destination            H:\Oracle9i\RDBMS
Oldest online log sequence     0
Current log sequence           1
SQL>
Password File Authentication When using password file authentication, the user connects to the database by specifying a username and a password. The user needs to have been granted the appropriate privileges in the database.
Oracle Objective
Set up password file authentication
To use password file authentication, follow these steps:

1. Using the ORAPWD utility, create a password file with the SYS password. When you change the password in the database, the password in this file is automatically updated.

2. Set the REMOTE_LOGIN_PASSWORDFILE parameter.

3. Grant the appropriate users SYSDBA or SYSOPER privilege. When you grant this privilege, these users are added to the password file.
When you invoke the ORAPWD utility without any parameters, the syntax for creating the password file is displayed:

$ orapwd
Usage: orapwd file=<fname> password=<password> entries=<users>
  where
    file - name of password file (mand),
    password - password for SYS and INTERNAL (mand),
    entries - maximum number of distinct DBAs and OPERs (opt),
  There are no spaces around the equal-to (=) character.

The FILE parameter specifies the name of the password file. Normally the file is created in the dbs directory under ORACLE_HOME (the directory where the Oracle software is installed). The PASSWORD parameter specifies the SYS password, and ENTRIES specifies the maximum number of users you will be assigning the SYSOPER or SYSDBA privileges. If you exceed this limit, you will need to re-create the password file. ENTRIES is an optional parameter.

You can set the parameter REMOTE_LOGIN_PASSWORDFILE to either EXCLUSIVE or SHARED. If you set the parameter to EXCLUSIVE, the password file can be used for only one database, and you can add users other than SYS and INTERNAL to the password file. If you set the parameter to SHARED, the password file is shared among multiple databases, but you cannot add any user other than SYS or INTERNAL to the password file.

When you connect to the database by using the SYSDBA privilege, you are connected to the SYS schema, and when you connect by using the SYSOPER privilege, you are connected to the PUBLIC schema.
The view V$PWFILE_USERS has the information on all users granted either SYSDBA or SYSOPER privileges. The view has the username and a value of TRUE in column SYSDBA if the SYSDBA privilege is granted, or it has a value of TRUE in column SYSOPER if the SYSOPER privilege is granted.
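Putting these steps together, a minimal sketch follows; the password file name, password, and user name are illustrative:

$ orapwd file=$ORACLE_HOME/dbs/orapwORADB01 password=secret entries=5
SQL> GRANT SYSDBA TO jane_dba;
SQL> SELECT * FROM v$pwfile_users;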
To start or stop an Oracle instance, you must have the SYSDBA or SYSOPER privilege. To start up a database, you can use either the Instance branch of OEM or SQL*Plus to connect with a user account that has SYSDBA or SYSOPER privileges. The database start-up is done in three stages. First, you start an instance associated with the database, then the instance mounts the database, and finally you open the database for normal use. The examples discussed in this section use SQL*Plus to start up the database.
Oracle Objective
Start up and shut down an instance
The instance can start, but not mount, the database by using the STARTUP NOMOUNT command. Normally you use this database state for creating a new database or for creating new control files. When you start the instance, Oracle allocates the SGA and starts the background processes.

The instance can start and mount the database without opening it by using the STARTUP MOUNT command. This state of the database is used mainly for performing specific maintenance operations such as renaming data files, enabling or disabling archive logging, renaming, adding, or dropping redo log files, or for recovering a full database. When you mount the database, Oracle opens the control files associated with the database. Each control file contains the names and locations of database files and online redo log files.

You use STARTUP OPEN or STARTUP to start the instance, mount a database, and open the database for normal operations. When you open the database, Oracle opens the online data files and online redo log files. If any of the files are not available or are not in synch with the control file, Oracle returns an error. You may have to recover one of the files before you can open the database.

Issuing the ALTER DATABASE MOUNT command when the database is not mounted will mount the database in a previously started instance. ALTER DATABASE OPEN will open a closed database. You can open a database in read-only mode by using the ALTER DATABASE OPEN READ ONLY command. When you start the database in read-only mode, no redo information is generated because you cannot modify any data.

The following example shows how to start a database by using the SQL*Plus utility.
E:\>sqlplus /nolog

SQL*Plus: Release 9.0.1.0.1 - Production on Tue Oct 2 21:05:53 2001

(c) Copyright 2001 Oracle Corporation. All rights reserved.

SQL> connect / as sysdba
Connected to an idle instance.
SQL> startup
ORACLE instance started.

Total System Global Area  118255568 bytes
Fixed Size                   282576 bytes
Variable Size              83886080 bytes
Database Buffers           33554432 bytes
Redo Buffers                 532480 bytes
Database mounted.
Database opened.
SQL> exit
Disconnected from Oracle9i Enterprise Edition Release 9.0.1.1.1 - Production
With the Partitioning option
JServer Release 9.0.1.1.1 - Production

E:\>

Sometimes you may have problems starting up an instance. In those cases, you can use STARTUP FORCE to start a database that will not shut down or start up gracefully. Use this option only if you could not shut down the database properly; STARTUP FORCE shuts down the instance if it is already running and then restarts it.

You can restrict access to the database by using the command STARTUP RESTRICT to start the database in restricted mode. Only users with the RESTRICTED SESSION system privilege can connect to the database. You can also use ALTER SYSTEM [ENABLE/DISABLE] RESTRICTED SESSION to enable or disable restricted access after opening the database. Put the database in restricted mode if you want to make any major structure modifications or to get a consistent export.
You need to have the ALTER SYSTEM privilege to change the database availability by using the ALTER SYSTEM [ENABLE/DISABLE] RESTRICTED SESSION command.
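A brief sketch of restricting and then reopening access, using the standard commands discussed above:

SQL> STARTUP RESTRICT
SQL> -- perform maintenance or take a consistent export here
SQL> ALTER SYSTEM DISABLE RESTRICTED SESSION;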
You can start an Oracle instance with one of two types of parameter files: a text-based PFILE or a binary SPFILE. The SPFILE is new in Oracle9i; it not only eases the administration of parameter files but also gives the DBA more flexibility in specifying the persistence of parameter values.
When an instance is started in the NOMOUNT state, you can access only the views that read data from the SGA. V$PARAMETER, V$SGA, V$OPTION, V$PROCESS, V$SESSION, V$VERSION, V$INSTANCE, and so on are dictionary views that read from the SGA. When the database is mounted, information can be read from the control file. V$THREAD, V$CONTROLFILE, V$DATABASE, V$DATAFILE, V$DATAFILE_HEADER, V$LOGFILE, and so on all read data from the control file.
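You can walk through the three start-up stages explicitly; a minimal sketch using standard commands and views:

SQL> STARTUP NOMOUNT                -- SGA allocated, background processes started
SQL> SELECT status FROM v$instance; -- returns STARTED; only SGA-based views work
SQL> ALTER DATABASE MOUNT;          -- control files opened
SQL> SELECT name FROM v$datafile;   -- control file views are now readable
SQL> ALTER DATABASE OPEN;           -- data files and online redo logs opened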
The Parameter File: PFILE

Oracle uses a parameter file when starting up the database, either a text-based PFILE or a binary SPFILE (discussed in the next section). The PFILE is a text file containing the parameters and their values for configuring the database and instance. The default location and name of the file depend on the operating system; on Unix platforms, by default Oracle looks for the parameter file by the name init<SID>.ora (SID is the name of the instance) under the $ORACLE_HOME/dbs directory. You can specify the parameter file location and name when starting up the database by using the PFILE option of the STARTUP command. The following command starts up the database in restricted mode by using the parameter file initORADB01.ora under the /oracle/admin/ORADB01/pfile directory:

STARTUP PFILE=/oracle/admin/ORADB01/pfile/initORADB01.ora RESTRICT
The parameter files tell Oracle the following when starting up an instance:
The name of the database and the location of the control files
The location of the archived log files and whether to start the archival process
The size of the SGA
The location of the dump and trace files
The parameters that set limits and affect capacity
If you do not specify a parameter in the parameter file, Oracle assumes a default value for the parameter. You can structure a custom parameter file liberally, but certain syntax rules are enforced for the files. The syntax rules are:
Precede comment lines with a pound sign (#).
All parameters are optional. When parameters are omitted, defaults will be applied.
Parameters and their values are generally not case sensitive. Parameter values that name files can be case sensitive if the host operating system’s filenames are case sensitive.
You can list parameters in any order.
Parameters that accept multiple values, such as the CONTROL_FILES parameter, can list the values in parentheses delimited by commas or with no parentheses delimited by spaces.
The continuation character is the backslash character (\). Use the backslash when a parameter’s list of values must be continued on a separate line.
Enclose parameter values that contain spaces in double quotes.
Use the equal sign (=) to delimit the parameter name and its associated value.
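A minimal, hypothetical initORADB01.ora illustrating these rules; every name and size below is an example only:

# initORADB01.ora - illustrative parameter file
db_name              = ORADB01
control_files        = (/oradata01/ORADB01/ctrl01.ctl, /oradata02/ORADB01/ctrl02.ctl)
db_block_size        = 8192
db_cache_size        = 32M
shared_pool_size     = 64M
background_dump_dest = /oracle/admin/ORADB01/bdump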
The Parameter File: SPFILE

The other type of parameter file that Oracle9i supports is a persistent parameter file, otherwise known as an SPFILE. This file is located in the same directory as a PFILE, in the $ORACLE_HOME/dbs directory.
The SPFILE is a binary file and is not meant to be edited by a standard text editor; it is created from a standard PFILE and then modified by the ALTER SYSTEM command thereafter. In the case of an SPFILE, the ALTER SYSTEM command can change the value of an initialization parameter for the life of the instance, across a shutdown and restart, or both. To initially create an SPFILE, a PFILE must exist first. The following example creates an SPFILE in the default location from an initSID.ora PFILE that resides in the same default location:

SQL> CREATE SPFILE FROM PFILE;

The next time the instance is restarted, the SPFILE will be used to initialize the database.
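Once the instance is running from an SPFILE, the SCOPE clause of ALTER SYSTEM controls where a change is recorded; the parameter and size below are illustrative:

SQL> ALTER SYSTEM SET shared_pool_size = 64M SCOPE=MEMORY; -- current instance only
SQL> ALTER SYSTEM SET shared_pool_size = 64M SCOPE=SPFILE; -- takes effect at the next start-up
SQL> ALTER SYSTEM SET shared_pool_size = 64M SCOPE=BOTH;   -- now and after restarts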
Get Parameter Values

You can get the value of a parameter by using the SHOW PARAMETERS command. When this command is used without any arguments, Oracle displays all the parameters and their values in alphabetic order. To get the value for a specific parameter, use the SHOW PARAMETERS command with the parameter name as the argument. For example, to view the value of the DB_BLOCK_SIZE parameter, use the following:

SQL> show parameters db_block_size

NAME                           TYPE      VALUE
------------------------------ --------- -------------------------
db_block_size                  integer   8192
SQL>

The argument in the SHOW PARAMETERS command is a filter; you can specify any string, and Oracle displays the parameters that match the argument string anywhere in the parameter name. The argument is not case sensitive. In the following example, all parameters with OS embedded somewhere in the name are shown:

SQL> show parameters OS

NAME                           TYPE      VALUE
------------------------------ --------- -------------------------
optimizer_index_cost_adj       integer   100
os_authent_prefix              string
os_roles                       boolean   FALSE
remote_os_authent              boolean   FALSE
remote_os_roles                boolean   FALSE
timed_os_statistics            integer   0
SQL>

You can also get the parameter values by querying the V$PARAMETER view. V$PARAMETER shows the parameter values for the current session. V$SYSTEM_PARAMETER has the same structure as the V$PARAMETER view, except that it shows the system-wide parameter values. The columns in the V$PARAMETER view are shown in Table 3.1.
TABLE 3.1    V$PARAMETER View

Column Name        Data Type        Purpose

NUM                NUMBER           Parameter number.

NAME               VARCHAR2(64)     Parameter name.

TYPE               NUMBER           Type of parameter: 1—Boolean, 2—string, 3—integer, 4—file.

VALUE              VARCHAR2(512)    Value of the parameter.

ISDEFAULT          VARCHAR2(9)      Whether the parameter value is the Oracle default. FALSE indicates that the parameter was changed during start-up.

ISSES_MODIFIABLE   VARCHAR2(5)      TRUE indicates that the parameter can be changed by using an ALTER SESSION command.

ISSYS_MODIFIABLE   VARCHAR2(9)      FALSE indicates that the parameter cannot be changed by using the ALTER SYSTEM command. IMMEDIATE indicates that the parameter can be changed, and DEFERRED indicates that the parameter change takes effect only in the next session.

ISMODIFIED         VARCHAR2(10)     MODIFIED indicates that the parameter was changed by using ALTER SESSION. SYS_MODIFIED indicates that the parameter was changed by using ALTER SYSTEM.

ISADJUSTED         VARCHAR2(5)      TRUE indicates that Oracle adjusted the value of the parameter to be a more suitable value.

DESCRIPTION        VARCHAR2(64)     A brief description of the purpose of the parameter.

UPDATE_COMMENT     VARCHAR2(255)    Set if a comment has been supplied by the DBA for this parameter.
To get the parameter names and their values for the parameter names that start with OS, perform this query:

SQL> col name format a30
SQL> col value format a25
SQL> SELECT name, value
  2  FROM v$parameter
  3  WHERE name like 'os%';

NAME                           VALUE
------------------------------ -------------------------
os_roles                       FALSE
os_authent_prefix
SQL>

You can also use the GUI tool in OEM to see the values of parameters. The description shown in this tool is more elaborate than the description you would see in the V$PARAMETER view.
Set Parameter Values

When you start up the instance, Oracle reads the parameter file and sets the value for each parameter. For the parameters that are not specified in the parameter file, Oracle assigns a default value. The parameters that were modified at instance start-up can be displayed by querying the V$PARAMETER view for a FALSE value in the ISDEFAULT column:

SQL> SELECT name, value
  2  FROM v$parameter
  3  WHERE isdefault = 'FALSE';

Certain parameters can be changed dynamically by using the ALTER SESSION or ALTER SYSTEM command. To identify such parameters, query the view V$PARAMETER.

You can change the value of a parameter system-wide by using the ALTER SYSTEM command. A value of DEFERRED or IMMEDIATE in the ISSYS_MODIFIABLE column shows that the parameter can be dynamically changed by using ALTER SYSTEM. DEFERRED indicates that the change you make does not take effect until a new session is started; the existing sessions continue to use the current value. IMMEDIATE indicates that as soon as you change the value of the parameter, it is available to all sessions in the instance. (A session is a job or task that Oracle manages. When you log in to the database by using SQL*Plus or any client tool, you start a session. Sessions are discussed in the next section.) Here is an example of modifying a parameter by using ALTER SYSTEM:

SQL> ALTER SYSTEM SET log_archive_dest = '/oracle/archive/DB01';

The following example sets the TIMED_STATISTICS parameter to TRUE for all future sessions:

SQL> ALTER SYSTEM SET timed_statistics = TRUE DEFERRED;

A value of TRUE in the ISSES_MODIFIABLE column indicates that the parameter can be changed by using ALTER SESSION. When you change a parameter by using ALTER SESSION, the value is changed only for that session. When you start the next session, the parameter reverts to the original value (the Oracle default, the value set in the parameter file, or the value set by ALTER SYSTEM). Here is an example of modifying a parameter by using ALTER SESSION:

SQL> ALTER SESSION SET nls_date_format = 'MM-DD-YYYY';
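Before changing a parameter dynamically, you can check how it may be changed; for example:

SQL> SELECT name, isses_modifiable, issys_modifiable
  2  FROM v$parameter
  3  WHERE name = 'timed_statistics';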
Using an SPFILE, the DBA has more flexibility as to when a parameter change takes effect: in the current instance only, only after the instance is restarted, or both immediately and after the instance is restarted. The following example changes the value of MAX_DUMP_FILE_SIZE; the new value will take effect only after the instance is shut down and restarted:

SQL> ALTER SYSTEM SET MAX_DUMP_FILE_SIZE=20000 SCOPE=SPFILE;

The other two options for the SCOPE clause are MEMORY (for the life of the current instance only) and BOTH (for the current instance and across shutdown and restart). The default is BOTH.
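For comparison, the same change with the other two scopes might look like this (an SPFILE must be in use for SCOPE=BOTH to succeed):

SQL> ALTER SYSTEM SET MAX_DUMP_FILE_SIZE=20000 SCOPE=MEMORY;
SQL> ALTER SYSTEM SET MAX_DUMP_FILE_SIZE=20000 SCOPE=BOTH;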
Managing Sessions
Oracle starts a session when a database connection is made. The session is available as long as the user is connected to the database. When a session is started, Oracle allocates a session ID to that session.

To display the user sessions connected to a database, query the view V$SESSION. In V$SESSION, the session identifier (SID) and the serial number (SERIAL#) uniquely identify each session. The serial number guarantees that session-level commands are applied to the correct session objects if the session ends and another session begins with the same session ID.

The V$SESSION view contains a lot of information about a session. The username, machine name, program name, status, and login time are a few of the useful pieces of information in this view. For example, if you need to know which users are connected to the database and the program they are running, execute the following query:

SQL> SELECT username, program
  2  FROM v$session;

Sometimes it may be necessary to terminate certain user sessions. You can terminate a user session by using the ALTER SYSTEM command. The SID and SERIAL# from the V$SESSION view are required to kill the session. For example, to kill a session created by user JOHN, you do the following:

SQL> SELECT username, sid, serial#, status
  2  FROM v$session
  3  WHERE username = 'JOHN';
USERNAME                 SID    SERIAL# STATUS
------------------- -------- ---------- --------
JOHN                       9          3 INACTIVE

SQL> ALTER SYSTEM KILL SESSION '9, 3';

System altered.

SQL> SELECT username, sid, serial#, status
  2  FROM v$session
  3  WHERE username = 'JOHN';

USERNAME                 SID    SERIAL# STATUS
------------------- -------- ---------- --------
JOHN                       9          3 KILLED

SQL>

When you kill a session, Oracle first terminates the session to prevent it from executing any more SQL statements. If any SQL statement is in progress when the session is terminated, the statement is terminated, and all changes are rolled back. The locks and other resources used by the session are also released.

If you kill an INACTIVE session, Oracle terminates the session and marks the status in the V$SESSION view as KILLED. When the user subsequently tries to use the session, an error is returned to the user, and the session information is removed from V$SESSION. If you kill an ACTIVE session, Oracle terminates the session and immediately issues an error message to the user that the session is killed. If Oracle cannot release the resources held by the session within 60 seconds, Oracle returns a message to the user that the session has been marked for kill. The status in the V$SESSION view will again show as KILLED.

If you want the user to complete the current transaction and then terminate their session, you can use the DISCONNECT SESSION option of the ALTER SYSTEM command. If the session has no pending or active transactions, this command has the same effect as KILL SESSION. Here is an example:

ALTER SYSTEM DISCONNECT SESSION '9,3' POST_TRANSACTION;
You can also use the IMMEDIATE clause with KILL SESSION or DISCONNECT SESSION to roll back ongoing transactions, release all session locks, recover the entire session state, and return control to you immediately. Here are some examples:

ALTER SYSTEM DISCONNECT SESSION '9,3' IMMEDIATE;
ALTER SYSTEM KILL SESSION '9,3' IMMEDIATE;
Shutting Down the Oracle Instance
Similar to the stages in starting up a database, there are three stages to shutting down a database. First, you close the database, then the instance dismounts the database, and finally you shut down the instance.
Oracle Objective
Start up and shut down an instance
When closing the database, Oracle writes the redo buffer to the redo log files and the changed data in the database buffer cache to the data files, and then closes the data files and redo log files. The control file remains open, but the database is not available for normal operations.

After closing the database, the instance dismounts the database. The control file is closed at this time. The memory allocated and the background processes still remain.

The final stage is the instance shutdown. The SGA is removed from memory and the background processes are terminated when the instance is shut down.

To initiate a database shutdown, you can use the SHUTDOWN command in SQL*Plus or use the Instance branch of the OEM GUI tool. You need to connect to the database by using a dedicated server process with an account that has SYSDBA privileges to shut down the database. Once the shutdown process is initiated, no new user sessions are allowed to connect to the database.

You can shut down the database by using the SHUTDOWN command with any of four options. These options and the steps that Oracle takes are as follows.
SHUTDOWN NORMAL

When you use the SHUTDOWN command without any options, the default option is NORMAL. When you issue SHUTDOWN NORMAL, Oracle does the following:
Does not allow any new user connections.
Waits for all users to disconnect from the database. All connected users can continue working.
Closes the database, dismounts the instance, and shuts down the instance once all users are disconnected from the database.
SHUTDOWN IMMEDIATE

You use SHUTDOWN IMMEDIATE to bring down the database as quickly as possible. When you issue SHUTDOWN IMMEDIATE, Oracle does the following:
Does not allow any new user connections
Terminates all user connections to the database
Rolls back uncommitted transactions
Closes the database, dismounts the instance, and shuts down the instance
SHUTDOWN TRANSACTIONAL

You use SHUTDOWN TRANSACTIONAL to bring down the database as soon as the users complete their current transactions. This mode fits between IMMEDIATE and NORMAL. When you issue SHUTDOWN TRANSACTIONAL, Oracle does the following:
Does not allow any new user connections.
Does not allow any new transactions in the database. When a user tries to start a new transaction, the session is disconnected.
Waits for the user to either roll back or commit any uncommitted transactions.
Closes the database, dismounts the instance, and shuts down the instance once all transactions are complete.
The following example shows a SHUTDOWN TRANSACTIONAL command using SQL*Plus:

E:\>sqlplus /nolog

SQL*Plus: Release 9.0.1.0.1 - Production on Tue Oct 2 21:30:55 2001

(c) Copyright 2001 Oracle Corporation.  All rights reserved.

SQL> connect / as sysdba
Connected.
SQL> shutdown transactional
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> exit
Disconnected from Oracle9i Enterprise Edition Release 9.0.1.1.1 - Production
With the Partitioning option
JServer Release 9.0.1.1.1 - Production

E:\>
SHUTDOWN ABORT

When none of the other three shutdown options works, you can bring down the database abruptly by using the SHUTDOWN ABORT command. Instance recovery is needed the next time you start up the database. When you issue SHUTDOWN ABORT, Oracle does the following:
Terminates all current SQL statements that are being processed
Disconnects all connected users
Terminates the instance immediately
Will not roll back uncommitted transactions
When the database is started up after a SHUTDOWN ABORT, Oracle performs instance recovery: it rolls forward through the online redo log files and then rolls back the uncommitted transactions.
Oracle writes informational messages and alerts to different files depending on the type of message. These messages are useful when you’re troubleshooting a problem. Oracle writes to these files in locations that are specific to the operating system; you can specify the locations in the initialization parameters. You alter these parameters by using the ALTER SYSTEM command.
Oracle Objective
Monitor the use of diagnostic files
The three parameters used to specify the locations are as follows:

BACKGROUND_DUMP_DEST  The location to write the debugging trace files generated by the background processes and the alert log file.

USER_DUMP_DEST  The location to write the trace files generated by user sessions. The server process, on behalf of the user sessions, writes trace files if the session encounters a deadlock or any internal errors. The user sessions can also be traced; the trace files thus generated are written to this location.

CORE_DUMP_DEST  The location to write core dump files, primarily used on Unix platforms. Core dumps are normally produced when the session or the instance terminates abnormally with errors. This parameter is not available on Windows platforms.

All databases have an alert log file. The alert log file in the directory specified by BACKGROUND_DUMP_DEST logs significant database events and messages. The alert log stores information about block corruption errors, internal errors, and the non-default initialization parameters used at instance start-up. The alert log also records information about database start-up, shutdown, archiving, recovery, tablespace modifications, rollback segment modifications, and data file modifications.

The alert log is a normal text file. Its filename depends on the operating system; on Unix platforms, it takes the format alert_<SID>.log (SID is the instance name). During the start-up of the database, if the alert log file is not available, Oracle creates one. This file grows slowly, but without limit, so you might want to delete or archive it periodically. You can delete the file even when the database is running.
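To see where the diagnostic files are currently written, you can use the SHOW PARAMETERS filter described earlier; redirecting them is an ordinary ALTER SYSTEM change (the directory path below is illustrative):

SQL> SHOW PARAMETERS dump_dest
SQL> ALTER SYSTEM SET user_dump_dest = '/oracle/admin/ORADB01/udump';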
In previous versions of the Oracle Server, maintaining the physical operating system files associated with logical database objects was problematic. Dropping a logical database object (such as a tablespace) did not delete the associated operating system file, so an extra manual step was required to delete the files formerly associated with the dropped object.
Oracle Objective
Configure Oracle Managed Files (OMF)
The Oracle Managed Files (OMF) feature of Oracle9i addresses this issue. You can use two new initialization parameters to define the location of files in the operating system: DB_CREATE_FILE_DEST and DB_CREATE_ONLINE_LOG_DEST_n. The parameter DB_CREATE_FILE_DEST specifies the default location for new datafiles. The actual operating system file is created with the prefix ora_ and a suffix of .dbf. If the CREATE DATABASE command (or any other command that uses the OMF initialization parameters) fails, the associated data files are removed from the server file system. The parameter DB_CREATE_ONLINE_LOG_DEST_n specifies as many as five locations for online redo log files and control files. The online redo log files have a suffix of .log, and the control files have a suffix of .ctl. You don't have to use both parameters, and you can dynamically change the values of these parameters with the ALTER SYSTEM command.
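As a sketch of OMF in action (the directory paths and tablespace name are illustrative, and the parameters must be set before the file-creating command is issued):

SQL> ALTER SYSTEM SET db_create_file_dest = '/u02/oradata';
SQL> ALTER SYSTEM SET db_create_online_log_dest_1 = '/u03/oradata';
SQL> CREATE TABLESPACE app_data;
-- no DATAFILE clause: Oracle creates and names the data file under /u02/oradata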
OMF Time-Saving Benefits Consider how much OMF helps the busy DBA. Quite often, I forget to delete the datafile(s) associated with a dropped tablespace, causing problems with operating system backups and using disk space that could otherwise be used for other database objects.
Before OMF, I would occasionally perform a manual audit, comparing file listings at the operating system level with the results from querying the views V$DATAFILE, V$CONTROLFILE, and V$LOGFILE. Yet another immediate benefit of using OMF is to store a “base” directory pathname in a single initialization parameter. This makes database creation scripts easier to maintain, allowing for easy re-use in different environments by merely changing one or two initialization parameters.
Summary
This chapter briefly discussed the Universal Installer and Enterprise Manager, two of Oracle's Java-based GUI tools. The OUI has the same interface across all platforms and is used to install multiple products. Oracle Enterprise Manager is a system management tool used to manage Oracle components and to administer many local and remote databases from one location. OEM comprises a console and management utilities, a repository to save all the metadata information, and the actual nodes (databases and other components) that need to be managed.

For connecting to the database as an administrator, Oracle has two authentication methods. Operating system authentication is allowed if you are local to the computer where the database is situated or if you have a secure network connection. Password file authentication creates a password file at the server with the SYS password. Users can be granted the SYSDBA or SYSOPER privilege, and they can connect to the database with the appropriate privileges. You need either of these privileges to shut down or start up the database.

Starting up Oracle involves three stages. First, you start the instance, then the instance mounts the database, and finally you open the database. You can start up the database in any of these stages by using the start-up options. Database availability can also be controlled by enabling restricted access.

Shutting down the database also involves three stages, but in reverse order. You can shut down the database in four ways. SHUTDOWN NORMAL, the default, waits for all users to log out before shutdown. SHUTDOWN IMMEDIATE disconnects all user sessions and shuts down the database. SHUTDOWN TRANSACTIONAL waits for the users to complete their current transactions and then shuts down the database. SHUTDOWN ABORT simply terminates the instance immediately.
When you start up the database, Oracle uses different parameters to configure memory, to configure the database, and to set limits. These parameters are saved in a file called the parameter file (PFILE), which is read by Oracle during instance start-up. Many of these parameters can be changed dynamically for the session by using the ALTER SESSION command or for the database by using the ALTER SYSTEM command. The alternative parameter file, the SPFILE, is a binary file that can be modified on the fly and whose parameter changes can take effect for the life of the instance only, after the next restart only, or both.

The Optimal Flexible Architecture (OFA) is a set of guidelines specified by Oracle to better manage the Oracle software and the database. OFA enforces a consistent naming convention as well as separate locations for Oracle software, database, and administration files.

The database constantly writes information about major database events to a log file called the alert log file. Oracle also writes trace and dump information for debugging session problems.
Exam Essentials

Understand the purpose and benefits of OFA.  Be able to describe how different types of database and database-related files are stored in different locations.

Identify the default database administrator users.  Enumerate the two primary default users and the slightly different roles these users play in the database. Identify the role that is granted to other users to allow similar functionality.

Be able to create a password file.  Know how to specify the location of the password file, the method for adding users to the password file, and the initialization parameter that needs to be modified to allow SYSDBA access to non-default users.

Understand the three-tier OEM architecture.  Identify the three levels of the hierarchy and their roles. Identify special components that need to be created in each tier to facilitate ease of use for DBAs.

Describe the differences and similarities between the two types of initialization files.  Be able to list the key parameters found in both files and the way in which these files are maintained.

Understand how OMF simplifies operating system file administration.  Be able to identify the two new initialization parameters associated with OMF. Know how the filenames are constructed at the operating system level.

Enumerate the steps involved in startup and shutdown of the database.  Describe each step by what resources are available at that step. Understand the circumstances under which a DBA would use a particular STARTUP option. Understand the consequences of placing the database in read-only mode.

Identify the three categories of diagnostic files.  Understand how the locations of these files are specified, in addition to how these files are created and what is stored in each of these files.
Key Terms
Before you take the exam, make sure you’re familiar with the following terms:

alert log file
Review Questions

1. Which of the following is an invalid database start-up option?
   A. STARTUP NORMAL
   B. STARTUP MOUNT
   C. STARTUP NOMOUNT
   D. STARTUP FORCE

2. Which two values from the V$SESSION view are used to terminate a user session?
   A. SID
   B. USERID
   C. SERIAL#
   D. SEQUENCE#

3. To use operating system authentication to connect to the database as an administrator, what should the value of the parameter REMOTE_LOGIN_PASSWORDFILE be set to?
   A. SHARED
   B. EXCLUSIVE
   C. NONE
   D. OS

4. What information is available in the alert log files?
   A. Block corruption errors
   B. Users connecting and disconnecting from the database
   C. All user errors
   D. The default values of the parameters used to start up the database

5. Which parameter value is used to set the directory path where the alert log file is written?
   A. ALERT_DUMP_DEST
   B. USER_DUMP_DEST
   C. BACKGROUND_DUMP_DEST
   D. CORE_DUMP_DEST

6. Which SHUTDOWN option requires instance recovery when the database is started the next time?
   A. SHUTDOWN IMMEDIATE
   B. SHUTDOWN TRANSACTIONAL
   C. SHUTDOWN NORMAL
   D. None of the above

7. Which SHUTDOWN option will wait for the users to complete their uncommitted transactions?
   A. SHUTDOWN IMMEDIATE
   B. SHUTDOWN TRANSACTIONAL
   C. SHUTDOWN NORMAL
   D. SHUTDOWN ABORT

8. How do you make a database read-only? (Choose the best answer.)
   A. STARTUP READ ONLY
   B. STARTUP MOUNT; ALTER DATABASE OPEN READ ONLY
   C. STARTUP NOMOUNT; ALTER DATABASE READ ONLY
   D. STARTUP; ALTER SYSTEM ENABLE READ ONLY

9. Which role is created by default to administer databases?
   A. DATABASE_ADMINISTRATOR
   B. SUPER_USER
   C. DBA
   D. No such role is created by default; you need to create administrator roles

10. Which parameter in the ORAPWD utility is optional?
   A. FILE
   B. PASSWORD
   C. ENTRIES
   D. All the parameters are optional; if you omit a parameter, Oracle substitutes the default.

11. Which privilege do you need to connect to the database, if the database is started up by using STARTUP RESTRICT?
   A. ALTER SYSTEM
   B. RESTRICTED SESSION
   C. CONNECT
   D. RESTRICTED SYSTEM

12. At which stage of the database start-up is the control file opened?
   A. Before the instance start-up
   B. Instance started
   C. Database mounted
   D. Database opened

13. User SCOTT has opened a SQL*Plus session and left for lunch. When you queried the V$SESSION view, the STATUS was INACTIVE. You terminated SCOTT’s session. What will be the status of SCOTT’s session in V$SESSION?
   A. INACTIVE
   B. There will be no session information in V$SESSION view
   C. TERMINATED
   D. KILLED

14. Which command will “bounce” the database—that is, shut down the database and start up the database in a single command?
   A. STARTUP FORCE
   B. SHUTDOWN FORCE
   C. SHUTDOWN START
   D. There is no single command to “bounce” the database; you need to shut down the database and then restart it.

15. When performing the command SHUTDOWN TRANSACTIONAL, Oracle performs the following tasks in what order?
   A. Terminates the instance
   B. Performs a checkpoint
   C. Closes the data files and redo log files
   D. Waits for all user transactions to complete
   E. Dismounts the database
   F. Closes all sessions

16. What is the primary benefit of using an SPFILE to maintain the parameter file?
   A. The SPFILE can be mirrored across several drives, unlike PFILEs.
   B. Changes to the database configuration can be made persistent across shutdown and startup.
   C. Because the SPFILE is binary, the DBA will be less likely to edit it.
   D. The ALTER SYSTEM command cannot modify the contents of an SPFILE.

17. Using SQL*Plus, which two options below will display the value of the parameter DB_BLOCK_SIZE?
   A. SHOW PARAMETER DB_BLOCK_SIZE
   B. SHOW PARAMETERS DB_BLOCK_SIZE
   C. SHOW ALL
   D. DISPLAY PARAMETER DB_BLOCK_SIZE

18. When you issue the command ALTER SYSTEM ENABLE RESTRICTED SESSION, what happens to the users who are connected to the database?
   A. The users with DBA privilege remain connected, and others are disconnected.
   B. The users with RESTRICTED SESSION remain connected, and others are disconnected.
   C. Nothing happens to the existing users. They can continue working.
   D. The users are allowed to complete their current transaction and are disconnected.

19. Which view has information about users who are granted SYSDBA or SYSOPER privilege?
   A. V$PWFILE_USERS
   B. DBA_PWFILE_USERS
   C. DBA_SYS_GRANTS
   D. None of the above

20. Which of the following initialization parameters is NOT used in OMF operations?
   A. DB_CREATE_FILE_DEST
   B. DB_CREATE_FILE_DEST_2
   C. DB_CREATE_ONLINE_LOG_DEST_1
   D. DB_CREATE_ONLINE_LOG_DEST_5
Answers to Review Questions

1. A.  STARTUP NORMAL is an invalid option; to start the database, you issue the STARTUP command without any options or with STARTUP OPEN.

2. A and C.  SID and SERIAL# are used to kill a session. You can query the V$SESSION view to obtain these values. The command is ALTER SYSTEM KILL SESSION '<sid>, <serial#>'.

3. C.  The value of the REMOTE_LOGIN_PASSWORDFILE parameter should be set to NONE to use OS authentication. To use password file authentication, the value should be either EXCLUSIVE or SHARED.

4. A.  The alert log stores information about block corruption errors, internal errors, and the non-default initialization parameters used at instance start-up. The alert log also records information about database start-up, shutdown, archiving, recovery, tablespace modifications, undo segment modifications, and data file modifications.

5. C.  The alert log file is written in the BACKGROUND_DUMP_DEST directory. This directory also holds the trace files generated by the background processes. The USER_DUMP_DEST directory has the trace files generated by user sessions. The CORE_DUMP_DEST directory is used primarily on Unix platforms to save the core dump files. ALERT_DUMP_DEST is not a valid parameter.

6. D.  SHUTDOWN ABORT requires instance recovery when the database is started the next time. Oracle will also roll back uncommitted transactions during start-up. This option shuts down the instance without dismounting the database.

7. B.  When SHUTDOWN TRANSACTIONAL is issued, Oracle waits for the users to either commit or roll back their pending transactions. Once all users have either rolled back or committed their transactions, the database is shut down. When using SHUTDOWN IMMEDIATE, the user sessions are disconnected and the changes are rolled back. SHUTDOWN NORMAL waits for the user sessions to disconnect from the database.

8. B.  To put a database into read-only mode, you can mount the database and open the database in read-only mode. This can be accomplished in one step by using STARTUP OPEN READ ONLY.

9. C.  The DBA role is created when you create the database and is assigned to the SYS and SYSTEM users.

10. C.  The parameter ENTRIES is optional. You must specify a password file name and the SYS password. The password file created will be used for authentication.

11. B.  The RESTRICTED SESSION privilege is required to access a database that is in restricted mode. You start up the database in restricted mode by using STARTUP RESTRICT, or you change the database to restricted mode by using ALTER SYSTEM ENABLE RESTRICTED SESSION.

12. C.  The control file is opened when the instance mounts the database. The data files and redo log files are opened after the database is opened. When the instance is started, the background processes are started.

13. D.  When you terminate a session that is INACTIVE, the STATUS in V$SESSION shows as KILLED. When SCOTT tries to perform any database activity in the SQL*Plus window, he receives an error that his session is terminated. When an ACTIVE session is killed, the changes are rolled back and an error message is written to the user’s screen.

14. A.  STARTUP FORCE will terminate the current instance and start up the database. It is equivalent to issuing SHUTDOWN ABORT and STARTUP OPEN.

15. D, F, B, C, E, and A.  SHUTDOWN TRANSACTIONAL waits for all user transactions to complete. Once no transactions are pending, it disconnects all sessions and proceeds with the normal shutdown process. The normal shutdown process performs a checkpoint, closes the data files and redo log files, dismounts the database, and shuts down the instance.

16. B.  Using the ALTER SYSTEM command, changes can be made to the current (MEMORY) configuration, to the next restart (SPFILE), or to both (BOTH).

17. A and B.  The SHOW PARAMETER command will display the current value of the parameter. If you provide the parameter name, its value is displayed; if you omit the parameter name, all the parameter values are displayed. SHOW ALL in SQL*Plus displays the SQL*Plus environment settings, not the initialization parameters.

18. C.  If you enable RESTRICTED SESSION when users are connected, nothing happens to the already connected sessions. Future sessions are started only if the user has the RESTRICTED SESSION privilege.

19. A.  The dynamic view V$PWFILE_USERS has the username and a value of TRUE in column SYSDBA if the SYSDBA privilege is granted, or a value of TRUE in column SYSOPER if the SYSOPER privilege is granted.

20. B.  Only one data file destination is allowed. Control files and redo log files use the same parameter, DB_CREATE_ONLINE_LOG_DEST_n (n can have values from 1 to 5).
Creating a Database and Data Dictionary

ORACLE9i DBA FUNDAMENTALS I EXAM OBJECTIVES OFFERED IN THIS CHAPTER:

Describe the prerequisites necessary for database creation

Create a database using the Oracle Database Configuration Assistant (DBCA)

Create a database manually

Identify key data dictionary components

Identify the contents and uses of the data dictionary

Query the data dictionary
Exam objectives are subject to change at any time without prior notice and at Oracle’s sole discretion. Please visit Oracle’s Training and Certification website (http://www.oracle.com/ education/certification/) for the most current exam objectives listing.
Creating a database requires planning and preparation. You need to prepare the operating system, decide on the configuration parameters, and lay out the physical files of the database for optimum performance. You also need to create the data dictionary and the Oracle-supplied PL/SQL packages. In this chapter, you will learn how to create the database by using scripts and Oracle’s Database Configuration Assistant (DBCA). This chapter also discusses the basic initialization parameters, Optimal Flexible Architecture (OFA), Oracle Managed Files (OMF), and the data dictionary views.
Creating a Database
Creating an Oracle database requires planning and is done in multiple steps. The database is a collection of physical files that work together with an area of allocated memory and background processes. You create the database only once, but you can change the configuration (except the block size) or add more files to the database. Before creating the database, you must have the following:
Necessary hardware resources such as memory and disk space
After you complete these steps, you can create the database using the CREATE DATABASE command. Once the database is created, it is recommended that you create an SPFILE to replace the PFILE parameter file.
Oracle Objective
Describe the prerequisites necessary for database creation
Preparing Resources
Preparing the operating system resources is an important step. Depending on the operating system, you may have to adjust certain configuration parameters. For example, on Unix platforms, you must configure the shared memory parameters, because Oracle uses a single shared memory segment for the SGA (System Global Area). Since a major share of Oracle databases are created on Unix platforms, we will discuss certain operating system parameters that must be configured before you can create any Oracle database. The following list itemizes the Unix kernel parameters and describes their purposes. The super-user administers these kernel parameters.

SHMMAX  The maximum size of a shared memory segment

SHMMNI  The maximum number of shared memory identifiers in the system

SHMSEG  The maximum number of shared memory segments to which a user process can attach

SEMMNI  The maximum number of semaphore identifiers in the system

SHMMAX * SHMSEG  The total maximum shared memory that can be allocated

Allocate enough memory for creating the SGA when creating the database and for future database operation. It is better to fit the SGA in real memory, rather than using virtual memory, to avoid paging. Paging will degrade performance.
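To verify what is actually in effect, you can inspect the current shared memory usage and, on some platforms, the kernel settings themselves; the commands below are platform dependent and shown only as an illustration:

$ ipcs -m                  # list the shared memory segments currently allocated
$ sysctl kernel.shmmax     # Linux only: display the current SHMMAX value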
The Oracle software should be installed on the machine on which you will be creating the database. The user account that installs the software needs certain administrative privileges on Windows NT/2000/XP, but on Unix platforms, the user account that installs the software need not have super-user privileges. The super-user privilege is required only to set up the Oracle account and to complete certain post-installation tasks such as creating the oratab file. The oratab file lists all the database instance names on that machine, their Oracle home locations, and whether the database should be started automatically at boot time. Certain Oracle scripts and Enterprise Manager discovery services use this file. The oratab file resides in the /etc directory under AIX, HP-UX, and Tru64 or the /var/opt/oracle directory under Solaris and Linux.

The user account that owns the Oracle software should have the necessary privileges to create the data files, redo log files, and control files. You must make sure that enough free space is available to create these files, and you must follow certain Oracle guidelines about where to create the files, which are discussed in the section “Optimal Flexible Architecture.”

The parameter file lists the parameters that will be used for creating and configuring the database. The common parameters are discussed in the section “Parameters.”

Anyone can make mistakes; before performing any major task, ensure that you have methods to fix the mistakes. If you are already running databases on the server where you want to create the new database, make a full backup of all of them. If you overwrite an existing database file when creating the new database, the existing database will become useless.
Parameters
Oracle uses the parameter file to start up the instance before creating the database. You specify some database configuration values via the parameter file. The purpose and format of the parameter file were discussed in Chapter 3, “Installing and Managing Oracle.” The following parameters affect database configuration and creation:

CONTROL_FILES  Specifies the control file location(s) for the new database with the full pathname. Specify at least two control files on different disks. You can specify a maximum of eight control file names. Oracle creates these control files when the database is created. Be careful when specifying the control files; if you specify the control file name of an existing database, Oracle could overwrite that control file, which will damage the existing database. If you do not use this parameter, Oracle uses a default filename, which is operating system dependent.

DB_BLOCK_SIZE  Specifies the database block size as a multiple of the operating system block size; this value cannot be changed after the database is created. The default block size is 4KB on most platforms. Oracle allows block sizes from 2KB to 32KB, depending on the operating system.

DB_NAME  Specifies the database name; the name cannot be changed easily after the database is created (you must re-create the control file). The DB_NAME value can be a maximum of eight characters. You can use alphabetic characters, numbers, the underscore (_), the pound symbol (#), and the dollar symbol ($) in the name. No other characters are valid. The first character should be an alphabetic character. Oracle removes double quotation marks before processing the database name. During database creation, the DB_NAME value is recorded in the data files, redo log files, and control file of the database.

Table 4.1 lists and describes the other parameters that can be included in the parameter file. You must at least define DB_CACHE_SIZE, LOG_BUFFER, and SHARED_POOL_SIZE to size the SGA, which must fit into real, not virtual, memory.
TABLE 4.1    Initialization Parameters

Parameter Name                Description

OPEN_CURSORS                  The maximum number of open cursors a session can have. The default is 50.

MAX_ENABLED_ROLES             The maximum number of database roles that users can enable. The default is 20.

DB_CACHE_SIZE                 The size of the default buffer cache, with blocks sized by DB_BLOCK_SIZE. This parameter can be dynamically altered.

SGA_MAX_SIZE                  The maximum size allowed for all components of the SGA. Sets an upper limit so that dynamically altered sizes of other parameters cannot push the total SGA size over this limit.

SHARED_POOL_SIZE              The size of the shared pool. Can be specified in bytes, KB, or MB. The default value on most platforms is 16MB.

LARGE_POOL_SIZE               The size of the large pool area of the SGA. The default value is 0.

JAVA_POOL_SIZE                The size of the Java pool; the default value is 20,000KB. If you are not using Java, specify the value as 0.

PROCESSES                     The maximum number of processes that can connect to the instance. This includes the background processes.

LOG_BUFFER                    The size of the redo log buffer in bytes.

BACKGROUND_DUMP_DEST          The location of the background dump directory. The alert log file is written in this directory.

CORE_DUMP_DEST                The location of the core dump directory.

USER_DUMP_DEST                The location of the user dump directory.

REMOTE_LOGIN_PASSWORDFILE     The authentication method. When creating the database, make sure you have either commented out this parameter or set it to NONE. If you create the password file before creating the database, you can specify a different value such as EXCLUSIVE or SHARED.

COMPATIBLE                    The release with which the database server must maintain compatibility. You can specify values from 9.0 to the current release number.

SORT_AREA_SIZE                The size of the area allocated for temporary sorts.

LICENSE_MAX_SESSIONS          The maximum number of concurrent user sessions. When this limit is reached, only users with the RESTRICTED SESSION privilege are allowed to connect. The default is 0 (unlimited).

LICENSE_SESSIONS_WARNING      A warning limit on the number of concurrent user sessions. Messages are written to the alert log when new users connect after this limit is reached. New users are allowed to connect up to the LICENSE_MAX_SESSIONS value. The default is 0 (unlimited).

LICENSE_MAX_USERS             The maximum number of users that can be created in the database. The default is 0 (unlimited).
Environment Variables

If you are creating a database on Unix, be sure to set up the appropriate environment variables. The examples in the list below conform to Optimal Flexible Architecture (OFA).

ORACLE_BASE  The directory at the top of the Oracle tree, for example, /u01/apps/oracle. All versions of Oracle installed on this server are stored under this directory.

ORACLE_HOME  The location of the Oracle software, relative to ORACLE_BASE. The OFA-recommended location is $ORACLE_BASE/product/<release>, which in this case would resolve to /u01/apps/oracle/product/901.

ORACLE_SID  The instance name for the database. This name must be unique among all instances, regardless of version, running on this server.

ORA_NLS33  The environment variable that you must set if you want to use a character set other than the default.

PATH  The standard Unix path variable that should already exist in the Unix environment; you must add the directory for the Oracle binary executables, $ORACLE_HOME/bin, to this variable.

LD_LIBRARY_PATH  The directories where program libraries, both Oracle and non-Oracle, reside.
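For example, in a Bourne-style shell, an OFA-compliant setup might look like this (all values are illustrative):

$ ORACLE_BASE=/u01/apps/oracle; export ORACLE_BASE
$ ORACLE_HOME=$ORACLE_BASE/product/901; export ORACLE_HOME
$ ORACLE_SID=PROD01; export ORACLE_SID
$ PATH=$ORACLE_HOME/bin:$PATH; export PATH
$ LD_LIBRARY_PATH=$ORACLE_HOME/lib:$LD_LIBRARY_PATH; export LD_LIBRARY_PATH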
The CREATE DATABASE Command

You create the database using the CREATE DATABASE command. You must start up the instance (with STARTUP NOMOUNT PFILE=) before issuing the command.
Oracle Objective
Create a database manually
The following is a sample database creation command:

CREATE DATABASE "PROD01"
CONTROLFILE REUSE
LOGFILE
  GROUP 1 ('/oradata02/PROD01/redo0101.log',
           '/oradata03/PROD01/redo0102.log') SIZE 5M REUSE,
  GROUP 2 ('/oradata02/PROD01/redo0201.log',
           '/oradata03/PROD01/redo0202.log') SIZE 5M REUSE
MAXLOGFILES 4
MAXLOGMEMBERS 2
MAXLOGHISTORY 0
MAXDATAFILES 254
MAXINSTANCES 1
NOARCHIVELOG
CHARACTER SET "WE8MSWIN1252"
NATIONAL CHARACTER SET "AL16UTF16"
DATAFILE '/oradata01/PROD01/system01.dbf' SIZE 80M
  AUTOEXTEND ON NEXT 5M MAXSIZE UNLIMITED
UNDO TABLESPACE UNDOTBS
  DATAFILE '/oradata04/PROD01/undo01.dbf' SIZE 35M
DEFAULT TEMPORARY TABLESPACE TEMP
  TEMPFILE '/oradata05/PROD01/temp01.dbf' SIZE 20M;

Let’s look at the clauses used in the CREATE DATABASE command. The only mandatory portion of this command is the CREATE DATABASE clause. If you omit the database name, Oracle takes the default value from the parameter DB_NAME defined in the initialization parameter file. The value specified in the parameter file and the database name in this command should be the same.

The CONTROLFILE REUSE clause overwrites an existing control file. Normally you use this clause only when re-creating a database. If you omit this clause, and any of the files specified by the CONTROL_FILES parameter exist, Oracle returns an error.

The LOGFILE clause specifies the location of the online redo log files. If you omit the GROUP clause, Oracle creates the files specified in separate groups with one member in each. A database must have at least two redo groups. In the example, Oracle creates two redo log groups with two members in each. It is recommended that all redo log groups be the same size. The REUSE clause overwrites an existing file, if any, provided the sizes are the same.

The next five clauses specify limits for the database. The control file size depends on these limits, because Oracle pre-allocates space in the control file. MAXLOGFILES specifies the maximum number of redo log groups that can ever be created in the database. MAXLOGMEMBERS specifies the maximum number of redo log members (copies of redo log files) for each redo log group. MAXLOGHISTORY is used only for the Real Application Clusters configuration; it specifies the maximum number of archived redo log files for automatic media recovery. MAXDATAFILES specifies the maximum number of data files that can be created in this database. Data files are created when you create a tablespace or add more space to a tablespace by adding a data file. MAXINSTANCES specifies the maximum number of instances that can simultaneously mount and open this database. If you want to change any of these limits after the database is created, you must re-create the control file.
The initialization parameter DB_FILES specifies the maximum number of data files accessible to the instance. The MAXDATAFILES clause in the CREATE DATABASE command specifies the maximum number of data files allowed for the database. The DB_FILES parameter cannot specify a value larger than MAXDATAFILES.
You can specify NOARCHIVELOG or ARCHIVELOG to configure the redo log archiving. The default is NOARCHIVELOG; you can change the database to ARCHIVELOG mode by using the ALTER DATABASE command after the database is created.

The CHARACTER SET clause specifies the character set used to store data. The default is WE8MSWIN1252 on Windows platforms. The character set cannot be changed after database creation. The NATIONAL CHARACTER SET clause specifies the national character set used to store data in columns specifically defined as NCHAR, NCLOB, or NVARCHAR2. If not specified, the national character set defaults to the database character set.

The unqualified DATAFILE clause in this example specifies one or more files created for the SYSTEM tablespace. You can optionally specify the AUTOEXTEND clause, which is discussed in detail in Chapter 6, “Logical and Physical Database Structures.”

The UNDO TABLESPACE clause specifies an undo tablespace with one or more associated data files. This tablespace contains undo segments when automatic undo management is enabled with the initialization parameter UNDO_MANAGEMENT=AUTO.

The DEFAULT TEMPORARY TABLESPACE clause defines the tablespace location for all temporary segments. If you create a user without specifying a temporary tablespace, this one is used.
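For example, to switch a database created in NOARCHIVELOG mode to ARCHIVELOG mode later, the database must be mounted but not open; a sketch of the usual sequence is:

SQL> SHUTDOWN IMMEDIATE
SQL> STARTUP MOUNT
SQL> ALTER DATABASE ARCHIVELOG;
SQL> ALTER DATABASE OPEN;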
Now that you have seen what is involved in creating a database, let’s put this all together:

1. Be sure you have enough resources available and the necessary privileges.

2. Decide on a database name, control file locations, and a database block size, and prepare a parameter file including other necessary parameters.

3. Decide on the locations for control files, data files, and redo log files. If at all possible, spread out data files that may compete for the same resources to different physical volumes. For example, updates to a table will generate I/O against both the table and the index. Therefore, placing the indexes in a different tablespace on a different physical volume may improve performance.

4. Decide on the version of the database and the instance name. Set the environment variables ORACLE_HOME with the directory name of the Oracle software installation and ORACLE_SID with the instance name. Normally the instance name and database name are the same. Set up the ORA_NLS33 environment variable if you are using a character set other than the default.

5. Start the instance. Using SQL*Plus, connect using a SYSDBA account and issue STARTUP NOMOUNT.

6. Create the database by using the CREATE DATABASE command.
Using OMF to Create a Database

In contrast to using the full CREATE DATABASE command shown earlier, using Oracle Managed Files (OMF) can make the process of creating a database much simpler. As discussed in Chapter 3, if the initialization parameters DB_CREATE_FILE_DEST and DB_CREATE_ONLINE_LOG_DEST_n are defined with the desired operating system locations for the data files and online redo log files, creating a database can be as simple as the following:

CREATE DATABASE DEFAULT TEMPORARY TABLESPACE TMP;
As discussed in Chapter 3, using an SPFILE instead of a PFILE for database initialization has many distinct benefits for the DBA, including but not limited to “on-the-fly” modifications to the SPFILE contents, with the effect of any parameter change taking place immediately, after the next instance restart, or both. After you configure the init.ora file correctly, create the SPFILE while connected as SYSDBA:

SQL> CREATE SPFILE FROM PFILE;

By default, both the PFILE and SPFILE reside in the same location. At instance startup, the Oracle Server looks first for a file named spfileSID.ora. If it doesn’t exist, the file spfile.ora is used next. If that file does not exist, initSID.ora (a PFILE) is used.
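To confirm which file the running instance actually used, you can check the SPFILE initialization parameter; an empty value indicates that a PFILE was used:

SQL> SHOW PARAMETER spfile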
The Data Dictionary
The most important part of the Oracle database is the data dictionary. The data dictionary is a set of tables and views that hold the database’s metadata information. You cannot update the dictionary directly; Oracle updates the dictionary when you issue any Data Definition Language (DDL) commands. The dictionary is provided as read-only for users and administrators. The contents of the data dictionary and obtaining information from the dictionary are discussed in the section “Querying the Dictionary.”
Oracle Objective
Identify key data dictionary components
The data dictionary consists of base tables and user-accessible views. The base tables are normalized and contain cryptic, version-specific information. You use the views to query the dictionary and extract meaningful information. To create the views, run the additional Oracle-supplied scripts after the database is created.

The base tables contain information such as the users of the database and their permissions, the amount of used and unused space for database objects, constraint information, and so on. Users and administrators rarely, if ever, need to access the base tables, with the exception of tables such as AUD$, which contains auditing information for objects in the database.

When the database is created, Oracle creates two users, SYS and SYSTEM. SYS is the owner of the data dictionary, and SYSTEM is a DBA account. The initial password for SYS is CHANGE_ON_INSTALL; the initial password for SYSTEM is MANAGER. Change these passwords once the database is created.
Never change the definition or contents of the data dictionary base tables. Oracle uses the dictionary information for proper functioning of the database.
Creating the Dictionary
The Oracle database is functional only when you create the dictionary views and additional tablespaces, rollback segments, users, and so on. Creating the dictionary views is the next step after you create the database by using the CREATE DATABASE command. Running certain Oracle-supplied scripts creates the dictionary views. We’ll discuss all these topics in this section, as well as give you some basics of how PL/SQL packages are created and maintained in the data dictionary.
Data Dictionary Scripts

The data dictionary base tables are created under the SYS schema in the SYSTEM tablespace when you issue the CREATE DATABASE command. Oracle automatically creates the tablespace and tables using the sql.bsq script found under the $ORACLE_HOME/rdbms/admin directory. This script creates the following:
The SYSTEM tablespace by using the data file(s) specified in the CREATE DATABASE command
A rollback segment named SYSTEM in the SYSTEM tablespace
Indexes on dictionary tables and sequences for dictionary use
The roles PUBLIC, CONNECT, RESOURCE, DBA, DELETE_CATALOG_ROLE, EXECUTE_CATALOG_ROLE, and SELECT_CATALOG_ROLE
The DUAL table
Don’t modify the definitions in the sql.bsq script—for example, by adding columns, removing columns, or changing the data types or width. You can change these storage parameters: INITIAL, NEXT, MINEXTENTS, MAXEXTENTS, PCTINCREASE, FREELISTS, FREELIST GROUPS, and OPTIMAL.
The DUAL table is a dummy table owned by SYS and accessible to all users of the database. The table has only one column, named DUMMY, and only one row. Do not add more rows to this table.
Running the script catalog.sql creates the data dictionary views. This script also creates synonyms on the views to allow users easy access to them. Before running any data dictionary script, connect to the database as SYS. The dictionary creation scripts are under the $ORACLE_HOME/rdbms/admin directory on most platforms.

The script catproc.sql creates the dictionary items necessary for PL/SQL functionality. The other scripts necessary for creating dictionary objects depend on the operating system and the functionality you want in the database. For example, if you are not using Real Application Clusters (RAC), you need not install any RAC-related dictionary items. At a minimum, run the catalog.sql and catproc.sql scripts after creating the database.
Oracle Objective
Identify the contents and uses of the data dictionary
The dictionary creation scripts all begin with cat. Many of the scripts call other scripts. For example, when you execute catalog.sql, it calls the following scripts:

standard.sql  Creates a package called STANDARD, which contains the SQL functions to implement basic language features

cataudit.sql  Creates data dictionary views to support auditing

catexp.sql  Creates data dictionary views to support import/export

catldr.sql  Creates data dictionary views to support direct-path loads with SQL*Loader

catpart.sql  Creates data dictionary views to support partitioning

catadt.sql  Creates data dictionary views to support Oracle objects and types

catsum.sql  Creates data dictionary views to support Oracle summary management

From the name of a script, you can sometimes identify its purpose. The following list indicates the categories of scripts:

cat*.sql  Catalog and data dictionary scripts

dbms*.sql  PL/SQL administrative package definitions

prvt*.plb  PL/SQL administrative package code, in wrapped (encrypted) form

uNNNNNN.sql  Database upgrade/migration scripts

dNNNNNN.sql  Database downgrade scripts

utl*.sql  Additional tables and views needed for database utilities
Administering Stored Procedures and Packages
The PL/SQL stored programs are stored in the data dictionary. They are treated like any other database object. The code used to create a procedure, package, or function is available in the dictionary views DBA_SOURCE, ALL_SOURCE, and USER_SOURCE—except when you create the program with the WRAP utility. The WRAP utility generates encrypted code, which only the Oracle server can interpret. You manage the privileges on these stored programs by using regular GRANT and REVOKE statements. You can GRANT and REVOKE the EXECUTE privilege on these objects to other users of the database.

The DBA_OBJECTS, ALL_OBJECTS, and USER_OBJECTS views give information about the status of a stored program. If a procedure is invalid, you can recompile it by using the following statement:

ALTER PROCEDURE <procedure_name> COMPILE;

To recompile a package, compile the package definition and then the package body, as in the following statements:

ALTER PACKAGE <package_name> COMPILE;
ALTER PACKAGE <package_name> COMPILE BODY;

To compile a package, procedure, or function owned by another schema, you must have the ALTER ANY PROCEDURE privilege.
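For example, to find the stored programs in your own schema that need recompiling, you can check the STATUS column:

SQL> SELECT object_name, object_type
  2  FROM user_objects
  3  WHERE status = 'INVALID';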
Completing the Database Creation
After creating the database and the dictionary views, you must create additional tablespaces to complete the database creation process. Oracle recommends creating the following tablespaces if they were not created in the CREATE DATABASE script or with the DBCA (Database Configuration Assistant), discussed later in this chapter. You can create additional tablespaces depending on the requirements of your application. A sketch of creating some of these tablespaces follows the list.

UNDOTBS  Holds the undo segments for automatic undo management. When you create the database, Oracle creates a SYSTEM undo segment in the SYSTEM tablespace. For a database that has multiple tablespaces, you must have at least one undo segment that is not in the SYSTEM tablespace for manual undo management, or one undo tablespace for automatic undo management.

TEMP  Holds the temporary segments. Oracle uses temporary segments for sorting and for any intermediate operations. Oracle uses these segments when the information to be sorted does not fit in the sort area specified by the SORT_AREA_SIZE initialization parameter.

USERS  Contains the user tables.

INDX  Contains the user indexes.

TOOLS  Holds the tables and indexes created by the Oracle administrative tools.
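As a sketch, two of these tablespaces might be created as follows; the file names, locations, and sizes are illustrative:

SQL> CREATE UNDO TABLESPACE undotbs
  2  DATAFILE '/oradata04/PROD01/undotbs01.dbf' SIZE 200M;

SQL> CREATE TEMPORARY TABLESPACE temp
  2  TEMPFILE '/oradata05/PROD01/temp01.dbf' SIZE 100M;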
After creating these tablespaces, you must create additional users for the database. As soon as the database is created, back it up, and then immediately change the passwords for SYS and SYSTEM.
Querying the Dictionary
You can query the data dictionary views and tables in the same way that you query any other table or view. From the prefix of a data dictionary view, you can determine for whom the view is intended. Some views are accessible to all Oracle users; others are intended for database administrators only.
Oracle Objective
Query the data dictionary
The data dictionary views can be classified into the following categories based on their prefix:

DBA_  These views contain information about all structures in the database—they show what is in all users’ schemas. Accessible to the DBA or anyone with the SELECT_CATALOG_ROLE role, they provide information on all the objects in the database and have an OWNER column.

ALL_  These views show information about all objects that the user has access to. They are accessible to all users. Each view has an OWNER column, providing information about the objects accessible by the user.

USER_  These views show information about the structures owned by the user (in the user’s schema). They are accessible to all users and do not have an OWNER column.

V$  These views are known as dynamic performance views, because they are continuously updated while a database is open and in use, and their contents relate primarily to performance. The actual dynamic performance views are identified by the prefix V_$; public synonyms for these views have the prefix V$.
GV$ For almost every V$ view, Oracle has a corresponding GV$ view. These are the global dynamic performance views, which are useful if you are running Oracle Real Application Clusters. Each GV$ view has an additional column, INST_ID, that identifies the instance number.
The ALL_ views and USER_ views contain almost identical information except for the OWNER column, but the DBA_ views often contain additional information useful to administrators.
In Oracle9i, the initialization parameter O7_DICTIONARY_ACCESSIBILITY now defaults to FALSE. Therefore, even users with the SELECT ANY TABLE privilege cannot access the dictionary views unless they have either explicit object permissions or the role SELECT_CATALOG_ROLE.
You can use the data dictionary information to generate the source code for all the objects created in the database. For example, information on tables is available in the dictionary views DBA_TABLES and DBA_TAB_COLUMNS, ALL_TABLES and ALL_TAB_COLUMNS, or USER_TABLES and USER_TAB_COLUMNS. The dictionary view DICTIONARY contains the names and descriptions of all the data dictionary views in the database. DICT is a synonym for the DICTIONARY view; the dynamic performance view V$FIXED_TABLE contains similar information to that found in DICTIONARY. The DICT_COLUMNS dictionary view contains the description of all columns in the dictionary views. If you want to know all the dictionary views that provide information about tables, you can run a query similar to the following.
SQL> COL TABLE_NAME FORMAT A25
SQL> COL COMMENTS FORMAT A40
SQL> SELECT * FROM DICT WHERE TABLE_NAME LIKE '%TAB%';
The dictionary views ALL_OBJECTS, DBA_OBJECTS, and USER_OBJECTS provide information about the objects in the database. These views contain the timestamp of object creation and the last DDL timestamp. The STATUS column shows whether the object is invalid; this information is especially useful for PL/SQL stored programs and views.
Query the data dictionary view PRODUCT_COMPONENT_VERSION or V$VERSION to see the version of the database and installed components. Oracle product versions have five numbers. For example, in the version number 9.0.1.0.1, 9 is the version, the first zero is the new features’ release, the first 1 is the maintenance release, the second zero is the generic patch set number, and the last 1 is the platform-specific patch set number.
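For example, you can check the version from SQL*Plus with either of the following queries:

SELECT * FROM V$VERSION;

SELECT * FROM PRODUCT_COMPONENT_VERSION;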
Data Dictionary Views vs. Dynamic Performance Views The usual distinction between data dictionary views and dynamic performance views is that data dictionary views are relatively static and dynamic performance views are primarily related to performance. In reality, though, there are exceptions to this rule! The dynamic performance view V$VERSION may not change for months, and the data dictionary view DBA_PENDING_TRANSACTIONS may change constantly in a distributed environment. The experienced DBA, therefore, cannot always rely on view prefixes and sometimes just has to know where to look for information about the running database.
The Database Configuration Assistant
The Database Configuration Assistant (DBCA) is Oracle's GUI DBA tool for creating, modifying, or deleting a database. After you answer a few questions, this tool can create a database, save a template for future use, or give you the scripts to create the database. It is a good idea to generate the scripts by using this tool, customize the script files if needed, and then create the database. You can create the database with a Shared Server (MTS) configuration or a dedicated server configuration. You can also choose
the additional options you want to install in the database, such as Oracle InterMedia and Oracle JVM.
Oracle Objective
Create a database using the Oracle Database Configuration Assistant (DBCA)
You can run the DBCA as part of the Oracle Universal Installer (OUI) or as a stand-alone application. As with the manual creation of the database, be sure to define the environment variables before creating the database. If you choose the typical installation option, you have only a few questions to answer. You also have the option of copying a preconfigured database (template) or creating a new database. The tool generates the initialization parameters based on the type of database you create; the options are Online Transaction Processing (OLTP), Data Warehousing, or Multipurpose. If you choose the custom installation option, you have full control of the SGA sizing. Figure 4.1 shows the SGA memory configuration screen of the DBCA.
FIGURE 4.1 The DBCA SGA memory configuration screen
To run the DBCA, use the dbca command under Unix, or find DBCA in the Windows Start menu under Oracle/Configuration and Migration Tools/ Database Configuration Assistant. You can easily configure many, if not all, of the other initialization parameters from within the DBCA. Following are some typical parameters:
File locations: pathname for parameter and trace files
Character set values: NLS-related
Archiving: location and format of the archived log files
Trace files: location of the user and system trace files
Storage: block size, sizing control files, datafiles and tablespaces, redo log groups
Sorting: SORT_AREA_SIZE
Figure 4.2 shows a sample DBCA screen in which the character set options and sort area size can be adjusted. If you are an advanced DBA, you can modify virtually every initialization parameter, as shown in Figure 4.3.
FIGURE 4.2 The DBCA character set and sort area options screen
The script generated by the DBCA does the following (this is a good template for you to use when creating new databases):
1. Creates a parameter file, starts up the database in NOMOUNT mode, and creates the database by using the CREATE DATABASE command.
2. Runs catalog.sql.
3. Creates tablespaces for tools (TOOLS), undo (UNDOTBS), temporary (TEMP), user (USERS), and index (INDX).
4. Runs the following scripts:
a. catproc.sql—sets up PL/SQL.
b. caths.sql—installs the heterogeneous services (HS) data dictionary.
c. otrcsvr.sql—sets up Oracle Trace server stored procedures.
d. utlsampl.sql—sets up sample user SCOTT and creates demo tables.
e. pupbld.sql—creates product and user profile tables. This script is run as SYSTEM.
5. Runs the scripts necessary to install the other options chosen.
You can also use the DBCA to manage templates, which makes it easier to create a similar database on the same or a different server. You can create a template from scratch or generate it from an existing database, and you can create these derived templates with or without the data from the original database. In essence, this template management feature lets you easily clone a database. When you create a template that contains both the structure and the data from an existing database, any database you create with this template must contain all the datafiles, tablespaces, and undo segments in the template. You cannot add or remove any datafiles, tablespaces, or undo segments before the database is created, nor can you change any initialization parameters. You can, however, change control files, log file groups, and data file destinations.
Summary
In this chapter, you learned how to create a database. The Oracle database is created by using the command CREATE DATABASE. This command runs the sql.bsq script, which in turn creates the data dictionary tables. The three parameters that you should pay particular attention to before creating a database are CONTROL_FILES, DB_NAME, and DB_BLOCK_SIZE. You cannot change the block size of the database once it is created. Running the script catalog.sql creates the data dictionary views. DBAs and users use these views to query information from the data dictionary. The data dictionary is a set of tables and views that hold the metadata. The views prefixed with DBA_ are accessible only to the DBA or to a user with the SELECT_CATALOG_ROLE role. The views prefixed with ALL_ have information about all objects in the database that the user has any privilege on. The USER_ views show information about the objects that the user owns.
Before creating the database, be sure that you have enough resources, such as disk space and memory. You also need to prepare the operating system by setting the resource parameters, if any, and making sure that you have enough privileges. Oracle Managed Files (OMF) aids in the database creation by centralizing default operating system file locations. PL/SQL has several administrative packages that are useful to the DBA as well as developers. To install these packages you run the script catproc.sql. Most of the administrative scripts are located under the directory $ORACLE_HOME/rdbms/admin. Oracle has a graphical database creation tool, DBCA, designed to ease the administrative burden of creating scripts manually. DBCA also allows the DBA to create database templates, with and without data files.
Exam Essentials
Understand the preparation and planning steps for creating a database. Important items to verify include sufficient memory and disk space, along with a sufficient number of physical destinations to ensure recoverability.
Identify the environment variables and their purpose. Enumerate the minimum subset of environment variables needed to create or start a database.
Use the Database Configuration Assistant to create and maintain templates. Understand the different types of templates that can be created and modified; differentiate between the database objects that can and cannot be adjusted when using a template with datafiles.
Be able to manually construct a CREATE DATABASE statement. Understand the minimum set of parameters that need to be defined in init.ora; be able to use OMF to simplify the construction of the CREATE DATABASE statement, and understand how to create an SPFILE from an existing PFILE.
Understand how the base tables and data dictionary views are built during database creation. Identify the key script names that create the base tables, views, and packages.
Identify and describe the three categories of data dictionary views. Be able to describe the difference between the three types of views and how the contents differ based on the user rights and permissions in the database.
Differentiate the content and purpose of the dynamic performance views vs. the data dictionary views. Describe the conditions under which database information can be retrieved from one type of view instead of another, in addition to the database states under which some views are not available.
Key Terms
Before you take the exam, make sure you're familiar with the following terms:
base tables
Review Questions
1. How many control files are required to create a database?
A. One
B. Two
C. Three
D. None
2. Which environment variable or registry entry variable represents the instance name?
A. ORA_SID
B. INSTANCE_NAME
C. ORACLE_INSTANCE
D. ORACLE_SID
3. Complete the following sentence: The recommended configuration for control files is
A. One control file per database
B. One control file per disk
C. Two control files on two disks
D. Two control files on one disk
4. You have specified the LOGFILE clause in the CREATE DATABASE command as follows. What happens if the size of the log file redo0101.log, which already exists, is 10MB?
LOGFILE GROUP 1 ('/oradata02/PROD01/redo0101.log',
                 '/oradata03/PROD01/redo0102.log') SIZE 5M REUSE,
        GROUP 2 ('/oradata02/PROD01/redo0201.log',
                 '/oradata03/PROD01/redo0202.log') SIZE 5M REUSE
A. Oracle adjusts the size of all the redo log files to 10MB.
B. Oracle creates all the redo log files as 5MB.
C. Oracle creates all the redo log files as 5MB except redo0101.log, which is created as 10MB.
D. The command fails.
5. Which command must you issue before you can execute the CREATE DATABASE command?
A. STARTUP INSTANCE
B. STARTUP NOMOUNT
C. STARTUP MOUNT
D. None of the above
6. Which initialization parameter cannot be changed after creating the database?
A. DB_BLOCK_SIZE
B. DB_NAME
C. CONTROL_FILES
D. None; all the initialization parameters can be changed as and when required.
7. Which of the following objects or structures can be added or removed from a DBCA template? (Choose three.)
A. Tablespaces
B. File destinations
C. Datafiles
D. Control files
E. Log file groups
8. When you are creating a database, where does Oracle find information about the control files that need to be created?
A. From the initialization parameter file
B. From the CREATE DATABASE command line
C. From the environment variable
D. Files created under $ORACLE_HOME, with the name derived from <SID>.ctl
9. Which script creates the data dictionary views?
A. catalog.sql
B. catproc.sql
C. sql.bsq
D. dictionary.sql
10. Which prefix for the data dictionary views indicates that the contents of the view belong to the current user?
A. ALL_
B. DBA_
C. USR_
D. USER_
11. Which data dictionary view shows information about the status of a procedure?
A. DBA_SOURCE
B. DBA_OBJECTS
C. DBA_PROCEDURES
D. DBA_STATUS
12. How do you correct a procedure that has become invalid when one of the tables it is referring to was altered to drop a constraint?
A. Re-create the procedure
B. ALTER PROCEDURE <procedure_name> RECOMPILE
C. ALTER PROCEDURE <procedure_name> COMPILE
D. VALIDATE PROCEDURE <procedure_name>
13. Which of the following views does not have information about the operating system locations of the components?
A. V$CONTROLFILE
B. V$DATAFILE
C. V$PWFILE_USERS
D. V$LOGFILE
E. V$TEMPFILE
14. How many data files can be specified in the DATAFILE clause when creating a database?
A. One.
B. Two.
C. More than one; only one will be used for the SYSTEM tablespace.
D. More than one; all will be used for the SYSTEM tablespace.
15. Who owns the data dictionary?
A. SYS
B. SYSTEM
C. DBA
D. ORACLE
16. What is the default password for the SYS user?
A. MANAGER
B. CHANGE_ON_INSTALL
C. SYS
D. There is no default password.
17. Which data dictionary view provides information about the version of the database and installed components?
A. DBA_VERSIONS
B. PRODUCT_COMPONENT_VERSION
C. PRODUCT_VERSIONS
D. ALL_VERSION
18. What is the prefix for dynamic performance views?
A. DBA_
B. X$
C. V$
D. X#
19. Which is an invalid clause in the CREATE DATABASE command?
A. MAXLOGMEMBERS
B. MAXLOGGROUPS
C. MAXDATAFILES
D. MAXLOGHISTORY
20. Which database underlying table can be updated directly by the DBA without severe consequences to the operation of the database?
A. AUD$
B. LINK$
C. sql.bsq
D. DICT
E. HELP
Answers to Review Questions
1. D. You do not need any control files to create a database; the control files are created when you create the database, based on the filenames specified in the CONTROL_FILES parameter of the parameter file.
2. D. The ORACLE_SID environment variable represents the instance name. When you connect to the database without specifying a connect string, Oracle connects you to this instance.
3. C. Oracle allows multiplexing of control files. If you have two control files on two disks, one disk failure will not damage both control files.
4. D. The CREATE DATABASE command fails. For you to use the REUSE clause, the file that exists must be the same size as the size specified in the command.
5. B. You must start up the instance to create the database. Connect to the database by using the SYSDBA privilege, and start up the instance by using the command STARTUP NOMOUNT.
6. A. The block size of the database cannot be changed after database creation. The database name can be changed after re-creating the control file with the new name, and the CONTROL_FILES parameter can be changed if the files are copied to a new location.
7. B, D, E. In addition to tablespaces and data files, undo segments and initialization parameters must also remain exactly the same as defined in the template.
8. A. The control file names and locations are obtained from the initialization parameter file. The parameter name is CONTROL_FILES. If this parameter is not specified, Oracle creates a control file; the location and name depend on the operating system platform.
9. A. The catalog.sql script creates the data dictionary views. The base tables for these views are created by the script sql.bsq, which is executed when you issue the CREATE DATABASE command.
10. D. DBA_ prefixed views are accessible to the DBA or anyone with the SELECT_CATALOG_ROLE role; these views provide information on all the objects in the database and have an OWNER column. The ALL_ views show information about the structures that the user has access to. USER_ views show information about the structures owned by the user.
11. B. The DBA_OBJECTS dictionary view contains information on the objects, their creation and modification timestamps, and their status.
12. C. An invalid procedure, trigger, package, or view can be recompiled by using the ALTER ... COMPILE command.
13. C. The view V$PWFILE_USERS contains the list of users that have SYSDBA and SYSOPER rights; however, the password file is not a part of the database. The database consists of data files, log files, and control files.
14. D. You can specify more than one data file; the files will be used for the SYSTEM tablespace. The files specified cannot exceed the number of data files specified in the MAXDATAFILES clause.
15. A. The SYS user owns the data dictionary. The SYS and SYSTEM users are created when the database is created.
16. B. The default password for SYS is CHANGE_ON_INSTALL, and for SYSTEM it is MANAGER. You should change these passwords once the database is created.
17. B. The dictionary view PRODUCT_COMPONENT_VERSION shows information about the database version. The view V$VERSION has the same information.
18. C. The dynamic performance views have a prefix of V$. The actual views have the prefix V_$, and the synonyms have a V$ prefix. The views are called dynamic performance views because they are continuously updated while the database is open and in use, and their contents relate primarily to performance.
19. B. MAXLOGGROUPS is an invalid clause; the maximum number of log file groups is specified by using the clause MAXLOGFILES.
20. A. AUD$ contains records that audit DML operations against the database. No other base tables should be modified directly, and they should rarely be accessed other than through a data dictionary view. sql.bsq is a script, not a table; DICT is a synonym for the DICTIONARY view. LINK$ and HELP are base tables.
Control and Redo Log Files
ORACLE9i DBA FUNDAMENTALS I EXAM OBJECTIVES OFFERED IN THIS CHAPTER:
Explain the uses of the control file
Describe the contents of the control file
Multiplex and manage the control file
Manage the control file with Oracle Managed Files (OMF)
Obtain control file information
Explain the purpose of online redo log files
Describe the structure of online redo log files
Control log switches and checkpoints
Multiplex and maintain online redo log files
Manage online redo log files with Oracle Managed Files (OMF)
Exam objectives are subject to change at any time without prior notice and at Oracle’s sole discretion. Please visit Oracle’s Training and Certification website (http://www.oracle.com/ education/certification/) for the most current exam objectives listing.
This chapter discusses two important components of the Oracle database: the control file and the redo log files. The control file keeps information about the physical structure of the database. The redo log files record all changes made to data. These two files are critical for database recovery in case of a failure. You can multiplex both the control and the redo log files. You will learn more about control files, putting the database in ARCHIVELOG mode, controlling checkpoints, and managing redo logs and control files with Oracle Managed Files (OMF).
Maintaining the Control File
You can think of the control file as a metadata repository for the physical database. It describes the structure of the database—the data files and redo log files that constitute the database. The control file is a binary file, created when the database is created, and it is updated with the physical changes whenever you add or rename a file.
Oracle Objective
Explain the uses of the control file
Describe the contents of the control file
The control file is updated continuously and should be available at all times. Don't edit the contents of the control file; only Oracle processes should update its contents. When you start up the database, Oracle uses the control file to identify and open the data files and redo log files. Control files play a major role when recovering a database. The contents of the control file include the following:
Database name to which the control file belongs. A control file can belong to only one database.
Database creation timestamp.
Data files—name, location, and online/offline status information.
Redo log files—name and location.
Redo log archive information.
Tablespace names.
Current log sequence number, a unique identifier that is incremented and recorded when an online redo log file is switched.
Most recent checkpoint information. A checkpoint occurs when all the modified database buffers in the SGA are written to the data files. The system change number (SCN), a number sequentially assigned to each transaction in the database, is also recorded in the control file against the data file name that is taken offline or made read-only.
Begin and end of undo segments.
Recovery Manager’s (RMAN’s) backup information. RMAN is the Oracle utility you use to back up and recover databases.
The control file size is determined by the MAX clauses you provide when you create the database—MAXLOGFILES, MAXLOGMEMBERS, MAXLOGHISTORY, MAXDATAFILES, and MAXINSTANCES. Oracle pre-allocates space for these maximums in the control file. Therefore, when you add or rename a file in the database, the control file size does not change. When you add a new file to the database or relocate a file, an Oracle server process immediately updates the information in the control file. Back up the control file after any structural changes. The log writer process (LGWR) updates the control file with the current log sequence number. The checkpoint process (CKPT) updates the control file with the recent checkpoint information. When the database is in ARCHIVELOG mode, the archiver process (ARCn) updates the control file with archiving information such as the archive log file name and log sequence number.
The control file contains two types of record sections: reusable and not reusable. Recovery Manager information is kept in the reusable section. Items such as the names of the backup data files are kept in this section, and once this section fills up, the entries are re-used in a circular fashion.
Multiplexing Control Files
Since the control file is critical for database operation, Oracle recommends a minimum of two control files. You duplicate the control file on different disks either by using the multiplexing feature of Oracle or by using the mirroring feature of your operating system. The next two sections discuss the two ways you can implement the multiplexing feature: using init.ora and using an SPFILE.
Oracle Objective
Multiplex and manage the control file
Multiplexing Control Files Using init.ora
Multiplexing means keeping a copy of the same control file in different locations. To multiplex the control file, copy it to multiple locations and change the CONTROL_FILES parameter in the initialization file init.ora to include all the control file names. The following syntax shows three multiplexed control files.
CONTROL_FILES = ('/ora01/oradata/MYDB/ctrlMYDB01.ctl',
                 '/ora02/oradata/MYDB/ctrlMYDB02.ctl',
                 '/ora03/oradata/MYDB/ctrlMYDB03.ctl')
By storing the control file on multiple disks, you avoid the risk of a single point of failure. When multiplexing control files, updates to the control file can take a little longer, but that is insignificant when compared with the benefits. If you lose one control file, you can restart the database after copying one of the other control files or after changing the CONTROL_FILES parameter in the initialization file. When multiplexing control files, Oracle updates all the control files at the same time but reads only the first control file listed in the CONTROL_FILES parameter.
When creating a database, you can list the control file names in the CONTROL_FILES parameter, and Oracle creates as many control files as are listed. You can have a maximum of eight multiplexed control file copies. If you need to add more control file copies, do the following:
1. Shut down the database.
2. Copy the control file to more locations by using an operating system command.
3. Change the initialization parameter file to include the new control file name(s) in the parameter CONTROL_FILES.
4. Start up the database.
After creating the database, you can change the location of the control files, rename the control files, or drop certain control files. You must have at least one control file for each database. To add, rename, or delete control files, you need to follow the preceding steps: shut down the database; copy, rename, or drop the control files accordingly by using operating system commands; edit the CONTROL_FILES parameter in init.ora; and start up the database.
Multiplexing Control Files Using an SPFILE
Multiplexing using an SPFILE is similar to multiplexing using init.ora; the major difference is how the CONTROL_FILES parameter is changed. Follow these steps:
1. Alter the SPFILE while the database is still open:
SQL> ALTER SYSTEM SET CONTROL_FILES =
     '/ora01/oradata/MYDB/ctrlMYDB01.ctl',
     '/ora02/oradata/MYDB/ctrlMYDB02.ctl',
     '/ora03/oradata/MYDB/ctrlMYDB03.ctl',
     '/ora04/oradata/MYDB/ctrlMYDB04.ctl'
     SCOPE=SPFILE;
Because of the SCOPE=SPFILE qualifier, this parameter change takes effect only after the next instance restart. The contents of the binary SPFILE are changed immediately, but the old specification of CONTROL_FILES is used until the instance is restarted.
2. Shut down the database.
3. Copy an existing control file to the new location:
$ cp /ora01/oradata/MYDB/ctrlMYDB01.ctl /ora01/oradata/MYDB/ctrlMYDB04.ctl
4. Start the instance:
SQL> STARTUP
If you lose one of the control files, you can shut down the database, copy a control file, or change the CONTROL_FILES parameter and restart the database.
Using OMF to Manage Control Files
Using OMF can make the creation and maintenance of control files much easier. To use OMF-created control files, do not specify the CONTROL_FILES parameter in init.ora; instead, make sure that the parameter DB_CREATE_ONLINE_LOG_DEST_n is specified n times, starting with 1, where n is the number of control files you want created. The actual names of the control files are system generated and can be found in the alert log, located in the directory specified by the BACKGROUND_DUMP_DEST parameter.
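For example, an init.ora fragment similar to the following would cause Oracle to create and manage two control files; the directory names are assumptions for this sketch:

# Two OMF destinations: one control file is created in each
DB_CREATE_ONLINE_LOG_DEST_1 = /ora01/oradata/MYDB
DB_CREATE_ONLINE_LOG_DEST_2 = /ora02/oradata/MYDB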
Oracle Objective
Manage the control file with Oracle Managed Files (OMF)
To add more copies of the control file later, use the method described in the previous section, “Multiplexing Control Files Using an SPFILE.”
Creating New Control Files
You can create a new control file by using the CREATE CONTROLFILE command. You will need to create a new control file if you lose all the control files that belong to the database, if you want to change any of the MAX clauses in the CREATE DATABASE command, or if you want to change the database name. You must know the data file names and redo log file names to create the control file. Follow these steps to create the new control file:
1. Prepare the CREATE CONTROLFILE command. You should have the complete list of data files and redo log files. If you omit any data files, they can no longer be a part of the database. The following is an example of the CREATE CONTROLFILE command.
CREATE CONTROLFILE SET DATABASE "ORACLE" NORESETLOGS NOARCHIVELOG
    MAXLOGFILES 32
    MAXLOGMEMBERS 2
    MAXDATAFILES 32
    MAXINSTANCES 1
    MAXLOGHISTORY 1630
LOGFILE
    GROUP 1 'C:\ORACLE\DATABASE\LOG2ORCL.ORA' SIZE 500K,
    GROUP 2 'C:\ORACLE\DATABASE\LOG1ORCL.ORA' SIZE 500K
DATAFILE
    'C:\ORACLE\DATABASE\SYS1ORCL.ORA',
    'C:\ORACLE\DATABASE\USR1ORCL.ORA',
    'C:\ORACLE\DATABASE\RBS1ORCL.ORA',
    'C:\ORACLE\DATABASE\TMP1ORCL.ORA',
    'C:\ORACLE\DATABASE\APPDATA1.ORA',
    'C:\ORACLE\DATABASE\APPINDX1.ORA';
The options in this command are similar to those of the CREATE DATABASE command, discussed in Chapter 4, "Creating a Database and Data Dictionary." The NORESETLOGS option specifies that the online redo log files should not be reset.
2. Shut down the database.
3. Start up the database with the NOMOUNT option. Remember, to mount the database, Oracle needs to open the control file.
4. Create the new control file with a command similar to the preceding example. The control files will be created using the names and locations specified in the initialization parameter CONTROL_FILES.
5. Open the database by using the ALTER DATABASE OPEN command.
6. Shut down the database and back up the database.
The steps provided here are very basic. Depending on the situation, you might have to perform additional steps. Detailing all the steps that might be required to create a control file and the options in opening a database are beyond the scope of this book.
You can generate the CREATE CONTROLFILE command from the current database by using the command ALTER DATABASE BACKUP CONTROLFILE TO TRACE. The control file creation script is written to the USER_DUMP_DEST directory.
After creating the control file, determine whether any of the data files listed in the dictionary are missing from the control file. If you query the V$DATAFILE view, the missing files will have the name MISSINGnnnn. If you created the control file by using the RESETLOGS option, the missing data files cannot be added back to the database. If you created the control file with the NORESETLOGS option, the missing data file can be included in the database by performing a media recovery.
You can back up the control file when the database is up by using the command
ALTER DATABASE BACKUP CONTROLFILE TO '<filename>' REUSE;
Another way to back up a control file is by using the command
ALTER DATABASE BACKUP CONTROLFILE TO TRACE;
This command places the contents of the control file into a text-format trace file, located in USER_DUMP_DEST, albeit with some extraneous information that must be edited out before using it to re-create the control file:
Dump file H:\Oracle9i\admin\or90\udump\ORA01568.TRC
Wed Oct 10 22:00:05 2001
ORACLE V9.0.1.1.1 - Production vsnsta=0
vsnsql=10 vsnxtr=3
Windows 2000 Version 5.0 Service Pack 2, CPU type 586
Oracle9i Enterprise Edition Release 9.0.1.1.1 - Production
With the Partitioning option
JServer Release 9.0.1.1.1 - Production
Windows 2000 Version 5.0 Service Pack 2, CPU type 586
Instance name: or90
Redo thread mounted by this instance: 1
Oracle process number: 13
Windows thread id: 1568, image: ORACLE.EXE

*** SESSION ID:(8.39) 2001-10-10 22:00:05.000
*** 2001-10-10 22:00:05.000
# The following commands will create a new control file and use it
# to open the database.
# Data used by the recovery manager will be lost. Additional logs may
# be required for media recovery of offline data files. Use this
# only if the current version of all online logs are available.
STARTUP NOMOUNT
CREATE CONTROLFILE REUSE DATABASE "OR90" NORESETLOGS NOARCHIVELOG
    MAXLOGFILES 50
    MAXLOGMEMBERS 5
    MAXDATAFILES 100
    MAXINSTANCES 1
    MAXLOGHISTORY 113
LOGFILE
  GROUP 1 'H:\ORACLE9I\ORADATA\OR90\REDO01.LOG' SIZE 100M,
  GROUP 2 'H:\ORACLE9I\ORADATA\OR90\REDO02.LOG' SIZE 100M,
  GROUP 3 'H:\ORACLE9I\ORADATA\OR90\REDO03.LOG' SIZE 100M
# STANDBY LOGFILE
DATAFILE
  'H:\ORACLE9I\ORADATA\OR90\SYSTEM01.DBF',
  'H:\ORACLE9I\ORADATA\OR90\UNDOTBS01.DBF',
  'H:\ORACLE9I\ORADATA\OR90\CWMLITE01.DBF',
  'H:\ORACLE9I\ORADATA\OR90\DRSYS01.DBF',
  'H:\ORACLE9I\ORADATA\OR90\EXAMPLE01.DBF',
  'H:\ORACLE9I\ORADATA\OR90\INDX01.DBF',
  'H:\ORACLE9I\ORADATA\OR90\TOOLS01.DBF',
  'H:\ORACLE9I\ORADATA\OR90\USERS01.DBF',
  'H:\ORACLE9I\ORADATA\OR90\OEM_REPOSITORY.DBF'
CHARACTER SET WE8MSWIN1252
;
# Recovery is required if any of the datafiles are restored backups,
# or if the last shutdown was not normal or immediate.
RECOVER DATABASE
# Database can now be opened normally.
ALTER DATABASE OPEN;
# Commands to add tempfiles to temporary tablespaces.
# Online tempfiles have complete space information.
# Other tempfiles may require adjustment.
ALTER TABLESPACE TEMP ADD TEMPFILE
  'H:\ORACLE9I\ORADATA\OR90\TEMP01.DBF' REUSE;
# End of tempfile additions.
#
Oracle recommends backing up the control file whenever you make a change to the database structure, such as adding data files, renaming files, or dropping redo log files.
Querying Control File Information
The Oracle data dictionary holds all the information about the control file. The view V$CONTROLFILE lists the names of the control files for the database. The STATUS column should always be NULL; when a control file is missing, the STATUS would be INVALID, but that should never occur, because when Oracle cannot update one of the control files, the instance crashes—you can start up the database only after copying a good control file.
You can also use the SHOW PARAMETER command to retrieve the names of the control files.
SQL> show parameter control_files

NAME            TYPE        VALUE
--------------- ----------- ------------------------------
control_files   string      H:\Oracle9i\oradata\or90\CONTR
                            OL01.CTL, H:\Oracle9i\oradata\
                            or90\CONTROL02.CTL, H:\Oracle9i\
                            oradata\or90\CONTROL03.CTL
The other data dictionary view that gives information about the control file is V$CONTROLFILE_RECORD_SECTION, which displays the control file record sections. The record type, record size, total records allocated, number of records used, and the index position of the first and last records are in this view. For a listing of the record types, record sizes, and usage, run the following query.
SQL> SELECT TYPE, RECORD_SIZE, RECORDS_TOTAL, RECORDS_USED
  2  FROM V$CONTROLFILE_RECORD_SECTION;
Other data dictionary views read information from the control file. Table 5.1 lists and describes these dynamic performance views. You can access these views when the database is mounted, that is, before opening the database.
TABLE 5.1 Dictionary Views That Read from the Control File
V$ARCHIVED_LOG Archive log information such as size, SCN, timestamp, etc.
V$BACKUP Backup status of the individual datafiles that constitute the database.
V$BACKUP_DATAFILE Filename, timestamp, etc. of the data files backed up using RMAN.
V$BACKUP_PIECE Information about backup pieces, updated when using RMAN.
V$BACKUP_REDOLOG Information about the archived log files backed up using RMAN.
V$BACKUP_SET Information about complete, successful backups using RMAN.
V$DATABASE Database information such as name, creation timestamp, archive log mode, SCN, log sequence number, etc.
V$DATAFILE Information about the data files associated with the database.
V$DATAFILE_COPY Information about data files copied during a hot backup or using RMAN.
V$DATAFILE_HEADER Data file header information; the filename and status are obtained from the control file.
V$LOG Online redo log group information.
V$LOGFILE Files (members) of the online redo log groups.
V$THREAD Information about the log files assigned to each instance.
Maintaining and Monitoring Redo Log Files
Redo logs record all changes to the database. The redo log buffer in the SGA is written to the redo log file periodically by the LGWR process. The redo log files are accessed and are open during normal database operation; hence they are called the online redo log files. Every Oracle database must have at least two redo log files. The LGWR process writes to these files in a circular fashion. For example, say there are three online redo log files. The LGWR process writes to the first file, and when this file is full, it starts writing to the second file, and then to the third file, and then again to the first file (overwriting the contents).
Oracle Objective
Explain the purpose of online redo log files
Describe the structure of online redo log files
Online redo log files are filled with redo records. A redo record, also called a redo entry, is made up of a group of change vectors, each of which is a description of a change made to a single block in the database. Redo entries record data that you can use to reconstruct all changes made to the
database, including the undo segments. When you recover the database by using redo log files, Oracle reads the change vectors in the redo records and applies the changes to the relevant blocks. LGWR writes redo information from the redo log buffer to the online redo log files under a variety of circumstances:
A user commits a transaction, even if this is the only transaction in the log buffer.
The redo log buffer becomes one-third full.
When there is approximately 1MB of changed records in the buffer. This total does not include deleted or inserted records.
LGWR always writes its records to the online redo log file before DBWn writes new or modified database buffer cache records to the datafiles.
Each database has its own online redo log groups. A log group can have one or more redo log members (each member is a single operating system file). If you have a Real Application Clusters configuration, in which multiple instances are mounted to one database, each instance will have one online redo thread. That is, the LGWR process of each instance writes to the same online redo log files, and hence Oracle has to keep track of the instance from which the database changes are coming. For single-instance configurations, there will be only one thread, and that thread number is 1. The redo log file contains both committed and uncommitted transactions. Whenever a transaction is committed, a system change number is assigned to the redo records to identify the committed transaction.
The redo log group is referenced by an integer; you can specify the group number when you create the redo log files, either when you create the database or when you create the control file. You can also change the redo log configuration (add/drop/rename files) by using database commands. The following example shows a CREATE DATABASE command.
CREATE DATABASE "MYDB01"
LOGFILE '/ora02/oradata/MYDB01/redo01.log' SIZE 10M,
        '/ora03/oradata/MYDB01/redo02.log' SIZE 10M;
Two log file groups are created here; the first file will be assigned to group 1, and the second file will be assigned to group 2. You can have more files in each group; this practice is known as the multiplexing of redo log files, which we'll discuss later in the chapter. You can specify any group number—the range will be between 1 and MAXLOGFILES. Oracle recommends that all redo log groups be the same size. The following is an example of creating the log files by specifying the groups.
CREATE DATABASE "MYDB01"
LOGFILE GROUP 1 '/ora02/oradata/MYDB01/redo01.log' SIZE 10M,
        GROUP 2 '/ora03/oradata/MYDB01/redo02.log' SIZE 10M;
Log Switch Operations
The LGWR process writes to only one redo log file group at any time. The file that is actively being written to is known as the current log file. The log files that are required for instance recovery are known as the active log files. The other log files are known as inactive. Oracle automatically recovers an instance when starting up the instance by using the online redo log files. Instance recovery may be needed if you do not shut down the database properly or if your computer crashes.
Oracle Objective
Control log switches and checkpoints
The log files are written in a circular fashion. A log switch occurs when Oracle finishes writing to one file and starts writing to the next file. A log switch always occurs when the current redo log file is completely full and log writing must continue. You can force a log switch by using the ALTER SYSTEM command. A manual log switch may be necessary when performing maintenance on the redo log files by using the ALTER SYSTEM SWITCH LOGFILE command. Figure 5.1 shows how LGWR writes to the redo log groups in a circular fashion. Whenever a log switch occurs, Oracle allocates a sequence number to the new redo log file before writing to it. As stated earlier, this number is known as the log sequence number. If there are lots of transactions or changes to the database, the log switches can occur too frequently. Size the redo log file appropriately to avoid frequent log switches. Oracle writes to the alert log file whenever a log switch occurs.
Redo log files are written sequentially on the disk, so the I/O will be fast if there is no other activity on the disk (the disk head is always properly positioned). Keep the redo log files on a separate disk for better performance. If you have to store a data file on the same disk as the redo log file, do not put the SYSTEM, UNDOTBS, or any very active data or index tablespace file on this disk.
Database checkpoints are closely tied to redo log file switches. A checkpoint is an event that flushes the modified data from the buffer cache to the disk and updates the control file and data files. The CKPT process updates the headers of data files and control files; the actual blocks are written to the file by the DBWn process. A checkpoint is initiated when the redo log file is filled and a log switch occurs; when the instance is shut down with NORMAL, TRANSACTIONAL, or IMMEDIATE; when a tablespace status is changed to read-only or put into BACKUP mode; when a tablespace or datafile is taken offline; or when other values specified by certain parameters (discussed later in this section) are reached.
You can force a checkpoint if needed. Forcing a checkpoint ensures that all changes to the database buffers are written to the data files on disk.
ALTER SYSTEM CHECKPOINT;
Another way to force a checkpoint is by forcing a log file switch.
ALTER SYSTEM SWITCH LOGFILE;
The size of the redo log affects checkpoint performance. If the redo log is small relative to the volume of transactions, a log switch occurs often and so does the checkpoint. The DBWn process writes the dirty buffer blocks whenever a checkpoint occurs. This situation might reduce the time required for instance recovery, but it might also affect runtime performance. You can adjust checkpoints primarily by using the initialization parameter FAST_START_MTTR_TARGET. This parameter replaces the deprecated parameters FAST_START_IO_TARGET and LOG_CHECKPOINT_TIMEOUT from previous versions of the Oracle database. It is used to ensure that recovery time at instance startup (if required) will not exceed a certain number of seconds. For example, setting FAST_START_MTTR_TARGET to 600 ensures that instance recovery will not take more than 600 seconds, by writing redo log buffer blocks more often, writing dirty database buffer cache entries more often, and so on.
Setting the parameter LOG_CHECKPOINTS_TO_ALERT to TRUE logs each checkpoint to the alert log file, which is useful for determining whether checkpoints are occurring at the desired frequency.
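For example, the following statements set a recovery-time target and turn on checkpoint logging; the 600-second target is an assumption for this sketch, not a recommendation:

ALTER SYSTEM SET FAST_START_MTTR_TARGET = 600;
ALTER SYSTEM SET LOG_CHECKPOINTS_TO_ALERT = TRUE;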
Redo Log Troubleshooting
In the case of redo log groups, it's best to be generous with the number of groups and the number of members for each group. After estimating the number of groups that would be appropriate for your installation, add one more. I can remember many database installations in which I tried to be overly cautious about disk space usage without putting things into perspective; the slight additional work involved in maintaining additional or larger redo logs is small in relation to the time needed to fix a problem when the number of users and concurrent active transactions increases.
The space needed for additional log file groups is minimal and is well worth the effort up front to avoid the dreaded “Checkpoint not complete” message in the alert log and potentially increasing the recovery time in the event of an instance failure.
Multiplexing Log Files
You can keep multiple copies of the online redo log file to safeguard against damage to these files. When multiplexing online redo log files, LGWR concurrently writes the same redo log information to multiple identical online redo log files, thereby eliminating a single point of redo log failure. All copies of the redo file are the same size and are known as a group, which is identified by an integer. Each redo log file in the group is known as a member. You must have at least two redo log groups for normal database operation.
Oracle Objective
Multiplex and maintain online redo log files
When multiplexing redo log files, it is preferable to keep the members of a group on different disks, so that one disk failure will not affect the continuing operation of the database. If LGWR can write to at least one member of the group, database operation proceeds as normal, and an entry is written to the alert log file. If no members of the redo log file group are available for writing, Oracle shuts down the instance. An instance recovery or media recovery may be needed to bring up the database.
You can create multiple copies of the online redo log files when you create the database. For example, the following statement creates two redo log file groups with two members in each.
CREATE DATABASE "MYDB01"
LOGFILE GROUP 1 ('/ora02/oradata/MYDB01/redo0101.log',
                 '/ora03/oradata/MYDB01/redo0102.log') SIZE 10M,
        GROUP 2 ('/ora02/oradata/MYDB01/redo0201.log',
                 '/ora03/oradata/MYDB01/redo0202.log') SIZE 10M;
The maximum number of log file groups is specified in the clause MAXLOGFILES, and the maximum number of members is specified in the clause MAXLOGMEMBERS. You can separate the filenames (members) by using a space or a comma.
Creating New Groups
You can create and add more redo log groups to the database by using the ALTER DATABASE command. The following statement creates a new log file group with two members.
ALTER DATABASE ADD LOGFILE GROUP 3
   ('/ora02/oradata/MYDB01/redo0301.log',
    '/ora03/oradata/MYDB01/redo0302.log') SIZE 10M;
If you omit the GROUP clause, Oracle assigns the next available number. For example, the following statement also creates a multiplexed group.
ALTER DATABASE ADD LOGFILE
   ('/ora02/oradata/MYDB01/redo0301.log',
    '/ora03/oradata/MYDB01/redo0302.log') SIZE 10M;
To create a new group without multiplexing, use the following statement.
ALTER DATABASE ADD LOGFILE
   '/ora02/oradata/MYDB01/redo0301.log' REUSE;
You can add more than one redo log group by using the ALTER DATABASE command—just use a comma to separate the groups.
If the redo log files you create already exist, use the REUSE option and don't specify the size. The new redo log size will be the same as that of the existing file.
Adding New Members
If you forgot to multiplex the redo log files when creating the database or if you need to add more redo log members, you can do so by using the ALTER DATABASE command. When adding new members, you do not specify the file size, because all group members will have the same size. If you know the group number, the following statement will add a member to group 2.
ALTER DATABASE ADD LOGFILE MEMBER
   '/ora04/oradata/MYDB01/redo0203.log' TO GROUP 2;
You can also add group members by specifying the names of the other members in the group, instead of specifying the group number. Specify all the existing group members with this syntax.
ALTER DATABASE ADD LOGFILE MEMBER
   '/ora04/oradata/MYDB01/redo0203.log'
   TO ('/ora02/oradata/MYDB01/redo0201.log',
       '/ora03/oradata/MYDB01/redo0202.log');
Adding New Members Using Storage Manager
Adding redo log members is even easier with Storage Manager. Figure 5.2 shows the screen from Storage Manager that you can use to add new redo log members to a redo log group.
FIGURE 5.2 The Storage Manager screen for adding redo log members
Renaming Log Members
If you want to move a log file member from one disk to another or just want a more meaningful name, you can rename a redo log member. Before renaming the online redo log members, the new (target) online redo log files should exist. The SQL commands in Oracle change only the internal pointer in the control file to a new log file; they do not change or rename the operating system file. You must use an operating system command to rename or move the file. Follow these steps to rename a log member:
1. Shut down the database (a complete backup is recommended).
2. Copy/rename the redo log file member to the new location by using an operating system command.
3. Start up the instance and mount the database (STARTUP MOUNT).
4. Rename the log file member in the control file:
ALTER DATABASE RENAME FILE '<old_file_name>' TO '<new_file_name>';
5. Open the database (ALTER DATABASE OPEN).
6. Back up the control file.
Dropping Redo Log Groups
You can drop a redo log group and its members by using the ALTER DATABASE command. Remember that you must have at least two redo log groups for the database to function normally. The group to be dropped should not be the active group or the current group—that is, you can drop only an inactive log file group. If the log file group to be dropped is not inactive, first use the ALTER SYSTEM SWITCH LOGFILE command. To drop log file group 3, use the following SQL statement.
ALTER DATABASE DROP LOGFILE GROUP 3;
When an online redo log group is dropped from the database, the operating system files are not deleted from disk. The control files of the associated database are updated to drop the members of the group from the database structure. After dropping an online redo log group, make sure that the drop completed successfully, and then use the appropriate operating system command to delete the dropped online redo log files.
Dropping Redo Log Members
Similar to the conditions for dropping a redo log group, you can drop only the members of an inactive redo log group. Also, if there are only two groups, the log member to be dropped should not be the last member of a group. You can have a different number of members for each redo log group, though it is not advised. For example, say you have three log groups, each with two members. If you drop a log member from group 2, and a failure occurs to the sole remaining member of group 2, the instance crashes. So even if you drop a member for maintenance reasons, ensure that all redo log groups have the same number of members. To drop a log member, use the DROP LOGFILE MEMBER clause of the ALTER DATABASE command.
ALTER DATABASE DROP LOGFILE MEMBER
   '/ora04/oradata/MYDB01/redo0203.log';
The operating system file is not removed from the disk; only the control file is updated. Use an operating system command to delete the redo log file member from disk.
If a database is running in ARCHIVELOG mode, redo log members cannot be deleted unless the redo log group has been archived.
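Before dropping a group or a member, you can check each group's status and whether it has been archived with a query such as the following:

SELECT GROUP#, STATUS, ARCHIVED
FROM   V$LOG;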
Clearing Online Redo Log Files
Under certain circumstances, a redo log group member (or all members of a log group) may become corrupted. To solve this problem, you can drop and re-add the log file group or group member. It is much easier, however, to use the ALTER DATABASE CLEAR LOGFILE command. The following example clears the contents of redo log group 3 in the database:
ALTER DATABASE CLEAR LOGFILE GROUP 3;
Another distinct advantage of this command is that you can clear a log group even if the database has only two log groups, and only one member in each group. You can also clear a log group member even if it has not been archived yet by using the UNARCHIVED keyword (see the sketch after this paragraph). In this case, it is advisable to do a full database backup at the earliest convenience, because the unarchived redo log file is no longer usable for database recovery.
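As a sketch of the UNARCHIVED keyword mentioned above, the following clears group 3 even though it has not yet been archived; remember to take a full backup afterward:

ALTER DATABASE CLEAR UNARCHIVED LOGFILE GROUP 3;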
Managing Online Redo Log Files with OMF
OMF simplifies online redo log management. As with all OMF-related operations, be sure that the proper initialization parameters are set. If you are multiplexing redo logs in three locations, be sure to set the parameters DB_CREATE_ONLINE_LOG_DEST_1 through DB_CREATE_ONLINE_LOG_DEST_3.
Oracle Objective
Manage online redo log files with Oracle Managed Files (OMF)
To add a new log file group, use the following command:
ALTER DATABASE ADD LOGFILE;
The filenames for the three new operating system files are generated automatically. Be sure to set each DB_CREATE_ONLINE_LOG_DEST_n parameter to a path on a different physical volume.
Archiving Log Files
You know that online redo log files record all changes to the database. Oracle lets you copy these log files to a different location or to an offline storage medium; the process of copying is called archiving, and the archiver process (ARCn) does this archiving. By archiving the redo log files, you can use them later to recover a database, update a standby database, or audit database activity with the LogMiner utility. When an online redo log file is full and LGWR starts writing to the next redo log file, ARCn copies the completed redo log file to the archive destination. It is possible to specify more than one archive destination. The LGWR process waits for the ARCn process to complete the copy operation before overwriting any online redo log file. When the archiver process is copying the redo log files to another destination, the database is said to be in ARCHIVELOG mode. If archiving is not enabled, the database is said to be in NOARCHIVELOG mode. For production systems, in which you cannot afford to lose data, you must run the database in ARCHIVELOG mode so that in the event of a failure, you can recover the database to the time of failure or to a point in time. You achieve this ability to recover by restoring the database backup and applying the database changes from the archived log files.
Setting the Archive Destination
You specify the archive destination in the initialization parameter file. To change the archive destination parameters during normal database operation, you use the ALTER SYSTEM command. The following parameters are associated with archive log destinations and the archiver process:
LOG_ARCHIVE_DEST Specifies the destination to write the archive log files. This location should be a valid directory on the server where the database is located. You can change the archiving location specified by this parameter by using
ALTER SYSTEM SET LOG_ARCHIVE_DEST = '<directory>';
LOG_ARCHIVE_DUPLEX_DEST Specifies a second destination to write the archive log files. This destination must be a location on the server where the database is located. This destination can be either a must-succeed or a best-effort archive destination, depending on how many archive destinations must succeed. You specify the minimum number of archive destinations that must succeed in the parameter LOG_ARCHIVE_MIN_SUCCEED_DEST. You can change the archiving location specified by this parameter by using
ALTER SYSTEM SET LOG_ARCHIVE_DUPLEX_DEST = '<directory>';
LOG_ARCHIVE_DEST_n Using this parameter, you can specify as many as five archiving destinations. These archive locations can be either on the local machine or on a remote machine where the standby database is located. When these parameters are used, you cannot use the LOG_ARCHIVE_DEST or LOG_ARCHIVE_DUPLEX_DEST parameters to specify the archiving location. The syntax for specifying this parameter in the initialization file is as follows:
LOG_ARCHIVE_DEST_n =
   "null_string" |
   ((SERVICE = <service_name> | LOCATION = '<directory>')
    [MANDATORY | OPTIONAL]
    [REOPEN [= <seconds>]])
For example, LOG_ARCHIVE_DEST_1 = ((LOCATION='/archive/MYDB01') MANDATORY REOPEN = 60) specifies a location for the archive log files on the local machine at /archive/MYDB01. The MANDATORY clause specifies that writing to this location must succeed. The REOPEN clause specifies when the next attempt to write to this location should be made after a failed attempt. The default value is 300 seconds.
Here is another example, which applies the archive logs to a standby database on a remote computer.
LOG_ARCHIVE_DEST_2 = (SERVICE=STDBY01) OPTIONAL REOPEN;
Here STDBY01 is the Oracle Net connect string used to connect to the remote database. Since writing is optional, database activity continues even if ARCn cannot write the archive log file. It tries the write operation again, since the REOPEN clause is specified.
LOG_ARCHIVE_MIN_SUCCEED_DEST Specifies the minimum number of destinations the ARCn process must write successfully before it can proceed with overwriting the online redo log files. The default value of this parameter is 1. If you are using the LOG_ARCHIVE_DEST and LOG_ARCHIVE_DUPLEX_DEST parameters, setting this value to 1 makes LOG_ARCHIVE_DEST mandatory and LOG_ARCHIVE_DUPLEX_DEST optional. If you set the parameter to 2, writing to both destinations must be successful. If you are using the LOG_ARCHIVE_DEST_n parameter, the LOG_ARCHIVE_MIN_SUCCEED_DEST parameter cannot exceed the total number of enabled destinations. If this parameter value is less than the number of MANDATORY destinations, the parameter is ignored.
LOG_ARCHIVE_FORMAT Specifies the format of the archived redo log file names. You can provide a text string and any of the predefined variables. The variables are as follows:
%s Log sequence number
%S Log sequence number, zero filled
%t Thread number
%T Thread number, zero filled
For example, specifying LOG_ARCHIVE_FORMAT = 'arch_%t_%s' generates archive log file names such as arch_1_101, arch_1_102, arch_1_103, and so on; 1 is the thread number, and 101, 102, and 103 are log sequence numbers. Specifying the format as arch_%S generates filenames such as arch_00000101, arch_00000102, and so on; the number of leading zeros depends on the operating system.
LOG_ARCHIVE_MAX_PROCESSES Specifies the maximum number of ARCn processes Oracle starts when starting up the database. By default, the value is 1.
LOG_ARCHIVE_START Specifies whether Oracle should enable automatic archiving. If this parameter is set to FALSE, none of the ARCn processes are started. You can override this parameter by using the command ARCHIVE LOG START or ARCHIVE LOG STOP.
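Taken together, a minimal parameter-file sketch for automatic archiving might look like the following; the directory path and service name reuse this section's examples and are illustrative, not required values:

# init.ora fragment (illustrative values)
LOG_ARCHIVE_START = TRUE
LOG_ARCHIVE_DEST_1 = ((LOCATION='/archive/MYDB01') MANDATORY REOPEN = 60)
LOG_ARCHIVE_DEST_2 = ((SERVICE=STDBY01) OPTIONAL REOPEN)
LOG_ARCHIVE_MIN_SUCCEED_DEST = 1
LOG_ARCHIVE_FORMAT = 'arch_%t_%s'
LOG_ARCHIVE_MAX_PROCESSES = 2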
Setting ARCHIVELOG Specifying these parameters does not start writing the archive log files; you must place the database in ARCHIVELOG mode to enable archiving of the redo log files. You can specify the ARCHIVELOG clause while creating the database. However, most DBAs prefer to create the database first and then enable ARCHIVELOG mode. To enable ARCHIVELOG mode, follow these steps (a sample session appears after these lists):
1. Shut down the database. Set up the appropriate initialization parameters.
2. Start up and mount the database.
3. Enable ARCHIVELOG mode by using the command ALTER DATABASE ARCHIVELOG.
4. Open the database by using ALTER DATABASE OPEN.
To disable ARCHIVELOG mode, follow these steps:
1. Shut down the database.
2. Start up and mount the database.
3. Disable ARCHIVELOG mode by using the command ALTER DATABASE NOARCHIVELOG.
4. Open the database by using ALTER DATABASE OPEN.
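Here is the enabling sequence as a minimal SQL*Plus session sketch, assuming the archive parameters shown earlier are already set in the parameter file:

SQL> SHUTDOWN IMMEDIATE
SQL> STARTUP MOUNT
SQL> ALTER DATABASE ARCHIVELOG;
SQL> ALTER DATABASE OPEN;

The disabling sequence is identical except that step 3 uses ALTER DATABASE NOARCHIVELOG.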
You can enable automatic archiving by setting the parameter LOG_ARCHIVE_START = TRUE. If you set the parameter to FALSE, Oracle does not start the ARCn processes; therefore, when the redo log files are full, the database hangs, waiting for the redo log files to be archived. You can start the automatic archive processes by using the command ALTER SYSTEM ARCHIVE LOG START, which starts the ARCn processes; to manually archive all unarchived logs, use the command ALTER SYSTEM ARCHIVE LOG ALL.
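For example, if the database was started with LOG_ARCHIVE_START = FALSE and has hung on full redo logs, you could clear the backlog and then enable automatic archiving; a minimal sketch:

SQL> ALTER SYSTEM ARCHIVE LOG ALL;
SQL> ALTER SYSTEM ARCHIVE LOG START;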
Querying Log and Archive Information You can query the redo log file information by using the SQL command ARCHIVE LOG LIST or by querying the dynamic performance views. The ARCHIVE LOG LIST command shows whether the database is in ARCHIVELOG mode, whether automatic archiving is enabled, the archival destination, and the oldest, next, and current log sequence numbers.
SQL> archive log list
Database log mode              Archive Mode
Automatic archival             Enabled
Archive destination            C:\Oracle\oradata\ORADB02\archive
Oldest online log sequence     194
Next log sequence to archive   196
Current log sequence           196
SQL>
The view V$DATABASE shows whether the database is in ARCHIVELOG mode or in NOARCHIVELOG mode.
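A minimal sketch of that check, querying the LOG_MODE column of V$DATABASE; the output shown is illustrative, for the ORADB02 database used above:

SQL> SELECT NAME, LOG_MODE FROM V$DATABASE;

NAME      LOG_MODE
--------- ------------
ORADB02   ARCHIVELOG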
V$LOG This dynamic performance view contains information about the log file groups, their sizes, and their status. The valid status codes in this view and their meanings are as follows:
UNUSED New log group, never used.
CURRENT Current log group.
ACTIVE Log group that may be required for instance recovery.
CLEARING You issued an ALTER DATABASE CLEAR LOGFILE command.
CLEARING_CURRENT Empty log file after issuing the ALTER DATABASE CLEAR LOGFILE command.
INACTIVE The log group is not needed for instance recovery.
Here is a query from V$LOG:
SQL> SELECT * FROM V$LOG;

    GROUP#    THREAD#  SEQUENCE#      BYTES    MEMBERS
---------- ---------- ---------- ---------- ----------
ARCHIVED STATUS           FIRST_CHANGE# FIRST_TIM
-------- ---------------- ------------- ---------
         1          1        196    1048576          2
NO       CURRENT                  56686 30-JUL-01
         2          1        194    1048576          2
YES      INACTIVE                 36658 28-JUL-01
         3          1        195    1048576          2
YES      INACTIVE                 36684 28-JUL-01
SQL>
V$LOGFILE The V$LOGFILE view has information about the log group members. The filenames and group numbers are in this view. The STATUS column can have the value INVALID (file is not accessible), STALE (file's contents are incomplete), DELETED (file is no longer used), or blank (file is in use).
SQL> SELECT * FROM V$LOGFILE
  2  ORDER BY GROUP#;
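Because V$LOGFILE lists the members and V$LOG holds the group status, joining the two views on GROUP# shows which member files LGWR is currently writing; a minimal sketch:

SQL> SELECT f.GROUP#, f.MEMBER
  2  FROM V$LOGFILE f, V$LOG l
  3  WHERE f.GROUP# = l.GROUP#
  4  AND l.STATUS = 'CURRENT';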
V$THREAD This view shows information about the threads in the database. A single-instance database will have only one thread. This view shows the instance name, thread status, SCN status, log sequence numbers, timestamp of checkpoint, and so on.
SQL> SELECT THREAD#, GROUPS, CURRENT_GROUP#, SEQUENCE#
  2  FROM V$THREAD;

   THREAD#     GROUPS CURRENT_GROUP#  SEQUENCE#
---------- ---------- -------------- ----------
         1          3              1        199
SQL>
V$LOG_HISTORY This view contains the history of the log information. It has the log sequence number, first and highest SCN for each log change, control file ID, and so on.
SQL> SELECT SEQUENCE#, FIRST_CHANGE#, NEXT_CHANGE#,
  2         TO_CHAR(FIRST_TIME,'DD-MM-YY HH24:MI:SS') TIME
  3  FROM V$LOG_HISTORY
  4  WHERE SEQUENCE# BETWEEN 50 AND 53;

 SEQUENCE# FIRST_CHANGE# NEXT_CHANGE# TIME
---------- ------------- ------------ -----------------
        50         22622        22709 28-07-01 19:15:22
        51         22709        23464 28-07-01 19:15:26
        52         23464        23598 28-07-01 19:15:33
        53         23598        23685 28-07-01 19:15:39
SQL>
V$ARCHIVED_LOG This view displays archive log information, including archive filenames, size of the file, redo log block size, SCNs, timestamp, and so on.
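A minimal sketch of querying V$ARCHIVED_LOG for recent archives; the NAME, SEQUENCE#, and COMPLETION_TIME columns are part of this view, and the sequence range is illustrative:

SQL> SELECT NAME, SEQUENCE#, COMPLETION_TIME
  2  FROM V$ARCHIVED_LOG
  3  WHERE SEQUENCE# > 190
  4  ORDER BY SEQUENCE#;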
V$ARCHIVE_DEST This view has information about the five archive destinations, their status, any failures, and so on. The STATUS column can have six values: INACTIVE (not initialized), VALID (initialized), DEFERRED (manually disabled by the DBA), ERROR (error during copy), DISABLED (disabled after error), and BAD PARAM (bad parameter value specified). The BINDING column shows whether the target is OPTIONAL or MANDATORY, and the TARGET column indicates whether the copy is to a PRIMARY or STANDBY database.
SQL> SELECT DESTINATION, BINDING, TARGET, REOPEN_SECS
  2  FROM V$ARCHIVE_DEST;
V$ARCHIVE_PROCESSES This view displays information about the state of the 10 archive processes (ARCn). The LOG_SEQUENCE is available only if the STATE is BUSY.
SQL> SELECT * FROM V$ARCHIVE_PROCESSES;

   PROCESS STATUS     LOG_SEQUENCE STATE
---------- ---------- ------------ -----
         0 ACTIVE                0 IDLE
         1 STOPPED               0 IDLE
         2 STOPPED               0 IDLE
         3 STOPPED               0 IDLE
         4 STOPPED               0 IDLE
         5 STOPPED               0 IDLE
         6 STOPPED               0 IDLE
         7 STOPPED               0 IDLE
         8 STOPPED               0 IDLE
         9 STOPPED               0 IDLE

10 rows selected.
SQL>
Summary
In this chapter, we discussed two important components of the Oracle database—the control file and the redo log files. The control file records information about the physical structure of the database along with the database name, tablespace names, log sequence number, checkpoint, and RMAN information. The size of the control file depends on the five MAX clauses you specify at the time of database creation: MAXLOGFILES, MAXLOGMEMBERS, MAXLOGHISTORY, MAXDATAFILES, and MAXINSTANCES. Oracle provides a mechanism to multiplex the control file. The information is concurrently updated to all the control files. The parameter CONTROL_FILES in the parameter file specifies the control files at the time of database creation and afterward at database start-up. You can re-create the control files by specifying all the redo log files and data files. The V$CONTROLFILE view provides the names of the control files.
Redo log files record all changes to the database. The LGWR process writes the redo log buffer information from the SGA to the redo log files. The redo log file is treated as a group. The group can have more than one member. If more than one member is present in a group, the group is known as a multiplexed group, and the LGWR process writes to all the members of the group at the same time. Even if you lose one member, LGWR continues writing with the remaining members. You can use OMF to simplify the creation and maintenance of online redo log files. The LGWR process writes to only one redo log file group at any time. The file that is actively being written to is known as the current log file. The log files that are required for instance recovery are known as active log files. The other log files are known as inactive. A log switch occurs when Oracle finishes writing one file and starts the next file. A log switch always occurs when the current redo log file is completely full and writing must continue. A checkpoint is an event that flushes the modified data from the buffer cache to the disk and updates the control file and data files. The checkpoint process (CKPT) updates the headers of data files and control files; the DBWn process writes the actual blocks to the file. You can manually initiate a log switch, which also triggers a checkpoint, by using the ALTER SYSTEM SWITCH LOGFILE command. By saving the redo log files to a different (or offline storage) location, you can recover a database or audit the redo log files. The ARCn process does the archiving when the database is in ARCHIVELOG mode. You specify the archive destination in the initialization parameter file. The dictionary views V$LOG and V$LOGFILE provide information about the redo log files.
Understand the structure of the redo log files. Be able to clearly differentiate log file groups and log file group members. Describe how these logs are filled and re-used.
Describe the purpose and operation of the LGWR and CKPT processes. Enumerate the conditions under which LGWR is active, and describe how the CKPT process is dependent on the LGWR process.
List the primary maintenance operations for redo log groups. Be able to add and delete log file groups and group members, using both the command-line and GUI interfaces. Identify the conditions under which log files are cleared, and describe how to accomplish this task in various scenarios. Be able to use OMF to maintain the redo log files.
Become familiar with the dynamic performance views that contain log file information. Use the V$LOG and V$LOGFILE views to extract status and error information about log file groups and individual members.
Describe the basic differences between operating a database in ARCHIVELOG mode and operating it in NOARCHIVELOG mode. Identify the initialization parameters and commands that control the archive process. Briefly describe how archive log information is recorded in the control file.
Key Terms
Before you take the exam, be sure you’re familiar with the following terms: ARCHIVELOG
Review Questions 1. Which method is best for renaming a control file? A. Use the ALTER DATABASE RENAME FILE command. B. Shut down the database, rename the control file by using an
operating system command, and restart the database after changing the CONTROL_FILES parameter in the initialization parameter file. C. Put the database in RESTRICTED mode and issue the
ALTER DATABASE RENAME FILE command. D. Shut down the database, change the CONTROL_FILES parameter,
and start up the database. E. Re-create the control file using the new name. 2. Which piece of information is not available in the control file? A. Instance name B. Database name C. Tablespace names D. Log sequence number 3. When you create a control file, the database has to be: A. Mounted B. Not mounted C. Open D. Restricted 4. Which data dictionary view provides the names of the control files? A. V$DATABASE B. V$INSTANCE C. V$CONTROLFILESTATUS D. None of the above
9. What will happen if ARCn could not write to a mandatory archive
destination? A. The database will hang. B. The instance will shut down. C. ARCn starts writing to LOG_ARCHIVE_DUPLEX_DEST if it is
specified. D. Oracle stops writing the archived log files. 10. How many ARCn processes can be associated with an instance? A. Five B. Four C. Ten D. Operating system dependent 11. Which of the following is an invalid status code in the V$LOGFILE view? A. STALE B. Blank C. ACTIVE D. INVALID 12. If you have two redo log groups with four members each, how many
disks does Oracle recommend to keep the redo log files? A. Eight B. Two C. One D. Four
13. What will happen if you issue the following command?
ALTER DATABASE ADD LOGFILE ('/logs/file1' REUSE, '/logs/file2' REUSE); A. Statement will fail, because the group number is missing B. Statement will fail, because log file size is missing C. Creates a new redo log group, with two members D. Adds two members to the current redo log group 14. Which two parameters cannot be used together to specify the archive
destination? A. LOG_ARCHIVE_DEST and LOG_ARCHIVE_DUPLEX_DEST B. LOG_ARCHIVE_DEST and LOG_ARCHIVE_DEST_1 C. LOG_ARCHIVE_DEST_1 and LOG_ARCHIVE_DEST_2 D. None of the above; you can specify all the archive destination
parameters with valid destination names. 15. Which of the following statements is NOT true regarding the use of
OMF for redo logs? A. Dropping log files with OMF automatically drops the related
operating system file. B. OMF manages archived redo log files using the initialization
parameter DB_CREATE_ARCHIVE_LOG_DEST_n. C. A new log file group can be added without specifying a filename in
the ALTER DATABASE statement. D. A log file group managed with OMF can be dropped by specifying the group number.
16. Querying which view will show whether automatic archiving is
enabled? A. V$ARCHIVE_LOG B. V$DATABASE C. V$PARAMETER D. V$LOG 17. If you need to have your archive log files named with the log sequence
numbers as arch_0000001, arch_0000002, and so on (zero filled, fixed width), what should be the value of the LOG_ARCHIVE_FORMAT parameter? A. arch_%S B. arch_%s C. arch_000000%s D. arch_%0%s 18. Following are the steps needed to rename a redo log file. Order them
in the proper sequence. A. Use an operating system command to rename the redo log file. B. Shut down the database. C. ALTER DATABASE RENAME FILE 'oldfile' TO 'newfile' D. STARTUP MOUNT E. ALTER DATABASE OPEN F. Back up the control file.
19. Which of the following commands is a key step in multiplexing control files using an SPFILE? A. ALTER SYSTEM SET CONTROL_FILES=
'/u01/oradata/PRD/cntrl01.ctl', '/u01/oradata/PRD/cntrl02.ctl' SCOPE=SPFILE; B. ALTER SYSTEM SET CONTROL_FILES= '/u01/oradata/PRD/cntrl01.ctl', '/u01/oradata/PRD/cntrl02.ctl' SCOPE=MEMORY; C. ALTER SYSTEM SET CONTROL_FILES= '/u01/oradata/PRD/cntrl01.ctl', '/u01/oradata/PRD/cntrl02.ctl' SCOPE=BOTH; D. The number of control files is fixed when the database is created. 20. Which statement will add a member /logs/redo22.log to log file group 2? A. ALTER DATABASE ADD LOGFILE '/logs/redo22.log' TO GROUP 2; B. ALTER DATABASE ADD LOGFILE MEMBER '/logs/redo22.log' TO GROUP 2; C. ALTER DATABASE ADD MEMBER '/logs/redo22.log' TO GROUP 2; D. ALTER DATABASE ADD LOGFILE '/logs/redo22.log';
Answers to Review Questions 1. B. To rename (or multiplex, or drop) a control file, you shut down
the database, rename (or copy, or delete) the control file by using operating system commands, change the parameter CONTROL_FILES in the initialization parameter file, and start up the database. 2. A. The instance name is not in the control file. The control file has
information about the physical database structure. 3. B. The database should be in the NOMOUNT state to create a control file.
When you mount the database, Oracle tries to open the control file to read the physical database structure. 4. D. The V$CONTROLFILE view shows the names of the control files in
the database. 5. D. Oracle will automatically adjust other parameters (buffer sizes,
intervals, and so on) to ensure that instance recovery will not exceed a specified number of seconds. 6. C. The V$DATABASE view shows whether the database is in
ARCHIVELOG mode or in NOARCHIVELOG mode. 7. B. Having the control files on different disks ensures that even if you
lose one disk, you lose only one control file. If you lose one of the control files, you can shut down the database, copy a control file, or change the CONTROL_FILES parameter and restart the database. 8. B. The redo log file records all changes made to the database. The
LGWR process writes the redo log buffer entries to the redo log files. These entries are used to roll forward, or to update, the data files during an instance recovery. Archive log files are used for media recovery. 9. A. Oracle will write a message to the alert file, and all database operations will be stopped. Database operation resumes automatically after successfully writing the archived log file. If the archive destination becomes full, you can make room for archives either by deleting the archive log files after copying them to a different location or by changing the parameter to point to a different archive location.
10. C. You can have a maximum of ten archiver processes. 11. C. The STATUS column in V$LOGFILE can have the values INVALID
(file is not accessible), STALE (file’s contents are incomplete), DELETED (file is no longer used), or blank (file is in use). 12. D. Oracle recommends that you keep each member of a redo log
group on a different disk. You should have a minimum of two redo log groups, and it is recommended that you have two members in each group. The maximum number of redo log groups is determined by the MAXLOGFILES database parameter. The MAXLOGMEMBERS database parameter specifies the maximum number of members per group. 13. C. The statement creates a new redo log group with two members.
When you specify the GROUP option, you must use an integer value. Oracle will automatically generate a group number if the GROUP option is not specified. Use the SIZE option if you are creating a new file. Use the REUSE option if the file already exists. 14. B. When using a LOG_ARCHIVE_DEST_n parameter, you cannot use
the LOG_ARCHIVE_DEST or LOG_ARCHIVE_DUPLEX_DEST parameters to specify other archive locations. Using a LOG_ARCHIVE_DEST_n parameter, you can specify as many as five archiving locations. 15. B. You cannot manage archived redo logs with OMF. 16. C. You enable automatic archiving by setting the initialization
parameter LOG_ARCHIVE_START = TRUE. All the parameter values can be queried using the V$PARAMETER view. The ARCHIVE LOG LIST command will also show whether automatic archiving is enabled. 17. A. Four formatting variables are available to use with archive log file
names: %s specifies the log sequence number; %S specifies the log sequence number, leading zero filled; %t specifies the thread; and %T specifies the thread, leading zero filled.
18. B, A, D, C, E, and F. The correct order is: 1. Shut down the database. 2. Use an operating system command to rename the redo log file. 3. STARTUP MOUNT 4. ALTER DATABASE RENAME FILE 'oldfile' TO 'newfile' 5. ALTER DATABASE OPEN 6. Back up the control file. 19. A. The location of the new control files is not valid until an operating
system copy is made of the current control file to the new location(s), and the instance is restarted. The SCOPE=SPFILE option specifies that the parameter change will not take place until a restart. Specifying either MEMORY or BOTH will cause an error, since the new control file does not exist yet. 20. B. When adding log file members, specify the group number or specify
all the existing group members. Option D would create a new group with one member.
Logical and Physical Database Structures
ORACLE9i DBA FUNDAMENTALS I EXAM OBJECTIVES OFFERED IN THIS CHAPTER:
Describe the logical structure of tablespaces within the database
Create tablespaces
Change the size of the tablespace
Allocate space for temporary segments
Change the status of tablespaces
Change the storage settings of tablespaces
Implement Oracle Managed Files
Exam objectives are subject to change at any time without prior notice and at Oracle’s sole discretion. Please visit Oracle’s Training and Certification website (http://www.oracle.com/ education/certification/) for the most current exam objectives listing.
This chapter covers the physical and logical data storage. Chapter 2 briefly discussed the physical and logical structures, and Chapter 5 discussed two of the three components of the physical database structure—control files and redo log files. The third component of the physical structure is data files. Data files belong to logical units called tablespaces. In this chapter, you will learn to manage data files and tablespaces.
Tablespaces and Data Files
The database's data is stored logically in tablespaces and physically in the data files corresponding to the tablespaces. The logical storage management is independent of the physical storage of the data files. A tablespace can have more than one data file associated with it. One data file belongs to only one tablespace. A database can have one or more tablespaces. Figure 6.1 shows the relationship between the database, tablespaces, data files, and the objects in the database. Any object (such as a non-partitioned table, an index, and so on) created in the database is stored in a single tablespace. But the object's physical storage can be on multiple data files belonging to that tablespace. A segment cannot be stored in multiple tablespaces.
Oracle Exam Objective
Describe the logical structure of tablespaces within the database
The size of the tablespace is the total size of all the data files belonging to that tablespace. The size of the database is the total size of all tablespaces in the database, which is the total size of all data files in the database. The smallest logical unit of storage in a database is a database block. You define the size of the block when you create the database, and you cannot alter it. The database block size is a multiple of the operating system block size. Changing the size of the data files belonging to a tablespace can change the size of that tablespace. You can add more space to a tablespace by adding more data files to the tablespace. You can add more space to the database by adding more tablespaces, by adding more data files to the existing tablespaces, or by increasing the size of the existing data files. When you create a database, Oracle creates the SYSTEM tablespace. All the dictionary objects are stored in this tablespace. The data files you specify when you create the database are assigned to the SYSTEM tablespace. You can add more space to the SYSTEM tablespace after you create the database by adding more data files or by increasing the size of the data files. The PL/SQL program units (such as procedures, functions, packages, or triggers) created in the database are also stored in the SYSTEM tablespace.
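Since a tablespace's size is the sum of its data file sizes, you can compute tablespace sizes from the data dictionary; a minimal sketch using the DBA_DATA_FILES view (described later in this chapter):

SQL> SELECT TABLESPACE_NAME, SUM(BYTES)/1024/1024 SIZE_MB
  2  FROM DBA_DATA_FILES
  3  GROUP BY TABLESPACE_NAME;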
Oracle recommends not creating any objects other than the Oracle data dictionary in the SYSTEM tablespace. By having multiple tablespaces, you can do the following:
Separate the Oracle dictionary from other database objects. Doing so reduces contention between dictionary objects and database objects for the same data file.
Control I/O by allocating separate physical storage disks for different tablespaces.
Manage space quotas for users on tablespaces.
Have separate tablespaces for temporary segments (TEMP) and undo management (Rollback segments). You can also create a tablespace for a specific activity—for example, you can place high-update tables in a separate tablespace. When creating the database, you can specify tablespace names for temporary tablespace and undo tablespace.
Group application-related or module-related data together, so that when maintenance is required for the application’s tablespace, only that tablespace need be taken offline, and the rest of the database is available for users.
Back up the database one tablespace at a time.
Make part of the database read-only.
When you create a tablespace, Oracle creates the data files with the size specified. The space reserved for the data file is formatted but does not contain any user data. Whenever spaces for objects are needed, extents are allocated from this free space.
Managing Tablespaces
When Oracle allocates space to an object in a tablespace, it is allocated in chunks of contiguous database blocks known as extents. Each object is allocated a segment, which has one or more extents.
If the object is partitioned, each partition will have a segment allocated. Partitions are discussed in Chapter 8, “Managing Tables, Indexes, and Constraints.”
Oracle maintains the extent information such as extents free, extent size, extents allocated, and so on either in the data dictionary or in the tablespace itself. If you store the extent management information in the dictionary for a tablespace, that tablespace is called a dictionary-managed tablespace. Whenever an extent is allocated or freed, the information is updated in the corresponding dictionary tables. Such updates also generate undo information. If you store the management information in a tablespace itself, by using bitmaps in each data file, such a tablespace is known as a locally managed tablespace. Each bit in the bitmap corresponds to a block or a group of blocks. When an extent is allocated or freed for reuse, Oracle changes the bitmap values to show the new status of the blocks. These changes do not generate rollback information because they do not update tables in the data dictionary. To create the tablespace, you use the CREATE TABLESPACE statement. You can modify the characteristics of the tablespace using the ALTER TABLESPACE statement. In the following sections, we’ll create, modify, and drop a tablespace, and we’ll query the tablespace information from the data dictionary.
Creating a Tablespace As the database grows bigger, managing database objects is easier if you have multiple tablespaces. Using the CREATE TABLESPACE statement creates a tablespace. In Oracle9i, the only mandatory clause in the CREATE TABLESPACE statement is the tablespace name. For example, when you specify the statement CREATE TABLESPACE APPLICATION_DATA, Oracle9i creates a locally managed tablespace with system allocated extent sizes. The data file for this tablespace is created at the location you specify in the DB_CREATE_FILE_DEST parameter, and its size is 100MB. The file is
auto extensible with no maximum size and has a name similar to ora_applicat_zyykpt00.dbf. In the following sections, we will discuss the default values for a CREATE TABLESPACE statement and the various types of tablespaces.
Oracle Exam Objective
Create tablespaces
The tablespace name cannot exceed 30 characters. The name should begin with an alphabetic character and can contain alphabetic characters, numeric characters, and the special characters #, _, and $.
Optionally, you can specify file names, file sizes, and default storage parameters when creating tablespaces. The default storage parameters are used whenever a new object is created (whenever a new segment is allocated) in the tablespace. The storage parameters you specify when you create an object override the default storage parameters of the tablespace containing the object. The default storage parameters for the tablespace are used only when you create an object without specifying any storage parameters. In Oracle9i, you can create tablespaces without specifying a file name by setting the parameter DB_CREATE_FILE_DEST to a valid directory on the server where you want to create the file. Files created in such manner are called Oracle Managed Files (OMF).
We will discuss OMF in the “Managing Data Files” section later in this chapter.
You can specify the extent management clause when creating a tablespace. If you do not specify the extent management clause, Oracle creates a locally managed tablespace. You can have both dictionary-managed and locally managed tablespaces in the same database. A temporary tablespace can be either dictionary-managed or locally managed.
Dictionary-Managed Tablespaces In dictionary-managed tablespaces, all extent information is stored in the data dictionary. A simple example of a dictionary-managed tablespace creation command is as follows:
CREATE TABLESPACE APPL_DATA
DATAFILE '/disk3/oradata/DB01/appl_data01.dbf' SIZE 100M
EXTENT MANAGEMENT DICTIONARY;
This statement creates a tablespace named APPL_DATA; the data file specified is created with a size of 100MB. You can specify more than one file under the DATAFILE clause, separated by commas; you may need to create more files if there are any operating system limits on the file size. For example, if you need to allocate 6GB for the tablespace, and the operating system allows only a 2GB maximum, you need three data files for the tablespace. The statement is as follows:
CREATE TABLESPACE APPL_DATA
DATAFILE '/disk3/oradata/DB01/appl_data01.dbf' SIZE 2000M,
'/disk3/oradata/DB01/appl_data02.dbf' SIZE 2000M,
'/disk4/oradata/DB01/appl_data03.dbf' SIZE 2000M
EXTENT MANAGEMENT DICTIONARY;
The options available when creating and reusing a data file are discussed in the “Managing Data Files” section later in this chapter.
The following statement creates a tablespace using all optional clauses. CREATE TABLESPACE APPL_DATA DATAFILE '/disk3/oradata/DB-1/appl_data01.dbf' SIZE 100M DEFAULT STORAGE ( INITIAL 256K NEXT 256K MINEXTENTS 2 PCTINCREASE 0 MAXEXTENTS 4096) BLOCKSIZE 4K MINIMUM EXTENT 256K
LOGGING ONLINE PERMANENT EXTENT MANAGEMENT DICTIONARY SEGMENT SPACE MANAGEMENT MANUAL; The clauses in the CREATE TABLESPACE command specify the following: DEFAULT STORAGE Specifies the default storage parameters for new objects that are created in the tablespace. If you specify an explicit storage clause when creating an object, the tablespace defaults are not used for the specified storage parameters. You specify storage parameters within parentheses; no parameter is mandatory, but if you specify the DEFAULT STORAGE clause, you must specify at least one parameter inside the parentheses. BLOCKSIZE Specifies the block size that is used for the objects created in the tablespace. By default, this block size is the database block size, which you define using the DB_BLOCK_SIZE parameter when creating the database. In Oracle9i, a database can have multiple block sizes. The database block size specified by DB_BLOCK_SIZE parameter is used for the SYSTEM tablespace and is known as the standard block size. The valid sizes of non-standard block size are 2KB, 4KB, 8KB, 16KB, and 32KB. If you do not specify a block size for the tablespace, the database block size is assumed. Multiple block sizes in the database are beneficial for large databases with OLTP (Online Transaction Processing) and DSS (Decision Support System) data stored together or for storing large tables. In the “Using Non-standard Block Sizes” section later in this chapter, we’ll discuss the restrictions on specifying non-standard block sizes when you create the tablespace. INITIAL Specifies the size of the object’s (segment’s) first extent. NEXT specifies the size of the segment’s next and successive extents. The size is specified in bytes. You can also specify the size in KB by post-fixing the size with K, or you can specify MB by post-fixing the size with M. The default value of INITIAL and NEXT is 5 database blocks. The minimum value of INITIAL is 3 database blocks for locally managed tablespaces (for manual segment space management, it is 2 blocks plus 1 block for each free list group in the segment) and 2 blocks for dictionary-managed tablespaces; NEXT is 1 database block. Even if you specify sizes smaller
than these values, Oracle allocates the minimum sizes when creating segments in the tablespace. PCTINCREASE Specifies how much the third and subsequent extents grow over the preceding extent. The default value is 50, meaning that each subsequent extent is 50 percent larger than the preceding extent. The minimum value is 0, meaning all extents after the first are the same size. For example, if you specify storage parameters as (INITIAL 1M NEXT 2M PCTINCREASE 0), the extent sizes are 1MB, 2MB, 2MB, 2MB, and so on. If you specify the PCTINCREASE as 50, the extent sizes are 1MB, 2MB, 3MB, 4.5MB, 6.75MB, and so on. The actual NEXT extent size is rounded to a multiple of the block size. MINEXTENTS Specifies the total number of extents allocated to the segment at the time of creation. Using this parameter, you can allocate a large amount of space when you create an object, even if the space available is not contiguous. The default and minimum value is 1. When you specify MINEXTENTS as more than 1, the extent sizes are calculated based on NEXT and PCTINCREASE. MAXEXTENTS specifies the maximum number of extents that can be allocated to a segment. You can specify an integer or UNLIMITED. The minimum value is 1, and the default value depends on the database block size. MINIMUM EXTENT Specifies that the extent sizes are a multiple of the size specified. You can use this clause to control fragmentation in the tablespace by allocating extents of at least the size specified and as always a multiple of the size specified. In the CREATE TABLESPACE example earlier in this chapter, all the extents allocated in the tablespace are a multiple of 256KB. The INITIAL and NEXT extent sizes you specify should be a multiple of MINIMUM EXTENT. LOGGING Specifies that the DDL operations and direct-load INSERT are recorded in the redo log files. LOGGING is the default, and you can omit the clause. When you specify NOLOGGING, data is modified with minimal logging and hence the commands complete faster. Since the changes are not recorded in the redo log files, you need to apply the commands again if you have to recover media. Specifying LOGGING or NOLOGGING in the individual object creation statement overrides the tablespace default. ONLINE Specifies that the tablespace be created online or available as soon as it is created. ONLINE is the default, and hence you can omit the
clause. If you do not want the tablespace to be available, you can specify OFFLINE. PERMANENT Specifies whether the tablespace is to be used to create permanent objects such as tables, indexes, and so on. PERMANENT is the default, and hence you can omit it. If you plan to use the tablespace for temporary segments (such as to handle sorts in SQL), you can mark the tablespace as TEMPORARY. You cannot create permanent objects such as tables or indexes in a TEMPORARY tablespace. We'll discuss temporary tablespaces later in this chapter. EXTENT MANAGEMENT Until Oracle9i, dictionary-managed tablespaces were the default. That is, if you did not specify the EXTENT MANAGEMENT clause, Oracle created a dictionary-managed tablespace. In Oracle9i, to create a dictionary-managed tablespace, you need to explicitly specify the EXTENT MANAGEMENT DICTIONARY clause. If you omit this clause, Oracle creates the tablespace as locally managed. SEGMENT SPACE MANAGEMENT This clause is applicable only to locally managed tablespaces. The valid values are MANUAL and AUTO. MANUAL is the default. If you specify AUTO, Oracle manages the free space in the segments using bitmaps rather than free lists. For AUTO, Oracle ignores the storage parameters PCTUSED, FREELISTS, and FREELIST GROUPS when creating objects. Using Non-standard Block Sizes When creating the database, you specify the block size in the initialization parameter file using the DB_BLOCK_SIZE parameter. This specification is known as the standard block size for the database. You must choose a block size that suits most of your tables. In most databases, you will never need another block size. Oracle9i gives you the option of using multiple block sizes, which is especially useful when you're transporting tablespaces from another database with a different block size. When creating a tablespace with a non-standard block size, you must specify the BLOCKSIZE clause in the CREATE TABLESPACE statement. You cannot alter this block size. The DB_CACHE_SIZE parameter defines the buffer cache size associated with the standard block size. To create tablespaces with non-standard block size, you must set the appropriate initialization parameter to define a buffer cache size for the block size. The initialization parameter is
DB_nK_CACHE_SIZE; n is the non-standard block size. It can have the values 2, 4, 8, 16, or 32, but cannot have the size of the standard block size. For example, if your standard block size is 8KB, you cannot set the parameter DB_8K_CACHE_SIZE. If you need to create a tablespace that uses a different block size, say 16KB, you must set the DB_16K_CACHE_SIZE parameter. By default, the value for DB_nK_CACHE_SIZE parameters is 0MB. Temporary tablespaces should have standard block size.
The DB_nK_CACHE_SIZE parameter is dynamic; you can alter its value by using the ALTER SYSTEM statement.
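For example, to prepare for a 16KB-block tablespace in a database whose standard block size is 8KB, you might size the 16KB buffer cache first and then create the tablespace. A minimal sketch; the tablespace name, file name, and sizes are illustrative assumptions:

-- define a buffer cache for the 16KB block size
ALTER SYSTEM SET DB_16K_CACHE_SIZE = 16M;

-- create a tablespace using the non-standard block size
CREATE TABLESPACE HIST_DATA
DATAFILE '/disk4/oradata/DB01/hist_data01.dbf' SIZE 200M
BLOCKSIZE 16K;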
Locally Managed Tablespace Using the CREATE TABLESPACE command with the EXTENT MANAGEMENT LOCAL clause creates a locally managed tablespace. Locally managed tablespaces manage space more efficiently, provide better methods to reduce fragmentation, and increase reliability. The extent allocation information is stored as bitmaps in the file headers, and hence it improves the speed of allocation and deallocation operations. You cannot specify the DEFAULT STORAGE, TEMPORARY, and MINIMUM EXTENT clauses of the CREATE TABLESPACE in a locally managed tablespace. You can specify that Oracle manage extents automatically by using the AUTOALLOCATE option. When using this option, you cannot specify sizes for the objects created in the tablespace. Oracle manages the extent sizes; you have no control over the extent sizes or the extent’s allocation and deallocation. Following is an example of creating a locally managed tablespace with Oracle managing the extent allocation (since EXTENT MANAGEMENT LOCAL AUTOALLOCATE is the default, it is omitted from the statement): CREATE TABLESPACE USER_DATA DATAFILE '/disk1/oradata/MYDB01/user_data01.dbf' SIZE 300M; You can specify that the tablespace be managed with uniform extents of a specific size by using the UNIFORM SIZE clause. All the extents will be created with the size you specify. You cannot specify extent sizes (STORAGE clause) when creating the tablespace. The following is an example of creating a locally managed tablespace with uniform extent sizes of 512KB.
CREATE TABLESPACE USER_DATA
DATAFILE '/disk1/oradata/MYDB01/user_data01.dbf' SIZE 300M
EXTENT MANAGEMENT LOCAL UNIFORM SIZE 512K;
If you set the DB_CREATE_FILE_DEST parameter to a valid directory on the server and execute the statement CREATE TABLESPACE MYTS, a new tablespace MYTS is created as locally managed with auto-allocated extent sizes, standard block size, and manual segment space management. You manage the free space in the tablespace's segments by using free lists. In Oracle9i, the database can manage the free space by using bitmaps. If you want a locally managed tablespace to have the segment's free space managed by Oracle, you must specify the SEGMENT SPACE MANAGEMENT AUTO clause in the CREATE TABLESPACE statement.
Free Space Management When creating locally managed permanent tablespaces, you can set the SEGMENT SPACE MANAGEMENT clause to AUTO or MANUAL. MANUAL is the default and is the only behavior available in pre-Oracle9i databases. In pre-Oracle9i databases, you managed the free space in the segments by using free lists. In Oracle9i, Oracle can manage the free space in blocks using bitmaps if you specify the SEGMENT SPACE MANAGEMENT AUTO clause while creating a tablespace. A bitmap that shows the status of the block is maintained—the status shows whether a block is available for insert. Thus, performance improves when multiple sessions are doing inserts into the same block. Space is also used effectively for objects with varying-size rows. When you set SEGMENT SPACE MANAGEMENT to AUTO, Oracle ignores the storage parameters FREELISTS, FREELIST GROUPS, and PCTUSED for all segments created in the tablespace. Therefore, you need not worry about tuning these parameters!
Undo Tablespace Oracle9i can manage undo information automatically. You need not create rollback segments and worry about their sizes and number. For automatic undo management, you must have one undo tablespace. You create the undo tablespace using the CREATE UNDO TABLESPACE statement. When creating an undo tablespace, you can specify only the EXTENT MANAGEMENT LOCAL and DATAFILE clauses. Oracle creates a locally managed
permanent tablespace. (Chapter 7 discusses undo management.) The following statement creates an undo tablespace: CREATE UNDO TABLESPACE UNDO_TBS DATAFILE '/ora1/oradata/MYDB/undo_tbs01.dbf' SIZE 500M;
You can create an undo tablespace when creating a database using the UNDO TABLESPACE clause of the CREATE DATABASE statement.
Temporary Tablespace
Oracle Exam Objective
Allocate space for temporary segments
Oracle can manage space for sort operations more efficiently by using temporary tablespaces. By exclusively designating a tablespace for temporary segments, Oracle eliminates allocation and deallocation of temporary segments. A temporary tablespace can be used only for sort segments. Only one sort segment is allocated for an instance in a temporary tablespace, and all sort operations use this sort segment. More than one transaction can use the same sort segment, but each extent can be used by only one transaction. The sort segment for a given temporary tablespace is created at the time of the first sort operation on that tablespace. The sort segment expands by allocating extents until the segment size is sufficient for the total storage demands of all the active sorts running on that instance. A temporary tablespace can be dictionary-managed or locally managed. Using the CREATE TABLESPACE command with the TEMPORARY clause creates a dictionary-managed temporary tablespace. Here is an example: CREATE TABLESPACE TEMP DATAFILE '/disk5/oradata/MYDB01/temp01.dbf' SIZE 300M DEFAULT STORAGE (INITIAL 2M NEXT 2M PCTINCREASE 0 MAXEXTENTS UNLIMITED) TEMPORARY;
When the first sort operation is performed on disk, a temporary segment is allocated with a 2MB initial extent size and 2MB subsequent extent sizes. The extents, once allocated, are freed only when the instance is shut down. Temporary segments are based on the default storage parameters of the tablespace. For a TEMPORARY tablespace, the recommended INITIAL and NEXT parameters should be equal, and the extent size should be a multiple of SORT_AREA_SIZE plus DB_BLOCK_SIZE to reduce the possibility of fragmentation. Keep PCTINCREASE equal to zero. For example, if your sort area size is 64KB and database block size is 8KB, provide the default storage of the temporary tablespace as (INITIAL 136K NEXT 136K PCTINCREASE 0 MAXEXTENTS UNLIMITED). If you are using a PERMANENT tablespace for sort operations, temporary segments are created in the tablespace when the sort is performed and are freed when the sort operation completes. There will be one sort segment for each sort operation, which requires a lot of extent and segment management operations. To create a locally managed temporary tablespace, use the CREATE TEMPORARY TABLESPACE command. The following statement creates a locally managed temporary tablespace: CREATE TEMPORARY TABLESPACE TEMP TEMPFILE '/disk5/oradata/MYDB01/temp01.tmp' SIZE 500M EXTENT MANAGEMENT LOCAL UNIFORM SIZE 5M; Notice that the DATAFILE clause of the CREATE TABLESPACE command is replaced with the TEMPFILE clause. Temporary files are always in NOLOGGING mode and are not recoverable. They cannot be made read-only, cannot be renamed, cannot be created with the ALTER DATABASE command, do not generate any information during the BACKUP CONTROLFILE command, and are not included during a CREATE CONTROLFILE command. The EXTENT MANAGEMENT LOCAL clause is optional and can be omitted; it is provided to improve readability. If you do not specify the extent size by using the UNIFORM SIZE clause, the default size used is 1MB.
An Oracle temporary file (called a tempfile) is not a temporary file in the traditional operating system sense; only the objects within a temporary tablespace consisting of one or more tempfiles are temporary.
When you create a user, that user is assigned a temporary tablespace. By default, the default tablespace (where the user creates objects) and the temporary tablespace (where the user’s sort operations are performed) are both the SYSTEM tablespace. No user should have SYSTEM as their default or temporary tablespace. This unnecessarily increases fragmentation in the SYSTEM tablespace. When creating a database, you can also create a temporary tablespace using the DEFAULT TEMPORARY TABLESPACE clause of the CREATE DATABASE statement. If the default temporary tablespace is defined in the database, all new users will have that tablespace assigned as the temporary tablespace by default, if you do not specify another tablespace for the user’s temporary tablespace.
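A minimal sketch of assigning tablespaces explicitly when creating or altering a user; the user name and password are illustrative:

-- assign default and temporary tablespaces at user creation
CREATE USER APP_USER IDENTIFIED BY app_pass
DEFAULT TABLESPACE USERS
TEMPORARY TABLESPACE TEMP;

-- or change an existing user's temporary tablespace
ALTER USER APP_USER TEMPORARY TABLESPACE TEMP;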
Altering a Tablespace You can alter a tablespace by using the ALTER TABLESPACE command. This command allows you to do the following:
Oracle Exam Objective
Change the default storage parameters of a dictionary-managed tablespace
Change the extent allocation and LOGGING/NOLOGGING modes
Change the tablespace from PERMANENT to TEMPORARY or vice versa
Change the availability of the tablespace
Make the tablespace read-only or read-write
Coalesce the contiguous free space
Add more space by adding new data files or temporary files
Rename files belonging to the tablespace
Begin and end a backup
Change the storage settings of tablespaces
Changing the default storage, extent allocation, or LOGGING/NOLOGGING does not affect the existing objects in the tablespace. The DEFAULT STORAGE
and LOGGING/NOLOGGING clauses are applied to the newly created segments if you do not explicitly define such a clause when creating new objects. For example, to change the storage parameters, use the following statement: ALTER TABLESPACE APPL_DATA DEFAULT STORAGE (INITIAL 2M NEXT 2M); Only the INITIAL and NEXT values of the storage clause are changed; the other storage parameters such as PCTINCREASE or MINEXTENTS remain unaltered. You can change a dictionary-managed temporary tablespace to permanent or vice versa by using the ALTER TABLESPACE command, if the tablespace is empty and the permanent tablespace uses standard block size. You cannot use the ALTER TABLESPACE command, with the TEMPORARY keyword, to change a locally managed permanent tablespace into a locally managed temporary tablespace. You must use the CREATE TEMPORARY TABLESPACE statement to create a locally managed temporary tablespace. However, you can use the ALTER TABLESPACE command to change a locally managed temporary tablespace to a locally managed permanent tablespace. The following statement changes a tablespace to temporary: ALTER TABLESPACE TEMP TEMPORARY;
The clauses in the ALTER TABLESPACE command are all mutually exclusive; you can specify only one clause at a time.
You cannot use the ALTER TABLESPACE statement to change the tablespace’s extent allocation to DICTIONARY or LOCAL.
Tablespace Availability
Oracle Exam Objective
Change the status of tablespaces
You can control the availability of certain tablespaces by placing them offline or online. When you make a tablespace offline, the segments in
that tablespace are not accessible. The data stored in other tablespaces is available for use. When making a tablespace unavailable, you can use the following four options:
NORMAL This option is the default. Oracle writes all the dirty buffer blocks in the SGA (System Global Area) to the data files of the tablespace and closes the data files. All data files belonging to the tablespace must be online. You need not do a media recovery when bringing the tablespace online. For example:
ALTER TABLESPACE USER_DATA OFFLINE NORMAL;
TEMPORARY Oracle performs a checkpoint on all online data files. It does not ensure that the data files are available. You might need to perform a media recovery on the offline data files when the tablespace is brought online. For example:
ALTER TABLESPACE USER_DATA OFFLINE TEMPORARY;
IMMEDIATE Oracle does not perform a checkpoint and does not make sure that all data files are available. You must perform a media recovery when the tablespace is brought back online. For example:
ALTER TABLESPACE USER_DATA OFFLINE IMMEDIATE;
FOR RECOVER This option places the tablespace offline for point-in-time recovery. You can copy the data files belonging to the tablespace from a backup and apply the archive log files. For example:
ALTER TABLESPACE USER_DATA OFFLINE FOR RECOVER;
This option is deprecated in Oracle9i and is available only for backward compatibility. You cannot place the SYSTEM tablespace offline because the data dictionary must always be available for the database to function. If a tablespace is offline when you shut down the database, it remains offline when you start up the database. You can bring a tablespace back online by using the statement ALTER TABLESPACE USER_DATA ONLINE. When you take a tablespace offline, SQL statements cannot reference any objects contained in that tablespace. If there are unsaved changes when you take the tablespace offline, Oracle saves rollback data corresponding to those changes in a deferred rollback segment in the SYSTEM tablespace. When you bring the tablespace back online, Oracle applies the rollback data to the tablespace, if needed.
Coalescing Free Space You can use the ALTER TABLESPACE command with the COALESCE clause to coalesce the adjacent free extents. When you free up the extents used by an object, either by altering the object storage or by dropping the object, Oracle does not combine the adjacent free extents. When coalescing tablespaces, Oracle does not combine all free space into one big extent; Oracle combines only the adjacent extents. For example, Figure 6.2 shows the extent allocation in a tablespace before and after coalescing: each run of adjacent free extents is merged into a single larger free extent, while the data extents remain unchanged.
FIGURE 6.2 Coalescing a tablespace (D = data extent, F = free extent): the USERS tablespace before and after ALTER TABLESPACE USERS COALESCE;
The SMON (system monitor) process coalesces the tablespace. If you set the PCTINCREASE storage parameter for the tablespace to a nonzero value, the SMON process automatically coalesces the tablespace’s unused extents. Even if you set PCTINCREASE to zero, Oracle coalesces the tablespace when it does not find a free extent that is big enough. Oracle also does a limited amount of coalescing if the PCTINCREASE value of the object dropped is not zero. If the extent sizes of the tablespace are all uniform, there is no need to coalesce.
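To gauge how fragmented a tablespace's free space is, you might count and sum its free extents; a minimal sketch against the DBA_FREE_SPACE view (described later in this chapter):

SQL> SELECT TABLESPACE_NAME, COUNT(*) FREE_EXTENTS, SUM(BYTES) FREE_BYTES
  2  FROM DBA_FREE_SPACE
  3  GROUP BY TABLESPACE_NAME;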
Read-Only Tablespace If you do not want users to change any data in the tablespace, you can specify that it is read-only. All objects in the tablespace are available for queries. INSERT, UPDATE, and DELETE operations on the data are not allowed. When the tablespace is made read-only, the data file headers are no longer updated when the checkpoint occurs. You need to back up the read-only tablespaces
only once. You cannot make the SYSTEM tablespace read-only. When you make a tablespace read-only, all the data files must be online, and the tablespace can have no pending transactions. You can drop objects such as tables or indexes from a read-only tablespace, but you cannot create new objects in a read-only tablespace. To make the USERS tablespace read-only, use the following statement: ALTER TABLESPACE USERS READ ONLY; If you issue the ALTER TABLESPACE READ ONLY statement when there are active transactions in the tablespace, the tablespace goes into a transitional read-only mode in which no further DML (Data Manipulation Language) statements are allowed, although existing transactions that are modifying the tablespace are allowed to commit or roll back. To change a tablespace to read-write mode, use the following command: ALTER TABLESPACE USERS READ WRITE; Oracle normally checks the availability of all data files belonging to the database when starting up the database. If you are storing your read-only tablespace on an offline storage medium or on a CD-ROM, you might want to skip the data file availability checking when starting up the database. To do so, set the parameter READ_ONLY_OPEN_DELAYED to TRUE. Oracle checks the availability of data files belonging to read-only tablespaces only at the time of access to an object in the tablespace. A missing or bad read-only file will not be detected at database start-up time.
Adding Space to a Tablespace
Oracle Exam Objective
Change the size of the tablespace
You can add more space to a tablespace by adding more data files to it or by changing the size of the existing data files. To add more data files or temporary files to the tablespace, use the ALTER TABLESPACE command with the ADD [DATAFILE/TEMPFILE] clause. For example, to add a file to a tablespace, run the following command: ALTER TABLESPACE USERS ADD DATAFILE '/disk5/oradata/DB01/users02.dbf' SIZE 25M;
If you are modifying a locally managed temporary tablespace to add more files, use the following statement: ALTER TABLESPACE USER_TEMP ADD TEMPFILE '/disk4/oradata/DB01/user_temp01.dbf' SIZE 100M; For locally managed temporary tablespaces, you can use only the ADD TEMPFILE clause with the ALTER TABLESPACE command.
Dropping a Tablespace You use the DROP TABLESPACE statement to drop a tablespace from the database. If the tablespace to be dropped is not empty, use the INCLUDING CONTENTS clause. For example, to drop the USER_DATA tablespace, use the following statement:
DROP TABLESPACE USER_DATA;
If the tablespace is not empty, specify the following:
DROP TABLESPACE USER_DATA INCLUDING CONTENTS;
If there are referential integrity constraints from objects in other tablespaces referring to the objects in the tablespace that is being dropped, you must specify the CASCADE CONSTRAINTS clause:
DROP TABLESPACE USER_DATA INCLUDING CONTENTS CASCADE CONSTRAINTS;
When you drop a tablespace, the tablespace and data file information is removed from the control file. The actual data files belonging to the tablespace are removed only if the data files are Oracle Managed Files. If the files are not Oracle managed, you can either use operating system commands to remove the data files belonging to the dropped tablespace or use the AND DATAFILES clause to free up the disk space. The following statement drops the tablespace and removes all data files belonging to the tablespace from the disk:
DROP TABLESPACE USER_DATA INCLUDING CONTENTS AND DATAFILES;
You cannot drop the SYSTEM tablespace.
Querying Tablespace Information You query tablespace information from the following data dictionary views.
DBA_TABLESPACES The DBA_TABLESPACES view shows information about all tablespaces in the database. (USER_TABLESPACES shows tablespaces that are accessible to the user.) This view contains default storage parameters and specifies the type of tablespace, the status, and so on.
SQL> SELECT TABLESPACE_NAME, EXTENT_MANAGEMENT,
  2         ALLOCATION_TYPE, CONTENTS,
  3         SEGMENT_SPACE_MANAGEMENT
  4  FROM DBA_TABLESPACES;

TABLESPACE_NAME  EXTENT_MAN ALLOCATIO
---------------- ---------- ---------
SYSTEM           DICTIONARY USER
UNDOTBS          LOCAL      SYSTEM
CWMLITE          LOCAL      SYSTEM
DRSYS            LOCAL      SYSTEM
EXAMPLE          LOCAL      SYSTEM
TEMP             LOCAL      UNIFORM
TOOLS            LOCAL      SYSTEM
USERS            LOCAL      SYSTEM
APP_DATA         DICTIONARY USER
APP_INDEX        LOCAL      UNIFORM

10 rows selected.
SQL>
The following columns are displayed in the DBA_TABLESPACES view: TABLESPACE_NAME, BLOCK_SIZE, INITIAL_EXTENT, NEXT_EXTENT, MIN_EXTENTS, MAX_EXTENTS, PCT_INCREASE, MIN_EXTLEN, STATUS, CONTENTS, LOGGING, EXTENT_MANAGEMENT, ALLOCATION_TYPE, PLUGGED_IN, SEGMENT_SPACE_MANAGEMENT
V$TABLESPACE V$TABLESPACE shows the tablespace number, the name, and the backup status from the control file.
SQL> SELECT * FROM V$TABLESPACE;

       TS# NAME
---------- ------------------------------
         2 CWMLITE
         3 DRSYS
         4 EXAMPLE
        11 APP_INDEX
         0 SYSTEM
         7 TOOLS
         1 UNDOTBS
         8 USERS
         6 TEMP
        10 APP_DATA
DBA_FREE_SPACE This view shows the free extents available in all tablespaces. You use this view to find the total free space available in a tablespace. USER_FREE_SPACE shows the free extents in tablespaces accessible to the current user. Locally managed temporary tablespaces are not shown in this view.
SQL> SELECT TABLESPACE_NAME, SUM(BYTES) FREE_SPACE
  2  FROM DBA_FREE_SPACE
  3  GROUP BY TABLESPACE_NAME;

TABLESPACE_NAME  FREE_SPACE
---------------- ----------
APP_DATA           10481664
APP_INDEX          10223616
CWMLITE            14680064
DRSYS              12845056
EXAMPLE              196608
SYSTEM             88281088
TOOLS               4390912
UNDOTBS           208338944
USERS              24051712

9 rows selected.
SQL>
The following columns are displayed in the DBA_FREE_SPACE view: TABLESPACE_NAME, FILE_ID, BLOCK_ID, BYTES, BLOCKS, RELATIVE_FNO
V$SORT_USAGE This view shows information about the active sorts in the database; it shows the space used, the username, the SQL address, and the hash value. You can join this view with V$SESSION or V$SQL to get more information about the session.
SQL> SELECT USER, SESSION_ADDR, SESSION_NUM, SQLADDR,
  2         SQLHASH, TABLESPACE, EXTENTS, BLOCKS
  3  FROM V$SORT_USAGE;

USER                 SESSION_ SESSION_NUM SQLADDR
-------------------- -------- ----------- --------
   SQLHASH TABLESPACE              EXTENTS     BLOCKS
---------- -------------------- ---------- ----------
SCOTT                030539F4          24 0343E200
1877781575 TEMP                         45        360
The following columns are displayed in the V$SORT_USAGE view: USERNAME, USER, SESSION_ADDR, SESSION_NUM, SQLADDR, SQLHASH, TABLESPACE, CONTENTS, SEGTYPE, SEGFILE#, SEGBLK#, EXTENTS, BLOCKS, SEGRFNO#
Other Views

The following views also show information related to tablespaces:

DBA_SEGMENTS, USER_SEGMENTS Shows information about the segments, segment types, size, and storage parameter values associated with tablespaces. This example shows the tablespace and total space occupied by each type of segment owned by the PM schema.

SQL> SELECT TABLESPACE_NAME, SEGMENT_TYPE, SUM(BYTES)
  2  FROM DBA_SEGMENTS
  3  WHERE OWNER = 'PM'
  4  GROUP BY ROLLUP(TABLESPACE_NAME, SEGMENT_TYPE);

TABLESPACE_NAME
---------------
EXAMPLE
EXAMPLE
EXAMPLE
EXAMPLE
EXAMPLE
EXAMPLE

7 rows selected.
SQL>

(Only the TABLESPACE_NAME column is shown; all the PM segments reside in the EXAMPLE tablespace, and the ROLLUP adds subtotal and grand total rows.)

DBA_EXTENTS, USER_EXTENTS Shows information about the extents, extent sizes, associated segment, and tablespace.

DBA_DATA_FILES Shows data files belonging to tablespaces.

DBA_TEMP_FILES Shows temporary files belonging to locally managed temporary tablespaces.

V$TEMP_EXTENT_MAP Shows all extents of a locally managed temporary tablespace.

V$TEMP_EXTENT_POOL Shows the temporary space used and cached for the current instance, for locally managed temporary tablespaces.
V$TEMP_SPACE_HEADER Shows the used and free space in each file of the temporary tablespaces.

V$SORT_SEGMENT Shows information about sort segments.

DBA_USERS Shows information about the default and temporary tablespace assignments to users. The following query shows the tablespace assignments of user HR.

SQL> SELECT DEFAULT_TABLESPACE, TEMPORARY_TABLESPACE
  2  FROM DBA_USERS
  3  WHERE USERNAME = 'HR';

DEFAULT_TABLESPACE             TEMPORARY_TABLESPACE
------------------------------ -----------------------
EXAMPLE                        TEMP

SQL>
Managing Data Files
Data files (or temporary files) are created when you create a tablespace or when you alter a tablespace to add files. Before Oracle9i, you had to specify a file name and size to create files. In Oracle9i, Oracle can create files and remove them when the tablespace is removed. Such files are known as Oracle Managed Files. We briefly discussed Oracle Managed Files for creating control files and redo log files in Chapter 5, and we'll look at how you can use OMF to specify data files later in this chapter.

You can specify the size of the file when you create a file or reuse an existing file. When you reuse an existing file, that file should not belong to any Oracle database; the contents of the file are overwritten. Use the REUSE clause to specify an existing file. If you omit the REUSE clause and the data file being created exists, Oracle returns an error. For example:

CREATE TABLESPACE APPL_DATA
DATAFILE '/disk2/oradata/DB01/appl_data01.dbf' REUSE;

When you specify REUSE, you can omit the SIZE clause. If you specify the SIZE clause, the size of the file should be the same as that of the existing file. If the file to be created does not exist, Oracle creates a new file even if you specify the REUSE clause.

Always specify the fully qualified directory name for the file being created. If you omit the directory, Oracle creates the file under the default database directory or in the current directory, depending on the operating system.

In the following sections we will discuss how to specify file sizes, resize data files, relocate tablespaces by renaming the data files, and display the dictionary views that contain information about data files and temporary files.
Sizing Files
Oracle Exam Objective
Change the size of the tablespace
You can specify that the data file (or temporary file) grow automatically whenever space is needed in the tablespace. To do so, specify the AUTOEXTEND clause for the file. This functionality enables you to have fewer data files per tablespace and can simplify the administration of data files. You can turn the AUTOEXTEND clause ON and OFF; you can also specify file size increments. You can set a maximum limit for the file size; by default, the file size limit is UNLIMITED. You can specify the AUTOEXTEND clause for files when you run the CREATE DATABASE, CREATE TABLESPACE, ALTER TABLESPACE, or ALTER DATABASE DATAFILE commands. For example:

CREATE TABLESPACE APPL_DATA
DATAFILE '/disk2/oradata/DB01/appl_data01.dbf' SIZE 500M
AUTOEXTEND ON NEXT 100M MAXSIZE 2000M;

The AUTOEXTEND ON clause specifies that the automatic file resize feature be enabled for the specified file; NEXT specifies the size by which the file should be incremented; and MAXSIZE specifies the maximum size for the file. When Oracle tries to allocate an extent in the tablespace, it looks for a free extent. If Oracle cannot locate a large enough free extent (even after coalescing), it increases the data file size by the NEXT increment (100MB in this example) and tries to allocate the new extent.
The following statement disables the automatic file extension feature:

ALTER DATABASE
DATAFILE '/disk2/oradata/DB01/appl_data01.dbf'
AUTOEXTEND OFF;

If the file already exists in the database, and you want to enable the auto-extension feature, use the ALTER DATABASE command. For example, you can use the following statement:

ALTER DATABASE
DATAFILE '/disk2/oradata/DB01/appl_data01.dbf'
AUTOEXTEND ON NEXT 100M MAXSIZE 2000M;

You can increase or decrease the size of a data file or temporary file (thus increasing or decreasing the size of the tablespace) by using the RESIZE clause of the ALTER DATABASE DATAFILE command. For example, to redefine the size of a file, use the following statement:

ALTER DATABASE
DATAFILE '/disk2/oradata/DB01/appl_data01.dbf'
RESIZE 1500M;

When decreasing the file size, Oracle returns an error if it finds data beyond the new file size. You cannot reduce the file size below the high-water mark in the file. Reducing the file size helps to reclaim unused space.
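To verify the current auto-extension settings of the data files, you can query DBA_DATA_FILES. This check is not part of the exam text, just a convenient sketch; note that INCREMENT_BY is expressed in database blocks, not bytes:

SQL> SELECT FILE_NAME, AUTOEXTENSIBLE, INCREMENT_BY, MAXBYTES
  2  FROM DBA_DATA_FILES
  3  WHERE TABLESPACE_NAME = 'APPL_DATA';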
Oracle Managed Files
Oracle Exam Objective
Implement Oracle Managed Files
Oracle Managed Files are appropriate for smaller non-production databases or databases on disks that use a logical volume manager (LVM). LVM is software available with most disk systems to combine partitions of multiple physical disks into one logical volume. LVM can use mirroring, striping, RAID 5, and so on. The following benefits are associated with using OMF:
Prevention of errors Because Oracle removes the files associated with the tablespace, you cannot make the mistake of removing a file that belongs to an active tablespace.

Standard naming convention The files you create using OMF have unique and standard file names.

Space retrieval When tablespaces are removed, Oracle removes the files associated with the tablespace, thus freeing up space on the disk immediately. Without OMF, the DBA might forget to remove the files from disk.

Easy script writing Application vendors need not worry about the syntax of specifying directory names in the scripts when porting an application to multiple platforms. The same script can be used to create tablespaces on different operating system platforms.

You can use OMF to create files and to remove them when the corresponding object (redo log group or tablespace) is dropped from the database. You manage OMF-created files using the traditional methods for renaming or resizing files.
Creating Files

Before you can create Oracle Managed Files, you must set the parameter DB_CREATE_FILE_DEST. You can specify this parameter in the initialization parameter file or set/change it using the ALTER SYSTEM or ALTER SESSION statement. The DB_CREATE_FILE_DEST parameter defines the directory where Oracle can create data files. Oracle must have read/write permission on this directory, and the directory must exist on the server where the database is located. Oracle will not create the directory; it will create only the data file.

You can use OMF to create data files when using the CREATE DATABASE, CREATE TABLESPACE, or ALTER TABLESPACE statements. In the CREATE DATABASE statement, you need not specify the data file names for the SYSTEM, undo, or temporary tablespaces. You can omit the DATAFILE clause in the CREATE TABLESPACE statement. In the ALTER TABLESPACE ADD DATAFILE statement, you can omit the file name.

The data files you create using OMF will have a standard format. For a data file, the format is ora_%t_%u.dbf. The format for a temp file is ora_%t_%u.tmp; %t is the tablespace name, and %u is a unique 8-character string that Oracle derives. If the tablespace name is more than 8 characters,
only the first 8 characters are used. The file names that Oracle generates are reported in the alert log file.

You can also use the OMF feature to create control files and redo log files of the database. Since these two types of files can be multiplexed, Oracle provides another parameter to specify the location of files: DB_CREATE_ONLINE_LOG_DEST_n, in which n can be 1, 2, 3, 4, or 5. You can also alter these initialization parameters using ALTER SYSTEM or ALTER SESSION. If you set the parameters DB_CREATE_ONLINE_LOG_DEST_1 and DB_CREATE_ONLINE_LOG_DEST_2 in the parameter file when creating a database, Oracle creates two control files (one in each directory) and creates two online redo log groups with two members each (one member each in both directories). The redo log file names will have the format ora_%g_%u.log, in which %g is the log group number and %u is an 8-character string unique to the database. The control file name will have the format ora_%u.ctl, in which %u is an 8-character string.

Let's consider an example of creating a database. The following parameters are set in the initialization parameter file:

UNDO_MANAGEMENT = AUTO
DB_CREATE_ONLINE_LOG_DEST_1 = '/ora1/oradata/MYDB'
DB_CREATE_ONLINE_LOG_DEST_2 = '/ora2/oradata/MYDB'
DB_CREATE_FILE_DEST = '/ora1/oradata/MYDB'

The CONTROL_FILES parameter is not set. Create the database using the following statement:

CREATE DATABASE MYDB
DEFAULT TEMPORARY TABLESPACE TEMP;

The following files will be created:
The SYSTEM tablespace data file in /ora1/oradata/MYDB
The TEMP tablespace temp file in /ora1/oradata/MYDB
A control file in /ora1/oradata/MYDB
A control file in /ora2/oradata/MYDB
One member of the first redo log group in /ora1/oradata/MYDB and a second member in /ora2/oradata/MYDB
One member of the second redo log group in /ora1/oradata/MYDB and a second member in /ora2/oradata/MYDB
Because we set UNDO_MANAGEMENT = AUTO and did not specify a name for the undo tablespace, Oracle creates the SYS_UNDOTBS tablespace as the undo tablespace and creates its data file under /ora1/oradata/MYDB. If you omit the DEFAULT TEMPORARY TABLESPACE clause, Oracle will not create a temporary tablespace.
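You can confirm the names Oracle generated by querying the data dictionary after running the catalog scripts. The views below are standard; the file names you see will be the derived ora_... names:

SQL> SELECT NAME FROM V$CONTROLFILE;
SQL> SELECT GROUP#, MEMBER FROM V$LOGFILE;
SQL> SELECT FILE_NAME FROM DBA_DATA_FILES;
SQL> SELECT FILE_NAME FROM DBA_TEMP_FILES;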
When using OMF to create control files, you must get the names of the control files from the alert log and add them to the initialization parameter file using the CONTROL_FILES parameter so that the instance can start again.
If you do not specify the DB_CREATE_ONLINE_LOG_DEST_n parameter when creating a database or when adding a redo log group, OMF creates one control file and two groups with one member each for redo log files in the DB_CREATE_FILE_DEST directory. If you also do not set the DB_CREATE_FILE_DEST parameter and you did not provide data file names and redo log file names, Oracle creates the files under a default directory (usually $ORACLE_HOME/dbs), but these files will not be Oracle managed. This is the default behavior of the database.
The data files and temp files that OMF creates will have a default size of 100MB, which is auto-extensible with no maximum file size. Each redo log member will be 100MB in size by default. Let's look at another example that creates two tablespaces. The data file for the APP_DATA tablespace will be stored in the /ora5/oradata/MYDB directory. The data file for the APP_INDEX tablespace will be stored in the /ora6/oradata/MYDB directory.

ALTER SESSION SET DB_CREATE_FILE_DEST = '/ora5/oradata/MYDB';

CREATE TABLESPACE APP_DATA
EXTENT MANAGEMENT LOCAL;

ALTER SESSION SET DB_CREATE_FILE_DEST = '/ora6/oradata/MYDB';

CREATE TABLESPACE APP_INDEX
EXTENT MANAGEMENT LOCAL;
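If you then query DBA_DATA_FILES, the generated names follow the ora_%t_%u.dbf format described earlier; the exact 8-character suffix is derived by Oracle, so do not expect a particular name. A quick sketch:

SQL> SELECT TABLESPACE_NAME, FILE_NAME
  2  FROM DBA_DATA_FILES
  3  WHERE TABLESPACE_NAME IN ('APP_DATA', 'APP_INDEX');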
Overriding the Default File Size

If you want a different size for the files created by OMF, you can specify the DATAFILE clause without a file name. You can also turn off the auto-extensible feature of the data file. The following statement creates a tablespace of 10MB and turns off the auto-extensible feature:

CREATE TABLESPACE PAY_DATA
DATAFILE SIZE 10M AUTOEXTEND OFF;

Here is another example, which creates multiple data files for the tablespace. The second and third data files are auto-extensible.

CREATE TABLESPACE PAY_INDEX
DATAFILE SIZE 20M AUTOEXTEND OFF,
         SIZE 30M AUTOEXTEND ON MAXSIZE 1000M,
         SIZE 1M;

The following example adds files to an existing tablespace.

ALTER SYSTEM SET DB_CREATE_FILE_DEST = '/ora5/oradata/MYDB';
ALTER TABLESPACE USERS ADD DATAFILE;
ALTER SYSTEM SET DB_CREATE_FILE_DEST = '/ora8/oradata/MYDB';
ALTER TABLESPACE APP_DATA ADD DATAFILE SIZE 200M AUTOEXTEND OFF;
Once created, Oracle Managed Files are treated like any other database files: you can rename and resize them, and you must back them up. Archived log files are not Oracle managed.
How Do You Create a Database and Tablespaces with OMF?

Your manager has asked you to create a test database for a new application your company just bought. The database is for testing the functionality of the application. The vendor told you that you need four tablespaces: SJC_DATA, SJC_INDEX, WKW_DATA, and WKW_INDEX. The index tablespaces must have uniform extent sizes of 512KB and should have a minimum size of 500MB. The SJC_DATA tablespace is to be dictionary managed, with extent sizes that are a multiple of 128KB, and a tablespace size of 1GB. The WKW_DATA tablespace should be 250MB.
Since this is a database for testing the functionality of the application, you decide to use Oracle Managed Files, which makes your life easier by creating and cleaning up the files in the database. Let's create the database. Your systems administrator has given you four disks: /ora1, /ora2, /ora3, and /ora4, each with 900MB of space. Be sure to include the following in the parameter file:

UNDO_MANAGEMENT = AUTO
DB_CREATE_FILE_DEST = /ora1
DB_CREATE_ONLINE_LOG_DEST_1 = /ora1
DB_CREATE_ONLINE_LOG_DEST_2 = /ora2

Create the database using the following statement:

CREATE DATABASE SJCTEST
LOGFILE SIZE 20M
DEFAULT TEMPORARY TABLESPACE TEMP
  TEMPFILE SIZE 200M
  EXTENT MANAGEMENT LOCAL UNIFORM SIZE 2M
UNDO TABLESPACE UNDO_TBS SIZE 200M;

This statement creates a database named SJCTEST. The SYSTEM tablespace, undo tablespace, and temporary tablespace are created in /ora1. The SYSTEM tablespace has the default size of 100MB; the undo tablespace and the temporary tablespace each have a size of 200MB. Since we did not want each log file member to be 100MB, we specified a smaller size for the online redo log members. Two control files and two redo log groups with two members each are created; one member of each group is stored in /ora1 and the other in /ora2. After running the necessary scripts to create the catalog and packages, we'll create the tablespaces for the application.

ALTER SYSTEM SET DB_CREATE_FILE_DEST = '/ora2';
CREATE TABLESPACE SJC_DATA
EXTENT MANAGEMENT DICTIONARY
MINIMUM EXTENT 128K
DATAFILE SIZE 800M;

ALTER SYSTEM SET DB_CREATE_FILE_DEST = '/ora3';

ALTER TABLESPACE SJC_DATA ADD DATAFILE SIZE 200M;

CREATE TABLESPACE WKW_INDEX
EXTENT MANAGEMENT LOCAL UNIFORM SIZE 512K
DATAFILE SIZE 500M;

ALTER SYSTEM SET DB_CREATE_FILE_DEST = '/ora4';

CREATE TABLESPACE WKW_DATA;

CREATE TABLESPACE SJC_INDEX
EXTENT MANAGEMENT LOCAL UNIFORM SIZE 512K
DATAFILE SIZE 500M;

Since we have only 900MB in each file system, we need two data files to allocate 1GB to the SJC_DATA tablespace. This is accomplished in two steps.
Renaming and Relocating Files

You rename data files using the RENAME FILE clause of the ALTER DATABASE command. You can also rename data files by using the RENAME DATAFILE clause of the ALTER TABLESPACE command. You use the RENAME functionality to logically move tablespaces from one location to another. Consider the following example. The tablespace USER_DATA has three data files:

/disk1/oradata/DB01/user_data01.dbf
/disk1/oradata/DB01/userdata2.dbf
/disk1/oradata/DB01/user_data03.dbf

You'll notice that the second file does not follow the naming standard set for your company, so you need to rename the file. Follow these steps:

1. Take the tablespace offline:
   ALTER TABLESPACE USER_DATA OFFLINE;

2. Copy or move the file to the new location, or rename the file by using operating system commands.

3. Rename the file in the database by using one of the following two commands:
   ALTER DATABASE RENAME FILE
   '/disk1/oradata/DB01/userdata2.dbf'
   TO '/disk1/oradata/DB01/user_data02.dbf';
   or
   ALTER TABLESPACE USER_DATA RENAME DATAFILE
   '/disk1/oradata/DB01/userdata2.dbf'
   TO '/disk1/oradata/DB01/user_data02.dbf';

4. Bring the tablespace online:
   ALTER TABLESPACE USER_DATA ONLINE;

If you need to relocate the tablespace from disk 1 to disk 2, follow the same steps. You can rename all the files in the tablespace by using a single command. The steps are as follows:

1. Take the tablespace offline:
   ALTER TABLESPACE USER_DATA OFFLINE;

2. Copy the files to the new location by using operating system commands on the disk.

3. Rename the files in the database by using one of the following two commands. The number of data files specified before the keyword TO should equal the number of files specified after the keyword.
   ALTER DATABASE RENAME FILE
   '/disk1/oradata/DB01/user_data01.dbf',
   '/disk1/oradata/DB01/userdata2.dbf',
   '/disk1/oradata/DB01/user_data03.dbf'
   TO
   '/disk2/oradata/DB01/user_data01.dbf',
   '/disk2/oradata/DB01/user_data02.dbf',
   '/disk2/oradata/DB01/user_data03.dbf';
   or
   ALTER TABLESPACE USER_DATA RENAME DATAFILE
   '/disk1/oradata/DB01/user_data01.dbf',
   '/disk1/oradata/DB01/userdata2.dbf',
   '/disk1/oradata/DB01/user_data03.dbf'
   TO
   '/disk2/oradata/DB01/user_data01.dbf',
   '/disk2/oradata/DB01/user_data02.dbf',
   '/disk2/oradata/DB01/user_data03.dbf';

4. Bring the tablespace online:
   ALTER TABLESPACE USER_DATA ONLINE;

To rename or relocate files that belong to multiple tablespaces, or if the file belongs to the SYSTEM tablespace, follow these steps:

1. Shut down the database. A complete backup is recommended before making any structural changes.

2. Copy or rename the files on the disk by using operating system commands.

3. Start up and mount the database (STARTUP MOUNT).

4. Rename the files in the database by using the ALTER DATABASE RENAME FILE command.

5. Open the database by using ALTER DATABASE OPEN.

If you need to move read-only tablespaces to CD-ROM or to any write-once read-many device, follow these steps:

1. Make the tablespace read-only.

2. Copy the data files that belong to the tablespace to the device.

3. Rename the files in the database by using the ALTER DATABASE RENAME FILE command.
V$TEMPFILE

Similar to the V$DATAFILE view, this view shows information about the temporary files.

SQL> SELECT FILE#, RFILE#, STATUS, BYTES, BLOCK_SIZE
  2  FROM V$TEMPFILE;

     FILE#     RFILE# STATUS       BYTES BLOCK_SIZE
---------- ---------- ------- ---------- ----------
         1          1 ONLINE    10485760       8192
DBA_DATA_FILES

This view shows information about the data file names, associated tablespace names, size, status, and so on.

SQL> SELECT TABLESPACE_NAME, FILE_NAME, BYTES,
  2         AUTOEXTENSIBLE
  3  FROM DBA_DATA_FILES;

The following columns are displayed in the DBA_DATA_FILES view: FILE_NAME, FILE_ID, TABLESPACE_NAME, BYTES, BLOCKS, STATUS, RELATIVE_FNO, AUTOEXTENSIBLE, MAXBYTES, MAXBLOCKS, INCREMENT_BY, USER_BYTES, USER_BLOCKS
DBA_TEMP_FILES

This view shows information similar to that of the DBA_DATA_FILES view for the temporary files in the database.

SQL> SELECT TABLESPACE_NAME, FILE_NAME, BYTES,
  2         AUTOEXTENSIBLE
  3  FROM DBA_TEMP_FILES;

TABLESPACE FILE_NAME                            BYTES AUT
---------- ------------------------------- ---------- ---
TEMP_LOCAL C:\ORACLE\DB01\TEMP_LOCAL01.DBF   10485760 NO
The maximum number of data files per tablespace depends on the operating system, but on most operating systems, it is 1022. The maximum number of data files per database is 65,533. The MAXDATAFILES clause in the CREATE DATABASE or CREATE CONTROLFILE statement also limits the number of data files per database. The maximum data file size also depends on the operating system. There is no limit on the number of tablespaces per database. However, because only 65,533 data files are allowed per database and each tablespace needs at least one data file, you cannot have more than 65,533 tablespaces.
Summary
This chapter discussed tablespaces and data files, the logical storage structures and physical storage elements of the database. A data file belongs to one tablespace, and a tablespace can have one or more data files. The size of the tablespace is the total size of all the data files belonging to that tablespace. The size of the database is the total size of all tablespaces in the database, which is the same as the total size of all data files in the database. Tablespaces are logical storage units used to group data by type or category.
You create tablespaces using the CREATE TABLESPACE command. Oracle always allocates space to an object in chunks of blocks known as extents. Tablespaces can handle the extent management through the Oracle dictionary or locally in the data files that belong to the tablespace. When creating tablespaces, you can specify default storage parameters for the objects that will be created in the tablespace. If you do not specify any storage parameters when creating an object, the storage parameters for the tablespace are used for the new object. Locally managed tablespaces can have uniform extent sizes, which reduces fragmentation and wasted space. You can also specify that Oracle handle the entire extent sizing for locally managed tablespaces.

A temporary tablespace is used only for sorting; you can't create permanent objects in a temporary tablespace. Only one sort segment is created for each instance in the temporary tablespace. Multiple transactions can use the same sort segment, but one transaction can use only one extent. To create locally managed temporary tablespaces, you use the CREATE TEMPORARY TABLESPACE command. Temporary files (instead of data files) are created when you use this command. Although these files are part of the database, they do not appear in the control file, and the block changes do not generate any redo information because all the segments created in locally managed temporary tablespaces are temporary segments.

You can alter a tablespace to change its availability or to make it read-only. Data in an offline tablespace is not accessible, whereas data in a read-only tablespace cannot be modified or deleted. You can drop objects from a read-only tablespace. Space is added to a tablespace by adding new data files to the tablespace or by increasing the size of its data files. You can obtain tablespace information from the dictionary using the DBA_TABLESPACES and V$TABLESPACE views.

Data files can be renamed through Oracle. This feature is useful for relocating a tablespace. Oracle9i can manage the physical files belonging to the database using the Oracle Managed Files feature. OMF is good for non-production databases and databases on a Logical Volume Manager. The V$DATAFILE, V$TEMPFILE, DBA_DATA_FILES, and DBA_TEMP_FILES views provide information on the data files.
Exam Essentials

Understand the syntax of the CREATE TABLESPACE statement. Learn to create locally managed and dictionary-managed tablespaces. Remember that locally managed is the default. You can create the tablespace without specifying a file name when using the OMF option.

Know the options of the ALTER TABLESPACE statement. The options of the ALTER TABLESPACE statement are mutually exclusive, and you can use them to change the storage settings, change the status, relocate files, and alter the default storage settings.

Know the options you can use to take a tablespace offline. You can take a tablespace offline using the NORMAL, TEMPORARY, IMMEDIATE, or FOR RECOVER clause. Understand the difference between each state of the tablespace.

Understand the dictionary views. Query the DBA_TABLESPACES, DBA_DATA_FILES, V$SORT_SEGMENT, V$DATAFILE, and DBA_FREE_SPACE dictionary views. The tablespace name is available in most of the views.

Note the parameters associated with creating non-standard block sizes. Non-standard block size is new to Oracle9i. You can specify a different block size for the tablespace by using the BLOCKSIZE clause in the CREATE TABLESPACE statement. You must set the DB_nK_CACHE_SIZE parameter, in which n is the non-standard block size.

Learn to create tablespaces and add files to tablespaces using OMF. OMF is new to Oracle9i. Learn to set up the OMF parameters using ALTER SESSION and ALTER SYSTEM, and create tablespaces and add data files. Understand the format of file names generated by Oracle.

Know how to change the size of the tablespace. You can change the size of the tablespace in two ways: use the ALTER TABLESPACE statement to add a new file to the tablespace, or use the ALTER DATABASE or ALTER TABLESPACE statement to resize an existing file.
Review Questions

1. Which two of the following statements do you execute to make the USERS tablespace read-only, if the tablespace is offline?
   A. ALTER TABLESPACE USERS READ ONLY
   B. ALTER DATABASE MAKE TABLESPACE USERS READ ONLY
   C. ALTER TABLESPACE USERS ONLINE
   D. ALTER TABLESPACE USERS TEMPORARY

2. When is a sort segment that is allocated in a temporary tablespace released?
   A. When the sort operation completes
   B. When the instance is shut down
   C. When you issue ALTER TABLESPACE COALESCE
   D. When SMON clears up inactive sort segments

3. You created a tablespace using the following statement:
   CREATE TABLESPACE MYTS
   DATAFILE SIZE 200M AUTOEXTEND ON MAXSIZE 2G
   EXTENT MANAGEMENT LOCAL UNIFORM SIZE 5M
   SEGMENT SPACE MANAGEMENT AUTO;
   Which three parameters does Oracle ignore when you create a table in the MYTS tablespace?
   A. PCTFREE
   B. PCTUSED
   C. FREELISTS
   D. FREELIST GROUPS
   E. INITIAL

4. What will be the minimum size of the segment created in a tablespace if the tablespace's default storage values are specified as (INITIAL 2M NEXT 2M MINEXTENTS 3 PCTINCREASE 50) and no storage clause is specified for the object?
   A. 2MB
   B. 4MB
   C. 5MB
   D. 7MB
   E. 8MB

5. Which of the following would you use to add more space to a tablespace? (Choose two.)
   A. ALTER TABLESPACE tablespace_name ADD DATAFILE SIZE size
   B. ALTER DATABASE DATAFILE 'file_name' RESIZE size
   C. ALTER DATAFILE 'file_name' RESIZE size
   D. ALTER TABLESPACE tablespace_name DATAFILE 'file_name' RESIZE size

6. If the DB_BLOCK_SIZE of the database is 8KB, what will be the size of the third extent when you specify the storage parameters as (INITIAL 8K NEXT 8K PCTINCREASE 50 MINEXTENTS 3)?
   A. 16KB
   B. 24KB
   C. 12KB
   D. 40KB

7. The standard block size for the database is 8KB. You need to create a tablespace with a block size of 16KB. Which initialization parameters should be set? (Choose two.)
   A. DB_8K_CACHE_SIZE
   B. DB_16K_CACHE_SIZE
   C. DB_CACHE_SIZE
   D. UNDO_MANAGEMENT
   E. DB_CREATE_FILE_DEST

8. Which data dictionary view can you query to obtain information about the files that belong to locally managed temporary tablespaces?
   A. DBA_DATA_FILES
   B. DBA_TABLESPACES
   C. DBA_TEMP_FILES
   D. DBA_LOCAL_FILES

9. When does the SMON process automatically coalesce the tablespaces?
   A. When the initialization parameter COALESCE_TABLESPACES is set to TRUE
   B. When the PCTINCREASE default storage of the tablespace is set to 0
   C. When the PCTINCREASE default storage of the tablespace is set to 50
   D. Whenever the tablespace has more than one free extent

10. Which operation is permitted on a read-only tablespace?
    A. Delete data from table
    B. Drop table
    C. Create new table
    D. None of the above

11. How would you drop a tablespace if the tablespace were not empty?
    A. Rename all the objects in the tablespace and then drop the tablespace
    B. Remove the data files belonging to the tablespace from the disk
    C. Use ALTER DATABASE DROP CASCADE
    D. Use DROP TABLESPACE tablespace_name INCLUDING CONTENTS

12. Which command is used to enable the auto-extensible feature for a file, if the file is already part of a tablespace?
    A. ALTER DATABASE.
    B. ALTER TABLESPACE.
    C. ALTER DATA FILE.
    D. You cannot change the auto-extensible feature once the data file is created.

13. The database block size is 4KB. You created a tablespace using the following command.
    CREATE TABLESPACE USER_DATA
    DATAFILE 'C:/DATA01.DBF'
    EXTENT MANAGEMENT DICTIONARY;
    If you create an object in the database without specifying any storage parameters, what will be the size of the third extent that belongs to the object?
    A. 6KB
    B. 20KB
    C. 50KB
    D. 32KB

14. Which of the following statements is false?
    A. You can make a dictionary-managed temporary tablespace permanent.
    B. You cannot change the size of the locally managed temporary tablespace file.
    C. Once it is created, you cannot alter the extent management of a tablespace using ALTER TABLESPACE.
    D. You cannot make a locally managed permanent tablespace temporary.
    E. If you do not specify an extent management clause when creating a tablespace, Oracle creates a locally managed tablespace.

15. Which of the following statements is true regarding the SYSTEM tablespace?
    A. Can be made read-only.
    B. Can be offline.
    C. Data files can be renamed.
    D. Data files cannot be resized.

16. What are the recommended INITIAL and NEXT values for a temporary tablespace, to reduce fragmentation?
    A. INITIAL = 1MB; NEXT = 2MB
    B. INITIAL = multiple of SORT_AREA_SIZE + 1; NEXT = INITIAL
    C. INITIAL = multiple of SORT_AREA_SIZE + DB_BLOCK_SIZE; NEXT = INITIAL
    D. INITIAL = 2 × SORT_AREA_SIZE; NEXT = SORT_AREA_SIZE

17. Which parameter specified in the DEFAULT STORAGE clause of CREATE TABLESPACE cannot be altered after you create the tablespace?
    A. INITIAL
    B. NEXT
    C. MAXEXTENTS
    D. None

18. How would you determine how much sort space is used by a user session?
    A. Query the DBA_SORT_SEGMENT view.
    B. Query the V$SORT_SEGMENT view.
    C. Query the V$SORT_USAGE view.
    D. You can obtain only the total sort segment size; you cannot find information on individual session sort space usage.

19. If you issue ALTER TABLESPACE USERS OFFLINE IMMEDIATE, which of the following statements is true? (Choose two.)
    A. All data files belonging to the tablespace must be online.
    B. Does not ensure that the data files are available.
    C. Need not do media recovery when bringing the tablespace online.
    D. Need to do media recovery when bringing the tablespace online.

20. Which format strings does Oracle use to generate OMF file names? (Choose three.)
    A. %s
    B. %t
    C. %g
    D. %a
    E. %u
    F. %%
Answers to Review Questions

1. C, A. To make a tablespace read-only, all the data files belonging to the tablespace must be online and available. So, bring the tablespace online, and then make it read-only.

2. B. The sort segment or temporary segment created in a temporary tablespace is released only when the instance is shut down. Each instance can have one sort segment in the tablespace; the sort segment is created when the first sort for the instance is started.

3. B, C, D. When the tablespace has automatic segment space management, Oracle manages free space automatically using bitmaps. For manual segment space management, Oracle uses free lists.

4. D. When the segment is created, it will have three extents; the first extent is 2MB, the second is 2MB, and the third is 3MB. So the total size of the segment is 7MB.

5. A, B. You can add more space to a tablespace either by adding a data file or by increasing the size of an existing data file. Option A does not specify a file name and uses the OMF feature to generate the file name.

6. A. The third extent size will be NEXT + 0.5 × NEXT, which is 12KB, but the block size is 8KB, so the third extent size will be 16KB. The initial extent allocated will be 16KB (the minimum size for INITIAL is two blocks), and the total segment size is 16 + 8 + 16 = 40KB.

7. B, C. Set DB_CACHE_SIZE for the standard block size, and set DB_16K_CACHE_SIZE for the non-standard block size. You must not set the DB_8K_CACHE_SIZE parameter because the standard block size is 8KB.

8. C. You create locally managed temporary tablespaces using the CREATE TEMPORARY TABLESPACE command. The data files (temporary files) belonging to these tablespaces are in the DBA_TEMP_FILES view. The EXTENT_MANAGEMENT column of the DBA_TABLESPACES view shows the type of the tablespace. You can query the data files belonging to locally managed permanent tablespaces and dictionary-managed (permanent and temporary) tablespaces from DBA_DATA_FILES. Locally managed temporary tablespaces reduce contention on the data dictionary tables.

9. C. The SMON process automatically coalesces free extents in the tablespace when the tablespace's PCTINCREASE is set to a nonzero value. You can manually coalesce a tablespace by using ALTER TABLESPACE COALESCE.

10. B. A table can be dropped from a read-only tablespace. When a table is dropped, Oracle does not have to update the data file; it updates the dictionary tables. Any change to data or creation of new objects is not allowed in a read-only tablespace.

11. D. You use the INCLUDING CONTENTS clause to drop a tablespace that is not empty. Oracle does not remove the data files that belong to the tablespace if the files are not Oracle managed; you need to do it manually using an operating system command. Oracle updates only the control file. To remove the files, you include the INCLUDING CONTENTS AND DATAFILES clause.

12. A. You can use the ALTER TABLESPACE command to rename a file that belongs to the tablespace, but you handle all other file management operations through the ALTER DATABASE command. To enable auto-extension, use ALTER DATABASE DATAFILE 'file_name' AUTOEXTEND ON NEXT size MAXSIZE size.

13. D. When you create a tablespace with no default storage parameters, Oracle assigns (5 × DB_BLOCK_SIZE) to INITIAL and NEXT; PCTINCREASE is 50. So the third extent would be 50 percent more than the second. The first extent is 20KB, the second is 20KB, and the third is 32KB (because the block size is 4KB).

14. B. You can change the size of a temporary file using ALTER DATABASE TEMPFILE 'file_name' RESIZE size. You cannot rename a temporary file.

15. C. You can rename the data files that belong to the SYSTEM tablespace when the database is in the MOUNT state by using the ALTER DATABASE RENAME FILE statement.

16. C. The recommended storage for a TEMPORARY tablespace is a multiple of SORT_AREA_SIZE + DB_BLOCK_SIZE. For example, if the sort area size is 100KB and the block size is 4KB, the sort extents should be sized 104KB, 204KB, 304KB, and so on. The sort is done on disk only when there is not enough space available in memory. Memory sort size is specified by the SORT_AREA_SIZE parameter. Therefore, when the sorting is done on disk, the minimum area required is as big as the SORT_AREA_SIZE, and one block is added for the overhead. The INITIAL and NEXT storage parameters should be the same for the TEMPORARY tablespace, and PCTINCREASE should be zero. You can achieve these storage settings by creating a locally managed temporary tablespace with uniform extent sizes.

17. D. You can change all the default storage parameters defined for the tablespace using the ALTER TABLESPACE command. Once objects are created, you cannot change their INITIAL and MINEXTENTS values.

18. C. The V$SORT_USAGE view provides the number of EXTENTS and the number of BLOCKS used by each sort session. This view also provides the username. It can be joined with V$SESSION or V$SQL to obtain more information on the session or the SQL statement causing the sort.

19. B, D. When you take a tablespace offline with the IMMEDIATE clause, Oracle does not perform a checkpoint and does not make sure that all data files are available. You must perform media recovery when the tablespace is brought online.

20. B, C, E. The data file names have the format ora_%t_%u.dbf, redo log files have the format ora_%g_%u.log, and control files have the format ora_%u.ctl. %t is the tablespace name and can be a maximum of 8 characters, %u is an 8-character unique string, and %g is the redo log group number.
Segments and Storage Structures

ORACLE9i DBA FUNDAMENTALS I EXAM OBJECTIVES OFFERED IN THIS CHAPTER:

Describe the logical structure of segments within the database
Describe the segment types and their uses
List the keywords that control block space usage
Obtain information about storage structures from the data dictionary
Describe the purpose of undo data
Implement Automatic Undo Management
Exam objectives are subject to change at any time without prior notice and at Oracle's sole discretion. Please visit Oracle's Training and Certification website (http://www.oracle.com/education/certification/) for the most current exam objectives listing.
Segments are logical storage units that fit between a tablespace and an extent in the logical storage hierarchy. A segment has one or more extents, and it belongs to a tablespace. This chapter covers segments, extents, and blocks in detail. It also discusses the types of segments and the type of information stored in these segments.
Oracle Objective
Describe the logical structure of segments within the database
Data Blocks
A data block is the smallest logical unit of storage in Oracle. You define the block size with the DB_BLOCK_SIZE initialization parameter when you create the database, and the block size cannot be changed. The block size is a multiple of the operating system block size and is the unit of I/O used in the database. The format of the data block is the same whether it is used to store a table, index, or cluster. A data block consists of the following:

Common and variable header The header portion contains information about the type of block and the block address. The block type can be data, index, or undo. The common block header can take 24 bytes, and the variable (transaction) header occupies (24 × INITRANS) bytes. By default, the value of INITRANS for tables is 1 and for indexes is 2.

Table directory This portion of the block has information about the tables that have rows in this block. The table directory occupies 4 bytes.

Row directory Contains information (such as the row address) about the actual rows in the block. The space allocated for the row directory is not reclaimed, even if you delete all rows in the block. The space is reused when new rows are added to the block. The row directory occupies (4 × number of rows) bytes.

Row data The actual rows are stored in this area.

Free space This space is available for new rows or for extending the existing rows through updates. Deletions and updates may cause fragmentation in the block; this free space is coalesced by the Oracle Server when deemed necessary.
The space used for the common and variable header, table directory, and row directory in a block is collectively known as the block overhead. The overhead varies, but mostly it is between 84 and 107 bytes. If more rows are inserted into the block (row directory increases) or a large INITRANS is specified (header increases), this overhead size might be greater.
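As a rough worked example (the row count is hypothetical, chosen only to illustrate the arithmetic): for a table block with the default INITRANS of 1 holding 10 rows, the overhead is approximately

24 (common header) + 24 × 1 (transaction header) + 4 (table directory) + 4 × 10 (row directory) = 92 bytes

which falls within the 84-to-107-byte range mentioned above.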
Block Storage Parameters
Oracle Objective
List the keywords that control block space usage
When you create objects such as tables or indexes, you can specify the block storage options. Choosing proper values for these storage parameters can save you a lot of space and provide better performance. The storage parameters affecting the block are as follows:

PCTFREE and PCTUSED These two space management parameters control the free space available for inserts and updates on the rows in the block. You can specify these parameters when you create an object.

INITRANS and MAXTRANS These two transaction entry parameters control the number of concurrent transactions that can modify or create data in the block. You can specify these parameters when you create an object. Based on these parameters, space is reserved in the block for transaction entries.

FREELISTS Each segment has one or more free lists that list the available blocks for future inserts. The FREELISTS parameter specifies the number of desired free lists for a segment. By default, one free list is allocated for each segment.
PCTFREE and PCTUSED

Before discussing these parameters, let's consider two important aspects of storing rows in a block: row chaining and row migration. If the table row length is bigger than a block, or if the table has LONG or LOB columns, it is difficult to fit one row entirely in one block. Oracle stores such rows in more than one block. This situation is unavoidable, and storing such rows in multiple blocks is known as row chaining. In some cases, the row will fit into a block with other rows, but due to an update activity, the row length increases and no free space remains available to accommodate the modified row. Oracle then moves the entire row from its original block to a new block, leaving a pointer in the original block to refer to the new block. This process is known as row migration. Both row migration and row chaining affect the performance of queries, because Oracle has to read more than one block to retrieve the row. You can avoid row migration if you plan the block's free space properly using the PCTFREE and PCTUSED parameters.

PCTFREE and PCTUSED are specified as percentages of the data block. PCTFREE specifies what percentage of the block should be allocated as free space for future updates. If the table can undergo a lot of updates and the updates increase the size of the row, set a higher value for the PCTFREE parameter, so that even if the row length increases due to an update, the rows are not moved out of the block (no row migration). Whenever a new row is added to a block, Oracle determines whether the free space will fall below the PCTFREE threshold. If it does, the block is removed from the free list, and the row is stored in another block.

PCTUSED specifies when the block can be considered for adding new rows. After the block becomes full as determined by the PCTFREE parameter, Oracle considers adding new rows to the block only when the used space falls below the percent value set by PCTUSED. When the used space in a block falls below the PCTUSED threshold, the block is added to the free list.

To understand the usage of the PCTFREE and PCTUSED parameters, consider an example. The table EMP is created with a PCTFREE value of 10 and a PCTUSED value of 40. When you insert rows into the EMP table, Oracle adds rows to a block until it is 90 percent full (including row data and overhead), leaving 10 percent of the block free for future updates. During an update operation, Oracle uses the free space available if the row length increases. Once no free space is available, Oracle moves the row out of the block and provides a pointer to the new location (row migration). If you delete rows from the table (or update the rows such that the row length decreases), more free space becomes available in the block. Oracle starts inserting new rows into the block only when the used space falls below PCTUSED, which is 40 percent. Therefore, when the row data and overhead occupy less than 40 percent of the block, new rows are inserted into the block. Such inserts continue until the block is 90 percent full. When the block has only PCTFREE (or less) percent free space available, it is removed from the free list. The block is added back to the free list only when the used space in the block falls below PCTUSED percent.

The default value of PCTFREE is 10, and the default for PCTUSED is 40. The sum of PCTFREE and PCTUSED cannot be more than 100. If the rows in a table are subject to a lot of updates, and the updates increase the row length, set a higher PCTFREE. If the table has a large number of inserts and deletes, and the updates do not cause the row length to increase, set PCTFREE low and PCTUSED high. A high value for PCTUSED helps to reuse the space freed by deletes faster. If the table row length is larger, or if the table rows are never updated, set PCTFREE very low so that a row can fit into one block and you fill each block. You can specify PCTFREE when you create a table, an index, or a cluster; you can specify PCTUSED when creating tables and clusters, but not indexes.
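A minimal sketch of how these parameters appear in DDL (the columns and values are illustrative, not from the exam text):

CREATE TABLE EMP (
  EMPNO  NUMBER(4),
  ENAME  VARCHAR2(30),
  SAL    NUMBER(8,2))
PCTFREE 20
PCTUSED 40
TABLESPACE APPL_DATA;

A higher PCTFREE (20 here) suits rows whose length grows with updates; a table that mostly takes inserts and deletes could use a lower PCTFREE and a higher PCTUSED instead.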
INITRANS and MAXTRANS These transaction entry settings reserve space for transactions in the block. Base these parameters on the maximum number of transactions that can touch a block at any given point in time. INITRANS reserves space in the block header for DML transaction entries. If you do not specify INITRANS, Oracle defaults the value to 1 for table data blocks and to 2 for index blocks and cluster blocks.
When multiple transactions access the data block, space is allocated in the block header for each transaction. When no pre-allocated space is available, Oracle allocates space from the free area of the block for the transaction entry. The space allocated from the free space thus becomes part of the block overhead and is never released. The MAXTRANS parameter limits the number of transaction entries that can concurrently use data in a data block. Therefore, you can limit the amount of free space that can be allocated for transaction entries in a data block by using MAXTRANS. The default value is operating system specific, and the maximum value you can specify is 255. Base the values for INITRANS and MAXTRANS on the number of transactions that can simultaneously update/insert/delete the rows in a block. If the row length is large or the number of users accessing the table is low, set INITRANS to a low value. Some tables, such as an application’s control tables, are accessed frequently by the users, and chances are high that more than one user can access a block simultaneously to update, insert, or delete. If a sufficient amount of transaction entry space is not reserved, Oracle dynamically allocates transaction entry space from the free space available in the block (this is an expensive operation, and the space allocated in this way cannot be reclaimed). When you set MAXTRANS, Oracle limits the number of transaction entries in a block. You can specify INITRANS and MAXTRANS when you create a table, an index, or a cluster. Set a higher INITRANS value for tables and indexes that are queried most often by the application, such as application control tables.
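These settings are part of the object DDL, just like PCTFREE and PCTUSED. A minimal sketch (the table name and values are illustrative, not from the exam text):

CREATE TABLE APP_CONTROL (
  PARAM_NAME  VARCHAR2(30),
  PARAM_VALUE VARCHAR2(100))
INITRANS 4
MAXTRANS 16
TABLESPACE APPL_DATA;

Here four transaction entries are pre-allocated in each block header, and no more than 16 transactions can concurrently hold entries in any one block of this table.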
Automatic Space Management

If a segment will not contain LOBs (Large OBject types containing large blocks of unstructured data, such as Binary Large OBjects [BLOBs] or Character Large OBjects [CLOBs]), there is an alternative to using PCTUSED and FREELISTS to manage data blocks: Automatic Space Management. In short, bitmaps are used instead of free lists to manage free and used space. The advantages are many. You no longer have to guess at optimal values for PCTUSED and FREELISTS. The space is managed more efficiently, and the performance is greatly enhanced for many INSERT statements occurring concurrently on the same segment. Other than the restrictions on segments containing LOBs, the only other major restriction is on the tablespace that contains the segments that will be automatically managed: the tablespace has to be locally managed, and the automatic management is defined for the entire tablespace and can't be enabled for individual segments.
The following is an example of a statement that creates a tablespace with automatic segment space management:

CREATE TABLESPACE APPL_DATA2
DATAFILE '/disk4/oradata/DB01/appl_data02.dbf' SIZE 200M
EXTENT MANAGEMENT LOCAL UNIFORM SIZE 512K
SEGMENT SPACE MANAGEMENT AUTO;
Limitations of OEM

You are a busy Oracle DBA, and you like to use OEM for most of your day-to-day tasks. You want to make your life even easier by creating new tablespaces whose segment space is automatically managed, so you bring up OEM and browse the tablespaces. Right-clicking Tablespaces, you select Create…, and you find out that there is no option for setting segment space management! In a mild panic, you dig through your documentation and manually construct a CREATE TABLESPACE statement to create the tablespace with the desired characteristics.

OEM doesn't always cover every possible option when creating database objects, so it's important to keep your command-line SQL*Plus skills sharp. In this scenario, you could set all the basic options for the creation of the tablespace and then click the Show SQL button to at least give you the basics for running the command manually. In fact, for any database operation, it's a good idea to click the Show SQL button to make sure you know what's going on behind the scenes, as well as stay on top of your SQL DDL syntax.
Extents
An extent is a logical storage unit that is made up of contiguous data blocks. An extent is first allocated when a segment is created, and subsequent extents are allocated when all the blocks in the segment are full. Oracle can manage the extent allocation and free-space information through the data dictionary or locally by using bitmaps in the data files. Dictionary-managed tablespaces and locally managed tablespaces are discussed in Chapter 6, "Logical and Physical Database Structures." You have also seen the parameters that control the size of the extents. To refresh your memory, these are as follows (a sketch of the STORAGE clause that carries them follows this list):

INITIAL The first extent size for a segment, allocated when the segment (object) is first created.

NEXT The second extent size for a segment.

PCTINCREASE The percentage by which each extent should grow over the previously allocated extent size. This parameter affects the third extent onward in a segment.

MINEXTENTS The minimum number of extents to be allocated when creating the segment.

MAXEXTENTS The maximum number of extents that are allowed in a segment. You can remove the extent limit by specifying UNLIMITED.

When the extents are managed locally, the storage parameters do not affect the size of the extents. For locally managed tablespaces, you can either have uniform extent sizes or variable extent sizes managed completely by Oracle. Once an object (such as a table or an index) is created, its INITIAL and MINEXTENTS values cannot be changed. Changes to NEXT and PCTINCREASE take effect when the next extent is allocated for the object; already allocated extent sizes are not changed.
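In a dictionary-managed tablespace, these parameters appear in the STORAGE clause of the object DDL. A minimal sketch (the table name and sizes are illustrative):

CREATE TABLE ORDERS (
  ORDER_ID NUMBER,
  STATUS   VARCHAR2(10))
TABLESPACE SJC_DATA
STORAGE (INITIAL 256K NEXT 256K
         MINEXTENTS 1 MAXEXTENTS UNLIMITED
         PCTINCREASE 0);

With PCTINCREASE 0, every extent after the first is the same 256KB size, which keeps freed space easier to reuse.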
The header block of each segment contains a directory of the extents in that segment.
Allocating Extents

Oracle allocates an extent when an object is first created or when all the blocks in the segment are full. For example, when you create a table, contiguous blocks specified by INITIAL are allocated for the table. If the MINEXTENTS value is more than 1, that many extents are allocated at the time of creation. Even though the table has no data, space is allocated for the table. When all the blocks allocated for the table are completely filled, Oracle allocates another extent. The size of this extent depends on the values of the NEXT and PCTINCREASE parameters.

New extents in locally managed tablespaces are allocated by searching the data file's bitmap for the amount of contiguous free space required. Oracle looks at each file's bitmap to find contiguous free space; Oracle returns an error if none of the files have enough free space.

In dictionary-managed tablespaces, Oracle allocates extents based on the following rules:

1. If the extent requested is more than 5 data blocks, Oracle adds one more block to reduce internal fragmentation. For example, if the number of blocks requested is 24, Oracle adds one more block and searches the tablespace where the segment belongs for a free extent with 25 blocks.

2. If an exact match fails, Oracle searches the contiguous free blocks again for a free extent larger than the required value. When it finds one, Oracle allocates the entire extent for the segment if the number of blocks above the required size is less than or equal to 5 blocks. Using our example, if the free contiguous extent found is 28 blocks, Oracle allocates all 28 blocks to the segment to eliminate fragmentation. If the number of blocks above the required size is more than 5 blocks, Oracle breaks the free extent into two and allocates the required space for the segment; the rest of the contiguous blocks are added to the free list. In our example, if the free extent size is 40 blocks, Oracle allocates 25 blocks to the segment as an extent, and the remaining 15 blocks are marked as a free extent.

3. If step 2 fails, Oracle coalesces the free space in the tablespace and repeats step 2.

4. If step 3 fails, Oracle checks whether the files are defined as auto-extensible; if so, Oracle tries to extend the file and repeats step 2. If Oracle cannot extend the file or cannot allocate an extent even after resizing the data file to its maximum specified size, Oracle issues an error and does not allocate an extent to the segment.

Extents are normally de-allocated when you drop an object. To free up the extents allocated to a table or a cluster, use the TRUNCATE ... DROP STORAGE command to remove all rows (see the sketch that follows). The TRUNCATE command can be used to remove all rows from a table or cluster. The DROP STORAGE clause is the default, and it removes all the extents above MINEXTENTS after removing all rows. The REUSE STORAGE clause does not de-allocate extents; it just removes all the rows from the table or cluster. Rows deleted using the TRUNCATE command cannot be rolled back. Deleting rows by using DELETE does not free up the extents. You can also manually de-allocate unused extents by using the command ALTER TABLE/INDEX/CLUSTER ... DEALLOCATE UNUSED (discussed in Chapter 8, "Managing Tables, Indexes, and Constraints").
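A short sketch of the space-releasing forms just described (the table name is illustrative):

-- Remove all rows and release extents above MINEXTENTS
-- (DROP STORAGE is the default)
TRUNCATE TABLE ORDERS DROP STORAGE;

-- Remove all rows but keep the allocated extents for future inserts
TRUNCATE TABLE ORDERS REUSE STORAGE;

-- De-allocate unused space above the high-water mark without removing rows
ALTER TABLE ORDERS DEALLOCATE UNUSED;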
Querying Extent Information

You can query extent information from the data dictionary by using the following views.
Oracle Objective
Obtain information about storage structures from the data dictionary
DBA_EXTENTS

This view lists the extents allocated in the database for all segments. It shows the size, segment name, and tablespace name where it resides.

SQL> select owner, segment_type, tablespace_name, file_id, bytes
  2  from dba_extents where owner='HR';

OWNER   SEGMENT_TYPE   TABLESPACE_NAME    FILE_ID    BYTES
------- -------------- ----------------- -------- --------
HR      TABLE          EXAMPLE                  5    65536
HR      TABLE          EXAMPLE                  5    65536
HR      TABLE          EXAMPLE                  5    65536
HR      TABLE          EXAMPLE                  5    65536
HR      TABLE          EXAMPLE                  5    65536
HR      TABLE          EXAMPLE                  5    65536
HR      INDEX          EXAMPLE                  5    65536
HR      INDEX          EXAMPLE                  5    65536
DBA_FREE_SPACE

This view lists information about the free extents in a tablespace.

SQL> select tablespace_name, max(bytes) largest,
  2         min(bytes) smallest, count(*) ext_count
  3  from dba_free_space
  4  group by tablespace_name;

TABLESPACE_NAME      LARGEST   SMALLEST  EXT_COUNT
----------------- ---------- ---------- ----------
CWMLITE             14680064   14680064          1
DRSYS               12845056   12845056          1
EXAMPLE               196608     196608          1
INDX                26148864   26148864          1
OEM_REPOSITORY      36634624   36634624          1
SYSTEM              85872640   85872640          1
TOOLS                4390912    4390912          1
UNDOTBS            207880192      65536          8
USERS               26083328   26083328          1
All free space in the operating system files must be represented in either DBA_FREE_SPACE or DBA_EXTENTS.
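You can check that note directly: for a given tablespace, the bytes in DBA_EXTENTS plus the bytes in DBA_FREE_SPACE should account for the data file sizes, less a small amount of file overhead such as the file header. A hedged sketch (USERS is just an example):

SQL> SELECT (SELECT SUM(BYTES) FROM DBA_EXTENTS
  2          WHERE TABLESPACE_NAME = 'USERS') USED,
  3         (SELECT SUM(BYTES) FROM DBA_FREE_SPACE
  4          WHERE TABLESPACE_NAME = 'USERS') FREE,
  5         (SELECT SUM(BYTES) FROM DBA_DATA_FILES
  6          WHERE TABLESPACE_NAME = 'USERS') TOTAL
  7  FROM DUAL;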
Segments
A segment is a logical storage unit that is made up of one or more extents. Every object in the database that requires space to store data is allocated a segment. The size of the segment is the total of the size of all extents in that segment. When you create a table, an index, a cluster, or a materialized view (snapshot), a segment is allocated for the object (for partitioned tables and indexes, a segment is allocated for each partition). A segment can belong to only one tablespace, but may spread across multiple data files belonging to the tablespace.

There are many types of segments:

Table This is the most common type of segment in a database. All data in a table segment must reside in the same tablespace, unlike partitioned tables. Tables that have LOB or VARRAY columns do not store these columns in the same segment.

Table Partition To support large enterprise databases that need high levels of availability and performance, a table may be split into partitions, stored in separate tablespaces. The partitions may be accessed by a distinct key range (range partitioning), by a hashing algorithm (hash partitioning), or by both. Each part of the table that resides in a different tablespace is considered a segment.

Cluster As the name implies, a cluster segment is a single segment that is composed of one or more tables. The data is stored in key order, and all tables within the cluster have the same storage characteristics. Typically, tables stored in a cluster are joined; for example, an order table and a line-item table.

Nested Table If a table has columns that are tables themselves (nested tables), each such column is stored in its own segment. Each segment may have its own storage parameters.

Index All index entries for a table index are stored in the same segment.

Index Organized Table (IOT) An IOT segment is essentially a table and an index combined into a single segment, stored in index order. Access to an IOT is very fast because a query accessing a particular row need only traverse one segment to find the results.

Index Partition An index partition segment is similar to a table partition segment in that the index segments are usually stored in separate tablespaces to enhance availability, performance, and scalability.

Temporary In a nutshell, temporary segments hold overflow information from sort operations that don't fit in memory. User-initiated sort operations are usually the result of DML operations such as CREATE INDEX, SELECT ... GROUP BY, or SELECT DISTINCT. These segments are
allocated in the temporary tablespace assigned to the user that runs these statements. Since the activity from these operations causes frequent allocation and de-allocation, it is recommended that a separate tablespace be allocated just for temporary segments. Having a separate tablespace prevents fragmentation on the SYSTEM or other application tablespaces. Entries made to the temporary segment blocks are not recorded in the redo log files. LOB For LOBs in a table that are larger than about 4 KB, space is allocated in a LOB segment, separate from the segment containing the elements of the rest of the table. The only piece of information remaining in the table for a LOB is a pointer to the segment containing the LOB itself. Undo Transactions that change rows in a table also store information in undo segments, specifically, the information that would be needed to restore the row to its original state in case of a rollback or an instance failure. For automatic undo management, all user undo information must reside in an undo tablespace. Bootstrap A special system segment that is used to initialize the data dictionary upon instance startup. It cannot be queried, needs no maintenance, and is basically transparent to all users and administrators of the database.
Segment Storage Parameters Storage parameters specified for a segment take precedence over storage parameters specified at the tablespace or database level. If no segment-level storage parameters are specified, the default tablespace parameters are used; if these do not exist, the database server defaults are used.
Changes to storage parameters at any of these three levels will only affect new extents and will not affect any extents in existing segments.
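For example, a table created without a STORAGE clause simply inherits the tablespace defaults, while an explicit STORAGE clause overrides them. A minimal sketch (all names are illustrative, and the tablespace is assumed to be dictionary managed so that DEFAULT STORAGE applies):

CREATE TABLESPACE DEMO_DATA
  DATAFILE 'demo01.dbf' SIZE 50M
  DEFAULT STORAGE (INITIAL 64K NEXT 64K PCTINCREASE 0);

-- Inherits INITIAL 64K / NEXT 64K from the tablespace defaults
CREATE TABLE DEMO1 (ID NUMBER) TABLESPACE DEMO_DATA;

-- Segment-level parameters take precedence over the defaults
CREATE TABLE DEMO2 (ID NUMBER) TABLESPACE DEMO_DATA
  STORAGE (INITIAL 1M NEXT 1M);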
Querying Segment Information You can obtain segment information from the data dictionary by using the following views.
DBA_SEGMENTS This view shows the segments created in the database, their size, tablespace, type, storage parameters, and so on. Notice that the LOB segment types are listed as LOBINDEX for the index and LOBSEGMENT for the data.

SQL> select tablespace_name, segment_type, count(*) seg_cnt
  2  from dba_segments
  3  where owner != 'SYS'
  4  group by tablespace_name, segment_type;

TABLESPACE_NAME    SEGMENT_TYPE        SEG_CNT
-----------------  ----------------  ---------
CWMLITE            INDEX                    67
CWMLITE            TABLE                    28
DRSYS              INDEX                    76
DRSYS              LOBINDEX                  2
DRSYS              LOBSEGMENT                2
DRSYS              TABLE                    43
EXAMPLE            INDEX                   132
EXAMPLE            INDEX PARTITION          84
EXAMPLE            LOBINDEX                 23
EXAMPLE            LOBSEGMENT               23
EXAMPLE            NESTED TABLE              3
EXAMPLE            TABLE                    61
EXAMPLE            TABLE PARTITION          24
SYSTEM             INDEX                   181
SYSTEM             INDEX PARTITION          17
SYSTEM             LOBINDEX                 22
SYSTEM             LOBSEGMENT               22
SYSTEM             TABLE                   150
SYSTEM             TABLE PARTITION          19
TOOLS              INDEX                    63
TOOLS              TABLE                    29
USERS              TABLE                     1
V$SORT_SEGMENT This view contains information about every sort segment in a given instance. The view is updated only when the tablespace is of the TEMPORARY type. It shows the number of active users, sort segment size, extents used, extents not used, and so on.

SQL> select tablespace_name, extent_size, current_users,
  2  total_blocks, used_blocks, free_blocks, max_blocks
  3  from v$sort_segment;

TABLESPACE_NAME  EXTENT_SIZE  CURRENT_USERS  TOTAL_BLOCKS  USED_BLOCKS  FREE_BLOCKS  MAX_BLOCKS
---------------  -----------  -------------  ------------  -----------  -----------  ----------
TEMP                       8              0          1552            0         1552        1552
Managing Undo Segments
Undo segments record the old values of data that were changed by a transaction. Undo segments provide read consistency and the ability to undo changes, and they assist in crash recovery. Information in an undo segment consists of several entries called undo entries. Before updating or deleting rows, Oracle stores the row as it existed before the operation (known as the before-image data) in an undo segment. An undo entry consists of the before-image data along with the block ID and data file number. The undo entries that belong to a transaction are all linked together so that the transaction can be rolled back if necessary. The data block header is also updated with the undo segment information to identify where to find the undo information. This information provides a read-consistent view of the data at a given point in time. The changes made by a serial transaction are stored in a single undo segment. When the transaction is complete (either by a COMMIT or by a ROLLBACK), Oracle finds a new undo segment for the session's next transaction.
When a user performs an update or a delete operation, the before-image data is saved in the undo segments; then the blocks corresponding to the data are modified. For inserts, the undo entries include the ROWID of the row inserted, because to undo an insert operation, the rows inserted must be deleted. If the transaction modifies an index, the old index keys also will be stored in the undo segments. The undo segments are freed when the transaction ends, but the undo information is not destroyed immediately. The undo segments are used to provide a read-consistent view of relevant data for queries in other sessions that started before the transaction is committed. Oracle records changes to the original data block and undo segment block in the redo log. This second recording of the undo information is important for transactions that are not yet committed or rolled back at the time of a system crash. If a system crash occurs, Oracle automatically restores the undo segment information, including the undo entries for active transactions, as part of instance or media recovery. Once the recovery is complete, Oracle performs the actual rollbacks of transactions that had been neither committed nor rolled back at the time of the system crash.
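The rollback mechanics are easy to observe in a session. A trivial sketch, assuming the familiar SCOTT.EMP sample table:

UPDATE emp SET sal = sal * 2 WHERE empno = 7839;   -- before-image written to an undo segment
ROLLBACK;                                          -- before-image reapplied from the undo segment
SELECT sal FROM emp WHERE empno = 7839;            -- shows the original salary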
Creating Undo Segments When you create the database, Oracle creates the SYSTEM undo segment in the SYSTEM tablespace. If the database contains more than one tablespace, it should also have an undo tablespace for undo segments other than the SYSTEM undo segment. Oracle uses the SYSTEM undo segment primarily for transactions involving objects in the SYSTEM tablespace. For changes involving objects in non-SYSTEM tablespaces, use undo segments in an undo tablespace.
Oracle Objective
Implement Automatic Undo Management
Although multiple undo tablespaces can exist in a database, only one can be active at any given time. The currently active undo tablespace must be large enough to handle the workload for all concurrent transactions. Two initialization parameters control the use of automatic undo management in the database: UNDO_MANAGEMENT and UNDO_TABLESPACE. The
parameter UNDO_MANAGEMENT can be set to AUTO or MANUAL and cannot be dynamically altered after the database is started. The parameter UNDO_TABLESPACE specifies the tablespace to be used for undo segments, and unlike the UNDO_MANAGEMENT parameter, it can be changed dynamically while the instance is running. If no undo tablespace is specified at system startup, the Oracle Server will automatically create one called SYS_UNDOTBS, with a system-assigned data file name in the directory $ORACLE_HOME/dbs.
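Taken together, a minimal initialization parameter setup for automatic undo management might look like the following sketch (the tablespace name is illustrative):

# init.ora entries
UNDO_MANAGEMENT = AUTO        # cannot be changed while the instance is running
UNDO_TABLESPACE = UNDOTBS     # can be switched later with ALTER SYSTEM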
Maintaining Undo Segments After you create the database, you can create additional undo tablespaces, as in the following example:

CREATE UNDO TABLESPACE SYS_UNDOTBS_NIGHT
DATAFILE 'undo2.dbf' SIZE 15M;

Figure 7.1 shows how to create an undo tablespace using OEM.

FIGURE 7.1 Creating an undo tablespace using OEM
Most of the other clauses that apply to regular tablespaces also apply to undo tablespaces, such as ADD DATAFILE, ONLINE/OFFLINE, BEGIN BACKUP, END BACKUP, and RENAME. Switching undo tablespaces is also very straightforward:

ALTER SYSTEM SET UNDO_TABLESPACE = SYS_UNDOTBS_NIGHT;

An undo tablespace can be dropped like any other tablespace, but not until all transactions within the tablespace are complete. First, specify a new undo tablespace as in the previous example. To see if the tablespace has any pending transactions, run the following query:

SQL> select rn.name, rs.status
  2  from v$rollname rn, v$rollstat rs
  3  where rn.name in
  4  (select segment_name from dba_segments
  5  where tablespace_name = 'UNDOTBS')
  6  and rn.usn = rs.usn;

NAME                           STATUS
------------------------------ ---------------
_SYSSMU2$                      PENDING OFFLINE
_SYSSMU8$                      PENDING OFFLINE

If rows with a status of PENDING OFFLINE are returned from this query, the undo tablespace cannot be dropped.

An undo tablespace may need to be enlarged to support long-running queries against the database that need consistent reads. These queries need the original values of the rows of a table, even though another transaction may have changed and already committed rows in the same table; undo segments provide the mechanism to save the original values of the rows and provide this read consistency. The amount of time that undo data is retained for consistent reads is controlled with the initialization parameter UNDO_RETENTION, specified in seconds.

To control system resources and prevent individual users or groups of users from using too much undo space, you can use Resource Manager to place limits on a resource group. Specify the Resource Manager parameter UNDO_POOL. The default value for this parameter is UNLIMITED. The following is an example of specifying UNDO_POOL:

EXEC DBMS_RESOURCE_MANAGER.CREATE_PLAN_DIRECTIVE (PLAN => 'QPS',
     GROUP_OR_SUBPLAN => 'DirectMarketing',
     COMMENT => 'Restrict undo space usage',
     SWITCH_TIME => 3, SWITCH_ESTIMATE => TRUE,
     CPU_P1 => 60, UNDO_POOL => 450);

In this example, the resource group DirectMarketing under the resource plan QPS will be limited to a total of 450KB of undo information.
Snapshot Too Old Error An ORA-01555 "snapshot too old" error occurs when Oracle cannot produce a read-consistent view of the data. This error usually happens when a transaction commits after a long-running query has started, and the undo information is overwritten or the undo extents are de-allocated. Here's an example. User SCOTT has updated the EMP table and has not committed the changes. The old values of the rows updated by SCOTT are written to an undo segment. When user JAKE queries the EMP table, Oracle uses the undo segment to produce a read-consistent view of the table. If JAKE initiated a long query, Oracle fetches the blocks in multiple iterations. SCOTT can commit his transaction, at which point his undo entries are marked committed. If another transaction then overwrites those committed undo entries, JAKE's query can no longer reconstruct the view of the EMP table as of the time it started, and Oracle raises the snapshot too old error. You can reduce the chances of this error by estimating the amount of undo activity during peak usage periods and adjusting the size of the undo tablespace accordingly.
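Besides enlarging the undo tablespace, you can lengthen the period for which committed undo is retained, provided the tablespace has room for it. A sketch (14,400 seconds keeps undo for four hours; the Oracle9i default is 900 seconds):

ALTER SYSTEM SET UNDO_RETENTION = 14400;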
Querying Undo Information You can query the following dictionary views to obtain information about the undo segments and transactions.
DBA_ROLLBACK_SEGS This view provides information about all undo segments (online or offline), their status, tablespace name, sizes, and so on. Note that some of the following code has been reformatted to fit on our page. The table will appear as a single, long table on your screen.

SQL> select segment_name, owner, tablespace_name, initial_extent ini,
  2  next_extent next, min_extents min, status stat
  3  from dba_rollback_segs;
V$ROLLNAME This view lists all online undo segments. The USN is the undo segment number, which can be used to join with the V$ROLLSTAT view.

SQL> select * from v$rollname;

       USN  NAME
----------  -------------------------
         0  SYSTEM
         1  _SYSSMU1$
         2  _SYSSMU2$
         3  _SYSSMU3$
         4  _SYSSMU4$
V$ROLLSTAT This view lists the undo statistics. You can join this view with the V$ROLLNAME view to get the undo segment name. The view shows the segment size, OPTIMAL value, number of shrinks since instance startup, number of active transactions, extents, status, and so on.

SQL> select * from v$rollstat
  2  where usn = 1;

USN
---
  1
(remaining columns of the output are not shown)
V$UNDOSTAT This view collects 10-minute snapshots that reflect the performance of the undo tablespace, to aid in adjusting the undo tablespace size to support changing system load requirements.

SQL> select begin_time, end_time, undoblks, maxquerylen
  2  from v$undostat;

BEGIN_TIME
------------------
19-OCT-01 8:05:01
19-OCT-01 7:55:01
19-OCT-01 7:45:01
19-OCT-01 7:35:01
19-OCT-01 7:25:01
19-OCT-01 7:15:01
(remaining columns of the output are not shown)
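A common use of this view is a back-of-the-envelope estimate of the undo space needed to honor UNDO_RETENTION at peak load. A sketch (each V$UNDOSTAT row covers a 10-minute, or 600-second, interval):

SQL> select (select value from v$parameter where name = 'undo_retention')
  2       * (select max(undoblks) / 600 from v$undostat)
  3       * (select value from v$parameter where name = 'db_block_size') undo_bytes
  4  from dual;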
Summary

This chapter discussed the logical storage structures in detail. A data block is the smallest logical storage unit in Oracle. The data block overhead is the space used to store the block information and row information. The overhead consists of a common and variable header, a table directory, and a row directory. The rows are stored in the row data area, and the free space is the space available to accommodate new rows or to let existing rows expand. The free space can be managed by two parameters: PCTFREE and PCTUSED. PCTFREE determines the amount of free space that should be maintained in a block for future row expansion due to updates. When the free space in a block falls to the PCTFREE threshold, the block is removed from the free list. The block is added back to the free list when the used space drops below PCTUSED. The INITRANS and MAXTRANS parameters specify the concurrent transactions that can access a block. INITRANS reserves transaction space for the specified number of transactions, and MAXTRANS specifies the maximum number of concurrent transactions for the block. Extents are logical storage units consisting of contiguous blocks. Extent sizes are specified by the INITIAL, NEXT, and PCTINCREASE parameters.
The minimum value of INITIAL should be two blocks. A segment consists of one or more extents, and there are four main types of segments. Data segments store table rows. Index segments store index keys. (The data and index segments used to store LOB or VARRAY datatypes are known as LOB segments and LOB index segments, respectively.) Temporary segments are used for sort operations, and undo segments are used to store undo information. When the database is created, Oracle creates a SYSTEM undo segment. You should create an undo tablespace for the non-SYSTEM undo segments. Undo tablespaces are similar to other tablespaces in that they can be modified, added, dropped, backed up, and switched. You can control the amount of undo space used by a user or consumer group with the Resource Manager directive UNDO_POOL. DBA_EXTENTS and DBA_SEGMENTS are views that can be queried to get information on extents and segments. Undo segment information can be queried from the DBA_ROLLBACK_SEGS, V$ROLLNAME, V$ROLLSTAT, and V$UNDOSTAT views.
Exam Essentials

Understand what a segment is and how it fits into the hierarchy of logical database objects.  Enumerate the different types of segments in the database, and explain how they are used to enhance the functionality and availability of the database.

Describe how the storage clause is applied to different database objects.  Know when to specify storage parameters and how storage parameters are determined if they are omitted at segment creation.

List the components of a database block and their attributes.  Describe the purpose of each database block component and explain how each of the storage parameters changes the characteristics of each component.

Understand the features and benefits of automatic segment space management.  Describe the benefits of automatic segment space management over manual space configuration. Be able to create a tablespace that is managed automatically. Describe how the server process allocates and frees blocks, and describe the mechanism used for block management.

Enumerate the key data dictionary views used to manage segments and extents.  Be able to retrieve the amount of free and used space in the database.

Understand the purpose and structure of undo data.  Define undo data and explain how it aids in rollback and instance recovery. Describe the structure of an undo segment and how data is written to and read from an undo segment.

Differentiate between SYSTEM and non-SYSTEM undo segments.  Identify the locations where these two types of segments are stored, and discuss which Oracle processes use them.

Identify initialization parameters used for undo management.  List the initialization parameters used for undo management and the possible values, and explain when these parameters can be changed.

Be able to create and maintain undo segments.  Understand when an undo tablespace can be created and how the server can automatically create an undo tablespace. Identify the valid operations on an undo tablespace, and understand the scenarios under which switching undo tablespaces is advantageous.

Know how to monitor undo tablespace usage.  Identify the views used to extract undo segment usage for the purpose of optimizing the size of the undo tablespace. Be able to restrict the amount of undo space used by a user or group.
Key Terms
Before you take the exam, make sure you're familiar with the following terms:

Automatic Space Management
Review Questions

1. Place the following logical storage structures in order, from the smallest logical storage unit to the largest.
   A. Segment
   B. Block
   C. Tablespace
   D. Extent

2. When a table is updated, where is the before-image information (which can be used for undoing the changes) stored?
   A. Temporary segment
   B. Redo log buffer
   C. Undo buffer
   D. Rollback segment

3. Which parameter specifies the number of transaction slots in a data block?
   A. MAXTRANS
   B. INITRANS
   C. PCTFREE
   D. PCTUSED

4. Select the statement that is not true regarding undo tablespaces.
   A. Undo tablespaces will not be created if they are not specified in the CREATE DATABASE statement.
   B. Two undo tablespaces may be active if a new undo tablespace was specified and there are pending transactions on the old one.
   C. You can switch from one undo tablespace to another.
   D. UNDO_MANAGEMENT cannot be changed dynamically while the instance is running.

5. Which of the following database objects consists of more than one segment?
   A. Nested Table
   B. Partitioned table
   C. Index Partition
   D. Undo segment
   E. None of the above

6. Which of the following segment allocation parameters is ignored when automatic segment space management is in effect for a tablespace?
   A. FREELISTS
   B. PCTFREE
   C. INITRANS
   D. MAXTRANS

7. Which data dictionary view would you query to see the free extents in a tablespace?
   A. DBA_TABLESPACES
   B. DBA_FREE_SPACE
   C. DBA_EXTENTS
   D. DBA_SEGMENTS

8. Which two data dictionary views can account for the total amount of space in a data file?
   A. DBA_FREE_SEGMENTS
   B. DBA_FREE_SPACE
   C. DBA_SEGMENTS
   D. DBA_EXTENTS

9. Which portion of the data block stores information about the table having rows in this block?
   A. Common and variable header
   B. Row directory
   C. Table directory
   D. Row data

10. When does Oracle stop adding rows to a block?
    A. When free space reaches the PCTFREE threshold
    B. When row data reaches the PCTFREE threshold
    C. When free space drops below the PCTUSED threshold
    D. When row data drops below the PCTUSED threshold

11. What main restriction is placed on tablespaces defined with automatic segment space management?
    A. The tablespace cannot contain nested tables.
    B. The tablespace cannot be transportable.
    C. The tablespace cannot contain LOBs.
    D. The bootstrap segment cannot reside in a tablespace that has automatic segment space management enabled.

12. Which dynamic performance view can help you adjust the size of an undo tablespace?
    A. V$UNDOSTAT
    B. V$ROLLSTAT
    C. V$SESSION
    D. V$ROLLNAME

13. What is the default value of PCTFREE?
    A. 40
    B. 0
    C. 100
    D. 10

14. Which data dictionary view can you query to see the OPTIMAL value for a rollback segment?
    A. DBA_ROLLBACK_SEGS
    B. V$ROLLSTAT
    C. DBA_SEGMENTS
    D. V$ROLLNAME

15. What is row migration?
    A. A single row spread across multiple blocks
    B. Moving a table from one tablespace to another
    C. Storing a row in a different block when there is not enough room in the current block for the row to expand
    D. Deleting a row and adding it back to the same table

16. What can cause the Snapshot too old error?
    A. Smaller rollback extents
    B. Higher MAXEXTENTS value
    C. Larger rollback extents
    D. Higher OPTIMAL value

17. The sum of the values PCTFREE and PCTUSED cannot exceed which of the following:
    A. 255
    B. DB_BLOCK_SIZE
    C. The maximum is operating system dependent.
    D. 100

18. Which of the following statements may require a temporary segment?
    A. CREATE TABLE
    B. CREATE INDEX
    C. UPDATE
    D. CREATE TABLESPACE

19. How does Oracle determine the extent sizes for a temporary segment?
    A. From the initialization parameters
    B. From the tables involved in the sort operation
    C. Using the default storage parameters for the tablespace
    D. The database block size

20. Fill in the blank: The parameter MAXTRANS specifies the maximum number of concurrent transactions per __________.
    A. Table
    B. Segment
    C. Extent
    D. Block
Answers to Review Questions

1. B, D, A, and C. A data block is the smallest logical storage unit in Oracle. An extent is a group of contiguous blocks. A segment consists of one or more extents. A segment can belong to only one tablespace. A tablespace can have many segments.

2. D. Before any DML operation, the undo information (before-image of data) is stored in the undo segments. This information is used to undo the changes and to provide a read-consistent view of the data.

3. B. INITRANS specifies the number of transaction slots in a data block. Oracle uses a transaction slot when the data block is being modified. INITRANS reserves space for the transactions in the block. MAXTRANS specifies the maximum number of concurrent transactions allowed in the block. The default for a block in a data segment is 1, and the default for a block in an index segment is 2.

4. A. If a specific undo tablespace is not defined in the CREATE DATABASE statement, Oracle automatically creates one with the name SYS_UNDOTBS.

5. B. A partitioned table consists of multiple table partition segments in different tablespaces.

6. A. Enabling automatic segment space management uses bitmaps instead of freelists to manage free space.

7. B. DBA_FREE_SPACE shows the free extents in a tablespace. DBA_EXTENTS shows all the extents that are allocated to a segment.

8. B, D. The sum of the free space in DBA_FREE_SPACE plus the space allocated for extents in DBA_EXTENTS should add up to the total space specified for that tablespace. DBA_FREE_SEGMENTS is not a valid data dictionary view, and DBA_SEGMENTS only contains the number of extents and blocks allocated to each segment.

9. C. The table directory portion of the block stores information about the table having rows in the block. The row directory stores information such as row address and size of the actual rows stored in the row data area.

10. A. The PCTFREE and PCTUSED parameters are used to manage the free space in the block. Oracle inserts rows into a block until the free space falls below the PCTFREE threshold. PCTFREE is the amount of space reserved for future updates. Oracle considers adding more rows to the block only when the used space falls below the PCTUSED threshold.

11. C. Table segments that have LOBs cannot reside in a locally managed tablespace that has automatic segment space management enabled.

12. A. The V$UNDOSTAT view, in conjunction with the values of the UNDO_RETENTION and DB_BLOCK_SIZE parameters, can be used to calculate an optimal undo tablespace size when database activity is at its peak.

13. D. The default value of PCTFREE is 10, and the default for PCTUSED is 40.

14. B. You can query the OPTIMAL value from the V$ROLLSTAT view. This view does not show the offline rollback segments.

15. C. Row migration is the movement of a row from one block to a new block. Row migration occurs when a row is updated and its new size cannot fit into the free space of the block; Oracle moves the row to a new block, leaving a pointer in the old block to the new block. You can avoid this problem by either setting a higher PCTFREE value or specifying a larger block size at database creation.

16. A. Smaller rollback extents can cause the Snapshot too old error if there are long-running queries in the database.

17. D. These two numbers are percentages that are defined as percentages of a given block, and since these areas cannot overlap, the sum cannot be greater than 100 percent.

18. B. Operations that require a sort may need a temporary segment (when the sort operation cannot be completed in the memory area specified by SORT_AREA_SIZE). Queries that use DISTINCT, GROUP BY, ORDER BY, UNION, INTERSECT, or MINUS clauses also need a sort of the result set.

19. C. The default storage parameters for the tablespace determine the extent sizes for temporary segments.

20. D. MAXTRANS specifies the maximum allowed concurrent transactions per block.
Managing Tables, Indexes, and Constraints

ORACLE9i DBA FUNDAMENTALS I EXAM OBJECTIVES OFFERED IN THIS CHAPTER:

Identify the various methods of storing data
Describe Oracle datatypes
Distinguish between an extended versus a restricted ROWID
Describe the structure of a row
Create regular and temporary tables
Manage storage structures within a table
Reorganize, truncate, drop a table
Drop a column within a table
List different types of indexes and their uses
Create various types of indexes
Reorganize indexes
Drop indexes
Get index information from the data dictionary
Monitor the usage of an index
Implement data integrity constraints
Maintain integrity constraints
Obtain constraint information from the data dictionary
Exam objectives are subject to change at any time without prior notice and at Oracle's sole discretion. Please visit Oracle's Training and Certification website (http://www.oracle.com/education/certification) for the most current exam objectives listing.
The previous chapters have discussed Oracle's architecture: the physical and logical structures of the database. Data is stored in Oracle as rows and columns. This chapter covers the options available when creating tables, shows how to quickly retrieve data by using indexes, and discusses how the Oracle database can enforce business rules by using integrity constraints.
Some of the material in this chapter is similar to material in Chapter 7 of Sybex’s OCA/OCP: Introduction to Oracle9i SQL Study Guide. For more in-depth information about the objectives covered in the Introduction to Oracle9i: SQL exam, see OCA/OCP: Introduction to Oracle9i SQL Study Guide by Chip Dawes and Biju Thomas, Sybex 2002.
Storing Data
Oracle Objective
Identify the various methods of storing data
A table is the basic form of data storage in Oracle. You can think of a table as a spreadsheet having column headings and many rows of information. A schema, or a user, in the database owns the table. The table columns
have a defined datatype; the data stored in the columns must satisfy the characteristics of the column. You can also define rules for storing data in the columns by using integrity constraints. Oracle9i has various types of tables to suit your data storage needs. By default, a table means a relational permanent table. The following types of tables are available in Oracle9i to store data.

Relational  Simply known as a table, the relational table is the most common method for storing data. These tables are permanent and can be partitioned. When you partition a table, you break it into multiple smaller pieces, which improves performance and makes the table easier to manage. To create a relational table, you use the CREATE TABLE ... ORGANIZATION HEAP statement. Since ORGANIZATION HEAP is the default, it can be omitted.

Temporary  Temporary tables store private data, or data that is specific to a session. Other sessions in the database cannot see this data. Temporary tables are used for temporary data manipulation or for storing intermediate results. To create a temporary table, you use the CREATE GLOBAL TEMPORARY TABLE statement.

Index Organized  Index Organized Tables (IOTs) store the data sorted by primary key. You must define a primary key for each IOT. These tables are similar to relational tables that have a primary key, but they do not use separate storage for the table and the primary key as relational tables do. To create an IOT, you use the CREATE TABLE ... ORGANIZATION INDEX statement.

External  The external table type is new to Oracle9i. As the name indicates, data is stored outside the Oracle database in flat files. External tables are read-only, and no indexes are allowed on them. Column names defined in the Oracle database are mapped to the columns in the external file. The default driver used to read external tables is SQL*Loader (a sample definition appears below). To create an external table, you use the CREATE TABLE ... ORGANIZATION EXTERNAL statement.

Object  Object tables are a special kind of table that supports the object-oriented features of the Oracle9i database. In an object table, each row represents an object.

We have already discussed the logical storage structures and parameters in the previous chapters; let's now see how these structures relate to a table. The following sections discuss how to create and manipulate the types of tables in Oracle9i.
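Because the external table syntax is new in Oracle9i, a quick sketch may help; the directory object, file name, and access parameters shown here are illustrative assumptions, not fixed requirements:

CREATE DIRECTORY EXT_DIR AS '/u01/app/oracle/ext_data';

CREATE TABLE ORDERS_EXT (
  ORDER_NUM   NUMBER,
  PRODUCT_CD  VARCHAR2 (10),
  QUANTITY    NUMBER)
ORGANIZATION EXTERNAL (
  TYPE ORACLE_LOADER
  DEFAULT DIRECTORY EXT_DIR
  ACCESS PARAMETERS (RECORDS DELIMITED BY NEWLINE
                     FIELDS TERMINATED BY ',')
  LOCATION ('orders.dat'));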
Creating Tables

To create a table, you use the CREATE TABLE command. You can create a table under the username used to connect to the database, or, with proper privileges, you can create a table under another username. A database user can be referred to as a schema or as an owner when the user owns objects in the database. The simplest form of creating a table is as follows:

CREATE TABLE ORDERS (
  ORDER_NUM   NUMBER,
  ORDER_DATE  DATE,
  PRODUCT_CD  VARCHAR2 (10),
  QUANTITY    NUMBER (10,3),
  STATUS      CHAR);

ORDERS is the table name; the columns in the table are specified in parentheses, separated by commas. The table is created under the username used to connect to the database; to create the table under another schema, you need to qualify the table with the schema name. For example, if you want the ORDERS table to be owned by SCOTT, create it by using CREATE TABLE SCOTT.ORDERS ( ... ). A column name and a datatype identify each column. For certain datatypes, you can specify a maximum width. You can specify any Oracle built-in datatype or user-defined datatype for the column definition. When specifying user-defined datatypes, the user-defined type must exist before creating the table.
Oracle9i has three categories of built-in datatypes: scalar, collection, and relationship. Collection and relationship datatypes are used for the object-relational functionality of Oracle9i. Table 8.1 lists the built-in scalar datatypes in Oracle.

TABLE 8.1  Oracle Built-in Scalar Datatypes

CHAR (<size> [BYTE | CHAR])  Fixed-length character data with length specified inside parentheses. Data is space padded to fit the column width. You can also include the optional keywords BYTE and CHAR inside parentheses along with the size to indicate whether the size specified is in bytes or in characters. BYTE is the default. Size defaults to 1 byte if not defined. Maximum is 2000 bytes.

VARCHAR (<size> [BYTE | CHAR])  Same as VARCHAR2.

VARCHAR2 (<size> [BYTE | CHAR])  Variable-length character data. Maximum allowed length is specified in parentheses. You must specify a size; there is no default value. Maximum is 4000 bytes. Unlike the CHAR datatype, VARCHAR2 columns are not blank padded with trailing spaces if the column value is shorter than its maximum specified length. You can specify the size in bytes or characters; by default, the size is in bytes.

NCHAR (<size>)  Similar to CHAR, but used to store Unicode character set data. The NCHAR datatype is fixed length, maximum size 2000 bytes, and default size 1 character.

NVARCHAR2 (<size>)  Same as VARCHAR2; stores Unicode variable-length data. The size is specified in characters, and the maximum allowed size is 4000 bytes.

LONG  Stores variable-length character data up to 2GB. Use the CLOB or NCLOB datatypes instead. Provided in Oracle9i for backward compatibility. Can have only one LONG column per table.

NUMBER (<precision>, <scale>)  Stores fixed and floating-point numbers. You can optionally specify a precision (total length including decimals) and scale (digits after the decimal point). The default is 38 digits of precision, and the valid range of magnitude is from 1.0 × 10^-130 up to (but not including) 1.0 × 10^126.

DATE  Stores date data. Has century, year, month, date, hour, minute, and seconds internally. Can be displayed in various formats. You can store dates from January 1, 4712 BC to December 31, 9999 AD. If you specify a date value without the time component, the default time is 12 A.M. (midnight, 00:00:00 hours).

TIMESTAMP [(<precision>)]  Stores date and time information with fractional seconds precision. The only difference between the DATE and TIMESTAMP datatypes is the ability to store fractional seconds, up to a precision of 9 digits. The default precision is 6 and can range from 0 to 9.

TIMESTAMP [(<precision>)] WITH TIME ZONE  Similar to the TIMESTAMP datatype, but also stores the time zone displacement. Displacement is the difference between the local time and the Universal Time Coordinate (UTC), also known as Greenwich Mean Time. The displacement is represented in hours and minutes.

TIMESTAMP [(<precision>)] WITH LOCAL TIME ZONE  Similar to TIMESTAMP WITH TIME ZONE, but does not store the displacement information in the database; the time is stored as a normalized form of the database time zone. The data is always stored in the database time zone, but when the user retrieves data, it is shown in the user's local session time zone.

INTERVAL YEAR [(<precision>)] TO MONTH  Used to represent a period of time as years and months. The precision specifies the precision needed for the year field, and its default is 2. The precision can have values from 0 to 9. This datatype can be used to store the difference between two datetime values, in which the only significant portions are the year and month.

INTERVAL DAY [(<precision>)] TO SECOND  Used to represent a period of time as days, hours, minutes, and seconds. The precision specifies the precision needed for the day field, and its default is 6. The precision can have values from 0 to 9. Larger precision allows the difference between the dates to be larger. This datatype can be used to store the difference between two datetime values, with seconds precision.

RAW (<size>)  Variable-length datatype used to store unstructured data, without a character set conversion. Provided for backward compatibility. Use BLOB or BFILE instead.

LONG RAW  Same as RAW; can store up to 2GB of binary data. LONG RAW is supported in Oracle9i for backward compatibility; you should use BLOB instead.

BLOB  Stores up to 4GB of unstructured binary data.

CLOB  Stores up to 4GB of character data.

NCLOB  Stores up to 4GB of Unicode character data.

BFILE  Stores unstructured binary data in operating system files outside the database. The external file size can be up to 4GB. Oracle stores only the file pointer in the database; the actual file is in the operating system.

ROWID  Stores binary data representing the physical row address of a table's row. Occupies 10 bytes.

UROWID  Stores binary data representing any type of row address: physical, logical, or foreign. Up to 4000 bytes.
Collection types are used to represent more than one element, such as an array. There are two collection datatypes: VARRAY and TABLE. Elements in the VARRAY datatype are ordered and have a maximum limit. Elements in a TABLE datatype (nested table) are not ordered, and there is no upper limit to the number of elements, unless restricted by available resources. REF is the relationship datatype, which defines a relationship with other objects by using a reference. It actually stores pointers to data stored in different object tables.
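A brief sketch of the two collection types and how they appear in a table definition (all names are illustrative):

-- VARRAY: elements are ordered and bounded (here, at most 3)
CREATE TYPE PHONE_LIST AS VARRAY(3) OF VARCHAR2 (15);
/
-- Nested table: elements are unordered and unbounded
CREATE TYPE ITEM_LIST AS TABLE OF VARCHAR2 (10);
/
CREATE TABLE CUSTOMERS (
  CUST_ID  NUMBER,
  PHONES   PHONE_LIST,
  ITEMS    ITEM_LIST)
  NESTED TABLE ITEMS STORE AS ITEMS_TAB;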
Specifying Storage
Oracle Objective
Manage storage structures within a table
If you create a table without specifying the storage parameters and tablespace, the table will be created in the default tablespace of the user, and the storage parameters used will be those of the default specified for the tablespace. It is always better to estimate the size of the table and specify appropriate storage parameters when creating the table. If the table is too large, you might need to consider partitioning (discussed later) or creating the table in a separate tablespace to help manage the table. Oracle allocates a segment to the table when the table is created. This segment will have the number of extents specified by the storage parameter MINEXTENTS. Oracle allocates new extents to the table as required. Although you can have an unlimited number of extents for a segment, a little planning
can improve the performance of the table. The presence of numerous extents affects operations on the table, such as truncating a table or scanning a full table. A larger number of extents may cause additional I/Os in the data file and therefore may affect performance. To create the ORDERS table using explicit storage parameters in the USER_DATA tablespace, use the following:

CREATE TABLE JAKE.ORDERS (
  ORDER_NUM   NUMBER,
  ORDER_DATE  DATE,
  PRODUCT_CD  VARCHAR2 (10),
  QUANTITY    NUMBER (10,3),
  STATUS      CHAR)
TABLESPACE USER_DATA
PCTFREE 5
PCTUSED 75
INITRANS 1
MAXTRANS 255
STORAGE (INITIAL 512K
         NEXT 512K
         PCTINCREASE 0
         MINEXTENTS 1
         MAXEXTENTS 100
         FREELISTS 1
         FREELIST GROUPS 1
         BUFFER_POOL KEEP);

The table will be owned by JAKE and will be created in the USER_DATA tablespace (JAKE should have appropriate space quota privileges in the tablespace; privileges and space quotas are discussed in Chapter 9, "Managing Users and Security"). None of the storage parameters are mandatory to create a table; Oracle assigns default values if you omit them. Let's discuss the clauses used in the table creation.

The TABLESPACE clause specifies where the table is to be created. If you omit the STORAGE clause or any parameters in the STORAGE clause, the default is taken from the tablespace's default storage (if applicable). If you omit the TABLESPACE clause, the table is created in the default tablespace of the user. The PCTFREE and PCTUSED clauses are block storage parameters. The PCTFREE clause specifies the amount of free space that should be reserved in each block of the table for future updates. In this example, you specify a low PCTFREE for the ORDERS table, because not many updates to the table increase the row length. PCTUSED specifies when the block should be
considered for inserting new rows once the PCTFREE threshold is reached. Here we specified 75, so when the used space falls below 75 (as the result of updates or deletes), new rows will be added to the block. The INITRANS and MAXTRANS clauses specify the number of concurrent transactions that can update each block of the table. Oracle reserves space in the block header for the INITRANS number of concurrent transactions. For each additional concurrent transaction, Oracle allocates space from the free space—which has an overhead of dynamically allocating transaction entry space. If the block is full, and no space is available, the transaction waits until a transaction entry space is available. MAXTRANS specifies the maximum number of concurrent transactions that can touch a block. This specification prevents unnecessarily allocating transaction space in the block header, because the transaction space allocated is never reclaimed. In most cases, the Oracle defaults of INITRANS 1 and MAXTRANS 255 are sufficient. The STORAGE clause specifies the extent sizes, free lists, and buffer pool values. In Chapter 6, “Logical and Physical Database Structures,” we discussed the INITIAL, NEXT, MINEXTENTS, MAXEXTENTS, and PCTINCREASE parameters. These five parameters control the size of the extents allocated to the table. If the table is created on a locally managed uniform extent tablespace, these storage parameters are ignored. The FREELIST GROUPS clause specifies the number of free list groups that should be created for the table. The default and minimum value is 1. Each free list group uses one data block (that’s why the minimum value for INITIAL is two database blocks) known as the segment header, which contains information about the extents, free blocks, and high-water mark of the table. The FREELISTS clause specifies the number of lists for each free list group. The default and minimum value is 1. The free list manages the list of blocks that are available to add new rows. A block is removed from the free list if the free space in the block is below PCTFREE. The block remains out of the free list as long as the used space is above PCTUSED. Create more free lists if the volume of inserts to the table is high. An appropriate number would be the number of concurrent transactions performing inserts to the table. Oracle recommends having FREELISTS and INITRANS be the same value. The FREELIST GROUPS parameter is mostly used for parallel server configuration, in which you can specify a group for each instance. The BUFFER_POOL parameter of the STORAGE clause specifies the area of the database buffer cache to keep the blocks of the table when read from the
data file while querying or for update/delete. There are three buffer pools: KEEP, RECYCLE, and DEFAULT. The default value is DEFAULT. Specify KEEP if the table is small and is frequently accessed. The blocks in the KEEP pool are always available in the data buffer cache of SGA (System Global Area), so I/O will be faster. The blocks assigned to the RECYCLE buffer pool are removed from memory as soon as they are not needed. Specify RECYCLE for large tables or tables that are seldom accessed. If you do not specify KEEP or RECYCLE, the blocks are assigned to the DEFAULT pool, where they will be aged out using an LRU algorithm. If you create the tablespace with the SEGMENT SPACE MANAGEMENT AUTO clause, the parameters PCTUSED, FREELISTS, and FREELIST GROUPS are ignored.
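The buffer pool assignment is not fixed at creation time; it can be changed later, assuming the KEEP and RECYCLE pools have been sized in the instance (DB_KEEP_CACHE_SIZE and DB_RECYCLE_CACHE_SIZE in Oracle9i). A sketch:

ALTER TABLE JAKE.ORDERS STORAGE (BUFFER_POOL RECYCLE);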
Storing LOB Structures A table can contain columns of type CLOB, BLOB, or NCLOB. These internal large object (LOB) columns can have storage settings that are different from those of the table, and these settings can be stored in a different tablespace for easy management and performance improvement. The following example specifies storage for a LOB column when creating the table.

CREATE TABLE LICENSE_INFO (
  DRIVER_ID    VARCHAR2 (20),
  DRIVER_NAME  VARCHAR2 (30),
  DOB          DATE,
  PHOTO        BLOB)
TABLESPACE APP_DATA
STORAGE (INITIAL 4M NEXT 4M PCTINCREASE 0)
LOB (PHOTO) STORE AS PHOTO_LOB
  (TABLESPACE APP_LARGE_DATA
   DISABLE STORAGE IN ROW
   STORAGE (INITIAL 128M NEXT 128M PCTINCREASE 0)
   CHUNK 4000 PCTVERSION 20 NOCACHE LOGGING);
The table LICENSE_INFO is created with a BLOB datatype column. The table is stored in the APP_DATA tablespace, and the BLOB column PHOTO is stored in the APP_LARGE_DATA tablespace. Let’s look at the various clauses specified for the LOB storage.
The LOB segment is given the name PHOTO_LOB. If a name is not given, Oracle generates a name. You can specify multiple LOB columns in parentheses following the LOB keyword, if they all have the same storage characteristics. In such cases, you cannot specify a name for the LOB segment. For example, if the table has three LOB columns and all have the same characteristics, you may specify the following:

LOB (PHOTO, VIDEO, AUDIO) STORE AS
  (TABLESPACE APP_LARGE_DATA CACHE READS NOLOGGING);

The TABLESPACE clause specifies the tablespace where the LOB segment(s) should be stored. The tablespace can be managed locally or by the dictionary. If the LOB column is larger than 4000 bytes, data is stored in the LOB segment. Storing data in the LOB segment is known as out-of-line storage. If the LOB column data is less than 4000 bytes, it is stored inline, along with the other column data of the table. If you omit the TABLESPACE clause, the LOB segment is created in the table's tablespace.

The DISABLE/ENABLE STORAGE IN ROW clause specifies whether LOB data should be stored inline or out of line. ENABLE is the default and stores LOB data along with the other columns if the LOB data is less than 4000 bytes. DISABLE stores the LOB data in the LOB segment regardless of its size. Whether the LOB data is stored inline or out of line, the LOB locator is always stored along with the row. The STORAGE clause specifies the extent sizes and growth parameters. These parameters are the same as those you would use with a table. The CHUNK clause specifies the total number of bytes of data that will be read or written during LOB manipulation. CHUNK must be a multiple of the database block size. If you specify a value other than a multiple of the block size, Oracle uses the next higher value that is a multiple of the block size. For example, if you specify 4000 for CHUNK and the database block size is 2048, Oracle will take the value of 4096. The default value for CHUNK is the database block size, and the maximum value is 32KB. The INITIAL and NEXT values specified in the STORAGE clause must be higher than the value for CHUNK. The PCTVERSION clause specifies the percentage of all used LOB data space that can be occupied by old versions of LOB data pages. Since LOB data changes are not written to the rollback segments, PCTVERSION specifies the percentage of old information that should be kept in the LOB segment for
consistent reads. The default is 10, and the percentage can range from 0 through 100. The CACHE / NOCACHE / CACHE READS clause specifies whether to cache the LOB reads. If the LOB is read and updated frequently, use the CACHE clause. NOCACHE is the default, and it is useful for a LOB that is read infrequently and never updated. CACHE READS caches only the read operation, which is useful for a LOB that is read frequently, but never updated. The LOGGING / NOLOGGING clause specifies whether redo information should be generated for LOB data. NOLOGGING does not write redo and is useful for faster data loads. You cannot specify CACHE and NOLOGGING together.
Creating a Table from a Query
Oracle Objective
Create regular and temporary tables
You can create a table using existing tables or views by specifying a subquery instead of defining the columns. The subquery can refer to more than one table or view. The table is created with the rows returned from the subquery. You can specify new column names for the table, but Oracle derives the datatype and maximum width based on the query result; you cannot specify the datatype with this method. You can specify the storage parameters for tables created by using a subquery. For example, let's create a new table from the ORDERS table for the orders that are accepted. Notice that new column names are specified.

CREATE TABLE ACCEPTED_ORDERS
  (ORD_NUMBER, ORD_DATE, PRODUCT_CD, QTY)
TABLESPACE USERS PCTFREE 0
STORAGE (INITIAL 128K NEXT 128K PCTINCREASE 0)
AS SELECT ORDER_NUM, ORDER_DATE, PRODUCT_CD, QUANTITY
   FROM ORDERS
   WHERE STATUS = 'A';
The CREATE TABLE...AS SELECT... will not work if the query refers to columns of LONG datatype. When you create a table using the subquery, only the NOT NULL constraints associated with the columns are copied to the new table. Other constraints and column default definitions are not copied.
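A handy variation of this technique copies only the table structure; a predicate that can never be true returns zero rows, so the new table is created empty:

CREATE TABLE ORDERS_COPY
AS SELECT * FROM ORDERS WHERE 1 = 2;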
Partitioning Tables When tables are very large, you can manage them better by using partitioning. Partitioning is breaking a large table into manageable pieces based on the values in a column (or multiple columns) known as the partition key. If you have a very large table spread across many data files, and one disk fails, you have to recover the entire table. However, if the table is partitioned, you need to recover only that partition. SQL statements can access the required partition(s) rather than reading the entire table. Four partitioning methods are available:

Range  You can create a range partition in which the partition key values fall in a range; for example, you can partition a transaction table on the transaction date and create a partition for each month or each quarter. The partition column list can be one or more columns.

Hash  Hash partitions are more appropriate when you do not know how much data will be in a range or whether the sizes of the partitions will vary. Hash partitions apply a hash algorithm to the partitioning columns. The number of partitions should preferably be specified as a power of two (such as 2, 4, 8, 16, and so on).

List  If you know all the values that are supposed to be stored in the column and want to create a partition for each value, use the list partition method. You must specify a list of values when defining the partition, and you can group one or more values together. List partitioning gives explicit control over how each row maps to a partition, whereas in range partitioning a range of values maps to a partition. List partitioning is good for managing partitions using discrete values, which may not be possible using range partitioning.

Composite  This method uses the range partition method to create partitions and the hash partition method to create subpartitions.

The logical attributes of all partitions remain the same (such as column names, datatypes, constraints, and so on), but each partition can have its own physical attributes (such as tablespace, storage parameters, and so on). Each partition in the partitioned table is allocated a segment. You can place these partitions in different tablespaces, which can help you balance the I/O by placing the data files appropriately on disk. Also, by having the partitions in different tablespaces, you can make a partition's tablespace read-only. You can specify the storage parameters at the table level or for each partition.
Partitioned tables cannot have any columns with LONG or LONG RAW datatypes.
Range-Partitioned Table To create a range-partitioned table, you specify the PARTITION BY RANGE clause in the CREATE TABLE command. As stated earlier, range partitioning is suitable for tables that have column(s) with a range of values. For example, your transaction table might have a transaction date column, with which you can create a partition for every month or for a quarter. Consider the following example:

CREATE TABLE ORDER_TRANSACTION (
  ORD_NUMBER  NUMBER(12),
  ORD_DATE    DATE,
  PROD_ID     VARCHAR2 (15),
  QUANTITY    NUMBER (15,3))
PARTITION BY RANGE (ORD_DATE)
 (PARTITION FY2001Q4 VALUES LESS THAN (TO_DATE('01012002','MMDDYYYY'))
    TABLESPACE ORD_2001Q4,
  PARTITION FY2002Q1 VALUES LESS THAN (TO_DATE('04012002','MMDDYYYY'))
    TABLESPACE ORD_2002Q1
    STORAGE (INITIAL 500M NEXT 500M) INITRANS 2 PCTFREE 0,
  PARTITION FY2002Q2 VALUES LESS THAN (TO_DATE('07012002','MMDDYYYY'))
    TABLESPACE ORD_2002Q2,
  PARTITION FY2002Q3 VALUES LESS THAN (TO_DATE('10012002','MMDDYYYY'))
    TABLESPACE ORD_2002Q3
    STORAGE (INITIAL 10M NEXT 10M))
STORAGE (INITIAL 200M NEXT 200M PCTINCREASE 0 MAXEXTENTS 4096)
NOLOGGING;
This example creates a range-partitioned table named ORDER_TRANSACTION. PARTITION BY RANGE specifies that the table be range partitioned; the partition column is provided in parentheses (separate multiple columns with commas), and the partition specifications are defined. Each partition specification begins with the keyword PARTITION. You can optionally provide a name for the partition. The VALUES LESS THAN clause defines the upper bound of the partition column values that belong in the partition. In the example, each partition is created in a different tablespace; partitions FY2001Q4 and FY2002Q2 inherit the storage parameter values from the table definition, whereas FY2002Q1 and FY2002Q3 have the storage parameters explicitly defined.

Records with ORD_DATE prior to 01-Jan-2002 will be stored in partition FY2001Q4; since you did not specify a partition for records with ORD_DATE after 30-Sep-2002, Oracle rejects those rows. A NULL value is treated as greater than all other values. If the partition key can have NULL values, or records can have a higher ORD_DATE than the highest upper range in the partition specification list, you must create a partition for the upper range. The MAXVALUE parameter specifies that the partition bound is infinite. In this example, an upper-bound partition can be specified as follows:

CREATE TABLE ORDER_TRANSACTION ( ... )
PARTITION BY RANGE (ORD_DATE)
 (PARTITION FY2001Q4 VALUES LESS THAN (TO_DATE('01012002','MMDDYYYY'))
    TABLESPACE ORD_2001Q4,
  PARTITION FY2999Q4 VALUES LESS THAN (MAXVALUE)
    TABLESPACE ORD_2999Q4)
STORAGE (INITIAL 200M NEXT 200M PCTINCREASE 0 MAXEXTENTS 4096)
NOLOGGING;
Hash-Partitioned Table You create a hash-partitioned table by specifying the PARTITION BY HASH clause in the CREATE TABLE command. Hash partitioning is suitable for any large table to take advantage of Oracle9i’s performance improvements even if you do not have column(s) with a range of values. Hash partitioning is suitable when you do not know how many rows will be in the table. Choose a column with unique values or more distinct values for the partition key. The following example creates a hash-partitioned table with four partitions. The partitions are created in tablespaces DOC101, DOC102, and DOC103. Since there are four partitions and only three tablespaces listed, Oracle reuses the DOC101 tablespace for the fourth partition. Oracle creates
the partition names as SYS_XXXX. Physical attributes are specified at the table level only.

CREATE TABLE DOCUMENTS1 (
  DOC_NUMBER  NUMBER(12),
  DOC_TYPE    VARCHAR2 (20),
  CONTENTS    VARCHAR2 (600))
PARTITION BY HASH (DOC_NUMBER, DOC_TYPE)
PARTITIONS 4 STORE IN (DOC101, DOC102, DOC103)
STORAGE (INITIAL 64K NEXT 64K PCTINCREASE 0 MAXEXTENTS 4096);

The following example creates a hash-partitioned table with named partitions in the tablespaces.

CREATE TABLE DOCUMENTS2 (
  DOC_NUMBER  NUMBER(12),
  DOC_TYPE    VARCHAR2 (20),
  CONTENTS    VARCHAR2 (600))
PARTITION BY HASH (DOC_NUMBER, DOC_TYPE)
 (PARTITION DOC201 TABLESPACE DOC201,
  PARTITION DOC202 TABLESPACE DOC202,
  PARTITION DOC203 TABLESPACE DOC203,
  PARTITION DOC204 TABLESPACE DOC204)
STORAGE (INITIAL 64K NEXT 64K PCTINCREASE 0 MAXEXTENTS 4096);
List-Partitioned Table To create a list-partitioned table, you specify the PARTITION BY LIST clause in the CREATE TABLE command. List partitioning gives explicit control over the rows that are stored in each partition. You can specify NULL as a valid list value. If you insert a row that has a partition column value not defined in the list, Oracle rejects the row. You can specify only one column name as the partition key. The following example creates a list-partitioned table.

CREATE TABLE POPULATION_STATS (
  STATE   VARCHAR2 (2),
  COUNTY  VARCHAR2 (30),
  CITY    VARCHAR2 (30),
  MEN     NUMBER,
  WOMEN   NUMBER,
  BCHILD  NUMBER,
  GCHILD  NUMBER)
PARTITION BY LIST (STATE)
 (PARTITION SC VALUES ('TX','LA','OK') TABLESPACE SC_DATA,
  PARTITION SW VALUES ('NM','AZ') TABLESPACE SW_DATA,
  PARTITION SE VALUES ('AR','MS','AL') TABLESPACE SE_DATA);
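Rows routed to a single list partition can then be read back directly with the partition-extended table name; a brief sketch:

SELECT STATE, COUNTY, CITY
FROM POPULATION_STATS PARTITION (SC);   -- scans only the TX/LA/OK partition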
Composite-Partitioned Table Composite partitions have range partitions and hash subpartitions. Only subpartitions are physically created on disk (in tablespaces); partitions are logical representations only. Composite partitioning gives the flexibility of range and hash for tables with a smaller range of values. In the following example, the table is range partitioned on the MAKE_YEAR column; each partition is subdivided based on MODEL into 4 subpartitions spread across 4 tablespaces. Each tablespace will have one subpartition from each partition, that is, 3 subpartitions per tablespace, for a total of 12 subpartitions.

CREATE TABLE CARS (
  MAKE_YEAR  NUMBER(4),
  MODEL      VARCHAR2 (30),
  MANUFACTR  VARCHAR2 (50),
  QUANTITY   NUMBER)
PARTITION BY RANGE (MAKE_YEAR)
SUBPARTITION BY HASH (MODEL)
  SUBPARTITIONS 4 STORE IN (TSMK1, TSMK2, TSMK3, TSMK4)
 (PARTITION M2001 VALUES LESS THAN (2002),
  PARTITION M2002 VALUES LESS THAN (2003),
  PARTITION M2003 VALUES LESS THAN (MAXVALUE))
STORAGE (INITIAL 64K NEXT 64K PCTINCREASE 0 MAXEXTENTS 4096);
The following example shows how to name the subpartitions and store each subpartition in a different tablespace. Subpartitions for partition M2001 have explicit storage parameters specified.

CREATE TABLE CARS2 (
  MAKE_YEAR  NUMBER(4),
  MODEL      VARCHAR2 (30),
  MANUFACTR  VARCHAR2 (50),
  QUANTITY   NUMBER)
PARTITION BY RANGE (MAKE_YEAR)
SUBPARTITION BY HASH (MODEL) SUBPARTITIONS 4
Using Other Create Clauses You can specify the following additional clauses when creating a table. These clauses help to manage various types of operations on the table.

LOGGING/NOLOGGING  LOGGING is the default for the table and tablespace, but if the tablespace is defined as NOLOGGING, the table uses NOLOGGING. LOGGING specifies that table creation and direct-load inserts be logged to the redo log files. Creating the table by using a subquery and the NOLOGGING clause can dramatically reduce the time to create large tables. Because the table creation, initial data population (using a subquery), and direct-load inserts are not logged to the redo log files when you use the NOLOGGING clause, you must back up the table (or better yet, the entire tablespace) after such operations are performed. Media recovery will not create or load tables created with the NOLOGGING attribute. You can also specify a separate LOGGING or NOLOGGING attribute for indexes and LOB storage of the table, independent of the table's attribute. The following example creates a table with the NOLOGGING clause.

CREATE TABLE MY_ORDERS (... ...)
TABLESPACE USER_DATA
STORAGE (... ...)
NOLOGGING;
PARALLEL/NOPARALLEL NOPARALLEL is the default. PARALLEL causes the table creation (if created using a subquery) and the DML statements on the table to execute in parallel. Normally, a single server process performs operations on tables in a transaction (serial operation). When the PARALLEL attribute is set, Oracle uses multiple processes to complete the operation for a full table scan. You can specify a degree for the parallelism; if you do not specify the degree, Oracle calculates the optimum degree based on the initialization parameter PARALLEL_THREADS_PER_CPU (which determines the number of parallel threads per CPU, usually 2 by default) and the number of CPUs available. The following example creates a table by using a subquery. The table creation will not be logged in the redo log file, and multiple processes will query the JAKE.ORDERS table and create the MY_ORDERS table.
CREATE TABLE MY_ORDERS (... ...)
TABLESPACE USER_DATA
STORAGE (... ...)
NOLOGGING PARALLEL
AS SELECT * FROM JAKE.ORDERS;
CACHE/NOCACHE NOCACHE is the default. For small look-up tables that are frequently accessed, you can specify the CACHE clause to have the blocks retrieved using a full table scan placed at the MRU end of the LRU list in the buffer cache; the blocks are not aged out of the buffer cache immediately. The default behavior (NOCACHE) is to place the blocks from a full table scan at the tail end of the LRU list, where they are aged out of the list as soon as another process or query needs the buffers for other blocks.
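For illustration, here is a minimal sketch of a small look-up table created with the CACHE clause (the table and column names are hypothetical):
CREATE TABLE STATE_CODES (
  STATE_CODE  CHAR (2),
  STATE_NAME  VARCHAR2 (30))
TABLESPACE USER_DATA
CACHE;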
Creating Temporary Tables Temporary tables hold information that is available only to the session that created the data. The definition of the temporary table is available to all sessions. To create a temporary table, you use the CREATE GLOBAL TEMPORARY TABLE statement. The data in the table can be session-specific or transaction-specific; the ON COMMIT clause specifies which. The following statement creates a temporary table that is transaction-specific.
CREATE GLOBAL TEMPORARY TABLE INVALID_ORDERS
  (ORDER# NUMBER (8), ORDER_DT DATE, VALUE NUMBER (12,2))
ON COMMIT DELETE ROWS;
Oracle deletes the rows (truncates the table) after each commit. To define the table as session-specific, use the ON COMMIT PRESERVE ROWS clause. Storage for a temporary table is allocated in the temporary tablespace of the user. Segments are created only when the first insert statement is performed on the table. The temporary segments allocated to temporary tables are deallocated at the end of the transaction for transaction-specific tables and at the end of the session for session-specific tables. You can create indexes on temporary tables. DML (Data Manipulation Language) statements on temporary tables do not generate redo information, but undo information is generated.
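For comparison, a session-specific version of the same table needs only a different ON COMMIT clause; a minimal sketch (the table and index names are illustrative):
CREATE GLOBAL TEMPORARY TABLE INVALID_ORDERS_S
  (ORDER# NUMBER (8), ORDER_DT DATE, VALUE NUMBER (12,2))
ON COMMIT PRESERVE ROWS;

CREATE INDEX IND_INVALID_ORDERS_S ON INVALID_ORDERS_S (ORDER#);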
Altering Tables
Oracle Objective
Reorganize, truncate, and drop a table
You can alter a table by using the ALTER TABLE command to change the table's storage settings, add or drop columns, or modify column characteristics such as default value, datatype, and length. You can also move the table from one tablespace to another, disable constraints and triggers, and change clauses such as PARALLEL, CACHE, and LOGGING. In this section, we will discuss altering the storage and space used by the table. You cannot change the STORAGE parameters INITIAL and MINEXTENTS by using the ALTER TABLE command. You can change NEXT, PCTINCREASE, MAXEXTENTS, FREELISTS, and FREELIST GROUPS. The changes will not affect extents that are already allocated. When you change NEXT, the next extent allocated will have the new size. When you alter PCTINCREASE, the next extent allocated will be based on the current value of NEXT, but further
extent sizes will be calculated with the new PCTINCREASE value. Here is an example of changing the storage parameters: ALTER TABLE ORDERS STORAGE (NEXT 512K PCTINCREASE 0 MAXEXTENTS UNLIMITED);
Allocating and Deallocating Extents You can allocate new extents to a table or a partition manually by using the ALTER TABLE command. You can optionally specify a filename if you want to allocate the extent in a particular data file. You can specify the size of the extent in bytes (use K or M to specify the size in KB or MB). If you omit the size of the extent, Oracle uses the NEXT size. For example, to manually allocate the next extent, use the following:
ALTER TABLE ORDERS ALLOCATE EXTENT;
To specify the size of the extent to be allocated, use the following:
ALTER TABLE ORDERS ALLOCATE EXTENT SIZE 200K;
To specify the data file where the extent should be allocated, use the following:
ALTER TABLE ORDERS ALLOCATE EXTENT SIZE 200K
DATAFILE 'C:\ORACLE\ORADATA\USER_DATA01.DBF';
The data file should belong to the tablespace where the table or the partition resides. Sometimes the storage space estimated for the table is too high. The table may be created with large extent sizes, or if you do not set PCTINCREASE properly, the space allocated to the table may be too large. Once allocated, that space cannot be used by any other table or object. You can free up such unused space by manually deallocating the unused blocks above the high-water mark (HWM) of the table. The HWM indicates the historically highest amount of used space in a segment. The HWM moves only in the forward direction; that is, when new blocks are used to store data in a table, the HWM is increased. When rows are deleted, even if a block is completely empty, the HWM is not decreased. The HWM is reset only when you TRUNCATE the table. You can use the UNUSED_SPACE procedure of the DBMS_SPACE package to find the HWM of a segment. The following listing shows the parameters for the procedure.
PROCEDURE UNUSED_SPACE
 Argument Name                  Type       In/Out Default?
 ------------------------------ ---------- ------ --------
 SEGMENT_OWNER                  VARCHAR2   IN
 SEGMENT_NAME                   VARCHAR2   IN
 SEGMENT_TYPE                   VARCHAR2   IN
 TOTAL_BLOCKS                   NUMBER     OUT
 TOTAL_BYTES                    NUMBER     OUT
 UNUSED_BLOCKS                  NUMBER     OUT
 UNUSED_BYTES                   NUMBER     OUT
 LAST_USED_EXTENT_FILE_ID       NUMBER     OUT
 LAST_USED_EXTENT_BLOCK_ID      NUMBER     OUT
 LAST_USED_BLOCK                NUMBER     OUT
 PARTITION_NAME                 VARCHAR2   IN     DEFAULT
You can execute the procedure by specifying the owner, type (table, index, table partition, table subpartition, and so on), and name. The HWM is TOTAL_BYTES - UNUSED_BYTES. Here is an example:
SQL> variable vtotalblocks number
SQL> variable vtotalbytes number
SQL> variable vunusedblocks number
SQL> variable vunusedbytes number
SQL> variable vlastusedefid number
SQL> variable vlastusedebid number
SQL> variable vlastusedblock number
SQL> EXECUTE DBMS_SPACE.UNUSED_SPACE ('JOHN', -
>    'ORDERS', 'TABLE', :vtotalblocks, :vtotalbytes, -
>    :vunusedblocks, :vunusedbytes, -
>    :vlastusedefid, :vlastusedebid, :vlastusedblock);

PL/SQL procedure successfully completed.

SQL> select :vtotalbytes, :vunusedbytes from dual;

:VTOTALBYTES :VUNUSEDBYTES
------------ -------------
    12242880       8507904

SQL>
You can use the DEALLOCATE UNUSED clause of the ALTER TABLE command to free up the unused space allocated to the table. For example, to free up all blocks above the HWM, use this statement:
ALTER TABLE ORDERS DEALLOCATE UNUSED;
You can use the KEEP parameter in the UNUSED clause to specify the number of blocks you want to keep above the HWM after deallocation. For example, to have 100KB of free space available for the table above the HWM, specify the following:
ALTER TABLE ORDERS DEALLOCATE UNUSED KEEP 100K;
If you do not specify KEEP, and the HWM is below MINEXTENTS, Oracle keeps MINEXTENTS extents. If you specify KEEP, and the HWM is below MINEXTENTS, Oracle adjusts MINEXTENTS to match the number of extents. If the HWM is less than the size of INITIAL, and KEEP is specified, Oracle adjusts the size of INITIAL. Table 8.2 shows some examples of freeing up space. Let's assume that the table is created with (INITIAL 1024K NEXT 1024K PCTINCREASE 0 MINEXTENTS 4) and now the table has 10 extents (total size 10,240KB).

TABLE 8.2   DEALLOCATE Clause Examples

HWM      DEALLOCATE Clause    Resulting Size   Extent Count
-------  -------------------  ---------------  ---------------------------------------------------
7000KB   UNUSED;              7000KB           Seven; the seventh extent will be split at the HWM.
200KB    UNUSED;              4096KB           Four, because the KEEP clause is not specified.
200KB    UNUSED KEEP 100K;    300KB            One; the initial extent is split at the HWM.
2000KB   UNUSED KEEP 0K;      2000KB           Two; the second extent is split at the HWM.
When a full table scan is performed, Oracle reads each block up to the table’s high-water mark.
You use the TRUNCATE command to delete all rows of the table and to reset the HWM of the table. You can keep the space allocated to the table or deallocate the extents when using TRUNCATE. By default, Oracle deallocates all the extents allocated above MINEXTENTS of the table and the associated indexes. To preserve the space allocated, you must specify the REUSE clause (DROP is the default), as in this example: TRUNCATE TABLE ORDERS REUSE STORAGE;
To truncate a table, you must first disable any enabled referential integrity constraints that reference it.
Reorganizing Tables You can use the MOVE clause of the ALTER TABLE command on a nonpartitioned table to reorganize it or to move it from one tablespace to another. You can reorganize the table to reduce the number of extents by specifying larger extent sizes or to prevent row migration. When you move a table, Oracle creates a new segment for the table, copies the data, and drops the old segment. The new segment can be in the same tablespace or in a different tablespace. Since the old segment is dropped only after the new segment is created, you need to make sure that sufficient space is available in the tablespace if you are not moving the table to a different tablespace. The MOVE clause can specify a new tablespace, new storage parameters for the table, new free space management parameters, and new transaction entry parameters. You can use the NOLOGGING clause to speed up the reorganization by not writing the changes to the redo log file. The following example moves the ORDERS table to another tablespace named NEW_DATA. New storage parameters are specified, and the operation is not logged in the redo log files (NOLOGGING).
ALTER TABLE ORDERS MOVE
TABLESPACE NEW_DATA
STORAGE (INITIAL 50M NEXT 5M PCTINCREASE 0)
PCTFREE 0 PCTUSED 50 INITRANS 2
NOLOGGING;
(Prior to Oracle8i, you reorganized a table using the export-drop-import method.) Queries are allowed on the table while the move operation is in progress, but no insert, update, or delete operations are allowed. The granted permissions on the table are retained.
Dropping a Table If a table is no longer used, you can drop it to free up space. Once you drop a table, the action cannot be undone. The syntax follows: DROP TABLE [schema.]table_name [CASCADE CONSTRAINTS] When you drop a table, the data and definition of the table are removed. The indexes, constraints, triggers, and privileges on the table are also dropped. Oracle does not drop the views, materialized views, or other stored programs that reference the table, but it marks them as invalid. You must specify the CASCADE CONSTRAINTS clause if there are referential integrity constraints referring to the primary key or unique key of this table. Here’s how to drop the table TEST owned by user SCOTT: DROP TABLE SCOTT.TEST;
Truncating a Table The TRUNCATE statement is similar to the DROP statement, but it does not remove the structure of the table, so none of the indexes, constraints, triggers, and privileges on the table are dropped. By default, the space allocated to the table and indexes is freed. If you do not want to free up the space, include the REUSE STORAGE clause. You cannot roll back a truncate operation. Also, you cannot selectively delete rows using the TRUNCATE statement. The syntax of the TRUNCATE statement is:
TRUNCATE {TABLE|CLUSTER} [schema.]name [{DROP|REUSE} STORAGE]
You cannot truncate the parent table of an enabled referential integrity constraint. You must first disable the constraint and then truncate the table, even if the child table has no rows. The following example demonstrates this process:
SQL> CREATE TABLE t1 (
  2  t1f1 NUMBER CONSTRAINT pk_t1 PRIMARY KEY);

Table created.

SQL> CREATE TABLE t2 (t2f1 NUMBER CONSTRAINT fk_t2 REFERENCES t1 (t1f1));

Table created.

SQL> TRUNCATE TABLE t1;
truncate table t1
*
ERROR at line 1:
ORA-02266: unique/primary keys in table referenced by enabled foreign keys

SQL> ALTER TABLE t2 DISABLE CONSTRAINT fk_t2;

Table altered.

SQL> TRUNCATE TABLE t1;

Table truncated.

SQL>
TRUNCATE versus DELETE The TRUNCATE statement is similar to a DELETE statement without a WHERE clause, except for the following:
TRUNCATE is very fast on both large and small tables. DELETE will generate undo information, in case a rollback is issued, but TRUNCATE will not generate undo.
TRUNCATE is DDL and, like all DDL, performs an implicit commit— you cannot roll back a TRUNCATE. Any uncommitted DML changes will also be committed with the TRUNCATE.
TRUNCATE resets the high-water mark in the table and all indexes. Since full table scans and index fast-full scans read all data blocks up to the high-water mark, full-scan performance after a DELETE will not improve; after a TRUNCATE, performance will be very fast.
TRUNCATE does not fire any DELETE triggers.
There is no object privilege that can be granted to allow a user to truncate another user’s table. The DROP ANY TABLE system privilege is required to truncate a table in another schema.
When a table is truncated, the storage for the table and all indexes can be reset back to the initial size. A DELETE will never shrink the size of a table or its indexes.
TRUNCATE versus DROP TABLE Using TRUNCATE is also different from dropping and re-creating a table. Compared with dropping and recreating a table, TRUNCATE does not do the following:
Invalidate dependent objects
Drop indexes, triggers, or referential integrity constraints
Use the TRUNCATE statement to delete all rows from a large table; it does not write the rollback entries and is much faster than the DELETE statement when deleting a large number of rows.
Dropping Columns
Oracle Objective
Drop a column within a table
You can drop a column immediately, or you can mark the column as not used and drop it later. Here is the syntax for dropping a column:
ALTER TABLE [schema.]table_name
DROP {COLUMN column_name | (column_name [, column_name]...)}
[CASCADE CONSTRAINTS]
DROP COLUMN drops the column name specified from the table. You can provide more than one column name, separated by commas, inside parentheses. The indexes and constraints on the column are also dropped. You must specify CASCADE CONSTRAINTS if the dropped column is part of a multicolumn constraint; the constraint will be dropped. Here is the syntax for marking a column as unused:
ALTER TABLE [schema.]table_name
SET UNUSED {COLUMN column_name | (column_name [, column_name]...)}
[CASCADE CONSTRAINTS]
You usually mark a column as unused instead of dropping it immediately if the table is very large, because dropping a column immediately can consume considerable resources during peak hours. In such cases, you mark the column as unused and drop it during off-peak hours. Once the column is marked as unused, you will not see it as part of the table definition. Let's mark the UPDATE_DT column in the ORDERS table as unused:
ALTER TABLE orders SET UNUSED COLUMN update_dt;
The syntax for dropping a column already marked as unused is:
ALTER TABLE [schema.]table_name
DROP {UNUSED COLUMNS | COLUMNS CONTINUE}
Use the COLUMNS CONTINUE clause to continue a DROP operation that was previously interrupted. You cannot specify selected column names to drop after marking the columns as unused; the DROP UNUSED COLUMNS clause drops all the columns that are marked as unused. To permanently remove the unused UPDATE_DT column from the ORDERS table, use the following statement:
ALTER TABLE orders DROP UNUSED COLUMNS;
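For comparison, a column can also be dropped in a single step. A minimal sketch, assuming a hypothetical COMMENTS column that participates in a multicolumn constraint:
ALTER TABLE orders DROP COLUMN comments CASCADE CONSTRAINTS;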
Analyzing Tables You can analyze a table to verify the blocks in it, to find the chained and migrated rows, and to collect statistics on the table. You can specify the PARTITION or SUBPARTITION clause to analyze a specific partition or subpartition of the table.
Validating Structure As the result of hardware problems, disk errors, or software bugs, some blocks can become corrupted (logical corruption). Oracle returns a corruption error only when the rows are accessed. (The Export utility identifies logical corruption in tables, because it does a full table scan.) You can use the ANALYZE command to validate the structure or check the integrity of the blocks allocated to the table. If Oracle finds blocks or rows that are not readable, it returns an error. The ROWIDs of the bad rows are inserted into a table. You can specify the name of the table in which you want the ROWIDs to be saved; by default, Oracle looks for the table named INVALID_ROWS. You can create the table using the script utlvalid.sql supplied by Oracle, located in the rdbms/admin directory of the software installation. The structure of the table is as follows:
SQL> DESC INVALID_ROWS
 Name                    Null?    Type
 ----------------------- -------- ------------
 OWNER_NAME                       VARCHAR2(30)
 TABLE_NAME                       VARCHAR2(30)
 PARTITION_NAME                   VARCHAR2(30)
 SUBPARTITION_NAME                VARCHAR2(30)
 HEAD_ROWID                       ROWID
 ANALYZE_TIMESTAMP                DATE
This example validates the structure of the ORDERS table:
ANALYZE TABLE ORDERS VALIDATE STRUCTURE;
If Oracle encounters bad rows, it inserts them into the INVALID_ROWS table. To specify a different table name, use the following syntax.
ANALYZE TABLE ORDERS VALIDATE STRUCTURE INTO SCOTT.CORRUPTED_ROWS;
You can also validate the blocks of the indexes associated with the table by specifying the CASCADE clause.
ANALYZE TABLE ORDERS VALIDATE STRUCTURE CASCADE;
To analyze a partition named MAY2002 in table GLEDGER, specify
ANALYZE TABLE GLEDGER PARTITION (MAY2002) VALIDATE STRUCTURE;
Finding Migrated Rows A row is migrated if the row is moved from its original block to another block because there was not enough free space available in its original block to accommodate the row, which was expanded due to an update. Oracle keeps a pointer in the original block to indicate the new block ID of the row. When there are many migrated rows in a table, performance of the table is affected, because Oracle has to read two blocks instead of one for a given row retrieval or update. You can prevent this problem by specifying an efficient PCTFREE value. A row is chained if the row is bigger than the block size of the database. Normally, the rows of a table with LOB datatypes are more likely to become chained. You can use the LIST CHAINED ROWS clause of the ANALYZE command to find the chained and migrated rows of a table. Oracle writes the ROWID of such rows to a specified table. If no table is specified, Oracle looks for the CHAINED_ROWS table. You can create this table using the script
utlchain.sql supplied by Oracle, located in the rdbms/admin directory of the software installation. The structure of the table is as follows:
SQL> @c:\oracle\ora90\rdbms\admin\utlchain.sql

Table created.

SQL> DESC CHAINED_ROWS
 Name                     Null?    Type
 ------------------------ -------- ------------
 OWNER_NAME                        VARCHAR2(30)
 TABLE_NAME                        VARCHAR2(30)
 CLUSTER_NAME                      VARCHAR2(30)
 PARTITION_NAME                    VARCHAR2(30)
 SUBPARTITION_NAME                 VARCHAR2(30)
 HEAD_ROWID                        ROWID
 ANALYZE_TIMESTAMP                 DATE
Only the ROWIDs are listed in the CHAINED_ROWS table. You can use this information to save the chained rows to a different table, delete them from the source table, and insert them back from the second table. Here is one way to fix migrated rows in a table:
1. Analyze the table to find migrated rows.
ANALYZE TABLE ORDERS LIST CHAINED ROWS;
2. Find the number of migrated rows.
SELECT COUNT(*) FROM CHAINED_ROWS
WHERE  OWNER_NAME = 'SCOTT'
AND    TABLE_NAME = 'ORDERS';
3. If there are migrated rows, create a temporary table to hold the migrated rows.
CREATE TABLE TEMP_ORDERS AS
SELECT * FROM ORDERS
WHERE ROWID IN (SELECT HEAD_ROWID
                FROM   CHAINED_ROWS
                WHERE  OWNER_NAME = 'SCOTT'
                AND    TABLE_NAME = 'ORDERS');
4. Delete the migrated rows from the source table, using the same subquery.
DELETE FROM ORDERS
WHERE ROWID IN (SELECT HEAD_ROWID
                FROM   CHAINED_ROWS
                WHERE  OWNER_NAME = 'SCOTT'
                AND    TABLE_NAME = 'ORDERS');
5. Insert the rows back from the temporary table.
INSERT INTO ORDERS SELECT * FROM TEMP_ORDERS;
Before deleting the rows, make sure you disable any foreign key constraints referring to the ORDERS table. You will not be able to delete the rows if there are child rows, and most important, defining the constraints with the CASCADE option deletes the child rows! See the "Managing Constraints" section later in this chapter to learn about foreign key constraints and how to enable and disable them.
Collecting Statistics You can collect statistics about a table and save them in the dictionary tables by using the ANALYZE command. The cost-based optimizer also uses the statistics to generate the execution plan of SQL statements. You can calculate the exact statistics (COMPUTE) of the table or sample a few rows and estimate the statistics (ESTIMATE). By default, Oracle collects statistics for all the columns and indexes in the table. For large tables, you may want to estimate, because when you compute the statistics, Oracle reads each block of the table. The following information is collected and saved in the dictionary when you use the ANALYZE command to collect statistics:
The total number of rows in the table and the number of chained rows
The total number of blocks allocated, the total number of unused blocks, and the average free space in each block
The average row length
Here is an example of analyzing a table using the COMPUTE clause: ANALYZE TABLE ORDERS COMPUTE STATISTICS; When using the ESTIMATE option, you can either specify a certain number of rows or specify a certain percentage of rows in the table. If the rows
specified are more than 50 percent of the table, Oracle does a COMPUTE. If you do not specify the SAMPLE clause, Oracle samples 1064 rows. To specify the number of rows, use the following: ANALYZE TABLE ORDERS ESTIMATE STATISTICS SAMPLE 200 ROWS; To specify a percentage of the table to sample, use the following: ANALYZE TABLE ORDERS ESTIMATE STATISTICS SAMPLE 20 PERCENT; To remove statistics collected on a table, use the DELETE STATISTICS option, as follows: ANALYZE TABLE ORDERS DELETE STATISTICS;
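Once collected, the statistics can be viewed in the dictionary. A minimal sketch of querying the stored values from DBA_TABLES:
SELECT NUM_ROWS, CHAIN_CNT, BLOCKS, EMPTY_BLOCKS,
       AVG_SPACE, AVG_ROW_LEN
FROM   DBA_TABLES
WHERE  OWNER = 'SCOTT'
AND    TABLE_NAME = 'ORDERS';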
You can also collect the statistics using the DBMS_STATS package. You have the option of collecting the statistics into a non-dictionary table.
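A minimal sketch of gathering table statistics with DBMS_STATS (the parameter values shown are illustrative):
BEGIN
  DBMS_STATS.GATHER_TABLE_STATS (
    ownname          => 'SCOTT',
    tabname          => 'ORDERS',
    estimate_percent => 20,     -- sample 20 percent of the rows
    cascade          => TRUE);  -- also gather statistics on the indexes
END;
/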
Querying Table Information Several data dictionary views are available to provide information about the tables. We will discuss certain views and their columns that you should be familiar with before taking the test.
DBA_TABLES You primarily use the DBA_TABLES, USER_TABLES, and ALL_TABLES views to query for information about tables (TABS is a synonym for USER_TABLES). The views contain the following information (the columns that can be used in the query are provided in parentheses):
DBA_TAB_COLUMNS Use the DBA_TAB_COLUMNS, USER_TAB_COLUMNS, and ALL_TAB_COLUMNS views to display information about the columns in a table. You can query the following information:
Dictionary Views with Table Information (continued)

View Name                    Contents
---------------------------  ------------------------------------------------------------
ALL_TAB_SUBPARTITIONS        Subpartition information for composite partitions in
DBA_TAB_SUBPARTITIONS        the database.
USER_TAB_SUBPARTITIONS

ALL_OBJECTS                  Information about the objects. For table information such
DBA_OBJECTS                  as creation timestamp and modification date, query this
USER_OBJECTS                 view. The OBJECT_TYPE column shows the type of the object,
                             such as table, index, trigger, etc.

DBA_EXTENTS                  Information about the extents allocated to the table.
USER_EXTENTS                 Shows the tablespace, data file, number of blocks, extent
                             size, etc.
The Structure of a Row
Oracle Objective
Describe the structure of a row
Oracle stores data in the form of rows. Rows are stored in blocks. You define the size of the block when you create the database, but you can override the size when you create tablespaces. If the entire row can be inserted into a block, the row is stored as one row piece. If the row to be stored is bigger than the block size, the row is stored using multiple row pieces. The same is true for rows that grow beyond the free space available in a block during updates. A row piece contains a maximum of 255 columns. If there is data beyond the 255th column in a row, the row is stored as multiple row pieces, a practice known as intra-block chaining. Intra-block chaining does not affect I/O performance. A row piece has two parts—a row header and column data. A row header is about 3 bytes for a row piece that is fully contained in a block. The row header includes information about the row piece such as the number of columns in the row piece, whether the row is chained, and whether a cluster key is present. After the row header is the column data. Column data has two pieces—length and data. The column length occupies 1 byte for column data of 250 bytes or less and 3 bytes for longer data. To conserve space, Oracle does not store NULL values; the column length is simply recorded as zero. If the NULL columns are toward the end of the row, Oracle does not even store the column length.
Oracle Objective

Distinguish between an extended versus a restricted ROWID
ROWID uniquely identifies each row of a table. ROWID is a pseudocolumn in every table that is not implicitly selected—you must specify ROWID in the query. The ROWID stores the physical location of the row. Since a ROWID contains the exact block ID where the row is located, using the ROWID is the fastest way to access a row. There are two categories of ROWIDs: Physical Identifies each row of a table, partition, subpartition, or cluster Logical Identifies the rows of an Index Organized Table (IOT—discussed later in this chapter) Unless explicitly specified, ROWID in this chapter means a physical ROWID. There are two formats for the ROWID: Extended This format uses a base-64 encoding scheme to display the ROWID, consisting of the characters A–Z, a–z, 0–9, +, and –. The ROWID is an 18-character representation that is stored in 10 bytes. The format is OOOOOOFFFBBBBBBRRR:
OOOOOO is the object number.
FFF is the relative data file number where the block is located; the file number is relative to the tablespace.
BBBBBB is the block ID where the row is located.
RRR is the row in the block.
SQL> SELECT ROWID, ORDER_NUM
  2  FROM ORDERS;

ROWID
------------------
AAAFqsAADAAAAfTAAA
AAAFqsAADAAAAfTAAB

In the first ROWID shown, AAAFqs is the object number, AAD is the relative file number, AAAAfT is the block ID, and AAA is the row number within the block.
Restricted This format is the pre-Oracle8 format, carried forward for compatibility. The restricted format is BBBBBBB.RRRR.FFFF (in base-16, or hexadecimal, format):

BBBBBBB is the block ID where the row is located.

RRRR is the row in the block.

FFFF is the data file number where the block is located.
DBMS_ROWID You can use the DBMS_ROWID package to read and convert ROWID information. This package has several useful functions that you can use to convert a ROWID between the extended and restricted formats. You use the function ROWID_TO_RESTRICTED to convert the extended ROWID format to the restricted format. The two parameters to this function are the extended ROWID and the conversion type. The conversion type is an integer: 0 to return the ROWID in an internal format, and 1 to return it as a character string. Oracle7 used the restricted ROWID format, and Oracle8 and later versions use the extended ROWID format. If the database is upgraded from Oracle7 to Oracle8i, and there are some tables with a ROWID defined as the type of an explicit column in the table, you can convert the restricted ROWID to the extended format by using the function ROWID_TO_EXTENDED. There are four parameters to this function: the old ROWID, the object owner, the object name, and the conversion type. The object owner and object name parameters are optional.
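Here is a minimal sketch of calling DBMS_ROWID functions in a query (the use of the ORDERS table and the column aliases are illustrative):
SELECT ROWID,
       DBMS_ROWID.ROWID_TO_RESTRICTED (ROWID, 1) RESTRICTED_FMT,
       DBMS_ROWID.ROWID_BLOCK_NUMBER (ROWID)     BLOCK_NO
FROM   ORDERS
WHERE  ROWNUM <= 2;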
The ROWID_TO_VERIFY function takes the same parameters as the ROWID_TO_EXTENDED function. This function verifies whether a restricted format ROWID can be converted to extended format by using the ROWID_TO_ EXTENDED function. If the ROWID can be converted, it returns 0; otherwise, it returns 1.
Managing Indexes
Oracle Objective
Describe the different types of indexes and their uses
Indexes are used to access data more quickly than reading the whole table, and they reduce disk I/O considerably when queries use the available indexes. As with tables, you can specify storage parameters for indexes, create partitioned indexes, and analyze the index to verify structure and collect statistics. You can create any number of indexes on a table. A column can be part of multiple indexes, and you can specify as many as 32 columns in an index (30 for a bitmap index). When you specify more than one column, the index is known as a composite index. You can have more than one index with the same index columns, but in a different order. You can create and drop indexes without affecting the base data of the table—indexes and table data are independent. Oracle maintains the indexes automatically: when rows in the table are added, updated, or deleted, Oracle updates the corresponding indexes. You can create the following types of indexes: Bitmap A bitmap index does not repeatedly store the index column values. Each value is treated as a key, and a bit is set for the corresponding ROWIDs. Bitmap indexes are suitable for columns with low cardinality, such as the SEX column in an EMPLOYEE table, in which the possible values
are M or F. The cardinality is the number of distinct column values in a column. In the EMPLOYEE table example, the cardinality of the SEX column is 2. You cannot create unique or reverse key bitmap indexes. b-tree This type of index is the default. You create the index by using the b-tree algorithm. The b-tree includes nodes with the index column values and the ROWID of the row. The ROWIDs identify the rows in the table. You can create the following types of b-tree indexes: Non-unique This is the default b-tree index; the index column values are not unique. Unique You create this type of b-tree index by specifying the UNIQUE keyword: each column value entry of the index is unique. Oracle guarantees that the combination of all index column values in the composite index is unique. Oracle returns an error if you try to insert two rows with the same index column values. Reverse key To specify the reverse key index you use the REVERSE keyword. The bytes of each column indexed are reversed, but the column order is retained. For example, if column ORDER_NUM has value 54321, Oracle reverses the bytes to 12345 and then adds the column to the index. You can use this type of indexing for unique indexes when inserts to the table are always in the ascending order of the indexed columns. This type of indexing helps to distribute the adjacent valued columns to different leaf blocks of the index and, as a result, improve performance by retrieving fewer index blocks. Leaf blocks are the blocks at the lowest level of the b-tree. Function-based You can create the function-based index on columns with expressions. For example, creating an index on the SUBSTR(EMPID, 1,2) can speed up the queries using SUBSTR(EMPID, 1, 2) in the WHERE clause.
Oracle does not include a row in a b-tree index when all of the indexed columns are NULL for that row. Bitmap indexes, however, do store NULL values.
The CREATE INDEX statement creates a non-unique b-tree index on the columns specified. You must specify a name for the index and the table name on which the index should be built. For example, to create an index on the ORDER_DATE column of the ORDERS table, specify the following: CREATE INDEX IND1_ORDERS ON ORDERS (ORDER_DATE); To create a unique index, you must specify the keyword UNIQUE immediately after CREATE. For example: CREATE UNIQUE INDEX IND2_ORDERS ON ORDERS (ORDER_NUM); To create a bitmap index, you must specify the keyword BITMAP immediately after CREATE. Bitmap indexes cannot be unique. For example: CREATE BITMAP INDEX IND3_ORDERS ON ORDERS (STATUS);
Specifying Storage If you do not specify the TABLESPACE clause in the CREATE INDEX statement, Oracle creates the index in the default tablespace of the user. If you don't specify the STORAGE clause, the index inherits the default storage parameters defined for the tablespace. All the storage parameters discussed in the "Managing Tables" section are applicable to indexes and have the same meaning, except for PCTUSED—you cannot specify PCTUSED for indexes. Keep the INITRANS for the index higher than that specified for the corresponding table, because index blocks can hold a larger number of entries than table blocks.
Here is an example of creating an index and specifying the storage:
CREATE UNIQUE INDEX IND2_ORDERS
ON ORDERS (ORDER_NUM)
TABLESPACE USER_INDEX
PCTFREE 25
INITRANS 2
MAXTRANS 255
STORAGE (INITIAL 128K NEXT 128K PCTINCREASE 0
         MINEXTENTS 1 MAXEXTENTS 100
         FREELISTS 1 FREELIST GROUPS 1
         BUFFER_POOL KEEP);
When creating an index on a table that already contains rows, Oracle fills each index block with key values only up to the limit set by PCTFREE. The free space reserved by PCTFREE is used when a new row is inserted into the table (or an indexed column value is updated) and the corresponding key value must be placed between two existing key values in a leaf block. If no free space is available in the block, Oracle uses a new block. If many new rows are inserted into the table, keep the PCTFREE of the index high.
Using Other Create Clauses You can use the NOLOGGING clause to specify that information is not written to the redo log files, which speeds index creation. The default is LOGGING. You can also collect statistics about the index while creating the index by specifying the COMPUTE STATISTICS clause. Using this clause avoids another ANALYZE on the index later. The ONLINE clause specifies that the table will be available for DML operations while the index is built. If data is loaded to the table in the order of an index, you can specify the NOSORT clause. Oracle does not sort the rows, but if the data is not sorted, Oracle returns an error. Specifying this clause saves time and temporary space. For multicolumn indexes, eliminating the repeated key columns can save storage space. Specify the COMPRESS clause when creating the index; NOCOMPRESS is the default. This clause can be used only with non-partitioned indexes, and index performance may be affected when using it. Specify PARALLEL to create the index using multiple server processes. NOPARALLEL is the default.
The following is an example of creating an index by specifying some of the miscellaneous clauses:
SQL> CREATE INDEX IND5_ORDERS
  2  ON ORDERS (ORDER_NUM, ORDER_DATE)
  3  TABLESPACE INDX
  4  NOLOGGING
  5  NOSORT
  6  COMPRESS
  7  COMPUTE STATISTICS;

Index created.

SQL>
Partitioning As with tables, you can partition indexes for better manageability and performance. Partitioned tables can have partitioned and/or non-partitioned indexes, and partitioned indexes can be created on partitioned or nonpartitioned tables. When all the index partitions correspond to the table partitions (equipartitioning), the index is called a local partitioned index. Specifying the LOCAL keyword creates local indexes. For local indexes, Oracle maintains the index partition keys automatically, in synch with the table partition. A global partitioned index specifies different partition range values; the partition column values need not belong to a single table partition. Specifying the GLOBAL keyword creates global indexes. You can create four types of partitioned indexes: Local prefixed Local index with leading columns (leftmost column in index) in the order of the partition key. Local non-prefixed Partition key columns are not leading columns, but the index is local. Global prefixed Global index, with leading columns in the order of the partition key. Global non-prefixed Global index, with leading columns not in the partition key order.
Bitmap indexes created on partitioned tables must be local. You cannot create a partitioned bitmap index on a non-partitioned table.
Reverse Key Indexes Specifying the REVERSE keyword creates a reverse key index. Reverse key indexes improve performance of certain OLTP (Online Transaction Processing) applications using the parallel server. The following example creates a reverse key index on the ORDER_NUM and ORDER_DATE column of the ORDERS table. CREATE UNIQUE INDEX IND2_ORDERS ON ORDERS (ORDER_DATE, ORDER_NUM) TABLESPACE USER_INDEX REVERSE;
Function-Based Indexes Function-based indexes are created as regular b-tree or bitmap indexes. Specify the expression or function when creating the index. Oracle precalculates the value of the expression and creates the index. For example, to create a function-based index on SUBSTR(PRODUCT_ID,1,2), use the following:
CREATE INDEX IND4_ORDERS ON
ORDERS (SUBSTR(PRODUCT_ID,1,2))
TABLESPACE USER_INDEX;
To use the function-based index, you must set the instance initialization parameter QUERY_REWRITE_ENABLED to TRUE and the parameter QUERY_REWRITE_INTEGRITY to TRUSTED. Also, set the COMPATIBLE parameter to 8.1.0 or higher. A query can use this index if its WHERE clause specifies a condition by using SUBSTR(PRODUCT_ID,1,2), as in the following example:
SELECT * FROM ORDERS
WHERE SUBSTR(PRODUCT_ID,1,2) = 'BT';
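For testing, these rewrite parameters can also be set at the session level; a minimal sketch:
ALTER SESSION SET QUERY_REWRITE_ENABLED = TRUE;
ALTER SESSION SET QUERY_REWRITE_INTEGRITY = TRUSTED;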
You must gather statistics for function-based indexes for the cost-based optimizer to use the index. The rule-based optimizer does not use function-based indexes.
Index Organized Tables You can store index and table data together in a structure known as an Index Organized Table (IOT). IOTs are suitable for tables in which the data access is mostly through the primary key, such as look-up tables, which have a code and a description. An IOT is a b-tree index, and instead of storing the ROWID of the table row, the entire row is stored as part of the index. You can build additional indexes on the columns of an IOT. You access the data in an IOT in the same way you access the data in a table. Since the row is stored along with the b-tree index, there is no physical ROWID for each row. The primary key identifies the rows in an IOT. Oracle "guesses" the location of the row and assigns a logical ROWID for each row, which permits the creation of secondary indexes. You can partition an IOT, but the partition columns should be a subset of the primary key columns. To build additional indexes on the IOT, Oracle uses a logical ROWID, which is derived from the primary key values of the IOT. The logical ROWID can include a guessed physical location of the row in the data files. This guessed location is not valid when a row is moved from one block to another. If the logical ROWID does not include the guessed location of the row, Oracle has to perform two index scans when using the secondary index. The logical ROWIDs can be stored in columns with the datatype UROWID. To create an IOT, you use the CREATE TABLE command with the ORGANIZATION INDEX keywords. You must specify the primary key for the table when creating the table.
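A minimal sketch of creating an IOT (the table name, columns, and the optional overflow clauses are illustrative):
CREATE TABLE STATE_LOOKUP (
  STATE_CODE  CHAR (2),
  STATE_NAME  VARCHAR2 (30),
  CONSTRAINT PK_STATE_LOOKUP PRIMARY KEY (STATE_CODE))
ORGANIZATION INDEX
TABLESPACE USER_INDEX
PCTTHRESHOLD 20
OVERFLOW TABLESPACE USER_DATA;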
Altering Indexes Using the ALTER INDEX command, you can make the following alterations in an index:
Change its STORAGE clause, except for the parameters INITIAL and MINEXTENTS
Deallocate unused blocks
Rebuild the index
Coalesce leaf nodes
Manually allocate extents
Change the PARALLEL/NOPARALLEL, LOGGING/NOLOGGING clauses
Modify partition storage parameters, rename partitions, drop partitions, and so on
Specify the ENABLE/DISABLE clause to enable or disable function-based indexes
Mark the index or index partition as UNUSABLE, thereby disabling the index or index partition
Rename the index
Since the rules for changing the storage parameters, allocating extents, and deallocating extents are similar to those for altering a table, we will provide only some short examples here. To change storage parameters, use the following: ALTER INDEX SCOTT.IND1_ORDERS STORAGE (NEXT 512K MAXEXTENTS UNLIMITED); To allocate an extent, use the following: ALTER INDEX SCOTT.IND1_ORDERS ALLOCATE EXTENT SIZE 200K; To deallocate unused blocks, use the following: ALTER INDEX SCOTT.IND1_ORDERS DEALLOCATE UNUSED KEEP 100K;
If you disable an index by specifying the UNUSABLE clause, you must rebuild the index to make it valid.
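A minimal sketch, using the IND1_ORDERS index from the earlier examples:
ALTER INDEX SCOTT.IND1_ORDERS UNUSABLE;
ALTER INDEX SCOTT.IND1_ORDERS REBUILD;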
Rebuilding/Coalescing Indexes
Oracle Objective
Reorganize indexes
Over time, blocks in an index can become fragmented and leave behind free space in leaf blocks. You can compress these indexes and gain space that can be used for new leaf blocks. You can use the COALESCE clause to free up index leaf blocks within the same branch of the tree. The index storage parameters and tablespace values remain the same. Here is an example: ALTER INDEX IND1_ORDERS COALESCE; If you want to re-create the index on a different tablespace or specify different storage parameters and free up leaf blocks, you can use the REBUILD clause. When you rebuild an index, Oracle drops the original index when the rebuild is complete. (A new index is created even if you do not change the tablespace or storage.) Users can access the index if you specify the ONLINE parameter. Optionally, you can collect statistics while rebuilding the index by using the COMPUTE STATISTICS parameter, and you can specify NOLOGGING so that redo log entries are not generated. You can also specify REVERSE or NOREVERSE to convert a normal index to a reverse key index or vice versa. The following example moves the index to a new tablespace, collecting the statistics while rebuilding, and makes the table available for insert/ update/delete operations by specifying the ONLINE clause. ALTER INDEX IND1_ORDERS REBUILD TABLESPACE NEW_INDEX_TS STORAGE (INITIAL 25M NEXT 5M PCTINCREASE 0) PCTFREE 20 INITRANS 4 COMPUTE STATISTICS ONLINE NOLOGGING;
You can drop indexes using the DROP INDEX command. Oracle frees up all the space used by the index when the index is dropped. When a table is dropped, the indexes built on the table are automatically dropped. For example: DROP INDEX SCOTT.IND5_ORDERS; You cannot drop the indexes used to enforce the uniqueness or primary key of a table. Such indexes can be dropped only after disabling the primary and unique keys.
Analyzing Indexes As you can with tables, you can analyze indexes to validate their structure (to find block corruption) and to collect statistics. You cannot use the ANALYZE command on an index with the LIST CHAINED ROWS clause. You can use the COMPUTE or ESTIMATE clause when collecting statistics. Here are some examples. To validate the structure of an index, use the following: ALTER INDEX IND5_ORDERS VALIDATE STRUCTURE; To collect statistics by sampling 40 percent of the entries in the index, use the following: ALTER INDEX IND5_ORDERS ESTIMATE STATISTICS SAMPLE 40 PERCENT; To delete statistics, use the following: ALTER INDEX IND5_ORDERS DELETE STATISTICS;
Oracle9i provides a method for detecting whether an index is used. You can drop unused indexes from the database to free up space and resources. Each index adds overhead when DML statements are performed on the table. To enable index monitoring, you use the MONITORING USAGE clause of the ALTER INDEX statement.
ALTER INDEX index_name MONITORING USAGE;
The V$OBJECT_USAGE view contains information about index usage. If the index is used after the monitoring begins, the USED column will have a value of YES. The following example illustrates index monitoring. It enables monitoring of index PK_DEPT.
SQL> alter index pk_dept monitoring usage;

Index altered.

SQL> select * from v$object_usage;

INDEX_NAME  TABLE_NAME  MON USE START_MONITORING     END_MONITORING
----------- ----------- --- --- -------------------- --------------------
PK_DEPT     DEPT        YES NO  11/23/2001 23:54:05

SQL>
The START_MONITORING column has the timestamp of when the monitoring began. Each time you start monitoring, this timestamp is reset. Using the NOMONITORING clause stops the monitoring of the index.
ALTER INDEX index_name NOMONITORING USAGE;
Let's now query the DEPT table and see how the USED column value changes.
SQL> select /*+ index (dept pk_dept) */ * from dept
  2  where deptno = 10;

    DEPTNO DNAME          LOC
---------- -------------- -------------
        10 ACCOUNTING     NEW YORK

SQL> alter index pk_dept nomonitoring usage;

Index altered.

SQL> select * from v$object_usage;

INDEX_NAME  TABLE_NAME  MON USE START_MONITORING     END_MONITORING
----------- ----------- --- --- -------------------- --------------------
PK_DEPT     DEPT        NO  YES 11/23/2001 23:54:05  11/24/2001 23:54:05
SQL>
Querying Index Information
Oracle Objective
Get index information from the data dictionary
Several data dictionary views are available to query information about indexes. This section covers certain views and their columns that you should be familiar with before taking the exam.
DBA_INDEXES The DBA_INDEXES, USER_INDEXES, and ALL_INDEXES views are the primary views you can use to query for information about indexes (IND is a synonym for USER_INDEXES). The views have the following information (the columns that can be used in the query are provided in parentheses):
DBA_IND_COLUMNS Use the DBA_IND_COLUMNS, USER_IND_COLUMNS, and ALL_IND_COLUMNS views to display information about the columns in an index. The following information can be queried:
Managing Constraints

Constraints are created in the database to enforce a business rule and to specify relationships between various tables. You can also enforce business rules by using database triggers and application code. Integrity constraints prevent bad data from being entered into the database. Oracle allows you to create five types of integrity constraints: NOT NULL Prevents NULL values from being entered into the column. These constraints are defined on a single column. CHECK Checks whether the condition specified in the constraint is satisfied. UNIQUE Ensures that there are no duplicate values for the column(s) specified. Every value or set of values is unique within the table. PRIMARY KEY Uniquely identifies each row of the table. Prevents NULL values. A table can have only one primary key constraint. FOREIGN KEY Establishes a parent-child relationship between tables by using common columns.
Creating Constraints
Oracle Objective
Implement data integrity constraints
To create constraints, you use the CREATE TABLE or ALTER TABLE statements. You can specify the constraint definition at the column level if the constraint is defined on a single column. You define multiple column constraints at the table level; specify the columns in parentheses separated by a comma. If you do not provide a name for the constraints, Oracle assigns a system-generated name. To provide a name for the constraint, specify the keyword CONSTRAINT followed by the constraint name. In this section, we will discuss the rules for each constraint type and give you some examples of creating constraints.
NOT NULL NOT NULL constraints have the following characteristics:
You define the constraint at the column level.
Use CREATE TABLE to define constraints when creating the table. The following example shows a named constraint on the ORDER_NUM column; for ORDER_DATE, Oracle generates a name. (A datatype is required for every column; PRODUCT_ID is shown with an assumed VARCHAR2 datatype so that the statement is complete.)
CREATE TABLE ORDERS (
  ORDER_NUM   NUMBER (4) CONSTRAINT NN_ORDER_NUM NOT NULL,
  ORDER_DATE  DATE NOT NULL,
  PRODUCT_ID  VARCHAR2 (10))
Use ALTER TABLE MODIFY to add or remove a NOT NULL constraint on the columns of an existing table. The following code shows examples of removing a constraint and adding a constraint. ALTER TABLE ORDERS MODIFY ORDER_DATE NULL; ALTER TABLE ORDERS MODIFY PRODUCT_ID NOT NULL;
CHECK CHECK constraints have the following characteristics:
They can be defined at the column level or table level.
The condition specified in the CHECK clause should evaluate to a Boolean result and can refer to values in other columns of the same row; the condition cannot use queries.
Environment functions such as SYSDATE, USER, USERENV, UID, and pseudo-columns such as ROWNUM, CURRVAL, NEXTVAL, or LEVEL cannot be used to evaluate the check condition.
One column can have more than one CHECK constraint defined. The column can have a NULL value.
They can be created using CREATE TABLE or ALTER TABLE. CREATE TABLE BONUS ( EMP_ID VARCHAR2 (40) NOT NULL, SALARY NUMBER (9,2), BONUS NUMBER (9,2), CONSTRAINT CK_BONUS CHECK (BONUS > 0));
ALTER TABLE BONUS ADD CONSTRAINT CK_BONUS2 CHECK (BONUS < SALARY);
UNIQUE UNIQUE constraints have the following characteristics:
They can be defined at the column level for single-column unique keys. For a multiple-column unique key (composite key—the maximum number of columns specified can be 32), the constraint should be defined at the table level.
Oracle creates a unique index on the unique key columns to enforce uniqueness. If a unique index or a non-unique index already exists on the table with the same columns in the index, Oracle uses the existing index. To use the existing non-unique index, the table must not contain any duplicate keys.
Unique constraints allow NULL values in the constraint columns.
Storage can be specified for the implicit index created when creating the key. If no storage is specified, the index is created on the default tablespace with the default storage parameters of the tablespace. You can specify the LOGGING and NOSORT clauses, as you would when creating an index. The index created can be a local or a global partitioned index. The index will have the same name as the unique constraint. Following are two examples. The first one defines a unique constraint with two columns and specifies the storage parameters for the index. The second example adds a new column to the EMP table and creates a unique key at the column level. ALTER TABLE BONUS ADD CONSTRAINT UQ_EMP_ID UNIQUE (DEPT, EMP_ID) USING INDEX TABLESPACE INDX STORAGE (INITIAL 32K NEXT 32K PCTINCREASE 0); ALTER TABLE EMP ADD SSN VARCHAR2 (11) CONSTRAINT UQ_SSN UNIQUE;
PRIMARY KEY PRIMARY KEY constraints have the following characteristics:
All characteristics of the UNIQUE key are applicable except that NULL values are not allowed in the primary key columns.
A table can have only one primary key.
Oracle creates a unique index and NOT NULL constraints for each column in the key. Oracle can use an existing index if all the columns of the primary key are in the index. The following example defines a primary key when creating the table. Storage parameters are specified for both the table and the primary key index.
CREATE TABLE EMPLOYEE (
  DEPT_NO  VARCHAR2 (2),
  EMP_ID   NUMBER (4),
  NAME     VARCHAR2 (20) NOT NULL,
  SSN      VARCHAR2 (11),
  SALARY   NUMBER (9,2) CHECK (SALARY > 0),
  CONSTRAINT PK_EMPLOYEE PRIMARY KEY (DEPT_NO, EMP_ID)
    USING INDEX TABLESPACE INDX
    STORAGE (INITIAL 64K NEXT 64K) NOLOGGING,
  CONSTRAINT UQ_SSN UNIQUE (SSN)
    USING INDEX TABLESPACE INDX)
TABLESPACE USERS
STORAGE (INITIAL 128K NEXT 64K);
Indexes created to enforce unique keys and primary keys can be managed as any other index. However, these indexes cannot be dropped explicitly.
FOREIGN KEY The foreign key is the column or columns in the table (child table) in which the constraint is created; the referenced key is the primary key, the unique key column, or columns in the table (parent table) that is referenced by the constraint. The following rules are applicable to foreign key constraints:
You can define a foreign key constraint at the column level or table level. Define multiple-column foreign keys at the table level.
The foreign key column(s) and referenced key column(s) can be in the same table (self-referential integrity constraint).
NULL values are allowed in the foreign key columns. The following is an example of creating a foreign key constraint on the COUNTRY_CODE and STATE_CODE columns of the CITY table, which refers to the COUNTRY_CODE and STATE_CODE columns of the STATE table (the composite primary key of the STATE table). ALTER TABLE CITY ADD CONSTRAINT FK_STATE FOREIGN KEY (COUNTRY_CODE, STATE_CODE) REFERENCES STATE (COUNTRY_CODE, STATE_CODE);
The ON DELETE clause specifies the action to be taken when a row in the parent table is deleted and child rows exist with the deleted parent primary key. You can delete the child rows (CASCADE) or set the foreign key column values to NULL (SET NULL). If you omit this clause, Oracle will not allow you to delete from the parent table if child records exist. You must delete the child rows first and then the parent row. Following are two examples of specifying the delete action in a foreign key. ALTER TABLE CITY ADD CONSTRAINT FK_STATE FOREIGN KEY (COUNTRY_CODE, STATE_CODE) REFERENCES STATE (COUNTRY_CODE, STATE_CODE) ON DELETE CASCADE; ALTER TABLE CITY ADD CONSTRAINT FK_STATE FOREIGN KEY (COUNTRY_CODE, STATE_CODE) REFERENCES STATE (COUNTRY_CODE, STATE_CODE) ON DELETE SET NULL;
Creating Disabled Constraints When you create a constraint, it is enabled automatically. You can create a disabled constraint by specifying the DISABLE keyword after the constraint definition. For example:
ALTER TABLE CITY ADD CONSTRAINT FK_STATE
FOREIGN KEY (COUNTRY_CODE, STATE_CODE)
REFERENCES STATE (COUNTRY_CODE, STATE_CODE)
DISABLE;
ALTER TABLE BONUS ADD CONSTRAINT CK_BONUS
CHECK (BONUS > 0)
DISABLE;
Dropping Constraints To drop constraints, you use ALTER TABLE. You can drop any constraint by specifying the constraint name.
ALTER TABLE BONUS DROP CONSTRAINT CK_BONUS2;
To drop a unique key constraint that is referenced by foreign keys, specify the CASCADE clause to drop the foreign key constraints along with the unique constraint. Specify the unique key column(s). For example:
ALTER TABLE EMPLOYEE DROP UNIQUE (EMP_ID) CASCADE;
To drop a primary key constraint that is referenced by foreign key constraints, use the CASCADE clause to drop all the foreign key constraints and then the primary key.
ALTER TABLE BONUS DROP PRIMARY KEY CASCADE;
Enabling and Disabling Constraints
Oracle Objective
Maintain integrity constraints
When you create a constraint, the constraint is automatically enabled (unless you specify the DISABLE clause). You can disable a constraint by using the DISABLE clause of the ALTER TABLE statement. When you disable the UNIQUE or PRIMARY KEY constraints, Oracle drops the associated unique index. When you re-enable these constraints, Oracle builds the index.
You can disable any constraint by specifying the clause DISABLE CONSTRAINT followed by the constraint name. Specifying UNIQUE and the column name(s) can disable unique keys, and specifying PRIMARY KEY can disable the table's primary key. You cannot disable a primary key or a unique key if enabled foreign keys reference it. To disable all the referencing foreign keys and the primary or unique key, specify CASCADE. Following are three examples that illustrate disabling constraints.
ALTER TABLE BONUS DISABLE CONSTRAINT CK_BONUS;
ALTER TABLE EMPLOYEE DISABLE CONSTRAINT UQ_EMPLOYEE;
ALTER TABLE STATE DISABLE PRIMARY KEY CASCADE;
Using the ENABLE clause of the ALTER TABLE statement enables a constraint. When you enable a disabled unique or primary key, Oracle creates an index if an index with the unique or primary key columns (prefixed) does not already exist. You can specify storage for the unique or primary key when enabling these constraints. For example:
ALTER TABLE STATE ENABLE PRIMARY KEY
USING INDEX TABLESPACE USER_INDEX
STORAGE (INITIAL 2M NEXT 2M);
You can use the EXCEPTIONS INTO clause to find the rows that violate a referential integrity or uniqueness condition. The ROWIDs of the invalid rows are inserted into a table. You can specify the name of the table in which you want the ROWIDs to be saved; by default, Oracle looks for the table named EXCEPTIONS. You can create the table using the script utlexcpt.sql supplied by Oracle, located in the rdbms/admin directory of the software installation. The structure of the table is as follows:
SQL> DESC EXCEPTIONS
 Name                    Null?    Type
 ----------------------- -------- ------------
 ROW_ID                           ROWID
 OWNER                            VARCHAR2(30)
 TABLE_NAME                       VARCHAR2(30)
 CONSTRAINT                       VARCHAR2(30)
The following example enables the primary key constraint and inserts the ROWIDs of the bad rows into the EXCEPTIONS table: ALTER TABLE STATE ENABLE PRIMARY KEY EXCEPTIONS INTO EXCEPTIONS; You can also use the MODIFY CONSTRAINT clause of the ALTER TABLE statement to enable/disable constraints. Specify the constraint name followed by the MODIFY CONSTRAINT keywords. Following are examples. ALTER TABLE BONUS MODIFY CONSTRAINT CK_BONUS DISABLE; ALTER TABLE STATE MODIFY CONSTRAINT PK_STATE DISABLE CASCADE; ALTER TABLE BONUS MODIFY CONSTRAINT CK_BONUS ENABLE; ALTER TABLE STATE MODIFY CONSTRAINT PK_STATE USING INDEX TABLESPACE USER_INDEX STORAGE (INITIAL 2M NEXT 2M) ENABLE;
Validated Constraints You have seen how to enable and disable a constraint. ENABLE and DISABLE affect only future data that will be added/modified in the table. In contrast, the VALIDATE and NOVALIDATE keywords in the ALTER TABLE command act on the existing data. Therefore, a constraint can have four states: ENABLE VALIDATE This is the default for the ENABLE clause. The existing data in the table is validated to verify that it conforms to the constraint. ENABLE NOVALIDATE This constraint does not validate the existing data, but enables the constraint for future constraint checking. DISABLE VALIDATE The constraint is disabled (any index used to enforce the constraint is dropped), but the constraint remains valid. No DML operation is allowed on the table because future changes cannot be verified. DISABLE NOVALIDATE This is the default for the DISABLE clause. The constraint is disabled, and no checks are done on future or existing data. Let’s look at an example of how these clauses can be used. Say that you have a large data warehouse table, on which bulk data loads are performed every night. This table has a primary key enforced using a non-unique
index—because Oracle does not drop a non-unique index when disabling the constraint. When you do batch loads, you can disable the primary key constraint as follows:

ALTER TABLE WH01
MODIFY CONSTRAINT PK_WH01 DISABLE NOVALIDATE;

After the batch load completes, you can re-enable the primary key as follows:

ALTER TABLE WH01
MODIFY CONSTRAINT PK_WH01 ENABLE NOVALIDATE;
Oracle does not allow any inserts/updates/deletes on a table with a DISABLE VALIDATE constraint. Changing the constraint status to DISABLE VALIDATE is a quick way to make a table read-only.
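For example, the following sketch (the table and constraint names are hypothetical) makes a history table effectively read-only; a subsequent INSERT should fail with an error along the lines of ORA-25128:

ALTER TABLE SALES_HIST
MODIFY CONSTRAINT PK_SALES_HIST DISABLE VALIDATE;

INSERT INTO SALES_HIST (SALE_ID) VALUES (1);
-- expected to fail: no insert/update/delete is allowed on a table
-- with a constraint that is disabled and validated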
Creating a Table, Indexes, and Constraints for a Customer Maintenance Application

You, the DBA, must create the tables that are needed to manage a customer database. You are given the physical structure of the tables, the relationship between the tables, and the following information:
Columns of the CUSTOMER_MASTER table and the type of data stored. CUST_ID is the unique identifier of the table—the primary key. This table contains customer name, e-mail address, date of birth, primary contact address type, and status flag.
The CUSTOMER_ADDRESS table keeps the addresses of the customers. Each customer can have a maximum of four addresses—business1, business2, home1, and home2.
The CUSTOMER_REFERENCES table keeps information about new customers introduced by an existing customer. This table simply keeps the customer ID of the referring customer and that of the new customer.
Each table has record creation date, created user name, update date, and update username.
You decide to keep the tables and indexes in separate tablespaces and to use the uniform extent feature of the tablespace. This arrangement helps you manage the space in the tablespaces more effectively. Data is kept in the CUST_DATA tablespace, and indexes are maintained in the CUST_INDX tablespace. You create the tablespaces as follows:

CREATE TABLESPACE CUST_DATA
DATAFILE 'C:\ORACLE\ORADATA\CUST_DATA01.DBF' SIZE 512K
AUTOEXTEND ON NEXT 128K MAXSIZE 2000K
EXTENT MANAGEMENT LOCAL UNIFORM SIZE 64K
SEGMENT SPACE MANAGEMENT AUTO;

CREATE TABLESPACE CUST_INDX
DATAFILE 'C:\ORACLE\ORADATA\CUST_INDX.DBF' SIZE 256K
AUTOEXTEND ON NEXT 128K MAXSIZE 2000K
EXTENT MANAGEMENT LOCAL UNIFORM SIZE 32K
SEGMENT SPACE MANAGEMENT AUTO;

Now you need to create the CUSTOMER_MASTER table. The table needs a primary key on CUST_ID and a unique key on EMAIL. You create a non-unique index on the EMAIL column, which is used to enforce the unique key. You also want to create an index on the DOB column, because each week the firm sends greetings to all customers who are celebrating a birthday that week. The check constraint on ADD_TYPE ensures that no other values are inserted into the column.

CREATE TABLE CUSTOMER_MASTER (
  CUST_ID    VARCHAR2 (10),
  -- (the customer-name and status-flag columns of the original listing
  --  are not reproduced here; the EMAIL and DOB datatypes shown are
  --  representative)
  EMAIL      VARCHAR2 (40),
  DOB        DATE,
  ADD_TYPE   CHAR (2) CONSTRAINT CK_ADD_TYPE
             CHECK (ADD_TYPE IN ('B1','B2','H1','H2')),
  CRE_USER   VARCHAR2 (5) DEFAULT USER,
  CRE_TIME   TIMESTAMP (3) DEFAULT SYSTIMESTAMP,
  MOD_USER   VARCHAR2 (5),
  MOD_TIME   TIMESTAMP (3),
  CONSTRAINT PK_CUSTOMER_MASTER PRIMARY KEY (CUST_ID)
    USING INDEX TABLESPACE CUST_INDX)
TABLESPACE CUST_DATA;

CREATE INDEX CUST_EMAIL ON CUSTOMER_MASTER (EMAIL)
TABLESPACE CUST_INDX;

ALTER TABLE CUSTOMER_MASTER
ADD CONSTRAINT UQ_CUST_EMAIL UNIQUE (EMAIL)
USING INDEX CUST_EMAIL;

Now you are ready to create the CUSTOMER_ADDRESSES table. You create the table first and then add the primary key and the foreign key. You create the foreign key with an option to defer its checking until commit time.

CREATE TABLE CUSTOMER_ADDRESSES (
  CUST_ID    VARCHAR2 (10),
  ADD_TYPE   CHAR (2),
  ADD_LINE1  VARCHAR2 (40) NOT NULL,
  ADD_LINE2  VARCHAR2 (40),
  CITY       VARCHAR2 (40) NOT NULL,
  STATE      VARCHAR2 (2) NOT NULL,
  ZIP        NUMBER (5) NOT NULL)
TABLESPACE CUST_DATA;

ALTER TABLE CUSTOMER_ADDRESSES
ADD CONSTRAINT PK_CUST_ADDRESSES
PRIMARY KEY (CUST_ID, ADD_TYPE)
USING INDEX TABLESPACE CUST_INDX;

ALTER TABLE CUSTOMER_ADDRESSES
ADD CONSTRAINT FK_CUST_ADDRESSES
FOREIGN KEY (CUST_ID) REFERENCES CUSTOMER_MASTER;

ALTER TABLE CUSTOMER_ADDRESSES
ADD CONSTRAINT CK_ADD_TYPE2
CHECK (ADD_TYPE IN ('B1','B2','H1','H2'));

You forgot, however, to make the foreign key deferrable and to have rows deleted from the CUSTOMER_ADDRESSES table when the corresponding row is deleted from the CUSTOMER_MASTER table, so you fix that now:

ALTER TABLE CUSTOMER_ADDRESSES
DROP CONSTRAINT FK_CUST_ADDRESSES;

ALTER TABLE CUSTOMER_ADDRESSES
ADD CONSTRAINT FK_CUST_ADDRESSES
FOREIGN KEY (CUST_ID) REFERENCES CUSTOMER_MASTER
ON DELETE CASCADE
DEFERRABLE INITIALLY IMMEDIATE;

Now you create the CUSTOMER_REFERENCES table. Since its rows never grow with updates, you set the table's PCTFREE to 0.
CREATE TABLE CUSTOMER_REFERENCES (
  CUST_ID      VARCHAR2 (10) REFERENCES CUSTOMER_MASTER,
  CUST_REF_ID  VARCHAR2 (10) REFERENCES CUSTOMER_MASTER,
  CRE_USER     VARCHAR2 (5),
  CRE_TIME     TIMESTAMP (3) DEFAULT SYSTIMESTAMP,
  MOD_USER     VARCHAR2 (5) DEFAULT USER,
  MOD_TIME     TIMESTAMP (3),
  CONSTRAINT PK_CUST_REFS PRIMARY KEY (CUST_ID, CUST_REF_ID))
TABLESPACE CUST_DATA PCTFREE 0;

Reviewing your work, you find that the CUSTOMER_ADDRESSES table does not have the created and modified user information and that the CUSTOMER_REFERENCES table has a DEFAULT value assigned to the wrong column. You fix these problems as follows, giving CUSTOMER_ADDRESSES the same audit columns as the other tables:

ALTER TABLE CUSTOMER_ADDRESSES ADD (
  CRE_USER  VARCHAR2 (5) DEFAULT USER,
  CRE_TIME  TIMESTAMP (3) DEFAULT SYSTIMESTAMP,
  MOD_USER  VARCHAR2 (5),
  MOD_TIME  TIMESTAMP (3));
ALTER TABLE CUSTOMER_REFERENCES MODIFY MOD_USER DEFAULT NULL;

ALTER TABLE CUSTOMER_REFERENCES MODIFY CRE_USER DEFAULT USER;

Also, the primary key for the CUSTOMER_REFERENCES table did not specify a tablespace for the primary key index, so the index was created in the default tablespace of the table. You correct this as follows:

SQL> SELECT TABLESPACE_NAME FROM DBA_INDEXES
  2  WHERE INDEX_NAME = 'PK_CUST_REFS';

TABLESPACE_NAME
------------------------------
CUST_DATA

SQL> ALTER INDEX PK_CUST_REFS REBUILD TABLESPACE CUST_INDX;

Index altered.

SQL> SELECT TABLESPACE_NAME FROM DBA_INDEXES
  2  WHERE INDEX_NAME = 'PK_CUST_REFS';

TABLESPACE_NAME
------------------------------
CUST_INDX

SQL>

Now you query the dictionary views to see the table information.

SQL> SELECT TABLE_NAME, TABLESPACE_NAME
  2  ...

6 rows selected.

SQL>

Next you query the dictionary views to see the constraint information. Notice that the two foreign key constraints on the CUSTOMER_REFERENCES table and the NOT NULL constraints on the CUSTOMER_ADDRESSES table have system-generated names.
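A query along these lines displays those names (a minimal sketch; run it as the owner of the tables):

SELECT CONSTRAINT_NAME, CONSTRAINT_TYPE, TABLE_NAME
FROM   USER_CONSTRAINTS
WHERE  TABLE_NAME IN ('CUSTOMER_MASTER',
                      'CUSTOMER_ADDRESSES',
                      'CUSTOMER_REFERENCES');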
Deferring Constraint Checks

By default, Oracle checks whether the data conforms to the constraint when the statement is executed. Oracle allows you to change this behavior if the constraint is created using the DEFERRABLE clause (NOT DEFERRABLE is the default). DEFERRABLE specifies that the transaction can set the constraint-checking behavior. INITIALLY IMMEDIATE specifies that the constraint be checked for conformance at the end of each SQL statement (this is the default). INITIALLY DEFERRED specifies that the constraint be checked for conformance at the end of the transaction. You cannot change the DEFERRABLE status of a constraint using ALTER TABLE MODIFY CONSTRAINT; you must drop and re-create the constraint. You can, however, change the INITIALLY [DEFERRED/IMMEDIATE] setting using ALTER TABLE. If the constraint is DEFERRABLE, you can change the behavior by using the SET CONSTRAINTS command or the ALTER SESSION SET CONSTRAINTS command. You can defer or resume deferred constraint checking by listing the constraints by name or by specifying the ALL keyword. You use the SET CONSTRAINTS command to set the constraint-checking behavior for the current transaction, and you use the ALTER SESSION command to set the constraint-checking behavior for the current session. Let's look at an example. Create a primary key constraint on the CUSTOMER table and a foreign key constraint on the ORDERS table as DEFERRABLE.
Although the constraints are created DEFERRABLE, they are not deferred, because of the INITIALLY IMMEDIATE clause.

ALTER TABLE CUSTOMER
ADD CONSTRAINT PK_CUST_ID PRIMARY KEY (CUST_ID)
DEFERRABLE INITIALLY IMMEDIATE;

ALTER TABLE ORDERS
ADD CONSTRAINT FK_CUST_ID FOREIGN KEY (CUST_ID)
REFERENCES CUSTOMER (CUST_ID)
ON DELETE CASCADE DEFERRABLE;

If you try to add a row to the ORDERS table with a CUST_ID that is not available in the CUSTOMER table, Oracle returns an error immediately, even though you plan to add the CUSTOMER row soon. Since the constraints are verified for conformance at each SQL statement, you must insert the CUSTOMER row first and then insert the row into the ORDERS table. Because the constraints are defined as DEFERRABLE, you can change this behavior by using the following command:

SET CONSTRAINTS ALL DEFERRED;

Now you can insert rows into these tables in any order. Oracle checks the constraint conformance only at commit time. If you want deferred constraint checking as the default, create or modify the constraint by using INITIALLY DEFERRED. For example:

ALTER TABLE CUSTOMER
MODIFY CONSTRAINT PK_CUST_ID INITIALLY DEFERRED;
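Here is a brief sketch of the deferred behavior with these two tables (the column names and values are hypothetical):

SET CONSTRAINTS ALL DEFERRED;

-- child row first; the parent row does not exist yet
INSERT INTO ORDERS (ORDER_ID, CUST_ID) VALUES (1001, 'C0100');

-- parent row arrives later in the same transaction
INSERT INTO CUSTOMER (CUST_ID, NAME) VALUES ('C0100', 'ABC Inc');

COMMIT;   -- FK_CUST_ID is verified here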
Querying Constraint Information
Oracle Objective
Obtain constraint information from the data dictionary
You can query constraints, their columns, type, status, and so on from the dictionary, using the following views.
DBA_CONSTRAINTS

You can query the ALL_CONSTRAINTS, DBA_CONSTRAINTS, and USER_CONSTRAINTS views to get information about the constraints. The CONSTRAINT_TYPE column shows the type of constraint—C for check, P for primary key, U for unique key, and R for referential (foreign key); V and O are associated with views. For check constraints, the SEARCH_CONDITION column shows the check condition. NOT NULL constraints are listed in this view as CHECK constraints. NOT NULL constraint information can also be found in the NULLABLE column of the DBA_TAB_COLUMNS view. Here is a sample query to get the constraint information:

SQL> SELECT CONSTRAINT_NAME, CONSTRAINT_TYPE, DEFERRED,
  2  DEFERRABLE, STATUS
  3  FROM DBA_CONSTRAINTS
  4  WHERE TABLE_NAME = 'ORDERS';

CONSTRAINT_NAME   C DEFERRED  DEFERRABLE     STATUS
----------------- - --------- -------------- --------
CK_QUANTITY       C IMMEDIATE NOT DEFERRABLE DISABLED
PK_ORDERS         P DEFERRED  DEFERRABLE     ENABLED

SQL>
DBA_CONS_COLUMNS

The ALL_CONS_COLUMNS, DBA_CONS_COLUMNS, and USER_CONS_COLUMNS views show the columns associated with the constraints.

SQL> SELECT CONSTRAINT_NAME, COLUMN_NAME, POSITION
  2  FROM DBA_CONS_COLUMNS
  3  WHERE TABLE_NAME = 'ORDERS';

CONSTRAINT_NAME                COLUMN_NAME       POSITION
------------------------------ --------------- ----------
CK_QUANTITY                    QUANTITY
PK_ORDERS                      ORDER_NUM                2
PK_ORDERS                      ORDER_DATE               1
Summary

This chapter discussed the various options available for creating tables, indexes, and constraints. You create tables using the CREATE TABLE command. By default, the table is created in the current schema; to create the table in another schema, you qualify the table name with the schema name. You can specify storage parameters when creating the table. The storage parameters that control extent sizes are INITIAL, NEXT, and PCTINCREASE. Once the table is created, you cannot change the INITIAL and MINEXTENTS parameters. PCTFREE controls the free space in the data block.

Partitioning tables lets you manage large tables more easily and results in better query performance. Partitioning breaks a large table into smaller, more manageable pieces. Four partitioning methods are available: range, hash, list, and composite. You can also create indexes on partitioned tables. The indexes can be equipartitioned, which results in what is known as a local index, meaning the index partitions have the same partition keys and number of partitions as the partitioned table. You can create partitioned indexes on partitioned or non-partitioned tables. Similarly, partitioned tables can have partitioned local, partitioned global, or nonpartitioned indexes.

You can alter tables and indexes to deallocate extents or unused blocks. You use the DEALLOCATE clause of the ALTER TABLE statement to release the free blocks above the high-water mark (HWM). You can also manually allocate space to a table or an index by using the ALLOCATE EXTENT clause. You can move or reorganize tables by using the MOVE clause, specifying a new tablespace and storage parameters. You can use the ANALYZE command on tables and indexes to validate the structure and to identify corrupt blocks. You can also use the ANALYZE command to collect statistics and to find and fix the chained rows in a table. The extended ROWID is an 18-character representation of the physical location of a row. You can use the DBMS_ROWID package to convert ROWIDs between restricted and extended formats. You can query information on tables from DBA_TABLES, DBA_TAB_COLUMNS, DBA_TAB_PARTITIONS, and so on.

You can create indexes as b-tree or bitmap. Bitmap indexes save storage space for low-cardinality columns. You can also create reverse key or function-based indexes. An Index Organized Table (IOT) stores the index and row
data in the b-tree structure. Specify the tablespace and storage parameters when creating indexes. When you create indexes ONLINE, the table remains available for insert/update/delete operations during the index build. You can use the REBUILD clause of the ALTER INDEX command to move the index to a different tablespace or to reorganize the index. You can also change a reverse key index to a normal index and vice versa. You can monitor index usage and then drop unused indexes from the database to save resources. You create constraints on tables to enforce business rules. There are five types of constraints: not null, check, unique, primary key, and foreign key. You can create constraints that check conformance at each SQL statement or when committing the changes; checking at each statement is the default. You can enable and disable constraints. Enable constraints with the NOVALIDATE clause to save time after large data loads.
Exam Essentials

Understand the Oracle datatypes. Know the datatypes that you can use when creating/altering tables. Learn the new datatypes introduced in Oracle9i—TIMESTAMP and INTERVAL. Know the default width and precision of each datatype.
Know the row structure. Learn the structure of the row and its components. The row header normally occupies 3 bytes; columns with a length of less than 251 bytes occupy 1 byte for column-length storage, and columns with a length of more than 250 bytes occupy 2 bytes for storing the column length in the row piece.
Know the ROWID. The ROWID is the physical location of the row. Understand the components that constitute the ROWID and the datatypes available to store ROWID information.
Know the syntax for creating tables. The CREATE TABLE statement is a complex statement. Understand all the clauses available to specify its storage, segment space management, and logging characteristics.
Know how to create the different types of tables. Understand the keywords that appear in the ORGANIZATION clause of the CREATE TABLE statement. HEAP is the default and is used to create regular tables. INDEX creates an IOT, and EXTERNAL creates an external table.
Know how to rebuild a table. You can reorganize a table by using the MOVE clause of the ALTER TABLE statement. The table can be rebuilt in the same tablespace or in a different tablespace with new storage parameters.
Know how to drop a column from a table. Dropping a column from a table was introduced in Oracle8i. For large tables, you can mark a column as unused and drop it later. Dropping the column frees disk space.
Understand how to use the data dictionary to find table information. Know the dictionary views that contain information about tables, their storage, segments and extents, partitions, creation/modification timestamps, and so on. The most commonly used views to query table information are DBA_TABLES, DBA_TAB_COLUMNS, DBA_SEGMENTS, and DBA_OBJECTS.
Understand the different types of indexes available. Know the different types of indexes and when to use them. The most commonly used index is the b-tree index. Bitmap, reverse-key, and function-based are the other types of indexes.
Understand how to manage indexes. Over time, indexes grow large and contain a lot of free space. Such indexes can be coalesced. You can also reorganize indexes using the ALTER INDEX REBUILD statement, moving the index to a different tablespace and specifying new storage parameters.
Monitor index usage. The ability to monitor index usage is new in Oracle9i. It is good to know which indexes are not used over a period of time, so that you can drop them and save resources. Use the MONITORING USAGE clause of the ALTER INDEX statement to monitor indexes.
Know the different types of constraints that can be created on a table. Understand their purpose and the syntax for creating them. NOT NULL is a column-level constraint; the other constraints can be defined at the column level or the table level.
Know the dictionary views to get constraint information. You use the data dictionary views DBA_CONSTRAINTS and DBA_CONS_COLUMNS to get constraint information. The INDEX_OWNER and INDEX_NAME columns of DBA_CONSTRAINTS identify the index used to enforce a unique or primary key constraint.
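As a quick refresher on the monitoring syntax, here is a minimal sketch (the index name EMP_IDX is hypothetical):

ALTER INDEX EMP_IDX MONITORING USAGE;

-- after a representative workload has run:
SELECT INDEX_NAME, USED
FROM   V$OBJECT_USAGE
WHERE  INDEX_NAME = 'EMP_IDX';

ALTER INDEX EMP_IDX NOMONITORING USAGE;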
Review Questions

1. A table is created as follows:

CREATE TABLE MY_TABLE (COL1 NUMBER)
STORAGE (INITIAL 2M NEXT 2M
MINEXTENTS 6 PCTINCREASE 0);
When you issue the following statement, what will be the size of the table, if the high-water mark of the table is 200KB? ALTER TABLE MY_TABLE DEALLOCATE UNUSED KEEP 1000K; A. 1000KB B. 200KB C. 1200KB D. 2MB E. 13MB 2. Which command is used to drop a constraint? A. ALTER TABLE
MODIFY CONSTRAINT
B. DROP CONSTRAINT C. ALTER TABLE
DROP CONSTRAINT
D. ALTER CONSTRAINT
DROP
3. When you define a column with datatype TIMESTAMP WITH LOCAL
TIME ZONE, what is the precision of seconds stored? A. 2 B. 6 C. 9 D. 0
4. Which data dictionary view has the timestamp of the table creation? A. DBA_OBJECTS B. DBA_SEGMENTS C. DBA_TABLES D. All the above 5. What happens when you issue the following statement and the
CHAINED_ROWS table does not exist in the current schema? ANALYZE TABLE EMPLOYEE LIST CHAINED ROWS; A. Oracle creates the CHAINED_ROWS table. B. Oracle updates the dictionary with the number of chained rows in
the table. C. Oracle creates the CHAINED_ROWS table under the SYS schema; if
one already exists under SYS, Oracle uses it. D. The statement fails. 6. The following statement is issued against the primary key constraint
(PK_BONUS) of the BONUS table. (Choose two statements that are true.) ALTER TABLE BONUS MODIFY CONSTRAINT PK_BONUS DISABLE VALIDATE;
A. No new rows can be added to the BONUS table. B. Existing rows of the BONUS table are validated before disabling the
constraint. C. Rows can be modified, but the primary key columns cannot
change. D. The unique index created when defining the constraint is dropped.
7. Which clause in the ANALYZE command checks for the integrity of the
rows in the table? A. COMPUTE STATISTICS B. VALIDATE STRUCTURE C. LIST INVALID ROWS D. None of the above 8. Which statement is not true? A. A partition can be range-partitioned. B. A subpartition can be range-partitioned. C. A partition can be hash-partitioned. D. A subpartition can be hash-partitioned. 9. A table is created with an INITRANS value of 2. Which value would
you choose for INITRANS of an index created on this table? A. 4 B. 2 C. 1 10. When validating a constraint, why would you specify the EXCEPTIONS
clause? A. To display the ROWIDs of the rows that do not satisfy the constraint B. To move the bad rows to the table specified in the EXCEPTIONS
clause C. To save the ROWIDs of the bad rows in the table specified in the
EXCEPTIONS clause D. To save the bad rows in the table specified in the EXCEPTIONS clause
15. Which keyword do you use in the CREATE INDEX command to create
a function-based index? A. CREATE FUNCTION INDEX B. CREATE INDEX
ORGANIZATION INDEX
C. CREATE INDEX
FUNCTION BASED
D. None of the above 16. Which data dictionary view shows statistical information from the
ANALYZE INDEX VALIDATE STRUCTURE command? A. INDEX_STATS B. DBA_INDEXES C. IND D. None; VALIDATE STRUCTURE does not generate statistics. 17. A constraint is created with the DEFERRABLE INITIALLY IMMEDIATE
clause. What does this mean? A. Constraint checking is done only at commit time. B. Constraint checking is done after each SQL statement, but you
can change this behavior by specifying SET CONSTRAINTS ALL DEFERRED. C. Existing rows in the table are immediately checked for constraint
violation. D. The constraint is immediately checked in a DML operation, but
subsequent constraint verification is done at commit time. 18. Which script creates the CHAINED_ROWS table? A. catproc.sql B. catchain.sql C. utlchain.sql D. No script is necessary; ANALYZE TABLE LIST CHAINED ROWS
19. What is the difference between a unique constraint and a primary
key constraint? A. A unique key constraint requires a unique index to enforce the
constraint, whereas a primary key constraint can enforce uniqueness using a unique or non-unique index. B. A primary key column can be NULL, but a unique key column
cannot be NULL. C. A primary key constraint can use an existing index, but a unique
constraint always creates an index. D. A unique constraint column can be NULL, but primary key
column(s) cannot be NULL. 20. You can monitor an index for its usage by using the MONITORING
USAGE clause of the ALTER INDEX statement. Which data dictionary view do you use to query the index usage? A. USER_INDEX_USAGE B. V$OBJECT_USAGE C. V$INDEX_USAGE D. DBA_INDEX_USAGE
Answers to Review Questions

1. C. You use the KEEP parameter in the DEALLOCATE clause to specify
the amount of space you want to keep in the table above the HWM. If you do not specify the KEEP parameter, Oracle deallocates all the space above the HWM if the HWM is above MINEXTENTS; otherwise, free space is deallocated above MINEXTENTS. 2. C. Constraints are defined on the table and are dropped using the
DROP clause of the ALTER TABLE command. For dropping the primary key, you can also specify PRIMARY KEY instead of the constraint name. Similarly, to drop a unique constraint, you can also specify UNIQUE followed by the column list in parentheses. 3. B. The TIMESTAMP datatypes
The range of values can be from 0 to 9. 4. A. The DBA_OBJECTS view has information about all the objects cre-
ated in the database and has the timestamp and status of the object in the column CREATED. DBA_TABLES does not show the timestamp. 5. D. If you do not specify a table name to insert the ROWID of chained/
migrated rows, Oracle looks for a table named CHAINED_ROWS in the user’s schema. If the table does not exist, Oracle returns an error. The dictionary (the DBA_TABLES view) is updated with the number of chained rows when you do a COMPUTE STATISTICS on the table. 6. A and D. DISABLE VALIDATE disables the constraint and drops the
index, but keeps the constraint valid. No DML operation is allowed on the table. 7. B. The VALIDATE STRUCTURE clause of the ANALYZE TABLE com-
mand checks the structure of the table and makes sure all rows are readable. 8. B. Subpartitions can only be hash-partitioned. A partition can be
range-partitioned or hash-partitioned. 9. A. Since index blocks hold more entries per block than table data
blocks hold, you should provide a higher value of INITRANS to the index than to the table.
10. C. If you specify the EXCEPTIONS INTO clause when validating or
enabling a constraint, the ROWIDs of the rows that do not satisfy the constraint are saved in the table specified in the EXCEPTIONS clause. You can remove the bad rows or fix the column values and enable the constraint. 11. B. The BUFFER_POOL parameter specifies a buffer pool cache for the
table or index. The KEEP pool retains the blocks in the SGA. RECYCLE removes blocks from the SGA as soon as the operation is completed, and the DEFAULT pool is for objects for which KEEP or RECYCLE is not specified. 12. D. You use the MOVE clause to reorganize a table. You can specify
new tablespace and storage parameters. Queries are allowed on the table, but no DML operations are allowed during the move. 13. B. When you change the storage parameters for an existing index or
table, you cannot change the MINEXTENTS and INITIAL values. 14. A. The format of a ROWID is OOOOOOFFFBBBBBBRRR;
OOOOOO is the object number, FFF is the relative data file number where the block is located, BBBBBB is the block ID where the row is located, and RRR is the row in the block. 15. D. You don’t need to specify a keyword to create a function-based
index; you need only to specify the function itself. To enable a functionbased index, you set the parameter QUERY_REWRITE_ENABLED to TRUE, and you set QUERY_REWRITE_INTEGRITY to TRUSTED. 16. A. The INDEX_STATS and INDEX_HISTOGRAMS views show statistical
information from the ANALYZE INDEX VALIDATE STRUCTURE statement. 17. B. DEFERRABLE specifies that the constraint can be deferred using the
SET CONSTRAINTS command. INITIALLY IMMEDIATE specifies that the constraint’s default behavior is to validate the constraint for each SQL statement. 18. C. The utlchain.sql script, located in the rdbms/admin directory
for the Oracle software installation, creates the table. When chained or migrated rows are found in the table after you issue the ANALYZE TABLE LIST CHAINED ROWS command, the ROWIDs of such chained/ migrated rows are inserted into the CHAINED_ROWS table.
19. D. Columns that are part of the primary key cannot accept NULL
values. 20. B. The V$OBJECT_USAGE view has information about the indexes
that are monitored. The START_MONITORING and END_ MONITORING columns give the start and end timestamp of monitoring. If an index is used, the USED column will have a value YES.
Managing Users, Security, and Globalization Support

ORACLE9i DBA FUNDAMENTALS I EXAM OBJECTIVES OFFERED IN THIS CHAPTER:

Manage passwords using profiles
Administer profiles
Control use of resources using profiles
Obtain information about profiles, password management, and resources
Create new database users
Alter and drop existing database users
Monitor information about existing users
Identify system and object privileges
Grant and revoke privileges
Identify auditing capabilities
Create and modify roles
Control availability of roles
Remove roles
Use predefined roles
Display role information from the data dictionary
Choose database character set and national character set for a database
Specify the language-dependent behavior using initialization parameters, environment variables, and the ALTER SESSION command
Use the different types of National Language Support (NLS) parameters
Explain the influence on language-dependent application behavior
Obtain information about Globalization Support usage
Exam objectives are subject to change at any time without prior notice and at Oracle’s sole discretion. Please visit Oracle’s Training and Certification Web site (http://www.oracle.com/education/certification/) for the most current exam objectives listing.
Controlling database access and resource limits is an important aspect of the DBA's function. You use profiles to manage the database and system resources and to manage database passwords and password verification. You control database and data access using privileges, and you create roles to manage the privileges. This chapter covers creating users and assigning proper resources and privileges. It also discusses the auditing capabilities of the database and using Globalization Support.
Profiles
You use profiles to control the database and system resource usage. Oracle provides a set of predefined resource parameters that you can use to monitor and control database resource usage. You can define limits for each resource by using a database profile. You also use profiles for password management. You can create profiles for different user communities and then assign a profile to each user. When you create the database, Oracle creates a profile named DEFAULT, and if you do not specify a profile for the user, Oracle assigns the user the DEFAULT profile.
Oracle lets you control the following types of resource usage through profiles:
Concurrent sessions per user
Elapsed and idle time connected to the database
CPU time used
Private SQL and PL/SQL area used in the SGA (System Global Area)
Logical reads performed
Amount of private SGA space used in Shared Server configurations
Resource limits are enforced only if you have set the parameter RESOURCE_LIMIT to TRUE. Even if you have defined profiles and assigned profiles to users, Oracle enforces them only when this parameter is set to TRUE. You can set this parameter in the initialization parameter file so that every time the database starts, the resource usage is controlled for each user using the assigned profile. You enable or disable resource limits using the ALTER SYSTEM command. The default value of RESOURCE_LIMIT is FALSE. The limits for each resource are specified as an integer; you can set no limit for a given resource by specifying UNLIMITED, or you can use the value specified in the DEFAULT profile by specifying DEFAULT. The DEFAULT profile initially has the value UNLIMITED for all resources. After you create the database, you can modify the DEFAULT profile. Most resource limits are set at the session level; a session is created when a user connects to the database. You can control certain limits at the statement level (but not at the transaction level). If a user exceeds a resource limit, Oracle aborts the current operation, rolls back the changes made by the statement, and returns an error. The user has the option of committing or rolling back the transaction, because the statements issued earlier in the transaction are not aborted. No other operation is permitted when a session-level limit is reached. The user can disconnect, in which case the transaction is committed. You use the following parameters to control resources: SESSIONS_PER_USER Limits the number of concurrent user sessions. No more sessions from the current user are allowed when this threshold is reached.
CPU_PER_SESSION Limits the amount of CPU time a session can use. The CPU time is specified in hundredths of a second.
CPU_PER_CALL Limits the amount of CPU time a single SQL statement can use. The CPU time is specified in hundredths of a second. This parameter is useful for controlling runaway queries, but be careful when specifying this limit for batch programs.
LOGICAL_READS_PER_SESSION Limits the number of data blocks read in a session, including blocks read from memory and from physical reads.
LOGICAL_READS_PER_CALL Limits the number of data blocks read by a single SQL statement, including blocks read from memory and from physical reads.
PRIVATE_SGA Limits the amount of space allocated in the SGA for private areas, per session. Private areas for SQL and PL/SQL statements are created in the SGA in the multithreaded (Shared Server) architecture. You can specify K to indicate the size in KB or M to indicate the size in MB. If you specify neither K nor M, the size is in bytes. This limit does not apply to dedicated server connections.
CONNECT_TIME Specifies the maximum number of minutes a session can stay connected to the database (total elapsed time, not CPU time). When the threshold is reached, the user is automatically disconnected from the database; any pending transaction is rolled back.
IDLE_TIME Specifies the maximum number of minutes a session can be continuously idle, that is, without any activity for a continuous period of time. When the threshold is reached, the user is disconnected from the database; any pending transaction is rolled back.
COMPOSITE_LIMIT A weighted sum of four resource limits: CPU_PER_SESSION, LOGICAL_READS_PER_SESSION, CONNECT_TIME, and PRIVATE_SGA. You can define a cost for each of these system resources (the resource cost on one database may differ from another, based on the number of transactions, CPU, memory, and so on); the weighted sum is compared against the composite limit. The upcoming section "Managing Profiles" discusses setting the resource cost.
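Remember that these limits take effect only while RESOURCE_LIMIT is TRUE. For example, you can enable enforcement on a running instance without a restart:

ALTER SYSTEM SET RESOURCE_LIMIT = TRUE;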
Managing Passwords
Oracle Objective
Manage passwords using profiles
You also use profiles to manage passwords. You can set the following by using profiles: Account locking Number of failed login attempts, and the number of days the password will be locked. Password expiration How often passwords should be changed, whether passwords can be reused, and the grace period after which the user is warned that the password change is due. Password complexity Use of a customized function to verify the password complexity—for example, the password should not be the same as the user ID, cannot include commonly used words, and so on. You can use the following parameters in profiles to manage passwords: FAILED_LOGIN_ATTEMPTS Specifies the maximum number of consecutive invalid login attempts (providing an incorrect password) allowed before the user account is locked. PASSWORD_LOCK_TIME Specifies the number of days the user account will remain locked after the user has made FAILED_LOGIN_ATTEMPTS number of consecutive failed login attempts. PASSWORD_LIFE_TIME Specifies the number of days a user can use one password. If the user does not change the password within the number of days specified, all connection requests return an error. The DBA then has to reset the password. PASSWORD_GRACE_TIME Specifies the number of days the user will get a warning before the password expires. This is a reminder for the user to change the password. PASSWORD_REUSE_TIME Specifies the number of days a password cannot be used again after changing it.
PASSWORD_REUSE_MAX Specifies the number of password changes required before a password can be reused. You cannot specify a value for both PASSWORD_REUSE_TIME and PASSWORD_REUSE_MAX; one should always be set to UNLIMITED, because you can enable only one type of password history method. PASSWORD_VERIFY_FUNCTION Specifies the function you want to use to verify the complexity of the new password. Oracle provides a default script, which you can modify.
You can specify minutes or hours as a fraction or expression in parameters that require days as a value. One hour can be represented as 0.042 days or 1/24, and one minute can be specified as 0.000694 days, 1/24/60, or 1/1440.
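For example, to lock an account for one hour after repeated failed logins, you could set the lock time to 1/24 of a day (the profile name APP_USER here is hypothetical):

ALTER PROFILE APP_USER LIMIT
PASSWORD_LOCK_TIME 1/24;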
Managing Profiles
Oracle Objective
Administer profiles
You can create many profiles in the database that specify both resource management parameters and password management parameters. However, you can assign a user only one profile at any given time. To create a profile, you use the CREATE PROFILE command. You need to provide a name for the profile and then specify the parameter names and their values separated by space(s). As an example, let’s create a profile to manage passwords and resources for the accounting department users. The users are required to change their password every 60 days, and they cannot reuse a password for 90 days. They are allowed to make a typo in the password only six consecutive times while connecting to the database; if the login fails a seventh time, their account is locked forever (until the DBA or security department unlocks the account). The accounting department users are allowed a maximum of six database connections; they can stay connected to the database for 24 hours, but an inactivity of 2 hours will terminate their session. To prevent users from
performing runaway queries, in this example we will set the maximum number of blocks they can read per SQL statement to 1 million.

SQL> CREATE PROFILE ACCOUNTING_USER LIMIT
  2  SESSIONS_PER_USER 6
  3  CONNECT_TIME 1440
  4  IDLE_TIME 120
  5  LOGICAL_READS_PER_CALL 1000000
  6  PASSWORD_LIFE_TIME 60
  7  PASSWORD_REUSE_TIME 90
  8  PASSWORD_REUSE_MAX UNLIMITED
  9  FAILED_LOGIN_ATTEMPTS 6
 10  PASSWORD_LOCK_TIME UNLIMITED;

Profile created.

In the example, parameters such as PASSWORD_GRACE_TIME, CPU_PER_SESSION, and PRIVATE_SGA are not used. They will have a value of DEFAULT, which means the value will be taken from the DEFAULT profile. The DBA or security officer can unlock a locked user account by using the ALTER USER command. The following example shows the unlocking of SCOTT's account.

SQL> ALTER USER SCOTT ACCOUNT UNLOCK;

User altered.
Composite Limit The composite limit specifies the total resource cost for a session. You can define a weight for each resource based on the available resources. The following resources are considered for calculating the composite limit:
CPU_PER_SESSION
LOGICAL_READS_PER_SESSION
CONNECT_TIME
PRIVATE_SGA
The costs associated with each of these resources are set at the database level by using the ALTER RESOURCE COST command. By default, the
resources have a cost of 0, which means they are not considered for the composite limit (they are inexpensive). A higher cost means that the resource is expensive. If you do not specify one of these resources in ALTER RESOURCE COST, Oracle keeps its previous value. For example:

SQL> ALTER RESOURCE COST
  2  LOGICAL_READS_PER_SESSION 10
  3  CONNECT_TIME 2;

Resource cost altered.

Here CPU_PER_SESSION and PRIVATE_SGA will have a cost of 0 (if they have not been modified before). You can define limits for each of the four parameters in the profile as well as set the composite limit. The limit that is reached first is the one that takes effect. The following statement adds a composite limit to the profile you created earlier.

SQL> ALTER PROFILE ACCOUNTING_USER LIMIT
  2  COMPOSITE_LIMIT 1500000;

Profile altered.

The cost for the composite limit is calculated as follows:

Cost = (10 × LOGICAL_READS_PER_SESSION) + (2 × CONNECT_TIME)

If the user has performed 100,000 block reads and has been connected for two hours, the cost so far is (10 × 100,000) + (2 × 120) = 1,000,240. The user is restricted when this cost exceeds 1,500,000 or when the LOGICAL_READS_PER_SESSION or CONNECT_TIME limit set in the profile is reached.
Password Verification Function You can create a function to verify the complexity of the passwords and assign the function name to the PASSWORD_VERIFY_FUNCTION parameter in the profile. When a password is changed, Oracle checks to see whether the supplied password satisfies the conditions specified in this function. Oracle provides a default verification function, known as VERIFY_FUNCTION, which is in the rdbms/admin directory of your Oracle software installation; the script is named utlpwdmg.sql. The password verification function should be owned by SYS and should have the following characteristics.
FUNCTION SYS.<function_name> (
   <userid_variable>       IN VARCHAR2 (30),
   <password_variable>     IN VARCHAR2 (30),
   <old_password_variable> IN VARCHAR2 (30))
RETURN BOOLEAN

Oracle's default password verification function checks that the password conforms to the following:
Is not the same as the username
Has a minimum length
Is not too simple; a list of words is checked
Contains at least one letter, one digit, and one punctuation mark
Differs from the previous password by at least three letters
If the new password satisfies all the conditions, the function returns a Boolean result of TRUE, and the user’s password is changed.
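To make the shape of such a function concrete, here is a minimal sketch (this is not Oracle's supplied VERIFY_FUNCTION; it checks only two of the conditions listed above, and the function name is hypothetical):

CREATE OR REPLACE FUNCTION SYS.SIMPLE_VERIFY (
   userid_parm       IN VARCHAR2,
   password_parm     IN VARCHAR2,
   old_password_parm IN VARCHAR2)
RETURN BOOLEAN IS
BEGIN
   -- reject a password that matches the username
   IF UPPER(password_parm) = UPPER(userid_parm) THEN
      RETURN FALSE;
   END IF;
   -- enforce a minimum length of six characters
   IF LENGTH(password_parm) < 6 THEN
      RETURN FALSE;
   END IF;
   RETURN TRUE;
END;
/

You would then attach it to a profile with ALTER PROFILE ... LIMIT PASSWORD_VERIFY_FUNCTION SIMPLE_VERIFY.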
Altering Profiles

Using the ALTER PROFILE command changes profile values. You can change any parameter in the profile using this command. The changes take effect the next time the user connects to the database. For example, to add a password verification function and set a composite limit for the profile you created in the previous example, use the following:

SQL> ALTER PROFILE ACCOUNTING_USER LIMIT
  2  PASSWORD_VERIFY_FUNCTION VERIFY_FUNCTION
  3  COMPOSITE_LIMIT 1500;

Profile altered.
Dropping Profiles

To drop a profile, you use the DROP PROFILE command. If any user is assigned the profile you want to drop, Oracle returns an error. You can drop such profiles by specifying CASCADE, in which case the users who had that profile are assigned the DEFAULT profile.

SQL> DROP PROFILE ACCOUNTING_USER CASCADE;

Profile dropped.
Assigning Profiles

To assign profiles to users, you use the CREATE USER or ALTER USER command. These commands are discussed later in the chapter. This example assigns the ACCOUNTING_USER profile to an existing user named SCOTT:

SQL> ALTER USER SCOTT
  2  PROFILE ACCOUNTING_USER;

User altered.
Querying Profile Information
Oracle Objective
Obtain information about profiles, password management, and resources
You can query profile information from the DBA_PROFILES view. The following example shows information about the profile created previously. The RESOURCE_TYPE column indicates whether the parameter is KERNEL (resource) or PASSWORD.

SQL> SELECT RESOURCE_NAME, LIMIT
  2  FROM DBA_PROFILES
  3  WHERE PROFILE = 'ACCOUNTING_USER'
  4  AND RESOURCE_TYPE = 'KERNEL';

RESOURCE_NAME             LIMIT
------------------------- ----------
...

9 rows selected.

The view USER_RESOURCE_LIMITS shows the resource limits defined for the current user, and the view USER_PASSWORD_LIMITS shows the password limits.

SQL> SELECT * FROM USER_PASSWORD_LIMITS;

RESOURCE_NAME
-------------------------
FAILED_LOGIN_ATTEMPTS
PASSWORD_LIFE_TIME
PASSWORD_REUSE_TIME
PASSWORD_REUSE_MAX
PASSWORD_VERIFY_FUNCTION
PASSWORD_LOCK_TIME
PASSWORD_GRACE_TIME

7 rows selected.

You can query the system resource costs from the RESOURCE_COST view.

SQL> SELECT * FROM RESOURCE_COST;

RESOURCE_NAME              UNIT_COST
------------------------- ----------
CPU_PER_SESSION                    0
LOGICAL_READS_PER_SESSION         10
CONNECT_TIME                       2
PRIVATE_SGA                        0
Users
Access to the Oracle database is provided using database accounts known as usernames (users). If the user owns database objects, the account
is known as a schema, which is a logical grouping of all the objects owned by the user. Persons requiring access to the database should have a valid username created in the database. The following properties are associated with a database user account: Authentication method Each user must be authenticated to connect to the database by using a password, through the operating system, or via the Enterprise Directory Service. Operating system authentication is discussed in the “Privileges and Roles” section. Default and temporary tablespaces The default tablespace specifies a tablespace for the user to create objects if another tablespace is not explicitly specified. The user needs a quota assigned in the tablespace to create objects, even if the tablespace is the user’s default. You use the temporary tablespace to create the temporary segments; the user need not have any quota assigned in this tablespace. Space quota The user needs a space quota assigned in each tablespace in which they want to create the objects. By default, a newly created user does not have any space quota allocated on any tablespace to create schema objects. For the user to create schema objects such as tables or materialized views, you must allocate a space quota in tablespaces. Profile The user can have a profile to specify the resource limits and password settings. If you don’t specify a profile when you create the user, the DEFAULT profile is assigned.
When you create the database, the SYS and SYSTEM users are created. SYS is the schema that owns the data dictionary.
To create users in the database, you use the CREATE USER command. Specify the authentication method when you create the user. A common authentication method is using the database; the username is assigned a password, which is stored encrypted in the database. Oracle verifies the password when establishing a connection to the database. As an example, let's create a user JOHN with the various clauses available in the CREATE USER command.

SQL> CREATE USER JOHN
  2  IDENTIFIED BY "B1S2!"
  3  DEFAULT TABLESPACE USERS
  4  TEMPORARY TABLESPACE TEMP
  5  QUOTA UNLIMITED ON USERS
  6  QUOTA 1M ON INDX
  7  PROFILE ACCOUNTING_USER
  8  PASSWORD EXPIRE
  9  ACCOUNT UNLOCK;
User created.

The IDENTIFIED BY clause specifies that the user will be authenticated using the database. To authenticate the user using the operating system, specify IDENTIFIED EXTERNALLY. The password specified is not case sensitive. If you do not specify the DEFAULT TABLESPACE clause, the SYSTEM tablespace is assigned as the default tablespace. When you create the database, you can specify a default temporary tablespace (the DEFAULT TEMPORARY TABLESPACE clause of the CREATE DATABASE statement, discussed in Chapter 4). You can also define it later using the ALTER DATABASE statement. If such a default temporary tablespace is specified and you do not specify the TEMPORARY TABLESPACE clause in the CREATE USER command, the database's default temporary tablespace is assigned to the user. If the database does not have a default temporary tablespace defined, the SYSTEM tablespace is assigned as the temporary tablespace by default. You cannot specify the undo tablespace as the default or temporary tablespace. The following query shows whether a default temporary tablespace is defined for the database.

SQL> SELECT * FROM DATABASE_PROPERTIES
  2  WHERE PROPERTY_NAME = 'DEFAULT_TEMP_TABLESPACE';

PROPERTY_NAME              PROPERTY_VALUE
-------------------------- -------------------------
DESCRIPTION
----------------------------------------------------
DEFAULT_TEMP_TABLESPACE    TEMP
Name of default temporary tablespace

SQL>

Although the default and temporary tablespaces are specified, JOHN does not initially have any space quota on the USERS tablespace (or any tablespace). You allocate quotas on the USERS and INDX tablespaces through the QUOTA clause. You can specify the QUOTA clause any number of times with the appropriate tablespace name and space limit. The space limit is specified in bytes, but can be followed by K or M to indicate KB or MB. To create extents, the user should have a sufficient space quota in the tablespace. UNLIMITED specifies that the quota on the tablespace is not limited. The PROFILE clause specifies the profile to be assigned. PASSWORD EXPIRE specifies that the user will be prompted (if using SQL*Plus; otherwise, the DBA should change the password) for a new password at the first login. ACCOUNT UNLOCK is the default; you can specify ACCOUNT LOCK to initially lock the account. The user JOHN can connect to the database only if he has the CREATE SESSION privilege. Granting privileges and roles is discussed later, in the section "Privileges and Roles." The CREATE SESSION privilege is granted to user JOHN by specifying the following:

SQL> GRANT CREATE SESSION TO JOHN;

Grant succeeded.
To create extents, a user with the UNLIMITED TABLESPACE system privilege does not need any space quota in any tablespace.
You can modify all the characteristics you specified when creating a user by using the ALTER USER command. You can also assign or modify the default roles assigned to the user (discussed later in this chapter). Changing the default tablespace of a user affects only the objects created in the future. The following example changes the default tablespace of JOHN and assigns a new password.

ALTER USER JOHN
IDENTIFIED BY SHADOW2#
DEFAULT TABLESPACE APPLICATION_DATA;

You can lock or unlock a user's account as follows:

ALTER USER <username> ACCOUNT [LOCK/UNLOCK]

You can also expire the user's password:

ALTER USER <username> PASSWORD EXPIRE

Users must change the password the next time they log in, or you must change the password. If the password is expired, SQL*Plus prompts for a new password at login time. In the following example, setting the quota to 0 revokes the tablespace quota assigned. The objects created by the user in the tablespace remain there, but no new extents can be allocated in that tablespace.

ALTER USER JOHN QUOTA 0 ON USERS;
Users can change their password by using the ALTER USER command; they do not need the ALTER USER privilege to do so. They can also change their password using the PASSWORD command in SQL*Plus.
Dropping Users

You can drop a user from the database by using the DROP USER command. If the user (schema) owns objects, Oracle returns an error. If you specify the CASCADE keyword, Oracle drops all the objects owned by the user and then drops the user. If other schema objects, such as procedures, packages, or views, refer to the objects in the user's schema, they become invalid. When you drop objects, space is freed up immediately in the relevant tablespaces. The following example shows how to drop the user JOHN, with all the owned objects.

DROP USER JOHN CASCADE;
You cannot drop a user who is currently connected to the database.
Authenticating Users

In this section, we will discuss two widely used user-authentication methods:
Authentication by the database
Authorization by the operating system
When you use database authentication, you define a password for the user (the user can change the password), and Oracle stores the password in the database (encrypted). When users connect to the database, Oracle compares the password supplied by the user with the password in the database. By default, the password supplied by the user is not encrypted when sent over the network. To encrypt the user's password, you must set the ORA_ENCRYPT_LOGIN environment variable to TRUE on the client machine. Similarly, when using database links, the password sent across the network is not encrypted. To encrypt such connections, you must set the DBLINK_ENCRYPT_LOGIN initialization parameter to TRUE. Passwords are encrypted using the data encryption standard (DES) algorithm. When you use authorization by the operating system, Oracle verifies the operating system login account and connects to the database—users need not specify a username and password. Oracle does not store the passwords of such operating-system authenticated users, but they must have a username in the database. The initialization parameter OS_AUTHENT_PREFIX determines the prefix used for operating system authorization. By default, the value is OPS$. For example, if your operating system login name is ALEX, the database username should be OPS$ALEX. When Alex specifies CONNECT / or does not specify a username to connect to the database, Oracle tries to connect Alex to the OPS$ALEX account. You can set the OS_AUTHENT_PREFIX parameter to a null string ""; this will not add any prefix. To create an operating-system authenticated user, use the following:

SQL> CREATE USER OPS$ALEX IDENTIFIED EXTERNALLY;

To connect to a remote database using operating system authorization, set the REMOTE_OS_AUTHENT parameter to TRUE. You must be careful in using this parameter, because connections can be made from any computer.
For example, if you have an operating system account named ORACLE and a database account OPS$ORACLE, you can connect to the database from the machine where the database resides. If you set REMOTE_OS_AUTHENT to TRUE, you can log in to any server with the ORACLE operating system account and connect to the database over the network. If a user creates an operating system ID named ORACLE and is on the network, the user can connect to the database using operating system authorization.
Complying with Oracle Licensing Terms

The DBA is responsible for ensuring that the organization complies with the Oracle licensing agreement. Chapter 4, "Creating a Database and Data Dictionary," discussed the parameters that can be set in the initialization file to enforce license agreements. They are as follows:
LICENSE_MAX_SESSIONS Maximum number of concurrent user sessions. When this limit is reached, only users with the RESTRICTED SESSION privilege are allowed to connect. The default is 0—unlimited. Set this parameter if your license is based on concurrent database usage.
LICENSE_SESSIONS_WARNING A warning limit on the number of concurrent user sessions. The default value is 0—unlimited. A warning message is written to the alert file when the limit is reached.
LICENSE_MAX_USERS Maximum number of users that can be created in the database. The default is 0—unlimited. Set this parameter if your license is based on the total number of database users.
You can change the value of these parameters dynamically by using the ALTER SYSTEM command. For example:

ALTER SYSTEM SET LICENSE_MAX_SESSIONS = 256
LICENSE_SESSIONS_WARNING = 200;

The high-water mark (HWM) column of the V$LICENSE view shows the maximum number of concurrent sessions created since instance start-up, along with the limits set. A value of 0 indicates that no limit is set.
SQL> SELECT * FROM V$LICENSE;

SESSIONS_MAX SESSIONS_WARNING SESSIONS_CURRENT SESSIONS_HIGHWATER  USERS_MAX
------------ ---------------- ---------------- ------------------ ----------
         256              200              105                115          0

You can obtain the total number of database users from the DBA_USERS view:

SELECT COUNT(*) FROM DBA_USERS;
Querying User Information
Oracle Objective
Monitor information about existing users
You can query user information from the data dictionary views DBA_USERS and USER_USERS. USER_USERS shows only one row: information about the current user. You can obtain the user account status, password expiration date, account locked date (if locked), encrypted password, default and temporary tablespaces, profile name, and creation date from this view. Oracle creates a numeric ID and assigns it to the user when the user is created. SYS has an ID of 0.

SQL> SELECT USERNAME, DEFAULT_TABLESPACE,
  2  TEMPORARY_TABLESPACE, PROFILE,
  3  ACCOUNT_STATUS, EXPIRY_DATE
  4  FROM DBA_USERS
  5  WHERE USERNAME = 'JOHN';

USERNAME DEFAULT_TABLESPACE TEMPORARY_TABLESPACE PROFILE         ACCOUNT_ST EXPIRY_DA
-------- ------------------ -------------------- --------------- ---------- ---------
JOHN     USERS              TEMP                 ACCOUNTING_USER OPEN       22-OCT-00
The view ALL_USERS shows the username and creation date.

SQL> SELECT * FROM ALL_USERS
  2  WHERE USERNAME LIKE 'SYS%';
USERNAME    USER_ID CREATED
-------- ---------- ---------
SYS               0 13-JUL-00
SYSTEM            5 13-JUL-00

The views DBA_TS_QUOTAS and USER_TS_QUOTAS list the tablespace quotas assigned to the user. A value of -1 indicates an unlimited quota.

SQL> SELECT TABLESPACE_NAME, BYTES, MAX_BYTES, BLOCKS,
  2  MAX_BLOCKS
  3  FROM DBA_TS_QUOTAS
  4  WHERE USERNAME = 'JOHN';

TABLESPACE      BYTES  MAX_BYTES     BLOCKS MAX_BLOCKS
---------- ---------- ---------- ---------- ----------
INDX                0    1048576          0        128
USERS               0         -1          0         -1

The V$SESSION view shows the users currently connected to the database, and V$SESSTAT shows the session statistics. You can find the descriptions of the statistic codes from V$SESSTAT in V$STATNAME.

SQL> SELECT USERNAME, OSUSER, MACHINE, PROGRAM
  2  FROM V$SESSION
  3  WHERE USERNAME = 'JOHN';

USERNAME OSUSER  MACHINE       PROGRAM
-------- ------- ------------- -----------------
JOHN     KJOHN   USA.CO.AU     SQLPLUSW.EXE

SQL> SELECT A.NAME, B.VALUE
  2  FROM V$STATNAME A, V$SESSTAT B, V$SESSION C
  3  WHERE A.STATISTIC# = B.STATISTIC#
  4  AND B.SID = C.SID
  5  AND C.USERNAME = 'JOHN'
  6  AND A.NAME LIKE '%session%';

NAME                                          VALUE
---------------------------------------- ----------
session logical reads                           729
session stored procedure space                    0
CPU used by this session                         12
session connect time                              0
session uga memory                            98368
session uga memory max                       159804
session pga memory                           296416
session pga memory max                       296416
session cursor cache hits                         0
session cursor cache count                        0

The current username connected to the database is available in the system variable USER. Using SQL*Plus, you can run SHOW USER to get the username.

SQL> SHOW USER
USER is "JOHN"
SQL>
Copying User Accounts from One Database to Another

The password provided with the IDENTIFIED BY clause is encrypted and stored in the database. You can also provide the encrypted password as the user's password by using the IDENTIFIED BY VALUES clause. Enclose such entries in single quotes. This method is useful for copying user accounts from one database to another without compromising their passwords. The PASSWORD column in the DBA_USERS view provides the encrypted password. As an example, let's create a user called BENJAMIN with the password NOICE.

CREATE USER benjamin IDENTIFIED BY noice
DEFAULT TABLESPACE users;

You can query the encrypted password from the data dictionary.
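The query and its result would look something like this (the encrypted value is the one reused in the statements that follow):

SQL> SELECT USERNAME, PASSWORD FROM DBA_USERS
  2  WHERE USERNAME = 'BENJAMIN';

USERNAME   PASSWORD
---------- ------------------
BENJAMIN   C2851F759114DCAC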
You can drop the user from the database and create the user again by supplying the encrypted password. If you execute the same SQL statement in any Oracle database, the password assigned to user BENJAMIN will be NOICE.

DROP USER benjamin;
CREATE USER benjamin IDENTIFIED BY VALUES 'C2851F759114DCAC' DEFAULT TABLESPACE users; Let’s now grant the CREATE SESSION privilege to Benjamin and verify that his password is really NOICE. SQL> GRANT CREATE SESSION TO benjamin; Grant succeeded. SQL> SQL> CONNECT benjamin/noice Connected. SQL> Now let’s generate a script from the data dictionary to create all user accounts with their existing passwords. You can change the CREATE USER to ALTER USER for synchronizing passwords between databases (for example, your test and production databases). SET ECHO OFF FEEDBACK OFF PAGES 0 LINES 200 TRIMS ON SPOOL createusers.sql SELECT ‘CREATE USER ‘|| username || ‘ IDENTIFIED BY VALUES ‘’’ || password || ‘’’;’
SPOOL OFF
SET FEEDBACK ON PAGES 24 LINES 80

The generated script is saved in the file createusers.sql, whose contents will be similar to the following.

CREATE USER OE IDENTIFIED BY VALUES 'D1A2DFC623FDA40A';
CREATE USER SH IDENTIFIED BY VALUES '54B253CBBAAA8C48';
CREATE USER BENJAMIN IDENTIFIED BY VALUES 'C2851F759114DCAC';
Managing Privileges
Oracle Objective
Identify system and object privileges
In the Oracle database, privileges control access to the data and restrict the actions users can perform. Through proper privileges, users can create, drop, or modify objects in their own schema or in another user’s schema. Privileges also determine the data to which a user should have access. You can grant privileges to a user by means of two methods:
You can assign privileges directly to the user.
You can assign privileges to a role, and then assign the role to the user.
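Here is a minimal sketch of the two methods, using a hypothetical role ACCT_ROLE, table ACCOUNTS, and user JULIE (the names are illustrative only):

-- Method 1: grant the privilege directly to the user
GRANT CREATE SESSION TO julie;

-- Method 2: grant privileges to a role, then grant the role to the user
CREATE ROLE acct_role;
GRANT SELECT ON accounts TO acct_role;
GRANT acct_role TO julie;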
A role is a named set of privileges, which eases the management of privileges. For example, if you have 10 users needing access to the data in the accounting tables, you can grant the required privileges to a role and grant the role to the 10 users. There are two types of privileges:

Object privileges  Object privileges are granted on schema objects that belong to a different schema. The privilege can be on data (to
read, modify, delete, add, or reference), on a program (to execute), or on an object (to change the structure).

System privileges  System privileges provide the right to perform a specific action on any schema in the database. System privileges do not specify an object, but are granted at the database level. Certain system privileges are very powerful and should be granted only to trusted users. System privileges and object privileges can be granted to a role.

PUBLIC is a user group defined in the database; it is not a database user or a role. Every user in the database belongs to this group. Therefore, if you grant privileges to PUBLIC, they are available to all users of the database.
A user and a role cannot have the same name.
Object Privileges

Object privileges are granted on a specific object. The owner of the object has all privileges on the object. The owner can grant privileges on that object to any other users of the database. The owner can also authorize another user in the database to grant privileges on the object to other users. For example, user JOHN owns a table named CUSTOMER and grants read and update privileges to JAMES. (To specify multiple privileges, separate them with a comma.)

SQL> GRANT SELECT, UPDATE ON CUSTOMER TO JAMES;

JAMES cannot insert into or delete from CUSTOMER; JAMES can only query and update rows in the CUSTOMER table. JAMES cannot grant the privilege to another user in the database, because JAMES is not authorized by JOHN to do so. If the privilege is granted WITH GRANT OPTION, JAMES can grant the privilege to others.

SQL> GRANT SELECT, UPDATE ON CUSTOMER
  2  TO JAMES WITH GRANT OPTION;

The INSERT, UPDATE, or REFERENCES privileges can also be granted on columns. For example:

SQL> GRANT INSERT (CUSTOMER_ID) ON CUSTOMER TO JAMES;
The following are the object privileges that can be granted to users of the database:

SELECT  Grants read (query) access to the data in a table, view, sequence, or materialized view.

UPDATE  Grants update (modify) access to the data in a table, column, view, or materialized view.

DELETE  Grants delete (remove) access to the data in a table, view, or materialized view.

INSERT  Grants insert (add) access to a table, column, view, or materialized view.

EXECUTE  Grants execute (run) privilege on a PL/SQL stored object, such as a procedure, package, or function.

READ  Grants read access on a directory.

INDEX  Grants index creation privilege on a table.

REFERENCES  Grants reference access to a table or columns to create foreign keys that can reference the table.

ALTER  Grants access to modify the structure of a table or sequence.

ON COMMIT REFRESH  Grants the privilege to create a refresh-on-commit materialized view on the specified table.

QUERY REWRITE  Grants the privilege to create a materialized view for query rewrite using the specified table.

WRITE  Allows the external table agent to write a log file or a bad file to the directory. This privilege is associated only with external tables.

UNDER  Grants the privilege to create a sub-view under a view.

The following are some points related to object privileges that you need to remember:
Object privileges can be granted to a user, a role, or PUBLIC.
If a view refers to tables or views from another user, you must have the privilege WITH GRANT OPTION on the underlying tables of the view to
grant any privilege on the view to another user. For example, JOHN owns a view, which references a table from JAMES. To grant the SELECT privilege on the view to another user, JOHN should have received the SELECT privilege on the table WITH GRANT OPTION.
Any object privilege received on a table provides the grantee the privilege to lock the table.
The SELECT privilege cannot be specified on columns; to grant column-level SELECT privileges, create a view with the required columns and grant SELECT on the view (see the sketch following this list).
You can specify ALL or ALL PRIVILEGES to grant all available privileges on an object (for example, GRANT ALL ON CUSTOMER TO JAMES).
Even if you have the DBA privilege, to grant privileges on objects owned by another user you must have been granted the appropriate privilege on the object WITH GRANT OPTION.
Multiple privileges can be granted to multiple users and/or roles in one statement. For example, GRANT INSERT, UPDATE, SELECT ON CUSTOMER TO ADMIN_ROLE, JULIE, SCOTT;
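Here is a minimal sketch of the view workaround for column-level SELECT mentioned above (the view name is hypothetical):

-- Expose only the columns JAMES should see, then grant on the view
CREATE VIEW customer_public AS
  SELECT customer_id, name FROM customer;
GRANT SELECT ON customer_public TO james;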
System Privileges System privileges are the privileges that enable the user to perform an action; they are not specified on any particular object. Like object privileges, system privileges also can be granted to a user, a role, or PUBLIC. There are many system privileges in Oracle; Table 9.1 summarizes the privileges used to manage objects in the database. The CREATE, ALTER, and DROP privileges provide the ability to create, modify, and drop the object specified in the user’s schema. When a privilege is specified with ANY, the user is authorized to perform the action on any schema in the database. Table 9.2 shows the types of privileges that are associated with certain types of objects. For example, the SELECT ANY TABLE privilege gives the user the ability to query all tables or views in the database, regardless of who owns them; the SELECT ANY SEQUENCE privilege gives the user the ability to select from all sequences in the database.
TABLE 9.2  Additional System Privileges (continued)

If you have            You'll be able to

MANAGE TABLESPACE  Perform tablespace management operations such as taking tablespaces offline or online and beginning or ending backup.

UNLIMITED TABLESPACE  Create objects in any tablespace; space in the database is not restricted.

ADMINISTER DATABASE TRIGGER  Create a trigger on the database (you still need the CREATE TRIGGER or CREATE ANY TRIGGER privilege).

BECOME USER  Become another user while doing a full import.

ANALYZE ANY  Use the ANALYZE command on any table, index, or cluster in any schema in the database.

AUDIT ANY  Audit any object or schema in the database.

COMMENT ANY TABLE  Create comments on any table in the database.

GRANT ANY PRIVILEGE  Grant any system privilege.

GRANT ANY ROLE  Grant any role.

ON COMMIT REFRESH  Create a refresh-on-commit materialized view on any table in the database.

UNDER ANY VIEW  Create sub-views under object views.

SYSOPER  Start up and shut down the database; mount, open, or back up the database; use the ARCHIVELOG and RECOVER commands; use the RESTRICTED SESSION privilege.

SYSDBA  Perform all SYSOPER actions plus create or alter a database.
Privileges with the ANY parameter are powerful and should be granted to responsible users. Privileges with the ANY parameter provide access to all such objects in the database, including SYS-owned dictionary objects. For example, if you give a user the ALTER ANY TABLE privilege, that user can use
the privilege on a data dictionary table. To protect the dictionary, Oracle provides an initialization parameter, O7_DICTIONARY_ACCESSIBILITY, that controls access to the data dictionary. If this parameter is set to TRUE (the Oracle7 behavior; the default is FALSE), a user with an ANY privilege can exercise that privilege on the SYS-owned dictionary objects. If this parameter is set to FALSE, the ANY privileges do not extend to the dictionary. For example, a user with SELECT ANY TABLE can query the DBA_ views, but when O7_DICTIONARY_ACCESSIBILITY is set to FALSE, the user cannot query the dictionary views. You can, however, grant the user specific access to the dictionary views (via object privileges); a sketch of checking this parameter follows the list below. When we discuss roles later in this chapter, you'll learn how to provide query access to the dictionary. Here are some points to remember about system privileges:
To connect to the database, you need the CREATE SESSION privilege.
To truncate a table that belongs to another schema, you need the DROP ANY TABLE privilege.
The CREATE ANY PROCEDURE (or EXECUTE ANY PROCEDURE) privilege allows the user to create, replace, or drop (or execute) procedures, packages, and functions; this includes Java classes.
The CREATE TABLE privilege gives you the ability to create, alter, drop, and query tables in a schema.
SELECT, INSERT, UPDATE, and DELETE are object privileges, but SELECT ANY, INSERT ANY, UPDATE ANY, and DELETE ANY are system privileges (in other words, they do not apply to a particular object).
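As promised above, here is a minimal sketch of checking and setting O7_DICTIONARY_ACCESSIBILITY. It is a static parameter, so the second statement assumes you are using a server parameter file; the change takes effect at the next instance startup:

SQL> SHOW PARAMETER O7_DICTIONARY_ACCESSIBILITY

-- Assuming an spfile; takes effect at the next startup
ALTER SYSTEM SET O7_DICTIONARY_ACCESSIBILITY = FALSE SCOPE = SPFILE;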
For example, to grant the CREATE ANY TABLE system privilege to JOHN, use the following:

SQL> GRANT CREATE ANY TABLE TO JOHN;

If John must be able to grant this privilege to others, he should be granted the privilege with the WITH ADMIN OPTION clause (or should have the GRANT ANY PRIVILEGE privilege).

SQL> GRANT CREATE ANY TABLE TO JOHN WITH ADMIN OPTION;
Revoking Privileges

You can revoke a user's object privileges and system privileges by using the REVOKE statement. You can revoke a privilege if you have granted it to the user or if you have been granted that privilege with the WITH ADMIN OPTION (for system privileges) or the WITH GRANT OPTION (for object privileges) clause. Here are some examples of revoking privileges.

To revoke the UPDATE privilege that JOHN granted to JAMES on JOHN's CUSTOMER table, use the following:

SQL> REVOKE UPDATE ON CUSTOMER FROM JAMES;

To revoke the SELECT ANY TABLE and CREATE TRIGGER privileges granted to JULIE, use the following:

SQL> REVOKE SELECT ANY TABLE, CREATE TRIGGER
  2  FROM JULIE;

To revoke the REFERENCES privilege, specify the CASCADE CONSTRAINTS clause, which drops any referential integrity constraints created using the privilege. You must use this clause if any such constraints exist.

SQL> REVOKE REFERENCES ON CUSTOMER
  2  FROM JAMES CASCADE CONSTRAINTS;

The following statement, executed by JAMES, revokes all the privileges granted by JAMES on the STATE table to JULIE.

SQL> REVOKE ALL ON STATE FROM JULIE;

Keep the following in mind when revoking privileges:
If multiple users (or administrators) have granted an object privilege to a user, revoking the privilege by one administrator will not prevent the user from performing the action, because the privileges granted by the other administrators are still valid.
To revoke the WITH ADMIN OPTION or WITH GRANT OPTION, you must revoke the privilege and re-grant the privilege without the clause.
If a user has used their system privileges to create or modify an object, and the user's privilege is subsequently revoked, no change is made to the objects that the user has already created or modified; the user simply can no longer create or modify such objects.
If a PL/SQL program or view is created based on an object privilege (or a DML system privilege such as SELECT ANY, UPDATE ANY, and so on), revoking the privilege will invalidate the object.
If user A is granted a system privilege WITH ADMIN OPTION, and grants the privilege to user B, user B’s privilege still remains when user A’s privilege is revoked.
If user A is granted an object privilege WITH GRANT OPTION and grants the privilege to user B, user B's privilege is automatically revoked when user A's privilege is revoked, and the objects that use the privileges under user A and user B are invalidated. The following sketch contrasts this with the system privilege behavior.
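A minimal sketch of the cascade difference described in the last two points, using hypothetical users A and B:

-- System privilege: revoking from A does NOT cascade to B
GRANT SELECT ANY TABLE TO a WITH ADMIN OPTION;
-- (user A then grants SELECT ANY TABLE to B)
REVOKE SELECT ANY TABLE FROM a;    -- B keeps the privilege

-- Object privilege: revoking from A DOES cascade to B
GRANT SELECT ON customer TO a WITH GRANT OPTION;
-- (user A then grants SELECT ON customer to B)
REVOKE SELECT ON customer FROM a;  -- B loses the privilege too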
Querying Privilege Information

You can query privilege information from the data dictionary by using various views. Table 9.3 lists and describes the views that provide information related to privileges.

TABLE 9.3  Privilege Information

View Name            Description

ALL_TAB_PRIVS, DBA_TAB_PRIVS, USER_TAB_PRIVS  Lists the object privileges. ALL_TAB_PRIVS shows only the privileges granted to the user and to PUBLIC.

ALL_TAB_PRIVS_MADE, USER_TAB_PRIVS_MADE  Lists the object grants made by the current user or grants made on the objects owned by the current user.

ALL_TAB_PRIVS_RECD, USER_TAB_PRIVS_RECD  Lists the object grants received by the current user or PUBLIC.

ALL_COL_PRIVS_MADE, USER_COL_PRIVS_MADE  Lists column privileges made by the current user.

ALL_COL_PRIVS_RECD, USER_COL_PRIVS_RECD  Lists column privileges received by the current user.

DBA_SYS_PRIVS, USER_SYS_PRIVS  Lists system privilege information.

SESSION_PRIVS  Lists the system privileges available for the current session.
Here are some sample queries using the dictionary views to query privilege information. To list information about privileges granted on the table CUSTOMER, use the following:

SQL> SELECT * FROM DBA_TAB_PRIVS
  2  WHERE TABLE_NAME = 'CUSTOMER';

GRANTEE
---------------
SCOTT
JAMES
JAMES
ACCOUNTS_MANAGE
Fixing an Insufficient Privilege Error after Querying the Dictionary

User Benjamin gets an error message when he tries to do the following update. As the DBA, you need to understand the roles and privileges involved and grant Benjamin the appropriate privilege.

SQL> UPDATE COUNTRIES
  2  SET    COUNTRY_NAME = 'USA'
  3  WHERE  COUNTRY_ID = 'US';
UPDATE COUNTRIES
       *
ERROR at line 1:
ORA-01031: insufficient privileges

SQL>

Since Benjamin is not qualifying the table name with a username, a synonym must exist. Let's find the synonym definition and the owner of the COUNTRIES table.

SQL> SELECT * FROM DBA_SYNONYMS
  2  WHERE  SYNONYM_NAME = 'COUNTRIES';

Two synonyms exist: one is a private synonym owned by OE, and the other is a public synonym. Benjamin must be using the public synonym. Both synonyms refer to the COUNTRIES table owned by the HR schema. Let's find out which privileges on the HR.COUNTRIES table are assigned to Benjamin.

SQL> SELECT PRIVILEGE
  2  FROM   DBA_TAB_PRIVS
  3  WHERE  GRANTEE = 'BENJAMIN'
  4  AND    TABLE_NAME = 'COUNTRIES'
  5  AND    OWNER = 'HR';

no rows selected

SQL>

No privileges on table HR.COUNTRIES are assigned to Benjamin. Let's find out whether the privileges on this table are granted to any role.

SQL> SELECT GRANTEE, PRIVILEGE
  2  FROM   DBA_TAB_PRIVS
  3  WHERE  TABLE_NAME = 'COUNTRIES'
  4  AND    OWNER = 'HR';

The HR_SELECT role has query privileges, and the HR_UPDATE role has all privileges on this table. Now query to find out which roles are assigned to Benjamin.

SQL> SELECT GRANTED_ROLE
  2  FROM   DBA_ROLE_PRIVS
  3  WHERE  GRANTEE = 'BENJAMIN';

Benjamin has two roles assigned, CONNECT and HR_SELECT. To fix Benjamin's problem, you can either grant him the HR_UPDATE role or grant update privileges on the HR.COUNTRIES table.

GRANT HR_UPDATE TO BENJAMIN;

Or:

CONNECT HR/HR
GRANT UPDATE ON COUNTRIES TO BENJAMIN;
Managing Roles
Oracle Objective
Create and modify roles
A role is a named group of privileges that you can use to ease the administration of privileges. For example, if your accounting department has 30 users who all need similar access to the tables in the accounts receivable application, you can create a role and grant the appropriate system and object privileges to the role. You can then grant the role to each user of the accounting department, instead of granting each object and system privilege to individual users.

You create a role with the CREATE ROLE command. No user owns the role; it is owned by the database. When a role is created, no privileges are associated with it; you must grant the appropriate privileges to the role. For example, to create a role named ACCTS_RECV and grant certain privileges to the role, use the following:

CREATE ROLE ACCTS_RECV;
GRANT SELECT ON GENERAL_LEDGER TO ACCTS_RECV;
GRANT INSERT, UPDATE ON JOURNAL_ENTRY TO ACCTS_RECV;

Similar to users, roles can also be authenticated. The default is NOT IDENTIFIED, which means no authorization is required to enable or disable the role. The following authorization methods are available:

Database  Using a password associated with the role, the database authorizes the role. Whenever such a role is enabled, the user is prompted for a password if the role is not one of the default roles for the user. In the following example, a role ACCOUNTS_MANAGER is created with a password.

SQL> CREATE ROLE ACCOUNTS_MANAGER IDENTIFIED BY ACCMGR;

Operating system  The role is authorized by the operating system. This is useful when the operating system can associate its privileges with the application privileges, and information about each user is configured in operating system files. To enable operating system role authorization, set the parameter OS_ROLES to TRUE. The following example creates a role authorized by the operating system.

SQL> CREATE ROLE APPLICATION_USER IDENTIFIED EXTERNALLY;

You can change the role's password or authentication method by using the ALTER ROLE command. You cannot rename a role. For example:

SQL> ALTER ROLE ACCOUNTS_MANAGER IDENTIFIED BY MANAGER;

To drop a role, use the DROP ROLE command. Oracle will let you drop a role even if it is granted to users or other roles. When you drop a role, it is immediately removed from the users' role lists.

SQL> DROP ROLE ACCOUNTS_MANAGER;
Using Predefined Roles
Oracle Objective
Use predefined roles
When you create the database, Oracle creates six predefined roles. These roles are defined in the sql.bsq script, which is executed when you run the
CREATE DATABASE command. The following roles are predefined:

CONNECT  Privilege to connect to the database; to create a cluster, a database link, a sequence, a synonym, a table, and a view; and to alter a session.

RESOURCE  Privilege to create a cluster, a table, and a sequence, and to create programmatic objects such as procedures, functions, packages, indextypes, types, triggers, and operators.

DBA  All system privileges with the ADMIN option, so the system privileges can be granted to other users of the database or to roles.

SELECT_CATALOG_ROLE  Ability to query the dictionary views and tables.

EXECUTE_CATALOG_ROLE  Privilege to execute the dictionary packages (SYS-owned packages).

DELETE_CATALOG_ROLE  Ability to drop or re-create the dictionary packages.

Also, when you run the catproc.sql script as part of the database creation, the script executes catexp.sql, which creates two more roles:

EXP_FULL_DATABASE  Ability to make full and incremental exports of the database using the Export utility.

IMP_FULL_DATABASE  Ability to perform full database imports using the Import utility. This is a very powerful role.
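Earlier in this chapter we promised a way to give a user query access to the dictionary when O7_DICTIONARY_ACCESSIBILITY is set to FALSE; granting the predefined SELECT_CATALOG_ROLE is the usual approach. A minimal example (the user SCOTT is illustrative):

GRANT SELECT_CATALOG_ROLE TO scott;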
Removing Roles
Oracle Objective
Remove roles
You can remove roles from the database using the DROP ROLE statement. When you drop a role, all privileges that users had through the role are lost. If they used the role to create objects in the database or to manipulate data, those objects and changes remain in the database. To drop a role named HR_UPDATE, use the following statement:

DROP ROLE HR_UPDATE;
To drop a role, you must have been granted the role with the ADMIN OPTION, or you must have the DROP ANY ROLE system privilege.
Enabling and Disabling Roles
Oracle Objective
Control availability of roles
If a role is not the default role for a user, it is not enabled when the user connects to the database. You use the ALTER USER command to set the default roles for a user. You can use the DEFAULT ROLE clause with the ALTER USER command in four ways, as illustrated in the following examples.

To specify the named roles CONNECT and ACCOUNTS_MANAGER as default roles, use the following:

ALTER USER JOHN DEFAULT ROLE CONNECT, ACCOUNTS_MANAGER;

To specify all roles granted to the user as the default, use the following:

ALTER USER JOHN DEFAULT ROLE ALL;

To specify all roles except certain roles as the default, use the following:

ALTER USER JOHN DEFAULT ROLE ALL EXCEPT RESOURCE, ACCOUNTS_ADMIN;

To specify no roles as the default, use the following:

ALTER USER JOHN DEFAULT ROLE NONE;

You can specify only roles granted to the user as default roles. The DEFAULT ROLE clause is not available in the CREATE USER command. Default roles are enabled when the user connects to the database and do not require a password.

You enable or disable roles using the SET ROLE command. You specify the maximum number of roles that can be enabled in the initialization parameter MAX_ENABLED_ROLES (the default is 20). You can enable or disable only roles granted to the user. If a role is defined with a password, you must supply the password when you enable the role. For example:

SET ROLE ACCOUNTS_ADMIN IDENTIFIED BY MANAGER;
To enable all roles, specify the following:

SET ROLE ALL;

To enable all roles except the roles specified, use the following:

SET ROLE ALL EXCEPT RESOURCE, ACCOUNTS_USER;

To disable all roles, including the default roles, use the following:

SET ROLE NONE;
How Can You Establish a Security Policy Using Default Roles?

You have an application that was developed in house, and you want to control the application's security through roles. Your users are smart and use tools such as Microsoft Access and SQL*Plus to access the database. You do not want any updates to the database outside the application, but at the same time you do not want to limit the users' ability to use these other tools. The application tracks data changes, so you want all users to use their own ID to connect to the application and make changes. The application uses the database privileges of the user to make changes.

Solution: You need at least two roles defined in the database: one to query the application tables and another to update data. The role that updates data needs to be password protected. Let's create two roles.

CREATE ROLE APP_QUERY;
CREATE ROLE APP_UPDATE IDENTIFIED BY FNDMYPSWD;

Now, grant SELECT privileges on tables to the APP_QUERY role, and grant INSERT, UPDATE, and DELETE privileges on tables to the APP_UPDATE role. Grant the necessary roles to users.

GRANT APP_QUERY, APP_UPDATE TO CHRIS;

Next, change the user's default role to only APP_QUERY.

ALTER USER CHRIS DEFAULT ROLE APP_QUERY;
If you have other roles that you want to establish as the default for the user, you can use the following:

ALTER USER CHRIS DEFAULT ROLE ALL EXCEPT APP_UPDATE;

You can query the DBA_ROLE_PRIVS view to see default and non-default roles for the user. The DEFAULT_ROLE column displays NO for non-default roles. In the application, you must set the appropriate role that gives the privilege to manipulate the data.

SET ROLE APP_UPDATE IDENTIFIED BY FNDMYPSWD;

If you need to set more than one role, you can specify all the required roles in the SET command.

SET ROLE APP_UPDATE IDENTIFIED BY FNDMYPSWD, APP_ADMIN;

You can develop methods to encrypt and decrypt passwords, instead of hard-coding them in the application.
Querying Role Information
Oracle Objective
Display role information from the data dictionary
The data dictionary view DBA_ROLES lists the roles defined in the database. The column PASSWORD specifies the authorization method.

SQL> SELECT * FROM DBA_ROLES;

ROLE
------------------------------
CONNECT
RESOURCE
DBA
SELECT_CATALOG_ROLE
EXECUTE_CATALOG_ROLE
DELETE_CATALOG_ROLE
The view SESSION_ROLES lists the roles that are enabled in the current session.

SQL> SELECT * FROM SESSION_ROLES;

ROLE
------------------------------
CONNECT
DBA
SELECT_CATALOG_ROLE
HS_ADMIN_ROLE
EXECUTE_CATALOG_ROLE
DELETE_CATALOG_ROLE
EXP_FULL_DATABASE
IMP_FULL_DATABASE
JAVA_ADMIN

The view DBA_ROLE_PRIVS (or USER_ROLE_PRIVS) lists all the roles granted to users and roles.

SQL> SELECT * FROM DBA_ROLE_PRIVS
  2  WHERE GRANTEE = 'JOHN';

GRANTEE
-----------
JOHN
JOHN
The view ROLE_ROLE_PRIVS lists the roles granted to roles, ROLE_SYS_PRIVS lists the system privileges granted to roles, and ROLE_TAB_PRIVS shows information on the object privileges granted to roles.

SQL> SELECT * FROM ROLE_ROLE_PRIVS
  2  WHERE ROLE = 'DBA';

ROLE
-------
DBA
DBA
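For example, a quick way to see which system privileges a role carries (here the predefined CONNECT role) is:

SQL> SELECT PRIVILEGE FROM ROLE_SYS_PRIVS
  2  WHERE ROLE = 'CONNECT';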
Auditing lets you monitor suspicious database activity and database usage. When you create the database, Oracle creates the SYS.AUD$ table, known as the audit trail, which stores the audited records. To enable auditing, set the initialization parameter AUDIT_TRAIL to TRUE or DB. When this parameter is set to OS, Oracle writes the audited records to an operating system file instead of inserting them into the SYS.AUD$ table. You use the AUDIT command to specify the audit actions. Oracle has three types of auditing capabilities:

Statement auditing  Audits SQL statements. (Example: AUDIT SELECT BY SCOTT audits all SELECT statements performed by SCOTT.)

Privilege auditing  Audits privileges. (Example: AUDIT CREATE TRIGGER audits all users who exercise their CREATE TRIGGER privilege.)

Object auditing  Audits the use of a specific object. (Example: AUDIT SELECT ON JOHN.CUSTOMER monitors the SELECT statements performed on the CUSTOMER table.)

You can restrict the auditing scope by specifying the user list in the BY clause. You can use the WHENEVER SUCCESSFUL clause to specify that only successful statements are to be audited; the WHENEVER NOT SUCCESSFUL clause limits auditing to failed statements. You can also specify BY SESSION or BY ACCESS; BY SESSION is the default. BY SESSION specifies that one audit record is inserted per session, regardless of the number of times the statement is executed. BY ACCESS specifies that one audit record is inserted each time the statement is executed. Following are some examples of auditing.

To audit connection to and disconnection from the database, use the following:

AUDIT SESSION;

To audit only successful logins, use the following:

AUDIT SESSION WHENEVER SUCCESSFUL;

To audit only failed logins, use the following:

AUDIT SESSION WHENEVER NOT SUCCESSFUL;

To audit successful logins of specific users, use the following:

AUDIT SESSION BY JOHN, ALEX WHENEVER SUCCESSFUL;

To audit the successful updates and deletes on the CUSTOMER table, use the following:

AUDIT UPDATE, DELETE ON JOHN.CUSTOMER
BY ACCESS WHENEVER SUCCESSFUL;

To turn off auditing, use the NOAUDIT command. You can specify all options available in the AUDIT statement to turn off auditing except BY SESSION and BY ACCESS; when you turn off auditing, Oracle turns off the action regardless of its BY SESSION or BY ACCESS specification. To turn off the object auditing enabled on the CUSTOMER table, use the following:

NOAUDIT UPDATE, DELETE ON JOHN.CUSTOMER;
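Once records are being generated, you can review them through the DBA_AUDIT_TRAIL view (built over SYS.AUD$). A minimal sketch:

SELECT USERNAME, ACTION_NAME, TIMESTAMP
FROM   DBA_AUDIT_TRAIL
WHERE  USERNAME = 'JOHN';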
Oracle always monitors certain database activities and writes them to operating system files, even if auditing is disabled. Oracle writes audit records when the instance starts up and shuts down and when a user connects to the database with administrator privileges.
Using Globalization Support
Oracle Objective
Choose database character set and national character set for a database
You use Globalization Support, known as National Language Support (NLS) in previous releases of Oracle, to store and retrieve data in a native language and format. Oracle supports a wide variety of languages and character sets. Globalization Support lets you communicate with end users in their native language, using their familiar date formats, number formats, and sorting sequences. Oracle uses Unicode (a worldwide encoding standard for computer usage) to support these languages. You define the database character set when you create the database, using the CHARACTER SET clause of the CREATE DATABASE command. The database character set is used to store data in CHAR, VARCHAR2, CLOB, and LONG columns; to store table names, column names, and so on in the dictionary; and to store
PL/SQL variables in memory, and so on. If you do not specify a character set when you create the database, Oracle uses the US7ASCII character set. US7ASCII is a seven-bit ASCII character set that uses a single byte to store a character, and it can represent 128 characters (2^7). Another widely used single-byte character set is WE8ISO8859P1 (the Western European eight-bit ISO [International Organization for Standardization] standard 8859 Part 1); eight-bit character sets use eight bits to represent a character and can represent 256 characters (2^8). Oracle also supports multibyte character encoding, which is used to represent languages such as Japanese, Chinese, Hindi, and so on. Multibyte encoding schemes can be fixed-width or variable-width. In a variable-width encoding scheme, certain characters are represented using one byte, while two or more bytes represent other characters.

The options to change the database character set after you create the database are limited. You can change the database character set only if the new character set is a superset of the current character set; that is, all the characters represented in the current character set must be available in the new character set. WE8ISO8859P1 and UTF8, for example, are both supersets of US7ASCII. To change the database character set, use the following:

ALTER DATABASE CHARACTER SET WE8ISO8859P1;
You must be careful when changing the database character set; be sure to back up the database before the change. You cannot roll back this action, and it may result in loss of data or data corruption.
Oracle lets you choose an additional character set for the database that enhances its character-processing capabilities. You specify this second character set when you create the database, using the NATIONAL CHARACTER SET clause. If you do not specify NATIONAL CHARACTER SET, Oracle uses the Unicode character set AL16UTF16. The national character set stores data in NCHAR, NVARCHAR2, and NCLOB data type columns. The national character set can be either AL16UTF16 or UTF8; AL16UTF16 is the default. AL16UTF16 and UTF8 are Unicode character sets. You cannot specify AL16UTF16 as the database character set. When choosing a
multibyte character set for your database, remember that, by default, the VARCHAR2, CHAR, NVARCHAR2, and NCHAR data types specify the maximum length in bytes, not in characters. You can change this default behavior by setting NLS_LENGTH_SEMANTICS=CHAR or by providing the semantic information along with the column definition (as in VARCHAR2(20 CHAR)). If the character set uses two bytes per character, VARCHAR2(10) can hold a maximum of five characters.

The client machine can specify a character set different from the database character set by using local environment variables. The database character set should be a superset of the client character set. Oracle converts between the character sets automatically, but there is some overhead associated with this conversion. Certain character sets can support multiple languages; for example, the character set WE8ISO8859P1 can support all western European languages, such as English, Finnish, Italian, Swedish, Danish, French, German, and Spanish.
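Here is a minimal sketch of the two ways to get character semantics (the table and column names are illustrative only):

-- Session-wide default for new column definitions
ALTER SESSION SET NLS_LENGTH_SEMANTICS = CHAR;
CREATE TABLE translations (msg VARCHAR2(20));   -- 20 characters

-- Or per column, overriding the session default
CREATE TABLE notes (msg VARCHAR2(20 BYTE));     -- explicitly 20 bytes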
The Unicode Character Set

Unicode is a universal character encoding scheme that allows you to store information from any major language using a single character set. Unicode provides a unique code value for every character, regardless of the platform, program, or language. Unicode has both 16-bit and 8-bit encodings.

UTF-16 is the 16-bit encoding of Unicode. It is a fixed-width multibyte encoding in which the character codes 0x00 through 0x7F have the same meaning as they do in ASCII. One Unicode character is 2 bytes in this encoding; characters from both European and Asian scripts are represented in 2 bytes. AL16UTF16 is a UTF-16-encoded character set.

UTF-8 is the 8-bit encoding of Unicode. It is a variable-width multibyte encoding in which the character codes 0x00 through 0x7F have the same meaning as they do in ASCII. One Unicode character can be 1 byte, 2 bytes, or 3 bytes in this encoding. Characters from the European scripts are represented in either 1 or 2 bytes, and characters from most Asian scripts are represented in 3 bytes. AL32UTF8, UTF8, and UTFE are UTF-8-encoded character sets.

You can specify a UTF-8-encoded character set as a database character set; such a database is known as a Unicode database.
Oracle Objective

Specify the language-dependent behavior using initialization parameters, environment variables, and the ALTER SESSION command
Oracle provides several NLS parameters to customize the database and client workstations to suit the native format. These parameters have a default value based on the database and national character set chosen to create the database. Specifying the NLS parameters can change the default values. You can customize the NLS parameters in the following ways:
By specifying the parameter in the initialization file, which is read at instance startup (Example: NLS_DATE_FORMAT = "YYYY-MM-DD")
By setting the parameter as an environment variable (Example on Unix, using csh: setenv NLS_DATE_FORMAT YYYY-MM-DD; on MS-Windows, use the registry)
By setting the parameter in the Oracle session using the ALTER SESSION command (Example: ALTER SESSION SET NLS_DATE_FORMAT = "YYYY-MM-DD")
By using certain SQL functions (Example: TO_CHAR(SYSDATE, 'YYYY-MM-DD', 'NLS_DATE_LANGUAGE = AMERICAN'))
The parameter specified in SQL functions has the highest priority; the next highest is the parameter specified using ALTER SESSION, then the environment variable, then the initialization parameters, and finally the database default parameters have the lowest priority. You cannot change certain parameters using ALTER SESSION, and you cannot specify certain parameters as environment variables. Parameter specification areas are discussed in the following sections.
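A minimal sketch of the priority order: the format mask passed to a SQL function overrides the session-level setting:

ALTER SESSION SET NLS_DATE_FORMAT = 'DD-MON-YYYY';
SELECT SYSDATE FROM DUAL;                         -- displayed as DD-MON-YYYY
SELECT TO_CHAR(SYSDATE, 'YYYY-MM-DD') FROM DUAL;  -- function mask wins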
Oracle Objective
Use the different types of National Language Support (NLS) parameters
NLS_LENGTH_SEMANTICS  Specified at the session level (using ALTER SESSION) or as an initialization parameter. Defines the character length semantics as byte or character. The default is BYTE. NLS_LENGTH_SEMANTICS does not apply to tables in SYS and SYSTEM; they always use BYTE semantics.

NLS_LANG  Specified only as an environment variable. NLS_LANG has three parts: the language, the territory, and the character set; none of the parts is mandatory. The format is language_territory.characterset. The language specifies the language to be used for displaying Oracle error messages, day names, month names, and so on. The territory specifies the default date format, numeric formats, and monetary formats. The character set specifies the character set to be used by the client machine. For example, in AMERICAN_AMERICA.WE8ISO8859P1, AMERICAN is the language, AMERICA is the territory, and WE8ISO8859P1 is the character set.

NLS_LANGUAGE  Specified at the session level or as an initialization parameter. Sets the language to be used. The session value overrides the NLS_LANG setting. The default values for the NLS_DATE_LANGUAGE and NLS_SORT parameters are derived from NLS_LANGUAGE.

NLS_TERRITORY  Specified at the session level or as an initialization parameter. Sets the territory. The session value overrides the NLS_LANG setting. The default values for parameters such as NLS_CURRENCY, NLS_ISO_CURRENCY, NLS_DATE_FORMAT, and NLS_NUMERIC_CHARACTERS are derived from NLS_TERRITORY.

NLS_DATE_FORMAT  Specified at the session level, as an environment variable, or as an initialization parameter. Sets a default format for date displays.

NLS_DATE_LANGUAGE  Specified at the session level, as an environment variable, or as an initialization parameter. Sets a language explicitly for day and month names in date values.

NLS_TIMESTAMP_FORMAT  Specified at the session level, as an environment variable, or as an initialization parameter. Defines the default timestamp format to use with the TO_CHAR and TO_TIMESTAMP functions.
NLS_TIMESTAMP_TZ_FORMAT  Specified at the session level, as an environment variable, or as an initialization parameter. Defines the default timestamp with time zone format to use with the TO_CHAR and TO_TIMESTAMP_TZ functions.

NLS_CALENDAR  Specified at the session level, as an environment variable, or as an initialization parameter. Sets the calendar Oracle uses.

NLS_NUMERIC_CHARACTERS  Specified at the session level, as an environment variable, or as an initialization parameter. Specifies the decimal character and group separator (for example, in 234,224.99, the comma is the group separator and the period is the decimal character).

NLS_CURRENCY  Specified at the session level, as an environment variable, or as an initialization parameter. Specifies a currency symbol.

NLS_ISO_CURRENCY  Specified at the session level, as an environment variable, or as an initialization parameter. Specifies the ISO currency symbol. For example, when the NLS_ISO_CURRENCY value is AMERICA, the currency symbol for U.S. dollars is $, and the ISO currency symbol is USD.

NLS_DUAL_CURRENCY  Specified at the session level, as an environment variable, or as an initialization parameter. Specifies an alternate currency symbol. Introduced to support the Euro.

NLS_SORT  Specified at the session level, as an environment variable, or as an initialization parameter. Specifies the language to use for sorting; you can specify any valid language. The ORDER BY clause in a SQL statement uses this value for the sort mechanism. For example:

ALTER SESSION SET NLS_SORT = GERMAN;
SELECT * FROM CUSTOMERS ORDER BY NAME;

In this example, the NAME column is sorted using the German linguistic sort mechanism. You can also set the sort language explicitly by using the NLSSORT function, rather than altering the session parameter. The following example demonstrates this method:

SELECT * FROM CUSTOMERS
ORDER BY NLSSORT(NAME, 'NLS_SORT = GERMAN');
NCHAR Considerations When Migrating to Oracle9i

The Oracle8i Server introduced the national character datatypes (NCHAR, NVARCHAR2, NCLOB), which allow for an alternate character set in addition to the original database character set. The NCHAR datatypes supported a number of special, fixed-width Asian character sets that were introduced to provide higher performance when processing Asian character data. In Oracle9i, the NCHAR datatypes are limited to the Unicode character set encodings (UTF8 and AL16UTF16) only. Any other Oracle8i Server character sets that were available under the NCHAR datatypes, including the Asian character sets (for example, JA16SJISFIXED), are not supported. So, if your Oracle8i database has NCHAR, NVARCHAR2, or NCLOB columns, you have to use the Export/Import utilities to convert them to a Unicode character set when migrating to Oracle9i. The process is as follows:
1. Export all tables containing NCHAR columns from Oracle8i.
2. Drop the tables (or the NCHAR columns).
3. Upgrade the database to Oracle9i.
4. Import the tables (or columns) into Oracle9i.
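A minimal sketch of steps 1 and 4 using the Export and Import command-line utilities (the connect string, dump file name, and table name are illustrative only):

exp system/manager FILE=nchar_tabs.dmp TABLES=(hr.messages)
# drop the tables and upgrade the database to Oracle9i, then:
imp system/manager FILE=nchar_tabs.dmp TABLES=(hr.messages)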
Using NLS to Change Application Behavior
Oracle Objective
Explain the influence on language-dependent application behavior
Globalization Support enables applications to use local symbols and semantics regardless of where the database resides. The various NLS parameters define locale settings specific to a country and language. Oracle9i supports many languages; users of an application need not know another culture, language, or set of conventions to use it. You can set the following NLS parameters so that everything appears in the user's local format.
Language  You can store, retrieve, and manipulate data in local native languages. Oracle9i supports all major languages and subsets of languages using the Unicode character set. Use the NLS_LANG and NLS_LANGUAGE parameters.

Geographical location  You can set geographic-specific information, such as the currency symbol, date formats, and numeric conventions, local to the user. Use NLS_TERRITORY.

Date and time formats  You can set date and time formats specific to the location or convenience. Use NLS_DATE_FORMAT, NLS_DATE_LANGUAGE, NLS_TIMESTAMP_FORMAT, and NLS_TIMESTAMP_TZ_FORMAT.

Currency and numeric formats  You can display the currency symbol local to the application. Some countries use the comma (,) as a decimal separator, and some use the period (.). You can specify such settings using NLS_CURRENCY, NLS_DUAL_CURRENCY, NLS_ISO_CURRENCY, and NLS_NUMERIC_CHARACTERS.

Calendar and sorting  You can set local calendars and specify linguistic sorting using the NLS parameters NLS_CALENDAR and NLS_SORT.
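For example, a minimal sketch of localizing currency output through the territory setting (in the format mask, L, G, and D stand for the local currency symbol, group separator, and decimal character):

ALTER SESSION SET NLS_TERRITORY = 'GERMANY';
SELECT TO_CHAR(1234.56, 'L9G999D99') FROM DUAL;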
Obtaining NLS Data Dictionary Information
Oracle Objective
Obtain information about Globalization Support usage
You can obtain NLS information from the data dictionary using the following views:

NLS_DATABASE_PARAMETERS  Shows the parameters defined for the database (the database default values)

NLS_INSTANCE_PARAMETERS  Shows the parameters specified in the initialization parameter file

NLS_SESSION_PARAMETERS  Shows the parameters that are in effect in the current session

V$NLS_VALID_VALUES  Shows the allowed values for the language, territory, and character set definitions
The following examples show NLS information from the data dictionary views and examples of changing session NLS values.

SQL> SELECT * FROM NLS_DATABASE_PARAMETERS;

PARAMETER                VALUE
------------------------ ------------------------------
NLS_LANGUAGE             AMERICAN
NLS_TERRITORY            AMERICA
NLS_CURRENCY             $
NLS_ISO_CURRENCY         AMERICA
NLS_NUMERIC_CHARACTERS   .,
NLS_CHARACTERSET         UTF8
NLS_CALENDAR             GREGORIAN
NLS_DATE_FORMAT          DD-MON-YY
NLS_DATE_LANGUAGE        AMERICAN
NLS_SORT                 BINARY
NLS_TIME_FORMAT          HH.MI.SSXFF AM
NLS_TIMESTAMP_FORMAT     DD-MON-YY HH.MI.SSXFF AM
NLS_TIME_TZ_FORMAT       HH.MI.SSXFF AM TZH:TZM
NLS_TIMESTAMP_TZ_FORMAT  DD-MON-YY HH.MI.SSXFF AM TZH:T
NLS_DUAL_CURRENCY        $
NLS_COMP                 BINARY
NLS_NCHAR_CHARACTERSET   US7ASCII
NLS_RDBMS_VERSION        9.0.1.1.1
NLS_LENGTH_SEMANTICS     BYTE
NLS_NCHAR_CONV_EXCP      FALSE

20 rows selected.

SQL> ALTER SESSION SET NLS_DATE_FORMAT = 'DD-MM-YYYY HH24:MI:SS';

Session altered.

SQL> ALTER SESSION SET NLS_DATE_LANGUAGE = 'GERMAN';

Session altered.

The same query on a database created with the default national character set returns the following:

SQL> SELECT * FROM NLS_DATABASE_PARAMETERS;

PARAMETER                VALUE
------------------------ -------------------------------
NLS_LANGUAGE             AMERICAN
NLS_TERRITORY            AMERICA
NLS_CURRENCY             $
NLS_ISO_CURRENCY         AMERICA
NLS_NUMERIC_CHARACTERS   .,
NLS_CHARACTERSET         UTF8
NLS_CALENDAR             GREGORIAN
NLS_DATE_FORMAT          DD-MON-RR
NLS_DATE_LANGUAGE        AMERICAN
NLS_SORT                 BINARY
NLS_TIME_FORMAT          HH.MI.SSXFF AM
NLS_TIMESTAMP_FORMAT     DD-MON-RR HH.MI.SSXFF AM
NLS_TIME_TZ_FORMAT       HH.MI.SSXFF AM TZR
NLS_TIMESTAMP_TZ_FORMAT  DD-MON-RR HH.MI.SSXFF AM TZR
NLS_DUAL_CURRENCY        $
NLS_COMP                 BINARY
NLS_LENGTH_SEMANTICS     BYTE
NLS_NCHAR_CONV_EXCP      FALSE
NLS_NCHAR_CHARACTERSET   AL16UTF16
NLS_RDBMS_VERSION        9.0.1.1.1

20 rows selected.
SQL>
Summary
This chapter discussed the security aspects of the Oracle database: profiles, privileges, and roles.

You use profiles to control database and system resource usage and to manage passwords. You can create various profiles for different user communities and assign a profile to each user. When you create the database, Oracle creates a profile named DEFAULT, which is assigned to users when you do not specify a profile. Profiles can monitor resource use by session or on a per-call basis. Resource limits are enforced only when the parameter RESOURCE_LIMIT is set to TRUE. Using profiles, you can lock an account, manage password expiration and reuse, and verify password complexity. When an account is locked, the DBA must unlock it in order for the user to connect to the database. You cannot drop a profile that is assigned to users unless you use the CASCADE keyword.

You create users in the database to use the database. Every user needs the CREATE SESSION privilege to be able to connect to the database. Users are assigned a default and a temporary tablespace. When the user does not specify a tablespace when creating a table, the table is created in the default tablespace; the temporary tablespace is used for creating sort segments. If you do not specify default or temporary tablespaces, Oracle assigns SYSTEM as the user's default and temporary tablespace. Users are granted a space quota on a tablespace; when they exceed this quota, no more extents can be allocated to their objects. If a user has the UNLIMITED TABLESPACE system privilege, there are no space quota restrictions.

Before connecting a user to the database, Oracle authenticates the username. The authentication method can be via the database, whereby the user specifies a password, or via the operating system. The operating system authentication method uses the operating system login information to connect to the database; such users are created with the IDENTIFIED
EXTERNALLY clause. You cannot drop a user who is connected to the database; you must terminate the user's session before dropping the user.

To control the actions performed by users on the database, you use privileges. A role is a named set of privileges, which makes managing privileges easy. There are two types of privileges: object and system. Object privileges specify allowed actions on a specific object (the owner of the object has to grant the privilege to other users or authorize other users to grant privileges on the object); system privileges specify allowed actions on the database. To manage privileges, you use the GRANT and REVOKE commands. Any privilege granted to PUBLIC is available to all users in the database.

You can monitor database actions or statements using the AUDIT command. You can audit statements, privilege usage, or object usage, and you can restrict auditing to specific users, successful statements, or failed statements. You can also limit the number of audit records generated by specifying auditing by session (one record per session) or by access (one record per DDL or DML statement, and so on).

Oracle can store and retrieve data in a native language and format using the Globalization Support feature. The character set used determines the default language and the conventions used. You specify the character set when you create the database. You can specify only a Unicode character set as the national character set for the database. Oracle provides several parameters that determine the characteristics and conventions of data displayed to the user. You can specify these parameters for a session, for the instance, or in certain SQL functions. If none are specified, the database defaults are used.
Exam Essentials

Know how to differentiate between object privileges and system privileges.  System privileges give the ability to perform certain actions on the database; object privileges give the ability to perform operations on specific objects.

Learn the various privileges that can be granted on objects.  INSERT, UPDATE, DELETE, SELECT, QUERY REWRITE, and EXECUTE are some of the privileges you can grant on schema objects. Know which privilege is applicable to which object.
Learn the various system privileges.  System privileges can be granted on objects owned by a user (such as CREATE TABLE) or on any schema in the database (such as CREATE ANY TABLE). Other system privileges provide administration capabilities.

Know how to grant and revoke privileges.  Understand the implications of granting and revoking privileges, especially object privileges. You use the GRANT statement to grant privileges, and you use the REVOKE statement to remove privileges. Understand the WITH ADMIN OPTION and WITH GRANT OPTION clauses.

Learn to monitor suspicious database activity.  Know the auditing capabilities of the database. You can audit sessions, DDL operations, and DML operations.

Create and manage profiles.  Know the components you can control when managing resources and passwords.

Understand account locking and password verification.  Accounts can be locked after unsuccessful login attempts. Passwords can be set to expire after a certain period, and new passwords can be verified against certain rules.

Understand data dictionary views.  Know the dictionary views that provide information about users, privileges, roles, and sessions.

Know how to manage users.  Create new users and change the tablespace assignments and space quota of existing users.

Know how to enable and disable roles.  Roles granted to users can be set as default or non-default. Roles can also be password protected. Understand the SET ROLE command.

Understand Globalization Support.  Know the difference between the database character set and the national character set. Understand the datatypes that store information in the national character set. Also learn the characteristics of the Unicode character set.

Know the dictionary views with NLS information.  Understand the information available in the dictionary views NLS_DATABASE_PARAMETERS, NLS_INSTANCE_PARAMETERS, and NLS_SESSION_PARAMETERS.
Review Questions

1. Profiles cannot be used to restrict which of the following?
   A. CPU time used
   B. Total time connected to the database
   C. Maximum time a session can be inactive
   D. Time spent reading blocks

2. Which command is used to assign a profile to an existing user?
   A. ALTER PROFILE
   B. ALTER USER
   C. SET PROFILE
   D. The profile should be specified when creating the user; it cannot be changed.

3. Which resource is not used to calculate the COMPOSITE_LIMIT?
   A. PRIVATE_SGA
   B. CPU_PER_SESSION
   C. CONNECT_TIME
   D. LOGICAL_READS_PER_CALL

4. Choose the option that is not true.
   A. Oracle creates a profile named DEFAULT when the database is created.
   B. Profiles cannot be renamed.
   C. DEFAULT is a valid name for a profile resource.
   D. The SESSIONS_PER_USER resource in the DEFAULT profile initially

5. What is the maximum number of profiles that can be assigned to a user?
   A. 1
   B. 2
   C. 32
   D. Unlimited

6. What happens when you create a new user and do not specify a profile?
   A. Oracle prompts you for a profile name.
   B. No profile is assigned to the user.
   C. The DEFAULT profile is assigned.
   D. The SYSTEM profile is assigned.

7. Which resource specifies the value in minutes?
   A. CPU_PER_SESSION
   B. CONNECT_TIME
   C. PASSWORD_LOCK_TIME
   D. All the above

8. Which password parameter in the profile definitions can restrict the user from using the old password for 90 days?
   A. PASSWORD_REUSE_TIME
   B. PASSWORD_REUSE_MAX
   C. PASSWORD_LIFE_TIME
   D. PASSWORD_REUSE_DAYS

9. Which dictionary view shows the password expiration date for a user?
   A. DBA_PROFILES
   B. DBA_USERS
   C. DBA_PASSWORDS
   D. V$SESSION

10. Which clause in the CREATE USER command can be used to specify no limits on the space allowed in tablespace APP_DATA?
    A. DEFAULT TABLESPACE
    B. UNLIMITED TABLESPACE
    C. QUOTA
    D. PROFILE

11. User JAMES has a table named JOBS created on the tablespace USERS. When you issue the following statement, what effect will it have on the JOBS table?
    ALTER USER JAMES QUOTA 0 ON USERS;
    A. No more rows can be added to the JOBS table.
    B. No blocks can be allocated to the JOBS table.
    C. No new extents can be allocated to the JOBS table.
    D. The table JOBS cannot be accessed.

12. Which view would you query to see whether John has the CREATE TABLE privilege?
    A. DBA_SYS_PRIVS
    B. DBA_USER_PRIVS
    C. DBA_ROLE_PRIVS
    D. DBA_TAB_PRIVS

13. Which clause should you specify to enable the grantee to grant the system privilege to other users?
    A. WITH GRANT OPTION
    B. WITH ADMIN OPTION
    C. CASCADE
    D. WITH MANAGE OPTION

14. Which of the following is not a system privilege?
    A. SELECT
    B. UPDATE ANY
    C. EXECUTE ANY
    D. CREATE TABLE

15. Which data dictionary view can you query to see whether a user has the EXECUTE privilege on a procedure?
    A. DBA_SYS_PRIVS
    B. DBA_TAB_PRIVS
    C. DBA_PROC_PRIVS
    D. SESSION_PRIVS

16. To grant the SELECT privilege on the table CUSTOMER to all users in the database, which statement would you use?
    A. GRANT SELECT ON CUSTOMER TO ALL USERS;
    B. GRANT ALL ON CUSTOMER TO ALL;
    C. GRANT SELECT ON CUSTOMER TO ALL;
    D. GRANT SELECT ON CUSTOMER TO PUBLIC;

17. Which role in the following list is not a predefined role from Oracle?
    A. SYSDBA
    B. CONNECT
    C. IMP_FULL_DATABASE
    D. RESOURCE

18. How do you enable a role?
    A. ALTER ROLE
    B. ALTER USER
    C. SET ROLE
    D. ALTER SESSION

19. What is accomplished when you issue the following statement?
    ALTER USER JOHN DEFAULT ROLE ALL;
    A. John is assigned all the roles created in the database.
    B. Future roles granted to John will not be default roles.
    C. All of John's roles are enabled, except the roles with passwords.
    D. All of John's roles are enabled when he connects to the database.

20. Which command defines CONNECT and RESOURCE as the default roles for user JAMES?
    A. ALTER USER
    B. ALTER ROLE
    C. SET ROLE
    D. SET PRIVILEGE

21. Which data dictionary view shows the database character set?
    A. V$DATABASE
    B. NLS_DATABASE_PARAMETERS
    C. NLS_INSTANCE_PARAMETERS
    D. NLS_SESSION_PARAMETERS

22. Choose two NLS parameters that cannot be modified using the ALTER SESSION statement.
    A. NLS_CHARACTERSET
    B. NLS_SORT
    C. NLS_NCHAR_CHARACTERSET
    D. NLS_TERRITORY
Answers to Review Questions 1. D. There is no resource parameter in the profile definition to monitor
the time spent reading blocks, but you can restrict the number of blocks read per SQL statement or per session. 2. B. You use the PROFILE clause in the ALTER USER command to
set the profile for an existing user. You must have the ALTER USER privilege to do this. 3. D. Call-level resources are not used to calculate the COMPOSITE_
LIMIT. You can set the resource cost of the four resources (the fourth is LOGICAL_READS_PER_SESSION) using the ALTER RESOURCE COST command. 4. D. All resources in the default profile have a value of UNLIMITED
when the database is created. You can change these values. 5. A. A user can have only one profile assigned. You can query the
profile assigned to a user from the DBA_USERS view. 6. C. The DEFAULT profile is created when the database is created and is
assigned to users if you do not specify a profile for the new user. Before you can assign a profile, you must create the user in the database. 7. B. CONNECT_TIME is specified in minutes, CPU_PER_SESSION is spec-
ified in hundredths of a second, and PASSWORD_LOCK_TIME is specified in days. 8. A. PASSWORD_REUSE_TIME specifies the number of days required
before the old password can be reused; PASSWORD_REUSE_MAX specifies the number of password changes required before a password can be reused. At least one of these parameters must be set to UNLIMITED. 9. B. The DBA_USERS view shows the password expiration date,
account status, and locking date along with the user’s tablespace assignments, profile, creation date, and so on. 10. C. You use the QUOTA clause to specify the amount of space allowed
on a tablespace; you can specify a size or UNLIMITED. The user will have unlimited space if the system privilege UNLIMITED TABLESPACE is granted.
Managing Users, Security, and Globalization Support
11. C. When the space quota is exceeded or quota is removed from a user
on a tablespace, the tables remain in the tablespace, but no new extents can be allocated. 12. A. CREATE TABLE is a system privilege. You can query system privi-
leges from DBA_SYS_PRIVS or USER_SYS_PRIVS. 13. B. The WITH ADMIN OPTION specified with system privileges enables
the grantee to grant the privileges to others, and the WITH GRANT OPTION specified with object privileges enables the grantee to grant the privilege to others. 14. A. SELECT, INSERT, UPDATE, DELETE, EXECUTE, and REFERENCES
are object privileges. SELECT ANY, UPDATE ANY, and so on are system privileges. 15. B. The DBA_TAB_PRIVS, USER_TAB_PRIVS, and ALL_TAB_PRIVS
views show information about the object privileges. 16. D. PUBLIC is the group or class of database users to which all users
of the database belong. 17. A. SYSDBA and SYSOPER are not roles; they are system privileges. 18. C. You use the SET ROLE command to enable or disable granted roles
for the user. The view SESSION_ROLES shows the roles that are enabled in the session. All default roles are enabled when the user connects to the database. 19. D. Default roles are enabled when a user connects to the database
19. D. Default roles are enabled when a user connects to the database even if the roles are password authorized.

20. A. The ALTER USER command defines the default role(s) for a user.

21. B. The NLS_DATABASE_PARAMETERS view shows the database character set and all the NLS parameter settings. The character set cannot be changed at the instance or session level, so the character set information does not show up in the NLS_INSTANCE_PARAMETERS and NLS_SESSION_PARAMETERS views.

22. A and C. You cannot change the character set after creating the database. The CHARACTER SET and NATIONAL CHARACTER SET clauses are used in the CREATE DATABASE command.
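To check the character sets of an existing database, you can query the dictionary view directly; this is a simple sketch, not an exam answer:

SELECT parameter, value
FROM nls_database_parameters
WHERE parameter IN ('NLS_CHARACTERSET', 'NLS_NCHAR_CHARACTERSET');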
A

alert log A log that is written to the BACKGROUND_DUMP_DEST directory that is specified in the initialization parameter file and that shows start-ups, shutdowns, ALTER DATABASE and ALTER SYSTEM commands, and a variety of error statements.

alert log file A text file that logs significant database events and messages. The alert log file stores information about block corruption errors, internal errors, and the non-default initialization parameters used at instance start-up.

archive logs Logs that are copies of the online redo logs and that are saved to another location before the online copies are reused.

ARCHIVELOG A mode of database operation. When the Oracle database is run in ARCHIVELOG mode, the online redo log files are copied to another location before they are overwritten. These archived log files can be used for point-in-time recovery of the database. They can also be used for analysis.

archiver process (ARCn) Copies the online redo log files to archive log files.

asynchronous I/O Multiple I/O activities performed at the same time without any dependencies.

audit trail Records generated by auditing, which are stored in the database in the table SYS.AUD$. Auditing enables the DBA to monitor suspicious database activity.

automatic archiving The automatic creation of archive logs after the appropriate redo logs have been switched. The LOG_ARCHIVE_START parameter must be set to TRUE in the init.ora file for automatic archiving to take place.

automatic space management For data block maintenance in a tablespace, using bitmaps instead of free lists to manage free and used space. An alternative to using PCTUSED, FREELISTS, and FREELIST GROUPS to manage data blocks.
B

BACKGROUND_DUMP_DEST An init.ora parameter that determines the location of the alert log and Oracle background process trace files.

base tables The lowest-level tables in the data dictionary. They are highly normalized and contain cryptic, version-specific information. The data dictionary views are based on these tables.

before image Image of the transaction data before the transaction occurred. This image is stored in the rollback (undo) segments when DML operations are performed.

bitmap index An indexing method used by Oracle to create the index by using bitmaps. Used for low-cardinality columns.

block The smallest unit of storage in an Oracle database. Data is stored in the database in blocks. The block size is defined when the database is created and is a multiple of the operating system block size.

b-tree An algorithm used for creating indexes.
C

cache recovery The part of instance recovery in which all the data that is not in the data files is reapplied to the data files from the online redo logs.

change vectors A description of a change made to a single block in the database.

checkpoint process (CKPT) A checkpoint is an event that flushes the modified data from the buffer cache to the disk and updates the control file and data files. The checkpoint process updates the headers of data files and control files; the actual blocks are written to the file by the DBWn process.

checkpointing The process of updating the SCN in all the data files and control files in the database in conjunction with all necessary data blocks in the data buffers being written to disk. This is done for the purposes of ensuring database consistency and synchronization.

coalesce Combine neighboring free extents to form a single extent on a tablespace.

commit To save or permanently store the results of a transaction to the database.

control file Maintains information about the physical structure of the database. The control file contains the database name and timestamp of database creation, backup information, and the name and location of every data file and redo log file.

current online redo logs Logs that are actively being written to by the LGWR process.
D

data block The smallest unit of data storage in Oracle. The block size is specified when the database is created.

data block buffers Memory buffers containing data blocks that get flushed to disk if modified and committed.

data dictionary A collection of database tables and views containing metadata about the database, its structures, its privileges, and its users. Oracle accesses the data dictionary frequently during the parsing of SQL statements.

data dictionary cache An area in the shared pool that holds the most recently used data dictionary information. The data dictionary cache is also known as the row cache because it holds data as rows instead of buffers (which hold entire blocks of data).

data file The data files in a database contain all the database data. One data file can belong to only one database and to one tablespace. Tablespaces can consist of more than one data file.

data segment A segment that stores table or cluster data.

data types Used to specify certain characteristics for table columns, such as numeric, alphanumeric, date, and so on.

database The physical structure that stores the actual data. The Oracle server consists of the database and the instance.

database buffer cache The area of memory that caches the database data. It holds the recent blocks that are read from the database data files, and it holds new or modified blocks that are to be written to the database data files.

database buffers See data block buffers.

database writer process (DBWn) The DBWn process writes the changed database blocks from the SGA to the data file. There can be a maximum of 10 database writer processes (DBW0 through DBW9).

DBA tools GUI tools integrated with the OEM. Administrators can use these tools for complete database administration rather than using SQL*Plus.

DBWn See database writer process.

degree of parallelism The number of parallel processes you choose to enable for a particular parallel activity such as recovery.

DICTIONARY A data dictionary view that contains the name and description of all the data dictionary views in the database.

dictionary-managed tablespace A tablespace in which the extent allocation and de-allocation information is managed through the data dictionary.

dirty buffers The blocks in the database buffer cache that are changed, but not yet written to the disk.

distributed transactions Transactions that occur in remote databases.

DUAL A dummy table owned by SYS; it has one column and one row and is useful for computing a constant expression with a SELECT statement.

dump file The file where the logical backup is stored. This file is created by the Export utility and read by the Import utility.

dynamic performance views Data dictionary views that are continuously updated while a database is open and in use. Their contents relate primarily to performance. These views have the prefix V$.
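For example, DUAL and the dynamic performance views can be queried like any other table or view; the V$INSTANCE columns shown are just one illustration:

SELECT SYSDATE FROM dual;                      -- constant expression via DUAL
SELECT instance_name, status FROM v$instance;  -- a dynamic performance view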
E

environment variables Operating system variables, usually on Unix, that define file locations and other parameters for the database.

execute The stage in SQL processing that runs the parsed SQL code from the library cache.

extent A contiguous allocation of blocks for data or index storage. An extent has multiple blocks, and a segment can have multiple extents.

extent management Extents allocated to tablespaces can be managed locally in the data files or through the data dictionary. You can specify the extent management clause when creating a tablespace. The default is dictionary-managed.
F

fetch The stage in SQL query processing that returns the data to the user process.

free buffers The blocks in the database buffer cache that can be overwritten.

free lists A logical storage structure within each segment that maintains the list of available blocks for future inserts into that segment.

function-based index Indexes created on functions or expressions, to speed up queries containing WHERE clauses with a particular function or expression.
H

header block The first block in a data file; it contains information about the data file, such as freelist and checkpoint information.

high-water mark (HWM) The maximum number of blocks used by the table. The high-water mark is not reset when you delete rows.
I

Index Organized Table (IOT) Table rows stored in a b-tree index, using a primary key. Avoids duplication of storage for table data and index information.

index segment A segment that stores index information.

INITRANS A segment data block parameter that controls the number of concurrent transactions that can modify or create data in the block.

instance The memory structures and background processes of the Oracle server.

integrity constraints Structures built into the database to enforce business rules.
J

Java pool An optional area in the SGA used for processing server-side Java language procedures.
L

large pool An optional area in the SGA used for specific database operations, such as backup, recovery, or the User Global Area (UGA) space when using a Shared Server configuration.

library cache An area in the shared pool of the SGA that stores the parsed SQL statements. When a SQL statement is submitted, the server process searches the library cache for a matching SQL statement; if it finds one, re-parsing of the SQL statement is not necessary.

locally managed tablespace A tablespace that has the extent allocation and de-allocation information managed through bitmaps in the associated data files of the tablespace.

log buffers Memory buffers containing the entries that are written to the log files.

log file group Two or more log files, usually stored on different physical disks, that are written to in parallel and are considered multiplexed for the purposes of database recovery. For the purposes of log file creation and recovery, a single, non-multiplexed log file is still considered part of a log group.

log sequence number A sequence number assigned to each redo log file.

log writer process (LGWR) The LGWR process writes the redo log buffer entries (change vectors) to the online redo log files. A redo log entry is any change, or transaction, that has been applied to the database, committed or not.

LOG_ARCHIVE_DEST An init.ora parameter that determines the destination of the archive logs.

LOG_ARCHIVE_DEST_n An init.ora parameter that determines the other destinations of the archive logs, remote or local. This parameter supports a maximum of five locations, n being a number from 1 through 5. Only one of these destinations can be remote.

LOG_ARCHIVE_DUPLEX_DEST An init.ora parameter that determines the duplexed, or second, destination of archive logs in a two-location archive log configuration.

LOG_ARCHIVE_START An init.ora parameter that enables automatic archiving.

logging The recording of DML statements, creation of new objects, and other changes in the redo logs.

logical attributes For tables and indexes, logical attributes are the columns, data types, constraints, and so on.

logical structures The database structures as seen by the user. Tablespaces, segments, extents, blocks, tables, and indexes are all examples of logical structures.

LogMiner A utility that can be used to analyze the redo log files. It can provide a fix for logical corruption by building redo and undo SQL statements from the contents of the redo logs. LogMiner is a set of PL/SQL packages and dynamic performance views.
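As an illustrative init.ora fragment showing how these archiving parameters fit together (the path and service name are examples only):

LOG_ARCHIVE_START = TRUE                   # enable automatic archiving
LOG_ARCHIVE_DEST_1 = "LOCATION=/u01/arch"  # a local archive destination
LOG_ARCHIVE_DEST_2 = "SERVICE=stby"        # at most one destination can be remote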
M

Management Server The middle tier between the console GUI and managed nodes in the Enterprise Manager setup. It processes and coordinates all system management tasks and distributes these tasks to Intelligent Agents on the nodes.

MAXTRANS A segment data block parameter that specifies the maximum number of concurrent transactions that can modify or create data in the block.

metadata Information about the objects that are available in the database. The data dictionary information is the metadata for the Oracle database.

mount A stage in starting up the database. When the instance mounts a database, the control file is opened to read the data file and redo log file information before the database can be opened.

multiplexing Oracle’s mechanism for writing to more than one copy of the redo log file or control file. It involves mirroring, or making duplicate copies. Multiplexing ensures that even if you lose one member of the redo log group or one control file, you can recover using the other one. In RMAN, multiplexing also refers to interspersing blocks from multiple Oracle data files within a backup set.

multithreaded configuration A database configuration whereby one shared server process takes requests from multiple user processes. In a dedicated server configuration, there is one server process for each user process. In Oracle9i, this is known as a Shared Server configuration; it was formerly known as MTS.
N

National Language Support (NLS) Enables Oracle to store and retrieve information in a format and language that can be understood by users anywhere in the world. The database character set and various other parameters are used to enhance this capability.

NOARCHIVELOG A mode of database operation whereby the redo log files are not preserved for recovery or analysis purposes.

NOLOGGING Not recording DML statements, the creation of new objects, and other changes in the redo logs, therefore making the changes unrecoverable until the next physical backup.

non-current online redo logs Online redo logs that are not in the current or active group being written to.
O

object privileges Privileges granted on an object for users other than the object owner. They allow these users to manipulate data in the object or to modify the object.

online redo logs Redo logs that are being written to by the LGWR process at some point in time. See archive logs.

Optimal Flexible Architecture (OFA) A standard that defines the optimal way to set up an Oracle database. It includes guidelines for specifying database file locations for better performance and management.

ORA_NLS33 An environment variable to set if using a character set other than US7ASCII.

Oracle Enterprise Manager (OEM) A DBA system management tool that performs a variety of DBA tasks, including running the RMAN utility in GUI mode, managing different components of Oracle, and administering the databases at one location.

Oracle Managed Files (OMF) A method to ease the maintenance of database file locations for data files, control files, and online redo log files. Two new initialization parameters define the location of files in the operating system: DB_CREATE_FILE_DEST and DB_CREATE_ONLINE_LOG_DEST_n.

Oracle Real Application Clusters (RAC) An Oracle database that consists of at least two servers, or nodes, each with an instance but sharing one database. Formerly known as Oracle Parallel Server.

Oracle Recovery Manager (RMAN) The Recovery Manager utility, which is responsible for the backup and recovery of Oracle databases.

Oracle Universal Installer (OUI) A Java-based GUI tool used to install all Oracle products.

ORACLE_HOME The environment variable that defines the location where the Oracle software is installed.

ORACLE_SID The environment variable that defines the database instance name. If you are not using Oracle Net, connections are made to this database instance by default.

operating system authentication An authentication method used to connect administrators and operators to the database to perform administrative tasks. Connection is made to the database by verifying the operating system privileges.
P

package A stored PL/SQL program that holds a set of other programs such as procedures, functions, cursors, variables, and so on.

parallel query processes Oracle background processes that process a portion of a query. Each parallel query process runs on a separate CPU.

PARALLEL_MAX_SERVERS An init.ora parameter that determines the maximum number of parallel query processes at any given time.

parameter file A binary or text file with parameters to configure memory, database file locations, and limits for the database. This file is read when the database is started. When you are using utilities such as SQL*Loader or Export or Import, you can specify the command-line parameters in a parameter file, which can be reused for other exports or imports.

parsing A stage in SQL processing wherein the syntax of the SQL statement, object names, and user access are verified. Oracle also prepares an execution plan for the statement.

partitioning Breaking the table or index into multiple smaller, more manageable chunks.

password file authentication An authentication method used to connect administrators and operators to the database to perform administrative tasks. Oracle creates a file on the server with the SYS password; users are added to this file when they are granted SYSOPER or SYSDBA privilege.

PCTFREE A segment block parameter that specifies what percentage of the block should be allocated as free space for future updates.

PCTUSED A segment block parameter that specifies when a block can be considered for adding new rows. New rows can be added to the block only when the used space falls below this percentage.

PFILE A text-based file containing the initial values for memory, file locations, and other parameters in the instance.

PGA See Program Global Area.

physical attributes For tables and indexes, physical attributes are the physical storage characteristics, such as the extent size, tablespace name, and so on.

physical structures The database structures that store the actual data and operation of the database. Data files, control files, and redo log files constitute the physical structure of the database.

pinned buffers The blocks in the database buffer cache that are being accessed.

privileges Authorization granted on an object in the database or an authorization to perform an activity.

procedure or function A PL/SQL program that is stored in the database in a compiled form. A function always returns one value; a procedure does not. You can pass a parameter to and from the procedure or function.

process A daemon, or background program, that performs certain tasks.

process monitor process (PMON) Performs recovery of failed user processes. This process is mandatory and is started by default when the database is started. It frees up all the resources held by the failed processes.

profiles A set of named parameters used to control the use of resources and to manage passwords.

Program Global Area (PGA) A non-shared memory area allocated to the server process. Also known as the Process Global Area or Private Global Area.

PUBLIC A users group available in all databases of which all users are members. When a privilege is granted to PUBLIC, it is available to all users in the database.
R

read-consistent image Image of the transaction data before the transaction occurred. This image is available to all users not executing the transaction.

read-only tablespace A tablespace that allows only read activity, such as SELECT statements. It is available only for querying. The data is static and doesn’t change. No write activity (for example, INSERT, UPDATE, and DELETE statements) is allowed. Read-only tablespaces need to be backed up only once.

read-write tablespace A tablespace that allows both read and write activity, including SELECT, INSERT, UPDATE, and DELETE statements. By default, a tablespace is opened in read-write mode.

record sections Logical areas within the control file. The two types of record sections are reusable and not reusable.

recoverer process (RECO) An optional background process used with distributed transactions to resolve failures.

recovery catalog Information stored in a database used by the RMAN utility to back up and restore databases.

Recovery Manager (RMAN) An automated tool from Oracle that can perform and manage the backup and recovery process.

redo buffers See log buffers.

redo entry See redo record.

redo log buffer The area in the SGA that records all changes to the database. The changes are known as redo entries, or change vectors, and are used to reapply the changes to the database in case of a failure.

redo log file The redo log buffers from the SGA are periodically copied to the redo log files. Redo log files are critical to database recovery.

redo logs Record all changes to the database, whether the transactions are committed or rolled back. Redo logs are classified as online redo logs or offline redo logs (also called archive logs), which are simply copies of online redo logs.

redo record A group of change vectors. Redo entries record data that you can use to reconstruct all changes to the database, including the rollback segments.

Redundant Array of Inexpensive Disks (RAID) The storage of data on multiple disks for fault tolerance, to protect against individual disk crashes. If one disk fails, that disk can be rebuilt from the other disks. RAID is available in variations, termed RAID 0 through 5 in most cases.

report A query of the catalog that is more detailed than a list and that describes what may need to be done.

reverse key index An index in which column values are reversed before being added to the index entry.

role A named group of system and object privileges used to ease the administration of privileges to users.

roll back To undo a transaction from the database.

roll-forward-and-roll-back process Applying all the transactions, committed or not committed, to the database and then undoing all uncommitted transactions.

row cache See data dictionary cache.

row chaining Storing a row in multiple blocks because the entire row cannot fit in one block. Usually row chaining occurs when the table has large VARCHAR2 or LOB columns.

row migration Moving a row from one block to another due to an update operation, because not enough free space is available to accommodate the updated row.

ROWID Exact physical location of the row on disk. ROWID is a pseudocolumn in all tables.
S

schema A logical structure used to group a set of database objects owned by a user.

segment A logical structure that holds data. Every object created to store data is allocated a segment. A segment has one or more extents.

server process A background process that takes requests from the user process and applies them to the Oracle database.

session A job or a task that Oracle manages. When you log in to the database by using SQL*Plus or any tool, you start a session.

SGA See System Global Area.

shared pool An area in the SGA that holds information such as parsed SQL, PL/SQL procedures and packages, the data dictionary, locks, character set information, security attributes, and so on.

SMON See system monitor process.

sort area An area in the PGA that is used for sorting data during query processing.

space quota Maximum space allowed for a user in a tablespace for creating objects.

SPFILE A binary file containing instance parameter values, such as memory, file locations, and other database modes of operation. It is not meant to be edited by a standard text editor; it is created from a standard PFILE and then modified by the ALTER SYSTEM command thereafter. See also PFILE.

sql.bsq The SQL commands automatically run when the CREATE DATABASE command is executed. These commands create the minimal set of tables, indexes, user accounts, and roles for proper database operation.

standard block size Block size specified when creating the database. The system tablespace and temporary tablespaces use this block size. Other tablespaces can use a non-standard block size.

structure Either a physical or a logical object that is part of the database, such as files or database objects themselves.

system change number (SCN) A unique number generated at the time of a COMMIT, acting as an internal counter to the Oracle database and used for recovery and read consistency.

System Global Area (SGA) A memory area in the Oracle instance that is shared by all users.

system monitor process (SMON) Performs instance recovery at database start-up by using the online redo log files. It is also responsible for cleaning up temporary segments in the tablespaces that are no longer used and for coalescing the contiguous free space in the tablespaces.

system privileges Privileges granted to perform an action, as opposed to a privilege on an object.
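For example, an SPFILE can be created from a PFILE and then maintained with ALTER SYSTEM; the file path and parameter value here are illustrative:

CREATE SPFILE FROM PFILE = '/u01/app/oracle/admin/orcl/pfile/init.ora';
ALTER SYSTEM SET shared_pool_size = 64M SCOPE = SPFILE;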
T

tablespace A logical storage structure at the highest level. A tablespace can have many segments that may be used for data, index, sorting (temporary), or rollback information. The data files are directly related to tablespaces. A segment can belong to only one tablespace.

tempfile Physical file associated with the locally managed temporary tablespace.

templates A logical structure created with the Oracle Database Configuration Assistant (DBCA) to make it easier to create a similar database on the same or a different server. It can be created from scratch, or it can be generated from an existing database.

temporary segment A segment created for sorting data. Temporary segments are also created when an index is built or a table is built using CREATE TABLE AS.

temporary tablespace A tablespace that stores temporary segments for sorting and creating tables and indexes.
U

undo entries The block and row information used to undo the changes made to a row.

undo segment A database segment containing undo entries.

undo tablespace A database tablespace used only for holding undo segments. Only one undo tablespace can be active at a time.

Unicode Character set that allows you to store information from any language using a single character set.

user process A process that is started by the application tool that communicates with the Oracle server process.

USER_DUMP_DEST An init.ora parameter that determines the location of the user process trace files.
V

V$CONTROLFILE The dictionary view that gives the names of the control files. It provides information about the control files that can be useful in the backup and recovery process.

V$DATAFILE The view that provides information about the data files.

V$LOG_HISTORY A V$ view that displays history regarding the redo log files.

V$LOGFILE The dictionary view that gives the redo log file names and status of each redo log file. It provides information about the online redo log files.

V$TABLESPACE The view providing information about the tablespaces in the database.
Acknowledgments

First, I want to thank the Lord, my savior, for making this all possible. Thanks to Mae, Elizabeth, and Richard for direction and guidance throughout the writing of this book. Thanks to Rebecca for your hard work, edits, and suggestions, which greatly improved this book and made my job much easier, and to John Anwanwan and Damir Bersinic for your technical edits and reviews, which tremendously enhanced the quality of this writing. I want to give a belated thanks to the Hobbs family for getting me involved initially. Finally, I want to thank my wife and family for supporting me throughout this process. Thanks for providing me the time to work on this book. I know this was a great sacrifice. I sincerely appreciate it! —Doug Stuns

I would like to thank the entire Sybex team for another great effort. The team may have changed from the last book, but the results are as good as ever. The team of Mae Lum, Elizabeth Hurley, and Richard Mills was fantastic to work with. I would also like to extend a special thanks to Rebecca Rider for her outstanding editing job. It made my job much easier to work with this terrific group of people. I would also like to thank Joe Johnson, once again, for giving me a chance on the original Oracle8i OCP series. The second time around has been just as gratifying as the first. I would also like to thank my family for all of their love and support. Thanks for believing in me once again. With your faith and the Lord’s guidance, anything is possible. —Matt Weishan

Sybex would like to thank electronic publishing specialist Jill Niles and indexer Ann Rogers for their valuable contributions to this book.
Introduction

There is high demand for professionals in the information technology (IT) industry, and Oracle certifications are the hottest credential in the database world. You have made the right decision to pursue certification, because being Oracle certified will give you a distinct advantage in this highly competitive market.

Many readers may already be familiar with Oracle and do not need an introduction to the Oracle database world. For those who aren’t familiar with the company, Oracle, founded in 1977, sold the first commercial relational database and is now the world’s leading database company and the second-largest independent software company, with revenues of more than $10 billion, serving more than 145 countries.

Oracle databases are the de facto standard for large Internet sites, and Oracle advertisers are boastful but honest when they proclaim, “The Internet Runs on Oracle.” Almost all big Internet sites run Oracle databases. Oracle’s penetration of the database market runs deep and is not limited to dot-com implementations. Enterprise resource planning (ERP) application suites, data warehouses, and custom applications at many companies rely on Oracle. Demand for DBA resources remains higher than demand for most other IT roles, even during weak economic times.

This book is intended to help you on your exciting path toward becoming an Oracle9i Oracle Certified Professional (OCP) and Oracle Certified Master (OCM). To get the maximum benefit from this book, you should already be knowledgeable in networking, operating systems, Oracle SQL, and DBA concepts. Using this book and a practice database, you can start learning Oracle and prepare to pass the 1Z0-032 test: Oracle9i Database: Fundamentals II.
Why Become an Oracle Certified Professional?

The number one reason to become an OCP is to gain more visibility and greater access to the industry’s most challenging opportunities. Oracle certification is the best way to demonstrate your knowledge and skills in Oracle database systems. The certification tests are scenario-based, which is the most effective way to assess your hands-on expertise and critical problem-solving skills. Certification is proof of your knowledge and shows that you have the skills required to support Oracle core products. The Oracle certification program
can help a company identify proven performers who have demonstrated their skills and who can support the company’s investment in Oracle technology. It demonstrates that you have a solid understanding of your job role and the Oracle products used in that role. OCPs are among the best paid in the IT industry. Salary surveys consistently show the OCP certification to yield higher salaries than other certifications, including Microsoft, Novell, and Cisco. So, whether you are beginning a career, changing careers, securing your present position, or seeking to refine and promote your position, this book is for you!
Oracle Certifications

Oracle certifications follow a track that is oriented toward a job role. There are database administration, database operator, and developer tracks. Within each track, Oracle has a three-tiered certification program:
The first tier is the Oracle Certified Associate (OCA). OCA certification typically requires you to complete two exams, the first via the Internet and the second in a proctored environment.
The next tier is the Oracle Certified Professional (OCP), which builds upon and requires an OCA certification. OCP certification requires passing additional proctored exams.
The third and highest tier is the Oracle Certified Master (OCM). OCM certification builds upon and requires OCP certification. To achieve OCM certification, you must attend two advanced Oracle Education classroom courses (from a specific list of qualifying courses) and complete a practicum exam.
The following material will address only the database administration track, because at the time of this writing, it was the only 9i track offered by Oracle. The other tracks have 8 and 8i certifications and will undoubtedly have 9i certifications. See the Oracle website at http://www.oracle.com/education/certification for the latest information.
Oracle9i Certified Database Associate

The role of the database administrator (DBA) has become a key to success in today’s highly complex database systems. The best DBAs work behind the scenes, but are in the spotlight when critical issues arise. They plan, create,
maintain, and ensure that the database is available for the business. They are always watching the database for performance issues and working to prevent unscheduled downtime. The DBA’s job requires a broad understanding of the architecture of the Oracle database and expertise in solving problems. The Oracle9i Certified Database Associate is the entry-level certification for the database administration track and is required to advance toward the more senior certification tiers. This certification requires you to pass two exams that demonstrate your knowledge of Oracle basics:
1Z0-007: Introduction to Oracle9i: SQL
1Z0-031: Oracle9i Database: Fundamentals I
The 1Z0-007 exam, Introduction to Oracle9i: SQL, is offered on the Internet. The 1Z0-031 exam, Oracle9i Database: Fundamentals I, is offered at a Sylvan Prometric facility.
Oracle9i Certified Professional (OCP)

The OCP tier of the database administration track challenges you to demonstrate your continuing experience and knowledge of Oracle technologies. The Oracle9i Certified Database Administrator certification requires achievement of the Certified Database Associate tier, as well as passing the following two exams at a Sylvan Prometric facility:
1Z0-032: Oracle9i Database: Fundamentals II
1Z0-033: Oracle9i Database: Performance Tuning
Oracle9i Certified Master

The Oracle9i Certified Master is the highest level of certification that Oracle offers. To become a certified master, you must first achieve OCP status, then complete two advanced instructor-led classes at an Oracle education facility, and finally pass a hands-on exam at Oracle Education. The classes and practicum exam are offered only at an Oracle education facility and may require travel. The advanced classes that will count toward your OCM requirement include the following:
Oracle9i: High Availability in an Internet Environment
Oracle9i: Database: Implement Partitioning
Oracle9i: Real Application Clusters Implementation
Oracle9i: Data Warehouse Administration
Oracle9i: Advanced Replication
Oracle9i: Enterprise Manager
Passing Scores

The 1Z0-032: Oracle9i Database: Fundamentals II exam consists of two sections, basic and mastery. At the time this book was written, the passing score for the basic section was 71 percent, and for the mastery section it was 56 percent. Please download and read the Oracle9i Certification candidate guide before you take the exam. The basic section covers the fundamental concepts, and the mastery section covers more difficult questions, which are mostly based on practice and experience. You must pass both sections to pass the exam. The objectives, test scoring, number of questions, and so on are listed at http://www.oracle.com/education/certification.
More Information

The most current information about Oracle certification can be found at http://www.oracle.com/education/certification. Follow the Certification link and choose the track that you are interested in. Read the Candidate Guide for the test objectives and test contents, and keep in mind that they can change at any time without notice.
OCA/OCP Study Guides

The Oracle9i database administration track certification consists of four tests: two for OCA level and two more for OCP level. Sybex offers several study guides to help you achieve this certification:
OCA/OCP: Introduction to Oracle9i™ SQL Study Guide (exam 1Z0-007: Introduction to Oracle9i: SQL)
OCA/OCP: Oracle9i™ DBA Database Fundamentals I Study Guide (exam 1Z0-031: Oracle9i Database: Fundamentals I)
Use the tuning/diagnostics tools STATSPACK, TKPROF, and EXPLAIN PLAN.
Tune the size of data blocks, the shared pool, the buffer caches, and rollback segments.
Diagnose contention for latches, locks, and rollback segments.
Tips for Taking the OCP Exam

Use the following tips to help you prepare for and pass each exam.
Each OCP test contains about 55–80 questions to be completed in 90 minutes. Answer the questions you know first, so that you do not run out of time.
Many questions on the exam have answer choices that at first glance look identical. Read the questions carefully. Do not just jump to conclusions. Make sure that you clearly understand exactly what each question asks.
Most of the test questions are scenario-based. Some of the scenarios contain nonessential information and exhibits. You need to be able to identify what’s important and what’s not important.
Do not leave any questions unanswered. There is no negative scoring. After selecting an answer, you can mark a difficult question or one that you’re unsure of and come back to it later.
When answering questions that you are not sure about, use a process of elimination to get rid of the obviously incorrect answers first. Doing this greatly improves your odds if you need to make an educated guess.
If you’re not sure of your answer, mark it for review and then look for other questions that may help you eliminate any incorrect answers. At the end of the test, you can go back and review the questions that you marked for review.
Where Do You Take the Exam?

You may take the exams at any of the more than 800 Sylvan Prometric Authorized Testing Centers around the world. For the location of a testing center near you, call 1-800-891-3926. Outside the United States and Canada,
contact your local Sylvan Prometric Registration Center. Usually, the tests can be taken in any order. To register for a proctored Oracle Certified Professional exam at a Sylvan Prometric test center:
Determine the number of the exam you want to take.
Register with Sylvan Prometric online at http://www.2test.com or in North America, by calling 1-800-891-EXAM (800-891-3926). At this point, you will be asked to pay in advance for the exam. At the time of this writing, the exams are $125 each and must be taken within one year of payment.
When you schedule the exam, you’ll get instructions regarding all appointment and cancellation procedures, the ID requirements, and information about the testing-center location.
You can schedule exams up to six weeks in advance or as soon as one working day before the day you wish to take it. If something comes up and you need to cancel or reschedule your exam appointment, contact Sylvan Prometric at least 24 hours in advance.
What Does This Book Cover?

This book covers everything you need to know to pass the Oracle9i Database: Fundamentals II exam. This exam is part of the Oracle9i Certified Database Administrator certification tier in the database administration track. It teaches you the basics of Oracle networking and backup and recovery. Each chapter begins with a list of exam objectives.

Chapter 1 Introduces the Oracle network architecture and the responsibilities of the DBA for managing the Oracle network.

Chapter 2 Discusses the setup and administration of Oracle Net on the Oracle server. It explains how to configure the Oracle Net server-side components and how to troubleshoot server-side network problems.

Chapter 3 Explains how to set up and administer Oracle Net client-side components. It demonstrates and discusses how to configure Oracle so that clients can connect to an Oracle server. It also discusses troubleshooting Oracle client-side connectivity problems.
Chapter 4 Introduces the Oracle Shared Server. It discusses when to use Shared Server and how to configure Shared Server within the Oracle environment.

Chapter 5 Provides a backup and recovery overview. The types of failures of an Oracle database are discussed.

Chapter 6 Discusses instance and media recovery structures. Oracle processes, memory structures, and files relating to recovery are discussed. The importance of checkpointing, redo logs, and archived logs is also discussed.

Chapter 7 Explains how to configure the database for archive logging. The difference between archive logging and no archive logging is discussed.

Chapter 8 Provides an overview of Recovery Manager and its configuration. The RMAN repository, channel allocation, and RMAN configuration are discussed.

Chapter 9 Discusses user-managed and RMAN-based backup methods. Different examples are performed with each of these backup methods.

Chapter 10 Discusses user-managed and RMAN-based complete recovery methods. Different examples of complete recovery are performed.

Chapter 11 Discusses user-managed and RMAN-based incomplete recovery methods. Different examples of incomplete recovery are performed.

Chapter 12 Introduces RMAN maintenance. Maintaining the RMAN repository, retention policies, backups, and backup availability are discussed.

Chapter 13 Introduces recovery catalog creation and maintenance. This chapter describes the recovery catalog and how to create it. Performing maintenance on the recovery catalog, creating and running scripts, generating lists and reports, and backing up the recovery catalog are all discussed.

Chapter 14 Discusses transporting data between databases with the Export and Import utilities.
Chapter 15 Introduces the SQL*Loader utility and the direct-load insert operation. This chapter also discusses the use of each of these data loading methods.

Each chapter ends with review questions that are specifically designed to help you retain the knowledge presented. To really nail down your skills, read and answer each question carefully.
How to Use This Book

This book can provide a solid foundation for the serious effort of preparing for the OCP database administration exam track. To best benefit from this book, use the following study method:

1. Take the Assessment Test immediately following this introduction. (The answers are at the end of the test.) Carefully read over the explanations for any questions you get wrong, and note which chapters the material comes from. This information should help you plan your study strategy.

2. Study each chapter carefully, making sure that you fully understand the information and the test objectives listed at the beginning of each chapter. Pay extra close attention to any chapter related to questions you missed in the Assessment Test.

3. Complete all hands-on exercises in the chapter, referring to the chapter so that you understand the reason for each step you take. If you do not have an Oracle database available, be sure to study the examples carefully. Answer the Review Questions related to that chapter. (The answers appear at the end of each chapter, after the “Review Questions” section.)

4. Note the questions that confuse or trick you, and study those sections of the book again.

5. Before taking the exam, try your hand at the Bonus Exams that are included on the CD that comes with this book. The questions on these exams appear only on the CD. These will give you a complete overview of what you can expect to see on the real test.

6. Remember to use the products on the CD included with this book. The electronic flashcards and the EdgeTest exam preparation software have been specifically designed to help you study for and pass your exam. The electronic flashcards can be used on your Windows computer or on your Palm device.

To learn all the material covered in this book, you’ll need to apply yourself regularly and with discipline. Try to set aside the same time period every day to study, and select a comfortable and quiet place to do so. If you work hard, you will be surprised at how quickly you learn this material. All the best!
What’s on the CD?

We have worked hard to provide some really great tools to help you with your certification process. All of the following tools should be loaded on your workstation when you’re studying for the test.
The EdgeTest for Oracle Certified DBA Preparation Software

Provided by EdgeTest Learning Systems, this test-preparation software prepares you to pass the Oracle9i Database: Fundamentals II exam. In this test, you will find all of the questions from the book, plus two additional Bonus Exams that appear exclusively on the CD. In addition, you can take the Assessment Test, test by chapter, or take an exam randomly generated from all of the questions.
Electronic Flashcards for PC and Palm Devices

You should read the OCP: Oracle9i Database: Fundamentals II Study Guide carefully, particularly the Review Questions at the end of each chapter, and you should also take advantage of the Bonus Exams included on the CD. But wait, there’s more! Be sure to test yourself with the flashcards included on the CD. If you can get through these questions and you understand the answers, you’ll know that you’re ready for the exam. The flashcards include 150 questions specifically written to hit you hard and make sure you are ready for the exam. Between the Review Questions, the Bonus Exams, and the flashcards, you should be more than prepared for the exam.
OCP: Oracle9i Database: Fundamentals II Study Guide in PDF

Sybex is now offering the Oracle certification books on CD so that you can read the book on your PC or laptop. The book appears in Adobe Acrobat format, and Acrobat Reader 5 is also included on the CD so that you can view it. This will be extremely helpful to readers who fly or commute on a bus or train and don’t want to carry a book, as well as to readers who find it more comfortable reading from their computer.
About the Authors

Doug Stuns, OCP, has been an Oracle DBA for more than a decade. He has worked for the Oracle Corporation in consulting and education roles for five years and is the founder and owner of SCS, Inc., an Oracle-based consulting company. To contact Doug, you can e-mail him at [email protected].

Matthew Weishan is an OCP and Certified Technical Trainer with more than nine years of experience with Oracle databases. He is currently a Senior Specialist for EDS in Madison, Wisconsin, working as an Oracle DBA for several large clients. He also served as an Oracle DBA instructor for several years. He has over 18 years of experience in the IT industry and has worked as a senior systems analyst, lead consultant, and lead database administrator for several Fortune 500 companies. To contact Matt, you can e-mail him at [email protected].
Assessment Test 1. What type of incomplete recovery is based on each transaction? A. Time-based B. Change-based C. Cancel-based D. Stop-based 2. What statement best describes the recovery catalog? A. A mandatory feature of RMAN B. An optional feature of RMAN that stores metadata about the
backups C. A mandatory feature of RMAN that stores metadata about the
backups D. An optional feature of RMAN 3. What files can store load data when SQL*Loader is being used?
(Choose all that apply.) A. General log files B. Input files C. Control files D. Discard log files 4. Which of the following are physical structures of the Oracle database?
(Choose all that apply.) A. Control files B. Input files C. Parameter files D. Alert logs
5. Which of the following RMAN commands would you need to execute
in order to store information in the recovery catalog and the actual data files that were backed up by OS commands? A. BACKUP DATAFILE B. BACKUP C. DATAFILE COPY D. CATALOG DATAFILECOPY 6. Which of the following is the correct way(s) to perform control file
backups? (Choose all that apply.) A. Alter the database backup control file to TRACE. B. Alter the database backup control file to ‘’. C. Alter the system backup control file to TRACE. D. Alter the system backup control file to ‘’. 7. Which of these are roles of Oracle Net in the Oracle network
architecture? (Choose all that apply.) A. Handles communications between the client and server B. Handles server-to-server communications C. Used to establish an initial connection to an Oracle server D. Acts as a messenger, which passes requests between clients and
servers E. All of the above 8. What are the roles that you must grant the RMAN schema owner of
the recovery catalog? (Choose all that apply.) A. dba B. connect C. resource D. recovery_catalog_owner
9. Which of the following are advantages of Shared Server? (Choose all
that apply.) A. Fewer server processes B. Manages more connections with the same or less memory C. Better client response time D. All of the above 10. What type of failure requires an incomplete recovery? (Choose all
that apply.) A. Any media failure involving the system tablespace B. The loss of inactive or active online redo logs C. The loss of an archived log since the last current backup D. The loss of a control file 11. Which of the following is true about dispatchers? A. They listen for client connection requests. B. They take the place of dedicated servers. C. They place client requests on a response queue. D. All of the above. 12. Which command-line option of the Export utility groups commands
together in a common file? A. config.ora B. PARFILE C. ifile D. commandfile 13. What type of incomplete recovery requires the DBA to manually stop
the recovery at a certain point? A. Cancel-based B. Time-based C. Change-based D. Sequence-based
14. What is the new parameter file that has been introduced in Oracle9i? A. spfile.ora B. init.ora C. config.ora D. ifile.ora 15. What utility can you use to verify corruption of both backup and
online data files? A. DBMS_REPAIR B. DBVERIFY C. ANALYZE D. DB_CHECKSUM 16. When you open a database with the ALTER DATABASE OPEN RESETLOGS,
you need to perform which command in RMAN to the incarnation of the database? A. REGISTER B. UNREGISTER C. RESET D. UNSET 17. What is the primary configuration file of the localnaming option? A. sqlnet.ora B. tnsnames.ora C. listener.ora D. names.ora 18. What is the name of the manual allocation channel method that
utilizes tape? A. ALLOCATE CHANNEL C1 TYPE ‘SBT_TAPE’ B. ALLOCATE CHANNEL C1 TYPE DLT_TAPE C. CONFIGURE DEFAULT DEVICE TYPE SBT_TAPE D. CONFIGURE DEFAULT DEVICE TYPE TAPE
19. What prerequisites are required to implement direct-load insert?
(Choose all that apply.) A. Parallel DML must be enabled. B. Initialization parameters must be configured for parallel query. C. The SQL*Loader utility must be configured for parallel processing. D. There must be hints in DML statements. E. Multiple SQL*Loader control files and data files must be present. 20. What does Dynamic Registration do? A. Allows a listener to automatically register with an Oracle server B. Allows an Oracle server to automatically register with a listener C. Allows clients to automatically register with an Oracle listener D. None of the above 21. An open backup can be performed when which of the following is true
about the database? (Choose all that apply.) A. It is in NOARCHIVELOG mode. B. It is in ARCHIVELOG mode. C. Tablespaces are placed in BACKUP mode with the ALTER
TABLESPACE BEGIN BACKUP command. D. The database is shut down. 22. Which of the following backup and recovery parameters perform their
operations within the LARGE_POOL memory of the SGA? (Choose all that apply.) A. DBWR_IO_SLAVES B. ASYNC_IO_SLAVES C. BACKUP_TAPE_IO_SLAVES D. SYNCH_IO_SLAVES
23. Which statement best describes incomplete recovery? A. No data whatsoever is lost. B. Data is lost after the point of failure. C. Some data is lost because the recovery is prior to the point of failure. D. Some data is lost because the data file recovery is incomplete. 24. Which of the following best describes network access control? A. It allows clients and servers using different protocols to communicate. B. It sets up rules to allow or disallow connections to Oracle servers. C. It funnels client connections into a single outgoing connection to
the Oracle server. D. None of the above. 25. Resynching the recovery catalog should be performed when you do
what to the target database? A. Undo a database resynch. B. Remove a database reset. C. Undo the most recent database resynch only. D. Make a physical change to the target database. 26. Which commands move data files in the RMAN recovery process?
(Choose all that apply.) A. SET NEWFILE B. SET RENAME C. SET NEWNAME D. SWITCH 27. The main difference between logging and tracing is A. Tracing cannot be disabled. B. Logging cannot be disabled. C. Logging records only significant events. D. Tracing records only significant events.
28. Which of the following parameters determines the number of successful
archive destinations required before the redo information can be written over? A. LOG_ARCHIVE_SUCCESS B. LOG_ARCHIVE_MIN_SUCCEED_DEST C. LOG_MIN_SUCCESS D. LOG_ARCHIVE_SUCCEED 29. What is the location of the trace file generated when the Oracle
PMON process encounters an error? A. USER_DUMP_DEST B. BACKGROUND_DUMP_DEST C. CORE_DUMP_DEST D. ARCH_DUMP_DEST 30. Which of the following parameters is used to improve the performance
of instance recovery operations? A. FAST_START_MTTR_TARGET B. FAST_START C. CHECKPOINT_INTERVAL D. CHECKPOINT 31. What utility can be used to check to see if a client can see an Oracle
listener? A. netstat B. namesctl C. tnsping D. lsnrctl E. None of the above
32. Which of the following commands generate reports from the recovery
catalog or target database control file? (Choose all that apply.) A. REPORT B. LIST C. SELECT D. PUTLINE 33. Which of the following Oracle processes is not mandatory at startup? A. SMON B. PMON C. DBWR D. ARCH 34. Which of the following is a major difference between the RESTORE
command in earlier versions of RMAN and in Oracle9i RMAN? A. Nothing has changed. B. The decision about whether files need to be restored or not. C. Only backup sets are restored. D. Only image copies are restored. 35. Which of the following is true about shared servers? A. They talk to dispatchers. B. They execute client requests. C. They talk directly to the listener. D. They talk directly to a client process. 36. Which init.ora parameter is responsible for setting multiple remote
archive locations? A. LOG_ARCHIVE_DUPLEX_DEST B. LOG_ARCHIVE_DEST_n C. LOG_DEST_ARCHIVE_n D. LOG_ARCHIVE_DEST_DUPLEX
37. Which of these is not a layer of the Oracle Net Stack? A. Two-Task Common B. Oracle Net Foundation C. Oracle Call Interface D. Application E. All of these are layers in the Oracle Net Stack 38. What special activity must be performed to execute a CROSSCHECK
command? A. ALLOCATE CHANNEL B. ALLOCATE CHANNEL FOR MAINTENANCE TYPE DISK C. AUTOMATIC CHANNEL ALLOCATION D. ALLOCATE CHANNEL FOR UPGRADE TYPE DISK 39. Which of these is not a way to resolve a net service name? A. Localnaming B. Hostnaming C. Oracle Internet Directory D. Internal Naming 40. What is the correct command syntax you need to use to execute a
script called complete_bac within the recovery catalog? A. start {execute script complete_bac;} B. RUN { EXECUTE SCRIPT complete_bac; } C. execute script complete_bac; D. run execute script complete_bac;
41. Which type of read-only tablespace recovery causes restoration
and recovery of the tablespace and associated data files? (Choose all that apply.) A. Read-only backup and read-only recovery B. Read-only backup and read-write recovery C. Read-write backup and read-only recovery with backup taken
immediately after it was made read only D. Read-write backup and read-only recovery 42. What new Oracle9i feature allows you to query old data even if the
original data has been deleted? A. Flashback Query B. Parallel query C. Fast recovery D. Undo query 43. Third-party tape hardware vendors require what aspect of RMAN to
function properly? A. Recovery catalog B. Media management library C. RMAN in GUI through Enterprise Manager D. RMAN in command line mode 44. What are the two different technical methods of exporting data?
(Choose all that apply.) A. Conventional B. User C. Full D. Direct export
45. A client is unable to connect to the PROD Oracle Server. Which of the
following client-side checks could you NOT perform from the client workstation to troubleshoot the problem? A. Check the NAMES.DIRECTORY_PATH in the sqlnet.ora file on
the client. B. Perform tnsping PROD from the client. C. Perform lsnrctl services from the client. D. Check the TNS_ADMIN Registry setting on the client. 46. Which command is responsible for allowing you to move data files to
a new location? A. ALTER DATABASE MOVE B. ALTER DATABASE RENAME C. ALTER SYSTEM MOVE D. ALTER SYSTEM RENAME 47. Which of the following best describes the function of the Oracle Net
Manager? A. It is a graphical tool used to configure critical Oracle network files. B. It is a tool used to configure the Oracle protocols. C. It is a graphical tool used to monitor Oracle connections. D. It is a tool used to troubleshoot Oracle connection problems. 48. What status determines that the tape is not available in the
CROSSCHECK comparison? A. NOT AVAILABLE B. UNAVAILABLE C. EXPIRED D. INVALID
49. What configuration file controls the listener? A. tnsnames.ora B. listener.ora C. sqlnet.ora D. names.ora 50. The RMAN repository best defines what? (Choose all that apply.) A. Recovery catalog B. Control file C. Target database D. ATL database 51. Process failures and instance failures are both what types of failure? A. Media failure B. User failure C. Non-media failure D. Statement failure 52. Which init.ora parameter configures the database for automatic
archiving? A. LOG_ARCHIVE_START=TRUE B. LOG_START_ARCHIVE=TRUE C. LOG_AUTO_ARCHIVE=TRUE D. LOG_ARCHIVE_AUTO=TRUE 53. Which command-line utility is used to start and stop the listener? A. listener B. lsnrctl C. listen D. listen_ctl
54. User-managed backup and recovery best defines which statement? A. Custom backup and recovery performed with OS commands and
database commands B. Non-automated RMAN-based backups C. A new type of backup that uses RMAN but is performed by a user D. Automated RMAN-based backup 55. What Oracle background process has the responsibility of performing
the roll forward in instance recovery? A. PMON B. SMON C. RECO D. DBWR 56. Which of the following commands would you use to make a backup
set unavailable? A. CHANGE B. MAKE C. FORCE D. EXPIRE 57. What are some of the issues of network complexity that the database
administrator should consider? (Choose all that apply.) A. How much time it will take to configure a client B. What type of work clients will be performing C. What type of protocols are being used D. The size and number of transactions that will be done E. All of the above
58. Complete recovery is best defined by which of the following statements? A. Most transactions are recovered. B. All transactions are recovered except the last archived log. C. All committed transactions are recovered. D. There is no data lost whatsoever. 59. What are the different technical methods of loading data with the
SQL*Loader utility? (Choose all that apply.) A. Direct-path load B. Conventional load C. Default-path load D. External-path load 60. What are the three primary network configurations? A. N-tier architecture B. Single-tier architecture C. Multi-tier architecture D. Two-tier architecture 61. Which of the following recoveries can be performed when the data-
base is in ARCHIVELOG mode? A. Only incomplete recovery B. Only complete recovery C. Only partial recovery D. Complete recovery and incomplete recovery 62. What is the primary purpose of using checkpoints? A. To decrease free memory buffers in the SGA B. To write non-modified database buffers to the database files and
to synchronize the physical structures of the database accordingly C. To record modified database buffers that are written to the database
files and to synchronize the physical structures of the database accordingly D. To increase free memory buffers in the SGA
63. What command would you use to retain a backup past the
retention date? A. HOLD B. RETAIN C. KEEP D. STORE 64. Which view can be used to identify clean-up issues after a failed hot
or online backup? A. V$BACKUP B. ALL_BACKUP C. USER_BACKUP D. DBA_BACKUP 65. How does Oracle Shared Server differ from a dedicated server?
(Choose all that apply.) A. Clients use dispatchers instead of dedicated connections. B. The System Global Area contains request and response queues. C. Shared server processes execute client requests. D. All of the above. 66. What is the disadvantage of the hostnaming option? A. It cannot use bequeath connections. B. It cannot use Oracle Shared Server connections. C. It cannot use client load balancing. D. All of the above.
67. Which of the following options is the RMAN BACKUP command capable
of performing? (Choose all that apply.) A. Incremental backup B. Full backup C. Image copy D. Current control file backup E. Backup set creation 68. What does IIOP stand for? A. Internet Interactive Objects Protocol B. Internet Instance Objects Protocol C. Internet Inter-Orb Protocol D. Internet Inter-Objects Protocol E. None of the above 69. What type of failure would require the DBA to issue the RECOVER
DATABASE command? A. User process B. Media failure C. Instance failure D. Statement failure 70. What mode must the database be in to run the ALTER TABLESPACE
BEGIN BACKUP command? A. NOARCHIVELOG B. startup nomount C. startup mount D. ARCHIVELOG
Answers to Assessment Test

1. B. Change-based recovery is based upon the unique system change number (SCN) assigned to each committed transaction. See Chapter 11 for more information. 2. B. The recovery catalog is an optional feature of RMAN. Though
Oracle recommends that you use it, it isn’t required. One major benefit of the recovery catalog is that it stores metadata about backups in a database that can be reported or queried. See Chapter 8 for more information. 3. C. Load data can be stored in either the data file or the control file.
The control file should store only small data loads for one-time use or test purposes. See Chapter 15 for more information. 4. A, C. Control files and parameter files make up two of the physical
structures of the Oracle database. Data files and redo logs make up the other physical structures. See Chapter 6 for more information. 5. D. The command CATALOG DATAFILECOPY backs up data files that
were copied or backed up by OS commands in user-managed backups. See Chapter 12 for more information. 6. A, B. The control file can be backed up in two ways: in ASCII format,
it can be backed up to a TRACE file, or in binary format, it can be backed up to a new location. The ALTER DATABASE BACKUP CONTROLFILE TO TRACE and ALTER DATABASE BACKUP CONTROLFILE TO ‘’ commands perform backups of the control file to ASCII format and to binary format. See Chapter 9 for more information. 7. E. Oracle Net is responsible for handling client-to-server and server-
to-server communications in an Oracle environment. It manages the flow of information in the Oracle network infrastructure. Oracle Net is used to establish the initial connection to the Oracle server and then it acts as the messenger, which passes requests from the client back to the server or between two Oracle servers. See Chapter 1 for more information.
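As a concrete sketch of the two control file backup forms in answer 6 (the binary destination path shown here is a hypothetical example):

   -- ASCII backup: writes a CREATE CONTROLFILE script to a trace file
   -- in the USER_DUMP_DEST directory
   ALTER DATABASE BACKUP CONTROLFILE TO TRACE;

   -- Binary backup: writes an exact copy of the current control file
   ALTER DATABASE BACKUP CONTROLFILE TO '/backup/control01.bkp';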
8. B, C, D. The roles that are required for the RMAN schema that owns
the recovery catalog are connect, resource, and recovery_catalog_owner. See Chapter 13 for more information. 9. A, B. Oracle Shared Server allows Oracle servers to manage a greater
number of connections utilizing the same amount or less memory and process resources. If an Oracle server is constrained by these resources, Oracle Shared Server can be an alternative configuration that can provide relief. See Chapter 4 for more information. 10. B, C. The loss of inactive or active online redo logs will require an
incomplete recovery because the backup will not have all the required logs to apply to the database. The loss of an archived log since the last current backup will also not allow a complete recovery for the same reason as a missing redo log. See Chapter 11 for more information. 11. B. Dispatchers take the place of the dedicated server processes. The
dispatchers are responsible for responding to client requests by placing the requests on a request queue (not a response queue) in the SGA; they also retrieve completed requests that were placed on a response queue by the shared server and pass them back to the client. See Chapter 4 for more information. 12. B. The PARFILE command option allows you to group export com-
mands together in a file so that you don’t have to interactively respond to the prompts when you are running the export. This also allows you to script exports more efficiently. See Chapter 14 for more information. 13. A. Cancel-based recovery requires the DBA to manually cancel
the recovery process at the command line. See Chapter 11 for more information. 14. A. The spfile.ora is new for Oracle9i. This is the binary initializa-
tion file that is the default when Oracle is started. This file contains persistent parameters. The init.ora file is searched only if there isn’t a spfile.ora initialization file. See Chapter 6 for more information. 15. B. The DBVERIFY utility can verify both online data files and copies
of online data files. See Chapter 9 for more information.
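As a sketch of the grants described in answer 8, run from a privileged SQL*Plus session; the username, password, and tablespace names are hypothetical:

   CREATE USER rman_owner IDENTIFIED BY rman_pass
     DEFAULT TABLESPACE rcat_ts QUOTA UNLIMITED ON rcat_ts;
   GRANT connect, resource, recovery_catalog_owner TO rman_owner;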
16. C. The RESET command must be used on the incarnation of the data-
base within the recovery catalog if the target database has been opened with ALTER DATABASE OPEN RESETLOGS. See Chapter 13 for more information. 17. B. The main characteristic of the localnaming method is that it uses
the tnsnames.ora file to resolve service names. In fact, this method is sometimes called the tnsnames.ora method. The file contains information about the service name and connect descriptors for each service name that a client can contact. See Chapter 3 for more information. 18. A. CONFIGURE DEFAULT DEVICE settings configure the allocation
channel automatically. ALLOCATE CHANNEL TYPE methods are used to manually configure channels. The type ‘SBT_TAPE’ configures the manual channel for tape. See Chapter 8 for more information. 19. A, B, D. The database must be configured for parallel query or it
must have the appropriate initialization parameters, such as PARALLEL_MIN_SERVERS and PARALLEL_MAX_SERVERS, set up. The session must be enabled to run parallel DML. And the appropriate hints must be entered in the DML statements to allow direct-load insert. See Chapter 15 for more information. 20. B. Dynamic Registration allows an Oracle server to automatically
register with a listener. This reduces the amount of maintenance work the DBA has to do to maintain the listener.ora file in a localnaming environment. See Chapter 2 for more information. 21. B, C. An open backup is also called a hot backup and it can be per-
formed when the database is in ARCHIVELOG mode by executing the ALTER TABLESPACE BEGIN BACKUP command. See Chapter 9 for more information. 22. A, C. The DBWR_IO_SLAVES and BACKUP_TAPE_IO_SLAVES are initial-
ization parameters that can improve the performance of backup and recovery operations. These parameters use the LARGE_POOL memory to perform their operations. See Chapter 6 for more information.
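Putting answer 21 into practice, here is a minimal open (hot) backup sketch for a single tablespace, assuming the database is already in ARCHIVELOG mode; the tablespace name and paths are hypothetical:

   ALTER TABLESPACE users BEGIN BACKUP;
   -- copy the data files at the OS level while the tablespace is in
   -- BACKUP mode, e.g. cp /u01/oradata/users01.dbf /backup/
   ALTER TABLESPACE users END BACKUP;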
23. C. The statement that accurately describes incomplete recovery is
“Some data is lost because the recovery is prior to the point of failure.” See Chapter 11 for more information. 24. B. Client access control is a feature of Connection Manager that
makes Connection Manager function in a manner similar to that of a firewall. Connections can be accepted or rejected on the basis of the client location, the destination server, and the Oracle service that the client is attempting to connect to. This gives the DBA the flexibility needed to configure access control to the Oracle environment. See Chapter 1 for more information. 25. D. The resynch command should be used when you make physical
changes to the target database, such as adding new data files or control files. See Chapter 13 for more information. 26. C, D. The SET NEWNAME and SWITCH commands work together to
restore RMAN backups to new locations. See Chapter 10 for more information. 27. C. Logging records significant events, such as starting and stopping
the listener, along with certain kinds of network errors. Tracing records all events that occur, even when an error does not happen. The trace file provides a great deal of information that logs do not. See Chapter 2 for more information. 28. B. The LOG_ARCHIVE_MIN_SUCCEED_DEST parameter determines the
number of successful archive destinations required before the redo logs can be overwritten. See Chapter 7 for more information. 29. B. The Oracle PMON process is a background process. All trace files
generated by background processes go into the BACKGROUND_DUMP_DEST location. See Chapter 12 for more information. 30. A. The FAST_START_MTTR_TARGET parameter sets the target number
of seconds for instance recovery. This parameter is an integer value between 0 and 3600. See Chapter 6 for more information.
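Both parameters from answers 28 and 30 can be set dynamically; a minimal sketch with illustrative values:

   -- require two archive destinations to succeed before a redo log
   -- group can be overwritten
   ALTER SYSTEM SET log_archive_min_succeed_dest = 2;

   -- target instance recovery time of five minutes (value in seconds)
   ALTER SYSTEM SET fast_start_mttr_target = 300;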
31. C. The tnsping utility can be used to check whether a client can
contact a listener. The command format is tnsping <net_service_name> <count>. For example, tnsping DBA 3 would attempt to contact the DBA database three times. This utility also provides information on how long it takes to contact the listener. See Chapter 3 for more information. 32. A, B. The REPORT and LIST commands generate report outputs in
the RMAN utility. See Chapter 13 for more information. 33. D. The ARCH or ARCn process is not a mandatory process. Archive
logging can be enabled and disabled. See Chapter 6 for more information. 34. B. In Oracle9i, the RESTORE command now makes the decision of
whether or not files need to be restored. In earlier versions of RMAN, files were restored upon request even if it was unnecessary. See Chapter 10 for more information. 35. B. The shared server processes are responsible for executing the client
requests. They retrieve the requests from a request queue and place the completed request in the appropriate dispatcher response queue. See Chapter 4 for more information. 36. B. LOG_ARCHIVE_DEST_n (where n is an integer) is responsible for
multiple remote archive locations. LOG_ARCHIVE_DUPLEX_DEST is also capable of multiple destinations but not ones that are remote. See Chapter 7 for more information. 37. E. All of these are part of the Oracle Net Stack. The stack consists of
Application, OCI, Two-Task Common, Oracle Net Foundation, Oracle Protocol Adapters, and Network Protocol. See Chapter 1 for more information. 38. B. The CROSSCHECK command requires the use of the ALLOCATE
CHANNEL FOR MAINTENANCE TYPE DISK or SBT_TAPE to perform comparison activities on the disk/tape media and the recovery catalog contents. See Chapter 12 for more information.
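Following answer 38's prescription, a minimal RMAN sketch that allocates the maintenance channel and then runs the comparison:

   RMAN> ALLOCATE CHANNEL FOR MAINTENANCE TYPE DISK;
   RMAN> CROSSCHECK BACKUP;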
39. D. Internal Naming is not one of the methods used to resolve a net
service name, but localnaming, hostnaming, and Oracle Internet Directory are. See Chapter 3 for more information. 40. B. The correct syntax to execute the script is RUN { EXECUTE SCRIPT
<script_name>; }. For more information, see Chapter 13. 41. B, D. Choice B, read-only backup and read-write recovery, will
require the restoration and recovery of the data files because changes have been made to the database since the backup. Choice D will also require restoration of the data file and recovery up to the point when the tablespace was made read-only. In choice A, no changes are made because the tablespace is read-only throughout. Choice C doesn’t require restoration and recovery because the backup of the database was taken immediately after the tablespace was made read-only. See Chapter 10 for more information. 42. A. Flashback Query allows you to query old deleted data by rebuilding
the necessary data elements in the undo tablespaces. See Chapter 5 for more information. 43. B. The media management library (MML), or Media Management
Layer, is a third-party vendor library, which is linked in with the Oracle kernel so that the server session generated by RMAN interfaces with the third-party vendor’s hardware. See Chapter 8 for more information. 44. A, D. The two methods of exporting data are conventional and direct
export. Conventional is the default method, which uses the standard SQL command processing, and direct export bypasses certain aspects of the SQL evaluation layer to improve performance. See Chapter 14 for more information. 45. C. The listener would not be running on the client. This would be a
server-side check that would be performed. See Chapter 2 for more information.
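Answer 40's stored-script syntax, issued from an RMAN session that is connected to the recovery catalog:

   RMAN> RUN { EXECUTE SCRIPT complete_bac; }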
46. B. The ALTER DATABASE RENAME FILE ‘’ TO ‘’ command allows you to move a data file to a new location. Remember that OS commands, such as cp in Unix, are necessary to copy the file to the new location. The ALTER DATABASE RENAME command just updates the control file and data dictionary. See Chapter 10 for more information. 47. A. The Oracle Net Manager is a graphical tool that provides a way
to configure most of the critical network files for the Oracle server. See Chapter 2 for more information. 48. C. The backup sets that are not on the media disk/tape but are in the
recovery catalog return a status of EXPIRED. See Chapter 12 for more information. 49. B. The listener.ora file contains the configuration information for
the listener. This file contains information about the listening locations, the service names that the listener is listening for, and a section for optional listener parameters, such as logging and tracing parameters. There should be only one listener.ora file on a machine. If multiple listeners are used, each listener should have its own entry in the listener.ora file. See Chapter 2 for more information. 50. A, B. The RMAN repository is the set of metadata that RMAN keeps
about its backups. When a recovery catalog is used, the repository is stored in the catalog database; otherwise, it is stored in the target database’s control file. See Chapter 8 for more information. 51. C. Process failures and instance failures are both types of non-media
failure. These types of failure are usually less critical. See Chapter 5 for more information. 52. A. The correct parameter is LOG_ARCHIVE_START=TRUE. See Chapter 7
for more information. 53. B. The lsnrctl command-line utility is used to start and stop the
listener. You can also use this utility to get information about the status of the listener and make modifications to the listener.ora file. See Chapter 2 for more information.
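Putting answer 46 into practice, a minimal sketch of moving a data file; the tablespace name and file paths are hypothetical, and the copy itself happens at the OS level:

   ALTER TABLESPACE users OFFLINE;
   -- at the OS level: cp /u01/oradata/users01.dbf /u02/oradata/users01.dbf
   ALTER DATABASE RENAME FILE '/u01/oradata/users01.dbf'
     TO '/u02/oradata/users01.dbf';
   ALTER TABLESPACE users ONLINE;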
54. A. User-managed backup is the term used to describe the standard
backups that have been used from the inception of Oracle. These backups are usually custom written through the use of OS and database commands. See Chapter 9 for more information. 55. B. The system monitor (SMON) process is responsible for applying
all of the committed or uncommitted changes in the online redo logs. See Chapter 6 for more information. 56. A. The CHANGE command makes the backup set either available or
unavailable in the recovery catalog. See Chapter 12 for more information. 57. B, C, D. The DBA needs to consider such items as the number of
clients the network will need to support, the type of work the clients will be doing, the locations of the clients in the network, and the size of transactions that will be done in the network. See Chapter 1 for more information. 58. D. Complete recovery means that all transactions are recovered. No
data is lost and none must be reentered when the database is recovered. See Chapter 10 for more information. 59. A, B, D. The conventional load is the default load that performs normal
SQL command processing. The direct-path load performs an expedited processing that bypasses the buffer and writes directly to data files. The external-path load is a load used for processing external files. See Chapter 15 for more information. 60. A, B, D. The three primary network configurations are single-tier,
two-tier, and n-tier architecture. Single-tier was the predominant architecture for many years when the mainframe dominated the corporate environment. Two-tier architecture came into vogue with the introduction of the PC and has been a dominant architecture ever since. With the inception of the Internet, more organizations are turning towards n-tier architecture as a means to leverage many computers and enhance flexibility and performance of their applications. See Chapter 1 for more information. 61. D. When the database is in ARCHIVELOG mode, both complete
and incomplete recovery can be performed. See Chapter 7 for more information.
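Answer 56's CHANGE command in action; the backup set key shown (101) is a hypothetical value that you would look up with LIST BACKUP:

   RMAN> CHANGE BACKUPSET 101 UNAVAILABLE;
   RMAN> CHANGE BACKUPSET 101 AVAILABLE;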
62. C. The main purpose of the database checkpoint is to record that the
modified buffers have been written to the data files and to establish data consistency, which enables faster recovery in the event of a failure. See Chapter 6 for more information. 63. C. The KEEP command causes a backup to be kept past the retention
setting in the database. See Chapter 12 for more information. 64. A. The V$BACKUP view can be used to identify whether a database is
actively being backed up or not. See Chapter 9 for more information. 65. D. Oracle Shared Server uses a shared model. Clients share processes
called dispatchers that handle their requests. Clients also share processes called shared servers that execute their requests. The sharing is done through modifications to the SGA. See Chapter 4 for more information. 66. C. The disadvantage is that certain functionality, such as client load
balancing and failover, is not available when you use the hostnaming method. See Chapter 3 for more information. 67. A, B, D, E. The RMAN BACKUP command is capable of performing
all of the options with the exception of creating image copies. Image copies are created by the RMAN COPY command. See Chapter 9 for more information. 68. C. The Internet Inter-Orb Protocol is supported by Oracle Net to
allow for support of Enterprise JavaBeans and CORBA. See Chapter 2 for more information. 69. B. A media failure would most likely cause the DBA to get actively
involved in the recovery of the database by entering recovery commands if this was a user-managed recovery. The other failures mentioned are usually handled by Oracle automatically. See Chapter 5 for more information. 70. D. The database must be in ARCHIVELOG mode so that the tablespaces
can be backed up online. See Chapter 7 for more information.
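Several of these answers assume ARCHIVELOG mode; as a reminder, this minimal SQL*Plus sketch switches a database into it:

   SHUTDOWN IMMEDIATE
   STARTUP MOUNT
   ALTER DATABASE ARCHIVELOG;
   ALTER DATABASE OPEN;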
Introduction to Network Administration

ORACLE9i: DBA FUNDAMENTALS II EXAM OBJECTIVES COVERED IN THIS CHAPTER:

Explain solutions included with Oracle9i for managing complex networks.

Describe Oracle networking add-on solutions.

Explain the key components of the Oracle Net layered architecture.

Explain Oracle Net Services’ role in client/server connections.

Describe how web client connections are established through Oracle networking products.
Exam objectives are subject to change at any time without prior notice and at Oracle’s sole discretion. Please visit Oracle’s Certification website (http://www.oracle.com/education/certification/) for the most current exam objectives listing.
Networks have evolved from simple terminal-based systems to complex multitiered systems. Modern networks can comprise many computers on multiple operating systems using a wide variety of protocols and communicating across wide geographic areas. One need look no further than the explosion of the Internet to see how networking has matured and what a profound impact networks are having on the way we work and communicate with one another. While networks have become increasingly complex, they also have become easier to use and manage. For instance, we all take advantage of the Internet without knowing or caring about the components that make this communication possible because the complexity of this huge network is completely hidden from us. The experienced Oracle database administrator has seen this maturation process in the Oracle network architecture as well. From the first version of SQL*Net to the latest releases of Oracle Net, Oracle has evolved its network strategy and infrastructure to meet the demands of the rapidly changing landscape of network communications. This chapter highlights the areas that database administrators (DBAs) need to consider when implementing an Oracle network strategy. It also looks at the responsibilities the database administrator has when managing an Oracle network. The chapter then explores the most common types of network configurations and introduces the features of Oracle Net—the connectivity management software that is the backbone of the Oracle network architecture. It will also explore the Oracle network architecture and summarize the Oracle network infrastructure.
There are many factors involved in making network design decisions. First and foremost is the design of the Oracle network architecture itself. It is flexible and configurable, and it has the scalability to accommodate a range of network sizes. Also, when you are working with an Oracle network, there are a variety of network configurations to choose from. The sections that follow summarize the areas that the DBA needs to consider when designing the Oracle network infrastructure.
Network Complexity Issues The complexity of the network plays an important role in many of your network design decisions. Consider the following questions to determine network complexity:
How many clients will the network need to support?
What type of work will the clients be doing?
What are the locations of the clients? In complex networks, clients may be geographically dispersed over a wide area.
What types of clients are going to be supported? Will these be PC-based clients or terminal-based clients? Will these be thin clients that will do little processing or fat clients that will do the majority of the application processing?
What is the projected growth of the network?
Where will the processing take place? Will there be any middle-tier servers involved, such as an application server or transaction server?
What types of network protocols will be used to communicate between the clients and servers?
Will Oracle servers have to communicate with other Oracle servers in the enterprise?
Will the network involve multiple operating systems?
Are there any special networking requirements for the applications that will be used? This is especially important to consider when you are dealing with third-party applications.
Network Security Issues Network security has become even more critical as companies expose their systems to larger and larger numbers of users through internets and intranets. Consider the following questions to determine the security of a network:
Does the organization have any special requirements for secure network connections? What kinds of information will be sent across the Oracle network?
Can you ensure secure connections across a network without risk of information tampering? This may involve sending the data in a format that makes it tamperproof and also ensures that the data cannot be captured and read by parties other than the client and the intended Oracle server.
Is there a need to centralize the authorizations an individual has to each of the Oracle servers? In large organizations with many Oracle services, this can be a management and administration issue.
Interfacing Existing Systems with New Systems The following issues should be considered when existing computer systems must communicate with Oracle server networks:
Does the application that needs to perform the communication require a seamless, real-time interface?
Does the existing system use a non-Oracle database such as DB2 or Sybase?
Will information be transferred from the existing system to the Oracle server on a periodic basis? If so, what is the frequency and what transport mechanisms should be used? Will the Oracle server need to send information back to the existing system?
Do applications need to gather data from multiple sources, including Oracle and non-Oracle databases, simultaneously?
What are the applications involved that require this interface?
Will these network requirements necessitate design changes to existing systems?
The database administrator has many design issues to consider and plays an important role when implementing a network of Oracle servers in the enterprise. Here are some of the key responsibilities of the DBA in the Oracle network implementation process:
Understand the network configuration options available and know which options should be used based on the requirements of the organization.
Understand the underlying network architecture of the organization in order to make informed design decisions.
Work closely with the network engineers to ensure consistent and reliable connections to the Oracle servers.
Understand the tools available for configuring and managing the network.
Troubleshoot connection problems on the client, middle tier, and server.
Ensure secure connections and use the available network configurations, when necessary, to attain higher degrees of security for sensitive data transmissions.
Stay abreast of trends in the industry and changes to the Oracle architecture that may have an impact on network design decisions.
Network Configurations
There are three basic types of network configurations to select from when you are designing an Oracle infrastructure. The simplest type is the single-tier architecture. This has been around for years and is characterized by the use of terminals for serial connections to the Oracle server. The other types of network configurations are the two-tier, or client/server, architecture and the most recently introduced n-tier architecture. Let’s take a look at each of these configuration alternatives.
Single-Tier Architecture Single-tier architecture was the standard for many years before the birth of the PC. Applications utilizing single-tier architecture are sometimes referred to as green-screen applications because most of the terminals using them, such as the IBM 3270 terminal, have green screens. Single-tier architecture is commonly associated with mainframe-type applications. This architecture is still in use today for many mission-critical applications, such as Order Processing and Fulfillment and Inventory Control, because it is the simplest architecture to configure and administer. Because the terminals are directly connected to the host computer, the complexities of network protocols and multiple operating systems don’t exist. When a single-tier architecture is being used, users interact with the database using terminals. These terminals are non-graphical, character-based devices. Figure 1.1 shows an example of the single-tier architecture. In this type of architecture, client terminals are directly connected to larger server systems such as mainframes. All of the intelligence exists on the mainframe, and all processing takes place there. Simple serial connections also exist on the mainframe. Although no complex network architecture is necessary, a single-tier architecture is somewhat limiting in terms of scalability and flexibility. Because all of the processing must take place on the server, the server can become the bottleneck to increasing performance.

FIGURE 1.1 Single-tier architecture: a dumb terminal connected to a mainframe over a direct connection
Two-Tier Architecture Two-tier architecture gained popularity with the introduction of the PC and is commonly referred to as client/server computing. In a two-tier environment, clients connect to servers over a network using a network protocol, which is the agreed-upon method for the client to communicate with the
server. TCP/IP is a very popular network protocol and has become the de facto standard of network computing. Whether TCP/IP or some other network protocol is chosen, both the client and the server must be able to understand the chosen protocol. Figure 1.2 shows an example of a two-tier architecture.

FIGURE 1.2 Two-tier architecture: an intelligent client (PC) connected to a server over a network connection utilizing a protocol such as TCP/IP
This architecture has definite benefits over single-tier architecture. First of all, client/server computing introduces the graphical user interface; this interface is easier to understand and learn, and it offers more flexibility than the traditional character-based interfaces of the single-tier architecture. Also, two-tier architecture allows the client computer to share the application processing load. To a certain degree, this reduces the processing requirements of the server. The two-tier architecture does have some faults, even though at one time it was thought to be a panacea for all networking architectures. Unfortunately, the main problem, that of scalability, persists. Notice that the term client/server computing contains a slash (/). The slash represents the invisible component of the two-tier architecture and the one that is often overlooked: the network! When prototyping projects, many developers fail to consider the network component and soon find out that what worked well in a small environment may not scale effectively to larger, more complex systems. There was a great deal of redundancy in the two-tier architecture model because application software was required on every desktop. As a result of this scenario, many companies ended up with bloated PCs and large servers that still did not provide adequate performance. What is needed is a more scalable model for network communications. That is what n-tier architecture provides.
N-Tier Architecture N-tier architecture is the next logical step after two-tier architecture. Instead of dividing application processing work between a client and a server, you divide the work up among three or more machines. The n-tier architecture introduces middleware components, one or more computers that are situated between the client and the Oracle server, which can be used for a variety of tasks. Some of those tasks include the following:
Moving data between machines that work with different network protocols.
Serving as firewalls that can control client access to the servers.
Offloading processing of the business logic from the clients and servers to the middle tier.
Executing transactions and monitoring activity between clients and servers to balance the load among multiple servers.
Acting as a gateway to bridge existing systems to new systems.
The Internet provides the ultimate n-tier architecture with the user’s browser providing a consistent presentation interface. This common interface means less training of staff and also increases the potential reuse of client-side application components. N-tier architecture makes it possible to take advantage of technologies such as networked computers. Such computers can make for economical, low-maintenance alternatives to the personal computer. Because much of the application processing can be done by application servers, the client computing requirements for these networked computers are greatly reduced. In addition, the processing of transactions can also be offloaded to transaction servers, which reduces the burden on the database servers. The n-tier model is very scalable and divides the tasks of presentation, business logic and routing, and database processing among many machines, which means that this model accommodates large applications. In addition, the reduction of processing load on the database servers means that the servers can do more work with the same amount of resources. Also, the transaction servers can balance the flow of network transactions intelligently, and application servers can reduce the processing and memory requirements of the client (see Figure 1.3).
Oracle Net is the glue that bonds the Oracle network together. It is responsible for handling client-to-server and server-to-server communications, and it can be configured on the client, on middle-tier application and web servers, and on the Oracle server. Oracle Net also manages the flow of information in the Oracle network infrastructure. First, it is used to establish the initial connection to the Oracle server, and then it acts as the messenger, passing requests from the client back to the server or passing them between two Oracle servers. Basically, Oracle Net handles all negotiations between the client and server during the client connection. In the section entitled “The Oracle Net Stack Architecture” later in this chapter, we discuss the architectural design of Oracle Net. In addition to functioning as an information manager, Oracle Net supports the use of middleware products such as Oracle9i Application Server (Oracle9iAS) and Oracle Connection Manager. These products allow n-tier architectures to be used in the enterprise, which increases the flexibility and performance of application designs. To learn more about these products and some of the features of Oracle Net, read the following sections, which mirror the five categories of networking solutions that Oracle Net provides: Connectivity, Directory Services, Scalability, Security, and Accessibility.
Connectivity: Multi-Protocol Support Oracle Net supports a wide range of industry-standard protocols including TCP/IP, IBM LU6.2, Named Pipes, and DECnet. (Unlike its predecessor Net8, Oracle Net no longer supports the Novell IPX/SPX protocol.) This support is handled transparently and allows Oracle Net to establish connectivity to a wide range of computers and a wide range of operating environments.
Oracle Net now adds support for a new protocol designed for System Area Networks (SANs) that are used in clustered environments. (SANs are special configurations of hardware that are used for situations in which multiple servers need high-speed communications between them.) The new Virtual Interface (VI) protocol is lightweight and works with a specific hardware configuration to relieve network activity responsibility from the CPUs and place it on special network adapters. See the Oracle9i Net Services Administrator’s Guide (Part No. A90154-01) for details on the use, configuration, and restrictions on the VI protocol. This guide may be obtained from the Oracle Technology Network website at technet.oracle.com. At this website, you will find all of the Oracle9i documentation in either Adobe Acrobat format or HTML format.
Connectivity: Multiple Operating Systems Oracle Net can operate on many different operating system platforms, from Windows NT/2000, to all variants of Unix, to large mainframe-based operating systems. This range allows users to bridge existing systems to other Unix or PC-based systems, which increases the data access flexibility of the organization without making wholesale changes to the existing systems.
Connectivity: Java and Internet With the introduction of Oracle8i, Oracle enabled connectivity to Oracle servers from applications using Java components such as Enterprise JavaBeans and Common Object Request Broker Architecture (CORBA), which is a standard for defining object interaction across a network. Oracle Net continues this trend by supporting standard connectivity solutions such as the Internet Inter-ORB Protocol (IIOP) and the General Inter-ORB Protocol (GIOP). These features allow clients to connect to applications interfacing with an Oracle database via a web browser. By utilizing features such as Secure Sockets Layer (SSL), client connections can obtain a greater degree of security across the Internet.
Directory Services: Directory Naming Directory Naming allows for network names to be resolved through a centralized naming repository. The central repository takes the form of a Lightweight Directory Access Protocol (LDAP)–compliant server. LDAP is a
protocol and language that defines a standard method of storage, identification, and retrieval of services. It provides a simplified way to manage directories of information, whether this information is about users in an organization or Oracle instances connected to a network. The LDAP server allows for a standard form of managing and resolving names in an Oracle environment. These services excel because LDAP provides a single, industry-standard interface to a directory service such as Oracle Internet Directory (OID). By utilizing Oracle Internet Directory, you ensure the security and reliability of the directory information because it is stored in the Oracle database.
As of Oracle9i, Directory Naming has become the preferred method of centralized naming within an Oracle environment, replacing the Oracle Names Server. The Oracle Names Server can still be utilized in Oracle8i and earlier versions, however. The Oracle Names Server can also still be configured as a proxy to an LDAP-compliant Names Directory Service to ease the migration from Oracle Names to Directory Naming.
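To give a feel for how a client is pointed at Directory Naming, an ldap.ora file of roughly the following shape is used; the host, ports, and administrative context shown here are hypothetical:

   DIRECTORY_SERVERS = (oid1.example.com:389:636)
   DEFAULT_ADMIN_CONTEXT = "dc=example,dc=com"
   DIRECTORY_SERVER_TYPE = OID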
Directory Services: Oracle Internet Directory The Oracle Internet Directory (OID) is an LDAP 3–compliant directory service, which provides the repository and infrastructure needed to enable a centralized naming solution using Directory Naming. OID can be used with both Oracle8i and 9i databases. In Oracle9i, the OID runs as an application. The OID service can be run on a remote server and it can communicate with the Oracle server using Oracle Net. The OID is a scalable architecture, and it provides mechanisms for replicating service information among other Oracle servers. OID also provides security in a number of ways. First of all, it can be integrated into a Secure Sockets Layer (SSL) environment to ensure user authentication. Also, an administrator can maintain policies that grant or deny access to services. These policies are defined for entities within the Oracle Internet Directory tree structure.
Scalability: Oracle Shared Server Oracle Shared Server (formerly known as Multithreaded Server) is an optional configuration of the Oracle server that allows support for a larger number of concurrent connections without increasing physical resource requirements. This is accomplished by sharing resources among groups of users.
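As a minimal sketch of enabling Shared Server, the two key Oracle9i initialization parameters (which replace the older MTS_* names) can be set dynamically; the counts shown are illustrative:

   ALTER SYSTEM SET dispatchers = '(PROTOCOL=TCP)(DISPATCHERS=3)';
   ALTER SYSTEM SET shared_servers = 5;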
Scalability: Connection Manager Oracle Connection Manager is a middleware solution that provides three additional scalability features:

Multiplexing Connection Manager can group together many client connections and send them as a single multiplexed network connection to the Oracle server. This reduces the total number of network connections the server has to manage.

Network access Connection Manager can be configured with rules that restrict access by IP address. This rules-based configuration can be set up to accept or reject client connection requests. Also, connections can be restricted by point of origin, destination server, or Oracle service.

Cross-protocol connectivity This feature allows clients and servers that use different network protocols to communicate. Connection Manager acts as a translator, providing two-way protocol conversion.

Oracle Connection Manager is controlled by a set of background processes that manage the communications between clients and servers. This option is not configured using the graphical Oracle Net Manager tool. Figure 1.4 provides an overview of the Connection Manager architecture.

FIGURE 1.4 Connection Manager architecture: many simultaneous client connections arrive at Oracle Connection Manager (the CMGW and CMADMIN processes), which forwards them as one Shared Server connection carrying all of the client requests to the Oracle server running Shared Server
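To illustrate the network access feature described above, a cman.ora rules fragment of roughly this shape accepts one client and rejects all others; the addresses and service name are hypothetical, the exact file syntax varies by release, and x is the Connection Manager wildcard:

   CMAN_RULES =
     (RULE_LIST =
       (RULE = (SRC = 192.168.1.10) (DST = dbhost) (SRV = PROD) (ACT = accept))
       (RULE = (SRC = x) (DST = x) (SRV = x) (ACT = reject)))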
Security: Advanced Security The threat of data tampering is becoming an issue of increasing concern to many organizations as network systems continue to grow in number and complexity and as users gain increasing access to systems. Sensitive business transactions are being conducted with greater frequency and, in many cases, are not protected from unauthorized tampering or message interception. Oracle Advanced Security, formerly known as the Advanced Security Option and the Advanced Networking Option, not only provides the tools necessary to ensure secure transmissions of sensitive information, but it also provides mechanisms to confidently identify and authenticate users in the Oracle enterprise. When configured on the client and the Oracle server, Oracle Advanced Security supports secured data transactions by encrypting and optionally checksumming the transmission of information that is sent in a transaction. Oracle supports encryption and checksumming by taking advantage of industry-standard algorithms, such as RSA RC4, Standard DES and Triple DES, and MD5 checksumming. These security features ensure that data transmitted from the client has not been altered during transmission to the Oracle server. Oracle Advanced Security also gives the database administrator the ability to authenticate users connecting to the Oracle servers. In fact, there are a number of authentication features for ensuring that users are really who they claim to be. These are offered in the form of token cards, which use a physical card and a user-identifying PIN to gain access to the system; the biometrics option, which uses fingerprint technology to authenticate user connection requests; Public Key; and certificate-based authentication. Another feature of Oracle Advanced Security is the ability to have a single sign-on mechanism for clients. Single sign-on is accomplished with a centralized security server that allows the user to connect to any of the Oracle services in the enterprise using a single user ID and password. Oracle leverages the industry-standard features of Kerberos to enable these capabilities. (Kerberos is an authentication mechanism based on the sharing of secrets between two systems.) This greatly simplifies the privilege matrix that administrators must manage when they are dealing with large numbers of users and systems.
Security: Firewall Support Firewalls have become an important security mechanism in corporate networks. Firewalls are generally a combination of hardware and software that
are used to control network traffic and prevent intruders from compromising corporate network security. Firewalls fall into two broad categories: IP-filtering firewalls IP-filtering firewalls monitor the network packet traffic on IP networks and filter out packets that either originated or did not originate from specific groups of machines. The information contained in the IP packet header is interrogated to obtain this information. Vendors of this type of firewall include Network Associates and Axent Communications. Proxy-based firewalls Proxy-based firewalls prevent information from outside the firewall from flowing directly into the corporate network. Instead, the firewall acts as a gatekeeper, inspecting packets and sending only the appropriate information through to the corporate network. This prevents any direct communication between clients outside the firewall and applications inside the firewall. Check Point Software Technologies and Cisco are examples of vendors that market proxy-based firewalls. Oracle works closely with the vendors of both types of product to ensure support of database traffic through these types of mechanism. Oracle supplies the Oracle Net Application Proxy Kit to the vendors of firewalls. This product can be incorporated into the firewall architecture to allow database packets to pass through the firewall and still maintain a high degree of security.
Know Thy Firewall It is important to understand your network infrastructure, the network routes you are using to obtain database connections, and the type of firewall products you are using. I have had more than one situation in which firewalls have caused connectivity issues between a client and an Oracle server. For instance, I remember what happened after a small patch was applied to a firewall when I was working as a DBA for one of my former employers. In this case, employees started experiencing intermittent disconnects from the Oracle database. It took many days of investigation and network tracing before we pinned down the exact problem. When we did, we contacted the firewall vendor and they sent us a new patch to apply that corrected the problem.
More recently, when I was working as a DBA for a large corporate client, the development staff started experiencing a similar problem. It turns out that the networking routes for the development staff had been modified to have connections routed through a new firewall. This firewall was configured to have a connection timeout after 20 minutes of inactivity, which was too short an amount of time for this department. As a result, we increased the timeout parameter to accommodate the development staff’s needs. These are examples of the types of network changes that a DBA needs to be aware of to avoid unnecessary downtime and to avoid wasting staff time and resources.
Accessibility: Heterogeneous Services Heterogeneous Services provide the ability to communicate with non-Oracle databases and services. These services allow organizations to leverage and interact with their existing data stores without having to necessarily move the data to an Oracle server. The suite of Heterogeneous Services is comprised of the Oracle Transparent Gateway and Generic Connectivity. These products allow Oracle to communicate with non-Oracle data sources in a seamless configuration. Heterogeneous Services also integrate existing systems with the Oracle environment, which allows you to leverage your investment in those systems. These services also allow for two-way communication and replication from Oracle data sources to non-Oracle data sources. Transparent Gateway The Transparent Gateway product seamlessly extends the reach of Oracle to non-Oracle data stores, which allows you to treat non-Oracle data sources as if they were part of the Oracle environment. In fact, the user is not even aware that the data being accessed is coming from a non-Oracle source. This can significantly reduce the time and investment necessary to transition from existing systems to the Oracle environment. Transparent Gateway fully supports SQL and the Oracle transaction control features, and it currently supports access to more than 30 non-Oracle data sources. Generic Connectivity Generic Connectivity provides a set of agents, which contain basic connectivity capabilities. It also provides a foundation so that you can custom build connectivity solutions using standard
OLE DB, Microsoft’s interface to data access. OLE DB requires an Open Database Connectivity (ODBC) driver to interface to the agents. You can also use ODBC as a stand-alone connection solution. For example, with the proper Oracle ODBC driver, you can access an Oracle database from programs such as Microsoft Excel. (These drivers can be obtained from Oracle or third-party vendors.) Because these drivers are generic in nature, they do not provide as robust an interface to external services as does the Transparent Gateway.
Accessibility: External Procedures In some development efforts, it may be necessary to interface with procedures that reside outside of the database. These procedures are typically written in a third-generation language, such as C. Oracle Net provides the ability to invoke such external procedures from Oracle PL/SQL callouts. When a call is made, a process will be started that acts as an interface between Oracle and the external procedure. This callout process defaults to the name extproc. The listener is then responsible for supplying information, such as a library or procedure name and any parameters, to the called procedure. These programs are then loaded and executed under the control of the extproc process.
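As a sketch of the two pieces involved in an external procedure call, the listener gets a SID entry whose PROGRAM is extproc, and PL/SQL maps a library name to the OS shared library; all names and paths here are hypothetical:

   # listener.ora fragment: a SID entry that launches extproc
   SID_LIST_LISTENER =
     (SID_LIST =
       (SID_DESC =
         (SID_NAME = PLSExtProc)
         (ORACLE_HOME = /u01/app/oracle/product/9.2.0)
         (PROGRAM = extproc)))

   -- PL/SQL library mapped to an OS shared library
   CREATE OR REPLACE LIBRARY math_lib AS '/u01/app/oracle/libs/math.so';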
The Oracle Net Stack Architecture
The Oracle Net software is composed of a series of programs that form a type of stack architecture. Each of these programs is responsible for handling various aspects of network communications, and each functions as a layer of the stack. This section discusses the architecture of the Oracle Net stack and defines the responsibilities of each portion. To successfully complete the OCP exam, you need to understand the structure and responsibilities of the Oracle Net stack. The structure and function of the Oracle Net stack are based on the Open Systems Interconnection (OSI) model.
The OSI Model

The Open Systems Interconnection (OSI) model is a widely accepted model that defines how data communications are carried out across a network.
There are seven layers in the OSI model, and each layer is responsible for some aspect of network communication. The upper layers of the model handle responsibilities such as communicating with the application and presenting data. The lower layers are responsible for transporting data across the network. The upper layers pass information, such as the destination of the data and how the data should be handled, to the lower layers. The lower layers communicate status information back to the upper layers. Table 1.1 shows the layers of the OSI model and the responsibilities each has in order for communications across a network to be executed. As you can see from this table, this layered approach allows for a separation of responsibilities. It also allows for the separation of the logical aspects of network communications, such as presentation and data management, from the physical aspects of communications, such as the physical transmission of bits across a network. TABLE 1.1
The Layers of the OSI Model

OSI Model Layer      Responsibilities

Application Layer    Interacts with the application. Accepts commands and returns data.

Presentation Layer   Settles data differences between client and server. Also responsible for data format.

Session Layer        Manages network traffic flow. Determines whether data is being sent or received.

Transport Layer      Handles interaction of the network processes on the source and destination. Error correction and detection occurs here.

Network Layer        Delivers data between nodes.

Data Link Layer      Maintains connection reliability and retransmission functionality.

Physical Layer       Handles the physical transmission of the data bits across the network media.
The Oracle Communications Stack

The OSI model is the foundation of the Oracle communications stack architecture. Each of the layers of the Oracle communications stack has characteristics and responsibilities that are patterned after the OSI model. Oracle interacts with the underlying network at the very highest levels of the OSI model; in essence, it is positioned above the underlying network infrastructure and communicates with it. Oracle uses Oracle Net on the client and server to facilitate communications. The communications stack functions as a conduit to share and manage data between the client and server. The layers of the Oracle communications stack are as follows:
The application (client) layer
The Oracle Call Interface (OCI) layer (client) or Oracle Program Interface (OPI) layer (server)
The Two-Task Common (TTC) layer
The Oracle Net Foundation layer
The Oracle Protocol Adapters (OPA) layer
The Network Specific Protocols layer
The Network Program Interface (NPI for server-to-server communications only) layer
Figure 1.5 depicts the relationship of each of the layers of the stack on both the client and the server. The client process makes network calls that traverse down the Oracle Net client layers to the network protocol. The server receives the network request, processes it, and returns the results to the client.
The Application Layer (Client)

The application layer of the Oracle communications stack provides the same functionality as the Application Layer of the OSI model. This layer is responsible for interacting with the user, which involves providing the interface components, screen, and data control elements. Interfaces such as forms or menus are examples of the application layer. This layer communicates with the Oracle Call Interface (OCI) layer.
FIGURE 1.5    The Oracle Net stack (the client and server Oracle Net components each sit atop a Network Specific Protocol layer, such as TCP/IP, TCP/IP SSL, LU6.2, or VI, joined by the network connection)
The Oracle Call Interface (OCI) Layer (Client)

The Oracle Call Interface (OCI) layer is responsible for all of the SQL processing that occurs between a client and the Oracle server. The OCI layer exists on the client only; the analogous component on the server is the Oracle Program Interface (OPI) layer. The OCI layer is responsible for opening and closing cursors, binding variables in the server's shared memory space, and fetching rows. Because the OCI is an open architecture, third-party vendors can write applications that interface directly with this layer of the communications stack. The OCI layer passes information directly to the Two-Task Common (TTC) layer.
The Two-Task Common (TTC) Layer

The Two-Task Common (TTC) layer is responsible for negotiating any datatype or character set differences between the client and the server. The TTC layer acts as a translator, converting values from one character set to another, and it determines whether any datatype differences are present when the connection is established. The TTC layer passes information to the Oracle Net Foundation layer. This layer shares some of the characteristics of the Presentation Layer from the OSI model.
The Oracle Net Foundation Layer

The Oracle Net Foundation layer (formerly known as the Transparent Network Substrate or TNS layer) is an integral component of the Oracle communications stack architecture, and it is analogous to the Session Layer in the OSI model. It is based on the Transparent Network Substrate (TNS), which allows Oracle Net to be a very flexible architecture, interfacing with a wide variety of network protocols. The TNS interface shields both the client and server from the complexities of network communications. At this layer of the communications stack, Oracle Net interfaces with the other layers of the stack and their underlying protocols. It is this layer that provides the level of abstraction necessary to make Oracle Net a flexible and adaptable architecture, and it is this layer that compensates for differences in connectivity between machines and underlying protocols. This layer also handles interrupt messages and passes information directly to the Oracle Protocol Adapters (OPA) layer. The Oracle Net Foundation layer has several sublayers:

Network interface (NI) sublayer  The network interface sublayer provides a common interface on which the clients and servers can process functions. This sublayer is also responsible for handling any break requests.

Network routing (NR) sublayer  This is where Oracle Net keeps its network roadmap of how to get from the source (client) to the destination (server).

Network naming (NN) sublayer  This sublayer takes network alias information and changes it into Oracle Net destination address information.

Network authentication (NA) sublayer  This sublayer is responsible for any negotiations necessary for authenticating connections.

Network session (NS) sublayer  The network session sublayer handles the bulk of activity in an Oracle network connection. It is responsible for such things as negotiating the initial connection request from the client, managing the Oracle Net buffer contents, and passing the buffer information between the client and the server. It also handles special features of connection management, such as buffer pooling and multiplexing, if these options are used.
The Oracle Protocol Adapters (OPA) Layer

The Oracle Protocol Adapters (OPA) layer interfaces with the underlying network and is the entry point into it. This layer maps the Oracle Net Foundation layer functions to the analogous functions in the underlying protocol. There is a different adapter for each protocol supported. This layer, in conjunction with the Network Specific Protocol layer, is analogous to the Network Layer in the OSI model.
The Network Specific Protocol Layer

This is the actual transport layer that carries the information from the client to the server. The protocols supported by Oracle Net include TCP/IP and DECnet. These protocols are not supplied with the Oracle software and must be in place to facilitate network communications.
The Oracle Program Interface (OPI) Layer (Server Only)

For every request made from the client, the Oracle Program Interface (OPI) layer is responsible for sending the appropriate response. So when clients issue SQL statements requesting data from the database, the OPI fulfills that request. In server-to-server communication, the Network Program Interface (NPI) is used instead of the OPI.
The Network Program Interface (NPI) Layer (Server Only)

The Network Program Interface (NPI) layer is found only on the Oracle server. It is analogous to the OCI on the client. This layer is responsible for server-to-server communications in distributed environments where databases communicate with each other across database links, whereas the OPI is used in client-server communications. This layer is analogous to the Presentation Layer of the OSI model, but from the server perspective.
Oracle Net and Java Support

To provide the ability to interface with Java code that can be written and deployed inside the Oracle server environment, Oracle Net supports the General Inter-ORB Protocol (GIOP). This protocol allows Object Request Brokers to talk to one another over the Internet. An Object Request Broker is a piece of software that handles the routing of object requests in a distributed network. The Internet Inter-ORB Protocol (IIOP) is a flavor of GIOP running under TCP/IP that supports the Secure Sockets Layer (SSL) for making secured network connections across the Internet. This type of connectivity would be used if an application were accessing Java procedures that were written and stored in the Oracle database. In this case, the Oracle9i Java Virtual Machine (JVM) would be running on the Oracle server, providing the Object Request Broker functionality. The only portion of Oracle Net that is required is the Oracle Net Foundation layer. This streamlined communications stack allows for more efficient connectivity to Oracle servers when server-side Java procedures are being used. Figure 1.6 shows how the modifications streamline the Oracle communications stack. FIGURE 1.6
IIOP stack communications (on both the client and the server, the Application or Server Process sits atop the General Inter-Orb Protocol (GIOP) and the Transparent Network Substrate (TNS), which run over TCP/IP with Secured Sockets across the Internet)
Oracle Net also provides robust connectivity for web-based applications. Connections to the Oracle server are available directly through Java applications, Java applets, or via an application server such as Oracle9iAS.
Internet Connectivity with Oracle Net

Connections initiated to an Oracle server via a web browser are much like client-server applications. The main difference is that the application server acts as the client, providing communications to and from the Oracle server. If a Java application resides on the web server, the Java Database Connectivity (JDBC) OCI driver is used to initiate communications to the Oracle server. (JDBC is an interface that allows Java programs to interact with data stored in tabular form, such as in an Oracle database.) If the connection is made via a Java applet, then the JDBC Thin Driver is used. The JDBC Thin Driver does not require Oracle client software to be installed, hence the term "thin driver." This driver provides the connectivity from the applet to the Oracle server. Typically in this environment, an HTTP request is initiated at the client and sent to the application server. The application server forwards the request to the appropriate database service for processing. Oracle Net serves as the mechanism for communication between the application server and the database. HTTP is used to send the request to the application server and to receive the response from the application server. Figure 1.7 shows an example of web connections when using a Java application. FIGURE 1.7
Web connectivity via a Java application (the Java application on the client uses the JDBC OCI driver and Oracle Net to reach Oracle Net and the RDBMS on the Oracle server over a TCP/IP network connection)
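The choice between the two drivers shows up in the JDBC connect string. Here is a sketch of the two URL forms, where the host, port, and SID values are placeholders:

jdbc:oracle:oci8:@PROD
    (JDBC OCI driver: PROD is a net service name resolved by the Oracle Net client software installed alongside the application)

jdbc:oracle:thin:@dbhost:1521:PROD
    (JDBC Thin Driver: connects directly over TCP/IP, with no Oracle client software installed)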
Web Connections with No Application Server

A database can be configured to accept requests directly from a web environment without a middle-tier application server. This is because HTTP and IIOP can be configured on the Oracle server to accommodate these types of connections. Oracle supports the development and deployment of common Java objects, such as Enterprise JavaBeans (EJB), within the Oracle server itself. A client can make a request to the Oracle database and interface with these components directly. Oracle Net must be configured on the server to accept and process these types of requests. What makes this solution attractive is that no
software needs to be deployed to the client; all the client needs is a web browser to interact with the database.
Summary
There are several key components that are necessary to understand in order to succeed when you are networking in an Oracle environment. The main responsibilities of the network administrator include determining the applications and type of connections that will be supported, the number of users and the locations from which they will be accessing the network, and the security issues involved in protecting sensitive information, such as single sign-on and data encryption. In addition to being aware of their own responsibilities, the DBA needs to choose from the three basic types of network configurations when setting up their Oracle network: single-tier architecture, two-tier architecture, and n-tier architecture. Because systems have evolved from the simpler single-tier architecture to the more complex n-tier architecture, which can include connections through middle-tier servers and the Internet, database administrators will most likely find themselves choosing between the two architectures that Oracle Net is an integral part of: two-tier or n-tier architectures. Oracle Net manages the flow of information from client computers to Oracle servers and forms the foundation of all networked computing in the Oracle environment. Oracle Net is comprised of a series of layers that make up the Oracle Net stack architecture. This architecture is based on the OSI model of networking and provides the basic building blocks of network interaction. Each layer in the Oracle Net stack is responsible for one or more networking tasks. Requests and responses are sent up and down the stack, which exists on both the client and the server. In addition to the main network architecture that supports connections to an Oracle server, Oracle Net provides services that can be divided into five main categories: Connectivity, Directory Services, Scalability, Security, and Accessibility. Connectivity solutions include support for multiple protocols, multiple operating systems, and Java and the Internet. Directory Services provide an infrastructure to resolve Oracle service names through a centralized naming repository. Scalability solutions include Connection Manager and Oracle Shared Server. Security options include Oracle Advanced Security, which provides an additional layer of security options and robust support
for many varieties of firewalls. Accessibility support includes Heterogeneous Services and support for calling external procedures. Oracle Net also provides connectivity to Java stored procedures via the HTTP and IIOP protocols. This chapter provides the foundation of knowledge that you will need when you are designing an Oracle network infrastructure. The decisions you make about the network design have ramifications in terms of the scalability, security, and flexibility of your Oracle environment. When you understand the underlying network architecture and the network options available to you, you will be able to make informed choices when you are designing your Oracle network.
Exam Essentials

Know the database administrator's responsibilities and how they relate to network administration. You should be able to list the responsibilities of the database administrator with respect to network administration. Can you define the basic network configuration choices and summarize the strengths and weaknesses of these options?

Understand what Oracle Net is and the functionality it provides. You should be able to define the five categories of functionality that Oracle Net provides and what functionality falls into each category. You should also understand what functionality the Oracle Shared Server and Oracle Connection Manager options provide. In addition, you should be able to define Oracle Advanced Security and know when to use it.

Be able to define the uses of the Heterogeneous Services and the situations in which these options are useful. Heterogeneous Services provide the ability to communicate with non-Oracle databases and services. These services allow organizations to leverage and interact with their existing data stores without having to necessarily move the data to an Oracle server.

Be able to define the Oracle Net stack architecture. The Oracle Net architecture is based on the OSI model of network computing. This model divides the responsibility of conducting network transactions among various layers. You should know the names and definitions of the various layers of the Oracle Net stack.
Understand Oracle Net support for Java stored on the Oracle server. Oracle supports Java stored on the server by supporting GIOP, which allows Object Request Brokers to talk to one another over the Internet. The Oracle9i JVM provides the Object Request Broker functionality on the Oracle server.

Be familiar with Oracle's Internet connection options. You should have a basic understanding of the connection options Oracle provides from the Internet. This includes connections made via an application server and connections made directly to the Oracle server from a web browser.
Key Terms
Before you take the exam, be certain you are familiar with the following terms: application layer
Review Questions

1. All of the following are examples of networking architectures except: A. Client/server B. N-tier C. Single-tier D. Two-tier E. All of the above are examples of network architectures.

2. You manage one non-Oracle database and several Oracle databases. An
application needs to access the non-Oracle database as if it were part of the Oracle environment. What tool will solve this business problem? Choose the best answer. A. Oracle Advanced Security B. Oracle Connection Manager C. Heterogeneous Services D. Oracle Net E. None of the above 3. Which of the following is true about Oracle Net? A. It is not an option included in the Oracle Enterprise installation. B. It only works on TCP/IP platforms. C. It has an open API. D. It is never installed directly on a client workstation. 4. A DBA wants to centrally administer all of the Oracle network services
in a large Oracle9i installation with many network services. Which facility would best provide this functionality at minimal cost? A. Advanced Security B. Heterogeneous Services C. Oracle Shared Server D. Oracle Internet Directory
5. What are TCP/IP, DECnet, and LU6.2 all examples of? A. Computer programming languages B. Oracle Net connection tools C. Networking protocols D. Network programming languages 6. Which feature of Oracle Net best describes this statement: “Oracle
Net supports TCP/IP and LU6.2.”? A. GUI tools integration B. Robust tracing and diagnostic tools C. Zero configuration on the client D. Network transport protocol support 7. What is a solution that Oracle9i employs with Oracle Net that allows
connectivity of Java Components such as Enterprise JavaBeans? A. LU6.2 B. IPA C. GIOP D. Oracle Internet Directory 8. What is the standard that the Oracle Net communications stack is based on? A. NPI B. OCI C. OSI D. API
9. “Responsible for moving bits across the wire” describes which of the
following OSI layers? A. Application Layer B. Physical Layer C. Data Link Layer D. Network Layer 10. What is the default name of the process that is used to make external
calls via Oracle Net? A. externalproc B. external C. extproc D. procext 11. IIOP is an example of which of the following? A. Tools to use for Oracle Net B. Oracle network integration utilities C. Internet network protocol D. Portions of the Oracle Net stack 12. Connection Manager provides which of the following? A. Multiplexing B. Cross protocol connectivity C. Network access control D. All of the above
13. Which of the following is true about the OCI layer? A. It displays the graphical interface. B. Its datatype conversions are handled. C. It interfaces directly with the protocol adapters. D. It interfaces directly with the TTC layer. E. None of the above. 14. To which of the choices below does the following statement apply?
“Prevents direct communication between a client and applications inside the corporate network.” A. Proxy-based firewalls B. Filter-based firewalls C. Both types of firewalls D. Neither type of firewall 15. When a connection is made via a Java applet, what type of driver is
utilized? A. JDBC OCI driver B. JDBC Thin Driver C. ODBC driver D. OCI driver 16. A client workstation connects to a transaction server, which passes on
requests to the Oracle database. This is a description of which of the following? A. Single-tier architecture B. Client/server architecture C. N-tier architecture D. None of the above
17. Which Oracle Net networking product can be best described as
middleware? A. Oracle Internet Directory B. Oracle Connection Manager C. Oracle Advanced Networking D. Oracle Shared Server 18. Which of the following are characteristics of complex networks? A. Multiple protocols B. Diverse geographic locations C. Multiple operating systems D. Multiple hardware platforms E. All of the above 19. What is the preferred method of centralized naming in an Oracle9i
environment? A. Oracle Names Server B. Oracle Connection Manager C. Oracle Shared Server D. Directory Naming with Oracle Internet Directory 20. Which of the following is an example of the ability to group connections together? A. Protocol Interchange B. Network Access Control C. Multiplexing D. Data Integrity checking E. None of the above
Answers to Review Questions

1. E. All of these are examples of network connectivity configurations.
Networking can be as simple as a dumb terminal connected directly to a server via a serial connection. It can also be as complex as an n-tier architecture that may involve clients, middleware, the Internet, and database servers. 2. C. Oracle Advanced Security would not solve this application problem
because it addresses security and not accessibility to non-Oracle databases. Oracle Net would be part of the solution, but another Oracle network component is necessary. Connection Manager would also not be able to accommodate this requirement on its own. Heterogeneous Services is the correct answer because these services provide cross-platform connectivity to non-Oracle databases. 3. C. Oracle Net is included in the Oracle Enterprise installation and
works with a variety of protocols. It also has a client and a server component. The only statement that is true about Oracle Net is that it has an open Applications Program Interface (API), which means that third-party software can write to these specifications to interact directly with Oracle Net. 4. D. Advanced Security, Heterogeneous Services, and Oracle Shared
Server would not provide a solution to this business need because none of these address the issue of centrally managing network services. The best solution to the problem is the Oracle Internet Directory because it would facilitate centralized naming. 5. C. TCP/IP, DECnet, and LU6.2 are all examples of network protocols. 6. D. Oracle Net allows for support of multiple protocols. TCP/IP and
LU6.2 are two examples of the protocols that Oracle Net supports. 7. C. The General Inter-ORB Protocol is a protocol that supports con-
8. C. The Oracle Net communications stack is based on the Open Systems
Interconnection (OSI) model. NPI and OCI are parts of the Oracle Net stack and API stands for Applications Program Interface. 9. B. The Physical Layer is responsible for sending the actual data bits
across the network. The other layers are all above this base layer. 10. C. The default name of the external procedure process is extproc.
lsnrctl is a utility used to manage the listener service. External and procext are not valid responses. 11. C. IIOP is an example of an Internet network protocol. 12. D. Connection Manager is a middleware solution that provides for
multiplexing of connections, cross protocol connectivity, and network access control. All of the answers describe Connection Manager. 13. D. The OCI layer is below the application layer and above the TTC
layer. The call interface handles such things as cursor management and SQL execution. This information is passed on to the Two-Task Common layer. 14. A. Proxy-based firewalls prevent any direct contact between a client
and applications inside a corporate firewall. Filter-based firewalls inspect the packet headers but pass the packet on without modification to the destination application. Proxy-based firewalls act more as a relay between external clients and internal applications. 15. B. The JDBC Thin Driver would be utilized if a Java applet is used to
communicate with an Oracle database through an application server. 16. C. When you introduce middle tiers into the processing of a transac-
tion, this is known as n-tier architecture. 17. B. The Connection Manager is a middle-tier option that provides
multi-protocol interchange, connection concentration, and client access control. 18. E. All of these are characteristics of complex networks.
19. D. Oracle Internet Directory and Directory Naming are displacing the
Oracle Names Server as the preferred method of centralized naming. 20. C. Multiplexing is a characteristic of the Oracle Connection Manager
that allows several incoming requests to be handled and transmitted simultaneously over a single outgoing connection. This is a scalability feature provided by Connection Manager.
Configuring Oracle Net on the Server

ORACLE9i: DBA FUNDAMENTALS II EXAM OBJECTIVES COVERED IN THIS CHAPTER:

Identify how the listener responds to incoming connections.

Configure the listener using Oracle Net Manager.

Control the listener using the Listener Control Utility (lsnrctl).

Describe Dynamic Service Registration.

Configure the listener for IIOP and HTTP connections.
Exam objectives are subject to change at any time without prior notice and at Oracle’s sole discretion. Please visit Oracle’s Certification website (http://www.oracle.com/education/ certification/) for the most current exam objectives listing.
The DBA must configure Oracle Net on the server in order for client connections to be established. This chapter will focus on how to configure the basic network elements of the Oracle server. First, we'll discuss ways to manage and configure the main Oracle server network components and the listener process, as well as how to use the Oracle Net Manager and the lsnrctl command line utility. We'll also discuss the Dynamic Registration feature, which allows instances to automatically register with listeners on an Oracle server. We will also look at how to configure the listener so that it can connect clients to an Oracle server over the Internet using IIOP and HTTP connections. Finally, we will explore ways to troubleshoot server-side connectivity problems. This chapter describes the first steps you need to take when you are configuring the Oracle Net environment. Once Oracle Net is properly configured on the database server, clients will be able to connect to the Oracle database without being directly connected to the server where the Oracle database is located. Without this configuration, this type of connectivity cannot be accomplished.
The Oracle Listener
The Oracle listener is the main server-side Oracle networking component that allows connections to be established between client computers and an Oracle database. You can think of the listener as a big ear that listens for connection requests to Oracle services. The type of Oracle service being requested is part of the connection descriptor information supplied by the process requesting a connection, and the service name resolves to an Oracle database. The listener can listen for any number of databases configured on the server, and it is able to listen for
requests being transported on a variety of protocols, such as TCP/IP, DECnet, and LU6.2. A client connection can be initiated from the same machine the listener resides on, or it may come from some remote location. The listener is configured using a centralized file called listener.ora. Though there is only one listener.ora file configured per machine, there may be numerous listeners on a server, and it is this file that contains all of the configuration information for every listener configured on the server. If there are multiple listeners configured on a single server, they are usually there to balance connection requests and minimize the burden of connections on a single listener.
The content and structure of the listener.ora file will be discussed later in this chapter.
Every listener is a named process that is running on the server. The default name of the Oracle listener is LISTENER, and it is typically created when you install Oracle. If you configure multiple listeners, each one would have a unique name. Below is an example of the default configuration of the listener.ora file.

# LISTENER.ORA Network Configuration File:
# D:\oracle\ora90\network\admin\listener.ora
# Generated by Oracle configuration tools.

LISTENER =
  (DESCRIPTION_LIST =
    (DESCRIPTION =
      (ADDRESS_LIST =
        (ADDRESS = (PROTOCOL = TCP)(HOST = mjworn)(PORT = 1521))
      )
      (ADDRESS_LIST =
        (ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC0))
      )
    )
  )

SID_LIST_LISTENER =
  (SID_LIST =
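For reference, the SID_LIST section of a default installation typically continues by registering the external procedure service along the following lines (a sketch; the Oracle home path is installation specific):

SID_LIST_LISTENER =
  (SID_LIST =
    (SID_DESC =
      (SID_NAME = PLSExtProc)
      (ORACLE_HOME = D:\oracle\ora90)
      (PROGRAM = extproc)
    )
  )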
How Do Listeners Respond to Connection Requests?

There are several ways a listener can respond to a client request for a connection. The response is dependent upon several factors, such as how the server-side network components are configured and what type of connection the client is requesting. The listener will respond to the connection request in one of two ways: it can spawn a new process and pass control of the client session to that process, or it can redirect the client to an existing server process or dispatcher, as described later in this section. In a dedicated server environment, every client connection is serviced by its own server-side process. Server-side processes are not shared among clients. Depending on the capabilities of the operating system, two types of dedicated server connections are possible: bequeath connections and redirect connections. Each results in a separate process that handles client processing, but the mechanics of the actual connection initiation process are different.
As of Oracle9i, prespawned dedicated processes are no longer supported.
Bequeath Connections

Bequeath connections are possible when the underlying operating system, such as Unix, supports the inheritance of network endpoints. What this means is that the operating system has the ability to spawn a new process and pass information to this process; this allows the new server process to initiate a conversation with the client immediately.
Another name for bequeath connections is direct hand-off connections.
The following steps, which show the connection process for bequeath connections, are exhibited in Figure 2.1:

1. The client contacts the Oracle server after resolving the service name.

2. The server spawns a dedicated process and bequeaths control of the connection to the process. The new process inherits all connection control information, including the TCP/IP socket information, from the process that spawned it.

3. The server process notifies the client to start sending information to it by sending a RESEND packet to the client.

4. The client sends a CONNECT packet to the newly established server process.

5. The server responds back with an ACCEPT packet and now manages the client requests.

FIGURE 2.1
Bequeath connection process (client computer and Oracle server)
In order to have bequeathed sessions supported on Windows NT/2000, the Registry setting USE_SHARED_SOCKET needs to be set. This setting resides under HKEY_LOCAL_MACHINE\SOFTWARE\ORACLE\HOMEX, where X is equal to the HOME that you are using, such as HOME0. By default, this Registry setting is not initialized, and therefore Windows NT/2000 will use a redirect type connection to establish communication when dedicated client connections are used.
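For example, the value might be defined as follows (a sketch; HOME0 and the string value TRUE are assumptions, so confirm the key for your Oracle home before editing the Registry):

Key:   HKEY_LOCAL_MACHINE\SOFTWARE\ORACLE\HOME0
Name:  USE_SHARED_SOCKET
Type:  REG_SZ
Data:  TRUE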
Redirect Connections

The listener can also redirect the user to a server process or a dispatcher process. This type of connection can occur when the operating system does not directly support bequeath connections or when the listener is not on the same physical machine as the Oracle server. The following steps are illustrated in Figure 2.2:

1. The client contacts the Oracle server after resolving the service name.

2. The listener spawns a new process or, in the case of Oracle Shared Server, a dispatcher. With Oracle Shared Server, if there is remaining capacity on the existing dispatchers, a new dispatcher process will not need to be spawned.

3. The new process or thread selects a TCP/IP port to use to control interaction with user clients. This information is then passed back to the listener.

4. The listener sends information back to the client, redirecting the client to the new server or dispatcher port. The original network connection between the listener and the client is disconnected.

5. The client then sends a connect signal to the server or dispatcher process to establish a network connection.

6. The dispatcher or server process sends an acknowledgement back to the client.

7. If Oracle Shared Server processes are being used, PMON sends information to the listener about the number of connections being serviced by the dispatchers. The listener uses this information to maintain consistent loads between the dispatchers.

FIGURE 2.2
Redirect connection process (the client, running sqlplus scott/tiger@prod, contacts the listener on the Oracle server and is redirected to a server process or dispatcher; PMON keeps the listener informed of dispatcher load)
Managing Oracle Listeners
There are a number of ways in which a DBA can configure the server-side listener files. As part of the initial Oracle installation process, the installer will prompt the DBA to create a default listener. If the DBA does not want to supply one in this manner, they can also use the Oracle Net Configuration Assistant to configure a listener. If this method is chosen, the installer uses the set of screens that are a part of this assistant to do the initial listener configuration. Figure 2.3 shows an example of the opening screen for the Oracle Net Configuration Assistant.
The Oracle Net Configuration Assistant is meant to be an easy, wizard-based tool that you can use to conduct a basic setup for both client- and server-side network products.
Static service registration occurs when entries are added to the listener.ora file manually by using the Oracle Net Manager. It is static because you are adding this information manually. Static service registration is necessary if you will be connecting to pre-Oracle8i instances using Oracle Enterprise Manager or if you will be connecting to external services. There is another method of managing listeners that does not require the manual updating of service information in the listener.ora file. This is called dynamic service registration. Dynamic service registration is a feature that allows an Oracle instance to automatically register itself with an Oracle listener. The benefit of this feature is that it does not require the DBA to perform any updates of server-side network files when new Oracle instances are created. Dynamic service registration will be covered in more detail in the section entitled "Dynamic Registration of Services" later in this chapter.
If you are using Oracle9i, you must configure an Oracle9i listener to connect to the Oracle server. Oracle9i listeners are backward compatible and can listen for connection requests to earlier Oracle database versions.
If you want to be able to set up more than just basic configurations of Oracle network files, you will have to use the Oracle Net Manager. In the next few sections, you will learn how to use the Oracle Net Manager to configure the server-side network files.
Using Oracle Net Manager
Oracle Net Manager is a tool you can use to create and manage most client- and server-side configuration files. The Oracle Net Manager has evolved from the Oracle7 tool, Network Manager, to the latest Oracle9i version. Throughout this evolution, Oracle has continued to enhance the functionality and usability of the tool.
We strongly recommend using the Oracle Net Manager to create and manage all of your network files. This is because these files need to be in a specific format, and the Oracle Net Manager ensures that the files are created in that format. If the files are not in the correct format, you may have problems with your network connections. Something as subtle as an extra space or tab can cause problems with network connections, so if you were used to cutting and pasting old entries to create new entries in these files, it is better now to use Oracle Net Manager to create new entries.
If you are using a Windows NT/2000 environment, you can start the Oracle Net Manager by choosing Start > Programs > your Oracle9i Programs choice > Configuration and Migration Tools > Oracle Net Manager. In a Unix environment, you can start it by running ./netmgr from your $ORACLE_HOME/bin directory. Figure 2.4 shows an example of the Oracle Net Manager opening page.
Configuring Listener Services Using the Oracle Net Manager

You will want to use the Oracle Net Manager to configure the listener. As stated in Chapter 1, "Introduction to Network Administration," the Oracle Net Manager provides an easy-to-use graphical interface for configuring most of the network files you will be using. By using Oracle Net Manager, you can ensure that the files are created in a consistent format, which will reduce the potential for connection problems. When you first start the Oracle Net Manager, the opening screen displays a list of icons down the right-hand side of the screen under the Network folder. The choices under the Local folder relate to different network configuration files: the Profile icon configures the sqlnet.ora file, the Service Naming icon configures the tnsnames.ora file, and the Listeners icon configures the listener.ora file.
As stated in the previous chapter, Oracle Names Server is being replaced by the Oracle Internet Directory as the preferred method for centralized Oracle service name resolution. Oracle Names Server can still be configured from Oracle Net Manager and is supported under Oracle9i. However, no revisions are being made to the product, and future releases of Oracle may not support Names Server.
Creating the Listener

Earlier, we said that by default, Oracle will create a listener called LISTENER when it is initially installed. The default settings that Oracle uses for the listener.ora file are shown here.

Section of the File    Setting
Listener Name          LISTENER
Port                   1521
Protocols              TCP/IP and IPC
Host Name              Default Host Name
SID Name               Default Instance
You can use Oracle Net Manager to create a non-default listener or change the definition of existing listeners. The Oracle Net Manager has a wizard interface for creating most of the basic network elements, such as the listener.ora and tnsnames.ora files. Follow these steps to create the listener:

1. Click the plus (+) sign next to the Local icon.

2. Click the Listeners folder.

3. Click the plus sign icon or select Create from the Edit menu.

4. The Choose Listener Name dialog box appears. If this is the first listener being created, the Oracle Net Manager defaults to LISTENER if no listener is configured or to LISTENER1 if a default listener is already created. Click OK if this is correct or enter a new name. Here is an example of the Choose Listener Name screen.
5. After you have chosen a name for the listener, you can configure the listening locations; to do this, click the Listening Locations drop-down list box and make your selection. Then click the Add Address button at the bottom of the screen as shown in Figure 2.5.

FIGURE 2.5
Listener Locations screen from Oracle Net Manager
6. A new screen appears on the right side of Oracle Net Manager. Depending on your protocol, the prompts will be somewhat different. By default, TCP/IP information is displayed. If you are using TCP/IP, the Host and Port fields are filled in for you. The host is the name of the machine on which the listener is running, and the port is the listening location for TCP/IP connections. The default value for the port is 1521.
7. Save your information by selecting File > Save Network Configuration.
After saving your information, look in the directory where it was saved.
You always know where the files are stored by looking at the top banner of the Oracle Net Manager screen.
The Oracle Net Manager actually creates three files in this process: listener.ora, tnsnames.ora, and sqlnet.ora. The tnsnames.ora file does not contain any information. The sqlnet.ora file may contain a few entries at this point, but these can be ignored for right now. The listener.ora file will contain information as shown in the code listed below. We will discuss the structure and content of the listener.ora file later on in the chapter.

# LISTENER.ORA Network Configuration File:
# D:\oracle\ora90\network\admin\listener.ora
# Generated by Oracle configuration tools.

LISTENER1 =
  (DESCRIPTION_LIST =
    (DESCRIPTION =
      (ADDRESS_LIST =
        (ADDRESS = (PROTOCOL = TCP)(HOST = mjworn)(PORT = 1521))
      )
    )
  )
Adding Service Name Information to the Listener

After the listener has been created with the name, protocol, and listening location information, you define the network services that the listener is responsible for connecting users to. There is no limit to the number of network service names for which a listener can listen. The steps to add the service name information are as follows:

1. Select the listener to configure by clicking the Listeners icon and highlighting the name of the listener you wish to configure.

2. After selecting the listener, choose Database Services from the drop-down list box at the top of the screen.

3. Click the Add Database button at the bottom of the screen.

4. Enter values in the prompts for Global Database Name, Oracle Home Directory, and SID. The entries for SID and the global database name are the same if you are using a flat naming convention (see Figure 2.6).

FIGURE 2.6
Add Service screen from Oracle Net Manager
5. Choose File > Save to save your configuration.

The following table summarizes the parameters that appear in the listener.ora file.
Parameters for the Listening Location and SID List Sections of listener.ora

Parameter            Description

PORT                 Contains the port on which the listener is listening.

SID_LIST_LISTENER    Defines the list of Oracle services that the listener is configured to listen for.

SID_DESC             Describes each Oracle SID.

GLOBAL_DBNAME        Identifies the global database name. This entry should match the SERVICE_NAMES entry in the init.ora file for the Oracle service.

ORACLE_HOME          Shows the location of the Oracle executables on the server.

SID_NAME             Contains the name of the Oracle SID for the Oracle instance.
Adding Additional Listeners

To add more listeners, follow the steps outlined above. Listeners must have unique names and listen on separate ports, so give the listener a new name and assign it to a different port (1522, for example). You also must assign service names to the listener.
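For illustration, here is a minimal sketch of what the resulting listener.ora entries for a second listener might look like, reusing the host and SID from the earlier listings (LISTENER2 and port 1522 are hypothetical choices):

LISTENER2 =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = mjworn)(PORT = 1522))
  )

SID_LIST_LISTENER2 =
  (SID_LIST =
    (SID_DESC =
      (GLOBAL_DBNAME = MJW)
      (ORACLE_HOME = D:\oracle\ora90)
      (SID_NAME = MJW)
    )
  )

Once the configuration is saved, the new listener can be started with lsnrctl start listener2, as described later in this chapter.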
Optional listener.ora Parameters

You can set optional parameters that add functionality to the listener. Optional parameters are added by choosing the General Parameters drop-down list box at the top right of the screen. Table 2.2 describes these parameters and where they can be found in the Oracle Net Manager. As you will see, some parameters cannot be added directly from the Oracle Net Manager and have to be added manually. These optional parameters also have the listener name appended to them so that you can tell which listener definition they belong to. For example, if the parameter STARTUP_WAIT_TIME is set for the default listener, the parameter created is STARTUP_WAIT_TIME_LISTENER.
TABLE 2.2    Optional listener.ora Parameter Definitions

Each entry shows the Oracle Net Manager prompt, the listener.ora parameter, and its description.

Startup Wait Time (STARTUP_WAIT_TIME)
Defines how long a listener will wait before it responds to a STATUS command in the lsnrctl command line utility.

Unavailable from Net Manager (CONNECT_TIMEOUT)
Defines how long a listener will wait for a valid response from a client once a session is initiated. The default is 10 seconds.

Unavailable from Net Manager (SAVE_CONFIG_ON_STOP)
Specifies whether modifications made during an lsnrctl session should be saved when exiting.

Log File (LOG_FILE; will not be in the listener.ora file if the default setting is used. By default, listener logging is enabled with the log created in the default location.)
Specifies where a listener will write log information. This is ON by default and defaults to %ORACLE_HOME%\network\log\listener.log.

Trace Level (TRACE_LEVEL; not present if tracing is disabled. The default is OFF.)
Sets the level of detail if listener connections are being traced. Valid values include Off, User, Support, and Admin.

Trace File (TRACE_FILE)
Specifies the location of listener trace information. Defaults to %ORACLE_HOME%\network\trace\listener.trc.

Require a Password for listener operations (PASSWORDS)
Specifies the password required to perform administrative tasks in the lsnrctl command line utility.
Managing the Listener Using lsnrctl
Once you have created and saved the listener definition, you need to start the listener. The listener must be started before clients can connect to it and request database connections. The listener cannot be started or stopped from the Oracle Net Manager. The listener is started using the command line tool lsnrctl. Other Oracle network components, such as Connection Manager, have command line tools that are used to stop and start the associated processes.
On Windows NT/2000, the listener runs as a service. Services are programs that run in the background on Windows NT/2000. You can start the listener from the Windows NT/2000 Services panel. Choose Start > Settings > Control Panel > Services. Then select the name of the listener service from the list of services. If your Oracle home was OraHome90 and the name of your listener was LISTENER, you would look for a name such as OracleOraHome90TNSListener. Select the listener name and choose Start.
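The same service can also be controlled from a command prompt with the standard Windows service commands; a quick sketch (the service name shown assumes an OraHome90 Oracle home):

C:\>net start OracleOraHome90TNSListener
C:\>net stop OracleOraHome90TNSListener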
The lsnrctl Utility

The lsnrctl utility is located in the %ORACLE_HOME%\bin directory on Windows NT/2000 and in $ORACLE_HOME/bin on Unix systems. Windows NT/2000 users familiar with earlier releases of Oracle will notice that utility names no longer have version extensions. For example, in Oracle8, the tool was called lsnrctl80. All of the tools now have a consistent name across platforms; the version extensions have been dropped to comply with this.
Type lsnrctl at the command line. The code below shows what a resulting login screen looks like:

C:\>lsnrctl

LSNRCTL for 32-bit Windows: Version 9.0.1.1.1 - Production on 08-OCT-2001 20:30:02

Copyright (c) 1991, 2001, Oracle Corporation.  All rights reserved.

Welcome to LSNRCTL, type "help" for information.

LSNRCTL>
Starting the Listener

The listener has commands to perform various functions. You can type help at the LSNRCTL> prompt to get a list of these commands. To start the default listener named LISTENER, type start at the prompt. If you want to start a different listener, you would have to type in that listener name after start. For example, typing start listener1 would start the LISTENER1 listener. The following code shows the results of starting the default listener:

C:\>lsnrctl start

LSNRCTL for 32-bit Windows: Version 9.0.1.1.1 - Production on 08-OCT-2001 20:32:09

Copyright (c) 1991, 2001, Oracle Corporation.  All rights reserved.

Starting tnslsnr: please wait...

TNSLSNR for 32-bit Windows: Version 9.0.1.1.1 - Production
System parameter file is D:\oracle\ora90\network\admin\listener.ora
Log messages written to D:\oracle\ora90\network\log\listener.log
Listening on: (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=mprntw507953.cmg.com)(PORT=1521)))
Listening on: (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(PIPENAME=\\.\pipe\EXTPROC0ipc)))

Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=mprntw507953)(PORT=1521)))
STATUS of the LISTENER
------------------------
Alias                     LISTENER
Version                   TNSLSNR for 32-bit Windows: Version 9.0.1.1.1 - Production
Start Date                08-OCT-2001 20:32:11
Uptime                    0 days 0 hr. 0 min. 2 sec
Trace Level               off
Security                  OFF
SNMP                      OFF
Listener Parameter File   D:\oracle\ora90\network\admin\listener.ora
Listener Log File         D:\oracle\ora90\network\log\listener.log
Listening Endpoints Summary...
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=mprntw507953.cmg.com)(PORT=1521)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(PIPENAME=\\.\pipe\EXTPROC0ipc)))
Services Summary...
Service "MJW" has 1 instance(s).
  Instance "MJW", status UNKNOWN, has 1 handler(s) for this service...
Service "PLSExtProc" has 1 instance(s).
  Instance "PLSExtProc", status UNKNOWN, has 1 handler(s) for this service...
The command completed successfully

This listing shows a summary of the information presented; it includes such things as the services the listener is listening for, the log locations, and whether tracing is enabled for the listener.
Reloading the Listener

If the listener is running and modifications are made to the listener.ora file either manually or with Oracle Net Manager, the listener has to be reloaded to refresh it with the most current information. The reload command rereads the listener.ora file for the new definitions. As you can see, it is not necessary to stop and start the listener to reload it. Though stopping and restarting the listener can also accomplish a reload, using the reload command is better because the listener is not actually stopped, which makes this process more efficient. The following code shows an example of the reload command:
Reloading the listener has no effect on clients connected to the Oracle server.
C:\>lsnrctl reload

LSNRCTL for 32-bit Windows: Version 9.0.1.1.1 - Production on 08-OCT-2001 20:34:26

Copyright (c) 1991, 2001, Oracle Corporation.  All rights reserved.

Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=mprntw507953)(PORT=1521)))
The command completed successfully
Showing the Status of the Listener

You can display the status of the listener by using the status command. The status command shows whether the listener is active, the locations of the logs and trace files, how long the listener has been running, and the services for the listener. This is a quick way to verify that the listener is up and running with no problems. The code below shows the result of the lsnrctl status command.

C:\>lsnrctl status

LSNRCTL for 32-bit Windows: Version 9.0.1.1.1 - Production on 08-OCT-2001 20:36:14

Copyright (c) 1991, 2001, Oracle Corporation.  All rights reserved.

Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=mprntw507953)(PORT=1521)))
STATUS of the LISTENER
------------------------
Alias                     LISTENER
Version                   TNSLSNR for 32-bit Windows: Version 9.0.1.1.1 - Production
Start Date                08-OCT-2001 20:32:11
Uptime                    0 days 0 hr. 4 min. 4 sec
Trace Level               off
Security                  OFF
SNMP                      OFF
Listener Parameter File   D:\oracle\ora90\network\admin\listener.ora
Listener Log File         D:\oracle\ora90\network\log\listener.log
Listening Endpoints Summary...
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=mprntw507953.cmg.com)(PORT=1521)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(PIPENAME=\\.\pipe\EXTPROC0ipc)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcps)(HOST=127.0.0.1)(PORT=2482))(PRESENTATION=GIOP)(SESSION=RAW))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=127.0.0.1)(PORT=2481))(PRESENTATION=GIOP)(SESSION=RAW))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcps)(HOST=127.0.0.1)(PORT=9090))(PRESENTATION=http://admin)(SESSION=RAW))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=127.0.0.1)(PORT=8080))(PRESENTATION=http://admin)(SESSION=RAW))
Services Summary...
Service "MJW" has 2 instance(s).
  Instance "MJW", status UNKNOWN, has 1 handler(s) for this service...
  Instance "MJW", status READY, has 3 handler(s) for this service...
Service "PLSExtProc" has 1 instance(s).
  Instance "PLSExtProc", status UNKNOWN, has 1 handler(s) for this service...
The command completed successfully
Listing the Services for the Listener

The lsnrctl services command displays information about the services, such as whether or not the services have any dedicated prespawned server processes or dispatcher processes associated with them, and how many connections have been accepted and rejected per service. Use this method to check whether a listener is listening for a particular service. The following code shows an example of running the services command:

C:\>lsnrctl services

LSNRCTL for 32-bit Windows: Version 9.0.1.1.1 - Production on 08-OCT-2001 20:39:14

Copyright (c) 1991, 2001, Oracle Corporation.  All rights reserved.

Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=mprntw507953)(PORT=1521)))
Services Summary...
Service "MJW" has 2 instance(s).
  Instance "MJW", status UNKNOWN, has 1 handler(s) for this service...
    Handler(s):
      "DEDICATED" established:0 refused:0
        LOCAL SERVER
  Instance "MJW", status READY, has 3 handler(s) for this service...
    Handler(s):
      "D001" established:0 refused:0 current:0 max:1002 state:ready
        DISPATCHER <machine: MPRNTW507953, pid: 373>
        (ADDRESS=(PROTOCOL=tcp)(HOST=mprntw507953.cmg.com)(PORT=1038))
      "D000" established:0 refused:0 current:0 max:1002 state:ready
        DISPATCHER <machine: MPRNTW507953, pid: 370>
        (ADDRESS=(PROTOCOL=tcp)(HOST=mprntw507953.cmg.com)(PORT=1036))
      "DEDICATED" established:0 refused:0 state:ready
        LOCAL SERVER
Service "PLSExtProc" has 1 instance(s).
  Instance "PLSExtProc", status UNKNOWN, has 1 handler(s) for this service...
    Handler(s):
      "DEDICATED" established:0 refused:0
        LOCAL SERVER
The command completed successfully
Other Commands in lsnrctl

There are other commands that can be run in lsnrctl. Table 2.3 shows a summary of these commands. Type the command at the lsnrctl prompt to execute it.

TABLE 2.3
Summary of the lsnrctl Commands

Command             Definition

change_password     Allows a user to change the password needed to stop the listener.

exit                Exits the lsnrctl utility.

quit                Performs the same function as exit.

reload              Rereads the listener.ora file without stopping the listener. Used to refresh the listener if changes are made to the file.

save_config         Makes a copy of the listener.ora file called listener.bak when changes are made to the listener.ora file from lsnrctl.

services            Lists a summary of services and details information on the number of connections established and the number of connections refused for each protocol service handler.

start listener      Starts the named listener.

status listener     Shows the status of the named listener.

stop listener       Stops the named listener.

trace               Turns on tracing for the listener.

version             Displays the version of the Oracle Net software and protocol adapters.
set Commands in lsnrctl

The lsnrctl utility also has commands called set commands. These commands are issued by typing set at the LSNRCTL> prompt. The set commands are used to make modifications to the listener.ora file, such as setting up logging and tracing. Most of these parameters can also be set using the Oracle Net Manager. To display the current setting of a parameter, use the show command, which displays the current settings of the parameters established with the set command. Table 2.4 shows a summary of the lsnrctl set commands. If you just type set or show, you will see a listing of all of the commands.

TABLE 2.4
Summary of the lsnrctl set Commands set Command
Description
current_listener
Sets the listener to make modifications to or shows the name of the current listener.
displaymode
Sets display for the lsnrctl utility to RAW, COMPACT, NORMAL, or VERBOSE.
Summary of the lsnrctl set Commands (continued) set Command
Description
log_status
Shows whether logging is on or off for the listener.
log_file
Shows the name of listener log file.
log_directory
Shows the log directory location.
rawmode
Shows more detail on STATUS and SERVICES when set to ON. Values: ON or OFF.
startup_waittime
Sets the length of time a listener will wait to respond to a status command in the lsnrctl command line utility.
spawn
Starts external services that the listener is listening for and that are running on the server.
save_config_on_ stop
Saves changes to the listener.ora file when exiting lsnrctl.
trc_level
Sets the trace level to OFF, USER, ADMIN, SUPPORT.
trc_file
Sets the name of the listener trace file.
trc_directory
Sets the name of the listener trace directory.
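As a quick illustration, here is a hypothetical LSNRCTL> session that targets a non-default listener, raises its trace level, and then confirms the setting with show; the listener name listener1 is illustrative:

LSNRCTL> set current_listener listener1
LSNRCTL> set trc_level ADMIN
LSNRCTL> show trc_level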
Stopping the Listener
In order to stop the listener, you must issue the lsnrctl stop command. This command stops the default listener. To stop a non-default listener, include the name of the listener. For example, to stop a listener named LISTENER1, type lsnrctl stop listener1. If you are inside the LSNRCTL> facility, you will stop whatever listener is defined by the current_listener setting. To see what the current listener is set to, use the show command. The default value is LISTENER. Stopping the listener does not affect clients already connected to the database. It only means that no new connections can use this listener until the listener is restarted. The following code shows what the stop command looks like:

C:\>lsnrctl stop

LSNRCTL for 32-bit Windows: Version 9.0.1.1.1 - Production on 08-OCT-2001 20:43:52
Copyright (c) 1991, 2001, Oracle Corporation.  All rights reserved.
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=mprntw507953)
(PORT=1521)))
The command completed successfully
Dynamic Registration of Services

Oracle9i databases have the ability to automatically register their presence with an existing listener. The instance will register with the listener defined on the local machine. Dynamic service registration allows you to take advantage of other features, such as load balancing and automatic failover. The PMON process is responsible for registering this information with the listener. When dynamic service registration is used, you will not see the service listed in the listener.ora file. To see the service listed, you should run the lsnrctl services command. Be aware that if the listener is started after the Oracle instance, there may be a time lag before the instance actually registers information with the listener.

In order for an instance to automatically register with a listener, the listener must be configured as a default listener, or you must specify the init.ora parameter LOCAL_LISTENER. The LOCAL_LISTENER parameter defines the location of the listener with which you want the Oracle server to register. A default listener definition is shown below:

Listener Name = LISTENER
Port = 1521
Protocol = TCP/IP

And here is an example of the LOCAL_LISTENER parameter being used to register the Oracle server with a non-default listener:

local_listener="(ADDRESS_LIST = (Address = (Protocol = TCP)(Host = weishan)(Port = 1522)))"

In the example above, the Oracle server will register with the listener listening on port 1522 using TCP/IP. This is a non-default port location, so you must use the LOCAL_LISTENER parameter in order for the registration to take place.
Two other init.ora parameters must be configured to allow an instance to register information with the listener: INSTANCE_NAME and SERVICE_NAMES. The INSTANCE_NAME parameter is set to the name of the Oracle instance you would like to register with the listener. The SERVICE_NAMES parameter is a combination of the instance name and the domain name. The domain name is set to the value of the DB_DOMAIN initialization parameter. For example, if your DB_DOMAIN is set to GR.COM and your Oracle instance is DBA, the parameters would be set as follows:

Instance_name = DBA
Service_names = DBA.GR.COM

If you are not using domain names, the INSTANCE_NAME and SERVICE_NAMES parameters should be set to the same value.
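Putting these pieces together, a minimal init.ora fragment that registers an instance named DBA with the non-default listener from the earlier example might look like the following sketch; the host name and port are illustrative:

# init.ora fragment (illustrative values)
instance_name  = DBA
service_names  = DBA.GR.COM
local_listener = "(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=weishan)(PORT=1522)))"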
Configure the Listener for Oracle9i JVM
You can configure the Oracle Net services to respond to requests to the Oracle9i Java Virtual Machine (JVM) over TCP/IP or TCP/IP with Secure Sockets Layer (SSL). The Java Virtual Machine runs within the Oracle server. Client processes can interact with processes that run within the JVM directly over HTTP or Internet Inter-ORB Protocol (IIOP). If you are using pre-Oracle 8.1 instances with these options, the listener addresses must be configured manually as outlined below. If you are using Oracle9i databases and listeners, dynamic registration takes care of registering these services with the listener. The steps to add the service name information are as follows:

1. Start the Oracle Net Manager.
2. Choose Local and Listener.
3. Select a listener and then choose Listening Locations from the drop-down choices on the right-hand panel.
4. Choose Add Address.
5. Select TCP/IP or TCP/IP With SSL.
6. Enter a hostname in the database host field.
7. Enter port 2481 for the TCP/IP protocol or 2482 for the TCP/IP with SSL protocol.
8. Select Statically Dedicate This Address For Jserver Connections.
9. Select File and Save Configuration.
Here is an example of the entry made in the listener.ora file that contains the configuration information to connect to an Oracle server over TCP/IP with SSL using IIOP or HTTP:

# LISTENER.ORA Network Configuration File:
# D:\oracle\ora90\NETWORK\ADMIN\listener.ora
# Generated by Oracle configuration tools.
LISTENER =
  (DESCRIPTION_LIST =
    (DESCRIPTION =
      (ADDRESS = (PROTOCOL = TCP)(HOST = mprntw507953)(PORT = 1521))
    )
    (DESCRIPTION =
      (ADDRESS = (PROTOCOL = TCPS)(HOST = mprntw507953)(PORT = 2482))
      (PROTOCOL_STACK =
        (PRESENTATION = GIOP)
        (SESSION = RAW)
      )
    )
  )
Troubleshooting Server-Side Connection Problems
Even if it seems that you have configured Oracle server-side components correctly, network errors may still occur that need to be addressed. There are any number of reasons for a network problem:
The client, middle tier, or Oracle server may not be configured properly.
The client may not be able to resolve the net service name.
The underlying network protocol may not be active on the server; for example, the TCP/IP process on the server may not be running.
The user may enter an incorrect net service name, user ID, or password.
These types of errors can be diagnosed and corrected easily. In the section entitled “Server-Side Computer and Database Checks,” you will see how to diagnose and correct connection problems originating from the Oracle server. In the next chapter, we will discuss troubleshooting problems with client-side network configuration. When a client has a connection problem that is up to you to fix, it is helpful to first gather information about the situation. Make sure you record the following information:
The Oracle error received.
The location of the client. Is the client connecting from a remote location, or is the client connected directly to the server?
The name of the Oracle server to which the client is attempting to connect.
Check if other clients are having connection problems. If other clients are experiencing problems, are these clients in the same general location?
Ask the user what is failing. Is it the application being used or the connection?
We will now look at the particular network areas to check and the methods used to further diagnose connection problems from the Oracle server. We will also look at the Oracle error codes that will help identify and correct the problems.
Server-Side Computer and Database Checks
There are several server-side checks that can be performed if a connection problem occurs. Before running such checks, make sure the machine is running, that the Oracle server is available, and that the listener is active. Here is a summary of checks to perform on the server.
Check Server Machine
Make sure the server machine is active and available for connections. On some systems, it is possible to start a system in a restricted mode that allows only supervisors or administrators to log in to the computer. Make sure that the computer is open and available to all users.
On a TCP/IP network, you can use the ping utility to test for connectivity to the server. Here is an example of using ping to test a network connection to a machine called matt:

C:\users\default>ping matt
Pinging cupira03.cmg.com [10.69.30.113] with 32 bytes of data:
Reply from 10.69.30.113: bytes=32 ...
Reply from 10.69.30.113: bytes=32 ...
Reply from 10.69.30.113: bytes=32 ...
Reply from 10.69.30.113: bytes=32 ...

Check Oracle Instance
Make sure the Oracle instance is started. If a client attempts to connect while the instance is down, the attempt fails with an ORA-01034 error, which means the Oracle instance is not running; you need to start up the Oracle instance. An accompanying ORA-27101 error means that there is currently no instance available to connect to for the specified ORACLE_SID.
Verify That Database Is Open to All Users
A database can be opened in restricted mode. This means that only users with restricted mode access can use the system. This is not a networking problem, but it will lead to clients being unable to connect to the Oracle server.

D:\>sqlplus scott/tiger@MJW
SQL*Plus: Release 9.0.1.0.0 - Production on Mon Oct 15 11:37:25 2001
(c) Copyright 2000 Oracle Corporation.  All rights reserved.
ERROR:
ORA-01035: ORACLE only available to users with RESTRICTED SESSION privilege
Enter user-name:
Check User Privileges
Make sure the user attempting to establish the connection has been granted the CREATE SESSION privilege to the database. This privilege is needed for a user to connect to the Oracle server. If the client does not have this privilege, the connection is refused, as shown here:

D:\oracle\ora90\BIN>sqlplus matt/matt
SQL*Plus: Release 9.0.1.0.1 - Production on Mon Oct 15 14:04:51 2001
(c) Copyright 2001 Oracle Corporation.  All rights reserved.
ERROR:
ORA-01045: user MATT lacks CREATE SESSION privilege; logon denied

Here is an example of how a DBA would grant the CREATE SESSION privilege to a user:

SQL> grant create session to matt;
Grant succeeded.
SQL>
Network Checks
After you validate that the server where the database is located is up and available and you verify that the user has proper privileges, you should begin checking for any underlying network problems on the server.
Check Listener
Make sure the listener is running on the Oracle server. Make sure you check the services for all of the listeners on the Oracle server; you can use the lsnrctl status command to do this. The following command shows the status of the default listener named LISTENER:

D:\>lsnrctl status
LSNRCTL for 32-bit Windows: Version 9.0.1.1.1 - Production on 03-JAN-2002 10:56:04
Copyright (c) 1991, 2001, Oracle Corporation.  All rights reserved.
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=mprntw507953)
(PORT=1521)))
STATUS of the LISTENER
------------------------
Alias                     LISTENER
Version                   TNSLSNR for 32-bit Windows: Version 9.0.1.1.1 - Production
Start Date                03-JAN-2002 08:38:30
Uptime                    0 days 2 hr. 17 min. 34 sec
Trace Level               off
Security                  OFF
SNMP                      OFF
Listener Parameter File   D:\oracle\ora90\network\admin\listener.ora
Listener Log File         D:\oracle\ora90\network\log\listener.log
Listening Endpoints Summary...
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=10.72.127.148)(PORT=2481))
  (PRESENTATION=GIOP)(SESSION=RAW))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcps)(HOST=10.72.127.148)(PORT=2482))
  (PRESENTATION=GIOP)(SESSION=RAW))
Services Summary...
Service "MJW" has 1 instance(s).
  Instance "MJW", status READY, has 3 handler(s) for this service...
Service "PLSExtProc" has 1 instance(s).
  Instance "PLSExtProc", status UNKNOWN, has 1 handler(s) for this service...
The command completed successfully

Also check the services for which the listener is listening. You must see the service to which the client is attempting to connect. If the service is not listed, the client may be entering the wrong service name, or the listener may not be configured to listen for this service.
Check GLOBAL_DBNAME
If the client is using the hostnaming method, make sure the GLOBAL_DBNAME parameter is set to the name of the host machine. This parameter appears in the service definition of the listener.ora file; verify the setting by reviewing that file, as in the sketch below.
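For reference, a static service definition carrying GLOBAL_DBNAME looks roughly like the following sketch; the host name, SID, and Oracle home shown are illustrative values, and GLOBAL_DBNAME must match the host machine name for hostnaming to work:

SID_LIST_LISTENER =
  (SID_LIST =
    (SID_DESC =
      (GLOBAL_DBNAME = mprntw507953)
      (SID_NAME = MJW)
      (ORACLE_HOME = D:\oracle\ora90)
    )
  )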
Check Listener Protocols
Check the protocols for which the listener is configured. This is displayed by the lsnrctl services command. Make sure the protocol of the service matches the protocol the client is using when requesting a connection. If the client is requesting to connect with a protocol the listener is not listening for, the user will receive an ORA-12541 "No Listener" error.
Check Server Protocols
Make sure the underlying network protocol on the server is active. For systems that run TCP/IP, you can attempt to use the ping command to ping the server. This will verify that the TCP/IP daemon process is active on the server. There are other ways to check this, such as verifying the services on Windows NT/2000 or using the ps command on Unix. An example of the ping command can be found in the next chapter.
Check Server Protocol Adapters
Make sure the appropriate protocol adapters have been installed on the server. On most platforms, you can invoke the Oracle Universal Installer program and check the list of installed protocols. On Unix platforms, you can use the adapters utility to make sure the appropriate adapters have been linked to Oracle. An example of how to run this utility is provided below. This utility is located in the $ORACLE_HOME/bin directory.

[root@localhost] ./adapters oracle

Net protocol adapters linked with oracle are:
    BEQ
    IPC
    TCP/IP
    RAW

Net Naming Adapters linked with oracle are:
    Oracle TNS Naming Adapter
    Oracle Naming Adapter

Advanced Networking Option/Network Security products linked with oracle are:
    Oracle Security Server Authentication Adapter

If the required protocol adapter is not listed, you have to install the adapter. This can be done by using the Oracle Installer, installing the Oracle Net Server software, and choosing the appropriate adapters during the installation process.
Check for Connection Timeouts
If the client is receiving an ORA-12535 or an ORA-12547 error, the client is timing out before a valid connection is established. This can occur if you have a slow network connection. You can attempt to solve this problem by increasing the time the listener will wait for a valid response from the client: set the CONNECT_TIMEOUT_listener_name parameter in the listener.ora file to a higher number. This is the number of seconds the listener waits for a valid response from the client when establishing a connection.
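As a sketch, the corresponding entry for the default listener in listener.ora might look like this; the 120-second value is purely illustrative:

# Allow clients up to 120 seconds to complete the connection handshake
CONNECT_TIMEOUT_LISTENER = 120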
Oracle Net Logging and Tracing on the Server
If a network problem persists, you can use logging and tracing to help resolve it. Oracle generates information into log files and trace files that can assist you in tracking down network connection problems. You can use logging to find out general information about the success or failure of certain components of the Oracle Network. Tracing can be used to get in-depth information about specific network connections.
By default, Oracle produces logs for clients and the Oracle listener. Client logging cannot be disabled.
Logging records significant events, such as starting and stopping the listener, along with certain kinds of network errors. Errors are generated in the log in the form of an error stack. The listener log records information such as the version number, connection attempts, and the protocols it is listening for. Logging can be enabled at the client, middle tier, and server locations.
Tracing, which you can also enable at the client, middle-tier, or server location, records all events that occur on a network, even when an error does not happen. The trace file provides a great deal of information that logs do not, such as the number of network round trips made during network connection or the number of packets sent and received during a network connection. Tracing enables you to collect a thorough listing of the actual sequence of the statements as a network connection is being processed. This gives you a much more detailed picture of what is occurring with connections the listener is processing.
Use Tracing Sparingly
Tracing should be used only as a last resort if you are having connectivity problems between the client and server. You should complete all of the server-side checks described above before you resort to tracing. The tracing process generates a significant amount of overhead and, depending on the trace level set, it can create some very large files. This activity will impede system I/O performance because of all of the information that is written to the logs, and if left unchecked, it could fill up your disk or file system. I was once involved with a large project that was using JDBC to connect to the Oracle server. We were having difficulty with connections being periodically dropped between the JDBC client and the Oracle server. Tracing was enabled to help us figure out what the problem was. We did eventually correct the problem (it was a problem with how our DNS name server was configured), but the tracing was inadvertently left on. When the system eventually went into production, the trace files grew so large that they filled up the disk where tracing was being collected. To prevent this from happening, periodically check that the trace parameters are not turned on, and turn them off if they are.
Use the Oracle Net Manager to enable most logging and tracing parameters. Many of the logging and tracing parameters are found in the sqlnet.ora file. Let's take a look at how to enable logging and tracing for the various components in an Oracle Network.
Server Logging
By default, the listener is configured to enable the generation of a log file. The log file records information about listener startup and shutdown, successful and unsuccessful connection attempts, and certain types of network errors. By default, the listener log location is $ORACLE_HOME/network/log on Unix and %ORACLE_HOME%\network\log on Windows NT/2000. The default name of the file is listener.log. The information in the listener.log file is in a fixed-length, delimited format with each field separated by an asterisk. If you want to do further analysis of the information in the log, the data in the log can be loaded into an Oracle table using a tool like SQL*Loader. Notice in the sample listing below that the file contains information about connection attempts, the name of the program executing the request, and the name of the client attempting to connect. The last field will contain a zero if a request was successfully completed.

TNSLSNR for 32-bit Windows: Version 9.0.1.1.1 - Production on 02-OCT-2001 09:52:02
Copyright (c) 1991, 2001, Oracle Corporation.  All rights reserved.
System parameter file is D:\oracle\ora90\network\admin\listener.ora
Log messages written to D:\oracle\ora90\network\log\listener.log
Trace information written to D:\oracle\ora90\network\trace\listener.trc
Trace level is currently 0
Started with pid=260
Listening on: (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=mprntw507953.cmg.com)(PORT=1521)))
Listening on: (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(PIPENAME=\\.\pipe\EXTPROC0ipc)))
TIMESTAMP * CONNECT DATA [* PROTOCOL INFO] * EVENT [* SID] * RETURN CODE
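To illustrate the SQL*Loader approach mentioned above, a minimal control file for the asterisk-delimited log records might look like the following sketch; the target table listener_log and its column names are hypothetical and must match whatever analysis table you create:

-- listener_log.ctl (hypothetical table and columns)
LOAD DATA
INFILE 'listener.log'
APPEND
INTO TABLE listener_log
FIELDS TERMINATED BY '*'
TRAILING NULLCOLS
(log_timestamp, connect_data, protocol_info, event, sid, return_code)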
Server Tracing
As mentioned earlier, tracing gathers information about the flow of traffic across a network connection. Data is transmitted back and forth in the form of packets. A packet contains sender information, receiver information, and data. Even a single network request can generate a large number of packets. In the trace file, each line of the file starts with the name of the procedure executed in one of the Oracle Net layers, followed by a set of hexadecimal numbers. The hexadecimal numbers are the actual data transmitted. If you are not encrypting the data, sometimes you will see the actual data after the hexadecimal numbers. Each of the Oracle Net procedures is responsible for a different action. Each packet has a different code type depending on the action being taken. All of the packet types start with NSP. Here is a summary of the common packet types:

Packet Keyword    Packet Type
NSPTAC            Accept
NSPTRF            Refuse
NSPTRS            Resend
NSPDA             Data
NSPCNL            Control
NSPTMK            Marker
If you are doing server-to-server communications and have a sqlnet.ora file on the server, you can enter information in the Server Information section of the Oracle Net Manager tracing screen. This provides tracing information for server-to-server communications.
There are also several numeric codes that are used to help diagnose and troubleshoot problems with Oracle Net connections. These codes can be found in the trace files. Here is an example of a line from the trace file that contains a code value:

nspsend: plen=12, type=4

Here is a summary of the numeric codes that you could encounter in a trace file:

Code    Packet Type
1       Connect packet
2       Accept packet
3       Acknowledge packet
4       Refuse packet
5       Redirect packet
6       Data packet
7       Null packet, empty data
9       Abort packet
11      Resend packet
12      Marker packet
13      Attention packet
14      Control information packet
Enabling Server Tracing
You can enable server tracing from the same Oracle Net Manager screens shown earlier. Simply choose the Tracing Enabled radio button. The default filename and location is $ORACLE_HOME/network/trace/listener.trc on Unix and %ORACLE_HOME%\network\trace\listener.trc on Windows NT/2000. You can set the trace level to OFF, USER, ADMIN, or SUPPORT. The USER level will detect specific user errors. The ADMIN level contains all of the user-level information along with installation-specific errors. SUPPORT is the highest level and can be used to produce information that may be beneficial to Oracle Support personnel. This level can also produce very large trace files. The following listing shows an example of a listener trace file:

nsglhfre: entry
nsglhrem: entry
nsglhrem: entry
nsglhfre: Deallocating cxd 0x4364d0.
nsglhfre: exit
nsglma: Reporting the following error stack:
TNS-01150: The address of the specified listener name is incorrect
TNS-01153: Failed to process string:
(DESCRIPTION=(ADDRESS=(PROTOCOL=TC)(HOST=mprntw507953)(PORT=1521)))
nsrefuse: entry
nsdo: entry
nsdo: cid=0, opcode=67, *bl=437, *what=10, uflgs=0x0, cflgs=0x3
nsdo: rank=64, nsctxrnk=0
nsdo: nsctx: state=2, flg=0x4204, mvd=0
nsdo: gtn=152, gtc=152, ptn=10, ptc=2019
nscon: entry
nscon: sending NSPTRF packet
nspsend: entry
nspsend: plen=12, type=4
ntpwr: entry
ntpwr: exit

You can tell what section of the Oracle Net stack the trace file is in by looking at the first two characters of the program names in the trace file. In the example above, nscon refers to the network session (NS) sublayer of the Oracle Net Foundation layer. A message is being sent back to the client in the form of an NSPTRF packet. This is a refuse packet, which means that the requested action is being denied. You see the Oracle error number embedded in the error message. In this example, a TNS-01153 error was generated. This error means that the listener failed to start, and the trace shows the line of information that the listener is failing on. The error could be the result of another process listening on the same location or a syntax problem in the listener.ora file; here, a syntax error has occurred because the protocol was specified as TC rather than TCP. The most recent errors are located at the bottom of the file.

The next example shows a section of the listener.ora file with the logging and tracing parameters enabled:

# D:\ORACLE\ORA90\NETWORK\ADMIN\LISTENER.ORA Configuration
# File: D:\Oracle\Ora90\NETWORK\ADMIN\listener.ora
# Generated by Oracle Net Manager
TRACE_LEVEL_LISTENER = ADMIN
TRACE_FILE_LISTENER = LISTENER.trc
TRACE_DIRECTORY_LISTENER = D:\Oracle\Ora8\network\trace
LOG_DIRECTORY_LISTENER = D:\Oracle\Ora8\network\log
LOG_FILE_LISTENER = LISTENER.log

Table 2.5 contains a summary of the meaning of each of these parameters.
TABLE 2.5  listener.ora Log and Trace Parameters

Parameter               Definition
TRACE_LEVEL_LISTENER    Turns tracing on and off. The levels are OFF, USER, ADMIN, and SUPPORT. SUPPORT generates the greatest amount of data.
Summary

The listener is the main server-side component in the Oracle Net environment. Listener configuration information is stored in the listener.ora file, and the listener is managed using the lsnrctl command line utility. You configure the listener by using the Oracle Net Manager. The Oracle Net Manager provides a graphical interface for creating most of the Oracle Net files you will use for Oracle, including the listener.ora file. If multiple listeners are configured, each one will have a separate entry in the listener.ora file.

Depending on the capabilities of the operating system, two types of dedicated server connections are possible: bequeath and redirect connections. Bequeath connections are possible if the operating system supports a direct handoff of connection information from one process to another. Redirect sessions are used when the operating system does not support this type of interprocess communication; they require extra communication between the listener process, the server process, and the client process.

Oracle9i provides a feature called Dynamic Registration of Services. This feature allows an Oracle instance to automatically register itself with a listener. The listener must be configured with TCP/IP and listen on port 1521, or you must specify the parameter LOCAL_LISTENER in the init.ora file. You must also set the parameters INSTANCE_NAME and SERVICE_NAMES in the init.ora file for the Oracle instance to enable Dynamic Registration.

You can configure logging and tracing on the Oracle server using the Oracle Net Manager. Logging records significant events, such as starting and stopping the listener, along with certain kinds of network errors. Errors are generated in the log in the form of an error stack. Tracing records all events that occur, even when an error does not happen. The trace file provides a great deal of information that logs do not. Tracing uses much more space than logging and can also have an impact on system performance. Enable tracing only if other methods of troubleshooting fail to resolve the problem.

Configuring the Oracle server correctly is the first step to successfully implementing Oracle in a network environment. If you do not have the Oracle server network components configured correctly, you will be unable to provide connection support to clients in the Oracle environment. The server network components should be configured and tested prior to moving on to configuring the Oracle clients as described in Chapter 3.
Exam Essentials

Be able to define the main responsibilities of the Oracle listener. To fully understand the function of the Oracle listener, you should understand how the listener responds to client connection requests. In addition, you should know the difference between bequeath connections and redirect connections, and you should know under what circumstances the listener will use each. Also, you should be able to outline the steps involved in using each of these connection types.

Be able to define what the listener.ora file is and the ways in which the file is created. To understand the purpose of this file, you should know its default contents and know how to make changes to it using the Oracle Net Manager tool. In addition, you should be able to define the different sections of the file and know the definitions of the optional parameters it contains. You should also understand the structure of the listener.ora file when one or more listeners are configured.

Understand how to use the lsnrctl command line utility. In order to start up and shut down the listener, you should know how to use the lsnrctl command line utility. You will also need to be able to explain the command line options for the lsnrctl utility, such as services, status, and reload. When using this utility, you should also know the different options available to you, and you should be able to define the various set commands.

Understand the concepts of static and dynamic service registration. Be able to define the difference between static service registration and dynamic service registration and know the advantages of using dynamic service registration over static service registration. Also, be aware of the situations in which you have to use static service registration. And lastly, be familiar with the init.ora parameters that you will need to set in order to enable dynamic service registration.

Understand the basics of Oracle and Java connectivity. You should know the basics of configuring Oracle to enable connections to the Oracle9i JVM using IIOP and HTTP.

Be able to diagnose and correct network connectivity problems. You should know the types of server-side errors that can occur and how to diagnose and correct these problems. You should be able to define the difference between logging and tracing and know how to use the types of packet information that you may find in a trace file.
Key Terms

Before you take the exam, be certain you are familiar with the following terms:

bequeath connection
Review Questions

1. Which file must be present on the Oracle server to start the Oracle listener?
   A. listener.ora
   B. lsnrctl.ora
   C. sqlnet.ora
   D. tnsnames.ora

2. What are the possible ways in which the listener may connect a user to an Oracle9i instance? (Choose all that apply.)
   A. Prespawned connection
   B. Redirect connection
   C. Bequeath connection
   D. Multipass connection

3. What is the default name of the Oracle listener?
   A. lsnrctl
   B. Listen
   C. sqlnet
   D. tnslistener
   E. None of the above

4. What is the maximum number of databases a listener processes?
   A. 1 database
   B. 2 databases
   C. 10 databases
   D. 25 databases
   E. None of the above

5. What is the maximum number of listener.ora files that should exist on a server?
   A. One
   B. Two
   C. Four
   D. Eight
   E. None of the above

6. Which of the following does this phrase characterize? “…records all events that occur on a network, even when an error does not happen.”
   A. Oracle Net Manager
   B. Network tracing
   C. Network logging
   D. None of the above

7. When automatic registration of services is used, you will not see the service listed in which of the following files?
   A. sqlnet.ora
   B. tnsnames.ora
   C. listener.ora
   D. None of the above

8. Which of the following are not default listener.ora settings? (Choose all that apply.)
   A. Listener name = LISTENER
   B. Port = 1521
   C. Protocol = IPX
   D. Protocol = TCP/IP
   E. Listener name = lsnrctl

9. Which of the following is the command-line interface used to administer the listener?
   A. LISTENER
   B. lismgr
   C. TCPCTL
   D. lsnrctl
   E. None of the above

10. Which Oracle Net Manager icon should you choose to manage listeners?
    A. Services
    B. Listener Names
    C. Profile
    D. Listeners
    E. None of the above

11. Which parameter sets the number of seconds a server process waits to get a valid client request?
    A. connect_waittime_listener_name
    B. connect_wait_listener_name
    C. timeout_listener_name
    D. connect_timeout_listener_name

12. Which of the following is the trace level that will produce the largest amount of information?
    A. ADMIN
    B. USER
    C. ALL
    D. SUPPORT
    E. None of the above

13. What is the maximum number of listeners that can be configured for a server?
    A. One
    B. Two
    C. Four
    D. Eight
    E. None of the above

14. There is a listener called LISTENER. Which of the following is the correct way to start this listener?
    A. lsnrctl startup listener
    B. lsnrctl start
    C. listener start
    D. listener start listener

15. There is a listener called listenerA. Which of the following is the correct command to start this listener?
    A. lsnrctl startup listenerA
    B. lsnrctl start
    C. listener start
    D. listener startup
    E. lsnrctl start listenerA

16. Modifications have been made to the listener.ora file from Oracle Net Manager. When will these modifications take effect?
    A. Immediately
    B. After exiting the Oracle Net Manager
    C. Upon saving the listener.ora file
    D. After executing lsnrctl refresh
    E. None of the above

17. There is a listener called listener1 that you want to edit using the lsnrctl utility. What command would you use to target the listener as the current listener for editing?
    A. set current_listener listener1
    B. accept current_listener listener1
    C. reload listener listener1
    D. refresh listener listener1

18. Modifications have been made using the lsnrctl facility. What must be set to ON in order to make the changes permanent?
    A. save_configuration
    B. save_listener.ora
    C. save_config_on_stop
    D. configuration_save

19. The administrator or DBA wants to make a backup of the listener file after making changes using lsnrctl. Which command must be implemented to make this backup from the lsnrctl facility?
    A. create_backup
    B. save_config_on_stop
    C. save_config
    D. save_backup

20. What is the port number to use when you are configuring the listener for Oracle9i JVM on TCP/IP with SSL?
    A. 1521
    B. 2482
    C. 2481
    D. 1526
    E. None of the above
Answers to Review Questions

1. A. The listener is the process that manages incoming connection requests. The listener.ora file is used to configure the listener. The sqlnet.ora file is an optional client- and server-side file. The tnsnames.ora file is used for doing local naming resolution. There is no such file as lsnrctl.ora.

2. B, C. The listener can handle a connection request in one of two ways: it can spawn a process and bequeath (pass) control to that process, or it can redirect the process to a dedicated process or dispatcher when using Oracle Shared Server.

3. E. When creating a listener with the Oracle Net Manager, the Assistant recommends LISTENER as the default name. When you are starting and stopping the listener via the command line tool, the tool assumes the name of the listener is LISTENER if no listener name is supplied.

4. E. There is no physical limit to the number of services a listener can listen for.

5. A. Although a listener can listen for an unlimited number of services, only one listener.ora file is used. If multiple listeners are configured, there will still be only one listener.ora file.

6. B. Network tracing is what records all events on the network even if there is no error involved. Tracing should be used sparingly and only as a last resort in the case of network problems. Logging will log only significant events such as listener startup and connection requests.

7. C. When services are dynamically registered with the listener, their information is not present in the listener.ora file.

8. C, E. A default listener has a name of LISTENER and listens on port 1521 using the TCP/IP protocol.

9. D. LISTENER is the default name of the Oracle listener. There is no such utility as lismgr. TCPCTL was actually an old utility used to start and stop the SQL*Net version 1 listener. The lsnrctl command is used to manage the listener.

10. D. Become familiar with the Oracle Net Manager interface. Listeners is the correct choice. Profile is used for sqlnet.ora administration. The other choices are not valid menu options.

11. D. When a user makes a connection request, the listener passes control to some server process or dispatcher. Once the user is attached to this process, all negotiations and interaction with the database pass through this process. If the user supplies an invalid user ID or password, the process waits for a period of time for a valid response. If the user does not contact the server process with a valid response in the allotted time, the server process terminates, and the user must contact the listener so that the listener can again spawn a process or redirect the client to an existing dispatcher. This period of time that the process waits is specified by the connect_timeout_listener_name parameter. This parameter is specified in seconds.

12. D. The highest level of tracing available is the SUPPORT level. This is the level that would be used to trace packet traffic information.

13. E. There is no maximum number of listeners that can be configured per server.

14. B. The default listener name is LISTENER. Since this is the default, simply enter lsnrctl start. The name LISTENER is assumed to be the listener to start in this case.

15. E. Oracle expects the listener to be called LISTENER by default. The name of the facility to start the listener is lsnrctl. Using lsnrctl start will start the default listener. To start a listener with another name, enter lsnrctl start listener_name.

16. E. Anytime modifications are made to the listener file using the Oracle Net Manager, either manually or by using lsnrctl, the listener must be reloaded for the modifications to take effect. To perform this reload, get to a command line and enter lsnrctl reload. You could also stop and start the listener, which will have the same effect. Since lsnrctl reload is not one of the choices, none of the above is the correct answer.

17. A. If you want to administer any listener besides the default listener when using lsnrctl, you must target that listener. set commands are used to change lsnrctl session settings. So, set current_listener listener1 would be the correct command.

18. C. Changes made to the listener.ora file in the lsnrctl facility can be made permanent. To make changes permanent, set the save_config_on_stop option to ON.

19. C. The DBA can make a backup of the existing listener.ora file after making modifications to it using lsnrctl. The backup will be named listener.bak. This is done with the save_config option.

20. B. Port 2482 is the port to use when you want to configure Oracle Net for HTTP and IIOP connections over TCP/IP with SSL. Port 2481 is used when TCP/IP is used for HTTP and IIOP connections.
Configuring Oracle Net for the Client

ORACLE9i: DBA FUNDAMENTALS II EXAM OBJECTIVES COVERED IN THIS CHAPTER:

Describe the difference between host naming and local service name resolution.

Use Oracle Net Configuration Assistant to configure: Host Naming, Local naming method, Net service names.

Perform simple connection troubleshooting.

Exam objectives are subject to change at any time without prior notice and at Oracle’s sole discretion. Please visit Oracle’s Certification website (http://www.oracle.com/education/certification/) for the most current exam objectives listing.

Once the Oracle server is properly configured, you can focus on getting the clients configured to allow for connectivity to the Oracle server. This chapter details the basic network elements of Oracle client configuration. It discusses the different types of service name models you can choose from when creating net service names. In addition, it details available service resolution configurations and how to configure hostnaming and localnaming using the Oracle Net Manager. It also discusses the types of client-side failures that can happen and how to troubleshoot client-side connection problems.

It is important to understand how to configure the Oracle clients for connectivity to the Oracle servers. Without proper knowledge of how to configure the client, you are limited in your connection choices to the server. The DBA must understand the network needs of the organization and the type of connectivity that is required (client/server connections versus 3-tier connectivity, for example) in order to make the appropriate choices about client-side configuration. This chapter should help clarify the client-side connectivity options available to you and how to troubleshoot client connection problems.
Client-Side Names Resolution Options
When a client needs to connect to an Oracle server, the client must supply three pieces of information: a user ID, a password, and a net service name. The net service name provides the necessary information, in the form of a connect descriptor, to locate an Oracle service in a network. This connect descriptor describes the path to the Oracle server and its service name, which is an alias for an Oracle database. This information is kept in different locations depending on the names resolution method that you choose. The three methods of net service name resolution are hostnaming, localnaming, and the Oracle Internet Directory. Normally, you will choose just one of these methods, but you can use any combination.
The Oracle Names Server is still available in Oracle9i, although no further development is being done for the Oracle Names Server. The Oracle Names Server is being replaced by the Oracle Internet Directory as the preferred names resolution method for large Oracle networks.
Choosing hostnaming is advantageous when you want to reduce the amount of configuration work necessary. However, there are a few prerequisites that you must consider before you use this option; we will talk about these and discuss configuring this method for use shortly. This option is typically used in small networks that have few Oracle servers to maintain. Localnaming is the most popular names resolution method used. This method involves configuring the tnsnames.ora file, which contains the connect descriptor information to resolve the net service names. You will see how to configure localnaming after the discussion of hostnaming that follows. Oracle Internet Directory is advantageous when you are dealing with complex networks that have many Oracle servers. When the DBA chooses this method, they will be able to configure and manage net service names and connect descriptor information in a central location. It is important to understand that Oracle Internet Directory is available, but in-depth knowledge of this option is not necessary for success on the OCP exam.
The Hostnaming Method
In small networks with few Oracle servers to manage, you can take advantage of the hostnaming method. Hostnaming saves you from having to do configuration work on the clients, although it does have limitations. There are four prerequisites to using hostnaming:
You must use TCP/IP as your network protocol.
You must not use any advanced networking features, such as Oracle Connection Manager.
You must have an external naming service, such as DNS, or have a HOSTS file available to the client.
The listener must be set up with the GLOBAL_DBNAME parameter equal to the name of the machine.
Configuring the Hostnaming Method
By default, Oracle attempts to use the hostnaming method from the client only after it attempts connections using localnaming. If you want to override this default search path for resolving names, set the NAMES.DIRECTORY_PATH parameter in the sqlnet.ora file on the client so that it searches for hostnaming only. You can configure this parameter using the Oracle Net Manager (see Figure 3.1). To configure the parameter using Oracle Net Manager, choose Profile from the Local tab and select Naming from the drop-down list at the top of the screen. This brings up a list of naming methods that are available. The Selected Methods list displays the naming methods being used and the order in which the methods are used to resolve service names. The Available Methods list displays the methods that have not been included in the selected methods. To change the list of available methods, use your mouse to highlight a method name and click the arrow key (>) to include it in the list of selected methods. You can remove a name by selecting it in the list of selected methods and clicking the other arrow key (<).

C:\>ping mil02ora
Pinging mil02ora [10.1.5.210] with 32 bytes of data:
Reply from 10.1.5.210: bytes=32 time...
Reply from 10.1.5.210: bytes=32 time...
Reply from 10.1.5.210: bytes=32 time...
Reply from 10.1.5.210: bytes=32 time...

  (ADDRESS=(PROTOCOL=tcp)(HOST=mprntw507953.cmg.com)(PORT=1038))
"D000" established:15 refused:3 current:2 max:1002 state:ready
  DISPATCHER <machine: MPRNTW507953, pid: 117>
  (ADDRESS=(PROTOCOL=tcp)(HOST=mprntw507953.cmg.com)(PORT=1036))
The command completed successfully
Data Dictionary Views for Shared Server
The data dictionary provides views you can query to gather information about the Shared Server environment. These views provide information about the number of dispatchers and shared servers configured, the activity among the shared servers and dispatchers, the activity in the request and response queues, and the clients that are connected with shared server connections. The data dictionary views are described in the following sections. For a complete listing of all of the column definitions for the V$ views, consult the Oracle9i Database Reference Release 1 (9.0.1), Part Number A90190-02.
V$DISPATCHER Dictionary View
The V$DISPATCHER view contains information about the dispatchers. You can collect information about the dispatchers' activity, the number of connections the dispatchers are currently handling, and the total number of connections each dispatcher has handled since instance startup. Here is a sample output from the V$DISPATCHER view:

SQL> select name, status, messages, idle, busy, bytes, breaks
  2  from v$dispatcher;

NAME STATUS MESSAGES IDLE BUSY BYTES BREAKS
---- ------ -------- ---- ---- ----- ------
D000 ...
D001 ...
V$DISPATCHER_RATE Dictionary View
The V$DISPATCHER_RATE view shows statistics for the dispatchers, such as the average number of bytes processed, the maximum number of inbound and outbound connections, and the average rate of bytes processed per client connection. The columns in the table that begin with CUR show current statistics. Columns that begin with AVG or MAX show historical statistics taken at some time interval. The time interval is typically measured in hundredths of a second. The scale measurement periods used for each of the column types are contained in the columns that begin with SCALE. This information can be useful when you are taking load measurements for the dispatchers. Here is a sample of the output from this view:

SQL> select name, cur_event_rate, cur_msg_rate, cur_svr_byte_rate
  2  from v$dispatcher_rate;

NAME CUR_EVENT_RATE CUR_MSG_RATE CUR_SVR_BYTE_RATE
---- -------------- ------------ -----------------
D000             12            0                 0
D001             14            0                 1
V$QUEUE Dictionary View
The V$QUEUE dictionary view contains information about the request and response queues. The information deals with how long requests are waiting in the queues. This information is valuable when you are trying to determine if more shared servers are needed. The following example shows the COMMON request queue and two response queues:

SQL> select * from v$queue;

PADDR    TYPE       ...
-------- ---------- ---
00       COMMON     ...
03C6C244 DISPATCHER ...
03C6C534 DISPATCHER ...
V$CIRCUIT Dictionary View
V$CIRCUIT displays information about Shared Server virtual circuits, such as the volume of information that has passed between the client and the dispatcher and the current status of the client connection. The SADDR column displays the session address for the connected session. This can be joined to the V$SESSION view to display information about the user to whom this connection belongs, as shown in the sketch after this listing. Here is a sample output from this view:

SQL> select circuit, dispatcher, server, waiter WTR,
  2  status, queue, bytes, saddr from v$circuit;

CIRCUIT  DISPATCH SERVER   WTR STATUS QUEUE  BYTES SADDR
-------- -------- -------- --- ------ ------ ----- --------
03E2A624 03C6C244 00       00  NORMAL NONE   47330 03C7AB68
03E2A724 03C6C534 03C6BC64 00  NORMAL SERVER 43572 03C79BE8
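Here is a sketch of the join to V$SESSION mentioned above; it assumes the standard SADDR column in both views:

select s.username, c.circuit, c.status, c.queue, c.bytes
from   v$circuit c, v$session s
where  c.saddr = s.saddr;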
V$SHARED_SERVER Dictionary View
This view contains information about the shared server processes. It displays information about the number of requests and the amount of information processed by the shared servers. It also indicates the status of the shared server (i.e., whether it is active or idle).

SQL> select name, status, messages, bytes, idle, busy, requests
  2  from v$shared_server;
V$SHARED_SERVER_MONITOR Dictionary View
This view contains information that can assist in tuning the Shared Server. This includes the maximum number of concurrent connections attained since instance startup and the total number of servers started since instance startup. The query below shows an example of output from the V$SHARED_SERVER_MONITOR view:

SQL> select maximum_connections "MAX CONN", maximum_sessions "MAX SESS",
  2  servers_started "STARTED"
  3  from v$shared_server_monitor;

MAX CONN  MAX SESS  STARTED
--------  --------  -------
     115       120       10
V$SESSION Dictionary View
This view contains information about the client session. The SERVER column indicates whether this client is using a dedicated session or a dispatcher. The listing below shows an example of the V$SESSION view displaying the server information. This listing ignores any rows that do not have a username to avoid listing information about the background processes. Notice that user SCOTT has a SERVER value of SHARED, which means SCOTT is connected to a dispatcher. The SYSTEM user is connected using a local connection because the value is NONE. If a user connected using a dedicated connection, the value would be DEDICATED.

SQL> select username, program, server from v$session
  2  where username is not null;

USERNAME         PROGRAM           SERVER
---------------- ----------------- ---------
SYSTEM           ...               NONE
SCOTT            ...               SHARED
V$MTS Dictionary View
This view contains information about the configuration of the dispatchers and shared servers. This includes the maximum number of connections for each dispatcher, the number of shared servers that have been started and stopped, and the highest number of shared servers that have been active at the same time. This view gives you an indication of whether more shared server processes should be started. The sample below shows output from this view:

SQL> select maximum_connections MAX_CONN, servers_started SRV_STARTED,
  2  servers_terminated SRV_TERM, servers_highwater SRV_HW
  3  from v$mts;

MAX_CONN SRV_STARTED SRV_TERM SRV_HW
-------- ----------- -------- ------
      60           0        0      2

The V$MTS view is identical in content to the V$SHARED_SERVER_MONITOR view. V$MTS was the name for this view in the Oracle8i release and is still available for reference.
Requesting a Dedicated Connection in a Shared Server Environment
You can have Shared Server and dedicated servers connecting to a single Oracle server. This is advantageous in situations where you have a mix of activity on the Oracle server. Some users may be well suited to shared server connections, while other types of users may be better suited to dedicated connections. By default, if Shared Server is configured, a client is connected to a dispatcher unless the client explicitly requests a dedicated connection. As part of the connect descriptor, the client has to send information requesting a dedicated connection. Configure this option using the Oracle Net Manager. Clients may request this type of connection if the names resolution method is localnaming. This option cannot be used with hostnaming.

Configuring Dedicated Connections When Localnaming Is Used
If you are using localnaming, you add a parameter to the net service name entry in the tnsnames.ora file. The parameter (SERVER=DEDICATED) is added to the DBA net service name. The SERVER parameter can also be abbreviated as SRVR. Here is an example of the entry in the tnsnames.ora file, where the (SRVR = DEDICATED) line requests a dedicated connection for the DBA service:

# D:\ORACLE\ORA90\NETWORK\ADMIN\TNSNAMES.ORA Configuration
# File: D:\Oracle\Ora90\NETWORK\ADMIN\tnsnames.ora
# Generated by Oracle Net Manager
DBA =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(HOST = weishan)(PORT = 1521))
    )
    (CONNECT_DATA =
      (SERVICE_NAME = DBA)
      (SRVR = DEDICATED)
    )
  )
Tuning the Shared Server Option
Before tuning the Shared Server, you should examine the performance of the dispatchers and the shared server processes. You want to make sure that you have enough dispatchers so that clients are not waiting for dispatchers to respond to their requests, and you want to have enough shared server processes so that requests are not waiting to be processed. You also want to configure the Large Pool SGA memory area. The Large Pool is used to store the UGA. The UGA takes the place of the PGA that is used for dedicated servers. The Large Pool is designed to allow the database to request large amounts of memory from a separate area of the SGA. Before the database had a Large Pool design, memory allocations for Shared Server came from the Shared Pool. This caused Shared Server to compete with other processes updating information in the Shared Pool. The Large Pool alleviates the memory burden on the Shared Pool and enhances performance of the Shared Pool.
Configure the Large Pool
You can configure the Large Pool by setting the parameter LARGE_POOL_SIZE in the init.ora file. This parameter can be set to a minimum of 300KB and a maximum of at least 2GB; the maximum setting is operating system dependent. When a default value is used, Oracle adds 250KB per session for each shared server if the DISPATCHERS parameter is specified. If you do not configure a Large Pool, Oracle will place the UGA into the Shared Pool. Because of this, you should configure a Large Pool when using Shared Server so that you don't affect the performance of the Shared Pool. Here is an example of setting the LARGE_POOL_SIZE parameter in the init.ora file:

LARGE_POOL_SIZE = 50M

You can see how much space is being used by the Large Pool by querying the V$SGASTAT view. The free memory row shows the amount available in the Large Pool, and the session heap row shows the amount of space used in the Large Pool. Here is a listing that shows an example of the query:

SQL> select * from v$sgastat where pool = 'large pool';

POOL        NAME                 BYTES
----------- -------------------- ------
large pool  free memory          251640
large pool  session heap          48360
Sizing the Large Pool
The Large Pool should be large enough to hold information for all of your shared server connections. Generally, each connection will need between one and three megabytes of memory, but this depends on the client's type of activity. Clients that do a great deal of sorting or open many cursors will use more memory.

You can gauge how much memory shared server connections are using by querying the V$SESSTAT view. This view contains information about memory utilization per user. The query below shows how to measure the maximum amount of memory for all shared server sessions since the instance was started. You can use this as a guide to determine how much memory you should allocate for the Large Pool. This example shows that the maximum amount of memory used for all shared server sessions is around 240KB:

select sum(value) "Max MTS Memory Allocated"
from   v$sesstat ss, v$statname st
where  name = 'session uga memory max'
and    ss.statistic# = st.statistic#;

Max MTS Memory Allocated
------------------------
                  244416
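If you want to see the same statistic per session rather than as a grand total, the join can be extended with the SID column; here is a sketch:

select ss.sid, ss.value "UGA memory max"
from   v$sesstat ss, v$statname st
where  st.name = 'session uga memory max'
and    ss.statistic# = st.statistic#
order  by ss.value desc;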
Determine Whether You Have Enough Dispatchers
The dispatcher processes can be monitored by querying the V$DISPATCHER view. This view contains information about how busy the dispatcher processes are. Query this view to determine whether it will be advantageous to start more dispatchers. The sample query below runs against the V$DISPATCHER view to show what percentage of the time dispatchers are busy:

select name, (busy / (busy + idle)) * 100 "Dispatcher % Busy Rate"
from   v$dispatcher;

NAME Dispatcher % Busy Rate
---- ----------------------
D000                    ...
D001                    ...

These dispatchers show very little busy time. If dispatchers are busy more than 50 percent of the time, you should consider starting more dispatchers. This can be done dynamically with the ALTER SYSTEM command, as shown in the sketch below. Add one or two more dispatchers and monitor the busy rates of the dispatchers to see if they fall below 50 percent.
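A sketch of adding dispatchers on the fly follows; the protocol and the new count of three are illustrative values:

ALTER SYSTEM SET DISPATCHERS = '(PROTOCOL=TCP)(DISPATCHERS=3)';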
Determine How Long Users Are Waiting for Dispatchers
To measure how long users are waiting for the dispatchers to execute their requests, look at the combined V$QUEUE and V$DISPATCHER views. See the listing below for an example:

SELECT decode(sum(totalq), 0, 'No Responses',
       sum(wait)/sum(totalq)) "Average Wait Time"
FROM   v$queue q, v$dispatcher d
WHERE  q.type = 'DISPATCHER'
AND    q.paddr = d.paddr;

Average Wait Time
-----------------
            .0413

The average wait time for dispatchers is a little more than four hundredths of a second. Monitor this measure over time. If the number is consistently increasing, you should consider adding more dispatchers.
Determine Whether You Have Enough Shared Servers
You can monitor shared servers by using the V$SHARED_SERVER and V$QUEUE dictionary views. The shared servers are responsible for executing client requests and placing the requests in the appropriate dispatcher response queue. The measurement you are most interested in is how long client requests are waiting in the request queue. The longer a request remains in the queue, the longer the client will wait for a response. The following statement will tell you how long requests are waiting in the queue:

select decode(totalq, 0, 'No Requests',
       wait/totalq || ' hundredths of seconds')
       "Average Wait Time per Request"
from   v$queue
where  type = 'COMMON';

Average Wait Time per Request
-----------------------------
.023132 hundredths of seconds

The average wait time in the request queue is a little more than two hundredths of a second. Monitor this measure over time. If the number is consistently increasing, you should consider adding more shared servers, which can also be done dynamically, as shown below.
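A sketch of raising the shared server count dynamically follows; the value of ten is illustrative:

ALTER SYSTEM SET SHARED_SERVERS = 10;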
Summary
The Shared Server is a configuration of the Oracle server that allows you to support a greater number of connections without the need for additional resources. It is important to understand the Shared Server option because it can stave off potentially unnecessary hardware upgrades when you are faced with the problem of the number of processes your server can manage.

In this configuration, user connections share processes called dispatchers. Dispatchers replace the dedicated server processes in a dedicated server environment. The Oracle server is also configured with shared server processes that can process the requests of many clients. The Oracle server is configured with a single request queue in which dispatchers place the client requests that the shared servers will take and process. The shared server processes put the completed requests in the appropriate dispatcher's response queue. The dispatcher then sends the completed request back to the client. These request and response queues are structures added to the SGA.

There are a number of parameters that are added to the init.ora file to configure Shared Server. Dispatchers and shared servers can be added dynamically after the Oracle server has been started. You can add more shared servers and dispatchers up to the maximum value specified.

There are several V$ views that are used to monitor Shared Server. The information contained in these views pertains to dispatchers, shared server processes, and the clients that are connected to the dispatcher processes. You can use the V$ views to tune the Shared Server. It is most important to measure how long clients are waiting for dispatchers to process their requests and how long it is taking before a shared server processes the client requests. These factors may lead you to increase the number of shared server and dispatcher processes. You also want to monitor the usage of the Large Pool.
Exam Essentials

Define Oracle Shared Server. It will be important for you to be able to list the advantages of Shared Server versus a dedicated server and to know when it is appropriate to consider either option.

Understand the architecture of the Oracle Shared Server. Be able to summarize the steps that a client takes to initiate a connection with a shared server and the processes behind those steps. You should understand what happens during client request processing and be able to outline the steps involved.

Understand the changes that are made in the SGA and the PGA. Make sure you understand that in a Shared Server environment many of the PGA structures are moved into the Large Pool inside the SGA. This means that the SGA will become larger and the Large Pool will need to be configured in the init.ora file.

Know how to configure the Oracle Shared Server. You should be able to define the meaning of each of the parameters involved in the configuration of Oracle Shared Server. You should know which parameters can be dynamically modified and which parameters require the Oracle instance to be restarted to take effect.

Know how to configure clients running in Shared Server mode. You should be able to configure clients that need a dedicated connection to an Oracle server running in Shared Server mode.

Know what views to use to monitor the Shared Server performance. It is important that you be able to use the available V$ views to monitor and tune the Shared Server and know how to adjust settings when necessary.
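As a client-side illustration of the last two essentials, here is a hedged sketch of a tnsnames.ora entry that requests a dedicated connection against a database running in Shared Server mode; the net service name, host, and service name are assumptions for illustration:

ORC9_DED =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = octilli)(PORT = 1521))
    (CONNECT_DATA =
      (SERVICE_NAME = orc9)
      (SERVER = DEDICATED)
    )
  )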
Review Questions 1. All of the following are reasons to configure the server using Shared
Server except: A. There is a reduction of overall memory utilization. B. The system is predominantly used for decision support with large
result sets returned. C. The system is predominantly used for small transactions with
many users. D. There is a reduction of the number of idle connections on the
server. 2. Which of the following is true about Shared Server? A. Dedicated connections cannot be made when Shared Server is
configured. B. Bequeath connections are not possible when Shared Server is
configured. C. The database can be started when connected via Shared Server. D. The database cannot be stopped when connected via Shared
Server. 3. The administrator wants to allow a user to connect via a dedicated
connection into a database configured in Shared Server mode. Which of the following lines would accomplish this? A. (SERVER=DEDICATED) B. (CONNECT=DEDICATED) C. (INSTANCE=DEDICATED) D. (MULTITHREADED=FALSE) E. None of the above
4. In what file would you find the shared server configuration parameters? A. listener.ora B. mts.ora C. init.ora D. tnsnames.ora E. sqlnet.ora 5. Which of the following is one of the components of Shared Server? A. Shared user processes B. Checkpoint processes C. Dispatcher processes D. Dedicated server processes 6. The DBA wants to put the database in Shared Server mode. In what
file will modifications be made? A. tnsnames.ora B. cman.ora C. names.ora D. init.ora 7. What choice in the Oracle Net Manager allows for the configuration
of Shared Server? A. Local B. Service Naming C. Listeners D. Profile E. None of the above
8. The DBA wants two TCP/IP dispatchers and one IPC dispatcher to
start when the instance is started. Which line will accomplish this? A. dispatchers=(protocol=tcp)(dispatchers=2)
(protocol=IPC)(dispatchers=1) B. dispatchers="(protocol=tcp)(dispatchers=2)
(protocol=IPC)(dispatchers=1)" C. dispatchers_start=(protocol=tcp)(dispatchers=2)
(protocol=IPC)(dispatchers=1) D. dispatchers_start=(pro=tcp)(dis=2) (pro=IPC)(dis=1) 9. What is the piece of shared memory that client connections are bound
to during communications via Shared Server called? A. Program Global Area B. System Global Area C. Virtual Circuit D. Database Buffer Cache E. None of the above 10. What is the first step the dispatcher should take after it has received a
request from the user? A. Pass the request to a shared server. B. Place the request in a request queue in the PGA. C. Place the request in a request queue in the SGA. D. Process the request. 11. Dispatchers have all of the following characteristics except: A. Dispatchers can be shared by many connections. B. More dispatchers can be added dynamically with the ALTER
SYSTEM command. C. A dispatcher can listen for multiple protocols. D. Each dispatcher has its own response queue.
12. When configured in Shared Server mode, which of the following is
contained in the PGA? A. Cursor state B. Sort information C. User session data D. Stack space E. None of the above 13. Which of the following is false about shared servers? A. Shared servers can process requests from many users. B. Shared servers receive their requests directly from dispatchers. C. Shared servers place completed requests on a dispatcher response
queue. D. The SERVERS parameter configures the number of shared servers
to start at instance startup. 14. Which of the following is not a step in the processing of a shared server
request? A. Shared servers pass information back to the client process. B. Dispatchers place information in a request queue. C. Users pass requests to a dispatcher. D. The dispatcher picks up completed requests from its response
queue. E. None of the above. 15. When you are configuring Shared Server, which initialization parameter
would you likely need to increase? A. DB_BLOCK_SIZE B. DB_BLOCK_BUFFERS C. SHARED_POOL_SIZE D. BUFFER_SIZE E. None of the above
16. Which of the following is false about request queues? A. They reside in the SGA. B. They are shared by all of the dispatchers. C. Each dispatcher has its own request queue. D. The shared server processes remove requests from the request
queue. 17. The DBA is interested in gathering information about users connected
via shared server connections. Which of the following is the view that would contain this information? A. V$USERS B. V$QUEUE C. V$SESS_STATS D. V$CIRCUIT E. None of the above 18. What is the process that is responsible for notifying the listener after
a database connection is established? A. SMON B. DBWR C. PMON D. LGWR 19. The DBA is interested in gathering performance and tuning related
information for the shared server processes. The DBA should start by querying which of the following views? A. V$USERS B. V$CIRCUIT C. V$SHARED_SERVER_MONITOR D. V$SESS_STATS
Answers to Review Questions 1. B. Shared Server is a scalability option of Oracle. It provides a way to
increase the number of supported user processes while reducing the overall memory usage. This configuration is well suited to high-volume, small transaction-oriented systems with many users connected. Because users share processes, there is also an overall reduction of the number of idle processes. It is not well suited for large data retrieval type applications like decision support. 2. D. Users can still request dedicated connections in a shared server configuration. Bequeath and dedicated connections are one and the same. The database cannot be stopped or started when a user is connected over a shared server connection. 3. A. A user must explicitly request a dedicated connection when a server is
configured in Shared Server mode. Otherwise, the user will get a shared server connection. The correct parameter is (SERVER=DEDICATED). 4. C. The shared server configuration parameters exist in the init.ora
file on the Oracle server machine. 5. C. In Shared Server, users connect to a pool of shared resources called
dispatchers. A client connects to the listener and the listener redirects the request to a dispatcher. The dispatchers handle all of the user requests for the session. Many users can share dispatchers. 6. D. Because the database has to be configured in Shared Server mode,
changes have to be made to the init.ora file. The other choices are also configuration files, but none of them are used to configure Shared Server. 7. E. This is one of the tricky questions again! Many options and files can
be configured by the Oracle Net Manager, including tnsnames.ora and sqlnet.ora. But because Shared Server is a characteristic of the database server and not of the network, Oracle Net Manager is not used to configure it.
8. B. Back to syntax again! The DISPATCHERS parameter of the init.ora
file is used to configure dispatchers, so the correct answer is option B. All of the other choices are invalid parameters. 9. C. The System Global Area is the shared memory segment Oracle
obtains on instance startup. The Program Global Area is an area of memory used primarily during dedicated connections. The Database Buffer Cache is actually a component of the System Global Area. Virtual Circuits are the shared memory areas to which clients bind.
request on the request queue. Remember that in a shared server environment, a request can be handled by a shared server process. This is made possible by placing the request and user information in the SGA. 11. C. Many users can connect to dispatchers, and dispatchers can be
added dynamically. Also, each dispatcher does have its own response queue. The only one of these options that is false is option C because dispatchers can listen for only one protocol. Multiple dispatchers can be configured so that each is responsible for different protocols. 12. D. A small PGA is maintained even though most of the user-specific
information is moved to the SGA (specifically called the UGA in the Shared Pool or the Large Pool). The only information left in the reduced PGA is stack space. 13. B. Shared servers can process requests from many users. The completed
requests are placed into the dispatchers’ response queues. The servers are configured with the SERVERS parameter. However, shared servers do not receive requests directly from dispatchers. The requests are taken from the request queue. 14. A. Study the steps of what happens during a request via Shared Server.
Dispatchers receive requests from users and place the requests on request queues. Only dispatchers interact with client processes. Shared servers merely execute the requests and place the results back on the dispatcher’s response queue.
15. C. Shared Server requires a shift of memory away from individual session processes to the SGA. More information has to be kept in the SGA (in the UGA) within the Shared Pool. A Large Pool can also be configured and would probably be responsible for the majority of the SGA space allocation. But because that was not a choice, option C is the correct answer. The block size and block buffers settings do not affect Shared Server. 16. C. Request queues reside in the SGA, and there is one request queue
per instance. This is where shared server processes pick up requests that are made by users. Dispatchers have their own response queues but they share a single request queue. 17. D. There are several V$ views that can be used to manage the Shared
Server. V$QUEUE gives information regarding the request and response queues. V$USERS and V$SESS_STATS are not valid views. V$CIRCUIT will give information about the users who are connected via shared server connections, and it will provide the necessary information. 18. C. The PMON process is responsible for notifying the listener after a
client connection is established. This is so that the listener can keep track of the number of connections being serviced by each dispatcher. 19. C. The V$SHARED_SERVER_MONITOR view can be queried to view
information about the maximum number of connections and sessions, the number of servers started and terminated, and the server highwater mark. These numbers can help determine whether the DBA should start more shared servers. 20. C. Dispatchers register with listeners so that when a listener redirects
a connection to a dispatcher, the listener knows how many active connections the dispatcher is serving. The lsnrctl status command summarizes the number of connections established, connections currently active, and other valuable information regarding Shared Server. The lsnrctl services command only gives a summary of dispatchers, not any details about connections.
Backup and Recovery Overview

ORACLE9i: DBA FUNDAMENTALS II EXAM OBJECTIVES COVERED IN THIS CHAPTER:

Describe the basics of database backup, restore and recovery.

List the types of failure that may occur in an Oracle environment.

Define a backup and recovery strategy.
Exam objectives are subject to change at any time without prior notice and at Oracle's sole discretion. Please visit Oracle's Certification website (http://www.oracle.com/education/certification/) for the most current exam objectives listing.
Backup and recovery in an Oracle database environment can be simple or complex depending on the requirements of the business environment the database is supporting. Oracle provides methods for supporting such environments, and each of these methods requires different levels of complexity for backup and recovery operations. First of all, there are multiple types of failures that may occur in an Oracle database environment. Each of these can result in different types of recovery operations. You must understand these types of failure in order to make the correct recovery decisions. Once you understand Oracle backup and recovery and the possible types of Oracle database failures, you will be able to create a backup and recovery strategy. When you are determining this strategy, you need to consider a number of issues. Keep in mind that in order for backup and recovery to be successful, everyone from the technical team through management must understand the requirements and the effects of the backup and recovery strategy. After this strategy is agreed upon and in place, a disaster recovery plan can be created based upon this strategy. When you are creating your disaster recovery plan, it's important that you understand the options for high availability, as well as the options for configuring your database for recoverability. After you have successfully created this plan, the final step is to test it. This chapter takes you step-by-step through the basic principles of backup and recovery: it introduces you to the types of failures that may occur, and to the backup and recovery strategy. In the end, you should be comfortable with your knowledge of what is involved in the Oracle backup, restore, and recovery process. You should understand enough about the different types of failures so that you can identify the appropriate course of action to implement in a recovery situation. This level of understanding will not only make your job as a DBA easier, but it should also make you much more comfortable in it.
If you understand and can identify the aspects of the Oracle database that are required for normal operation, you will understand what must be backed up, restored, and recovered. The Oracle database is made up of a set of physical structures that must be present and consistent for the database to function normally. At a minimum, these physical structures consist of data files, redo logs, control files, and initialization files. If any of these files are not present, the database may not start up or it may halt during normal operations. All of these files must be backed up on a regular basis to disk, tape, or both. Such a backup can consist of a user-managed backup or Recovery Manager (RMAN)-based backup. A user-managed backup consists of any custom backup; such a backup is usually performed in an OS script such as a Unix shell script or a DOS-based batch script. These scripts execute database commands and OS commands to copy the necessary database files to disk or tape. An RMAN-based backup is performed by the Oracle Recovery Manager utility, which is part of the Oracle software. RMAN backups are performed by executing standard RMAN commands or scripts. Both of these backups can be used to restore the necessary database files from disk or tape to the desired location. The restore process consists of copying the database files from tape or disk to a desired location so that database recovery can begin. The recovery process consists of starting the database and making it consistent using a complete or partial backup copy of some of the physical structures of the database. Recovery has many options depending on the type of backups that are performed. We will discuss the different types of recovery in user-managed and RMAN-based situations in later chapters.
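To make the user-managed approach concrete, here is a minimal sketch of a hot backup of a single tablespace issued from SQL*Plus; it assumes the database is running in ARCHIVELOG mode, and the tablespace, file, and destination names are illustrative only:

ALTER TABLESPACE users BEGIN BACKUP;
-- copy the data file at the OS level while the tablespace is in backup mode
HOST cp /oracle/oradata/orc9/users01.dbf /backup/users01.dbf
ALTER TABLESPACE users END BACKUP;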
Types of Failure in Oracle Environments
There are two major categories of database failures: non-media failures and media (disk) failures. Non-media failures consist of four types of failures, which are typically less critical in nature. Media failures have only one type of
failure, which is generally more critical in nature: the inability to read from or write to a database file.
Non-Media Failures This type of failure is made up of statement failures, process failures, instance failures, and user errors, and it is almost always less critical than a media failure. In most cases, statement, process, and instance failures are automatically handled by Oracle and require no DBA intervention. User error can require a manual recovery performed by the DBA. Statement failure consists of a syntax error in the statement, and Oracle usually returns an error number and description. Process failure occurs when the user program fails for some reason, such as when there is an abnormal disconnection or a termination. The process monitor (PMON) process usually handles cleaning up the terminated process. Instance failure occurs when the database instance abnormally terminates due to a power spike or outage. Oracle handles this automatically upon start-up by reading through the current online redo logs and applying the necessary changes back to the database. User error occurs when a table is erroneously dropped or data is erroneously removed.
Media, or Disk, Failures These failures are the most critical. A media failure occurs when the database fails to read or write from a file that it requires. For example, a disk drive could fail, a controller supporting a disk drive could fail, or a database file could be removed, overwritten, or corrupted. Each type of media failure that occurs requires a different method for recovery. The basic steps you should take to perform media recovery are as follows:

1. Determine which files will need to be recovered: data files, control files, and/or redo logs.

2. Determine which type of media recovery is required: complete or incomplete, opened database, or closed database. (You will learn more about these types of recovery in later chapters.)

3. Restore backups of the required files: data files, control files, and offline redo logs (archived logs) necessary to recover.

4. Apply offline redo logs (archived logs) to the data files.

5. Open the database at the desired point, depending on whether you are performing a complete or an incomplete recovery.

6. Perform frequent testing of the process. Create a test plan of typical failure scenarios.
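As a hedged, command-level sketch of steps 3 through 5, a simple closed-database complete recovery might look like the following in SQL*Plus; it assumes the damaged data files have already been restored from backup and that the archived logs are in their default destination:

STARTUP MOUNT
RECOVER DATABASE;       -- prompts for the archived logs to apply
ALTER DATABASE OPEN;    -- reopen once recovery is complete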
Defining a Backup and Recovery Strategy
To create a solid backup and recovery strategy, you must keep in mind six major requirements:
The amount of data that can be lost in the event of a database failure
The length of time that the business can go without the database in the event of a database failure
Whether the database can be offline to perform a backup, and if so, the length of time that it can remain offline
The types of resources available to perform backup and recovery
The procedures for undoing changes to the database, if necessary
The cost of buying and maintaining hardware and performing additional backups versus the cost of replacing or re-creating the data lost in a disaster
All of these requirements must clearly be understood before you plan a backup and recovery strategy.
Losing Data in a Database Failure The amount of data that can be lost in a failure helps determine the backup and recovery strategy that gets implemented. For instance, if losing a week’s worth of data in the event of failure is tolerable, then a weekly backup may be a possible option. On the other hand, if no data can be lost in the event of failure, then weekly backups would be out of the question and backups would need to be performed daily.
Surviving without the Database in a Database Failure If the company database were to fail during an outage, how long would it take for the business to be negatively affected? Generally, this question can be answered by management. If all data is entered manually by data entry staff, the downtime could be relatively long without hurting the business operations. The business could potentially operate normally by generating orders or forms that could be entered into the database later. This type of situation could have minimal effect on the business. On the other hand, a financial institution that sends and receives data electronically 24 hours a day can't afford to be down for any time at all, and if it were, business operations would most definitely be impaired. The electronic transactions could be unusable until the database was recovered. After you determine how long the business could survive without a database, you can use the mean time to recovery (MTTR) to figure out the average amount of time the database could be down if it were to fail. The MTTR is the average time it takes to recover from certain types of failure. You should record each type of failure that is tested so that you can then determine an average recovery time. The MTTR can help determine mean recovery times for different failure scenarios. You can document these times during your testing cycles.
Performing an Offline Backup To determine whether it is possible to perform a database backup if the database is offline or shut down, you must first know how long the database can afford to be out of commission. For example, if the database is being used with an Internet site that has national or international access, or if it is being used with a manufacturing site that works two or three shifts across different time zones and has nightly batch processing, then it would have to be available 24 hours a day. In this case, the database would always need to remain online, with the exception of scheduled downtimes for maintenance. In this case, an online backup, or hot backup, would need to be performed. This type of backup is done when the database is online or running. Businesses that don’t require 24-hour availability and do not have long batch processing activities in the evening could potentially afford to have the
database offline at regular nightly intervals for an offline backup, or cold backup. In this scenario, each site should conduct its own backup tests, with factors unique to its environment, to determine how long it would take to perform a cold backup. If the database downtime is acceptable for that site, then a cold backup could be a workable solution.
Knowing Your Backup and Recovery Resources The personnel, hardware, and software resources available to the business also affect the backup and recovery strategy. Personnel resources would include at least adequate support from a database administrator (DBA), system administrator (SA), and operator. The DBA would be responsible for the technical piece of the backup, such as user-managed scripts or Recovery Manager (RMAN) scripts. A user-managed backup is an OS backup written in an OS scripting language, such as the Korn shell in the Unix OS. RMAN is an automated tool from Oracle that can perform the backup and recovery process. The SA would be involved in some aspects of the scripting, tape backup software, and tape hardware. The operator might be involved in changing tapes and ensuring that the proper tape cycles are followed. The hardware resources could include an automated tape library (ATL), a stand-alone tape drive, adequate staging disk space for scripted hot backups and exports, adequate archived log disk space, and third disk mirrors. Many storage subsystem hardware vendors are offering their own third disk mirror options or equivalents. These options create disk copies of 100 gigabytes and greater in just a few minutes. All types of disk subsystems should be at least mirrored or use some form of Redundant Array of Inexpensive Disks (RAID), such as RAID 5, where performance is not compromised. The software resources could include backup software, scripting capabilities, and tape library software. The Oracle RMAN utility comes with the Oracle9i Server software and is installed when selecting all components of Oracle9i Enterprise Server. The technical personnel, the DBA and SA at a minimum, are generally responsible for informing the management of the necessary hardware and software to achieve the desired recovery goals.
RAID is essentially fault tolerance that protects against individual disk crashes. There are multiple levels of RAID. RAID 0 implements disk striping without redundancy. RAID 1 is standard disk mirroring. RAID 2–5 offer some form of parity-bit checking on separate disks. RAID 5 has become the most popular in recent years, with many vendors offering their own enhancements to RAID 5 for increased performance. RAID 0 + 1 has been a longtime fault-tolerance and performance favorite for Oracle database configurations. This is due to redundancy protection and strong write performance. However, with the new RAID 5 enhancements (performed by some storage array vendors to include large caches or memory buffers), the write performance has improved substantially. RAID 0 + 1 and RAID 5 both can be viable configurations for Oracle databases.
Undoing Changes to the Database There are three primary ways of undoing changes to the database; one is a manual approach, the other two methods use Oracle features to undo the data.
Manually—by reexecuting code or rebuilding tables
Using Oracle LogMiner to recover dropped objects
Using a new Oracle9i feature called Flashback Query
Whether it is possible to undo changes to the database with the manual approach depends on the sophistication of the code releases and the configuration management control for the application in question. If the configuration control is highly structured with defined release schedules, then undoing changes may not be necessary. A highly structured release schedule would reduce the possibility of data errors or dropped database objects. On the other hand, if the release schedule tends to be unstructured, the potential for data errors from developers can be higher. It is a good idea to prepare for these issues in any case. A full export can be done periodically, which would give the DBA a static copy of all the necessary objects within a database. Although exports have limitations, they can be useful for repairing data errors because individual users and tables can be extracted from the export file. Additionally, individual tablespace backups can be performed more frequently on high-use tablespaces.
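As a hedged illustration of the export option, a periodic full export could be taken with the exp utility; the connect string and file names are illustrative assumptions:

exp system/manager FULL=y FILE=/backup/full_orc9.dmp LOG=/backup/full_orc9.log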
Oracle LogMiner was first introduced in Oracle8/8i. This utility rebuilds data from redo log–generated transactions. LogMiner allows you to rebuild erroneously dropped tables by performing a series of steps that include building an external data dictionary and identifying the transactions that must be reloaded. LogMiner is run by using Procedural Language SQL (PL/SQL) procedures. Table 5.1 describes the PL/SQL procedures involved with using LogMiner. The Data Manipulation Language (DML) activity can be seen in the v$logmnr_contents view after the PL/SQL procedures have been executed.

TABLE 5.1 LogMiner PL/SQL Procedures

PL/SQL Procedure             Purpose
sys.dbms_logmnr_d.build      Builds data dictionary
dbms_logmnr.add_logfile      Accesses desired redo log file
dbms_logmnr.start_logmnr     Begins LogMiner session
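A minimal sketch of how these three procedures fit together follows; the dictionary file, directory, and archived log names are illustrative assumptions (the dictionary directory must be listed in UTL_FILE_DIR):

EXECUTE sys.dbms_logmnr_d.build('lmdict.ora', '/oracle/logmnr');
EXECUTE dbms_logmnr.add_logfile('/oracle/oradata/orc9/arch/arch_100.arc', dbms_logmnr.NEW);
EXECUTE dbms_logmnr.start_logmnr(dictfilename => '/oracle/logmnr/lmdict.ora');

-- Review the reconstructed DML, including the undo needed to reverse it
SELECT sql_redo, sql_undo FROM v$logmnr_contents;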
Oracle Flashback Query is a new feature in Oracle9i. This feature allows a user to access past versions of data. It works by generating a picture of the data as it was in the past using undo data: the data that has been modified since the time of interest is identified, provided the undo has been retained under the retention policy, and the corresponding undo data is retrieved to reconstruct the earlier version. The Flashback Query feature is performed by executing the PL/SQL dbms_flashback package. Further, Automatic Undo Management must be enabled by setting the UNDO_MANAGEMENT = AUTO parameter in the init.ora file. There must also be an undo tablespace parameter designated, such as UNDO_TABLESPACE = UNDOTBS. This must be set in the init.ora file before Flashback Query can be used. The parameter that controls the length of retention for Flashback Query is called UNDO_RETENTION. This parameter may be set with UNDO_RETENTION = n, with n being an integer value in seconds.
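The following is a minimal sketch of a Flashback Query session, assuming the undo parameters above are in place; the schema, table, and timestamp are illustrative assumptions only:

-- View the table as of an earlier time, then return to the present
EXECUTE dbms_flashback.enable_at_time(TO_TIMESTAMP('2001-09-25 09:00:00', 'YYYY-MM-DD HH24:MI:SS'));
SELECT * FROM scott.emp;
EXECUTE dbms_flashback.disable;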
Flashback Query cannot query data that is more than five days old. This is true even if the UNDO_RETENTION parameter is set to a value greater than five days.
Weighing the Costs Additional hardware is usually needed to perform adequate testing and failover for critical databases. When this additional hardware is unavailable, the risk of an unrecoverable database failure is greater. The cost of the additional hardware should be weighed against the cost of re-creating the lost data in the event of an unrecoverable database failure. This type of cost comparison will cause the management team to identify the steps necessary to manually re-create lost data, if this can be done. Once the steps for re-creating lost data are identified and the associated costs are determined, these costs can be compared to the cost of additional hardware. This additional hardware would be used for testing backups and as a system failover if a production server was severely damaged.
Testing a Backup and Recovery Plan
One of the most important (but also most overlooked) components of the recovery plan is testing. Testing should be done before and after the database that you are supporting is in production. Testing validates that your backups are working, and gives you the peace of mind that recovery will work when a real disaster occurs. You should document and practice scenarios of certain types of failures so that you are familiar with them, and you should make sure that the methods to recover from these types of failures are clearly defined. At a minimum, you should document and practice the following types of failures:
Loss of a system tablespace
Loss of a nonsystem tablespace
Loss of a current online redo log
Loss of the whole database
Testing recovery should include recovering your database to another server, such as a test or development server. The cost of having additional servers available on which to perform testing can be intimidating for some businesses, and it can be one deterrent to adequate testing. Test servers are absolutely necessary, however, and businesses that fail to meet this requirement can be at risk of severe data loss or an unrecoverable situation.
One way that you can test recovery is to create a new development or testing environment and recover the database to a development or test server in support of a new software release. Database copies are often necessary to support new releases of the database and application code prior to moving it to production. RMAN provides the DUPLICATE command in support of this. Manual OS tools in Unix, such as ufsrestore, tar, and cpio from tape or copying from a disk staging area, are often used for scripted backups.
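As a hedged sketch of the RMAN approach, duplicating a production database to a test server might look like the following; the connect strings and auxiliary database name are illustrative assumptions, and the auxiliary instance must already be prepared and started:

RMAN> CONNECT TARGET sys/password@orc9
RMAN> CONNECT AUXILIARY sys/password@test9
RMAN> DUPLICATE TARGET DATABASE TO test9;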
Driving Adequate Testing with the Service Level Agreement Let’s look at this real-life example to determine how much testing is enough for recovery preparation. Many companies try to reduce implementation and support costs by not allocating sufficient resources for testing. In this particular case, a manufacturing company is trying to determine how much testing is enough to get by. Up until this point, this company had performed all of its data processing on a mainframe-customized environment, so the customized Oracle database environment is new to them. As a result, when the company switched to an Oracle-based environment, they did not set up a service level agreement with the information technology (IT) department so that their database environment would be maintained properly. Because they lacked this service level agreement from customers of the database, they didn’t have the instrument necessary to drive the testing, nor did they have the necessary resources to perform the testing. Their reasoning for this lack of resources was based on the premise that testing is a costly effort, and the results of it may never be needed. In this case, it was hard for the company to justify the expense of resources (IT staff and equipment) to perform testing. What this company was not aware of was that, at a minimum, certain recovery tests should be performed to validate the backup process and to train the database administration staff for certain common failures.
Also, because there was no service agreement with the database customers, there was never a formalized testing strategy to support the recovery of the database. Within one year after the company converted their manufacturing environment to a custom Oracle-based solution, there was an outage: the company experienced a corrupt online redo log. Because the company had a customized hot backup strategy, a backup was available. However, it wasn't immediately apparent to those working on the problem that the failure was due to a corrupt redo log, and over six hours was wasted by the DBA group before the problem was accurately diagnosed. Once it was, the staff was not sure of the exact type of recovery to perform. Because of this, more time was wasted and more anxiety was created than necessary. Finally, the problem was diagnosed with the help of Oracle Support, who determined that the company would need to perform an incomplete recovery up to a point prior to the identified corruption in the online redo log. With a service level agreement between the database customers and the IT department, a formalized set of expectations would have been set. These expectations could have led to the proper testing of the database, and most of these problems could have been avoided.
Summary
Oracle provides numerous options for backup and recovery, which support varying business requirements. These options are founded on some basic principles of database file structures required in the backup, restore, and recovery processes. One of these principles is that the necessary Oracle database file structures are backed up on a regular basis so that, if necessary, the restore and recovery of these files can be performed. There are two major categories of failure: non-media and media-based failures. Understanding the types of failure within these categories will allow you to develop a backup and recovery strategy. This strategy determines how you will respond in a failure situation to best meet the needs of the moment. It is important to have a solid understanding of the backup, restore, and recovery processes of an Oracle database. Equally important is having an
understanding of the types of failures or problems that could cause you to recover a database. Without understanding the importance and appropriate course of action to take for certain failures, significant down time or mistakes can occur. Making sure the database is open and available for use is one of the most important responsibilities of the DBA.
Exam Essentials

Identify the failure categories. Make sure you are aware of non-media and media failures. A media failure is more serious because it is a hardware failure or corruption, and it will prevent the user from reading from or writing to a database file. A non-media failure is usually less serious because recovering from this type of failure is usually easy.

Identify the different types of non-media failure. The four failure types are statement failure, process failure, instance failure, and user error. You should be aware of how each of these failure types is initiated.

Understand the multiple ways to undo user errors. Identify the three ways to undo user errors in the database. Undoing user errors is performed manually (rebuilding lost code or data, or importing objects), by using LogMiner to restore transactions from archived log files, and by using the Flashback Query feature to query data based on the undo information.

Understand the media recovery process. The media recovery process includes restoring physical database files that are impacted by media failure and applying archived logs to roll the restored files forward to a determined point in time.

Understand the difference between user-managed and RMAN backups and recovery. A traditional, user-managed backup and recovery is customized in an OS scripting language so that it can call database commands and perform OS commands. An RMAN backup and recovery is created by using the RMAN utility.

Identify the requirements for a backup and recovery strategy. Be able to understand the general concept of a backup and recovery strategy so that you are aware of the unique planning required to recover a database to meet your customers' requirements.
Review Questions 1. What is a type of non-media failure? (Choose all that apply.) A. Process failure B. Crashed disk drive with data files that are unreadable C. Instance failure D. User error E. Statement failure 2. Why is it important to get management to understand and agree with
the backup and recovery plan? (Choose all that apply.) A. So that they understand the benefits and costs of the plan. B. So that they understand the plan’s effects on business operations. C. It’s not important for management to understand. D. So that they give approval of the plan. 3. What are some of the reasons why a DBA might test the backup and
recovery strategy? A. To validate the backup and recovery process B. To stay familiar with certain types of failures C. To practice backup and recovery D. To build duplicate production databases to support new releases E. All of the above 4. What method of undoing transactions in the database requires reading
archived log information? A. Flashback Query B. Fastback Query C. LogMiner D. Export/Import data
5. Why should backup and recovery testing be done? A. To practice your recovery skills B. To validate the backup and recovery process C. To get MTTR statistics D. To move the database from the production server to the test server E. All of the above 6. What method of undoing transactions in the database requires the
database parameter UNDO_MANAGEMENT = AUTO? A. Flashback Query B. LogMiner C. Manually rebuilding data D. Parallel query 7. At a minimum, what backup and recovery tests should be performed? A. Recovery from the loss of a system tablespace B. Recovery from the loss of a nonsystem tablespace C. Full database recovery D. Recovery from the loss of an online redo log E. All of the above 8. List all the methods used to protect against erroneous changes to the
database without performing a full database recovery. (Choose all that apply.) A. Tablespace backups of high-usage tablespaces B. Control file backups C. Exports D. Multiplexed redo logs
9. Which of the following best describes a user-managed backup? A. An RMAN-based backup B. An export of the database C. A hot backup only D. A custom backup using OS and database commands 10. An offline backup would be best performed on what type of business
environment? A. A database that has transactional activity 24 hours a day B. A database that has transactional activity 12 hours a day and batch
processing the other 12 hours C. A database that has transactional activity 12 hours only D. A database that has transactional activity 18 hours and data mart
data extraction the other 6 hours 11. Which IT professionals will most likely be involved in the technical
aspects of the backup and recovery strategy? (Choose all that apply.) A. Database administrator (DBA) B. Management C. System administrator (SA) D. Application developer 12. Undoing database changes can be more easily performed when the
following is a general business practice: A. Software configuration management is tightly enforced. B. Developers modify the production code without DBA knowledge. C. Limited testing of production code is performed before it is
implemented. D. Ad hoc DML statements are performed in production databases.
13. Statement failure is when which of the following occurs? A. A user accidentally drops a table. B. A user writes an invalid SQL command. C. A user deletes the wrong data. D. A user export is performed. 14. The wrong data in a table gets deleted by mistake. This type of error
or failure is called a(n): A. Statement error B. User error C. Instance failure D. Media failure 15. What new feature in Oracle9i is used to view data based on the
undo records? A. LogMiner B. Fastback Query C. Flashback Query D. Export
Answers to Review Questions 1. A, C, D, E. A media failure occurs when a database file cannot be read
or written to. All other types of failures are non-media failures. 2. A, B, D. Management needs to understand the backup and recovery
plan so that the plan can be tailored to meet the business operational requirements. Furthermore, by understanding the plan, management can better gauge the benefits and costs of decisions that they are about to make. 3. E. All are relevant reasons to test a backup and recovery strategy. 4. C. The LogMiner utility reads archived redo logs to reconstruct
previously run information. This information can be used to undo changes to the database. 5. E. All answers are potential reasons to perform backup and recovery testing, the most
important being that testing validates the backup and recovery process. 6. A. The Flashback Query requires that the UNDO_MANAGEMENT parameter
be set to AUTO so that undo information automatically gets recorded in the designated undo tablespace. 7. E. All of the options are backup and recovery tests that should be
performed. 8. A, C. Tablespace backups of high-usage tablespaces and exports of
the whole database or high-usage tables can provide protection against erroneous changes without doing a full database recovery. 9. D. A user-managed backup is the term used for backing up an Oracle
database using either an OS script such as a Unix shell script or a Windows batch file in conjunction with database commands. 10. C. A database that is being accessed 24 hours a day will not be a can-
didate for a offline backup because the database must be shut down.
11. A, C. The DBA and SA will most likely be involved in the technical
aspects of the backup and recovery strategy because they are in charge of the technical areas required to perform the recovery operation. 12. A. Code and data modifications should be tightly enforced either by a
software configuration management tool or by a manual process. This provides an audit trail of modifications that are made to the database. There is a better chance of undoing errors with this information. 13. B. A statement error occurs when a user writes an incorrect SQL com-
mand. The Oracle database will not process invalid commands during the parse phase of evaluating the SQL command; instead, it will return an error. 14. B. A user error occurs when a user performs some action within the
database that they didn’t want to do. 15. C. The Flashback Query feature is new to Oracle9i. This feature uses
Instance and Media Recovery Structures ORACLE9i: DBA FUNDAMENTALS II EXAM OBJECTIVES COVERED IN THIS CHAPTER: Describe the Oracle processes, memory structures, and files relating to recovery. Identify the importance of checkpoints, redo log files, and archived log files. Describe ways to tune instance recovery.
Exam objectives are subject to change at any time without prior notice and at Oracle's sole discretion. Please visit Oracle's Certification website (http://www.oracle.com/education/certification/) for the most current exam objectives listing.
Oracle uses a wide variety of processes and structures to provide a robust set of recovery options. A process is a daemon, or background program, that performs certain tasks. A structure is either a physical or logical object that is part of the database, such as files or database objects themselves. The processes consist of log writer (LGWR), system monitor (SMON), process monitor (PMON), checkpoint (CKPT), and archiver (ARCn). The available structures include redo logs, rollback segments, control files, and data files. You were introduced to these terms in Chapter 5, "Backup and Recovery Overview." This chapter provides further detail, including how different combinations of these processes and structures are used to recover from different kinds of failures. This chapter also introduces methods you can use to tune instance recovery, including the ability to set approximate limits on the length of time needed to recover from instance failure. We will discuss and provide examples of these methods. In order to understand the backup and recovery process, you must first understand processes and structures. This knowledge will help you, as the DBA, make real-life decisions about backup and recovery situations. For example, if you understand how the database physical structures get synchronized, you will be able to make sense of the recovery process. If you truly understand how processes and structures interact with backup and recovery procedures, you are on the road to having a solid understanding of the backup and recovery process.
Oracle Recovery Processes, Memory Components, and File Structures
Oracle recovery processes, memory components, and file structures all work together during recovery in order to maintain the physical integrity of the database. Oracle recovery processes interact with Oracle memory components to coordinate the data blocks that are written to disk. These blocks of data reside in memory for faster access at different times of database operations. As these memory structures change, coordination with the physical and logical structures occurs so that the database can remain consistent. Basically, the recovery processes coordinate which blocks (and other information) need to be read or modified in memory and then written to the physical and logical structures on disk: the online redo logs, archived logs, and data files. Each process has specific tasks that it fulfills. These physical and logical structures are like memory structures in that they are made up of data blocks. But the physical structures are static structures that consist of files in the OS file system. Data blocks and other information are written to these physical structures to make them consistent. The logical structures are temporarily used to hold parts of information for intermediate time periods, or until the processes can permanently record the appropriate information in the physical structures.
Recovery Processes Oracle has five major processes related to recovery. These processes include log writer, system monitor, process monitor, checkpoint, and archiver. Let's look at each of these processes in more detail.

Log writer (LGWR) The log writer (LGWR) process writes redo log entries from the redo log buffers to the online redo logs. A redo log entry is any change, or transaction, that has been applied to the database, committed or not. (To commit means to save or permanently store the results of the transaction to the database.) The LGWR process is mandatory and is started by default when the database is started.

System monitor (SMON) The system monitor (SMON) process performs a varied set of functions. SMON is responsible for instance recovery, and it also performs temporary segment cleanup. It is a mandatory process and is started by default when the database is started.
Process monitor (PMON) The process monitor (PMON) process performs recovery of failed user processes. This is a mandatory process and is started by default when the database is started.

Checkpoint (CKPT) The checkpoint (CKPT) process performs checkpointing in the control files and data files. Checkpointing is the process of stamping a unique counter in the control files and data files for database consistency and synchronization. In Oracle7, the LGWR would also perform checkpointing if the CKPT process wasn't present. As of Oracle8, however, the CKPT process was made mandatory and was started by default. This is continued in Oracle9i.

Archiver (ARCn) The archiver (ARCn) process performs the copying of the online redo log files to archived log files. The ARCn process is enabled only if the init.ora file's parameter LOG_ARCHIVE_START is set to TRUE or with the ARCHIVE LOG START command. This isn't a mandatory process.

In the Windows 2000/XP environments, the processes are threads of the main Oracle executable. In the Unix environment, an example of each of these processes can be seen by typing the Unix command ps -ef | grep $ORACLE_SID. $ORACLE_SID is a Unix environment variable that identifies the Oracle system identifier. In this case, $ORACLE_SID is orc9.

oracle@octilli:~ > ps -ef | grep orc9
oracle  2077  1  0 00:26 ?  00:00:00 ora_pmon_orc9
oracle  2079  1  0 00:26 ?  00:00:00 ora_dbw0_orc9
oracle  2081  1  0 00:26 ?  00:00:00 ora_lgwr_orc9
oracle  2083  1  0 00:26 ?  00:00:00 ora_ckpt_orc9
oracle  2085  1  0 00:26 ?  00:00:00 ora_smon_orc9
oracle  2087  1  0 00:26 ?  00:00:00 ora_reco_orc9
oracle  2097  1  0 00:26 ?  00:00:00 ora_arc0_orc9
Memory Structures There are two Oracle memory structures relating to recovery: log buffers and data block buffers. The log buffers are the memory buffers that record the changes, or transactions, to data block buffers before they are written to online redo logs or disk. Online redo logs record all changes to the database, whether the transactions are committed or rolled back. The data block buffers are the memory buffers that store all the database information. A data block buffer stores mainly data that needs to be queried,
read, changed, or modified by users. The modified data block buffers that have not yet been written to disk are called dirty buffers. At some point, Oracle determines that these dirty buffers must be written to disk. When this happens, a checkpoint occurs. Both Oracle memory structures can be viewed in a number of ways. The most common method is by performing a SHOW SGA command from SQL. This displays all of the memory sizes of the database that you are connected to. See the following example:

oracle@octilli:/oracle/admin/orc9/pfile > sqlplus /nolog

SQL*Plus: Release 9.0.1.0.0 - Production on Tue Sep 25 00:38:13 2001

(c) Copyright 2001 Oracle Corporation. All rights reserved.

SQL> connect /as sysdba
Connected.
SQL> show sga

Total System Global Area  235693104 bytes
Fixed Size                   279600 bytes
Variable Size             167772160 bytes
Database Buffers           67108864 bytes
Redo Buffers                 532480 bytes
SQL>
When you look at this code example, you will see that the database buffers are approximately 67MB, and the redo buffers are approximately 512KB. When the SHOW SGA command is run, the data block buffers are referred to as database buffers, and the log buffers are referred to as redo buffers. These values are extremely small and are suitable for a sample database or for testing. Data block buffers can be about 100MB to 300MB for average-sized databases with a few hundred users. In this System Global Area (SGA) output, Variable Size pertains to the SHARED_POOL_SIZE and LARGE_POOL_SIZE values in the init.ora file. The Shared Pool stores parsed versions of SQL and PL/SQL so that they may be reused. The Large Pool provides large memory allocations for shared server session memory as well as for backup and recovery operations. The init.ora file's DBWR_IO_SLAVES and BACKUP_TAPE_IO_SLAVES parameters are examples of backup and recovery settings that will use the Large Pool memory. In the SGA output above, Fixed Size pertains to a few less-critical parameters in the init.ora.
File Structures The Oracle file structures relating to recovery include the online redo logs, archived logs, control files, data files, and parameter files. The redo logs consist of files that record all the changes to the database. The archived logs consist of files that are copies of the redo logs; these exist so that a historical record of all the changes made to the database can be utilized if necessary. Control files are binary files that contain the physical structure of the database, such as operating system filenames of all files that make up the database. Data files are physical structures of the database that make up a logical structure called a tablespace. All data is stored within some type of object within a tablespace. Parameter files are the files that contain the initialization parameters for the database. These are the files that set the initial settings or values of database resources when the database is started. Let’s look at each of these in more detail.
Redo Logs The redo logs consist of files that record all of the changes to the database. Recording all changes is one of the most important activities in the Oracle database from the recovery standpoint. The redo logs get information written to them before all other physical structures in the database. The purpose of the redo log is to protect against data loss in the event of a failure. The term redo log includes many subclassifications. Redo logs consist of online redo logs, offline redo logs (also called archived logs), current online redo logs, and non-current online redo logs. Each is described below.

Online redo logs: Logs that the log buffers are writing to in a circular fashion. These logs are written and rewritten.

Offline redo logs, or archived logs: Copies of the online redo logs made before they are written over by the LGWR.

Current online redo logs: Logs that are currently being written to and therefore are considered active.

Non-current redo logs: Online redo logs that are not currently being written to and therefore are inactive.
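To see which group is current at any moment, you can query the V$LOG view; this is a minimal sketch, and the status values you see will vary:

SELECT group#, status FROM v$log;
-- STATUS shows CURRENT for the group being written to;
-- other groups show ACTIVE, INACTIVE, or UNUSED.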
Each database has at least two sets of online redo logs, but Oracle recommends at least three sets. You will see why when we discuss archived logs in the next section. Redo logs record all the changes that are made to the database; these changes result from the LGWR writing out log buffers to the redo logs at particular events or points in time. As mentioned previously, redo logs are written in a circular fashion. That is, if there are three sets of logs, log 1 gets written to first until it is full. Then Oracle moves to log 2, and it gets written to until it is full. Then Oracle moves to log 3, and it gets written to until it is full. Oracle then goes back to log 1, writes over the existing information, and repeats this process over again. Here is a listing of the logs from the V$LOGFILE view.

SQL> select group#, member from v$logfile;

    GROUP# MEMBER
---------- ------------------------------------------
         3 /oracle/oradata/orc9/redo03.log
         2 /oracle/oradata/orc9/redo02.log
         1 /oracle/oradata/orc9/redo01.log

3 rows selected.

Figure 6.1 shows an example of the circular process of redo file generation, which writes to one log at a time, starting with log 1, then log 2, then log 3, and then back to log 1 again.

FIGURE 6.1 The circular writing of redo logs
Redo logs contain values called system change numbers (SCNs) that uniquely identify each committed transaction in the database. SCNs are like a clock of events that have occurred in the database, and they are one of the major synchronization elements in recovery. Each data-file header and control file is synchronized with the current highest SCN.
Archived Logs Archived logs are non-current online redo logs that have been copied to a new location offline. This location is the value of the init.ora file's LOG_ARCHIVE_DEST parameter. Archived logs are created if the database is in ARCHIVELOG mode rather than NOARCHIVELOG mode. A more detailed explanation of ARCHIVELOG and NOARCHIVELOG mode will come in Chapter 10, "User-Managed Complete Recovery and RMAN Complete Recovery." As noted earlier, archived logs are also referred to as offline redo logs. An archived log is created when a current online redo log is completed or filled, and before an online redo log needs to be written to again. Remember, redo logs are written to in a circular fashion. If there are only two redo log sets and you are in ARCHIVELOG mode, the LGWR may have to wait or halt the writing of information to redo logs while an archived log is being copied. If it doesn't, the LGWR process would overwrite the archive information, making the archived log useless. If at least three redo log groups are available, the archived log will have enough time to be created and not cause the LGWR to wait for an available online redo log. This is under average transaction volumes. Some large or transaction-intensive databases may have 10 to 20 log sets to reduce the contention on redo log availability. Archived logs are the copies of the online redo logs; the archived logs get applied to the database in certain types of recovery. Archived logs build the historical transactions, or changes, back in the database to make it consistent to the desired recovery point.
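A hedged sketch of the init.ora entries that enable automatic archiving follows; the destination path and file format are illustrative assumptions:

# init.ora excerpt (illustrative values)
log_archive_start  = true
log_archive_dest   = /oracle/admin/orc9/arch
log_archive_format = arch_%s.arc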
Control Files
A control file is a binary file that stores information about the physical structures that make up the database. The physical structures are OS objects, such as the OS filenames of the data files and redo logs. The control file also stores the highest SCN to assist in the recovery process. This file stores information about the backups if you are using RMAN, the database name, and the date the database was created. You should always have at least two control files, stored on different disk devices. Having this duplication is called multiplexing your control files. Multiplexing control files is configured with the init.ora file's CONTROL_FILES parameter.
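A minimal init.ora sketch of multiplexed control files follows; the directory paths here are hypothetical and would be replaced with devices appropriate to your system.

# Two copies of the control file, placed on different disks
control_files = (/disk01/oradata/orc9/control01.ctl,
                 /disk02/oradata/orc9/control02.ctl)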
Every time a database is mounted, the control file is accessed to identify the data files and redo logs that are needed for the database to function. All new physical changes to the database get recorded in the control file by default.
Data Files
Data files are physical files stored on the file system. All Oracle databases have at least one data file, but usually more. Data files are where the physical and the logical structures meet. Tablespaces are logical structures and are made up of one or more data files. All logical objects reside in tablespaces. Logical objects are those that do not exist outside of the database, such as tables, indexes, sequences, and views. Data files are made up of blocks. These data blocks are the smallest unit of storage in the database. The logical objects, such as tables and indexes, are stored in the data blocks, which reside in the data files. The first block of every file is called a header block. The header block of the data file contains information such as the file size, block size, and associated tablespace. It also contains the SCN for recovery purposes. Here is the output from the V$DATAFILE view, showing the data files that make up a sample database.

SQL> select file#, status, substr(name,0,50) from v$datafile;

     FILE# STATUS
---------- -------
         1 SYSTEM
         2 ONLINE
         3 ONLINE
         4 ONLINE
         5 ONLINE
         6 ONLINE
         7 ONLINE
         8 ONLINE
Parameter Files
There are two parameter files that make up the file structures of the Oracle9i database. A parameter file contains the initialization parameters of the database that are used upon startup. The standard parameter file, called init.ora, contains parameters required for instance startup. The ORACLE_SID in these filenames is the Oracle system identifier, which is usually the database name. This parameter file is ASCII and can be edited. The second parameter file, called spfile.ora, is the server parameter file that stores persistent parameters required for instance startup, including any that are modified while the database is running. The server parameter file is stored in binary format, which is a new feature in Oracle9i. Either of these files can be used to start the Oracle database, but the spfile.ora files are the default for starting the database. If these files are not found, then the init.ora files are used. The default locations for these files are in the following directory structures, on Unix and on Windows NT/2000:

Unix               $ORACLE_HOME/dbs
Windows NT/2000    %ORACLE_HOME%\database
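If you need to move from an editable init.ora to a server parameter file, or back again, Oracle9i provides commands for this. The following is a sketch, assuming a SYSDBA connection and the default file locations shown above:

SQL> create spfile from pfile;
SQL> create pfile from spfile;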
Checkpoints, Redo Logs, and Archived Logs
Now that you have a basic understanding of the various Oracle processes, file structures, and memory structures used for recovery, it's time to see how these interrelate. As you learned earlier, checkpoints, redo logs, and archived logs are significant to all aspects of recovery. The checkpoint is an event that determines the synchronization or consistency of all transactions on disk. The checkpoint is implemented by storing a unique number, the SCN (again, this stands for system change number), in the control files, the headers of the data files, the online redo logs, and the archived logs. The checkpoint is performed by the CKPT process. One of the ways a checkpoint is initiated is by the database writer (DBWR) process. The DBWR process initiates a checkpoint by writing all modified data blocks in the data buffers (dirty buffers) to the data files. After a checkpoint is performed, all committed transactions are written to the data files.
If the instance were to crash at this point, only new transactions that occurred after this checkpoint would need to be applied to the database to enable a complete recovery. Therefore, the checkpoint process determines which transactions from the redo logs need to be applied to the database in the event of a failure and subsequent recovery. Remember that all transactions, whether committed or not, get written to the redo logs. Other methods that cause a checkpoint to occur include any of the following commands: ALTER SYSTEM SWITCH LOGFILE, ALTER SYSTEM CHECKPOINT LOCAL, ALTER TABLESPACE BEGIN BACKUP, and ALTER TABLESPACE END BACKUP. SCNs are recorded within redo logs at every log switch, at a minimum. This is because a checkpoint occurs at every log switch. Archived logs have the same SCNs recorded within them as the online redo logs because the archived logs are merely copies of the online redo logs. Let’s look at an example of how checkpointing, online redo logs, and archived logs are all interrelated. First, the ALTER TABLESPACE BEGIN BACKUP command is used to begin an online backup of a database.
An online backup, also called a hot backup, occurs while the database is still available or running. See Chapter 9, “User-Managed and RMAN-based Backups,” for a more detailed explanation of a hot backup.
The ALTER TABLESPACE BEGIN BACKUP command is followed by an OS command to copy the files, such as cp in Unix. Then, the command ALTER TABLESPACE END BACKUP is used to end the hot backup. As we just discussed, these ALTER TABLESPACE commands also cause a checkpoint to occur. The following is an example of the data file data01.dbf for the tablespace DATA being backed up:

SQL> connect /as sysdba
Connected.
SQL> alter tablespace data begin backup;

Tablespace altered.

SQL> ! cp data01.dbf /stage/data01.dbf
SQL> alter tablespace data end backup;

Tablespace altered.

SQL>
Note that the tablespace DATA was put in backup mode, and then the OS command was executed, copying the data file data01.dbf to the new directory /stage, where it awaits writing to tape. Finally, the tablespace DATA was taken out of backup mode. These steps are repeated for every tablespace in the database. This is a simplified example of a hot backup. If a hot backup was taken on Sunday at 2 A.M., and the database crashed on Tuesday at 3 P.M., then the last checkpoint would have been issued after all the data files were backed up. This backup would be the last checkpointed disk copy of the database. Therefore, all the archived logs generated after the 2 A.M. backup was completed would need to be applied to the checkpointed database to bring it up to 3 P.M. Tuesday, the time of the crash. When you are making a hot backup of the database, you are getting a copy of the database that has been checkpointed for each data file. In this case, each data file has a different SCN stamped in the header and each will need all applicable redo log entries made with a greater SCN applied to the data file to make the database consistent.
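While a tablespace is in backup mode, you can confirm which data files are affected by querying the V$BACKUP view; its STATUS column shows ACTIVE for files whose tablespace is in backup mode and NOT ACTIVE otherwise. A hedged example:

SQL> select file#, status from v$backup;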
Ways to Tune Instance Recovery
As we’ve previously discussed in Chapter 5, “Backup and Recovery Overview,” instance failure is when the Oracle database instance abnormally terminates due to something like a power outage or a shutdown abort. Instance recovery is the automatic process that the Oracle database performs to ensure that the database is functioning properly and the data is consistent. This is also known as the roll forward and roll backward process. Upon startup of the Oracle database after an instance failure, Oracle reads the current online redo log and applies all the changes in that redo log to the database. Any uncommitted changes are then rolled back. Thus, the database is made consistent from the time of the outage. The concept of defining an approximate set time for the instance recovery process is called bounded time recovery. Bounded time recovery means that the DBA controls or puts bounds on the time it takes for an instance to recover after instance failure. These bounds are controlled by using two initialization parameters.
There are two primary initialization parameters that will speed up the instance recovery process. The first parameter is FAST_START_MTTR_TARGET. This parameter makes the database writer (DBW0 background process) write dirty blocks faster and at a predefined pace to meet agreed-upon recovery timeframes. This parameter can be set from 0 to 3600 seconds. The second parameter is FAST_START_PARALLEL_ROLLBACK. This parameter determines the maximum number of processes that can exist for performing a parallel rollback. This parameter is useful when long-running transactions are involved in the instance recovery process, because it causes the rollback aspect of instance recovery to proceed faster. This parameter can be set to HIGH, LOW, or FALSE. The following are sample init.ora file parameters:

fast_start_mttr_target       = 300
fast_start_parallel_rollback = LOW
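Because both parameters are dynamic, they can also be changed without restarting the database. This is a sketch, assuming a running Oracle9i instance and SYSDBA privileges:

SQL> alter system set fast_start_mttr_target = 300;
SQL> alter system set fast_start_parallel_rollback = LOW;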
There are three V$ views that can be queried to monitor these parameters: V$INSTANCE_RECOVERY, V$FAST_START_SERVERS, and V$FAST_START_TRANSACTIONS. Below is an example of the V$INSTANCE_RECOVERY view, which can be used to monitor and determine the length of recovery for the FAST_START_MTTR_TARGET parameter. TARGET_MTTR and ESTIMATED_MTTR show the estimated time in seconds to recover the database in the event of an instance failure at that given time.

SQL*Plus: Release 9.0.1.0.0 - Production on Mon Sep 24 23:14:36 2001
(c) Copyright 2001 Oracle Corporation. All rights reserved.

Connected to:
Oracle9i Enterprise Edition Release 9.0.1.0.0 - Production
With the Partitioning option
JServer Release 9.0.1.0.0 - Production

SQL> select target_mttr, estimated_mttr from v$instance_recovery;
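The other two views can be queried in the same way. For example, V$FAST_START_TRANSACTIONS shows the progress of transactions being rolled back in parallel; this is a hedged sketch, as the column list can vary slightly by release.

SQL> select usn, state, undoblocksdone, undoblockstotal
  2  from v$fast_start_transactions;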
Instance Recovery in a Distributed Database Environment
Let's look at a real-life example of circumstances that can cause data loss even though Oracle instance recovery itself works properly. A small manufacturing company runs its manufacturing operations on an Oracle database. This database keeps track of raw materials, work in progress, and finished products. The database needs to be available throughout the three shifts that the company runs to meet customer demand. For financial purposes, on a regular basis, information from this manufacturing database is transferred to the financial database on another server. This is done through a variety of out-of-database transfers, which leave the data exposed until it finally arrives in the financial database. In the midst of one of these transfers, the company experiences a severe power spike, which causes the servers to reboot. As a result, instance failure occurs on each database because data was in the process of being transferred from the manufacturing database to the financial database when the power spike occurred. In addition, the database transfer mechanism did not recover the data in transit. At this point, the experienced database administrator can see that the issue here is a loosely distributed database environment. This could result from the company using different vendors for manufacturing and financial applications with customized application program interfaces (APIs).
Now that the power spike is over, the system administrator must get the servers restarted, and the database administrator needs to validate that the databases are available again. When this is done, the administrators find out that the manufacturing database thinks that the data has been transferred to the financial database, but the data never actually made it there before the instance failure. As a result, the databases are out of synchronization with each other, and the data inconsistencies must be manually tracked down. The analysts and business people must determine what did or didn't make it to the appropriate database by running a series of ad hoc SQL statements.
Summary
The recovery structures and processes available in Oracle allow significant recovery options for the DBA. These recovery structures consist of files, processes, memory buffers, and logical objects that reside in the database. In addition to these recovery structures, this chapter also identified the file structures (redo logs, archived logs, control files, data files, and parameter files) and the processes, which are PMON, SMON, LGWR, CKPT, and ARCn. It then discussed the memory structures, which consist of log buffers and data buffers. All of these structures play different roles in the recovery process, and they can be used for different types of recovery. This chapter also described the checkpoint concept. During this discussion, you learned the importance of SCNs and how these identifiers help determine the consistency of the transactions that have been applied to the database. In addition, you viewed the two main initialization parameters that you will need to tune the instance recovery process. It is through these that the DBA can constrain the time of the recovery process to certain approximate limits. These parameters can be monitored by a series of V$ views. This chapter lays the groundwork for making decisions in the backup and recovery process for later chapters and the real world. It is vital that you understand how the file structures and processes that are discussed in this chapter function to make the database consistent or inconsistent in a failure
situation. These topics can be thought of as the building blocks of the backup and recovery processes.
Exam Essentials

Identify the Oracle background processes involved in recovery.   There are five background processes involved with recovery: the log writer (LGWR) writes redo logs, the system monitor (SMON) is involved with instance recovery, the process monitor (PMON) performs recovery of process failure, the checkpoint process (CKPT) performs checkpointing of database files, and the archiver (ARCn) copies online redo logs to archived logs.

Identify the Oracle memory structures that are involved in recovery.   There are two memory structures involved in the recovery process. The log buffers are the memory buffers that record changes to redo logs, and the data block buffers store all the cached data, which is queried and modified by users.

Identify the Oracle file structures involved in recovery.   The file structures related to recovery include the online redo logs, archived logs, control files, data files, and parameter files. These files make up the physical components of the Oracle database.

Understand the checkpoint process.   The checkpoint process interacts with redo logs, archived logs, and data file headers to synchronize the database with SCNs. Checkpoints occur due to log switches, dirty block buffers, system commands, and initialization parameters.

Understand the difference between archived logs and redo logs.   Redo logs contain transactional log information that will get written over. Archived logs are copies of the redo logs made before the redo logs are written over. Archived logs are used to recover the database.

Identify the fast instance recovery techniques.   You should understand the concept of bounded time recovery and be able to use initialization parameters to define recovery time and to monitor this process with the V$ views. The two initialization parameters that help define instance recovery are FAST_START_MTTR_TARGET and FAST_START_PARALLEL_ROLLBACK.
Review Questions 1. What command must the DBA execute to initiate an instance recovery? A. RECOVER DATABASE B. RECOVER INSTANCE C. RECOVER TABLESPACE D. No command is necessary. 2. What process is in charge of writing data to the online redo logs? A. Log buffer B. ARCn C. Data buffers D. LGWR 3. What are the file structures related to recovery? (Choose all that
apply.) A. Redo logs B. Archived logs C. Log buffers D. Data files 4. What file structure consists of a binary file that stores information
about all the physical components of the database? A. Redo log B. Data file C. Control file D. Archived logs
5. Which of the following are processes associated with recovery?
(Choose all that apply.) A. PMON B. SMON C. ARCn D. DBWR 6. In Oracle9i, which process is responsible for performing checkpointing? A. SMON B. PMON C. LGWR D. CKPT 7. Which of the following are memory structures? (Choose all that
apply.) A. Rollback segments B. Log buffers C. Data block buffers D. Data files 8. What type of shutdown requires an instance recovery upon startup? A. SHUTDOWN NORMAL B. SHUTDOWN IMMEDIATE C. SHUTDOWN TRANSACTIONAL D. SHUTDOWN ABORT 9. What events trigger a checkpoint to take place? (Choose all that apply.) A. CKPT B. SHUTDOWN NORMAL C. SHUTDOWN IMMEDIATE D. Log switch
10. What procedure is responsible for stamping the SCN to all necessary
physical database structures? A. Read-consistent image B. Checkpointing C. Commits D. Rollbacks 11. The dirty buffers get written to disk when what event occurs? A. A commit occurs. B. A rollback occurs. C. Checkpoint occurs. D. SHUTDOWN ABORT occurs. 12. What database process is not mandatory or present at startup of the
database? A. PMON B. CKPT C. SMON D. ARCn 13. What is the primary background process responsible for instance
recovery? A. PMON B. LGWR C. SMON D. ARCn 14. What is a redo log that is being actively written to called? A. Online archived log B. Current online redo log C. Online redo log D. Offline redo log
15. What is a redo log that is not currently being written to and is instead
copied to a new location called? A. Online archived log B. Current online redo log C. Offline redo log D. Online redo log 16. Which file structure joins the physical structures of the database to the
logical database structure? A. Table B. Control file C. Redo log D. Data file 17. Which memory structure is responsible for temporarily storing redo
log entries? A. Large pool B. Database buffers C. Log buffers D. Shared pool 18. What is having more than one control file called? A. Multiple control files B. Duplicating control files C. Multiplexing control files D. Duplexing control files 19. What event happens when a log switch occurs? A. An archived log is created. B. A checkpoint occurs. C. All transactions are committed. D. All pending transactions are committed.
Answers to Review Questions 1. D. The instance recovery is automatic. 2. D. The LGWR, or log writer process, writes changes from the log
buffers to the online redo logs. 3. A, B, D. Redo logs, archived logs, and data files are all file structures
that are associated with recovery. Log buffers are memory structures. 4. C. The control file is a binary file that stores all the information about
the physical components of the database. 5. A, B, C. PMON, SMON, and ARCn are all associated with some part
of the recovery process. PMON recovers failed processes, SMON assists in instance recovery, and ARCn generates archived logs used to recover the database. The DBWR writes data to the data files when appropriate. 6. D. The CKPT process performs all the checkpointing. This process
starts by default. 7. B, C. There are only two memory structures related to recovery: log
buffers and data block buffers. 8. D. SHUTDOWN ABORT requires an instance recovery upon startup
because the data files are not checkpointed during shutdown. 9. B, C, D. The SHUTDOWN NORMAL and SHUTDOWN IMMEDIATE events
checkpoint all necessary physical database structures. Switching log files forces a checkpoint of all necessary physical database structures. The CKPT process is responsible for initiating or performing the checkpoint; it doesn’t cause it. 10. B. Checkpointing is the procedure initiated by the CKPT process,
which stamps all the data files, redo logs, and control files with the latest SCN.
11. C. A checkpoint causes dirty buffers to be flushed to disk. A SHUTDOWN
NORMAL or SHUTDOWN IMMEDIATE causes a checkpoint on shutdown, but a SHUTDOWN ABORT doesn't force a checkpoint. This is why instance recovery is necessary upon startup. A rollback and commit do not cause a checkpoint. 12. D. The ARCn process is not required at startup. This process is only
used if the database is running in archive mode. 13. C. The SMON process is responsible for the majority of instance
recovery. 14. B. A current online redo log is a log that is being written to by the
LGWR process. 15. C. Archived logs are offline redo logs. These are redo logs that are
copied to a new location before the LGWR writes over them. 16. D. The data file is a physical structure that makes up a logical
tablespace. 17. C. The log buffers are responsible for temporarily holding log entries
that will be written to the redo logs. 18. C. You should always have more than one control file to protect against
the loss of one. This practice is called multiplexing control files. 19. B. A log switch forces a checkpoint immediately as part of the
synchronization process for recovery. 20. D. The spfile.ora parameter file is new to Oracle9i. The init.ora
file has been around since the beginning of the Oracle database, but the spfile.ora file is a new binary initialization file.
Configuring the Database Archiving Mode ORACLE9i: DBA FUNDAMENTALS II EXAM OBJECTIVES COVERED IN THIS CHAPTER: Describe the differences between Archivelog and Noarchivelog modes. Configure a database for Archivelog mode. Enable automatic archiving. Perform manual archiving of logs. Configure multiple archive processes. Configure multiple destinations, including remote destinations.
Exam objectives are subject to change at any time without prior notice and at Oracle’s sole discretion. Please visit Oracle’s Certification website (http://www.oracle.com/education/ certification/) for the most current exam objectives listing.
Configuring an Oracle database for backup and recovery can be complex. At a minimum, you must understand the archive process, the initialization parameters associated with the archive process, the commands necessary to enable and disable archiving, the commands used to manually archive, and the process of initializing automated archiving. This chapter provides examples of the backup and recovery configuration process. After reading this chapter, you should be comfortable with this process. In Chapter 6, "Instance and Media Recovery Structures," you learned about the file and memory structures involved with Oracle backup and recovery. Two of these structures are covered in more detail within this chapter—specifically, the archiver process and the archived logs that this process generates. The archiver and archived logs are the key components you need for the backup and recovery process. Archived logs make it possible for a complete recovery to be conducted. In addition, the archiver or archivers are responsible for creating archived logs so that they are available in the event of a failure. It is important that you understand how to configure the database for archiving and that you need to provide multiple copies of archived logs to reduce the damage caused by failure situations.
Choosing ARCHIVELOG Mode or NOARCHIVELOG Mode
One of the most fundamental backup and recovery decisions that a DBA will make is whether to operate the database in ARCHIVELOG mode or NOARCHIVELOG mode. As you learned earlier, the redo logs record all the transactions that have occurred in a database, and the archived logs are copies
of these redo logs. So, the archived logs contain the historical changes, or transactions, that occur in the database. Operating in ARCHIVELOG mode means that the database will generate archived logs; operating in NOARCHIVELOG mode means that the database will not generate archived logs. This section discusses the differences between ARCHIVELOG and NOARCHIVELOG mode.
ARCHIVELOG Mode In ARCHIVELOG mode, the database generates archived log files from the redo logs. This means that the database makes copies of all the historical transactions that have occurred in the database. Here are other characteristics of operating in ARCHIVELOG mode:
Performing online (hot) backups is possible. This type of backup is done when the database is up and running. Therefore, a service outage is not necessary to perform a backup. The ALTER TABLESPACE BEGIN BACKUP command is issued to perform hot backups. After this command is issued, an OS copy can take place on each tablespace’s associated data files. When the OS copy is complete, an ALTER TABLESPACE END BACKUP command must be issued. These commands must be executed for every tablespace in the database.
A complete recovery can be performed. This is possible because the archived logs contain all the changes up to the point of failure. All logs can be applied to a backup copy of the database (hot or cold backup). This would reapply all the transactions up to the time of failure. Thus, there would be no data loss or missing transactions.
Tablespaces can be taken offline immediately.
Increased disk space is required to store archived logs, and increased maintenance is associated with maintaining this disk space.
NOARCHIVELOG Mode In NOARCHIVELOG mode, the database does not generate archived log files from the redo logs. This means that the database is not storing any historical transactions from such logs. Instead, the redo logs are written over each other as needed by Oracle. As a result, the only transactions that can be
used in the event of instance failure are in the current redo logs. Operating in NOARCHIVELOG mode has the following characteristics:
In most cases, a complete recovery cannot be performed. This means that a loss of data will occur. The last cold backup will need to be used for recovery.
The database must be shut down completely for a backup, which means the database will be unavailable to the users of the database during that time. This means that only a cold backup can be performed.
Tablespaces cannot be taken offline immediately.
Additional disk space and maintenance is not needed to store archived logs.
Understanding Recovery Implications of NOARCHIVELOG
The recovery implications associated with operating a database in NOARCHIVELOG mode are important. A loss of data usually occurs when the last consistent full backup is used for a recovery. Therefore, to reduce the amount of data lost in the event of a failure, frequent cold backups need to be performed. This means that the database could be unavailable to users on a regular basis. Now that you are familiar with the problems associated with this mode, let's look at examples of when it would not make sense to use NOARCHIVELOG mode and when it would. Imagine that Manufacturing Company A's database must be available for 24 hours a day to support three shifts of work. This work consists of entering orders, bills of lading, shipping instructions, and inventory adjustments. The shifts are as follows: day shift, 9 A.M. to 5 P.M.; swing shift, 5 P.M. to 1 A.M.; and night shift, 1 A.M. to 9 A.M. If this database is shut down for a cold backup from midnight to 2 A.M., then the night shift and swing shift would be unable to use it during that period. As a result, a NOARCHIVELOG backup strategy would not be workable for Manufacturing Company A. On the other hand, if Manufacturing Company B's database must be available only during the day shift, from 9 A.M. to 5 P.M., then backups could be performed after 5 P.M. and before 9 A.M. without affecting users. Thus, the DBA could schedule the database to shut down at midnight and perform
the backup for two hours. The database would be restarted before 9 A.M., and there would be no interference with the users’ work. In the event of a failure, there would be a backup from each evening, and only a maximum of one day’s worth of data would be lost. If one day’s worth of data loss were acceptable, this would be a workable backup and recovery strategy for Manufacturing Company B. These examples show that in some situations, operating in NOARCHIVELOG mode makes sense. But there are recovery implications that stem from this choice. One implication is that a loss of data will occur in the event of a failure. Also, there are limited choices on how to recover. The choice is usually to restore the whole database from the last consistent whole backup while the database was shut down (cold backup).
Configuring a Database for ARCHIVELOG Mode and Automatic Archiving
Once the determination has been made to run the database in ARCHIVELOG mode, the database will need to be configured properly. You can do this to a new database during database creation or to an existing database via Oracle commands. After the database is in ARCHIVELOG mode, you will most likely configure automatic archiving. Automatic archiving frees the DBA from the manual task of archiving logs with commands before the online redo logs perform a complete cycle.
Setting ARCHIVELOG Mode
ARCHIVELOG mode can be set during the database creation or by using the ALTER DATABASE ARCHIVELOG command. The database must be mounted, but not open, in order to execute this command. This command stays in force until it is turned off by using the ALTER DATABASE NOARCHIVELOG command. The database must be mounted, but not open, in order to execute this command as well. The redo log files will be archived to the location specified by the LOG_ARCHIVE_DEST parameter in the init.ora file. By default, the database is in manual archiving mode. This means that as the redo logs become full, the database will hang until the DBA issues the ARCHIVE LOG ALL command, which archives all the online redo log files not yet archived. Figure 7.1 shows a database configured for ARCHIVELOG mode.
FIGURE 7.1   A database configured for ARCHIVELOG mode
Let's look at an example of how to tell whether the database is in ARCHIVELOG or NOARCHIVELOG mode. You will need to run SQL*Plus and execute the following SQL statement, which queries one of the V$ views. Alternatively, you can perform OS commands, such as ps -ef | grep arch in Unix, that check the process list to see whether the ARCn process is running. This process does the work of copying the redo logs to archived logs. This example shows ARCHIVELOG mode using the V$ views:

oracle@octilli:~ > sqlplus /nolog

SQL*Plus: Release 9.0.1.0.0 - Production on Tue Sep 25 19:08:25 2001
(c) Copyright 2001 Oracle Corporation. All rights reserved.

SQL> connect /as sysdba
Connected.
SQL> select name,log_mode from v$database;

NAME      LOG_MODE
--------- ------------
ORC9      ARCHIVELOG

SQL>

This example shows NOARCHIVELOG mode using the V$ views:

oracle@octilli:~ > sqlplus /nolog

SQL*Plus: Release 9.0.1.0.0 - Production on Tue Sep 25 19:08:25 2001
(c) Copyright 2001 Oracle Corporation. All rights reserved.

SQL> connect /as sysdba
Connected.
SQL> select name,log_mode from v$database;

NAME      LOG_MODE
--------- ------------
ORC9      NOARCHIVELOG

SQL>

The following is an example of using the Unix OS command ps -ef | grep arch to see whether the archiver process is running. This is more indirect than the V$ view output, but if the archiver process is running, then the database would have to be in ARCHIVELOG mode.

oracle@octilli:~ > ps -ef | grep arc
oracle    2097     1  0 00:26 ?      00:00:00 ora_arc0_orc9
oracle    4468  4327  0 19:34 pts/3  00:00:00 grep arc
oracle@octilli:~ >
A couple of methods exist for determining the location of the archived logs. The first is to execute the SHOW PARAMETER command, and the second is to view the init.ora file. An example of using the SHOW PARAMETER command to display the value of LOG_ARCHIVE_DEST is as follows:

SQL> show parameters log_archive_dest

NAME
------------------------------------
log_archive_dest
log_archive_dest_1
log_archive_dest_10
log_archive_dest_2
log_archive_dest_3
log_archive_dest_4
log_archive_dest_5
log_archive_dest_6
log_archive_dest_7
log_archive_dest_8
log_archive_dest_9
log_archive_dest_state_1
log_archive_dest_state_10
log_archive_dest_state_2
log_archive_dest_state_3
log_archive_dest_state_4
log_archive_dest_state_5
log_archive_dest_state_6
log_archive_dest_state_7
log_archive_dest_state_8
log_archive_dest_state_9

SQL>
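Another way to check the destinations is to query the V$ARCHIVE_DEST view, which reports the status of each destination. A hedged example:

SQL> select dest_id, status, destination from v$archive_dest;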
An example of viewing a partial init.ora file to display the LOG_ARCHIVE_DEST is listed here:

#####################################################################
# Partial Sample of Init.ora File
#####################################################################
###########################################
# Archive Logging
###########################################
log_archive_start=true
log_archive_dest=/oracle/admin/orc9/arch
log_archive_format=archorc9_%s.log
Setting Automatic Archiving
To configure a database for automatic archiving, you must perform a series of steps:

1. Edit the init.ora file and set the LOG_ARCHIVE_START parameter to TRUE. This will automate the archiving of redo logs as they become full.

2. Shut down the database and restart the database by using the command STARTUP MOUNT.

3. Use the ALTER DATABASE ARCHIVELOG command to set ARCHIVELOG mode.

4. Open the database with the ALTER DATABASE OPEN command.
To verify that the database is actually archiving, you should execute the ALTER SYSTEM SWITCH LOGFILE command. After you execute this command, check the OS directory specified by the parameter LOG_ARCHIVE_DEST to validate that archived log files are present. You can also execute the ARCHIVE LOG LIST command to display information that confirms the database is in ARCHIVELOG mode and automatic archival is enabled. Now let’s walk through this process.
1. First, edit the init.ora file and change the parameter LOG_ARCHIVE_START to TRUE. As a result, the database will be in automatic ARCHIVELOG mode. See the example init.ora file below.

#####################################################################
# Partial Sample of Init.ora File
#####################################################################
###########################################
# Archive Logging
###########################################
log_archive_start=true
log_archive_dest=/oracle/admin/orc9/arch
log_archive_format=archorc9_%s.log

2. Next, run the following commands in SQL:

SQL> shutdown
SQL> startup mount
SQL> alter database archivelog;
SQL> alter database open;
The automatic archival feature has now been enabled for this database. To verify that the database has been configured correctly, you can perform the following checks.

1. First, perform an ALTER SYSTEM SWITCH LOGFILE. This will need to be done n + 1 times, where n is the number of redo logs in your database.

SQL> alter system switch logfile;

2. Next, perform a directory listing of the archive destination in LOG_ARCHIVE_DEST. The Unix command pwd displays the current working directory, and ls shows the contents of the directory.

oracle@octilli:/oracle/admin/orc9/arch > ls -ltr
total 276
-rw-r-----   1 oracle   dba    133632 Sep 25 ...
-rw-r-----   1 oracle   dba    125952 Sep 25 ...
-rw-r-----   1 oracle   dba      1024 Sep 25 ...
-rw-r-----   1 oracle   dba      1536 Sep 25 ...
-rw-r-----   1 oracle   dba      1024 Sep 25 00:28 archorc9_7.log
oracle@octilli:/oracle/admin/orc9/arch >

The other way to verify that the automatic archival feature has been enabled is to execute the ARCHIVE LOG LIST command, which displays the status of these settings (as is shown here).

SQL> archive log list;
Database log mode
Automatic archival
Archive destination
Oldest online log sequence
Next log sequence to archive
Current log sequence
SQL>
If you enable ARCHIVELOG mode but forget to enable automatic archival by not editing the init.ora file and changing LOG_ARCHIVE_START to TRUE, the database will hang when it gets to the last available redo log. You will need to perform manual archiving as a temporary fix, or shut down and restart the database after changing the LOG_ARCHIVE_START parameter.
Providing Adequate Space for Archive Logging
The additional disk space necessary for ARCHIVELOG mode is often overlooked or underestimated. For average databases, you should have enough space to keep at least one or two days of archived logs online. To size this space correctly, you will need to estimate the volume of redo generated during peak transaction periods. Peak times are typically when the heaviest transactional activity or batch activity occurs. Multiply this estimated size by the number of log_archive_dest_n locations that are set up. If you perform nightly hot backups on your system, one or two days of online archived logs should meet most of your recovery requirements from the archived log perspective.
But if the archive process doesn’t have enough disk space to write archived logs, the database will hang or stop all activity. This hung state will remain until you make more space available by moving older archived logs off your system, by compressing log files, or by setting up an automated job that will remove logs after they have been written to tape. You must be careful not to misplace or delete archived logs when trying to free up space for the archive process. Remember these archived logs could be required in a recovery situation.
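To get a rough feel for daily archive volume on a database that is already archiving, you can summarize V$ARCHIVED_LOG. This is a hedged sketch; it assumes the database has been in ARCHIVELOG mode long enough to have history in the view.

SQL> select trunc(completion_time) day, count(*) logs,
  2         sum(blocks * block_size) bytes
  3  from v$archived_log
  4  group by trunc(completion_time);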
Manually Archiving Logs
The manual archiving of logs consists of enabling the database for ARCHIVELOG mode and then manually executing the ARCHIVE LOG ALL command from SQL. The init.ora parameter LOG_ARCHIVE_START must be set to FALSE to disable automatic archival. The next step is to put the database in ARCHIVELOG mode by performing the following commands:

SQL> shutdown
SQL> startup mount
SQL> alter database archivelog;
SQL> alter database open;
Now you are ready to perform manual archiving of redo logs by using

SQL> archive log all;

or

SQL> archive log next;

The ARCHIVE LOG ALL command will archive all redo logs available for archiving, and the ARCHIVE LOG NEXT command will archive the next group of redo logs.
Using init.ora Parameters for Multiple Archive Processes and Locations
The capability to have more than one archived log destination and multiple archivers was first introduced in Oracle8. In Oracle8i and 9i, more than two destinations can be used, providing even greater archived log redundancy. The main reason for having multiple destinations for archived log files is to eliminate any single point of failure. For example, if you were to lose the disk storing the archived logs before these logs were backed up to tape, the database would be vulnerable to data loss in the event of a failure. If the disk containing the archived logs was lost, then the safest thing to do would be to run a backup. This would ensure that no data would be lost in the event of a database crash from media failure. Having only one archived log location is a single point of failure for the backup process. Hence, Oracle has provided multiple locations, which can be on different disk drives, so that the likelihood of archived logs being lost is significantly reduced. See Figure 7.2, which demonstrates ARCHIVELOG mode with multiple destinations. In addition to reducing the potential archived log loss, one of the multiple locations can be remote. The remote location supports the Oracle standby database, which is an option that can be configured to protect a computing site from a disaster. Let's go over a brief explanation of this option and why it can require remote archived log capabilities. The Oracle standby database can require archived logs to be moved to a remote server, which is running a copy of the production database. This copy of the production database is in a permanent recovery situation, with archived logs from the production database regularly being applied to the standby database. There are certain initialization parameters that can be used to assure that the archived logs get moved to remote locations. These initialization parameters will be covered in more detail in the upcoming section "Remote Archived Log Locations."
Note: The log_archive_dest_1 and log_archive_dest_2 init.ora parameters work in conjunction with each other, as do the log_archive_dest and log_archive_duplex_dest parameters.
Multiple Archive Processes and Locations
Having multiple archive processes can make the archived log creation process faster. If significant volumes of data are going through the redo logs, the archiver can be a point of contention; database activity could wait or be delayed while an archived log is being written. Furthermore, the archiver has more work to do if the database is writing to multiple destinations. Thus, multiple archive processes can do the extra work to support the additional archive destinations. To implement these features, the database must be in ARCHIVELOG mode. (To set the database to ARCHIVELOG mode, perform the steps shown earlier in the section entitled "Configuring a Database for ARCHIVELOG Mode and Automatic Archiving.") To verify that the database is in ARCHIVELOG mode, either run an ARCHIVE LOG LIST command or query the V$DATABASE view, as shown earlier.
To configure a database for multiple archive processes and LOG_ARCHIVE_DEST locations, you use two sets of init.ora parameters. The first set is based on the destination parameter, which has been slightly changed to LOG_ARCHIVE_DEST_N (where N is a number from 1 to 10). The values for the parameters LOG_ARCHIVE_DEST_1, LOG_ARCHIVE_DEST_2, and LOG_ARCHIVE_DEST_3 are as follows: 'LOCATION = /ORACLE/ADMIN/ORC9/ARCH1', 'LOCATION = /ORACLE/ADMIN/ORC9/ARCH2', and 'LOCATION = /ORACLE/ADMIN/ORC9/REMOTE_ARCH3'. The first set of parameters is listed below in init.ora.

###########################################
# Archive Logging
###########################################
log_archive_start=true
#log_archive_dest=/oracle/admin/orc9/arch
log_archive_dest_1='location=/oracle/admin/orc9/arch1'
log_archive_dest_2='location=/oracle/admin/orc9/arch2'
log_archive_dest_3='location=/oracle/admin/orc9/arch_remote3'
log_archive_max_processes=3
log_archive_format=archorc9_%s.log

After these parameters are changed or added, you will need to restart the database to use the new parameters. These new destinations will then be in effect. The second set of parameters consists of LOG_ARCHIVE_DEST and LOG_ARCHIVE_DUPLEX_DEST. The following example uses LOG_ARCHIVE_DEST = /ORACLE/ADMIN/ORC9/ARCH1 and LOG_ARCHIVE_DUPLEX_DEST = /ORACLE/ADMIN/ORC9/ARCH2. The main difference in this approach is that you use these parameters if you are going to have only two locations or want to use the same init.ora parameter format supported in 8.0.x. These parameters are mutually exclusive with the LOG_ARCHIVE_DEST_N parameters mentioned in the previous example. This second set of parameters can be seen below in the init.ora parameter values.

###########################################
# Archive Logging
###########################################
log_archive_start=true
log_archive_dest=/oracle/admin/orc9/arch1
log_archive_duplex_dest=/oracle/admin/orc9/arch2
log_archive_max_processes=2
log_archive_format=archorc9_%s.log

This second method of mirroring archived logs is designed to mirror just one copy of the log files, whereas the first method can mirror up to 10 copies, one of which can be a remote database.
Remote Archived Log Locations
With all remote transfers, there is a greater potential for problems. To deal with these potential problems, Oracle has developed some specific options for remote archive destinations. Among these are a couple of attributes and initialization parameters that determine whether the logs are required to reach the remote location and whether unsuccessful sends should be retried. LOG_ARCHIVE_MIN_SUCCEED_DEST=N is the initialization parameter that determines how many of the 10 maximum archive destinations must successfully receive an archived log before the online redo logs can be written over. If this minimum is not met, the database will hang. The MANDATORY and OPTIONAL attributes specify whether a destination must receive its archived logs. When you use the MANDATORY attribute, the destination requires log files to arrive at the remote location; if these files don't arrive, the database generating these logs will hang. When you use the OPTIONAL attribute, the opposite of the MANDATORY attribute, the database will continue processing without hanging even if a log does not make it to the remote location. The REOPEN attribute causes failed archive destinations to be retried after a certain period of time. Below is an example of these attributes and the parameters previously mentioned. Note that the REOPEN example uses a SERVICE destination for use with a standby database.

LOG_ARCHIVE_DEST_1="LOCATION=/oracle/admin/tst9/arch1 MANDATORY"
LOG_ARCHIVE_DEST_2="LOCATION=/oracle/admin/tst9/arch2 OPTIONAL"
LOG_ARCHIVE_DEST_3="SERVICE=euro_fin_db_V9.0.1 REOPEN=60"
Make sure each LOG_ARCHIVE_DEST_N and LOG_ARCHIVE_DUPLEX_DEST is on a different physical device. The main purpose of these new parameters is to allow a copy of the files to remain intact if a disk were to crash.
Summary

The Oracle backup and recovery capability is full featured and robust. It provides many options that support a wide variety of backup and recovery situations. In this chapter, you have seen how to configure an Oracle database for backup and recovery. You have learned the ramifications of operating in ARCHIVELOG mode as opposed to NOARCHIVELOG mode, and how that choice affects the backup and recovery process. You have seen examples of how the init.ora parameters control the destinations and automation of archive logging. Finally, you walked through an example that enabled ARCHIVELOG mode and automatic archival of logs in the database. A solid understanding of the archived log process is fundamental to the backup and recovery process. The decision to enable archive logging has major implications for a DBA's ability to recover the database. In addition, certain recovery options that will be covered in upcoming chapters are dependent on whether or not you have enabled ARCHIVELOG mode. Make sure you understand the reasons for enabling or not enabling the archived log process, because this knowledge will benefit you during OCP testing, as well as in real-life situations.
Exam Essentials

Identify the differences between ARCHIVELOG and NOARCHIVELOG modes.   You should know the differences between having the database generate archived logs or not. Make sure that you are aware of the impact that the ARCHIVELOG and NOARCHIVELOG modes have on the backup and recovery process.

Know how to configure a database in ARCHIVELOG mode.   Be able to explain how to configure a database so that it will generate archived logs. You should also be able to identify the commands and initialization parameters involved in this process.

Know how to configure the database for automatic archiving.   Be able to identify the initialization parameter LOG_ARCHIVE_START = TRUE, which is used to enable automatic archiving.

Demonstrate how to perform manual archiving.   Know how to use the required commands to generate archived logs manually.
Understand how to configure a database for multiple archive destinations.   Know the two different sets of initialization parameters: LOG_ARCHIVE_DEST_N and LOG_ARCHIVE_DEST/LOG_ARCHIVE_DUPLEX_DEST. You should also understand the differences between these methods, such as remote archive logging and the number of locations.

Understand how to configure multiple archive processes.   To configure multiple archive processes, you must be familiar with the initialization parameter that configures the database for multiple archive log processes.

Understand the commands and initialization parameters involved with remote archiving.   You should be comfortable with the attributes that affect remote archival, such as MANDATORY, OPTIONAL, and REOPEN. You should also be familiar with the initialization parameters, such as LOG_ARCHIVE_MIN_SUCCEED_DEST, and how they affect the remote archive process.
Key Terms
Before you take the exam, be certain you are familiar with the following terms: ARCHIVELOG mode
Review Questions 1. What state does the database need to be in to enable archiving? A. Opened B. Closed C. Mounted D. Unmounted 2. What are some of the issues the DBA should be aware of when the
database is running in ARCHIVELOG mode? A. Whether they have the ability to perform a complete recovery B. Whether they have to shut down the database to perform backups C. Whether they have the ability to perform an incomplete recovery D. Whether this mode will cause increased disk space utilization 3. Which of the choices below are ways in which you can change the
destination of archived log files? (Choose all that apply.) A. Change the LOG_ARCHIVE_DEST_n init.ora parameter. B. Configure the control file. C. Change the LOG_ARCHIVE_DEST init.ora parameter. D. Change the LOG_ARCHIVE_DUMP init.ora parameter. 4. What is the maximum number of archived log destinations that is supported?
5. What type of database backup requires a shutdown of the database?
(Choose all that apply.) A. A database in ARCHIVELOG B. A database in NOARCHIVELOG C. Hot backup (online backup) D. Cold backup (offline backup) 6. List all the methods of determining that the database is in ARCHIVELOG
mode. (Choose all that apply.) A. Query the V$DATABASE view. B. See whether the ARCn process is running. C. Check the value of the LOG_ARCHIVE_DEST parameter. D. View the results of the ARCHIVE LOG LIST command. 7. You are the DBA for a manufacturing company that runs three shifts
a day and performs work 24 hours a day on the database, taking orders, performing inventory adjustments, and shipping products. Which type of backup should you perform? (Choose all that apply.) A. Hot backup in ARCHIVELOG mode B. Cold backup in ARCHIVELOG mode C. Online backup in NOARCHIVELOG mode D. Online backup in ARCHIVELOG mode 8. What init.ora parameter allows no more than two archive log
destinations? A. ARCHIVE_LOG_DEST_N B. ARCHIVE_LOG_DEST_DUPLEX C. LOG_ARCHIVE_DEST_DUPLEX D. LOG_ARCHIVE_DUPLEX_DEST
9. What init.ora parameter allows no more than 10 archive log
destinations? A. ARCHIVE_LOG_DEST_N B. ARCHIVE_LOG_DEST_DUPLEX C. LOG_ARCHIVE_DEST_DUPLEX D. LOG_ARCHIVE_DUPLEX_DEST E. LOG_ARCHIVE_DEST_N 10. What init.ora parameter allows remote archive log destinations? A. ARCHIVE_LOG_DEST B. ARCHIVE_LOG_DEST_DUPLEX C. LOG_ARCHIVE_DEST_N D. LOG_ARCHIVE_DUPLEX_DEST 11. What command is necessary to perform manual archiving? A. MANUAL ARCHIVE ALL B. ARCHIVE MANUAL C. LOG ARCHIVE LIST D. ARCHIVE LOG ALL 12. What is required to manually archive the database? A. The database running in ARCHIVELOG mode. B. LOG_ARCHIVE_START is set to TRUE. C. MANUAL_ARCHIVE is set to TRUE. D. Nothing.
13. What command is necessary to perform manual archiving? A. ARCHIVE LOG NEXT B. ARCHIVE LOG LIST C. ARCHIVE ALL LOG D. ARCHIVE ALL 14. What will happen to the database if ARCHIVELOG mode is enabled, but
the LOG_ARCHIVE_START command is not set to TRUE? (Choose all that apply.) A. The database will perform with problems. B. The database will hang until the archived logs are manually
archived. C. The database will not start because of improper configuration. D. The database will work properly until all online redo logs have
been filled. 15. Which of the following is true about NOARCHIVELOG mode? A. The database must be shut down completely for backups. B. Tablespaces must be taken offline before backups. C. The database must be running for backups. D. The database may be running for backups, but it isn’t required. 16. Which initialization parameter determines how many of the archived
log destinations must be successfully written to before the online redo logs may be written over? A. ARCHIVE_MINIMUM_SUCCEED B. LOG_ARCHIVE_MINIMUM_SUCCEED C. LOG_ARCHIVE_MIN_SUCCEED_DEST D. ARCHIVE_MIN_SUCCEED_DEST
17. What parameter will allow the database to keep on processing if an
archived log does not arrive at a log archive destination? A. REOPEN B. MANDATORY C. OPTIONAL D. UNSUCCESSFUL 18. What parameter will cause failed archived log transfers to retry after
a determined period of time? A. MANDATORY B. OPTIONAL C. REOPEN D. RETRY 19. How can you identify whether the database is in ARCHIVELOG mode or
not? (Choose all that apply.) A. You can use the ARCHIVE LOG LIST command. B. You can select * from V$DATABASE. C. You can use the ARCHIVELOG LIST command. D. You can select * from V$LOG. 20. Complete recovery can be performed when what is true about the
database? A. It is in ARCHIVELOG mode. B. It is in NOARCHIVELOG mode. C. Manual archive logging is enabled. D. Automatic archive logging is enabled.
Answers to Review Questions 1. C. The database must be in MOUNT mode to enable archive logging. 2. D. The database will use more space because of the creation of
archived logs. 3. A, C. The archive destination is controlled by the LOG_ARCHIVE_DEST
init.ora parameter and by issuing ALTER SYSTEM commands when the database is running. 4. D. The maximum number of archive destinations is controlled by the
LOG_ARCHIVE_DEST_N init.ora parameter. It will support up to 10 locations. 5. B, D. A database in NOARCHIVELOG mode will not support backups
without a shutdown of the database, and a cold backup is a backup taken when the database is offline or shut down. 6. A, B, D. The V$DATABASE view shows whether the database is in
ARCHIVELOG or NOARCHIVELOG mode, the ARCn process indirectly determines that the archiver is running, and the ARCHIVE LOG LIST command shows whether archiving is enabled. Checking the value of the LOG_ARCHIVE_DEST parameter indicates only whether there is a directory to contain the archived logs, not whether the database is in ARCHIVELOG mode. 7. A, D. A hot backup and online backup are synonymous. A cold
backup and offline backup are synonymous. A hot backup can be run only in ARCHIVELOG mode (so the backup method in answer C doesn’t exist). A hot backup can be run while the database is running; therefore, it does not affect the availability of the database. Because this database must have 24-hour availability, a hot or online backup would be appropriate. 8. D. The LOG_ARCHIVE_DUPLEX_DEST parameter allows for the second
archive destination. Also, this parameter must be used in conjunction with the LOG_ARCHIVE_DEST parameter for the first archive log destination.
9. E. The LOG_ARCHIVE_DEST_N allows up to 10 locations. 10. C. LOG_ARCHIVE_DEST_N allows up to 10 locations. One of these
locations can be remote, that is, not on the same server. 11. D. The ARCHIVE LOG ALL command is responsible for manually gen-
erating archived logs. The ARCHIVE LOG NEXT command will also archive logs manually, but it will only archive the next archived log in the sequence as opposed to all of the archived logs. The database must also be in ARCHIVELOG mode. 12. A. The database running in ARCHIVELOG mode is a requirement for
performing manual or automatic archiving. This means that the database must be restarted with STARTUP MOUNT and then the command ALTER DATABASE ARCHIVELOG must be executed. Then the database should be opened for normal use. 13. A. The ARCHIVE LOG NEXT command will archive the next group of
redo logs. 14. B, D. The database will hang after all the online redo logs have been
filled, but it will work properly as long as there are unfilled online redo logs. 15. A. The database must be shut down completely for backups. If the
database is open and a backup is taken, the backup will be invalid. 16. C. The LOG_ARCHIVE_MIN_SUCCEED_DEST parameter determines how
many of the log archive destinations must be successfully written to before the online redo logs can be overwritten. 17. C. The OPTIONAL parameter will allow the database to continue pro-
cessing even when the online redo logs have been filled and the archived log still has not arrived at the log archive destination. 18. C. The REOPEN parameter will cause failed archived log transfers to
19. A, B. The ARCHIVE LOG LIST command can be executed from within
SQL*Plus and will display whether or not the database is in ARCHIVELOG mode. Selecting from V$DATABASE will also show this information. 20. A. When the database is in ARCHIVELOG mode, complete recovery can
be performed. What mode to use is one of the most significant decisions you can make about backup and recovery of your database.
Oracle Recovery Manager Overview and Configuration

ORACLE9i: DBA FUNDAMENTALS II EXAM OBJECTIVES COVERED IN THIS CHAPTER:

Identify the features and components of RMAN.
Describe the RMAN repository and control file usage.
Describe channel allocation.
Describe the Media Management Library interface.
Connect to RMAN without the recovery catalog.
Configure the RMAN environment.
Exam objectives are subject to change at any time without prior notice and at Oracle’s sole discretion. Please visit Oracle’s Certification website (http://www.oracle.com/education/certification/) for the most current exam objectives listing.
This chapter provides an overview of RMAN, including the capabilities and components of the RMAN tool. The RMAN utility attempts to move away from the highly customized OS backup scripts (user-managed) discussed in earlier chapters to a highly standardized backup and recovery process. Thus, as of Oracle version 8, you can reduce backup and recovery mistakes associated with the highly customized OS backup scripts used before RMAN’s release. In this chapter, you will walk through a practical example of connecting to the RMAN utility without using the optional, but recommended, recovery catalog. We will also demonstrate multiple ways to configure the RMAN environment to automate and set up manual RMAN settings. Each of the topics covered in this chapter is important; you can apply this knowledge to your real-world DBA work in addition to your preparations for the test. When you know how to use the RMAN repository, channel allocation, and MML, and you can configure the RMAN environment, you will be better able to utilize RMAN in your organization.
Identifying Features of Oracle Recovery Manager (RMAN)
Oracle Recovery Manager (RMAN) has many features that can be used to facilitate the backup and recovery process. This tool comes in both GUI and command-line versions. In general, RMAN performs and standardizes the backup and recovery process, and by doing this, it can reduce mistakes made by DBAs during this process. A list of some of the major RMAN features follows:

Backs up databases, tablespaces, data files, control files, and archived logs The RMAN tool is capable of backing up Oracle databases in multiple ways to allow for flexibility in backup and recovery methods.
Compresses backups by determining which blocks have changed and backing up only those blocks One way RMAN improves the performance of a backup is by compressing it: RMAN identifies the blocks that have changed and backs up only those blocks. Empty blocks are not backed up.

Performs incremental backups RMAN can perform incremental and full backups. Incremental backups include only the changes that have been made since the last backup. This type of backup can improve performance by allowing you to take a full backup one day a week and incremental backups on the rest of the days.

Provides scripting capabilities to combine tasks One way RMAN can improve the efficiency of your backup, restoration, and recovery operations is by allowing RMAN commands to be scripted. The scripts can consist of multiple RMAN commands and can be stored within the recovery catalog. These scripts can be called and executed to perform tasks repetitively.

Logs backup operations RMAN can log the status of backups as they progress. This information is stored in log and trace files.

Integrates with third-party tape media software The RMAN tool provides APIs for many third-party tape media software packages. These allow RMAN to be executed within other non-Oracle backup utilities and to be integrated into a common backup strategy for an organization.

Provides reports and lists of catalog information Information about backups that is stored within the recovery catalog can be queried with the RMAN LIST and REPORT commands. These commands provide useful ways to display information.

Stores information about backups in a catalog within an Oracle database Information about backups is stored in the recovery catalog, where it can be retrieved whenever it is needed.

Offers performance benefits, such as parallel processing of backup and restore operations Backup and restore operations can be run in parallel. This can split workloads onto different tape heads and disk devices, which will improve performance.

Creates duplicate databases for testing and development purposes Duplicate databases can be created from RMAN backups and can be used for testing purposes.

Tests whether backups can be restored successfully RMAN provides the VALIDATE command, which checks whether a backup is valid.
Determines whether backups are still available in media libraries RMAN provides the CROSSCHECK command to determine whether the backup media and catalog information match.

Figure 8.1 illustrates some of the differences between RMAN and the customized backup scripts and commands used in the earlier chapters.

FIGURE 8.1 Differences between backup scripts and the RMAN utility
[Figure: Custom backup scripts, issued from Telnet, DOS, or SVRMGR sessions, combine SQL commands (ALTER TABLESPACE BEGIN/END BACKUP, ALTER DATABASE BACKUP CONTROLFILE TO TRACE, ARCHIVE LOG ALL) with OS commands (cp, tar, cpio, dd in Unix) to make hot and cold backups of database files. The RMAN utility, driven from the RMAN> prompt or Enterprise Manager, uses commands, scripts, and the recovery catalog to back up and restore database files, perform hot and cold backups, record backup information, validate backups, and integrate with third-party tape media hardware.]
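To give a flavor of the listing and reporting features mentioned above, here are two commands you might try at the RMAN prompt once connected to a target database (output omitted; this is a minimal sketch, not a complete session):

RMAN> report schema;
RMAN> list backup;

REPORT SCHEMA lists the data files that make up the target database, and LIST BACKUP shows the backups recorded in the RMAN repository.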
Exploring RMAN Components
The main components of RMAN are GUI or command-line access, the optional recovery catalog, the RMAN commands and scripting, and tape media
connectivity. These components enable a DBA to automate and standardize the backup and recovery process. Each component is described below:

GUI or command-line access method This method provides access to the Recovery Manager (RMAN) utility, which spawns server sessions that connect to the target database—the database targeted by RMAN for backup or recovery actions. GUI access is provided through the Oracle Enterprise Manager (OEM) tool, a DBA tool that performs backups, exports/imports, data loads, performance monitoring/tuning, job and event scheduling, and standard DBA management, to mention a few. Command-line access can be run in a standard Unix Telnet or X Windows session, as well as in a DOS shell in the Windows environment.

Optional recovery catalog This special data dictionary of backup information is stored in a set of tables, in much the same way that the data dictionary stores information about databases. The recovery catalog provides a method for storing information about backups, restores, and recoveries. This information provides status updates on the success or failure of backups, the OS backup, data file copies, tablespace copies, control file copies, archived log copies, full database backups, and the physical structures of a database.

RMAN commands These commands enable different actions to be performed that facilitate the backup and restoration of the database. The commands can be organized logically into scripts, which can then be stored in the recovery catalog database. The scripts can be reused for other backups, thus keeping consistency among different target database backups.

Tape media connectivity This component provides a method for interfacing with various third-party tape hardware vendors to store and track backups in automated tape libraries (ATLs). An ATL is a large tape cabinet that stores many tapes and can write to multiple tapes at the same time to improve performance. ATLs use robotic arms to load, unload, and store tapes. By default, Oracle provides a media management library (MML) for the Legato Storage Management (LSM) software, which manages an ATL device.

Figure 8.2 shows an example of how the RMAN utility’s components fit together to form a complete backup and recovery package.
FIGURE 8.2 The components of RMAN
[Figure: From the RMAN> prompt or Enterprise Manager, RMAN reads stored scripts and information about backups from the recovery catalog database and starts server sessions against the target databases to back up and restore them, writing to and reading from third-party tape hardware or disk.]
Storing Information Using the RMAN Repository and Control Files
The RMAN utility uses two methods to store information about the target databases that are backed up. Each method is called the RMAN repository. When RMAN uses the first method, it accesses an optional RMAN catalog or recovery catalog of information about backups. In the second method, it accesses the necessary information about backups in the target database’s control file. If the optional recovery catalog database is not used, then the target database’s control file will be used as the RMAN repository. The information that RMAN needs to function in the recovery catalog database
or in the target database’s control file is also called the RMAN repository. The target database’s control file is always updated, whether the catalog is used or not. In the next paragraph, we will discuss the recovery catalog as the RMAN repository for storing information, and then we will list the commands you can use in this method. Following this list, we will focus on using the control file as the RMAN repository. Oracle recommends that you store RMAN backup data in the recovery catalog database as opposed to in the control file of the target database. If you store the data in this manner, the RMAN utility will have full functionality. This recovery catalog database is another Oracle database that has special RMAN catalog tables that store metadata about backups in much the same way that a data dictionary stores data about objects in the database. When this database is used, activities such as cross checking backups and available tapes can be performed. Also, using this method, backup scripts can be created and stored in the recovery catalog database for later use. This database can also be backed up so that the information it contains is made safe. We will discuss the recovery catalog database in more detail in Chapter 13, “Recovery Catalog Creation and Maintenance.” Before we move on to discuss the second method in more detail, take a look at these commands. They are allowed only if you use the RMAN recovery catalog as the RMAN repository.
RESYNC CATALOG
RESET DATABASE
REPLACE SCRIPT
LIST INCARNATION
As mentioned earlier, the RMAN utility also enables you to connect to a target database without using this recovery catalog database. Though this approach is not recommended by Oracle, it does have its uses. (For instance, you might use this approach if the overhead of creating and maintaining the recovery catalog were too great for your organization.) If you use RMAN without the recovery catalog, you are storing most of the necessary information about each target database in the target database’s control file, which serves as the RMAN repository. Thus, you must manage the target database’s control file to support this data. The init.ora file’s CONTROL_FILE_RECORD_KEEP_TIME parameter determines how long information that can be used by RMAN is kept in the control file. The default value for this parameter is 7 days, but it can be as many as 365 days. The greater the number, the larger the control file becomes so that it can store more information. The control file can only be as large as the OS allows, so be aware of this. The information that is stored within the control file is stored in the reusable sections. These sections can grow if the value of the CONTROL_FILE_RECORD_KEEP_TIME parameter is 1 or more. The reusable sections are made up of the following categories (a sample parameter setting follows the list):
Archived log
Backup data file
Backup redo log
Copy corruption
Deleted object
Offline range
Backup corruption
Backup piece
Backup set
Data file copy
Log history
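Here is a minimal init.ora sketch of the parameter discussed above; the 30-day value is purely illustrative, and a longer retention period makes the control file grow accordingly:

# init.ora (illustrative value): keep RMAN-usable records in the control file for 30 days
CONTROL_FILE_RECORD_KEEP_TIME = 30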
Describing Channel Allocation
Channel allocation is a method you can use to connect RMAN and target databases. While you are doing this, you can also determine the type of I/O device that the server process will use to perform the backup or restore
operation. Figure 8.3 displays this example. The I/O device can be either tape or disk (in Figure 8.3, you can see both). FIGURE 8.3
Channel allocation
[Figure: From the RMAN> prompt, two server sessions are started against the target database—channel T1 allocated to disk and channel T2 allocated to “sbt-tape”—so backups can be written to either device type.]
Channels can be allocated manually or automatically. Manual channel allocation is performed any time you issue the ALLOCATE CHANNEL command, which starts a server process on the server of the target database. To manually write to a disk file system, you would use the ALLOCATE CHANNEL TYPE DISK command. Similarly, to write to a tape backup system, you would use the ALLOCATE CHANNEL TYPE ‘SBT_TAPE’ command. These are the most common manual channel allocation usages. Automatic channel allocation is performed when you set the RMAN configuration at the RMAN command prompt. You do this by using the CONFIGURE DEFAULT DEVICE or CONFIGURE DEVICE command. This type of allocation is automatic when you are executing the BACKUP, RESTORE, or DELETE commands. By using the CONFIGURE commands, you can eliminate the need to use the ALLOCATE CHANNEL TYPE DISK or ‘SBT_TAPE’ command every time you perform a BACKUP, RESTORE, or DELETE. The complete listing of automatic channel allocation commands is as follows:
CONFIGURE CHANNEL DEVICE TYPE
CONFIGURE CHANNEL n DEVICE TYPE
There are some default naming conventions for the disk and tape device types—ORA_MAINT_DISK_n and ORA_SBT_TAPE_n. The example below shows the default device type set to DISK and parallelism set to 1. This means that if you don’t allocate a channel manually, you will get the parameters listed below.

RMAN> show all;
RMAN configuration parameters are:
…
CONFIGURE DEFAULT DEVICE TYPE TO DISK; # default
CONFIGURE DEVICE TYPE DISK PARALLELISM 1; # default
RMAN>
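For contrast, a manual channel allocation overrides these defaults for the duration of a single run block. Here is a minimal sketch in the style used later in this book; the channel name and tablespace are illustrative:

RMAN> run
2> {allocate channel d1 type disk;
3> backup tablespace users;}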
Exploring the Media Management Library Interface
The media management library (MML) interface is an API that connects RMAN to different hardware vendors’ ATLs. Each tape hardware vendor that wishes to work with Oracle RMAN must supply its own MML. This is necessary because most tape hardware devices are proprietary and require different program calls. The MML is then linked in with the Oracle database kernel so that the RMAN server process and MML can read and write the Oracle data to the tape device. Figure 8.4 describes this concept.
Oracle provides a third-party media management library (MML) that is included in its software installation by default with RMAN. This MML is used with Legato Storage Manager (LSM) software, which will manage an automated tape library (ATL).
FIGURE 8.4 [Figure: RMAN passes backup data through the MML to the media management software, which writes it to tape devices in the ATL unit. The media management software is most likely on a separate server other than the target database; the software can then be centralized, allowing backup of all servers and databases within an organization.]
When you set up the MML with RMAN, you need to use OS commands to replace an existing shared library with the new media management vendor’s library. Here is a generic example of this being done in the Unix environment.

1. If an old libobk.so symbolic link already exists in $ORACLE_HOME/rdbms/lib, then remove it before installing the media manager. For example:

oracle@octilli:/oracle/product/9.0.1/rdbms/lib > rm libobk.so

2. There are two ways to access the new media management library from the vendor. Either create a symbolic link,

oracle@octilli: > ln -s /vendor/lib/oracle_lib.so $ORACLE_HOME/rdbms/lib/libobk.so

or move the library into the $ORACLE_HOME/rdbms/lib directory.

oracle@octilli: > mv /vendor/lib/oracle_lib.so $ORACLE_HOME/rdbms/lib/libobk.so
Configuring and Using the Media Manager Vendor Software

In your role as a DBA, you have configured RMAN and made sure that it is operational. You also have a backup script and method that you feel comfortable with. But you are not out of the woods yet. This is because there is often a whole new setup, configuration, usage, and testing effort associated with the software and hardware solutions that you choose to integrate with RMAN. For instance, if you interface with an ATL, you will encounter software that manages the tapes, and most likely, the RMAN backups as well. In some cases, these software solutions are simple agents running shell scripts that call RMAN scripts; in others, you may end up dealing with full-featured software applications that execute RMAN from their own GUI screens. As a result, you should make sure that you plan for the extra time you will need to configure and work with these software tools. You will also need to take into account the time you will need to work with system administrators, backup coordinators, and other database administrators to ensure that your complete backup package is properly implemented.
Connecting to RMAN without a Recovery Catalog
In order to connect to the target database in RMAN, you must set the ORACLE_SID to the appropriate target database. In this example, it is orc9. This example uses the oraenv shell script provided by Oracle with the 9i database software to change database environments. Next, you initiate the RMAN utility. Once the RMAN utility is started, issue the CONNECT TARGET command with the appropriate SYSDBA privileged account. This performs the connection to the target database.
Let’s walk through this step by step: 1. Set the ORACLE_SID to the appropriate target database that you wish
to connect to.

oracle@octilli:/oracle/product/9.0.1/bin > . oraenv
ORACLE_SID = [orc9] orc9

2. Execute the RMAN utility by typing rman and pressing Enter.

RMAN>

3. Issue the CONNECT TARGET command with the appropriate DBA privileged account.

RMAN> connect target /
connected to target database: ORC9 (DBID=3960695)
RMAN>

Here are two other methods of connecting to the target database without the recovery catalog:

oracle@octilli:~ > rman target / nocatalog

and

oracle@octilli:~ > rman target SYS/CHANGE_ON_INSTALL@tst9 NOCATALOG
Configuring the RMAN Environment
Configuring the RMAN environment consists mainly of executing the CONFIGURE command at some point while you are using the RMAN prompt.
The existing configuration can be seen by executing the SHOW ALL command, as we did in the “Describing Channel Allocation” section earlier. Let’s start by first looking at the output from the SHOW ALL command in the target database TST9.

RMAN> show all;
using target database controlfile instead of recovery catalog
RMAN configuration parameters are:
CONFIGURE RETENTION POLICY TO RECOVERY WINDOW OF 5 DAYS;
CONFIGURE BACKUP OPTIMIZATION OFF; # default
CONFIGURE DEFAULT DEVICE TYPE TO DISK; # default
CONFIGURE CONTROLFILE AUTOBACKUP OFF; # default
CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '%F'; # default
CONFIGURE DEVICE TYPE DISK PARALLELISM 1; # default
CONFIGURE DATAFILE BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
CONFIGURE ARCHIVELOG BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
CONFIGURE MAXSETSIZE TO UNLIMITED; # default
CONFIGURE SNAPSHOT CONTROLFILE NAME TO '/oracle/product/9.0.1/dbs/snapcf_tst9.f'; # default
RMAN>

Now let’s look at a specific example of what happens while you are configuring the AUTOBACKUP control file. This process begins when you execute the CONFIGURE CONTROLFILE AUTOBACKUP ON command at the RMAN prompt, as shown here:

RMAN> configure controlfile autobackup on;
new RMAN configuration parameters:
CONFIGURE CONTROLFILE AUTOBACKUP ON;
new RMAN configuration parameters are successfully stored
RMAN>
If you want to revert to the default entry for the AUTOBACKUP control file or some other configuration setting, you can use the CLEAR command, as demonstrated here:

RMAN> configure controlfile autobackup clear;
old RMAN configuration parameters:
CONFIGURE CONTROLFILE AUTOBACKUP ON;
RMAN configuration parameters are successfully reset to default value
RMAN>

There are other configuration parameters that can be used in RMAN. Each of these configuration parameters can be used in a similar way to the previous configuration example (a few sample commands follow the list below). The other configuration parameters can be broken into the following major categories:
Configuring automatic channels
Configuring the AUTOBACKUP control file
Configuring the backup retention policy
Configuring the maximum size of backup sets
Configuring backup optimization
Configuring the number of backup copies
Configuring tablespaces for exclusion from whole database backups
Configuring the snapshot control file location
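To make a few of these categories concrete, here is a minimal sketch of one command from each of three categories; the values shown (7 days, 2G, and the EXAMPLE tablespace) are illustrative only, not recommendations:

RMAN> configure retention policy to recovery window of 7 days;
RMAN> configure maxsetsize to 2G;
RMAN> configure exclude for tablespace example;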
Summary
This chapter discussed the features and capabilities of the RMAN utility. From it, you should have gotten a sense of some of the basic functions of RMAN. You should have learned how RMAN can be used with or without the recovery catalog and in what environments each practice would be most beneficial. In addition, you learned what the effects of using RMAN on the control file would be if you are not using the recovery catalog. You also
learned of manual and automatic channel allocation and explored specific examples of each. Each of the topics covered in this chapter will be covered on the test. Having an understanding of the components and features of RMAN is important for both testing and the workplace. When you know how to work with the RMAN repository, use channel allocation and the MML, and configure the RMAN environment, you will be able to make appropriate decisions when you implement and use RMAN in the workplace. This level of understanding will definitely be beneficial in the testing process.
Exam Essentials

Name the components of RMAN. The different components that make up RMAN include the GUI access through Oracle Enterprise Manager, the RMAN command-line capabilities, media management support, and the optional recovery catalog database.

Know how to configure RMAN to store backup information without a recovery catalog. You will need to configure the control file so that it will store information about backups. You would do this by setting the CONTROL_FILE_RECORD_KEEP_TIME parameter.

Understand the limitations of the target database control file as the RMAN repository. Not all commands are available to you if you use the target database’s control file as the RMAN repository rather than a recovery catalog. Be aware of these commands.

Define channel allocation. You’ll need to understand what channel allocation is and the two I/O devices it can be connected to: tape and disk. You should also know that there are two types of channel allocation: automatic and manual.

Configure the RMAN environment. Be able to use the CONFIGURE command and the parameters necessary to configure this environment. Also, be aware that you can configure the NLS settings and media management layer in the OS.
Describe the media management library interface. Be familiar with the media management library and know that it is a library created by tape hardware vendors to link RMAN with their unique hardware devices and software.

Be able to connect to RMAN without the recovery catalog. You should know how to set the database environment variable ORACLE_SID to the correct target database you would like to back up.
Key Terms
Before you take the exam, be certain you are familiar with the following terms: automated tape library (ATL)
Review Questions 1. Does the RMAN utility require the use of the recovery catalog? A. The recovery catalog is required. B. The recovery catalog is not required. C. The recovery catalog is required if it is stored in the same database
as the target database. D. The recovery catalog is not required if it is stored in the same
database as the target database. 2. What are some of the capabilities of the RMAN utility? (Choose all
that apply.) A. Backs up databases, tablespaces, data files, control files, and
archived logs B. Compresses backups C. Provides scripting capabilities D. Tests whether backups can be restored E. All of the above 3. What type of interface does the RMAN utility support? (Choose all
that apply.) A. GUI through Oracle Enterprise Manager B. Command-line interface C. Command line only D. GUI through Oracle Enterprise Manager only
4. What actions can be performed within the RMAN utility? (Choose all
that apply.) A. Start up target database. B. Shut down target database. C. Grant roles to users. D. Create user accounts. 5. The tape media management library (MML) enables RMAN to
perform which of the following? (Choose all that apply.) A. Interface with third-party tape hardware vendors. B. Use third-party automated tape libraries (ATLs). C. Write to any tape. D. Write to disk. 6. Which of the following commands is used in automatic channel
allocation? A. ALLOCATE CHANNEL C1 TYPE DISK B. ALLOCATE CHANNEL C1 TYPE ‘SBT_TAPE’ C. CONFIGURE DEFAULT DEVICE TYPE D. CONFIGURE DEFAULT TYPE DEVICE 7. Which of the following commands are examples of those used in the
manual channel allocation process? (Choose all that apply.) A. ALLOCATE CHANNEL T1 TYPE DISK B. ALLOCATE CHANNEL T1 TYPE ‘SBT_TAPE’ C. CONFIGURE DEFAULT DEVICE TYPE D. CONFIGURE DEFAULT TYPE DEVICE
8. Which of the following media management libraries are provided with
most RMAN installations? A. MMLs that support Legato Storage Manager software. B. MMLs that support Disk. C. MMLs are not supplied. D. MMLs are not necessary for proprietary tape hardware. 9. Which of the following commands would you use to set the default
parallelism for a RMAN session? A. SET DEVICE TYPE PARALLELISM B. CONFIGURE DEVICE TYPE PARALLELISM C. INSTALL DEVICE TYPE PARALLELISM D. CONFIG DEVICE TYPE PARALLELISM 10. Which of the following commands cannot be used without a recovery
catalog? (Choose all that apply.) A. RESYNCH CATALOG B. RESET DATABASE C. REPLACE SCRIPT D. LIST INCARNATION E. All of the above 11. Which of the following commands best describes manual channel
allocation? (Choose all that apply.) A. CONFIGURE DEFAULT DEVICE TYPE B. CONFIGURE CHANNEL DEVICE TYPE C. ALLOCATE CHANNEL TYPE ‘SBT_TAPE’ D. ALLOCATE CHANNEL TYPE DISK
12. Which of the following commands automatically backs up the
control file? A. SET CONTROLFILE AUTOBACKUP ON B. CONFIGURE CONTROLFILE AUTOBACKUP ON C. CONFIGURE CONTROLFILE AUTO ON D. CONFIGURE CONTROLFILE AUTOBACKUP TRUE 13. Which of the following files best describes the default media management
library for Unix? A. libobk.sl B. libobk.so C. obklib.so D. libmml.so 14. Study the following command and then choose the option that best
describes what this command does. RMAN> connect target / A. Connects to the recovery catalog B. Connects to the target database C. Connects to the target database and the recovery catalog D. Connects to neither the target database nor the recovery catalog 15. What is the maximum size for the RMAN repository if the recovery
catalog is not being used? A. Dependent on OS file size limitations B. No limits C. 2 gigabytes D. 4 gigabytes
Answers to Review Questions 1. B. The recovery catalog is optional regardless of the configuration of
the target database. It is used to store information about the backup and recovery process in much the same way that the data dictionary stores information about the database. 2. E. All answers are capabilities of the RMAN utility. 3. A, B. The RMAN utility can be run in GUI mode via the use of Oracle
Enterprise Manager or through a command-line interface on the server. 4. A, B. The RMAN utility can start and stop a target database. Database
objects and users’ accounts are not created with the RMAN utility. 5. A, B. The tape media library enables RMAN to interface with other
tape hardware vendors and use their automated tape library systems. Writing to disk and tape can still be performed by using special tape libraries. 6. C. The CONFIGURE DEFAULT DEVICE TYPE command is used during
automatic channel allocation. The CONFIGURE DEFAULT TYPE DEVICE command is incorrect and the other examples are manual channel allocation examples. 7. A, B. Both examples with ALLOCATE CHANNEL as a part of the command
are examples of manual channel allocation. The other examples, C and D, are not manual channel allocation. 8. A. The MMLs that are installed with most RMAN installations
support the Legato Storage Manager. 9. B. The CONFIGURE DEVICE TYPE PARALLELISM command will set a default level of parallelism for an RMAN session.
10. E. None of these commands can be used unless you have implemented
the recovery catalog. 11. C, D. Both of these options are methods of manual channel
allocation. SBT_TAPE supports tape allocation and DISK supports disk allocation. 12. B. The CONFIGURE CONTROLFILE AUTOBACKUP ON command will
configure the control file to be automatically backed up. 13. B. The libobk.so file is the default media management library. This
must be replaced by, or symbolically linked to, the library of whatever vendor is being used. 14. B. This command connects to the target database. 15. A. The RMAN repository is the target database control file if the
recovery catalog is not being used. The size of any file is dependent on the OS in which it resides.
User-Managed and RMAN-Based Backups

ORACLE9i: DBA FUNDAMENTALS II EXAM OBJECTIVES COVERED IN THIS CHAPTER:

Describe user-managed backup and recovery operations.
Discuss backup issues associated with read-only tablespaces.
Perform closed database backups.
Perform open database backups.
Back up the control file.
Perform cleanup after a failed online backup.
Use the DBVERIFY utility to detect corruption.
Identify types of RMAN specific backups.
Use the RMAN BACKUP command to create backup sets.
Back up the control file.
Back up the archived redo log files.
Use the RMAN COPY command to create image copies.
Exam objectives are subject to change at any time without prior notice and at Oracle’s sole discretion. Please visit Oracle’s Certification website (http://www.oracle.com/education/certification/) for the most current exam objectives listing.
A physical backup is a copy of the physical database files, and it can be performed in two ways. The first is through the Recovery Manager (RMAN) tool that Oracle provides. The second way is by performing a user-managed, or non-RMAN, backup. This chapter focuses on both types of backup. The user-managed backup has been used for years to back up the Oracle database. The OS backup script is a totally customized solution, and therefore, it has the variations and inconsistencies associated with custom solutions. This backup usually consists of an OS backup created with a scripting language, such as Korn shell or Bourne shell in the Unix environment, or batch commands in the Windows NT/2000/XP environment. Even though the user-managed backup has been historically helpful, the current trend shows that most larger database sites are now using RMAN to conduct their backups. The reason for this is RMAN’s extended capabilities and the fact that it can be used consistently regardless of platform. However, the OS backup is still useful to the DBA. You can use it to train yourself so that you understand the physical backup fundamentals. Further, many storage subsystem providers still utilize user-managed backups in conjunction with third mirror capabilities to expedite large backups in a short period of time. (A third mirror, or third mirroring equivalents, offers a way to separate a third copy of the disk mirror at the hardware level. This third copy is made into a backup that can be copied to tape.) After you have learned as much as you can from the user-managed backup, move on to RMAN, which builds on these fundamentals. In this chapter, you will learn the various physical backup methods with both user-managed backups and RMAN backups.
As mentioned, different sites customize their user-managed backups and recovery operations to suit different requirements. This customization is possible because this type of backup is generally a script written in a Unix shell or using Windows NT/2000/XP batch commands. As a result, these user-managed backups and recovery operations must be managed as custom code, and significant testing must be conducted to validate their functionality. This description shows both the benefits and drawbacks of using this type of backup. Though the ability to customize allows user-managed backups to be tailored to unique situations, such customization also demands significant testing and validation, because errors in custom code can invalidate the entire backup or recovery process. Despite such possible side effects, user-managed backups have been in use for many years in the Oracle environment. In addition to their broad usage, these backups also provide the building blocks you need to understand the entire Oracle backup and recovery process (including RMAN). This is why every DBA should be comfortable with user-managed backup and recovery operations. This knowledge will allow them to make the correct decisions in a failure situation whether the site is using user-managed or RMAN-based backup and recovery.
Working with Read-Only Tablespaces
The backup and recovery of a read-only tablespace requires unique procedures in certain situations. The backup and recovery process changes depending on the state of the tablespace at the time of backup and the time of recovery, and the state of the control file. This section discusses the implications of each of these situations. A backup of a read-only tablespace requires different procedures from those used in a backup of a read-write tablespace. The read-only tablespace is, as its name suggests, marked read-only; in other words, all write activity is
disabled. Therefore, the system change number (SCN) does not change after the tablespace has been made read-only, as long as it stays read-only. This means that after a database failure, no recovery is needed for a tablespace marked read-only if the tablespace was read-only at the time of the backup. The read-only tablespace could simply be restored and no archived logs would get applied during the recovery process. If a backup is restored that contains a tablespace that was in read-write mode at the time of the backup but is read-only at the time of failure, then a recovery would need to be performed. This is because changes would be made during read-write mode. Archived logs would be applied up until the tablespace was made read-only. The state of the control file also affects the recovery process of read-only tablespaces. During recovery of a backup control file, or recovery when there is no current control file, read-only tablespaces should be taken offline or you will get an ORA-1233 error. The control file cannot be created with a read-only tablespace online. The data file or data files associated with the read-only tablespace must be taken offline before the recovery command is issued. After the database is recovered, the read-only tablespace can be brought online. We will look at examples of the recovery of read-only tablespaces in Chapter 10, “User-Managed Complete Recovery and RMAN Complete Recovery.”
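As a minimal sketch of the backup side of this discussion (the tablespace name and file path here are hypothetical):

SQL> alter tablespace app_data read only;
SQL> ! cp /oracle/oradata/orc9/app_data01.dbf /staging/ro

Because the SCN in the data file header stops advancing once the tablespace is read-only, this single copy can be restored without applying archived logs, for as long as the tablespace remains read-only.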
Understanding Closed and Opened Database Backups
A closed backup is a backup of a database that is not in the opened state. Usually this means that the database is completely shut down. This kind of backup is also called a cold, or offline, backup. An opened backup is a backup of a database in the opened state. In this state, the database is completely available for access. An opened backup is also called a hot, or online, backup. The main implication of a closed backup is that the database is unavailable to users until the backup is complete. One of the preferable prerequisites of doing a cold backup is that the database should be shut down with the NORMAL or IMMEDIATE options so that the database is in a consistent state. In other words, the database files are stamped with the same SCN at the same point in time. This is called a consistent backup. Because no recovery information in the archived logs needs to be applied to the data files, no recovery
is necessary for a consistent, closed backup. Figure 9.1 is an example of a closed backup. FIGURE 9.1
Physical backup utilizing the cold, offline, or closed backup approach in Unix
[Figure: After SHUTDOWN NORMAL in SVRMGRL, all physical files on /Disk1 through /Disk5—archive logs, redo logs, data files, and control files—are copied (cp) to the /staging area on /Disk6, which is then written to tape with tar cvf /dev/rmt0 /staging.]
The main implication of an opened backup is that the database is available to users during the backup. The backup is accomplished with the command ALTER TABLESPACE BEGIN BACKUP, an OS copy command, and the command ALTER TABLESPACE END BACKUP. Refer back to Chapter 7, “Configuring the Database Archiving Mode,” for a more detailed discussion of hot backups. Figure 9.2 is an example of an opened backup.
FIGURE 9.2 Physical backup utilizing the hot, online, or opened backup approach in Unix
[Figure: With the database open, ALTER TABLESPACE BEGIN BACKUP is issued in SVRMGRL, all data files associated with that tablespace on /Disk1 through /Disk5 are copied (cp) to the /staging area on /Disk6, and ALTER TABLESPACE END BACKUP is issued. Copy all data files associated with each tablespace, and copy the archive logs for recovery purposes; the staging area is then written to tape with tar cvf /dev/rmt0 /staging.]
During an opened backup, the database is in an inconsistent state; in other words, the SCN information for the data files and control files is not necessarily consistent. Therefore, this is referred to as an inconsistent backup. This requires recovery of the data files by applying archived logs to bring the data files to a consistent state.
Performing Closed and Opened Database Backups
A closed backup and an opened backup are performed in a similar manner—by executing OS copy commands. The closed backup can be performed just like a standard OS backup after the database has been shut down. Basically,
the closed backup just makes a copy of all the necessary physical files that make up the database; these include the data files, online redo logs, control files, and parameter files. The opened backup is also executed by issuing OS copy commands. These commands are issued between an ALTER TABLESPACE BEGIN BACKUP command and an ALTER TABLESPACE END BACKUP command. The opened backup requires only a copy of the data files. The ALTER TABLESPACE BEGIN BACKUP command causes a checkpoint to the data file or data files in the tablespace. This causes all dirty blocks to be written to the data file, and the data-file header is stamped with the SCN consistent with those data blocks. All other changes occurring in the data file from Data Manipulation Language (DML), such as INSERT, UPDATE, and DELETE statements, get recorded in the data files and redo logs in almost the same way as during normal database operations. However, the data-file header SCN does not get updated until the ALTER TABLESPACE END BACKUP command gets executed and another checkpoint occurs. To distinguish what blocks are needed in a recovery situation, Oracle writes more information in the redo logs during the period that the ALTER TABLESPACE BEGIN BACKUP command is executed. This is one reason why data files are fundamentally different from other Oracle database files, such as redo logs, control files, and init.ora files, when it comes to backups. Redo logs, control files, and init.ora files can be copied with standard OS copy commands without performing any preparatory steps such as the ALTER TABLESPACE commands. This is completely true in the Unix environment. This is partially true in the Windows NT/2000/XP environment because of the locking that is performed on open files.
Even though hot backups can be performed at any time when the database is opened, it is a good idea to perform hot backups when there is the lowest DML activity. This will prevent excessive redo logging, which could impair database performance.
Here are the steps you need to take to perform a closed database backup: 1. Shut down the database that you want to back up. Make sure that a
SHUTDOWN NORMAL, IMMEDIATE, or TRANSACTIONAL command is used, and not a SHUTDOWN ABORT.

oracle@octilli:/opt/oracle > sqlplus /nolog

SQL*Plus: Release 9.0.1.0.0 - Production on Sat Sep 29 00:11:37 2001

(c) Copyright 2001 Oracle Corporation. All rights reserved.

SQL> connect /as sysdba
Connected.
SQL> shutdown immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.

2. Once the database is shut down, perform an OS copy of all the data
files, parameter files, and control files to a disk or a tape device. In Unix, perform the following commands to copy the data files, parameter files, and control files to a disk staging location where they await copy to tape.

cp /oracle/product/9.0.1/oradata/*.dbf /staging/cold   # data files
cp /oracle/admin/orc9/pfile/* /staging/cold            # init.ora files
cp /oracle/oradata/*.ctl /staging/cold                 # control files location 1
cp /oracle/product/9.0.1/oradata/*.ctl /staging/cold   # control files location 2
cp /oracle/oradata/orc9/*.log /staging/cold            # online redo logs group 1

3. Restart the database and proceed with normal database operations.
SQL> startup;

Here are the steps you would use to perform an opened database backup. The database is available to users during these operations, although the response time for users may be decreased depending on what tablespace is being backed up.

1. To determine all the tablespaces that make up the database, query the
V$TABLESPACE and V$DATAFILE dynamic views. The following is the SQL statement that identifies tablespaces and their associated data files.
select a.TS#, a.NAME, b.NAME
from v$tablespace a, v$datafile b
where a.TS# = b.TS#;

2. Determine all the data files that make up each tablespace.
Each tablespace can be made up of many data files. All data files associated with the tablespace need to be copied while the tablespace is in backup mode. Perform the above query by connecting to SQL*Plus.

oracle@octilli:~ > sqlplus /nolog

SQL*Plus: Release 9.0.1.0.0 - Production on Sat Sep 29 10:44:22 2001

(c) Copyright 2001 Oracle Corporation.
All rights reserved.
SQL> connect /as sysdba
Connected.
SQL>
select a.TS#,a.NAME,b.NAME from v$tablespace a, v$datafile b where a.TS#=b.TS#;
       TS# NAME       NAME
---------- ---------- --------------------------------------------------
         0 SYSTEM     /oracle/product/9.0.1/oradata/orc9/system01.dbf
         1 UNDOTBS    /oracle/product/9.0.1/oradata/orc9/undotbs01.dbf
         2 CWMLITE    /oracle/product/9.0.1/oradata/orc9/cwmlite01.dbf
         3 DRSYS      /oracle/product/9.0.1/oradata/orc9/drsys01.dbf
         4 EXAMPLE    /oracle/product/9.0.1/oradata/orc9/example01.dbf
         5 INDX       /oracle/product/9.0.1/oradata/orc9/indx01.dbf
         7 TOOLS      /oracle/product/9.0.1/oradata/orc9/tools01.dbf
         8 USERS      /oracle/product/9.0.1/oradata/orc9/users01.dbf
3. Put the tablespaces in backup mode.
SQL> alter tablespace users begin backup;

Tablespace altered.
4. Perform an OS copy of each data file associated with the tablespace in
backup mode.

SQL> ! cp /oracle/product/9.0.1/oradata/orc9/users01.dbf /staging/cold

5. End backup mode for the tablespace.
SQL> alter tablespace users end backup;

Tablespace altered.

This series of commands can be repeated for every tablespace and associated data file that makes up the database. The database must be in ARCHIVELOG mode to execute the ALTER TABLESPACE BEGIN and END BACKUP commands. Typically, this type of backup is done with a scripting language in Unix or a third-party GUI utility in the Windows environment. Using Unix shell scripts, a list of tablespaces and data files is dumped to a file listing and parsed into svrmgr and cp commands so that all tablespaces and data files are backed up together.
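Here is a minimal sketch of such a loop, assuming the query from step 1 has been spooled to a listing of tablespace/data-file pairs; all file names and paths here are illustrative:

#!/bin/ksh
# Illustrative hot-backup loop: reads "tablespace datafile" pairs from a spooled listing.
while read TS DF
do
  echo "alter tablespace $TS begin backup;" | sqlplus -s "/ as sysdba"
  cp $DF /staging/hot          # copy the data file while its tablespace is in backup mode
  echo "alter tablespace $TS end backup;" | sqlplus -s "/ as sysdba"
done < /staging/ts_df_list.txt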
During an opened or closed backup, it is a good idea to get a backup control file, all archived logs, and a copy of the parameter files. These can be packaged with data files so that all necessary or potentially necessary components for recovery are grouped together. This is called a whole database backup.
Identifying User-Managed Control-File Backups
There are two types of user-managed control-file backups. The first type is performed by executing a command that creates a binary copy of the existing control file in a new directory location. For example, the following command performs a binary copy of the control file.

SQL> alter database backup controlfile to ‘/staging/control.ctl.bak’;

The second type of control-file backup creates an ASCII copy of the current control file as a trace file in the USER_DUMP_DEST location. The USER_DUMP_DEST parameter should be set in your init.ora file. In a configuration
compliant with Optimal Flexible Architecture (OFA), this will be the udump directory. The backup of the control file can be performed by executing the following command:

SQL> alter database backup controlfile to trace;

The output of the trace file looks like this:

/oracle/admin/orc9/udump/ora_4976.trc
Oracle9i Enterprise Edition Release 9.0.1.0.0 - Production
With the Partitioning option
JServer Release 9.0.1.0.0 - Production
ORACLE_HOME = /oracle/product/9.0.1
System name:   Linux
Node name:     octilli
Release:       2.4.4-4GB
Version:       #1 Fri May 18 14:11:12 GMT 2001
Machine:       i686
Instance name: orc9
Redo thread mounted by this instance: 1
Oracle process number: 13
Unix process pid: 4976, image: oracle@octilli (TNS V1-V3)
*** SESSION ID:(8.7) 2001-09-29 00:29:20.499
*** 2001-09-29 00:29:20.499
# The following commands will create a new control file and use it
# to open the database.
# Data used by the recovery manager will be lost. Additional logs may
# be required for media recovery of offline data files. Use this
# only if the current version of all online logs are available.
STARTUP NOMOUNT
CREATE CONTROLFILE REUSE DATABASE "ORC9" NORESETLOGS ARCHIVELOG
    MAXLOGFILES 50
    MAXLOGMEMBERS 5
    MAXDATAFILES 100
    MAXINSTANCES 1
    MAXLOGHISTORY 226
LOGFILE
  GROUP 1 '/oracle/oradata/orc9/redo01.log' SIZE 100M,
  GROUP 2 '/oracle/oradata/orc9/redo02.log' SIZE 100M,
  GROUP 3 '/oracle/oradata/orc9/redo03.log' SIZE 100M
# STANDBY LOGFILE
DATAFILE
  '/oracle/product/9.0.1/oradata/orc9/system01.dbf',
  '/oracle/product/9.0.1/oradata/orc9/undotbs01.dbf',
  '/oracle/product/9.0.1/oradata/orc9/cwmlite01.dbf',
  '/oracle/product/9.0.1/oradata/orc9/drsys01.dbf',
  '/oracle/product/9.0.1/oradata/orc9/example01.dbf',
  '/oracle/product/9.0.1/oradata/orc9/indx01.dbf',
  '/oracle/product/9.0.1/oradata/orc9/tools01.dbf',
  '/oracle/product/9.0.1/oradata/orc9/users01.dbf'
CHARACTER SET WE8ISO8859P1
;
# Recovery is required if any of the datafiles are restored backups,
# or if the last shutdown was not normal or immediate.
RECOVER DATABASE
# All logs need archiving and a log switch is needed.
ALTER SYSTEM ARCHIVE LOG ALL;
# Database can now be opened normally.
ALTER DATABASE OPEN;
# Commands to add tempfiles to temporary tablespaces.
# Online tempfiles have complete space information.
# Other tempfiles may require adjustment.
ALTER TABLESPACE TEMP ADD TEMPFILE '/oracle/product/9.0.1/oradata/orc9/temp01.dbf' REUSE;
# End of tempfile additions.
The control-file backup to ASCII can be used as part of a common technique of moving production databases to test and development servers. This technique can be useful for testing backups.
Online backups, whether they are user-managed or RMAN-based, perform the same database commands. One of these commands, the ALTER TABLESPACE BEGIN BACKUP command, results in tablespaces being placed into backup mode. When the required data file(s) is completely copied, the ALTER TABLESPACE END BACKUP command is then executed. If there is a problem before the tablespace is taken out of backup mode, the tablespace may cause problems during recovery or it may lock up the next backup. If the database is shut down with a tablespace in backup mode, the database will not start without taking the associated data files out of backup mode. Sometimes RMAN may have problems if the next scheduled backup attempts to place the tablespace in backup mode when it is already in backup mode. This will cause the next RMAN backup to fail or hang while it is trying to place the tablespace in backup mode. If an online backup fails, all tablespace-associated data files should be checked to make sure that they are not in backup mode. This can be done by checking the V$BACKUP view. The following example shows that one data file is in backup mode. The tablespace associated with data file 4 should be taken out of backup mode. This tablespace should be identified and then altered out of backup mode with ALTER TABLESPACE END BACKUP. See the example below: 1. First, check to see if any data file is active; if one is, this means that the
tablespace and its associated data files are in backup mode. In this case, data file number 4 is in backup mode because the status is ACTIVE.

SQL> select * from v$backup;

     FILE# STATUS                CHANGE# TIME
---------- ------------------ ---------- ---------
         1 NOT ACTIVE                  0
         2 NOT ACTIVE                  0
         3 NOT ACTIVE                  0
         4 ACTIVE                 279174 29-SEP-01
         5 NOT ACTIVE                  0
         6 NOT ACTIVE                  0
         7 NOT ACTIVE                  0
         8 NOT ACTIVE             278815 29-SEP-01
8 rows selected.

SQL>

2. Next, find what tablespace is associated with data file 4 by executing the following SQL query. Note that data file 4 and tablespace 3 are associated with the DRSYS tablespace.

select substr(b.name,0,10) name, a.file#, a.ts#, status
from v$datafile a, v$tablespace b
where a.ts# = b.ts#
order by file#;

NAME            FILE#        TS# STATUS
---------- ---------- ---------- -------
SYSTEM              1          0 SYSTEM
UNDOTBS             2          1 ONLINE
CWMLITE             3          2 ONLINE
DRSYS               4          3 ONLINE
EXAMPLE             5          4 ONLINE
INDX                6          5 ONLINE
TOOLS               7          7 ONLINE
USERS               8          8 ONLINE

8 rows selected.

3. Next, take this tablespace out of backup mode by executing the following command. Then query the V$BACKUP view again to verify that the data file status is not active.

alter tablespace DRSYS end backup;

Tablespace altered.

SQL> select * from v$backup;

     FILE# STATUS                CHANGE# TIME
---------- ------------------ ---------- ---------
         1 NOT ACTIVE                  0
         2 NOT ACTIVE                  0
         3 NOT ACTIVE                  0
         4 NOT ACTIVE             279174 29-SEP-01
         5 NOT ACTIVE                  0
         6 NOT ACTIVE                  0
         7 NOT ACTIVE                  0
         8 NOT ACTIVE             278815 29-SEP-01

8 rows selected.
Checking the Backup before Shutdown Because RMAN backups called by Media Management vendor’s software are conducted in the background, they tend to be forgotten. Another reason these backups may be forgotten is because rather than the DBA conducting them, such backup-related tasks may be executed and controlled by the backup coordinator, who may reside within the systems administrators group. As a result, when the backup terminates for some reason, you may not know about it, unless you have good communication set up. As a result, the backup may terminate in such a way that it leaves the database partially in backup mode with some tablespaces and the associated data files still active, or it leaves jobs incomplete and hanging in the recovery catalog. What happens when the database goes down when a tablespace is in backup mode? If this happens, the data file is not checkpointed so that it is consistent with the rest of the database. Therefore, when the database is restarted, the data file is marked as inconsistent and in need of recovery. This situation can come as an unwanted surprise when you are bouncing the database for some reason. You can remedy this situation without recovery by issuing the ALTER DATABASE DATAFILE ‘’ END BACKUP command to fix this. However, this situation can be avoided in the first place if you check the V$BACKUP view to validate that it is safe to shut down the database before you do so.
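A quick sketch of that pre-shutdown check, reusing the data file from the examples above:

SQL> select file#, status from v$backup where status = 'ACTIVE';
SQL> alter database datafile '/oracle/product/9.0.1/oradata/orc9/drsys01.dbf' end backup;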
The Oracle DBVERIFY utility is executed by entering dbv at the command prompt. This utility has six parameters that can be specified at execution. The parameters are FILE, START, END, BLOCKSIZE, LOGFILE, and FEEDBACK. Table 9.1 describes these parameters. TABLE 9.1
DBVERIFY Parameters

Parameter   Description                                                Default Value for Parameter
---------   --------------------------------------------------------  -----------------------------
FILE        Data file to be verified by the utility.                   No default value
START       Starting block to begin verification.                      First block in the data file
END         Ending block to end verification.                          Last block in the data file
BLOCKSIZE   Block size of the database; this should be the same as     2048
            the init.ora parameter DB_BLOCK_SIZE.
LOGFILE     Log file to store the results of running the utility.      No default value
FEEDBACK    Displays the progress of the utility by displaying a dot   0
            for each number of blocks processed.
This help information can also be seen by executing the DBV HELP=Y command. See the following example:

oracle@octilli:/db01/oracle/tst9 > dbv help=y

DBVERIFY: Release 9.0.1.0.0 - Production on Tue Oct 9 00:06:48 2001

(c) Copyright 2001 Oracle Corporation. All rights reserved.

Keyword     Description                       (Default)
----------------------------------------------------------------
FILE        File to Verify                    (NONE)
START       Start Block                       (First Block of File)
END         End Block                         (Last Block of File)
BLOCKSIZE   Logical Block Size                (2048)
LOGFILE     Output Log                        (NONE)
FEEDBACK    Display Progress                  (0)
PARFILE     Parameter File                    (NONE)
USERID      Username/Password                 (NONE)
SEGMENT_ID  Segment ID (tsn.relfile.block)    (NONE)

oracle@octilli:/db01/oracle/tst9 >

To run the DBVERIFY utility, the BLOCKSIZE parameter must match your database block size, or the following error will result:

oracle@octilli:/db01/oracle/tst9 > dbv file=data01.dbf

DBVERIFY: Release 9.0.1.0.0 - Production on Tue Oct 9 00:12:55 2001

(c) Copyright 2001 Oracle Corporation. All rights reserved.

DBV-00103: Specified BLOCKSIZE (2048) differs from actual (8192)

oracle@octilli:/db01/oracle/tst9 >

Once the BLOCKSIZE parameter is set to match the database block size, the DBVERIFY utility can proceed. There are two ways to run this utility: without the LOGFILE parameter specified, and with it specified. Let’s walk through each of these examples. First, this is what it looks like without the LOGFILE parameter set:

oracle@octilli:/db01/oracle/tst9 > dbv file=data01.dbf BLOCKSIZE=8192
DBVERIFY - Verification complete

Total Pages Examined         : 6400
Total Pages Processed (Data) : 0
Total Pages Failing   (Data) : 0
Total Pages Processed (Index): 0
Total Pages Failing   (Index): 0
Total Pages Processed (Other): 1
Total Pages Processed (Seg)  : 0
Total Pages Failing   (Seg)  : 0
Total Pages Empty            : 6399
Total Pages Marked Corrupt   : 0
Total Pages Influx           : 0

oracle@octilli:/db01/oracle/tst9 >

The following code demonstrates the DBVERIFY utility with the LOGFILE parameter set. The results of this command are written to the file data01.log and not to the screen. This can be displayed by editing the log file.

oracle@octilli:/db01/oracle/tst9 > dbv file=data01.dbf BLOCKSIZE=8192 LOGFILE=data01.log

DBVERIFY: Release 9.0.1.0.0 - Production on Tue Oct 9 00:14:13 2001

(c) Copyright 2001 Oracle Corporation. All rights reserved.
There are three types of backups that are supported by the RMAN utility:
Full or incremental
Opened or closed
Consistent or inconsistent
Each is described in the following sections.
Full or Incremental Backups

The full and incremental backups are differentiated by how the data blocks are backed up in the target database. The full backup backs up all the data blocks in the data files, modified or not. An incremental backup backs up only the data blocks in the data files that were modified since the last incremental backup. The full backup cannot be used as part of an incremental backup strategy. The baseline backup for an incremental backup is a level 0 backup. A level 0 backup is a full backup at that point in time. Thus, all blocks, modified or not, are backed up, allowing the level 0 backup to serve as a baseline for future incremental backups. The incremental backups can then be applied with the baseline, or level 0, backup to form a full backup at some time in the future. The benefit of the incremental backup is that it is quicker, because not all data blocks need to be backed up. There are two types of incremental backups: differential and cumulative, both of which back up only modified blocks. The difference between these two types of incremental backups is in the baseline database used to identify the modified blocks that need to be backed up. The differential incremental backup backs up only data blocks modified since the most recent backup at the same level or lower. A differential incremental backup will determine which level 1 or level 2 backup has occurred most recently and back up only blocks that have changed since that backup. The differential incremental backup is the default incremental backup. The cumulative incremental backup backs up only the data blocks that have changed since the most recent backup of the next lowest level—n – 1 or lower (with n being the existing level of backup). For example, if you are performing a level 2 cumulative incremental backup, the backup will copy data
blocks only from the most recent level 1 backup. If no level 1 backup is available, then it will back up all data blocks that have changed since the most recent level 0 backup.
A full backup does not mean that the whole or complete database was backed up; full refers to how the blocks in the chosen files are copied. In other words, a full backup can cover only part of the database rather than all data files, control files, and logs.
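To make the incremental levels concrete, here is a minimal sketch of how the backups described above might be taken; the channel name and the choice of level 2 are illustrative, not prescribed by the text:

run {
allocate channel ch1 type disk;
# level 0 baseline: copies all used blocks
backup incremental level 0 database;
}

run {
allocate channel ch1 type disk;
# differential level 2 (the default): copies blocks changed since the
# most recent level 2 or lower backup
backup incremental level 2 database;
}

run {
allocate channel ch1 type disk;
# cumulative level 2: copies blocks changed since the most recent
# level 1 or lower backup
backup incremental level 2 cumulative database;
}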
Opened or Closed Backups

The opened and closed backups are differentiated by the state of the target database being backed up. An RMAN opened backup occurs when the target database is backed up while it is opened, or available for use. This is similar to the non-RMAN hot backup that was demonstrated earlier in this chapter. An RMAN closed backup occurs when the target database is mounted but not opened. This means the target database is not available for use during this type of backup. This is similar to the non-RMAN cold backup that was demonstrated earlier in this chapter.
Consistent or Inconsistent Backups

The consistent and inconsistent backups are differentiated by the state of the SCN in the data file headers and in the control files. A consistent backup is a backup of a target database that is mounted but not opened and that was shut down with the SHUTDOWN IMMEDIATE, SHUTDOWN TRANSACTIONAL, or SHUTDOWN NORMAL option, but not the SHUTDOWN ABORT option. Also, the database must not have crashed prior to being mounted. This means that the SCN information in the data files matches the SCN information in the control files. An inconsistent backup is a backup of the target database when it is opened, when it crashed prior to being mounted, or when it was shut down with the SHUTDOWN ABORT option prior to being mounted. In these cases, the SCN information in the data files does not match the SCN information in the control files.
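As a rough sketch of how a consistent backup might be taken, assuming RMAN's default disk channel configuration, the sequence could look like this:

RMAN> shutdown immediate;
RMAN> startup mount;
RMAN> backup database;

Because the database was closed cleanly before being mounted, the SCNs in the data file headers agree with the control file, so the resulting backup is consistent.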
Using RMAN’s BACKUP and COPY Commands
There are two main backup sources that can be the basis for the RMAN recovery process: image copies and backup sets.
Image copies are actual copies of the database files, archived logs, or control files, and they are not stored in a special RMAN format—they can be stored only on disk. An image copy in RMAN is equivalent to an OS copy command such as cp or dd in Unix, or the COPY command in Windows NT/2000/XP. Thus, no RMAN restore processing is necessary to make image copies usable in a recovery situation. This can improve the speed and efficiency of the restore and recovery process in some cases. An image copy is performed by executing the RMAN COPY command. On the other hand, database files in backup sets are stored in a special RMAN format and must be processed with the RESTORE command before these files are usable. This can take more time and effort during the recovery process. Let’s take a look at an example of using the BACKUP command, and then we will look at the RMAN COPY command in more detail.
Using RMAN BACKUP to Create Sets

The RMAN BACKUP command is used to perform a backup that creates a backup set. When you are using the BACKUP command, the target database should be mounted or opened. You must manually allocate a channel for the BACKUP command to use during the backup process. Below is an example of this command in action. In this example, you are backing up the USERS tablespace and the current control file to a backup set.

oracle@octilli:/oracle/product/9.0.1/bin > rman
Recovery Manager: Release 9.0.1.0.0 - Production
(c) Copyright 2001 Oracle Corporation.  All rights reserved.

RMAN> connect target
connected to target database: ORC9 (DBID=3960695)
RMAN> run
2> {allocate channel ch1 type disk;
3> backup tablespace users
4> include current controlfile;}
using target database controlfile instead of recovery catalog
allocated channel: ch1
channel ch1: sid=12 devtype=DISK
Starting backup at 30-SEP-01
channel ch1: starting full datafile backupset
channel ch1: specifying datafile(s) in backupset
input datafile fno=00008 name=/oracle/product/9.0.1/oradata/orc9/users01.dbf
including current controlfile in backupset
channel ch1: starting piece 1 at 30-SEP-01
channel ch1: finished piece 1 at 30-SEP-01
piece handle=/oracle/product/9.0.1/dbs/01d5bspm_1_1 comment=NONE
channel ch1: backup set complete, elapsed time: 00:00:08
Finished backup at 30-SEP-01
released channel: ch1
You cannot include archived logs and data files in a single backup. In other words, you will need to use the BACKUP command for the database or tablespace backup, and you will need to use it again for archived logs.
The BACKUP command has multiple options that can be specified. These options control performance, formatting, file sizes, and types of backups, to mention a few. Tables 9.2 and 9.3 describe the BACKUP command's options and formats.

TABLE 9.2    BACKUP Command Options

Option: FULL
Description: Causes the server session to copy all used blocks from data files into the backup set. The only blocks that do not get copied are blocks that have never been used. All archived log and redo log blocks are copied when the archived logs are designated for backup.

Option: INCREMENTAL LEVEL INTEGER
Description: Causes the server session to copy data blocks that have been modified since the last incremental n backup, where n is any integer from 1 to 4.

Option: FILESPERSET INTEGER
Description: Determines how many files are in a backup set. When this option is used, the number of data files is compared to the number of files being backed up per allocated channel, and RMAN takes the lower of the two. Using this option is another method for performing parallel backups.

Option: DISKRATIO INTEGER
Description: Forces RMAN to group data files in backup sets that are spread across a determined number of disk drives.

Option: SKIP OFFLINE | READONLY | INACCESSIBLE
Description: Excludes certain data files or archived redo logs from the backup set: offline data files, read-only data files, or inaccessible data files and archived logs.

Option: MAXSETSIZE INTEGER
Description: Specifies the maximum size of the backup set. Bytes are the default unit of measure, but kilobytes (K), megabytes (M), and gigabytes (G) can also be used.

Option: DELETE INPUT
Description: Deletes input files when the backup set has been created. This option should be used only when backing up archived logs, data file copies, or backup sets. It is equivalent to using the CHANGE ... DELETE command for all input files.

Option: INCLUDE CURRENT CONTROLFILE
Description: Creates a copy of the current control file and places it into each backup set.
TABLE 9.3    BACKUP Command Formats

Format: %c
Description: Specifies the copy number of the backup piece within the set of duplexed backup pieces.

Format: %n
Description: Specifies the database name, padded on the right to a length of 8 characters.

Format: %p
Description: Specifies the backup piece number within the backup set. This number starts at 1 and increases by 1 for each backup piece created.

Format: %s
Description: Specifies the backup set number. This number starts at 1 and increases by 1 for each backup set created.

Format: %t
Description: Specifies the backup set time stamp. The combination of %s and %t can be used to form a unique name for a backup set.

Format: %u
Description: Specifies an 8-character name that combines a compressed version of the backup set number and the time the backup set was created.

Format: %U
Description: This format parameter is equivalent to %u_%p_%c.
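As an illustration of combining several of these options and formats, a backup command might look like the following sketch; the channel name, the file and set limits, and the format string are arbitrary choices rather than requirements:

run {
allocate channel ch1 type disk;
# at most 4 files per backup set, each set no larger than 500MB;
# name each piece after the database, backup set, and piece numbers
backup full
filesperset 4
maxsetsize 500M
format 'db_%d_s%s_p%p'
(database);
}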
Using RMAN COPY to Create Image Copies

In this example, we will utilize the RMAN COPY command to create image copies of the system data file and the current control file in the /staging directory.

RMAN> run { allocate channel ch1 type disk;
2> copy
3> datafile 1 to '/staging/system01.dbf' ,
4> current controlfile to '/staging/control.ctl';}
allocated channel: ch1
channel ch1: sid=12 devtype=DISK
Starting copy at 30-SEP-01
channel ch1: copied datafile 1
output filename=/staging/system01.dbf recid=1 stamp=441840852
channel ch1: copied current controlfile
output filename=/staging/control.ctl
Finished copy at 30-SEP-01
released channel: ch1
Backing Up the Control File
A control file backup can be performed through the RMAN utility by executing the BACKUP command with the CURRENT CONTROLFILE option. Below is a brief example of using this command within the RMAN utility after connecting to the appropriate target database and RMAN catalog database.
In the following example, note that the TAG option is used to name the backed-up control file controlfile_thurs within the recovery catalog. As you might guess from this name, the TAG option is used to assign a meaningful, logical name to backups or image copies. By performing this task, it allows you to find backups more easily in LIST output. The tag name can also be used in SWITCH and RESTORE commands.
RMAN> run {
2> allocate channel ch1 type disk;
3> backup
4> format 'cf_t%t_s%s_p%p'
5> tag controlfile_thurs
6> (current controlfile);
7> release channel ch1;
8> }

allocated channel: ch1
channel ch1: sid=11 devtype=DISK
Starting backup at 05-OCT-01
channel ch1: starting full datafile backupset
channel ch1: specifying datafile(s) in backupset
including current controlfile in backupset
channel ch1: starting piece 1 at 05-OCT-01
channel ch1: finished piece 1 at 05-OCT-01
piece handle=/oracle/product/9.0.1/dbs/cf_t442286312_s7_p1 comment=NONE
channel ch1: backup set complete, elapsed time: 00:00:02
Finished backup at 05-OCT-01
released channel: ch1
RMAN>
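Once a backup is tagged, the tag can be referenced later in place of a file name. For instance, a restore of the control file from that specific backup might look like the following sketch (the instance would need to be started NOMOUNT before the control file itself could be restored):

RMAN> run {
2> allocate channel ch1 type disk;
3> restore controlfile from tag 'controlfile_thurs';
4> }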
Backing Up the Archived Redo Logs
Another important part of a complete database backup is including the archived logs. Without the archived redo logs, you cannot roll forward from online backups. In the following example, the first activity performed is backing up the database. Next, all the redo logs that can be archived are flushed to archived logs in the filesystem. After that, the archived logs are backed up and recorded in the RMAN catalog by using the BACKUP ARCHIVELOG ALL command, which backs up all available archived logs in the filesystem. Below is a brief example of this command as it is used in a complete database backup.

run {
allocate channel c1 type disk;
allocate channel c2 type disk;
backup database;
backup (archivelog all);
}
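The example above relies on the redo logs being flushed before the archived logs are backed up. One way to sketch that step explicitly inside the run block, assuming the database is in ARCHIVELOG mode, is to issue a log switch with RMAN's SQL command:

run {
allocate channel c1 type disk;
backup database;
# force a log switch so the most recent redo is archived
sql "alter system archive log current";
backup (archivelog all);
}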
Summary
User-managed backups have been commonplace at Oracle sites for years. This type of backup mainly consists of OS-based commands, such as those issued from Korn or Bourne shells in the Unix environment. In the Windows environment, user-managed backups are available through the use of third-party GUI tools and batch commands. More recently, RMAN backups have become popular at many sites (since the release of RMAN in Oracle8).

The fundamentals of backing up an Oracle database can be seen clearly when you analyze an OS-based backup. Some of these basic techniques are utilized in the hot, or opened, backup examples, and some are used in the cold, or closed, examples. If you have a solid understanding of these concepts, you should fully understand the backup process with RMAN-based backups. RMAN-based backups can perform the same functions as user-managed backups, but RMAN backups tend to be more standardized as a result of their use of a common command set. In this chapter, we demonstrated some of the commands involved in performing various backup operations within RMAN.

User-managed and RMAN-based backups are the two primary methods of backing up databases. You will be responsible for at least one of these types of backups, if not both, in a real-world DBA job. Understanding how to perform user-managed backup and recovery is a necessary prerequisite for understanding RMAN-based backup and recovery. This knowledge will help you understand what RMAN is trying to do and improve upon.
Exam Essentials

Understand user-managed backup and recovery. User-managed backup and recovery is a traditional process that uses OS and database commands to back up and recover the database.

Know the backup issues that are associated with read-only tablespaces. Read-only tablespaces have special requirements for backup. If the tablespace was read-only at the time of the backup, no recovery is needed, but if it was read-write, then recovery is needed.

Identify the differences between open and closed backups. A closed backup is taken when the database is offline or has been shut down using any mode other than SHUTDOWN ABORT. In an open backup, the database is online or available during the backup process. You should also be familiar with the database commands that must be performed against the opened database before data files can be copied.

Identify the different types of user-managed control file backups. The two formats of control file backups are binary format and ASCII format.

Understand what needs to be cleaned up after an online backup fails. The V$BACKUP view can be used to see the status of the data files after a failed online backup. The V$BACKUP view shows whether a data file is ACTIVE or INACTIVE in the backup process.

Understand how the DBVERIFY utility works to detect corruption. The DBVERIFY utility works on online and offline data files to identify corrupted blocks.

Identify the types of RMAN-specific backups. The three pairs of RMAN backup types are full/incremental, open/closed, and consistent/inconsistent.

Understand the difference between backup sets and image copies. Backup sets are backups that are stored in a specific RMAN format; image copies are backups of actual database files in the same format as the OS. You should also be familiar with the BACKUP command options and formats.

Know how to use RMAN commands to back up control files and archived redo logs. You should know how to use all of the necessary RMAN commands with the proper syntax to back up control files and archived logs.
Key Terms
Before you take the exam, be certain you are familiar with the following terms:

BACKUP
Review Questions

1. Which type of backup most closely represents an opened backup? (Choose all that apply.)
A. Online backup
B. Offline backup
C. Hot backup
D. Cold backup

2. In the event of a database failure that requires a full database restore, how would you perform a recovery for a read-only tablespace? (Choose all that apply.)
A. No recovery is necessary, if the restored copy was made when the tablespace was read-only.
B. You would recover by applying redo log entries to the read-only tablespace, regardless of the state of the tablespace copy.
C. You would recover by applying redo log entries to the read-only tablespace if the tablespace was in read-write mode at the time of the backup used for the restore.
D. You would recover by applying redo log entries to the read-only tablespace if the tablespace was in read-only mode at the time of the backup used for the restore.

3. What type of backup is consistent?
A. Online backup
B. Opened backup
C. Hot backup
D. Cold backup

4. What type of backup is inconsistent? (Choose all that apply.)
A. Cold backup
B. Online backup
C. Opened backup
D. Closed backup

5. If a read-only tablespace is restored from a backup taken when the data file was read-only, what type of recovery is necessary?
A. Data file recovery
B. Tablespace recovery
C. Database recovery
D. No recovery is needed.

6. What are valid ways to back up a control file while the database is running? (Choose all that apply.)
A. Back up to trace file
B. Back up to binary control file
C. OS copy to tape
D. Back up to restore file

7. Which of the following statements is true about a user-managed backup?
A. This type of backup is conducted using the RMAN utility.
B. A user-managed backup can be customized using a combination of OS and database commands.
C. A user-managed backup is a new type of backup in Oracle9i.
D. A user-managed backup is one of the backup options within RMAN.

8. If a tablespace was backed up shortly after it was made read-only, what would need to be done with archived logs during a recovery of that tablespace?
A. All archived logs would need to be applied.
B. Only the archived logs that were added after the backup would need to be applied.
C. Only the archived logs that were added before the backup would need to be applied.
D. No archived logs would need to be applied.

9. Which of the following is a true statement about an RMAN image copy?
A. It can be backed up to tape or disk.
B. It can be backed up to disk only.
C. It can be backed up to tape only.
D. It can be copied to tape only.

10. A cold backup requires the database to be in what condition? (Choose all that apply.)
A. ARCHIVELOG mode
B. NOARCHIVELOG mode
C. The database must be started.
D. The database cannot contain any read-only tablespaces.

11. To perform an open or hot backup, what state must the database be in?
A. ARCHIVELOG mode
B. NOARCHIVELOG mode
C. Shutdown
D. Automatic archiving must be enabled.

12. Hot backups are best run when what is occurring in the database?
A. Heavy DML activity
B. Heavy batch processing
C. The database is being shut down.
D. Low DML activity

13. What method can be used to clean up a failed online or hot backup?
A. Shutting down the database
B. Querying V$TABLESPACE
C. Querying V$BACKUP
D. Querying V$DATAFILE

14. What utility can be used to check to see if a data file has block corruption?
A. DBVALIDATE
B. DBVERIFY
C. DBVERIFIED
D. DBVALID

15. Which of the following are types of RMAN backups? (Choose all that apply.)
A. Open and closed backups
B. Full and incremental backups
C. Consistent and inconsistent backups
D. Control file backups
Answers to Review Questions

1. A, C. An opened backup is performed when the database is opened or available for access (online). Hot backup and online backup are synonymous.

2. A, C. In a read-only tablespace, the SCN doesn't change, or if it does, none of the changes get applied. So if the backup of the tablespace was taken when the tablespace was read-only, no recovery would be necessary. On the other hand, if the backup was taken when the tablespace was read-write, then redo logs would need to be applied. The redo logs in this case would also contain the command that puts the tablespace into read-only mode.

3. D. A cold backup ensures that all the SCNs in the data files are consistent for a single point in time.

4. B, C. Opened and online backups both back up the data files with different SCNs in the headers, which makes recovery necessary during a restore operation.

5. D. No recovery is needed because the tablespace was read-only during the backup and at the time of failure.

6. A, B. Backing up both to a trace file and to a binary control file are valid backups of the control file. The other options are made up.

7. B. A user-managed backup is a customizable backup that uses OS and database commands, and it is usually written in some sort of native scripting language.

8. D. No archived logs would need to be applied because the tablespace was backed up after it was made read-only.

9. B. An image copy can be backed up only to disk.

10. A, B. A cold backup occurs when the database is shut down. The database can be in ARCHIVELOG mode or NOARCHIVELOG mode.

11. A. The database must be in ARCHIVELOG mode. Archiving can be set to manual or automatic.

12. D. More transactional activity gets written to the redo logs when a tablespace is in backup mode. It is a good idea to perform hot backups when you have the lowest transactional activity.

13. C. It is a good idea to query V$BACKUP to check whether any data files are being actively backed up. If they are, you can execute ALTER TABLESPACE ... END BACKUP to change the status from ACTIVE to INACTIVE.

14. B. The DBVERIFY utility is used to check whether or not a data file has any block corruption.

15. A, B, C. Open and closed, full and incremental, and consistent and inconsistent backups are the different types of RMAN backups.
User-Managed Complete Recovery and RMAN Complete Recovery

ORACLE9i: DBA FUNDAMENTALS II EXAM OBJECTIVES COVERED IN THIS CHAPTER:

Describe media recovery.

Perform recovery in Noarchivelog mode.

Perform complete recovery in Archivelog mode.

Restore datafiles to different locations.

Relocate and recover a tablespace by using archived redo log files.

Describe read-only tablespace recovery.

Describe the use of RMAN for restoration and recovery.
Exam objectives are subject to change at any time without prior notice and at Oracle's sole discretion. Please visit Oracle's Certification website (http://www.oracle.com/education/certification/) for the most current exam objectives listing.
In this chapter, we will focus on media failures and how to recover from them. There are two methods that can be used to recover from media failures: user-managed recovery and RMAN-based recovery. This chapter uses examples to demonstrate the differences between each type. As we have discussed in previous chapters, the mode you decide to operate in, whether ARCHIVELOG mode or NOARCHIVELOG mode, determines the recovery options that you can perform. This chapter covers these options, and the modes in which you operate, in further detail. You will also work through examples of both ARCHIVELOG mode and NOARCHIVELOG mode media recoveries using both user-managed and RMAN methods of recovery. In addition to this detailed discussion of recovery methods, you will look at recovery situations in which the relocation of files is required, and you will learn how to handle read-only tablespace recovery in different situations.

Media recoveries are critical tasks both in testing and in the workplace. How a media recovery situation is handled depends on the DBA performing the recovery. You can improve your ability to perform such recoveries by testing various media recovery scenarios so that you approach them with a degree of confidence. As a result of this practice, when you need to perform a media recovery, your uncertainties will be significantly reduced. Testing media recovery situations will also prepare you for the real-life situations that you will experience as a DBA.
Defining Media Recovery
Media recovery is the type of recovery used when any currently used data file, control file, or online redo log file becomes unavailable. The data file or control file may become unavailable for a number of reasons: it may have been lost, deleted, or moved from its original location, or it may have been damaged by data corruption or a hardware failure. All of these situations leave the Oracle database unable to read or write to the file. When a situation requiring media recovery occurs, the DBA must restore the unavailable file or files. If the database is in ARCHIVELOG mode, you must then recover these files by applying archived logs to the restored files. This makes the restored files as current as the rest of the database files.
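One way to see which data files the database considers in need of media recovery is to query the V$RECOVER_FILE view. The following check is a minimal sketch; the output row shown is hypothetical:

SQL> select file#, online, error from v$recover_file;

     FILE# ONLINE  ERROR
---------- ------- ------------------
         4 OFFLINE FILE NOT FOUND

The FILE# column can be joined to V$DATAFILE to translate the file number into a file name.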
Recovering Using NOARCHIVELOG and ARCHIVELOG Modes
One of the most significant backup and recovery decisions a DBA can make is whether to operate in ARCHIVELOG mode or NOARCHIVELOG mode. The outcome of this decision dramatically affects the backup and recovery options available. When the database is in ARCHIVELOG mode, it generates historical changes in the form of offline redo logs, or archived logs. That is, the database doesn't write over the online redo logs until a copy is made, and this copy is called an offline redo log, or archived log. These logs can be applied to backups of the data files to recover the database up to the point of a failure. Figure 10.1 illustrates complete recovery in ARCHIVELOG mode.

When the database is in NOARCHIVELOG mode, it does not generate historical changes; as a result, there is no archive logging. In this mode, the database writes over the online redo logs without creating an archived log. Thus, no historical information is generated and saved for later use. Figure 10.2 illustrates recovery in NOARCHIVELOG mode. Even though this is called a complete recovery, the recovered database will be missing transactions because no archived logs are generated that could be applied in the recovery process.
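To confirm which mode a given database is running in, you can use the SQL*Plus ARCHIVE LOG LIST command; the values below are illustrative:

SQL> archive log list
Database log mode              Archive Mode
Automatic archival             Enabled
Archive destination            /oracle/admin/tst9/arch
Oldest online log sequence     84
Next log sequence to archive   86
Current log sequence           86

A database in NOARCHIVELOG mode would report No Archive Mode on the log mode line.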
A significant type of failure is a media failure because, in most cases, it requires that you restore all or some of the data files and apply the redo logs during recovery. As discussed in Chapter 6, "Instance and Media Recovery Structures," media failure occurs when a database file cannot be accessed for some reason. The usual reason is a disk crash or controller failure.
The Dangers of Media Failures and Recovery

A media failure is generally considered the most dangerous type of failure, and it is also the trickiest to recover from. The severity of this type of failure may vary from a lost or accidentally removed data file to a severe hardware failure. No matter what type of media failure the DBA is handling, they must devote more analysis and thought to a media failure than they would to most other failure situations, such as those associated with instance recovery or basic user error. In fact, in certain situations, a severe hardware failure could cause a significant amount of the physical database to be relocated to a new filesystem. And in some cases, new filesystems may need to be re-created and properly configured for striping, load, and redundancy. This can be a difficult task to perform when the database is down, especially if minimal downtime cannot be tolerated by the users. In less demanding environments, the database may remain unavailable for an extended period of time.

A case in point is a small nonprofit organization that lost a disk controller for its financial database application. As in many small companies, this organization was concerned about keeping IT costs to a minimum. As a result, most resources that had been purchased were in use, such as servers and disk space. Another result of this policy was that extra capacity, such as multiple controllers to disk arrays, was not always purchased. Because of this, when the disk controller was lost, the financial instance was unavailable until a new disk controller could be purchased, delivered, and installed in the disk storage array. The application was unavailable for one business day until the hardware was successfully installed and the database was successfully restored and recovered.
Media failure requires database recovery. If the database is in ARCHIVELOG mode, complete recovery can be performed. This means that a backup can be restored to the affected filesystem, and archived logs can be applied up to the point of failure. Thus, no data is lost. If the database is in NOARCHIVELOG mode, however, a complete recovery cannot be performed without some transactions being lost. A backup can be restored to the affected filesystem, but there won't be any archived logs or historical changes saved. Thus, the database will contain only the transactions that were present at the time of the backup. If backups were scheduled every night, the business would lose only one day's worth of transactions. If backups were scheduled weekly, on the other hand, the business would lose one week's worth of transactions. The end result is that, in almost all cases, some data will be lost. This is a complete recovery of the database, but the database does not contain all the transactions up to the failure point. The end result is similar to an incomplete recovery, which will be covered in Chapter 11, "User-Managed and RMAN-Based Incomplete Recovery." The differences are that an incomplete recovery is intentionally stopped before all the transactions are applied to the database, and an incomplete recovery requires the database to be in ARCHIVELOG mode and some other specific actions to be performed.
Performing User-Managed Recovery in NOARCHIVELOG Mode

This is an example of a user-managed recovery when the database is in NOARCHIVELOG mode. In this case, the database cannot be completely recovered. The database is available all day during the week. Every Saturday, the database is shut down, and a complete, cold backup (offline backup) is performed. The database is restarted when this activity is completed.
Diagnosing the Failure

On Wednesday morning, there is a lost or deleted data file in the database. The error received upon startup is as follows:

SQL> startup
ORACLE instance started.
Total System Global Area   19504528 bytes
Fixed Size                    64912 bytes
Variable Size              16908288 bytes
Database Buffers            2457600 bytes
Redo Buffers                  73728 bytes
Database mounted.
ORA-01157: cannot identify/lock data file 4 - see DBWR trace file
ORA-01110: data file 4: '/db01/ORACLE/tst9/users01.dbf'

Because you are operating in NOARCHIVELOG mode, you must perform a full database restore from the previous weekend's backup. You cannot perform a tablespace or data file recovery in NOARCHIVELOG mode because you have no ability to roll historical changes forward. There are no archived logs to apply to the data file to make it current with the rest of the database. Data entered into the database on Monday, Tuesday, and Wednesday is lost and must be reentered, if possible. To perform the database restore, you will need to copy all the data files, online redo logs, and control files from the last Saturday backup back to their original locations.
Step-by-Step Recovery

To recover a lost data file when you are operating in NOARCHIVELOG mode, take the following steps:

1. Perform a cold backup of the database to simulate the Saturday backup. The following is a sample script, which performs a cold backup by shutting down the database and copying the necessary data files, redo logs, and control files.

# User-managed backup script
# Cold backup script for tst9
#
echo ''
echo 'starting cold backup...'
echo ''
# Script to stop database!
./stopdb_tst9.sh
echo ''
echo 'tst9 shutdown...'
echo ''
echo 'clean up last backup in staging directory'
rm /staging/cold/tst9/*
echo ''
echo 'copying files to staging...'
echo ''
cp /db01/oracle/tst9/* /staging/cold/tst9/.
cp /db02/oracle/tst9/* /staging/cold/tst9/.
cp /oracle/admin/tst9/arch/* /staging/cold/tst9/.
echo ''
echo 'tst9 starting up........'
echo ''
# Script to startup database!
./startdb_tst9.sh

2. Validate that the user TEST's objects exist in the USERS tablespace.
This is the tablespace that you will remove to simulate a lost or deleted data file.

SQL> select username, default_tablespace, temporary_tablespace
  2  from dba_users;

USERNAME         DEFAULT_TABLESPACE   TEMPORARY_TABLESPACE
---------------- -------------------- --------------------
SYS              SYSTEM               TEMP
SYSTEM           TOOLS                TEMP
OUTLN            SYSTEM               SYSTEM
DBSNMP           SYSTEM               SYSTEM
TEST             USERS                TEMP

5 rows selected.

SQL>

3. Create a table and insert data to simulate data being entered after Saturday's cold backup. This is the data that would be entered on Monday through Wednesday, before the failure, but after the cold backup. The user TEST was created before the cold backup with a default tablespace of USERS. The account has connect and resource privileges.

SQL> connect test/test
SQL> create table t1 (c1 number, c2 char (50));
Statement processed.
SQL> insert into t1 values (1, 'This is a test!');
1 row processed.
SQL> commit;
Statement processed.
SQL>

4. Verify the data file location of the USERS tablespace. Then remove or delete this file.

SQL> select name from v$datafile;
NAME
-----------------------------------------
/db01/ORACLE/tst9/system01.dbf
/db01/ORACLE/tst9/rbs01.dbf
/db01/ORACLE/tst9/temp01.dbf
/db01/ORACLE/tst9/users01.dbf
/db01/ORACLE/tst9/tools01.dbf
/db01/ORACLE/tst9/data01.dbf
/db01/ORACLE/tst9/indx01.dbf
7 rows selected.

SQL> ! rm /db01/ORACLE/tst9/users01.dbf

5. Start the database and verify that the "cannot identify/lock data file" error occurs.

[oracle@DS-LINUX tst9]$ sqlplus /nolog
SQL*Plus: Release 9.0.1.0.0 - Production on Thu Nov 1 21:04:10 2001
(c) Copyright 2001 Oracle Corporation.  All rights reserved.

SQL> connect / as sysdba
Connected.
SQL> startup
ORACLE instance started.
Total System Global Area   19504528 bytes
Fixed Size                    64912 bytes
Variable Size              16908288 bytes
Database Buffers            2457600 bytes
Redo Buffers                  73728 bytes
Database mounted.
ORA-01157: cannot identify/lock data file 4 - see DBWR trace file
ORA-01110: data file 4: '/db01/ORACLE/tst9/users01.dbf'
6. Shut down the database to perform a complete database restore. The database must be shut down to restore a cold backup.

SQL> shutdown
ORA-01109: database not open
Database dismounted.
ORACLE instance shut down.
SQL>

7. Perform a complete database restore by copying all data files, redo logs, and control files to their original locations.

cp /staging/cold/tst9/* /db01/ORACLE/tst9

8. Start the database and check to see whether the data entered after the cold backup is there. When you do, you will see that Table t1 and the data do not exist. All data entered after the last backup will have to be reentered.

[oracle@DS-LINUX backup]$ sqlplus /nolog
SQL*Plus: Release 9.0.1.0.0 - Production on Thu Nov 1 21:04:10 2001
(c) Copyright 2001 Oracle Corporation.  All rights reserved.

SQL> connect test/test
Connected.
SQL> select * from t1;
select * from t1
              *
ORA-00942: table or view does not exist
SQL>
Conclusions

The most notable observation about this scenario is that when the database is in NOARCHIVELOG mode, data is lost. All data entered after the backup, but before the failure, is lost and must be reentered. To restore the database, you have to shut it down. Furthermore, you must restore the whole database instead of just the one data file that was lost or removed, which can increase the recovery time.
Performing User-Managed Complete Recovery in ARCHIVELOG Mode

In this example, the database is completely recovered because it is in ARCHIVELOG mode. This database is available 24 hours a day, 7 days a week, with the exception of scheduled maintenance periods. Every morning at 1 A.M., a hot backup is performed. The data files, archived logs, control files, backup control files, and init.ora files are copied to a staging directory, and from there, they are then copied to tape. The copy also remains on disk until the next morning, when the hot backup runs again. This allows for quick access in the event of a failure. When the backup runs again, the staging directory is purged and rewritten.
Diagnosing the Failure

On Wednesday morning, there is a lost or deleted data file in the database. The error received upon startup is as follows:

SQL> startup
ORACLE instance started.
Total System Global Area   19504528 bytes
Fixed Size                    64912 bytes
Variable Size              16908288 bytes
Database Buffers            2457600 bytes
Redo Buffers                  73728 bytes
Database mounted.
ORA-01157: cannot identify/lock data file 4 - see DBWR trace file
ORA-01110: data file 4: '/db01/ORACLE/tst9/users01.dbf'
In this case, you are operating in ARCHIVELOG mode, so you only need to replace the damaged or lost file: /db01/oracle/tst9/users01.dbf. Then, with the database open, the archived logs can be applied to the database. Applying the archived logs reapplies all the changes to the restored data file; therefore, no data will be lost.
Step-by-Step Recovery

To recover the lost data file, take these steps:

1. Connect to user TEST and enter data in Table t1 in the tablespace USERS, which consists of the data file users01.dbf. This will simulate the data that is in the hot backup of the USERS tablespace.

[oracle@DS-LINUX backup]$ sqlplus /nolog
SQL*Plus: Release 9.0.1.0.0 - Production on Thu Nov 1 21:04:10 2001
(c) Copyright 2001 Oracle Corporation.  All rights reserved.

SQL> connect test/test
Connected.
SQL> insert into t1 values (1,'This is test one before hot backup');
1 row processed.
SQL> commit;
Statement processed.
SQL> connect / as sysdba
SQL> select username, default_tablespace from
  2> dba_users where username = 'TEST';
USERNAME                   DEFAULT_TABLESPACE
-------------------------- --------------------
TEST                       USERS
1 row selected.

2. Perform a hot backup of the USERS tablespace by placing it in backup mode. Proceed to copy the data file users01.dbf to a staging directory. Then, end the backup of the USERS tablespace.

[oracle@DS-LINUX backup]$ sqlplus /nolog
SQL*Plus: Release 9.0.1.0.0 - Production on Thu Nov 1 21:04:10 2001
(c) Copyright 2001 Oracle Corporation.  All rights reserved.

SQL> connect /as sysdba
Connected.
SQL> alter tablespace users begin backup;
Statement processed.
SQL> ! cp /db01/ORACLE/tst9/users01.dbf /stage
SQL> alter tablespace users end backup;
Statement processed.
SQL> alter system switch logfile;
Statement processed.

3. Connect to the user TEST and add more data to Table t1. This data is in rows 2 and 3. This data has been added after the backup of the users01.dbf data file; therefore, the data is not part of the data file copied earlier. After this is done, perform log switches to simulate normal activity in the database. This activates the archiver process to generate archived logs for the newly added data.

[oracle@DS-LINUX backup]$ sqlplus /nolog
SQL*Plus: Release 9.0.1.0.0 - Production on Thu Nov 1 21:04:10 2001
(c) Copyright 2001 Oracle Corporation.  All rights reserved.

SQL> connect test/test
Connected.
SQL> insert into t1 values(2,'This is test two after hot backup');
1 row processed.
SQL> insert into t1 values(3,'This is test three after hot backup');
1 row processed.
SQL> commit;
Statement processed.
SQL> connect / as sysdba
Connected.
SQL> alter system switch logfile;
Statement processed.
SQL> alter system switch logfile;
Statement processed.
SQL> alter system switch logfile;
Statement processed.
SQL> alter system switch logfile;
Statement processed.

4. Verify the location of the data file of the USERS tablespace. Then remove or delete this file.

SQL> ! rm /db01/ORACLE/tst9/users01.dbf
5. Shut down the database. Upon restarting, verify that the missing data file error occurs.

[oracle@DS-LINUX tst9]$ sqlplus /nolog
SQL*Plus: Release 9.0.1.0.0 - Production on Thu Nov 1 21:04:10 2001
(c) Copyright 2001 Oracle Corporation.  All rights reserved.

SQL> connect /as sysdba
Connected to an idle instance.
SQL> startup
ORACLE instance started.
Total System Global Area   19504528 bytes
Fixed Size                    64912 bytes
Variable Size              16908288 bytes
Database Buffers            2457600 bytes
Redo Buffers                  73728 bytes
Database mounted.
ORA-01157: cannot identify/lock data file 4 - see DBWR trace file
ORA-01110: data file 4: '/db01/ORACLE/tst9/users01.dbf'
SQL>
6. Take the lost data file offline. This will enable you to recover this data file and tablespace while the rest of the database is available for user access.

SQL> alter database datafile '/db01/oracle/tst9/users01.dbf' offline;
Statement processed.

7. Restore the individual data file by copying the data file users01.dbf back to the original location.

[oracle@DS-LINUX tst9]$ cp /stage/users01.dbf /db01/oracle/tst9

8. With the database open, begin the recovery process by executing the RECOVER DATAFILE command. Then, apply all the available redo logs; this should result in a complete recovery. Finally, bring the data file online so that it is available for access by users.

SQL> connect /as sysdba
Connected.
SQL> recover datafile '/db01/ORACLE/tst9/users01.dbf';
ORA-00279: change 48323 generated at 03/29/00 22:04:25 needed for thread 1
ORA-00289: suggestion : /oracle/admin/tst9/arch1/archtst9_84.log
ORA-00280: change 48323 for thread 1 is in sequence #84
Specify log: {<RET>=suggested | filename | AUTO | CANCEL}
Log applied.
ORA-00279: change 48325 generated at 03/29/00 22:05:25 needed for thread 1
ORA-00289: suggestion : /oracle/admin/tst9/arch1/archtst9_85.log
ORA-00280: change 48325 for thread 1 is in sequence #85
ORA-00278: log file '/oracle/admin/tst9/arch1/archtst9_84.log' no longer needed for this recovery
Specify log: {<RET>=suggested | filename | AUTO | CANCEL}
Log applied.
ORA-00279: change 48330 generated at 03/29/00 22:08:41 needed for thread 1
ORA-00289: suggestion : /oracle/admin/tst9/arch1/archtst9_86.log
ORA-00280: change 48330 for thread 1 is in sequence #86
ORA-00278: log file '/oracle/admin/tst9/arch1/archtst9_85.log' no longer needed for this recovery
Specify log: {<RET>=suggested | filename | AUTO | CANCEL}
Log applied.
Media recovery complete.
SQL> alter database datafile '/db01/ORACLE/tst9/users01.dbf' online;
Statement processed.

9. Verify that there is no data loss, even though records 2 and 3 were added after the hot backup. The data for these records was applied from the offline redo logs (archived logs).

SQL> select * from t1;
        C1 C2
---------- --------------------------------------------
         1 This is test one before hot backup
         2 This is test two after hot backup
         3 This is test three after hot backup
3 rows selected.
SQL>
Conclusions

The most notable observation about this scenario is that when the database is in ARCHIVELOG mode, no data is lost. All data entered into the USERS tablespace after the hot backup, but before the failure, is preserved. Only the data file users01.dbf must be restored, which takes less time than restoring all the data files. By applying the archived logs during the recovery process, you can salvage all changes that occurred after the hot backup of a data file. Another equally important feature is that the database can remain open to users while the one tablespace and its associated data file(s) are being recovered. This allows users to access data in the other tablespaces of the database not affected by the failure.
Restoring Data Files to Different Locations
Restoring data files to a different location can be performed in a similar manner in both ARCHIVELOG mode and NOARCHIVELOG mode. The main difference is that, as with any NOARCHIVELOG mode recovery, the database in most cases cannot be completely recovered to the point of failure. The only time a database can be completely recovered in NOARCHIVELOG mode is when the database has not cycled through all of the online redo logs since the last complete backup. To restore the files to a different location, you would perform an OS copy from the backup location to the new location and then start the database at the mount stage. After that, you would update the control file with the ALTER DATABASE RENAME FILE command to designate the new location. Let's walk through this procedure.

1. Use OS commands to restore files to their new locations.

cp /db01/oracle/tst9/data01.dbf /db02/oracle/tst9/data01.dbf

2. Start up the database instance and mount the database.
oracle@octilli:~ > oraenv
ORACLE_SID = [tst9] ?
oracle@octilli:~ > sqlplus /nolog
SQL*Plus: Release 9.0.1.0.0 - Production on Mon Oct 29 23:26:23 2001
(c) Copyright 2001 Oracle Corporation.  All rights reserved.

SQL> connect /as sysdba
Connected.
SQL> startup mount
ORACLE instance started.
Total System Global Area   75854976 bytes
Fixed Size                   279680 bytes
Variable Size              71303168 bytes
Database Buffers            4194304 bytes
Redo Buffers                  77824 bytes
Database mounted.
SQL>
3. Use the ALTER DATABASE RENAME FILE command to designate the new location.

SQL> ALTER DATABASE RENAME FILE
  2> '/db01/oracle/tst9/data01.dbf' to
  3> '/db02/oracle/tst9/data01.dbf';

4. Use the ALTER DATABASE OPEN command to open the database.

SQL> alter database open;
Database altered.
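To confirm that the control file now points at the new location, you can query V$DATAFILE after the rename; a quick, illustrative check:

SQL> select name from v$datafile where name like '%data01%';

NAME
-----------------------------------------
/db02/oracle/tst9/data01.dbf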
Relocate and Recover a Tablespace by Using Archived Redo Logs
In this example, during the recovery process, you will relocate a tablespace to a new filesystem by restoring the tablespace's data files to that filesystem. You will use the RECOVER DATABASE command to determine which archived logs you will need to apply to the newly relocated data files. This type of recovery can be performed at the tablespace level or at the database level. If you perform it at the tablespace level, you will need to take the tablespace offline; at the database level, you will need to start and mount the database. Below is an example of this recovery procedure at the database level.

1. Set ORACLE_SID to tst9, which is your target database, so that the database can be started or mounted with SQL*Plus.

oracle@octilli:~ > oraenv
ORACLE_SID = [tst9] ?

2. Run the appropriate user-managed script to back up the tst9 database to disk. This customized script shuts down the database and then copies the data files, control files, redo logs, and archived log files to a staging directory. After this is done, database tst9 is restarted.

# User-managed backup script
# Cold backup script for tst9
#
echo ''
echo 'starting cold backup...'
echo ''
# Script to stop database!
./stopdb_tst9.sh
echo ''
echo 'tst9 shutdown...'
echo ''
echo 'clean up last backup in staging directory'
rm /staging/cold/tst9/*
echo ''
echo 'copying files to staging...'
echo ''
cp /db01/oracle/tst9/* /staging/cold/tst9/.
cp /db02/oracle/tst9/* /staging/cold/tst9/.
cp /oracle/admin/tst9/arch/* /staging/cold/tst9/.
echo ''
echo 'tst9 starting up........'
echo ''
# Script to startup database!
./startdb_tst9.sh

3. Now start up the database to demonstrate the INDX tablespace failure that will need to be restored, recovered, and relocated to the new filesystem.

oracle@octilli:/db01/oracle/tst9 > sqlplus /nolog
SQL*Plus: Release 9.0.1.0.0 - Production on Thu Nov 1 21:04:10 2001
(c) Copyright 2001 Oracle Corporation.  All rights reserved.

SQL> connect /as sysdba
Connected to an idle instance.
SQL> startup mount
ORACLE instance started.
Total System Global Area   75854976 bytes
Fixed Size                   279680 bytes
Variable Size              71303168 bytes
Database Buffers            4194304 bytes
Redo Buffers                  77824 bytes
Database mounted.
SQL> alter database open;
alter database open
*
ERROR at line 1:
ORA-01157: cannot identify/lock data file 7 - see DBWR trace file
ORA-01110: data file 7: '/db01/oracle/tst9/indx01.dbf'
SQL>

4. Next, shut down the database.

SQL> shutdown
ORA-01109: database not open
Database dismounted.
ORACLE instance shut down.

5. After this is accomplished, restore the backup indx01.dbf file from the backup staging directory, /staging/cold/tst9/indx01.dbf, to the new filesystem, /db02/oracle/tst9.

oracle@octilli:/staging/cold/tst9 > cp indx01.dbf /db02/oracle/tst9/.

6. Next, start up and mount the database, and then use the RENAME command to update the control file with the indx01.dbf data file's new location.

SQL> startup mount
ORACLE instance started.
Total System Global Area   75854976 bytes
Fixed Size                   279680 bytes
Variable Size              71303168 bytes
Database Buffers            4194304 bytes
Redo Buffers                  77824 bytes
Database mounted.
SQL> alter database rename file
  2  '/db01/oracle/tst9/indx01.dbf' to
  3  '/db02/oracle/tst9/indx01.dbf';
Database altered.
SQL>

7. Then recover the database and apply the necessary archived logs to make the indx01.dbf data file in the INDX tablespace current. Then open the database.

SQL> recover database;
ORA-00279: change 153845 generated at 10/31/2001 23:12:23 needed for thread 1
ORA-00289: suggestion : /oracle/admin/tst9/arch/archtst9_12.log
ORA-00280: change 153845 for thread 1 is in sequence #12

Specify log: {<RET>=suggested | filename | AUTO | CANCEL}
Log applied.
Media recovery complete.
SQL> alter database open;
Database altered.
8. Verify that the INDX tablespace and its associated data file have been moved from filesystem /db01/oracle/tst9 to /db02/oracle/tst9.

SQL> select name, status from v$datafile;

NAME                                  STATUS
------------------------------------- -------
/db01/oracle/tst9/system01.dbf        SYSTEM
/db01/oracle/tst9/rbs01.dbf           ONLINE
/db01/oracle/tst9/temp01.dbf          ONLINE
/db01/oracle/tst9/users01.dbf         ONLINE
/db02/oracle/tst9/tools01.dbf         ONLINE
/db01/oracle/tst9/data01.dbf          ONLINE
/db02/oracle/tst9/indx01.dbf          ONLINE
/db02/oracle/tst9/data02.dbf          ONLINE

8 rows selected.

SQL>
Describe Read-Only Tablespace Recovery
There are three scenarios that can occur with read-only tablespace recovery. These are as follows:
Read-only backup and read-only recovery
Read-only backup and read-write recovery
Read-write backup and read-only recovery
The first scenario is the most straightforward because no recovery is needed. The SCN does not change while the tablespace is read-only, so the only activity required is to restore the data files associated with the read-only tablespace; no archived logs need to be applied. The second scenario requires a more complex recovery process because the tablespace is being recovered to a read-write state in which the SCN has changed and transactions have been made in the tablespace.
In this case, you would restore the tablespace from backup and apply archived logs from the point at which the tablespace was made read-write. The final scenario also requires recovery because the tablespace is backed up in a read-write state and then recovered to read-only. In this case, you will need to restore the backup of the tablespace taken in read-write mode and apply archived logs up to the point where the tablespace was made read-only.

You should always perform a backup after making a tablespace read-only, because doing so eliminates the need to apply archived logs if the tablespace ever has to be restored.
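As a sketch of that last piece of advice, the sequence after making a tablespace read-only might look like this; the tablespace name history and its data file are hypothetical:

SQL> alter tablespace history read only;
Tablespace altered.
SQL> ! cp /db01/oracle/tst9/history01.dbf /staging/cold/tst9/.

Because the data file can no longer change, this single copy remains valid for any later restore of the tablespace, with no archived logs to apply.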
Using RMAN for Restoration and Recovery
The restore and recovery considerations for using RMAN consist of how you will restore databases, tablespaces, data files, control files, and archived logs from RMAN. Restores and recoveries can be performed from backups on both disk and tape devices. There are two main backup sources that can be the basis for the RMAN recovery process: image copies and backup sets. Image copies can be stored only on disk. Image copies are actual copies of the database files, archived logs, or control files and are not stored in a special RMAN format. An image copy in RMAN is equivalent to an OS copy command, such as cp or dd in Unix, or COPY in Windows NT/2000/XP.

In Oracle9i, the RESTORE command will determine the best available backup set or image copy to use in the restoration, and a file will be restored only if restoration is necessary. In prior Oracle versions, the files were always restored, even when that wasn't necessary. The RECOVER command applies the necessary changes from the online redo logs and archived log files to recover the restored files. If you are using incremental backups, the incremental backups are applied along with the redo to recover the database.
Performing RMAN Recovery in NOARCHIVELOG Mode
As the first example of using RMAN for restores and recoveries, you will restore a database in NOARCHIVELOG mode. To restore a database in this
mode, you must first make sure that the database was shut down cleanly so that you are sure to get a consistent backup. This means the database should be shut down with a SHUTDOWN NORMAL, IMMEDIATE, or TRANSACTIONAL command; the ABORT option should not be used. The database should then be started in MOUNT mode, but not opened, because the database files cannot be backed up when the database is opened and not in ARCHIVELOG mode. Next, while in the RMAN utility, you must connect to the target database, which in our example is tst9 in the Unix environment. Then you can connect to the recovery catalog in the rcat database. Once you are connected to the proper target and catalog, you can execute the appropriate RMAN backup script. This script will back up the entire database. After this has been done, the database can be restored with the appropriate RMAN script. Finally, the database can be opened for use. Let's walk through this example:

1. Set the ORACLE_SID to tst9, which is your target database, so that
the database can be started in MOUNT mode with SQL*Plus.

oracle@octilli:~ > oraenv
ORACLE_SID = [tst9] ?
oracle@octilli:~ > sqlplus /nolog
SQL*Plus: Release 9.0.1.0.0 - Production on Mon Oct 29 23:36:19 2001
(c) Copyright 2001 Oracle Corporation.  All rights reserved.

SQL> connect /as sysdba
Connected to an idle instance.
SQL> startup mount
ORACLE instance started.
Total System Global Area   75854976 bytes
Fixed Size                   279680 bytes
Variable Size              71303168 bytes
Database Buffers            4194304 bytes
Redo Buffers                  77824 bytes
Database mounted.
SQL>
2. Start the RMAN utility at the command prompt and connect to the
target and the recovery catalog database rcat.

oracle@octilli:~ > rman
Recovery Manager: Release 9.0.1.0.0 - Production
(c) Copyright 2001 Oracle Corporation.  All rights reserved.

RMAN> connect target
connected to target database: tst9 (not mounted)
RMAN> connect catalog rman/rman@rcat
connected to recovery catalog database
RMAN>

3. Once you are connected to the target and recovery catalog, you can
back up the target database to disk or tape. In this example, you choose disk. You give the backup a format of db_%u_%d_%s, which concatenates db_ with the backup set unique identifier, the database name, and the backup set number.

RMAN> run
2> {
3> allocate channel c1 type disk;
4> backup database format 'db_%u_%d_%s';
5> release channel c1;
6> }

allocated channel: c1
channel c1: sid=11 devtype=DISK
Starting backup at 29-OCT-01
channel c1: starting full datafile backupset
channel c1: specifying datafile(s) in backupset
including current controlfile in backupset
input datafile fno=00001 name=/db01/oracle/tst9/system01.dbf
input datafile fno=00006 name=/db01/oracle/tst9/data01.dbf
input datafile fno=00002 name=/db01/oracle/tst9/rbs01.dbf
input datafile fno=00008 name=/db01/oracle/tst9/data02.dbf
input datafile fno=00003 name=/db01/oracle/tst9/temp01.dbf
input datafile fno=00004 name=/db01/oracle/tst9/users01.dbf
input datafile fno=00007 name=/db01/oracle/tst9/indx01.dbf
input datafile fno=00005 name=/db01/oracle/tst9/tools01.dbf
channel c1: starting piece 1 at 29-OCT-01
channel c1: finished piece 1 at 29-OCT-01
piece handle=/oracle/product/9.0.1/dbs/db_0jd7r8e3_TST9_19 comment=NONE
channel c1: backup set complete, elapsed time: 00:01:57
Finished backup at 29-OCT-01
released channel: c1

4. Once the backup has completed, the database may be restored. It must
be mounted but not opened. In the restore script, choose three disk channels to utilize parallelization of the restore process. The RESTORE DATABASE command is responsible for the restore process within RMAN. No recovery is required because the database was in NOARCHIVELOG mode and the complete database was restored.

RMAN> run {
allocate channel c1 type disk;
allocate channel c2 type disk;
allocate channel c3 type disk;
restore database;
}
5. Once the database has been restored, it can be opened and then shut down normally. At this point, a startup should be performed to make sure the restore process was successful.

SQL> alter database open;
Database altered.
SQL> shutdown
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> startup
ORACLE instance started.
Total System Global Area   75854976 bytes
Fixed Size                   279680 bytes
Variable Size              71303168 bytes
Database Buffers            4194304 bytes
Redo Buffers                  77824 bytes
Database mounted.
Database opened.
SQL>
Performing RMAN Complete Recovery in ARCHIVELOG Mode
As the second example of using RMAN for restores and recoveries, you will restore a database in ARCHIVELOG mode. In this case, the database can be mounted or opened during the backup. This is because the database files can be backed up while the database is open in ARCHIVELOG mode, in a manner similar to the user-managed ALTER TABLESPACE BEGIN BACKUP command.
To perform this, you must connect to the target database (tst9 in the Unix environment in our example). Then you can connect to the recovery catalog in the rcat database. Once you are connected to the proper target and catalog, you can execute the appropriate RMAN backup script. This script will back up the entire database. After this is done, you can restore the database with the appropriate RMAN script and then open it for use. Let's walk through this example:

1. Set the ORACLE_SID to tst9, which is your target database, so that the
database can be started in MOUNT mode with SQL*Plus.
oracle@octilli:~ > oraenv
ORACLE_SID = [tst9] ?
oracle@octilli:~ > sqlplus /nolog
SQL*Plus: Release 9.0.1.0.0 - Production on Mon Oct 29 23:36:19 2001
(c) Copyright 2001 Oracle Corporation. All rights reserved.
SQL> connect /as sysdba
Connected to an idle instance.
SQL> startup mount
ORACLE instance started.
Total System Global Area   75854976 bytes
Fixed Size                   279680 bytes
Variable Size              71303168 bytes
Database Buffers            4194304 bytes
Redo Buffers                  77824 bytes
SQL>
2. Start the RMAN utility at the command prompt and connect to the
target and the recovery catalog database rcat. oracle@octilli:~ > rman Recovery Manager: Release 9.0.1.0.0 - Production
(c) Copyright 2001 Oracle Corporation. All rights reserved.
RMAN> connect target connected to target database: tst9 (not mounted) RMAN> connect catalog rman/rman@rcat connected to recovery catalog database RMAN> 3. Once you are connected to the target and recovery catalog, you can
back up the target database, including archived logs, to disk or tape. In this example, you choose disk. You give the database name a format of db_%u_%d_%s, which means that a db_ will be concatenated to the backupset unique identifier and then concatenated to the database name with the backupset number.
RMAN> run
2> {
3> allocate channel c1 type disk;
4> backup database format 'db_%u_%d_%s';
5> backup format 'log_t%t_s%s_p%p'
6> (archivelog all);
7> }
allocated channel: c1
channel c1: sid=11 devtype=DISK
Starting backup at 30-OCT-01
channel c1: starting full datafile backupset
channel c1: specifying datafile(s) in backupset
including current controlfile in backupset
input datafile fno=00001 name=/db01/oracle/tst9/system01.dbf
input datafile fno=00006 name=/db01/oracle/tst9/data01.dbf
input archive log thread=1 sequence=9 recid=16 stamp=444512889
input archive log thread=1 sequence=10 recid=17 stamp=444525609
channel c1: starting piece 1 at 30-OCT-01
channel c1: finished piece 1 at 30-OCT-01
piece handle=/oracle/product/9.0.1/dbs/log_t444525610_s21_p1 comment=NONE
channel c1: backup set complete, elapsed time: 00:00:04
Finished backup at 30-OCT-01
released channel: c1
4. Once the backup has completed, the database may be restored and
recovered. The database must be mounted but not opened. In the restore and recovery script, choose three disk channels to utilize parallelization of the restore process. This is not necessary, but it improves the restore and recovery time. The RESTORE DATABASE command is responsible for the restore process within RMAN. The RECOVER DATABASE command is required because the database was in ARCHIVELOG mode, and the archived logs need to be applied to the data files in order for a complete recovery to be performed. Finally, the database is opened.
RMAN> run
2> {
3> allocate channel c1 type disk;
4> allocate channel c2 type disk;
5> allocate channel c3 type disk;
6> restore database;
7> recover database;
8> alter database open;
9> }
allocated channel: c1
channel c1: sid=11 devtype=DISK
allocated channel: c2
channel c2: sid=12 devtype=DISK
database opened
released channel: c3
released channel: c1
released channel: c2
RMAN> 5. Once the database has been restored, recovered, and opened, it should
be shut down normally. A startup should be performed to make sure the restore process was successful.
SQL> shutdown
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> startup
ORACLE instance started.
Total System Global Area   75854976 bytes
Fixed Size                   279680 bytes
Variable Size              71303168 bytes
Database Buffers            4194304 bytes
Redo Buffers                  77824 bytes
Database mounted.
Database opened.
SQL>
Using RMAN to Restore Data Files to Different Locations
As the third example, you will restore and recover a data file by using RMAN. In this case, the database will also be in ARCHIVELOG mode because an individual data file will be backed up. As in the previous example, the database will be backed up while it is open.
First, within RMAN, you must run the appropriate data file backup script. For this example, you will select the data file for the DATA tablespace. You will back up the current control file as an extra precaution. Once the data file is backed up, you can begin the restore and recovery process. For this process, the database should be mounted, but not open. You will also need to use the SET NEWNAME command to identify the new data file location, and the SWITCH command to record the location change in the control file. With the database mounted, you can execute the appropriate RMAN script to restore and recover the data file. The steps are as follows: 1. Set ORACLE_SID to tst9, which is your target database, so that the
database can be started or mounted with SQL*Plus.
oracle@octilli:~ > oraenv
ORACLE_SID = [tst9] ?
2. Connect to RMAN, the target database, and the recovery catalog in the rcat database.
oracle@octilli:~ > rman target / catalog rman/rman@rcat
connected to target database: TST9 (DBID=1268700551) connected to recovery catalog database 3. Run the appropriate RMAN script to back up the DATA data file to disk.
RMAN> run 2> { 3> allocate channel ch1 type disk; 4> backup 5> format '%d_%u' 6> (datafile '/db01/oracle/tst9/data02.dbf'); 7> release channel ch1; 8> } allocated channel: ch1
channel ch1: sid=12 devtype=DISK
Starting backup at 30-OCT-01
channel ch1: starting full datafile backupset
channel ch1: specifying datafile(s) in backupset
input datafile fno=00008 name=/db01/oracle/tst9/data02.dbf
channel ch1: starting piece 1 at 30-OCT-01
channel ch1: finished piece 1 at 30-OCT-01
piece handle=/oracle/product/9.0.1/dbs/TST9_0nd7tstb comment=NONE
channel ch1: backup set complete, elapsed time: 00:00:01
Finished backup at 30-OCT-01
released channel: ch1
RMAN>
4. Once the data file has been backed up, you can restore and recover the
data file with the appropriate RMAN script. The RMAN script uses the SET NEWNAME command to designate the new location of the data file that will be relocated; then the database will be restored. Next, the SWITCH command will record the location change in the control file. Finally, the database will be recovered and opened.
RMAN> run
2> {
3> set newname for datafile 8 to '/db02/oracle/tst9/data02.dbf';
4> restore database;
5> switch datafile all;
6> recover database;
7> alter database open;
8> }
input datafilecopy recid=32 stamp=444528057 filename=/db02/oracle/tst9/data02.dbf
starting full resync of recovery catalog
full resync complete
Starting recover at 31-OCT-01
using channel ORA_DISK_1
starting media recovery media recovery complete Finished recover at 31-OCT-01 database opened RMAN> 5. Once the database has been restored, shut it down normally. Then
perform a startup to make sure the restore process was completed. SQL> shutdown SQL> startup
Use RMAN to Relocate and Recover a Tablespace Using Archived Logs
I
n this example, you will relocate a tablespace to a new filesystem during recovery. You can perform this using the SET NEWNAME and SWITCH commands that were mentioned earlier. In addition, the RECOVER command applies the necessary backup of data files and archived logs. The major difference between this process and that of relocating a data file is that the tablespace needs to be taken offline before the associated data files can be moved to a new location. The database, however, can be opened during this process. Below is an example of this procedure.
1. Set ORACLE_SID to tst9, which is your target database, so that the
database can be started or mounted with SQL*Plus.
oracle@octilli:~ > oraenv
ORACLE_SID = [tst9] ?
2. Connect to RMAN, the target database, and the recovery catalog in the rcat database.
oracle@octilli:~ > rman target / catalog rman/rman@rcat
connected to target database: TST9 (DBID=1268700551) connected to recovery catalog database 3. Run the appropriate RMAN script to back up the tst9 database to disk.
RMAN> run 2> { 3> allocate channel c1 type disk; 4> backup database format 'db_%u_%d_%s'; 5> backup format 'log_t%t_s%s_p%p' 6> (archivelog all); 7> } 4. Then issue the recovery script, which will utilize the SET NEWNAME,
RESTORE, SWITCH, and RECOVER commands. Finally, bring the tablespace online. RMAN> run 2> { 3> sql 'alter tablespace tools offline immediate'; 4> set newname for datafile '/db01/oracle/tst9/ tools01.dbf' to 5> '/db02/oracle/tst9/tools01.dbf'; 6> restore (tablespace tools); 7> switch datafile 5;
file /oracle/admin/tst9/arch/archtst9_17.log archive log thread 1 sequence 18 is already on disk as file /oracle/admin/tst9/arch/archtst9_18.log archive log filename=/oracle/admin/tst9/arch/archtst9_16.log thread=1 sequence=6 media recovery complete Finished recover at 31-OCT-01 sql statement: alter tablespace tools online RMAN>
Summary
In this chapter, we emphasized media recoveries. We described the two methods of performing Oracle database recovery for media failures (user-managed and RMAN-based recoveries) and we performed specific examples of each. In addition, we identified the differences between ARCHIVELOG mode and NOARCHIVELOG mode and we described the significant implications that each mode has on the backup and recovery process. We also showed examples of both ARCHIVELOG mode and NOARCHIVELOG mode recoveries using both user-managed and RMAN methods of recovery. We then discussed read-only tablespace recovery and the three recovery scenarios associated with it. Each of these scenarios requires different recovery actions. We also performed both user-managed and RMAN-based recovery situations in which file relocation was required. Media recoveries are an important topic in testing and in real work situations. How media recovery situations are handled depends on the confidence of the DBA performing the media recovery. You can obtain confidence by practicing media recoveries in all of the above-mentioned situations. Then, when you need to perform a media recovery, your uncertainties will have been significantly reduced. Such practice situations will also prepare you for testing and situations you will encounter as a DBA.
Exam Essentials Understand media recovery. Media recovery is required when database files, such as data files, control files, or online redo logs, become unavailable. Such files can become unavailable because of hardware failure, corruption, or accidental removal. Know the recovery differences between ARCHIVELOG and NOARCHIVELOG mode. In order to perform a complete recovery, the database must be in ARCHIVELOG mode. In NOARCHIVELOG mode, an incomplete recovery must be performed—otherwise all transactions after the last full backup to the point of failure will be lost. Be familiar with the process of restoring data files to different locations. To restore data files to different locations, you must use the OS commands to copy files from the backup location to the new location. Then you will use the ALTER DATABASE RENAME FILE command to record the new data file locations in the control file. List the read-only tablespace recovery scenarios. The three read-only recovery scenarios include read-only backup and read-only recovery, read-only backup and read-write recovery, and read-write backup and read-only recovery. You should understand how recovery varies in each of these scenarios. Understand the commands necessary to perform RMAN restoration and recovery. The RESTORE command will determine the best available backup set or image copy to use in the restoration. The RECOVER command will apply the necessary changes from the online redo logs and archived logs as well as incremental backups (if used) to the restored files. Understand RMAN recovery in ARCHIVELOG and NOARCHIVELOG mode. In NOARCHIVELOG mode, you use a RESTORE command of a consistent database but you don’t use the RECOVER command. In ARCHIVELOG mode, both the RESTORE and RECOVER commands are used. Use RMAN commands to restore data files to different locations. The SET NEWNAME command is used to name the new location of the data file, and the SWITCH command is used to update the control file with the new location. After this, the database is recovered.
Use RMAN commands to relocate and recover a tablespace using archived logs. The SET NEWNAME command is used to name the new location of the data file, and the SWITCH command is used to update the control file with the new location. Note that the tablespace must be taken offline to perform the recovery, and then it must be put back online when the recovery is complete while the database is still open.
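As a quick side-by-side reminder, the two relocation approaches look like the following sketch (the data file and locations are reused from the earlier example; the database is mounted in both cases, and the user-managed command assumes the file has already been copied with OS commands):
SQL> alter database rename file '/db01/oracle/tst9/data02.dbf' to '/db02/oracle/tst9/data02.dbf';
The RMAN equivalent restores the file from a backup instead of relying on an OS copy:
RMAN> run {
2> set newname for datafile 8 to '/db02/oracle/tst9/data02.dbf';
3> restore database;
4> switch datafile all;
5> recover database;
6> }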
Key Terms
Before you take the exam, be certain you are familiar with the following terms: ARCHIVELOG mode
Review Questions 1. Which of these failure situations best describe a media failure and will
require recovery? (Choose all that apply.) A. A deleted data file B. All control files deleted C. A failed disk controller with access to disks storing data files D. A disk drive crash on a non-mirrored storage array containing
data files E. All of the above 2. In which of these modes must a database be in order for a complete
database recovery up to the point of failure to be performed? A. NOARCHIVELOG B. ARCHIVELOG C. Export D. LOG 3. What is the type of recovery being performed when transactions are
not applied and the database is in NOARCHIVELOG mode? A. Complete recovery B. Partial recovery C. Incomplete recovery D. No recovery 4. What command is required to relocate files in a user-managed recovery? A. ALTER DATABASE RENAME FILE B. SET NEWNAME C. ALTER DATABASE SET NEWNAME D. ALTER DATABASE NEWNAME FILE
5. You make cold database backups weekly, including a read-only tablespace. Before the next weekly backup, the tablespace is made read-write and you need to perform recovery on that tablespace. What option must be performed? A. No recovery is needed because the tablespace was backed up
read-only. B. No recovery is needed because the tablespace did not have many
changes made in it after it was made read-write. C. Recovery is needed because the tablespace was made read-write
and the backup was read-only. D. Restoring the read-only tablespace is all that is needed. 6. What RMAN command is required to recover a database in
NOARCHIVELOG mode? A. RESTORE B. RECOVER C. SET NEWNAME D. SWITCH 7. What RMAN command is responsible for applying incremental back-
ups and archived logs? A. RESTORE B. RECOVER C. SET NEWNAME D. SWITCH 8. When using RMAN, what mode does the database need to be in to
perform a database restore and recovery? A. Opened B. Nomount C. Closed D. Mount
9. The SWITCH command is responsible for which of the following
activities during the relocation process of database files? A. Renaming the location B. Updating the control file C. Moving the physical files D. Updating the files in the data dictionary 10. Which of the following recoveries doesn’t require you to recover the
database? A. Read-only tablespace backup and tablespace was read-only when
recovered B. Read-only tablespace backup and tablespace was read-write when
recovered C. Read-write tablespace backup and tablespace was read-only when
recovered D. Read-write tablespace backup and tablespace was read-write
when recovered 11. What are the read-only tablespace recovery scenarios? (Choose all
that apply.) A. Read-only backup to read-only recovery B. Read-only backup to read-write recovery C. Read-write backup to read-write recovery D. Read-write backup to read-only recovery 12. What type of read-only recovery could the DBA avoid if they are
following recommended procedures? A. Read-only backup to read-only recovery B. Read-only backup to read-write recovery C. Read-write backup to read-write recovery D. Read-write backup to read-only recovery
13. What is unique about a RMAN image copy? A. It can only back up data files. B. It can only back up to tape. C. It can only back up to disk. D. It cannot back up control files. 14. Which of the following is a true statement about a backup set? A. It is not stored in a special RMAN format. B. It can contain multiple data files. C. It can contain multiple data files and archived logs. D. It can only contain one data file per backup set. 15. Which RMAN command is responsible for copying files from the
backup media? A. RESTORE B. RENAME C. RECOVER D. SWITCH
Answers to Review Questions 1. E. All these failures will require media recovery. Each failure will
make a database file unavailable, which will mean that a restoration and recovery is needed. 2. B. Complete recovery up to the point of failure can only be per-
formed when the database is in ARCHIVELOG mode. 3. A. Complete recovery in NOARCHIVELOG mode is the correct answer.
Even though you are performing a complete recovery, not all transactions are being applied and there can be data missing. This is because the database is not in ARCHIVELOG mode or is not generating archived logs. 4. A. The ALTER DATABASE RENAME FILE command is used for user-
managed recoveries. 5. C. Recovery is needed when a read-only tablespace is made read-
write and the backup was taken when the tablespace was read-only. This means that the SCNs in the headers of the data files have changed and will need to be resynchronized with the rest of the database during the recovery process. 6. A. The RESTORE command is responsible for the restore process
when the database is in NOARCHIVELOG mode because no recovery is necessary. 7. B. The RECOVER command applies incremental backups and archived
logs to the recovery process. 8. D. The database should be mounted to perform a database restore
and recovery so that the control file can be read and the target database can be connected. 9. B. The SWITCH command is responsible for updating the control file
with the new location of the files that have been moved.
10. A. When the tablespace is read-only for the backup and read-only
during the recovery, no recovery is needed. The data files of the read-only tablespace can be restored because there is no change to the SCN of data file headers. 11. A, B, D. The scenarios for read-only tablespace recovery are read-
only backup to read-only recovery, read-only backup to read-write recovery, and read-write backup to read-only recovery. 12. D. Every time you make a tablespace read-only, you should conduct
a backup shortly thereafter. This eliminates the need to conduct a recovery in the read-write backup to read-only recovery scenario. 13. C. The RMAN image copy is similar to an OS copy. It can only be
performed to disk. 14. B. A backup set can contain multiple data files within the same
backup set, but it cannot contain data files and archived logs in the same backup set. A backup set is stored in a special RMAN format, unlike an image copy. 15. A. The RESTORE command is responsible for copying files from the
backup media.
User-Managed and RMAN-Based Incomplete Recovery
ORACLE9i: DBA FUNDAMENTALS II EXAM OBJECTIVES COVERED IN THIS CHAPTER: Describe the steps of incomplete recovery. Perform an incomplete database recovery. Identify the loss of current online redo log files. Perform an incomplete database recovery using UNTIL TIME. Perform an incomplete database recovery using UNTIL SEQUENCE.
Exam objectives are subject to change at any time without prior notice and at Oracle’s sole discretion. Please visit Oracle’s Certification website (http://www.oracle.com/education/ certification/) for the most current exam objectives listing.
Incomplete database recovery requires an understanding of the redo log and ARCHIVELOG processes, the synchronization of the Oracle database, and the options allowed for performing an incomplete recovery. This chapter discusses the incomplete recovery process and the commands associated with each incomplete recovery option. It also includes an example that shows how to perform an incomplete recovery due to lost or corrupted current redo log files. In addition to the user-managed incomplete recovery, this chapter demonstrates how to use RMAN for this process. The RMAN examples covered include how to use the SET UNTIL TIME and UNTIL SEQUENCE clauses. Incomplete recovery is the only method of recovery for certain types of failures. It is important that you understand when and how to use incomplete recovery methods. Anytime there is a failure that will cause you not to apply all the changes back to the database, you will need to use one of the incomplete recovery methods that will be discussed in this chapter.
Describing the User-Managed Incomplete Recovery
Incomplete recovery occurs when the database is not recovered entirely to the point at which the database failed. This is a partial recovery of the database in which some archived logs are applied to the database, but not all of them. With this type of recovery, only a portion of the transactions gets applied. There are three types of incomplete media recovery: cancel-based, time-based, and change-based.
Each of these options allows recovery to a point in time prior to the failure. The main reason why there are three options is so that the DBA can have better flexibility and control of where recovery is halted during the recovery process. Each of these options is described in detail in the next section. Incomplete recovery is performed any time you don’t want to apply all the archived and nonarchived log files that are necessary to bring the database up to the time of failure. As a result, the database is essentially not completely recovered; transactions remain missing. Figure 11.1 illustrates the different types of incomplete recovery. FIGURE 11.1
Incomplete recovery in ARCHIVELOG mode for a media failure on January 28th
[Figure: a timeline running from a database backup on 1-Jan-02 through archived logs Arch11.log, Arch21.log, Arch38.log, and Arch53.log (7-Jan-02, 14-Jan-02, 28-Jan-02) to a media failure and database crash on 28-Jan-02. The three stopping points shown are RECOVER DATABASE UNTIL CHANGE 6748 (an SCN in Arch37.log), RECOVER DATABASE UNTIL TIME '2002-1-25-13:00:00', and RECOVER DATABASE UNTIL CANCEL.]
Incomplete recovery should be performed when the DBA wants or is required to recover the database prior to the point of time when the database failed. Some of the circumstances that might call for this type of recovery include data file corruption, redo log corruption, or the loss of a table due to user error.
In some cases, incomplete recovery is the only option available to you. In a failure situation that involves the loss or corruption of the current and inactive nonarchived redo log files, recovering the database without the transactions in these files is the only option. If a complete recovery were performed instead, the error would be reintroduced as a result of the recovery process.
Performing an Incomplete Database Recovery
T
his section details the three types of incomplete database recovery: cancel-based, time-based, and change-based. Each of these methods is used for different circumstances, and you should be aware of when each is appropriate.
Cancel-Based Recovery In cancel-based recovery, you cancel the recovery before the point of failure. Cancel-based recovery provides the least flexibility and control of the stopping point during the recovery process. In this type of recovery, you apply archived logs during the recovery process. At some point before the recovery is complete, you enter the CANCEL command. At this point, recovery halts, and no more archived logs are applied. The following is a sample of cancel-based incomplete recovery. SQL> recover database until cancel; One example of when you would use a cancel-based incomplete recovery is when you need to restore a backup of a lost data file from a hot backup. To do this, you perform the following steps: 1. Make sure that the database is shut down by using the SHUTDOWN com-
mand from SQL*Plus. SQL> shutdown abort 2. Make sure that current copies of the data files, control files, and
parameter files exist in case there are errors that arise during the recovery process. If these files exist, you will be able to restart the recovery process, if needed, without introducing any new errors that might have resulted from a previous failed recovery. 3. Make sure that a current backup exists because copied files from the
current backup will replace the failed data files, online redo log files, or control files.
4. Restore the lost or damaged files from the current backup to their original locations by using OS copy commands.
5. Start the instance and mount the database.
SQL> startup mount
6. Verify that all the data files you need to recover are online. The fol-
lowing query shows the status of each data file that is online (with the exception of the system data file, which is always online; status equals system). SQL> select file#,status,enabled,name from v$datafile; FILE# STATUS ENABLED NAME ---------- ------- ---------- ------------------------1 SYSTEM READ WRITE /oracle/database/tst9/system01.dbf 2 ONLINE
7. Perform an incomplete recovery by using the UNTIL CANCEL clause
in the RECOVER DATABASE command. SQL> recover database until cancel; 8. Open the database with the RESETLOGS option.
SQL> alter database open resetlogs;
You must use the RESETLOGS clause with the ALTER DATABASE OPEN command for all types of incomplete recovery. The RESETLOGS option ensures that the log files applied in recovery can never be used again by resetting the log sequence and rebuilding the existing online redo logs. This process permanently deactivates all transactions that exist in the nonarchived log files so that they can never be recovered. At the same time, it resynchronizes log files with the data files and control files. If these transactions were not purged, the log files would create bad archived logs. This is the main reason why a backup of the control file, data files, and redo logs should be done prior to performing an incomplete recovery. 9. Perform a new cold or hot backup of the database. Existing backups
are no longer valid.
Remember, after the ALTER DATABASE OPEN RESETLOGS command is applied, the previous log files and backed-up data files are useless for this newly recovered database. This is because a gap exists in the log files. The old backup data files and logs can never again be synchronized with the database. Thus, a complete backup must be performed after an incomplete recovery of any type.
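One way to confirm that the logs were reset is to query V$DATABASE, which records the SCN and time of the last RESETLOGS operation; a minimal sketch:
SQL> select resetlogs_change#, resetlogs_time from v$database;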
Using Incomplete Recovery to Move a Database Recovery operations can be used as tools to perform activities other than the typical recovery from failure. For this reason, you need to be familiar with the backup and recovery features and capabilities associated with an incomplete recovery.
Incomplete recovery options, such as the backup control file being used in conjunction with the RECOVER DATABASE USING BACKUP CONTROLFILE UNTIL CANCEL command, can be useful when you are trying to move databases from one location to another. When you use such options, you can move databases for any purpose, such as moving a database for testing, or just moving a database to a new server. You must make sure that if you are moving a database to a new server, the Oracle database software you are using and the OS on the new server are similar. This approach of moving databases is performed by taking the hot or cold backup of the database you want to move and moving the data files and initialization files to the new location. Then you would edit the backup control file script to change the location references of all the physical database files, such as redo logs and data files. After you have done this, you need to validate your ORACLE_SID and make sure that it is sourced to the correct database; you will then need to execute the backup control file script at the SQL prompt as SYSDBA. This will generate a new database on a new server and in different locations. Please refer to the Oracle documentation for the exact steps to perform this task and always try this process in a test environment first. As a DBA, you will be responsible for setting up test database environments for numerous reasons. You will find that the ability to move and set up databases and applications on different servers for testing and upgrade purposes is a must-have skill. Every time there is any significant upgrade, it will need to be tested on a different environment than the production environment. This approach of moving data files and re-creating the control file works well for many testing and upgrading situations.
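A minimal sketch of the final steps of such a move, assuming the data files are already copied to their new locations and the edited control file creation script has been saved as cr_control.sql (a hypothetical name):
SQL> connect / as sysdba
SQL> startup nomount
SQL> @cr_control.sql
SQL> recover database using backup controlfile until cancel;
SQL> alter database open resetlogs;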
Time-Based Recovery In time-based recovery, the DBA recovers the database to a point in time before the point of failure. Time-based recovery provides more flexibility and control than the cancel-based option does. The cancel-based option's granularity is the size of a redo log file; in other words, when you are applying a redo log file, you get all the transactions in that file, regardless of the time period over which that log was filled. In time-based recovery, you apply archived logs to the database up to a designated point in time. This point can fall in the middle of an archived log, in which case only part of that log is applied rather than the whole log. This lets you halt the recovery process at a time just prior to a
fatal action in the database, such as data block corruption or the loss of a database object due to user error. Below is a sample of time-based incomplete recovery. SQL> recover database until time '2001-9-30:22:55:00'; You can use the preceding example to restore lost data files and then use time-based recovery in place of cancel-based recovery. To do this, you restore all the necessary data files from a hot backup, as before. The only change is that you use the UNTIL TIME clause in step 7 instead of an UNTIL CANCEL clause, as shown here: 7. Perform an incomplete recovery by using the UNTIL TIME clause. SQL> recover database until time '2001-9-30:22:55:00'; All other steps remain the same.
Change-Based Recovery In change-based recovery, you recover to a system change number (SCN) before the point of failure. This type of incomplete recovery gives you the most control. As you have already learned, the SCN is what Oracle uses to uniquely identify each committed transaction. The SCN is a number that orders the transactions consecutively in the redo logs as each transaction occurs. This number is also recorded in transaction tables within the rollback segments, control files, and data file headers. The SCN coordination between the transactions and these files synchronizes the database to a consistent state. Each redo log is associated with a low and a high SCN. This SCN information can be seen in the V$LOG_HISTORY view below. Notice the low and high SCN numbers in the FIRST_CHANGE# and NEXT_CHANGE# columns for each log sequence or log file.
SQLWKS> select sequence#,first_change#,next_change#,first_time
2> from v$log_history where sequence# > 10326;
SEQUENCE# FIRST_CHAN NEXT_CHANG FIRST_TIME
---------- ---------- ---------- --------------------
10327 60731807 60732514 30-SEP-01
10328 60732514 60732848 30-SEP-01
10329 60732848 60747780 30-SEP-01
10330 60747780 60748140 30-SEP-01
4 rows selected.
All transactions between these SCNs are included in these logs. Oracle determines what should be recovered by using the SCN information that is recorded in transaction tables within the rollback segments, control files, and data file headers. To perform a change-based recovery, you can use the previous example of incomplete database recovery but utilize change-based recovery in place of cancel-based or time-based recovery. To do this, restore all the needed data files from a hot backup, and then just use the following step 7 instead of the one shown previously. 7. Perform an incomplete recovery by using the UNTIL CHANGE clause. SQL> recover database until change 60747681;
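If you know the time of the fatal action but want the finer control of an SCN, you can derive a stopping point from V$LOG_HISTORY. This sketch, with an illustrative cutoff time, returns the highest change number recorded before that time:
SQL> select max(next_change#) from v$log_history
2> where first_time < to_date('2001-9-30:22:00:00','YYYY-MM-DD:HH24:MI:SS');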
Recovering after Losing Current Redo Logs
Incomplete recovery is necessary if there is a loss of the current and/or inactive nonarchived redo log files. If this scenario occurs, it means that you don't have all the redo log files up to the point of failure, so the only alternative is to recover prior to the point of failure.
Oracle has made improvements to compensate for this failure by giving you the ability to mirror copies of redo logs or to create group members on different filesystems.
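For example, adding a second member on a separate filesystem to an existing log group protects against the loss of a single copy; the group number and path here are illustrative:
SQL> alter database add logfile member '/db02/oracle/tst9/redo01b.log' to group 1;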
Some common error messages that might be seen in the alert log are ORA-00255, ORA-00312, ORA-00286, and ORA-00334. Each of these error messages indicates a problem writing to the online redo log files. To perform incomplete recovery after the redo log files have been lost, you do the following: 1. Start SQL*Plus and connect as SYSDBA.
SQL> connect / as sysdba;
2. Execute a SHUTDOWN command and copy all data files.
3. Execute a STARTUP MOUNT command to read the contents of the
control file. SQL>
startup mount;
4. Execute a RECOVER DATABASE UNTIL CANCEL command to start the
recovery process.
SQL> recover database until cancel;
5. Apply the necessary archived logs up to, but not including, the lost or
corrupted log. 6. Open the database and reset the log files.
SQL> alter database open resetlogs;
7. Switch the log files to see whether the new logs are working.
SQL> alter system switch logfile;
8. Shut down the database.
SQL> shutdown normal;
9. Execute STARTUP and SHUTDOWN NORMAL commands to validate that
the database is functional by checking the alert log after these commands are executed.
SQL> startup;
SQL> shutdown normal;
10. Perform a cold backup or hot backup.
If you need to recover archived logs from a different location, you can just change the LOG_ARCHIVE_DEST location to the new location of the archived log files. This may occur in a recovery situation in which you recover your archived logs from tape to a staging location on disk.
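For instance, after restoring the archived logs from tape to a staging directory, you could point the recovery session at that directory with the SQL*Plus SET LOGSOURCE command; this is a sketch, and the staging path is illustrative:
SQL> set logsource /staging/arch
SQL> recover database until cancel;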
Performing an RMAN-Based Incomplete Recovery Using UNTIL TIME
In this example you will perform one type of incomplete recovery using RMAN. You will recover the database to a particular point in time. To do so, you will create a user, called TEST, and two tables with date and time data stored in them. You will then perform a database backup in ARCHIVELOG mode and recover to a time between 2:31, when the data was stored in the first table, and 3:59, when the data was stored in the second table. You accomplish all of this with the SET UNTIL TIME clause in RMAN; this clause is required to perform incomplete recovery. Thus, when the database is recovered, you should not see the second table's data. When the recovery is completed and validated, you will need to register the database in the recovery catalog. Let's walk through this example: 1. Source your ORACLE_SID to tst9, which is your target database, so
that the database can be started in MOUNT or OPENED mode with SQL.
oracle@octilli:~ > . oraenv
ORACLE_SID = [tst9] tst9
oracle@octilli:~ >
SQL> startup mount
Oracle instance started
database mounted
Total System Global Area   75854976 bytes
Fixed Size                   279680 bytes
Variable Size              71303168 bytes
Database Buffers            4194304 bytes
2. Connect to RMAN, the target database, and the recovery catalog in the rcat database.
oracle@octilli:~ > rman target / catalog rman/rman@rcat
connected to target database: TST9 (DBID=1268700551) connected to recovery catalog database RMAN> 3. Create a user TEST and the two tables, which will be used throughout
this example. Data will be added to the first table. The results of this data insertion can be seen in the SELECT statement.
SQL> create user test identified by test
2> default tablespace users
3> temporary tablespace temp;
Statement processed.
SQL> grant connect,resource to test;
Statement processed.
SQL> connect test/test
Connected.
SQL> create table t1 (c1 number, c2 char(50));
Statement processed.
SQL> insert into t1 values (1, to_char(sysdate, 'HH:MI DD-MON-YYYY'));
SQL> commit;
SQL> create table t2 (c1 number, c2 char(50));
Statement processed.
SQL> connect system/manager
SQL> alter system switch logfile;
Statement processed.
SQL> select * from t1;
C1 C2
---------- --------------------------------------------
1 02:31 04-OCT-2001
1 row selected.
4. Back up the database.
RMAN> run {
2> allocate channel ch1 type disk;
3> backup database;
4> }
allocated channel: ch1
channel ch1: sid=10 devtype=DISK
Starting backup at 04-OCT-01
channel ch1: starting full datafile backupset
channel ch1: specifying datafile(s) in backupset
including current controlfile in backupset
input datafile fno=00001 name=/db01/oracle/tst9/system01.dbf
input datafile fno=00006 name=/db01/oracle/tst9/data01.dbf
input datafile fno=00002 name=/db01/oracle/tst9/rbs01.dbf
input datafile fno=00003 name=/db01/oracle/tst9/temp01.dbf
input datafile fno=00004 name=/db01/oracle/tst9/users01.dbf
input datafile fno=00007 name=/db01/oracle/tst9/indx01.dbf
input datafile fno=00005 name=/db01/oracle/tst9/tools01.dbf
channel ch1: starting piece 1 at 04-OCT-01
piece handle=/oracle/product/9.0.1/dbs/02d5p88p_1_1 comment=NONE
channel ch1: backup set complete, elapsed time: 00:01:49 Finished backup at 04-OCT-01 5. Back up all archived log files.
RMAN> run { 2> allocate channel ch1 type disk; 3> backup 4> format 'log_t%t_s%s_p%p' 5> (archivelog all); 6> } 6. Create the second table, t2, and add the date-time data to the table. This
date-time is the same day, but at 3:59 in the afternoon. Assume that you have run some log switches to move the data to the archived logs.
SQL> connect test/test
Connected.
SQL> insert into t2 values (2, to_char(sysdate, 'HH:MI DD-MON-YYYY'));
1 row processed.
SQL> commit;
SQL> select * from t2;
C1 C2
---------- --------------------------------------------
2 03:59 04-OCT-2001
1 row selected.
SQL> connect system/manager
SQL> alter system switch logfile;
SQL> alter system switch logfile;
RMAN> run { 2> allocate channel ch1 type disk; 3> backup 4> format '/oracle/backups/log_t%t_s%s_p%p' 5> (archivelog all); 6> }
8. Restore the database to a point in time between 2:31 and 3:59. Then
validate that you do not see the second table. RMAN> run { allocate channel ch1 type disk; set until time 'OCT 04 2001 15:58:00'; restore database; recover database; sql 'alter database open resetlogs'; } SQL> select * from t1; C1 C2 ---------- -----------------------------------------1 02:31 04-OCT-2001 1 row selected. SQL> select * from t2; No rows selected. SQL> 9. Once the database has been restored, it should be shut down normally.
Then you should perform a startup to make sure the restore process was completed successfully. SQL> shutdown SQL> startup 10. Once the database has been validated, the database should be reregistered
in the RMAN catalog. This must be done every time there is an incomplete recovery and the SQL 'ALTER DATABASE OPEN RESETLOGS'; command is performed. To reregister the database, you must be connected to the target database and the recovery catalog. oracle@octilli:~ > rman target / catalog rman/rman@rcat Recovery Manager: Release 9.0.1.0.0 - Production (c) Copyright 2001 Oracle Corporation. All rights reserved.
connected to target database: TST9 (DBID=1268700551) connected to recovery catalog database RMAN> reset database; database registered in recovery catalog starting full resync of recovery catalog full resync complete RMAN>
A complete backup should be performed on any database that is opened with the RESETLOGS option because all previous archived logs are invalid for a database opened with this option.
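Such a backup can reuse the backup script from earlier in the chapter; a minimal sketch:
RMAN> run {
2> allocate channel ch1 type disk;
3> backup database format 'db_%u_%d_%s';
4> release channel ch1;
5> }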
Performing RMAN-Based Incomplete Recovery Using UNTIL SEQUENCE
In this example, you will perform another type of incomplete recovery using RMAN. You will recover the database to a particular log sequence number. To do this, you will need to use the SET UNTIL [LOGSEQ/SEQUENCE/ SCN] clause to perform an incomplete recovery until a sequence point. In this example, you will recover to the log sequence just prior to a corrupt online redo log. When the recovery is completed and validated, you will need to register the database in the recovery catalog. Let’s walk through this example: 1. Source ORACLE_SID to tst9, which is your target database, so that the
database can be started in MOUNT or OPENED mode with SQL. oracle@octilli:~ > . oraenv ORACLE_SID = [tst9] tst9 oracle@octilli:~ >
SQL> startup mount
Oracle instance started
database mounted
Total System Global Area   75854976 bytes
Fixed Size                   279680 bytes
Variable Size              71303168 bytes
Database Buffers            4194304 bytes
Or
SQL> startup mount
ORACLE instance started.
Total System Global Area   75854976 bytes
Fixed Size                   279680 bytes
Variable Size              71303168 bytes
Database Buffers            4194304 bytes
Redo Buffers                  77824 bytes
Database mounted.
SQL> alter database open;
2. Connect to RMAN, the target database, and the recovery catalog in the rcat database.
oracle@octilli:~ > rman target / catalog rman/rman@rcat
3. Create a user TEST and the two tables, which will be used throughout
this example. Data will be added to the first table. The results of this data insertion can be seen in the SELECT statement.
oracle@octilli:~ > sqlplus /nolog
SQL*Plus: Release 9.0.1.0.0 - Production on Fri Oct 5 00:33:53 2001
(c) Copyright 2001 Oracle Corporation. All rights reserved.
SQL> connect /as sysdba
Connected.
SQL> archive log list;
Database log mode
Automatic archival
Archive destination
Oldest online log sequence
Next log sequence to archive
Current log sequence
4. Back up the database and the archived logs to disk with the appropriate RMAN script.
piece handle=/oracle/product/9.0.1/dbs/06d5pf0q_1_1 comment=NONE channel ch1: backup set complete, elapsed time: 00:00:01 Finished backup at 05-OCT-01 released channel: ch1 RMAN> 5. In this case, there is a corrupt online redo log sequence number 264,
so you will need to recover to log sequence number 263.
RMAN> run {
2> allocate channel ch1 type disk;
3> set until logseq=263 thread=1;
4> restore database;
5> recover database;
6> sql "alter database open resetlogs";
7> }
allocated channel: ch1
channel ch1: sid=11 devtype=DISK
executing command: SET until clause
Starting restore at 05-OCT-01
channel ch1: starting datafile backupset restore
channel ch1: specifying datafile(s) to restore from backup set
restoring datafile 00001 to /db01/oracle/tst9/system01.dbf
restoring datafile 00002 to /db01/oracle/tst9/rbs01.dbf
restoring datafile 00003 to /db01/oracle/tst9/temp01.dbf
restoring datafile 00004 to /db01/oracle/tst9/users01.dbf
restoring datafile 00005 to /db01/oracle/tst9/tools01.dbf
restoring datafile 00006 to /db01/oracle/tst9/data01.dbf
restoring datafile 00007 to /db01/oracle/tst9/indx01.dbf
channel ch1: restored backup piece 1
piece handle=/oracle/product/9.0.1/dbs/04d5peg1_1_1 tag=null params=NULL
channel ch1: restore complete
Finished restore at 05-OCT-01
Starting recover at 05-OCT-01
starting media recovery
archive log thread 1 sequence 262 is already on disk as file /oracle/admin/tst9/arch/archtst9_262.log
archive log filename=/oracle/admin/tst9/arch/archtst9_262.log thread=1 sequence=262
media recovery complete
Finished recover at 05-OCT-01
sql statement: alter database open resetlogs
released channel: ch1
RMAN>
6. Next, you can validate that the database was opened and the logs were
reset, thus recovering prior to the corrupt log file, sequence 264.
SQL> archive log list
Database log mode
Automatic archival
Archive destination
Oldest online log sequence
Next log sequence to archive
Current log sequence
SQL>
7. Once the database has been restored, shut it down normally. Then
perform a startup to make sure that the restore process was completed successfully. SQL> shutdown SQL> startup 8. Once the database has been validated, it should be reregistered in the
RMAN catalog with the RESET DATABASE command. This must be done every time there is an incomplete recovery and the SQL 'ALTER DATABASE OPEN RESETLOGS'; command is performed. You must be connected to the target database and the recovery catalog. oracle@octilli:~ > rman target / catalog rman/rman@rcat Recovery Manager: Release 9.0.1.0.0 - Production (c) Copyright 2001 Oracle Corporation. All rights reserved.
connected to target database: TST9 (DBID=1268700551) connected to recovery catalog database RMAN> reset database; database registered in recovery catalog starting full resync of recovery catalog full resync complete RMAN>
Summary
Incomplete recovery allows you to recover a database to before the point where the database failed, or to the last available transaction at the time of the failure. In other words, as a result of this type of recovery, the recovered database is missing transactions or is incomplete.
You might need to perform incomplete database recovery for various reasons. The type of failure that requires such a recovery might be in the database, such as a corruption error or a dropped database object. This means that recovery would need to stop short of using all the archived logs that are available to be applied. If it didn’t, the failure could be reintroduced to the database as the transactions were being read from archived redo logs. Another situation that might require an incomplete recovery is one in which your database has lost the current redo log files. Again, the reason this can only be solved with an incomplete recovery is because not all the previous transactions are available. At least one online log file is corrupted or lost. The RMAN incomplete recovery process was also demonstrated with examples that used the SET UNTIL TIME and SEQUENCE commands. Incomplete recovery is a key component for certain types of failure; as a result, to be properly prepared for the test you must understand this concept. In the workplace, you can use these concepts in routine maintenance activities like those in which you may be required to move databases from one server to another.
Exam Essentials Identify the types of user-managed incomplete recovery. The three different types of user-managed incomplete recovery are cancel-based, time-based, and change-based recovery. Make sure you understand all three types. Understand how to perform a user-managed incomplete recovery. You should know the commands that must be issued during the incomplete recovery process. These include RECOVER DATABASE UNTIL CANCEL/ TIME/CHANGE. Know how to perform an incomplete recovery that results from a loss of redo logs. To do this, you must know how to stop the recovery process prior to the missing or corrupt redo log. This is the only way you can recover a database that has this type of failure. Understand how to use the RMAN UNTIL TIME command. In incomplete RMAN-based recovery, this command is used to stop the recovery prior to complete recovery based on a time of occurrence.
Know when to use the RMAN UNTIL SEQUENCE command. This command is used in incomplete RMAN-based recovery to stop the recovery before it becomes a complete recovery based on a sequence number such as log sequence, sequence, or SCN.
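The sequence-based variants all share the same structure; this sketch shows the SCN form, with an illustrative change number:
RMAN> run {
2> set until scn=60747681;
3> restore database;
4> recover database;
5> sql 'alter database open resetlogs';
6> }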
Key Terms
Before you take the exam, be certain you are familiar with the following terms: cancel-based recovery
Review Questions 1. What are the three types of incomplete recovery? (Choose all
that apply.) A. Change-based B. Time-based C. Stop-based D. Cancel-based E. Quit-based 2. Which type of incomplete recovery can be performed in
NOARCHIVELOG mode? A. Change-based B. Time-based C. Stop-based D. Cancel-based E. None 3. You’re a DBA and you have just performed an incomplete recovery.
You neglected to perform a backup following the incomplete recovery. After a couple of hours of use, the database fails again due to the loss of a disk that stores some of your data files. What type of incomplete recovery can you perform? A. Change-based B. Stop-based C. Time-based D. Cancel-based E. None
4. What does Oracle use to uniquely identify each committed transaction
in the log files? A. Unique transaction ID B. Static transaction ID C. System change number D. Serial change number E. Transaction change number 5. What type of recovery is necessary to execute the ALTER DATABASE
OPEN RESETLOGS command? (Choose all that apply.) A. Complete recovery B. Incomplete recovery C. Cancel-based recovery D. Time-based recovery 6. What should be performed after the ALTER DATABASE OPEN
RESETLOGS command has been applied? A. A recovery of the database B. A backup of the database C. An import of the database D. Nothing 7. What type of incomplete recovery gives the DBA the most control? A. Cancel-based B. Time-based C. Change-based D. All give equal control.
8. Which type of incomplete recovery gives the DBA the least control? A. Cancel-based B. Time-based C. Change-based D. All give equal control. 9. What command is used in an RMAN incomplete recovery? A. SET SEQUENCE B. SET UNTIL TIME C. SET TIME D. SET CHANGE 10. A reset command should be performed after what action in the
database? A. The database is opened with the RESETLOGS option. B. There is a physical change to the recovery catalog. C. There is a physical change to the target database. D. The database has been completely recovered. 11. The RMAN change-based recovery has which of the following
options? (Choose all that apply.) A. LOGSEQ B. SEQLOG C. SEQUENCE D. SCN
12. If you need to perform a user-managed recovery of a database and you
stop the recovery before October 4th at 15:01:00, which command would be used? A. SET UNTIL TIME 'OCT 04 2001 15:00:00' B. RECOVER UNTIL TIME 'OCT 04 2001 15:00:00'
C. RECOVER DATABASE UNTIL TIME ‘OCT 04 2001 15:00:00’ D. RECOVER TABLESPACE UNTIL TIME ‘OCT 04 2001 15:00:00’ 13. Which of the following situations will force an incomplete recovery? A. Loss of data file B. Corrupt data file C. Loss of an online redo log that has been archived D. Corrupt current online redo log 14. Which of the following is a common command that must be performed
with all incomplete recoveries and not with complete recoveries? A. RECOVER DATABASE B. RECOVER UNTIL CANCEL C. ALTER DATABASE OPEN RESETLOGS D. RESTORE DATABASE 15. What information could be required in order for a change recovery to
be performed? (Choose all that apply.) A. SEQUENCE# B. FIRST_CHANGE# C. NEXT_CHANGE# D. Information from V$LOG_HISTORY E. All of the above
Answers to Review Questions 1. A, B, D. Options A, B, and D all describe valid types of incomplete
recovery. Options C and E are not incomplete recovery types. 2. E. Incomplete recovery cannot be performed in NOARCHIVELOG mode. 3. E. Incomplete recovery cannot be performed unless a new backup is
taken after the first failure. All backups prior to an incomplete recovery are invalidated for use with any of the existing data files, control files, or redo logs. 4. C. The system change number, or SCN, uniquely identifies each com-
mitted transaction in the log files. 5. B, C, D. All forms of incomplete recovery require the use of the
RESETLOGS clause during the ALTER DATABASE OPEN command. All redo logs must be reset to a new sequence number. This invalidates all prior logs to that database. 6. B. A backup of the database should be performed if the ALTER
DATABASE OPEN RESETLOGS command has been applied. 7. C. Change-based recovery gives the DBA the most control of the
incomplete recovery process because it allows the stopping point to be specified by the SCN number. 8. A. Cancel-based recovery applies the complete archived log before it
cancels or stops the recovery process. Therefore, you cannot recover part of the transactions within the archived log as with change-based or time-based recovery. 9. B. RMAN time-based recovery applies the changes in an archived log up
to the point in time reference specified in the SET UNTIL TIME command. 10. A. Anytime the database is opened with the RESETLOGS options, you
will need to reset the target database in the recovery catalog.
11. A, C, D. There are three RMAN change options: LOGSEQ, SEQUENCE,
and SCN. These are used with the SET UNTIL command. 12. C. The user-managed recovery of a database would be performed
with the RECOVER DATABASE UNTIL TIME ‘OCT 04 2001 15:00:00’ command. 13. D. A corrupt current online redo will force an incomplete recovery
because the transactions in that log will not be applied to the recovery process, and the database will need to be stopped before complete recovery by time, change, or cancel commands. 14. C. The ALTER DATABASE OPEN RESETLOGS command must be per-
formed with all incomplete recoveries but it does not have to be performed with complete recoveries. 15. E. All options are information from the V$LOG_HISTORY view, which
contains the log sequence number and SCN information.
ORACLE9i: DBA FUNDAMENTALS II EXAM OBJECTIVES COVERED IN THIS CHAPTER: Perform cross checking of backups and copies. Update the repository when backups have been deleted. Change the availability status of backups and copies. Make a backup or copy exempt from the retention policy. Catalog backups made with operating system commands.
Exam objectives are subject to change at any time without prior notice and at Oracle’s sole discretion. Please visit Oracle’s Certification website (http://www.oracle.com/education/ certification/) for the most current exam objectives listing.
This chapter discusses practical topics related to RMAN maintenance including recovery catalog information and information pertaining to backup media, whether disk or tape. The main emphasis of these RMAN features is on keeping the catalog synchronized with the backup media. In addition to learning about catalog synchronization, you will become familiar with maintenance commands for multiple maintenance operations. For instance, you will learn about cross checking backups with the catalog entries, updating the catalog when a backup has been deleted, changing the status of backups, and exempting a backup from the retention policy. This chapter will also demonstrate how to catalog OS file copies into the RMAN catalog. The RMAN maintenance commands discussed in this chapter are required for testing, as well as day-to-day management of the RMAN tool. It is important that you know how to perform maintenance functions on the recovery catalog, the repository, and associated media sources so that RMAN will function properly in backup and recovery situations.
Performing Cross Checking of Backups and Copies
Cross checking backups and copies is a process that compares the recovery catalog information with the information contained on the actual media that contains the backup, such as tape or a file on disk. When cross checking is performed, there are cases in which the actual tape containing the backup is removed from the tape library and shipped off site. You can use the
CROSSCHECK command to check for this. When you use this command, notice that all of the backups are listed as available. In the example illustrated here, we are using the disk media that contains the files, but tape media could also contain these files. In this example, the CROSSCHECK command is being used on the TST9 database. Notice the backup piece, stamp=4422842, is available here. Also note the special ALLOCATE CHANNEL FOR MAINTENANCE TYPE DISK command, which must be issued before a maintenance activity such as CROSSCHECK can be performed.
RMAN> allocate channel for maintenance type disk;
allocated channel: ORA_MAINT_DISK_1
channel ORA_MAINT_DISK_1: sid=12 devtype=DISK
RMAN> crosscheck backup of database;
crosschecked backup piece: found to be 'AVAILABLE'
backup piece handle=/oracle/product/9.0.1/dbs/03d5pe6s_1_1 recid=2 stamp=4422842
crosschecked backup piece: found to be 'AVAILABLE'
backup piece handle=/oracle/product/9.0.1/dbs/04d5peg1_1_1 recid=3 stamp=4422846
crosschecked backup piece: found to be 'AVAILABLE'
backup piece handle=/oracle/product/9.0.1/dbs/05d5petc_1_1 recid=4 stamp=4422842
crosschecked backup piece: found to be 'AVAILABLE'
backup piece handle=/oracle/product/9.0.1/dbs/08d5rmrn_1_1 recid=7 stamp=4423588
crosschecked backup piece: found to be 'AVAILABLE'
backup piece handle=/oracle/product/9.0.1/dbs/0ad5rpmm_1_1 recid=9 stamp=4423618
crosschecked backup piece: found to be 'AVAILABLE'
backup piece handle=/oracle/product/9.0.1/dbs/0fd5rv49_1_1 recid=12 stamp=442363
crosschecked backup piece: found to be 'AVAILABLE'
backup piece handle=/oracle/product/9.0.1/dbs/0gd5rv49_1_1 recid=13 stamp=442364
In the next example, notice that the file that was renamed with the Unix OS command mv is now listed as EXPIRED. This means that the EXPIRED backup in the catalog is not available on the media. Let's step through this process. 1. Rename the backup set file 03d5pe6s_1_1 to 03d5pe6s_1_1.old so
that the catalog does not recognize the filename as it is stored in the catalog database.
oracle@octilli:/oracle/product/9.0.1/dbs > mv 03d5pe6s_1_1 03d5pe6s_1_1.old
oracle@octilli:/oracle/product/9.0.1/dbs >
2. Next, reexecute the CROSSCHECK command and notice that the
03d5pe6s_1_1 backup set has expired. RMAN> crosscheck backup of database; crosschecked backup piece: found to be 'EXPIRED' backup piece handle=/oracle/product/9.0.1/dbs/03d5pe6s_ 1_1 recid=2 stamp=4422842 crosschecked backup piece: found to be 'AVAILABLE' backup piece handle=/oracle/product/9.0.1/dbs/04d5peg1_ 1_1 recid=3 stamp=4422846 crosschecked backup piece: found to be 'AVAILABLE' backup piece handle=/oracle/product/9.0.1/dbs/05d5petc_ 1_1 recid=4 stamp=4422842 crosschecked backup piece: found to be 'AVAILABLE' backup piece handle=/oracle/product/9.0.1/dbs/08d5rmrn_ 1_1 recid=7 stamp=4423588 crosschecked backup piece: found to be 'AVAILABLE' backup piece handle=/oracle/product/9.0.1/dbs/0ad5rpmm_ 1_1 recid=9 stamp=4423618 crosschecked backup piece: found to be 'AVAILABLE' backup piece handle=/oracle/product/9.0.1/dbs/0fd5rv49_ 1_1 recid=12 stamp=442363 crosschecked backup piece: found to be 'AVAILABLE' backup piece handle=/oracle/product/9.0.1/dbs/0gd5rv49_ 1_1 recid=13 stamp=442364 RMAN>
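Once a cross check has marked a record EXPIRED, the stale record can be removed from the repository with the DELETE EXPIRED command, which is discussed again with the other maintenance commands in the next chapter. The following is a minimal sketch, not part of the book's session, assuming the same TST9 connection and an allocated maintenance channel; RMAN lists the expired pieces and prompts for confirmation before removing their records:

RMAN> delete expired backup;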
Below is an example of how you would cross check a copy within the target database TST9. This works much like the backup set cross check that we just walked through with the CROSSCHECK BACKUP OF DATABASE command; in this case, you use the CROSSCHECK COPY OF DATABASE command.

RMAN> crosscheck copy of database;
validation succeeded for datafile copy
datafile copy filename=/staging/cold/tst9/data01.dbf recid=15 stamp=442429505

RMAN>
Updating the Repository When Backups Have Been Deleted
RMAN can mark a backup in the repository, which is either the recovery catalog or the control file, with a status of deleted. Remember that if you do not use the recovery catalog, the information about backups is stored in the target database's control file. Backups that have been marked in this manner will not appear in the list output when the recovery catalog is queried. RMAN uses a special command, ALLOCATE CHANNEL FOR DELETE TYPE DISK, to perform this maintenance activity. We will be using the CHANGE command and the DELETE command in these examples. This example shows the process of deleting part of a backup set and a control file:

1. Designate a channel to work with the storage medium, a tape in this example.

RMAN> allocate channel for delete type tape;
allocated channel: ORA_MAINT_TAPE_1
channel ORA_MAINT_TAPE_1: sid=11 devtype=TAPE
2. Mark the backup set and a control file as deleted.
RMAN> change backupset 261 delete;
List of Backup Pieces
BP Key  BS Key  Pc# Cp# Status      Device Type Piece Name
------- ------- --- --- ----------- ----------- ----------
262     261     1   1   AVAILABLE   TAPE        /oracle/product/9.0.1/dbs/02d5p1

Do you really want to delete the above objects (enter YES or NO)? YES
deleted backup piece
backup piece handle=/oracle/product/9.0.1/dbs/02d5p88p_1_1 recid=1 stamp=4422789

3. When this has been done, you must release the channel so that it can be used by other jobs within RMAN.

RMAN> release channel;
released channel: ORA_MAINT_TAPE_1

RMAN>
Change the Availability Status of Backups and Copies
RMAN can mark a backup or copy as unavailable or available. This capability is used primarily to flag backups or copies that have been moved offsite or brought back onsite. The following example shows how a backup set can be made unavailable and then made available again. To do this, we will use the CHANGE command with the UNAVAILABLE and AVAILABLE keywords, which mark a backup set or copy as unavailable or available in the recovery
catalog. As in the previous examples, you must be connected to the target database and the recovery catalog before you can execute these commands.

1. Execute the CHANGE command to make a backup set unavailable.

RMAN> change backupset 283 unavailable;
changed backup piece unavailable
backup piece handle=/oracle/product/9.0.1/dbs/03d5pe6s_1_1 recid=2 stamp=4422842

2. Execute the CHANGE command to make the backup set available again.

RMAN> change backupset 283 available;
allocated channel: ORA_DISK_1
channel ORA_DISK_1: sid=11 devtype=DISK
changed backup piece available
backup piece handle=/oracle/product/9.0.1/dbs/03d5pe6s_1_1 recid=2 stamp=4422842

RMAN>

The next example shows how to make a copied file that has been cataloged unavailable. This is done in the same manner as for a backup set, but with the CHANGE DATAFILECOPY command.

1. First, connect to RMAN and list all the copies in the database.
Notice that column S (the status column) has a value of A for available.

oracle@octilli:~ > rman target / catalog rman/rman@rcat

Recovery Manager: Release 9.0.1.0.0 - Production

(c) Copyright 2001 Oracle Corporation. All rights reserved.

connected to target database: TST9 (DBID=1268700551)
connected to recovery catalog database

RMAN> list copy of database;

List of Datafile Copies
Key     File S Completion CkpSCN Ckp Time  Name
------- ---- - ---------- ------ --------- -----------------------------
502     6    A 06-OCT-01  67540  05-OCT-01 /staging/cold/tst9/data01.dbf

2. Next, change the data file with the CHANGE DATAFILECOPY command
so that it becomes unavailable.

RMAN> change datafilecopy '/staging/cold/tst9/data01.dbf' unavailable;
changed datafile copy unavailable
datafile copy filename=/staging/cold/tst9/data01.dbf recid=15 stamp=442429505

3. Then use the LIST command to show that the copy is unavailable. This status (U for unavailable) can be seen in the S column.

RMAN> list copy of database;

List of Datafile Copies
Key     File S Completion CkpSCN Ckp Time  Name
------- ---- - ---------- ------ --------- -----------------------------
502     6    U 06-OCT-01  67540  05-OCT-01 /staging/cold/tst9/data01.dbf

RMAN>

4. Now you can make the file available again by using the following command.

RMAN> change datafilecopy '/staging/cold/tst9/data01.dbf' available;
changed datafile copy available
datafile copy filename=/staging/cold/tst9/data01.dbf recid=15 stamp=442429505
5. Now verify that the file is available. Again, notice that the S column has a value of A.

RMAN> list copy of database;

List of Datafile Copies
Key     File S Completion CkpSCN Ckp Time  Name
------- ---- - ---------- ------ --------- -----------------------------
502     6    A 06-OCT-01  67540  05-OCT-01 /staging/cold/tst9/data01.dbf
RMAN>
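As a practical aside, the backup set key that CHANGE operates on (283 in the example above) can be looked up first. Here is a minimal sketch, not taken from the book's session, assuming the same target and catalog connection; it uses the LIST BACKUP SUMMARY command, one of the new Oracle9i list commands covered in the next chapter:

RMAN> list backup summary;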
Make a Backup or Copy Exempt from the Retention Policy
To exempt a backup or copy from the retention policy, you must use certain RMAN commands. These commands work with the CONFIGURATION parameters that can be seen in the examples below. CONFIGURATION parameters are similar to OS environment variables or settings, but they hold RMAN settings, and they are used by all RMAN connections by default.

1. First, you must know what the RMAN configuration parameters are
set to. You can do this by executing the SHOW ALL command.

RMAN> show all;
RMAN configuration parameters are:
CONFIGURE RETENTION POLICY TO REDUNDANCY 1; # default
CONFIGURE BACKUP OPTIMIZATION OFF; # default
CONFIGURE DEFAULT DEVICE TYPE TO DISK; # default
CONFIGURE CONTROLFILE AUTOBACKUP OFF; # default
CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '%F'; # default
CONFIGURE DEVICE TYPE DISK PARALLELISM 1; # default
CONFIGURE DATAFILE BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
CONFIGURE ARCHIVELOG BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
CONFIGURE MAXSETSIZE TO UNLIMITED; # default
CONFIGURE SNAPSHOT CONTROLFILE NAME TO '/oracle/product/9.0.1/dbs/snapcf_tst9.ft

RMAN>

2. Next, you should set the retention policy to a number of days. We will
arbitrarily set the retention to 10 days for this example. In real life, this value would be agreed upon by the IT management. As a result of this setting, the recovery catalog keeps backups for only 10 days. To set the retention policy, you use the CONFIGURE RETENTION POLICY TO RECOVERY WINDOW OF n DAYS configuration setting.

RMAN> CONFIGURE RETENTION POLICY TO RECOVERY WINDOW OF 10 DAYS;
new RMAN configuration parameters:
CONFIGURE RETENTION POLICY TO RECOVERY WINDOW OF 10 DAYS;
new RMAN configuration parameters are successfully stored
starting full resync of recovery catalog
full resync complete

3. Next, you should create a backup using the KEEP command. The KEEP command retains a backup for a defined time period or forever. In this example, we will use a set date of 01-DEC-02. This means that the backup of the database and logs will not expire until this date is reached, even if the retention policy is set for only 10 days.

RMAN> run {
2> allocate channel c1 type disk;
3> backup database keep until time '01-DEC-02' logs;
4> backup (archivelog all);
5> release channel c1;
6> }
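Should such a backup later need to fall back under the retention policy, the exemption can be reversed with the NOKEEP keyword, which appears again in the next chapter's maintenance command list. The following is a minimal sketch, not from the book's session, assuming the backup above were identified as backup set 290 (an illustrative key):

RMAN> change backupset 290 nokeep;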
Multiple Backup Types on Your Tapes

Your tape backup device could be supporting multiple backups, including both RMAN-based and normal filesystem backups. Most automated tape libraries (ATLs) and the software associated with them support filesystem backups as well as RMAN backups. Because digital linear tapes (DLTs) support large storage volumes, from 40 to 200 gigabytes per tape, filesystem backups and RMAN backups could be interspersed on a single tape.
You should be cognizant of this possibility. First of all, make sure that the tape cycle your organization uses retains each kind of backup, filesystem and RMAN, for as long as it is needed. For example, filesystem backups may be needed for only one week, until the next complete filesystem backup is taken on the weekend, but some RMAN backups may be needed for up to a month in order to support business requirements. In this situation, you should retain all the tapes for up to a month.
Catalog Backups Made with Operating System Commands
RMAN can catalog OS-based backups, which means storing information about them in the recovery catalog. In a traditional hot backup, you would use the ALTER TABLESPACE BEGIN BACKUP command and then perform a cp command in Unix to copy the data file to another place on disk. The location of this new copy can then be cataloged in RMAN. Below is an example of how the data01.dbf data file is cataloged. This can be done for an entire database, similar to how it is done for a user-managed backup.

1. Make a copy of the data01.dbf data file to a backup location.
oracle@octilli:~ > cp /db01/ORACLE/tst9/data01.dbf /staging/data01.dbf

2. Connect to the target database and recovery catalog.

oracle@octilli:~ > rman target / catalog rman/rman@rcat

3. Store the data file in the recovery catalog by executing the CATALOG DATAFILECOPY command.

RMAN> catalog datafilecopy '/staging/data01.dbf';
cataloged datafile copy
datafile copy filename=/staging/data01.dbf

Now that you have cataloged the file in the RMAN repository, you can query the repository with a variation of the LIST command. This LIST command displays all the cataloged copy information for the target database.

RMAN> list copy of database;
List of Datafile Copies
Key     File S Completion CkpSCN Ckp Time  Name
------- ---- - ---------- ------ --------- -----------------------------
502     6    A 06-OCT-01  67540  05-OCT-01 /staging/cold/tst9/data01.dbf
RMAN>
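The reverse operation is possible as well: if the OS copy is later removed from disk, its record can be taken out of the repository with the UNCATALOG keyword. Here is a minimal sketch, not from the book's session, assuming the copy cataloged above:

RMAN> change datafilecopy '/staging/data01.dbf' uncatalog;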
Summary
In this chapter, we have demonstrated multiple RMAN maintenance commands and their uses. Essentially, they are used to keep your tape or disk backups synchronized with your catalog. We also demonstrated how to perform RMAN maintenance operations that cross check catalog entries against the actual backup media, update the catalog for a deleted backup, change the status of backups and copies, and exempt a backup from the retention policy. In addition, we noted special channel commands for maintenance operations and cataloged non-RMAN files, such as database data files, within the RMAN catalog. RMAN maintenance commands are required for testing and are essential when you are managing the RMAN environment in the workplace. This chapter is full of commands that are used to perform such maintenance activities. Make sure that you understand their syntax and how to use them. When you do, you should be comfortable with the problems that present themselves on the test and in the workplace.
Exam Essentials

Know how to cross check backups and copies. Cross checking compares the recovery catalog with the actual media (tape or disk) that contains the backups. A backup piece marked AVAILABLE was matched between the catalog and the media; one marked EXPIRED was not.

Update the repository when backups have been deleted. The commands that you must issue to update the repository when backups have been deleted include the CHANGE BACKUPSET n DELETE command, among others. You should also understand that the repository of backup information is the recovery catalog database if one is being used; if it isn't, the repository is the target database's control file.

Be able to change the availability status of backups. The commands that you must issue to change the availability status of backups are CHANGE BACKUPSET n UNAVAILABLE and CHANGE BACKUPSET n AVAILABLE.

Be able to change the availability status of copies. The commands that you must issue to change the availability status of copies are CHANGE DATAFILECOPY n UNAVAILABLE and CHANGE DATAFILECOPY n AVAILABLE.

Make a backup or copy exempt from the retention policy. To make a backup or copy exempt, you will need to understand how commands such as BACKUP DATABASE KEEP UNTIL TIME 'date time stamp' LOGS can be used to override configuration settings such as CONFIGURE RETENTION POLICY TO RECOVERY WINDOW OF n DAYS.

Catalog backups made with operating system commands. You should be able to catalog OS files into the repository by using the CATALOG DATAFILECOPY '/location/filename' command.
Key Terms
Before you take the exam, be certain you are familiar with the following terms: AVAILABLE
Review Questions

1. The RMAN CROSSCHECK command requires the use of which of the following?
A. Recovery catalog
B. RMAN repository only
C. Standard channel allocation
D. Allocated channel for restore

2. What type of channel gets allocated for the CROSSCHECK command?
A. c1
B. t1
C. MAINTENANCE
D. Only sbt_tape

3. The CROSSCHECK command can only be performed on what type of backups? (Choose all that apply.)
A. Backup pieces only
B. Backup sets
C. Copies
D. Backup sets only

4. Which command is used to remove backup sets and copies from the recovery catalog?
A. REMOVE
B. UNAVAILABLE
C. DELETE
D. MARK UNUSABLE

5. What channel type is required for using the DELETE command?
A. DELETE
B. c1
C. MAINTENANCE
D. t1

6. The CHANGE AVAILABILITY command can be used on what type of backups? (Choose all that apply.)
A. Copies
B. User-managed backups
C. Backup sets
D. Cataloged backups

7. What is the status of an unavailable backup?
A. Unavailable
B. Deleted
C. U
D. UA

8. The RMAN retention policy is known as what?
A. Command
B. Report
C. List
D. CONFIGURATION parameter

9. The retention policy can be bypassed by what command?
A. EXTEND
B. KEEP
C. EXEMPT
D. UNLIMITED

10. What is the term for recording a non-RMAN backup in the RMAN repository?
A. OS file copies
B. Backup sets
C. Backup pieces
D. Catalog backups

11. When a backup set is made unavailable, what commands must be used?
A. CHANGE and UNAVAILABLE
B. SET and UNAVAILABLE
C. MAKE and UNAVAILABLE
D. CHANGE and REMOVE

12. In order to delete a backup from disk, you must perform which of the following commands?
A. ALLOCATE CHANNEL FOR REMOVE TYPE DISK
B. ALLOCATE CHANNEL FOR DELETE TYPE DISK
C. ALLOCATE CHANNEL FOR UNAVAILABLE TYPE DISK
D. ALLOCATE CHANNEL TO DELETE TYPE DISK

13. Which of the following commands will set the retention policy for 7 days within RMAN?
A. CONFIGURE RETENTION POLICY TO RECOVERY PERIOD OF 7 DAYS;
B. CONFIGURE RETENTION POLICY TO RECOVERY WINDOW OF 7 DAYS;
C. CONFIGURE RETENTION POLICY TO RECOVERY TIME OF 7 DAYS;
D. CONFIGURE RETENTION POLICY TO RECOVERY TIMEFRAME OF 7 DAYS;

14. Which of the following commands will display the retention policy?
A. show retention plan
B. display all
C. show all
D. display retention plan

15. Which of the following commands will allow you to identify if the availability of a data file copy has changed?
A. LIST COPY OF DATABASE AVAILABILITY
B. LIST COPY OF DATABASE
C. LIST COPY OF AVAILABILITY
D. LIST BACKUP OF AVAILABILITY
Answers to Review Questions

1. A. The CROSSCHECK command requires the use of a recovery catalog. The backup information recorded in the recovery catalog is compared with the tape or disk media where the actual backups are stored.

2. C. The channel that gets allocated for the CROSSCHECK command is a special channel for maintenance only. c1 and t1 are simply names of channel devices.

3. B, C. The CROSSCHECK command can be performed on both backup sets and copies.

4. C. The DELETE command removes backup sets and copies from the catalog.

5. A. The allocate channel for delete is necessary when using the DELETE command. c1 and t1 are names of channel devices.

6. A, C, D. The CHANGE AVAILABILITY command can be used on backup sets, copies, and cataloged backups that are file copies.

7. C. The status of an unavailable backup is seen in the LIST command output as U.

8. D. The retention policy is a parameter known as a CONFIGURATION parameter. There are many other configuration parameters.

9. B. The RMAN KEEP command causes backups to bypass or outlast a retention policy.

10. D. A catalog backup is the record of a file that was backed up with an OS or user-managed backup.

11. A. The commands to make a backup set unavailable are CHANGE and UNAVAILABLE, as in CHANGE BACKUPSET n UNAVAILABLE.

12. B. The ALLOCATE CHANNEL FOR DELETE TYPE DISK command will allocate what is necessary before you can delete a backup within RMAN.

13. B. The correct command is CONFIGURE RETENTION POLICY TO RECOVERY WINDOW OF 7 DAYS. The other options are invalid syntax for the configuration parameters.

14. C. The show all command will display all configuration parameters in RMAN. One of these configuration parameters is the retention policy.

15. B. The LIST COPY OF DATABASE command will show a status of A for an available copy and U for an unavailable copy in the S column.
Recovery Catalog Creation and Maintenance

ORACLE9i: DBA FUNDAMENTALS II EXAM OBJECTIVES COVERED IN THIS CHAPTER:

Describe the contents of the recovery catalog.
Create the recovery catalog.
Maintain the recovery catalog by using RMAN commands.
Use RMAN to register, resynchronize, and reset a database.
Query the recovery catalog to generate reports and lists.
Create, store, and run scripts.
Describe methods for backing up and recovering the recovery catalog.
Exam objectives are subject to change at any time without prior notice and at Oracle’s sole discretion. Please visit Oracle’s Certification website (http://www.oracle.com/education/ certification/) for the most current exam objectives listing.
This chapter discusses both practical and conceptual topics related to RMAN and the recovery catalog. You will learn the reasons and considerations for using the recovery catalog, and you will also explore the components that make up this catalog. This chapter focuses on the practical aspects of the recovery catalog, such as installation and configuration. You will use the RMAN commands introduced in this chapter to manage and maintain the recovery catalog, and you will learn how to create the reports and lists that provide information about the backup process. You will also work with scripts to perform assorted backup tasks and activities, similar to what you have already done with the OS hot and cold backup scripts; however, the scripts in this chapter are executed within the RMAN utility. The topics covered in this chapter will all be required in both testing and real-world situations. The recovery catalog is a key component that allows for complete functionality of RMAN. Understanding how this information can be stored and utilized is key to performing well on the test and getting the most out of RMAN in the workplace.
Your decision to use the recovery catalog is one of the most significant decisions you make when you are using RMAN. The recovery catalog provides many more backup and recovery functions than using the target database’s control file as the RMAN repository. For this reason, when you are using RMAN, Oracle recommends that you use the recovery catalog whenever possible. The main considerations regarding the use of the recovery catalog are as follows:
Some functionality is not supported unless the recovery catalog exists.
You should create a separate catalog database.
You must administer the catalog database like any other database in areas such as data growth and stored database objects such as scripts.
You must back up the catalog database.
You must determine whether you will keep each target database in a separate recovery catalog within a database.
Oracle recommends that you use the recovery catalog unless the process of maintaining and creating the catalog database requires too many resources for a site. (Any site that has an experienced and qualified DBA and system administrator should be able to maintain and create the catalog database.) If the database is small and not critical, however, using RMAN without the recovery catalog is acceptable, and this mode of operation has been improved since previous Oracle versions.
You should store the recovery catalog on a separate server and filesystem from the one on which you store the target databases it is responsible for backing up. This prevents failures on the target database server and filesystem from affecting the recovery catalog for backup and recovery purposes.
Figure 13.1 shows the recovery catalog’s association with the whole RMAN backup process.
FIGURE 13.1: RMAN utility interacting with the recovery catalog. The RMAN utility opens server sessions against the target database, which is backed up to disk storage and tape storage. The recovery catalog database stores information about backups on disk and tape storage devices, as well as information about the target database.
The Components of the Recovery Catalog

The main components of the RMAN recovery catalog support the logging of backup and recovery information in the catalog. This information is stored within tables, views, and other database objects within an Oracle database. Backups are compressed for optimal storage. Here is a list of the components contained in a recovery catalog:
Backup and recovery information that is logged for long-term use from the target databases
RMAN scripts that can be stored and reused
Backup information about data files and archived logs
Information about the physical makeup, or schema, of the target database
As noted earlier, the recovery catalog is an optional feature of RMAN; RMAN can be run without it. The catalog is similar to the standard database catalog, in that it stores information about the recovery process just as the database catalog stores information about the database. The recovery catalog must be stored in its own database, preferably on a server other than the server where the target database resides. To enable the catalog, an account with CONNECT, RESOURCE, and RECOVERY_CATALOG_OWNER privileges must be created to hold the catalog tables. After this is done, the catalog creation command must be executed in the RMAN utility while connected as the catalog owner, RMAN. Let's walk through the steps to create the recovery catalog. This example assumes that you have already built a database called rcat in which you plan to store the recovery catalog.

1. First, you must select the database where the recovery catalog will
reside. This is not the target database. In this case, the RMAN database is called rcat. Here, you will be using the oraenv shell script (provided by Oracle) to switch to other databases on the same server.

oracle@octilli:/oracle/product/9.0.1/bin > . oraenv
ORACLE_SID = [rcat] ?
oracle@octilli:/oracle/product/9.0.1/bin >

2. After you have completed the preceding step, you will need to create the
user that will own the catalog. To do this, use the name RMAN with the password RMAN. Make DATA the default tablespace and TEMP the temporary tablespace.

oracle@octilli:~ > sqlplus /nolog

SQL*Plus: Release 9.0.1.0.0 - Production on Sat Sep 29 14:21:01 2001

(c) Copyright 2001 Oracle Corporation. All rights reserved.

SQL> connect /as sysdba
Connected.
SQL> create user rman identified by rman
  2> default tablespace data
  3> temporary tablespace temp;
Statement processed.

3. Grant the appropriate permissions to the RMAN user.
SQL> grant connect, resource, recovery_catalog_owner to rman;
Statement processed.
SQL>

4. Then launch the RMAN tool.

oracle@octilli:~ > rman
5. Connect to the catalog with the user called RMAN that you created
in step 2.

RMAN> connect catalog rman/rman
connected to recovery catalog database
recovery catalog is not installed

6. Finally, you can create the catalog by executing the following command and specifying the tablespace in which you want to store the catalog.

RMAN> create catalog tablespace data;
recovery catalog created

RMAN>
Oracle recommends the following space requirements for RMAN in tablespaces for one-year growth in the recovery catalog database: system tablespace, 90MB; rollback tablespace, 5MB; temp tablespace, 5MB; and the recovery catalog tablespace, 15MB.
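As a rough illustration of that sizing guidance, the catalog tablespace can be created ahead of time in the rcat database before the catalog itself is created. This is a minimal sketch under assumed names; the data file path is illustrative, and the DATA tablespace matches the one used in step 2 above:

SQL> create tablespace data
  2  datafile '/db01/oracle/rcat/data01.dbf' size 15M;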
Use RMAN Commands to Maintain the Recovery Catalog
There are multiple RMAN commands that will help you maintain the recovery catalog:

CROSSCHECK  Identifies the differences between the catalog and the actual files on the media, either disk or tape.

DELETE  Removes any file that the LIST or CROSSCHECK command can operate on.

DELETE EXPIRED  Removes records of expired backups from the recovery catalog.

DELETE INPUT  Used in conjunction with the BACKUP command in situations in which you have multiple archive destinations. In that case, one archived log is backed up and the input copy is deleted.

CHANGE/KEEP/NOKEEP  Alter the length of time you will keep a backup so that you can bypass a retention policy for a special backup.
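Note that DELETE INPUT appears as a clause on a BACKUP command rather than standing alone. A minimal hedged sketch, assuming a target connection, an allocated channel, and archived logs on disk:

RMAN> backup archivelog all delete input;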
Using RMAN to Register, Resynchronize, and Reset the Database
There are three other RMAN commands that perform initial setup and configuration operations on the recovery catalog. Essentially, these commands
are less frequently used maintenance commands. These commands fall into these categories:
Registering and unregistering the target database
Resetting the recovery catalog
Resynchronizing the recovery catalog
You should be familiar with these setup and configuration categories and the associated commands.
Registering and Unregistering the Target Database

Registering the target database is required so that RMAN can store information about the target database in the recovery catalog. This is the information that RMAN uses to properly back up the database. Here is an example of switching to the ORC9 instance and registering it as a target database:

oracle@octilli:~ > . oraenv
ORACLE_SID = [tst9] ? orc9
oracle@octilli:~ > rman target / catalog rman/rman@rcat

Recovery Manager: Release 9.0.1.0.0 - Production

(c) Copyright 2001 Oracle Corporation. All rights reserved.

connected to target database: ORC9 (DBID=3960695)
connected to recovery catalog database

RMAN> register database;
database registered in recovery catalog
starting full resync of recovery catalog
full resync complete

RMAN>
By unregistering the target database, you remove the information necessary to back up the database. This task is not performed in the RMAN utility; instead, it is performed by executing a stored procedure as the recovery catalog's schema owner. Here is an example of how you would unregister a target database:

1. You must get the DB_ID and DB_KEY values from the DB table that resides in the Recovery Manager catalog.

oracle@octilli:~ > sqlplus rman/rman@rcat

SQL*Plus: Release 9.0.1.0.0 - Production on Sat Oct 6 22:42:08 2001

(c) Copyright 2001 Oracle Corporation. All rights reserved.

Connected to:
Oracle9i Enterprise Edition Release 9.0.1.0.0 - Production
With the Partitioning option
JServer Release 9.0.1.0.0 - Production

SQL> select * from db;

    DB_KEY      DB_ID HIGH_CONF_RECID CURR_DBINC_KEY
---------- ---------- --------------- --------------
       504    3960695               0            505

SQL>

2. Then you must run the DBMS_RCVCAT.UNREGISTERDATABASE stored procedure with these values.

SQL> execute dbms_rcvcat.unregisterdatabase(504, 3960695);
PL/SQL procedure successfully completed.
the database.

SQL> select * from db;

no rows selected

SQL>
Resetting the Recovery Catalog

Resetting the recovery catalog enables RMAN to work with a database that has been opened with the ALTER DATABASE OPEN RESETLOGS command. When you use this command, you cause RMAN to make what is called a new incarnation of the target database. An incarnation of the target database is a new reference for the database in the recovery catalog. This incarnation is marked as the current reference for the target database, and all future backups are associated with this incarnation. Here is an example of how you would connect to the target database and recovery catalog and then reset the database:

oracle@octilli:~ > rman target / catalog rman/rman@rcat

Recovery Manager: Release 8.1.5.0.0 - Production

connected to target database: TST9 (DBID=2058500149)
connected to recovery catalog database

RMAN> reset database;
compiling command: reset
executing command: reset
Making Sure a New Incarnation Is Recognized

You have just moved a monthly refresh of the production database to the test server. This is a typical process that many organizations follow so that the developers have fresh data with which to develop new code. The database will be named "test," which is the same name as that of the previous database on the test server. After the new copy of the test database is operational, you need to make sure that the database is backed up so that none of the developers' work is lost. To do this, you initiate the RMAN backup script for the test database and find that the database cannot be found in the recovery catalog, even though the name and physical structure of the database are the same as they were before you performed the refresh. The RMAN catalog recognizes that this database named "test" is a new incarnation, uniquely identified in the recovery catalog, because it was opened with the RESETLOGS option. The database must therefore be reset with the RESET command so that it can be properly recognized as a new incarnation in the recovery catalog. At that point, the recovery catalog distinguishes the entries for the old test database from those for the newly refreshed one, so the backup information for the two databases will not be confused.
Resynchronizing the Recovery Catalog

Resynchronizing the recovery catalog enables RMAN to compare the control file of the target database to the information stored in the recovery catalog and to update the recovery catalog appropriately. This resynchronization can be full or partial. A partial resynchronization does not update the recovery catalog with any physical information, such as data files, tablespaces, and redo logs. A full resynchronization captures all the previously mentioned physical information plus the changed records. This process also occurs when a backup takes place.
Here is an example of how you would connect to the target database and recovery catalog and then resynchronize the database:

1. Make a physical change to the target database by adding a new data file to the DATA tablespace.

oracle@octilli:~ > sqlplus /nolog

SQL*Plus: Release 9.0.1.0.0 - Production on Tue Oct 9 17:52:19 2001

(c) Copyright 2001 Oracle Corporation. All rights reserved.

SQL> connect /as sysdba
Connected.
SQL> ALTER TABLESPACE DATA ADD DATAFILE
  2  '/db01/oracle/tst9/data02.dbf' SIZE 20M;

2. Next, connect to RMAN and resynchronize the catalog to reflect this change.

oracle@octilli:~ > rman target / catalog rman/rman@rcat

Recovery Manager: Release 8.1.5.0.0 - Production

connected to target database: TST9 (DBID=2058500149)
connected to recovery catalog database

RMAN> resync catalog;
starting full resync of recovery catalog
full resync complete

RMAN>

3. Now run a report to view the changes to the TST9 database. Notice the
new data file data02.dbf, which is the eighth data file in the report.
RMAN> report schema;

Report of database schema

File K-bytes    Tablespace  RB segs Datafile Name
---- ---------- ----------- ------- -------------------
1    204800     SYSTEM      YES     /db01/oracle/tst9/system01.dbf
2    40960      RBS         YES     /db01/oracle/tst9/rbs01.dbf
3    10240      TEMP        NO      /db01/oracle/tst9/temp01.dbf
4    10240      USERS       NO      /db01/oracle/tst9/users01.dbf
5    5120       TOOLS       NO      /db01/oracle/tst9/tools01.dbf
6    51200      DATA        NO      /db01/oracle/tst9/data01.dbf
7    10240      INDX        NO      /db01/oracle/tst9/indx01.dbf
8    20480      DATA        NO      /db01/oracle/tst9/data02.dbf
RMAN>
Generating Lists and Reports from the Recovery Catalog
RMAN has two types of commands (list and report) that you can use to access the recovery catalog so that you can see the status of what you may need to back up, copy, or restore, as well as general information about your target database. Each of these commands is performed from within the RMAN utility.
Using List Commands

List commands query the recovery catalog or control file to determine which backups or copies are available. These commands provide the most basic information from the recovery catalog. The information generated is mainly what has been done up to this point in time; from this, you can determine what is available or not available. There are some new features and capabilities that have been added to the list commands in the RMAN version that is compatible with Oracle9i. These
features improve the output by adding greater information, such as backup sets and the contents of backup sets (backup pieces and files). This new set of features is included in the LIST BACKUP command. There are also two new list commands in Oracle9i. The first is LIST BACKUP BY FILE, which shows the output of the backup sets and copies by file type; the file type listings are grouped by data file, archived log, and control file. The second new command is LIST BACKUP SUMMARY, which displays a summarized version of all RMAN backups. In each of the three examples below, you must first connect to the target database and recovery catalog before running some variation of the LIST command. The first example displays the incarnations of the database. This listing shows when the database was registered in the recovery catalog.

oracle@octilli:~ > rman target / catalog rman/rman@rcat

RMAN> list incarnation of database;
List of Database Incarnations
DB Key  Inc Key DB Name  DB ID      CUR Reset SCN  Reset Time
------- ------- -------- ---------- --- ---------- ----------
1       2       TST9     1268700551 NO  1          03-OCT-01
1       358     TST9     1268700551 YES 66901      05-OCT-01

RMAN>

The next example lists the DATA tablespace backups that have occurred in the database. This listing shows when the DATA tablespace was last backed up. Again, you must first connect to the target database and recovery catalog.

oracle@octilli:~ > rman target / catalog rman/rman@rcat

RMAN> list backup of tablespace data;
BS Key  Type LV Size       Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ ---------------
446     Full    200K       DISK        00:01:42     05-OCT-01
        BP Key: 447   Status: AVAILABLE   Tag:
        Piece Name: /oracle/product/9.0.1/dbs/0ad5rpmm_1_1
  List of Datafiles in backup set 446
  File LV Type Ckp SCN    Ckp Time  Name
  ---- -- ---- ---------- --------- ----
  6       Full 66989      05-OCT-01 /db01/oracle/tst9/data01.dbf

BS Key  Type LV Size       Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ ---------------
488     Full    104K       DISK        00:00:40     05-OCT-01
        BP Key: 490   Status: AVAILABLE   Tag:
        Piece Name: /oracle/product/9.0.1/dbs/0fd5rv49_1_1
  List of Datafiles in backup set 488
  File LV Type Ckp SCN    Ckp Time  Name
  ---- -- ---- ---------- --------- ----
  6       Full 67092      05-OCT-01 /db01/oracle/tst9/data01.dbf

RMAN>

Finally, this example lists the full database backups that have occurred in the database. This listing shows when the full database was last backed up. This command assumes that you are already connected to the target database and recovery catalog.

oracle@octilli:~ > rman target / catalog rman/rman@rcat

RMAN> list backup of database;

List of Backup Sets
===================
BS Key  Type LV Size       Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ ---------------
261     Full    115M       DISK        00:01:42     04-OCT-01
        BP Key: 262   Status: AVAILABLE   Tag:
        Piece Name: /oracle/product/9.0.1/dbs/02d5p88p_1_1
  List of Datafiles in backup set 261
  File LV Type Ckp SCN    Ckp Time  Name
  ---- -- ---- ---------- --------- ----
  1       Full 66671      04-OCT-01 /db01/oracle/tst9/system01.dbf
  2       Full 66671      04-OCT-01 /db01/oracle/tst9/rbs01.dbf
  3       Full 66671      04-OCT-01 /db01/oracle/tst9/temp01.dbf
  4       Full 66671      04-OCT-01 /db01/oracle/tst9/users01.dbf
  5       Full 66671      04-OCT-01 /db01/oracle/tst9/tools01.dbf
  6       Full 66671      04-OCT-01 /db01/oracle/tst9/data01.dbf
  7       Full 66671      04-OCT-01 /db01/oracle/tst9/indx01.dbf
Using Report Commands

Report commands provide more detailed information from the recovery catalog and are used for more sophisticated purposes than list commands are. Reports can provide information about what should be done. Some uses of reports include determining which database files need to be backed up or which database files have been recently backed up. Let's walk through some examples of report queries. The first example displays all the physical structures that make up the database. This report is used for determining every structure that should be backed up when you are performing a full database backup. Again, you must first connect to the target database and recovery catalog before running any report.

RMAN> report schema;

Report of database schema
Tablespace     RB segs Datafile Name
-------------- ------- -------------------
SYSTEM         YES     /db01/oracle/tst9/system01.dbf
RBS            YES     /db01/oracle/tst9/rbs01.dbf
TEMP           NO      /db01/oracle/tst9/temp01.dbf
USERS          NO      /db01/oracle/tst9/users01.dbf
TOOLS          NO      /db01/oracle/tst9/tools01.dbf
DATA           NO      /db01/oracle/tst9/data01.dbf
INDX           NO      /db01/oracle/tst9/indx01.dbf
RMAN>

The second example displays the backups and copies that are considered obsolete when two more recent backups of the same files exist. Again, you must first connect to the target database and recovery catalog before running any report.

oracle@octilli:~ > rman target / catalog rman/rman@rcat

RMAN> report obsolete redundancy = 2;

Report of obsolete backups and copies
Type                 Key    Completed
-------------------- ------ ---------
Backup Set           261    04-OCT-01
Backup Piece         262    04-OCT-01
Backup Set           283    05-OCT-01
Backup Piece         284    05-OCT-01
Backup Set           294    05-OCT-01
Backup Piece         295    05-OCT-01
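Reports can also answer the forward-looking question of which files lack a recent backup. Here is a minimal sketch, not taken from the book's session, assuming the same target and catalog connection; it asks for the data files that have not been backed up within the last seven days:

RMAN> report need backup days = 7 database;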
RMAN scripts can be created to execute a group of RMAN commands. These scripts can be stored within the RMAN catalog. Once they are stored in the recovery catalog, the scripts can be executed in much the same manner as a stored PL/SQL procedure. Let's walk through an example
of creating, storing, and running an RMAN script. In this example, you will back up the complete database:

1. Connect to the recovery catalog.

oracle@octilli:~ > rman catalog rman/rman@rcat

2. While you are in the RMAN utility, create a script called complete_bac. This will create, compile, and store the script in the recovery catalog.

RMAN> create script complete_bac {
2> allocate channel c1 type disk;
3> allocate channel c2 type disk;
4> backup database;
5> backup archivelog all;
6> }
created script complete_bac

RMAN>

3. Once the scripts are created and stored within the recovery catalog, they can be rerun as needed. This ensures that the same script and functionality is reproduced for later jobs. Figure 13.2 shows how to create and store scripts with the recovery catalog.
FIGURE 13.2: Create and store RMAN scripts in the recovery catalog (the figure shows a session on an HP-UX host connecting with rman catalog rman/rman@redo and issuing create script complete_bac).
Running the stored script produces output like the following. Only the command and the tail of its output are shown here; the run ends with an error stack simply because there were no logs that needed archiving at the time:

RMAN> run { execute script complete_bac; }
...
piece handle=/oracle/product/9.0.1/dbs/0fd5rv49_1_1 comment=NONE
channel c1: backup set complete, elapsed time: 00:00:48
channel c2: finished piece 1 at 05-OCT-01
piece handle=/oracle/product/9.0.1/dbs/0gd5rv49_1_1 comment=NONE
channel c2: backup set complete, elapsed time: 00:01:44
Finished backup at 05-OCT-01
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-00579: the following error occurred at 10/05/2001 23:46:58
RMAN-03015: error occurred in stored script complete_bac
RMAN-03006: non-retryable error occurred during execution of command: sql
RMAN-12004: unhandled exception during command execution on channel default
RMAN-20000: abnormal termination of job step
RMAN-11003: failure during parse/execution of SQL statement: ALTER SYSTEM ARCHIL
RMAN-11001: Oracle Error: ORA-00271: there are no logs that need archiving

RMAN>

Figure 13.3 shows how stored scripts can be run in RMAN.
Methods for Backing Up and Recovering the Recovery Catalog
There are a few methods you can use to back up and recover the catalog. Before we address these methods, you need to know one steadfast rule: the catalog database needs to be on a different server than the one on which the target databases are stored. This prevents the catalog database from being affected by a server-wide failure on a target database server. If necessary, the catalog database can be backed up by another RMAN catalog or by a user-managed backup. When this kind of backup is conducted, the database can be in ARCHIVELOG mode or NOARCHIVELOG mode depending on the activity. How you use the catalog determines how complicated you need to make the backup process. For example, let's say the environment you support executes backups only in the evening. These backups are all completed by 6:00 A.M. the following day. In this case, the RMAN catalog backups could go on
every day at some time after the backups are complete. This type of environment could probably operate safely in NOARCHIVELOG mode if desired. Let's look at a more demanding schedule that would require ARCHIVELOG mode. In this example, backups continue around the clock, so there would be no time to perform a cold backup. This means that the database has to operate in ARCHIVELOG mode. In addition, the catalog backup should occur when the catalog is being used the least in order to avoid contention, just as you would do with your other backups. Additionally, you can perform an export of the RMAN schema. This is a good way to supplement the physical backup performed by RMAN or a user-managed backup. The following syntax is an example of how to back up the RMAN schema with an export in which RMAN is the schema owner and the database name is rcat.

exp rman/rman@rcat file=cat.dmp owner=rman

In conclusion, a good general approach is to back up regularly using physical backups, either RMAN or user-managed. These backups should be daily, and the database should be in ARCHIVELOG mode to assure complete recovery. For added security, you can add exports of the RMAN schema. This way, you have a fallback plan in case you have a problem with your physical backup.
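Should the catalog schema ever need to be rebuilt from that export, the Import utility (covered in the next chapter) can reload it. The following is a minimal sketch under the same assumptions, reusing the cat.dmp file and rman schema from the export above:

imp rman/rman@rcat file=cat.dmp fromuser=rman touser=rman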
Summary
This chapter discussed the considerations you need to take into account when you are using the recovery catalog, as well as its major components. You saw many practical examples of how to perform certain tasks in RMAN and the recovery catalog. One such example demonstrated how to create the recovery catalog in a database. In addition to these examples, this chapter demonstrated various commands and methods you need to use to manage the recovery catalog. Some of these commands are included in the lists and reports groups and are used to query information in the recovery catalog. These commands also help validate backup status and schedule backups. You also learned how to use the scripting capabilities within RMAN to group commands, store them within the catalog, and run stored scripts. These scripts can reduce the number of errors that can occur in scripts or programs not centrally stored in the RMAN schema.
Understanding the recovery catalog is a key to understanding all the features of RMAN. When you choose to implement the recovery catalog, all features of RMAN are available. The additional maintenance and administration involved with the recovery catalog database must also be considered. Knowing how this information can be stored and utilized in the recovery catalog is key for the test and for getting the most out of RMAN in the workplace.
Exam Essentials

Understand the considerations that you must take into account when you are using the recovery catalog. The recovery catalog considerations consist of the following: you must use a separate and distinct database for the recovery catalog, you must administer the catalog database like any other database, you must consider backup strategies for the recovery catalog, and you must use the recovery catalog to provide full functionality of RMAN commands.

Know the components that make up the recovery catalog. The components that make up the recovery catalog are as follows: storage of long-term backup and recovery information, storage and reuse of RMAN scripts, and storage of backup information about data files and archived logs as well as information about the physical makeup of the target database.

Understand how to create the recovery catalog. To create the recovery catalog, you must create an account with the following privileges: CONNECT, RESOURCE, and RECOVERY_CATALOG_OWNER. Then, with RMAN connected to the recovery catalog, the CREATE CATALOG TABLESPACE command must be executed.

Know the commands you will need to use to maintain the recovery catalog. You will need to be familiar with the following commands that are used to maintain the recovery catalog: DELETE INPUT, KEEP, NOKEEP, CHANGE, DELETE, DELETE EXPIRED, and CROSSCHECK.

Know how to register, resynchronize, and reset a database. The register, resynchronize, and reset commands perform initial setup and configuration operations on the recovery catalog. Registering the database creates the initial incarnation of the database in the recovery catalog. Resetting the recovery catalog creates a new incarnation of a database after it has been opened with
the RESETLOGS option. Resynchronizing the recovery catalog updates the recovery catalog with changed physical information from the control file of the target database.

Know the different methods you would use to generate reports and lists from the recovery catalog. Be able to use list commands to generate basic information from the recovery catalog in a formatted output, and be able to use reports to provide more complicated outputs that answer more detailed questions.

Know the new Oracle9i list commands. You should be aware of the two list commands that are new to Oracle9i: LIST BACKUP SUMMARY and LIST BACKUP BY FILE.

Know how to create, store, and execute RMAN scripts. Make sure that you are familiar with the command syntax that is necessary in order to create and store an RMAN script: CREATE SCRIPT <script_name>. You should also know that in order to execute a script, the RUN {EXECUTE SCRIPT <script_name>} command must be performed.

Understand the different methods you will need to use to back up and recover the recovery catalog. The recovery catalog can be backed up by using either user-managed or RMAN backups. An export can also be used to back up the RMAN catalog schema.
Key Terms
Before you take the exam, be certain you are familiar with the following terms: full resynchronization
Review Questions

1. The RMAN utility does not require the use of which of the following?
A. Recovery catalog
B. Server sessions
C. Allocated channel for backup
D. Allocated channel for restore

2. What are the features supported by the recovery catalog? (Choose all that apply.)
A. Backup databases, tablespaces, data files, control files, and archived logs
B. Compressed backups
C. Scripting capabilities
D. Tests that determine whether backups can be restored
E. All of the above

3. Where is the best place to store the database housing the recovery catalog?
A. On the same server but in a different filesystem than the target database being backed up by the recovery catalog
B. On the same server and in the same filesystem as the target database that is being backed up by the recovery catalog
C. On a different server than the target database
D. None of the above

4. Which privileges are required for the Recovery Manager catalog user account? (Choose all that apply.)
A. DBA privilege
B. Connect privilege
C. Resource privilege
D. RECOVERY_CATALOG_OWNER privilege

5. What command can be performed only once on a target database?
A. CHANGE AVAILABILITY OF BACKUPS
B. DELETE BACKUPS
C. REGISTER THE DATABASE
D. RESYNCHRONIZING THE DATABASE

6. Which of the following statements best describes the target database?
A. Any database designated for backup by RMAN
B. The database that stores the recovery catalog
C. A database not targeted to be backed up by RMAN
D. A special repository database for the RMAN utility

7. What type of backups can be stored in the recovery catalog of RMAN? (Choose all that apply.)
A. Non-RMAN backups based on OS commands
B. Full database backups
C. Tablespace backups
D. Control file backups
E. All of the above

8. Which of the following are instruments used to get information from the recovery catalog? (Choose all that apply.)
A. REPORT command
B. A query in SQL*Plus
C. LIST command
D. RETRIEVAL command

9. What must you do prior to running the REPORT or LIST commands?
A. Determine the log file.
B. Spool the output.
C. Connect to the target.
D. Connect to the target and recovery catalog.

10. What is the main difference between reports and lists?
A. Lists have more output than reports.
B. Reports have more output than lists.
C. Reports provide more detailed information than lists.
D. Lists provide more detailed information than reports.

11. What command stores scripts in the recovery catalog?
A. CREATE SCRIPT <SCRIPT_NAME>
B. STORE SCRIPT <SCRIPT_NAME>
C. CREATE OR REPLACE <SCRIPT_NAME>
D. Scripts cannot be stored in the recovery catalog.

12. What are the new Oracle9i list commands? (Choose all that apply.)
A. LIST BACKUP BY FILE
B. LIST BACKUP SET
C. LIST BACKUP SUMMARY FILE
D. LIST BACKUP SUMMARY

13. If you add a new data file to a database, what should you do to the incarnation of that database in the recovery catalog?
A. Execute a reset.
B. Execute a resync.
C. Execute a register.
D. Execute a report.

14. The code below is intended to launch a stored RMAN script. Find the line with the incorrect syntax.
run
{
execute program complete_bac;
}
A. The first line
B. The second line
C. The third line
D. The fourth line

15. The code below is intended to create a script that can be used to copy archived logs. Find the line with the incorrect syntax.
Create script Arch_bac_up
{
Allocate channel cl type disk;
Backup archives all;}
A. The first line
B. The second line
C. The third line
D. The fourth line
Answers to Review Questions

1. A. The recovery catalog is optional. The recovery catalog is used to store information about the backup and recovery process, in much the same way as the data dictionary stores information about the database. The other options are all required elements for RMAN to function normally.

2. E. All answers are capabilities of the RMAN utility.

3. C. The recovery catalog database should be on a different server than the target database to eliminate the potential of a failure on one server affecting the backup and restore capabilities of RMAN.

4. B, C, D. The DBA privilege is not required for the recovery catalog user account. This user must be able to connect to the database, create objects within the database, and have the RECOVERY_CATALOG_OWNER privilege.

5. C. Registering the database can be performed only once for each database unless the database is unregistered.

6. A. The target database is any database that is targeted for backup by the RMAN utility.

7. E. RMAN can catalog non-RMAN backups based on OS commands as well as full database backups, tablespace backups, and control file backups.

8. A, B, C. The RMAN utility provides the REPORT and LIST commands to generate outputs from the recovery catalog. SQL*Plus can also be used to manually query the recovery catalog in certain instances.

9. D. Before running any REPORT or LIST command, you must be connected to the target database and recovery catalog.

10. C. The REPORT command provides more detailed information than the LIST command. The REPORT command is used to answer more "what if" or "what needs to be done" type questions than the LIST command.

11. A. The CREATE SCRIPT <SCRIPT_NAME> command stores the associated script in the recovery catalog. This script can then be run at a later date.

12. A, D. The LIST BACKUP BY FILE and LIST BACKUP SUMMARY commands are both new list commands in Oracle9i.

13. B. A resync command should be run if the physical components of the target database change.

14. C. The incorrect syntax is program. The correct syntax should be as follows: run { execute script complete_bac; }

15. D. The incorrect syntax is BACKUP ARCHIVES ALL;}. This should be BACKUP ARCHIVELOG ALL;}.
Transporting Data between Databases

ORACLE9i: DBA FUNDAMENTALS II EXAM OBJECTIVES COVERED IN THIS CHAPTER:

Describe the uses of the Export and Import utilities.
Describe Export and Import concepts and structures.
Perform simple Export and Import operations.
List guidelines for using Export and Import.
Exam objectives are subject to change at any time without prior notice and at Oracle’s sole discretion. Please visit Oracle’s Certification website (http://www.oracle.com/education/ certification/) for the most current exam objectives listing.
The Oracle database software provides two primary ways to back up the database. The first is a physical backup, which consists of copying files and recovering these files as needed. You have been reading about this approach in the previous four chapters. The second type of backup is a logical backup. A logical backup of the database requires reading certain database objects and writing them to a file without concern for their physical location. The file can then be inserted back into the database as a logical restore. Oracle provides two utilities to handle these logical backups and recoveries: the Export utility and the Import utility. The Export utility creates logical backups of the database, and the Import utility performs logical recoveries of the database. This chapter demonstrates how to back up and recover the database with the Export and Import utilities. It also explains incremental backups and recoveries. The topics in this chapter are an important part of the test and real-life recovery situations. The Export and Import utilities are a good supplement to a user-managed or RMAN-based backup. These utilities can be used in real-world recovery situations in which developers or end users inadvertently drop tables or delete data.
Using the Export and Import Utilities
The Export and Import utilities primarily perform logical backups and recoveries, but they may be used for such varied operations as the following:

Recovering database objects  The primary use of the Export and Import utilities for backup and recovery is to replace an individual object, such as
a table or a schema consisting of multiple database objects. Though the entire database can also be recovered this way, doing so is not common. In some cases, a table may be dropped by a user or a developer, but you may not want to restore the whole database. Though the whole database can be restored, because of the point-in-time nature of exports, the data may be inconsistent.

Recovering using tablespace point-in-time recovery (TSPITR)  In Oracle8i, tablespace point-in-time recovery (TSPITR) was introduced. TSPITR was designed to recover large individual objects using a fairly complicated recovery process. As a result, we would recommend that you get help from Oracle Support before you perform a recovery with TSPITR. To successfully perform this type of recovery, you will need to use both physical and logical recovery techniques. In addition, you will need to use a clone database to perform the complete process, and this will require extra resources. With so many drawbacks, you may wonder why anyone would use this. The reason is that TSPITR allows you to recover all the objects in an entire tablespace without recovering the entire database. This is useful in large databases, where a complete restore and recovery would take a long time. There are a couple of specialized parameters you must be familiar with in order to use TSPITR: TRANSPORT_TABLESPACE and DATAFILES. The TRANSPORT_TABLESPACE parameter identifies the tablespace that will be used in the TSPITR process. The DATAFILES parameter identifies the data files that support the tablespace to be recovered.

Reorganizing the database to improve performance  The Export and Import utilities can also be used to reorganize a database and to restore individual objects. When objects are restored in this manner, the data they contain may be packed or compressed into fewer extents so that it can be retrieved more expeditiously. This is accomplished by using the COMPRESS=Y parameter, which is the default. Below is an example of using this parameter.

oracle@octilli:~ > exp userid=test/test compress=Y file=tst9.dmp

Upgrading an object or a database  You can also use these utilities to upgrade databases or objects from one version to another. When the utilities are used in this manner, an export is taken of a table or database, and then it is imported into the database running the upgraded version.
Moving data from one database to another The Export and Import utilities can also be used to move data from one database to another. The binary dump file created by the export process can be sent from one server to another via FTP, and then it can be loaded into a new database.
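To make these steps concrete, here is a minimal sketch of moving a single table between two servers. The table name t1 and the dump file name t1.dmp follow the examples later in this chapter, while the host name remotehost and the second server's prompt are hypothetical. Note that the dump file is binary, so it must be transferred in binary mode or it will be corrupted in transit.

oracle@octilli:~ > exp userid=test/test tables=(t1) file=t1.dmp
oracle@octilli:~ > ftp remotehost
ftp> binary
ftp> put t1.dmp
ftp> bye

On the destination server, the file is then loaded with the Import utility:

oracle@octilli2:~ > imp userid=test/test file=t1.dmp tables=(t1)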
Using Export for Added Protection

Exports provide extra protection above and beyond normal backup plans, and they are often used to supplement backup protection of individual objects. Under certain circumstances, an export can also serve as a valuable asset for complete recovery. Consider this case: a junior DBA inadvertently dropped a tablespace that contained only one table of static data for a small insurance provider. Realizing his error, the junior DBA attempted to recover the database from the previous night's open RMAN backup. To his dismay, there was a problem with the tape hardware, and he was unable to get it working. The hardware issue could not begin to be addressed until the next day at the earliest. At this point, it was already late in the evening, and the DBA knew that the database would begin receiving automated caller information from the east coast early the next morning. The database therefore had to be available before the hardware issue could be addressed; if it wasn't, the company's business would be severely impacted. But the DBA remembered that there was an automated daily export of key tables in the database, including the one table in the tablespace that was dropped. This export occurred at noon, between the bulk loads. After careful thought, the DBA re-created the erroneously dropped tablespace. He then imported the one table that resided within the newly created tablespace. Finally, the DBA finished the process by reloading the few flat files that were necessary to make the table completely consistent. By using an export, the DBA completely restored the database, and only four hours of service were lost. This approach worked nicely because the data was isolated and fairly static, and the primary changes to the table were captured in the flat files.
As stated in the introduction to this chapter, the Export utility can be used to create logical backups of the Oracle database, and the Import utility can recover these logical backups. There are two types of exports: the conventional-path export and the direct-path export. The conventional-path export is the default mode of the Export utility. This method can be time consuming for large objects or exports, but it works well for everyday use. The direct-path export is enabled by specifying DIRECT=Y on the command line. This option is substantially faster than the conventional-path export because it bypasses the SQL evaluation layer as it generates the commands to be stored in the binary dump file; this is where the performance improvement comes from. Figure 14.1 displays the execution path of each type of export.

FIGURE 14.1   The differences between direct-path and conventional-path exports

(The figure contrasts the conventional path, in which exported data passes through the SQL evaluation layer, with the direct path, which bypasses that layer.)
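As a quick illustration, a direct-path export differs from a conventional one only in the DIRECT keyword on the command line; the dump file name here is hypothetical:

oracle@octilli:~ > exp userid=test/test direct=Y file=tst9_direct.dmp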
The Export utility performs a full SELECT on a table and then dumps the data into a binary file called a dump file. (This file has a .dmp file extension and is named expdat.dmp by default.) The Export utility also records the Data Definition Language (DDL) needed to re-create the exported tables and indexes. This information can then be played back by the Import utility to rebuild the objects and their underlying data. To display all the export options available, issue the command EXP -HELP from the command line.

oracle@octilli:/db01/oracle/tst9 > exp -help

Export: Release 9.0.1.0.0 - Production on Fri Oct 5 23:09:00 2001

(c) Copyright 2001 Oracle Corporation. All rights reserved.
You can let Export prompt you for parameters by entering the EXP command followed by your username/password:

     Example: EXP SCOTT/TIGER

Or, you can control how Export runs by entering the EXP command followed by various arguments. To specify parameters, you use keywords:

     Format:  EXP KEYWORD=value or KEYWORD=(value1,value2,...,valueN)
     Example: EXP SCOTT/TIGER GRANTS=Y TABLES=(EMP,DEPT,MGR)
              or TABLES=(T1:P1,T1:P2), if T1 is partitioned table

USERID must be the first parameter on the command line.
Keyword               Description (Default)
--------------------  ------------------------------------------------------
USERID                username/password
BUFFER                size of data buffer
FILE                  output files (EXPDAT.DMP)
COMPRESS              import into one extent (Y)
GRANTS                export grants (Y)
INDEXES               export indexes (Y)
ROWS                  export data rows (Y)
CONSTRAINTS           export constraints (Y)
LOG                   log file of screen output
DIRECT                direct path (N)
FEEDBACK              display progress every x rows (0)
FILESIZE              maximum size of each dump file
FLASHBACK_SCN         SCN used to set session snapshot back to
FLASHBACK_TIME        time used to get the SCN closest to the specified time
QUERY                 select clause used to export a subset of a table
RESUMABLE             suspend when a space related error is encountered (N)
RESUMABLE_NAME        text string used to identify resumable statement
RESUMABLE_TIMEOUT     wait time for RESUMABLE
TTS_FULL_CHECK        perform full or partial dependency check for TTS
VOLSIZE               number of bytes to write to each tape volume
FULL                  export entire file (N)
OWNER                 list of owner usernames
TABLES                list of table names
RECORDLENGTH          length of IO record
INCTYPE               incremental export type
RECORD                track incr. export (Y)
PARFILE               parameter filename
CONSISTENT            cross-table consistency
STATISTICS            analyze objects (ESTIMATE)
TRIGGERS              export triggers (Y)
TABLESPACES           list of tablespaces to export
TRANSPORT_TABLESPACE  export transportable tablespace metadata (N)
TEMPLATE              template name which invokes iAS mode export

Export terminated successfully without warnings.
oracle@octilli:/db01/oracle/tst9 >
There are a few changes to the export features in Oracle9i: triggers in the SYS schema are no longer exported in full export mode (to accommodate the Java virtual machine), INCREMENTAL/CUMULATIVE/COMPLETE exports are no longer supported, and creating a 7.x export file from an Oracle9i database is no longer possible. Please read your Oracle documentation for complete details on these changes.
Performing Simple Export and Import Operations
Let's look at how a DBA performs a simple export and import. We will use the interactive method of running both the Export and the Import utilities, which means that the user responds to each utility's prompts. First, we will export a table called t1; then we will import the same table. Additionally, we will demonstrate how to run an export with a PARFILE so that it doesn't require user interaction. This type of export can be run at the command line, and it can be both scripted and scheduled. The PARFILE is a parameter file that has export parameters grouped together in one file so
that each parameter does not have to be entered by the DBA on the command line. This process will be demonstrated after the export process that follows. Let’s work through each of these examples.
Performing a Simple Export

In this interactive export example, you will perform a full export of the table t1, which is owned by the user TEST. Here are the steps used to perform the export:

1. Start the Export utility by executing exp on the command line. In
Unix, you should first set your ORACLE_SID environment variable to point to the database to which you are trying to connect. This can be done by executing export ORACLE_SID=tst9 at the command line. Alternatively, you could connect via SQL*Net through a tnsnames.ora entry that explicitly defines the target database.

oracle@octilli:/db01/oracle/tst9 > exp

Export: Release 9.0.1.0.0 - Production on Fri Oct 5 23:18:08 2001

(c) Copyright 2001 Oracle Corporation. All rights reserved.
Username: test
Password:

Connected to: Oracle9i Enterprise Edition Release 9.0.1.0.0 - Production
With the Partitioning option
JServer Release 9.0.1.0.0 - Production

2. Enter the appropriate buffer size. If you want to keep the default
buffer size of 4096 bytes, press Enter. This buffer is used to fetch data. The default size is usually adequate; the maximum size is 64KB.

Enter array fetch buffer size: 4096 >
3. The next prompt asks for the filename of the dump file. You can enter
your own filename or choose the default, expdat.dmp.

Export file: expdat.dmp > t1.dmp

4. The next prompt designates users or tables in the dump file. The U(sers) choice, (2), exports all objects owned by the user you are connecting as. The T(ables) option, (3), exports only the designated tables.

(2)U(sers), or (3)T(ables): (2)U > 3

5. The next prompt asks whether to export the table data.

Export table data (yes/no): yes> y

6. The next prompt asks whether to compress extents. For example, if a
table has 20 1MB extents and you choose yes, the extents will be compressed into a single 20MB extent; if you choose no, the 20 1MB extents will remain. This compression option reduces fragmentation in the tablespace by consolidating the extents into a larger initial table extent. It can present problems, however, when the compressed size is large: if there is not enough contiguous free space in the tablespace for the single large extent, the import will fail. This can be remedied by resizing the tablespace.

Compress extents (yes/no): yes > y

7. The next prompt specifies which tables to export. In this example, you
have chosen the t1 table. When the export is complete, you are prompted for another table or partition. If you are done, press Enter.

Export done in US7ASCII character set and US7ASCII NCHAR character set

About to export specified tables via Conventional Path ...
Table(T) or Partition(T:P) to be exported: (RETURN to quit) > t1
. . exporting table                             T1        3 rows exported
Table(T) or Partition(T:P) to be exported: (RETURN to quit) >

Export terminated successfully without warnings.
oracle@octilli:/db01/oracle/tst9 >

This concludes the interactive export of the individual table, t1. This same process can be performed without prompts or user interaction by utilizing a parameter file or by specifying parameters on the command line. Specifying the PARFILE at the command line, for example, is one method of performing a non-interactive export. Here is an example of this technique that exports the full database, which includes all users and all objects. This can also be done on a table-by-table basis, as sketched after the following examples.

oracle@octilli:~ > exp parfile=tst9par.pr file=tst9.dmp
oracle@octilli:~ > cat tst9par.pr
USERID=system/manager
DIRECT=Y
FULL=Y
GRANTS=Y
ROWS=Y
CONSTRAINTS=Y
RECORD=Y

The following is an example of using command-line parameter specifications. This technique also does not require user interaction to respond to prompts.

oracle@octilli:~ > exp userid=system/manager full=y constraints=Y file=tst9.dmp
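Here is a minimal table-level sketch of the same PARFILE technique, using a hypothetical parameter file named t1par.pr to export only the t1 table from the earlier walkthrough:

oracle@octilli:~ > cat t1par.pr
USERID=test/test
TABLES=(t1)
FILE=t1.dmp
oracle@octilli:~ > exp parfile=t1par.pr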
A useful feature of the Export utility is that it can detect block corruption in the database. Block corruption occurs when a database block becomes unreadable or internally inconsistent; a corrupt block will cause an export to fail. Block corruption must be fixed before a logical backup can be completed.
Performing a Simple Import

The Import utility is used to perform a logical recovery of the Oracle database. This utility reads the dump file generated by the Export utility. As discussed earlier, the Export utility dumps the data in the table and records the DDL commands necessary to re-create the table. The Import utility then plays back these commands, re-creates the table, and inserts the data stored in the binary dump file. This section provides a step-by-step outline of how this process works. The Import utility also has numerous options. To display all the import options available, issue the command IMP -HELP from the command line.

oracle@octilli:~ > imp -help

Import: Release 9.0.1.0.0 - Production on Fri Oct 5 23:24:45 2001

(c) Copyright 2001 Oracle Corporation. All rights reserved.
You can let Import prompt you for parameters by entering the IMP command followed by your username/password:

     Example: IMP SCOTT/TIGER

Or, you can control how Import runs by entering the IMP command followed by various arguments. To specify parameters, you use keywords:

     Format:  IMP KEYWORD=value or KEYWORD=(value1,value2,...,valueN)
     Example: IMP SCOTT/TIGER IGNORE=Y TABLES=(EMP,DEPT) FULL=N
              or TABLES=(T1:P1,T1:P2), if T1 is partitioned table
USERID must be the first parameter on the command line.
Keyword                Description (Default)
---------------------  -----------------------------------------------------
USERID                 username/password
BUFFER                 size of data buffer
FILE                   input files (EXPDAT.DMP)
SHOW                   just list file contents (N)
IGNORE                 ignore create errors (N)
GRANTS                 import grants (Y)
INDEXES                import indexes (Y)
ROWS                   import data rows (Y)
LOG                    log file of screen output
DESTROY                overwrite tablespace data file (N)
INDEXFILE              write table/index info to specified file
SKIP_UNUSABLE_INDEXES  skip maintenance of unusable indexes (N)
FEEDBACK               display progress every x rows (0)
TOID_NOVALIDATE        skip validation of specified type ids
FILESIZE               maximum size of each dump file
STATISTICS             import precomputed statistics (always)
RESUMABLE              suspend when a space related error is encountered (N)
RESUMABLE_NAME         text string used to identify resumable statement
RESUMABLE_TIMEOUT      wait time for RESUMABLE
COMPILE                compile procedures, packages, and functions (Y)
VOLSIZE                number of bytes in file on each volume of a file on tape
The following keywords only apply to transportable tablespaces

TRANSPORT_TABLESPACE   import transportable tablespace metadata (N)
TABLESPACES            tablespaces to be transported into database
DATAFILES              datafiles to be transported into database
TTS_OWNERS             users that own data in the transportable tablespace set

Import terminated successfully without warnings.
oracle@octilli:~ >

The following steps indicate how to perform the recovery of a table with the Import utility. Here the Import utility is run in interactive mode.

1. To recover the database, you must validate what should be recovered.
In this case, use the t1.dmp export created in the earlier section entitled "Performing a Simple Export." Here, you will be restoring the complete t1 table, which contains four rows of data.

SQL> select * from t1;

C1    C2
----- ----------------------------------------------
1     This is a test one - before hot backup
3     This is a test three - after hot backup
2     This is a test two - after hot backup
4     This is test four - after complete export

4 rows selected.

SQL>

2. Truncate the table to simulate a complete data loss in the table.
SQL> truncate table t1;

Statement processed.

SQL>

3. From the working directory of the exported dump file, execute the IMP
command and connect as the user TEST.

oracle@octilli:~ > imp

Import: Release 9.0.1.0.0 - Production on Fri Oct 5 23:31:44 2001

(c) Copyright 2001 Oracle Corporation. All rights reserved.
Username: test
Password:

Connected to: Oracle9i Enterprise Edition Release 9.0.1.0.0 - Production
With the Partitioning option
JServer Release 9.0.1.0.0 - Production

4. The next prompt is for the filename of the dump file. You can enter the
fully qualified filename, or just the filename if you are in the working directory of the dump file.

Import file: expdat.dmp > t1.dmp

5. The next prompt is for the buffer size of the import data loads. Choose
the minimum, which is 8192 bytes (8KB).

Enter insert buffer size (minimum is 8192) 30720> 8192

6. The next prompt asks whether to list the contents of the dump file instead of actually doing the import. The default is no; choose that option for this example.

Export file created by EXPORT:V09.00.01 via conventional path
import done in US7ASCII character set and AL16UTF16 NCHAR character set

List contents of import file only (yes/no): no > n

7. The next prompt asks whether to ignore the CREATE ERROR if the object exists. For this example, choose yes.

Ignore create error due to object existence (yes/no): no > y

8. The next prompt asks whether to import the grants related to the object that is being imported. Choose yes.

Import grants (yes/no): yes > y

9. The next prompt asks whether to import the data in the table, instead of just the table definition. Choose yes.

Import table data (yes/no): yes > y

10. The next prompt asks whether to import the entire export file. In this case, choose yes because the dump file contains only the table you want to import. If there were multiple objects in the dump file, you would choose the default, which is no.

Import entire export file (yes/no): no > y

. . importing TEST's objects into TEST
. . importing table                            "T1"        4 rows imported

Import terminated successfully without warnings.
oracle@octilli:~ >

11. Next, you need to validate that the import successfully restored the table t1. Perform a query on the table to validate that there are four rows within the table.

SQL> select * from t1;

C1    C2
----- --------------------------------------------
1     This is a test one - before hot backup
2     This is a test two - after hot backup
3     This is a test three - after hot backup
4     This is test four - after complete export

4 rows selected.

SQL>
The IGNORE CREATE ERROR DUE TO OBJECT EXISTENCE option can be confusing. This option means that if an object already exists during the import, the Import utility ignores the errors raised when it tries to re-create the object. If you specify IGNORE=Y, the Import utility continues its work without reporting the error. Note that even with IGNORE=Y, the Import utility does not replace an existing object; it skips the object creation.
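The same recovery can also be run non-interactively. A minimal sketch, reusing the t1.dmp file from this example and the IGNORE and TABLES keywords from the option list above:

oracle@octilli:~ > imp userid=test/test file=t1.dmp tables=(t1) ignore=y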
The guidelines for using the Export and Import utilities require that a logical, point-in-time backup and recovery of the data is acceptable. Another way to look at this is to ask, "Can the data that is being recovered be from a point in time?" With exports, there is no way to roll forward as you can with a complete or an incomplete recovery. The data in an export is from the time the export was taken and cannot change; it is like a picture of the data at a certain point in time. That being the case, the Export and Import utilities are not complete backup and recovery solutions on their own. These utilities function more like supplements to a good physical backup strategy. Below is a list of some of the guidelines for using the Export and Import utilities:

Data should be static.  The data that is being exported should be consistent, and if at all possible it should not be in the process of being updated. If heavy DML activity is occurring during the export, the dump file will not be useful for recovery purposes.

There should be limited logical references to the tables.  The tables should have limited referential integrity constraints. Complex referential integrity presents problems for the Export utility.

The data sets should be small to medium sized (with the exception of TSPITR).  The Export and Import utilities are not the best choice for large data sets. These utilities can take long periods of time to export and import data.
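When some activity on the source schema cannot be avoided, the CONSISTENT keyword from the option list shown earlier can help keep the exported tables consistent with one another. A minimal sketch (the test schema name is carried over from earlier examples):

oracle@octilli:~ > exp userid=system/manager owner=test consistent=y file=test.dmp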
Summary
This chapter demonstrated the capabilities of the Oracle logical backup and recovery utilities: Export and Import. During the course of this chapter, you walked through the backup and recovery of a database object using these tools. You learned how to display all the options available in the Export and Import utilities, and you saw an example of the direct-path export, which is significantly faster than the standard conventional-path export.
In addition, you were introduced to the types of scenarios for which the Export and Import utilities are useful, and you became familiar with the guidelines you need to follow in order to get the best use out of them. In most environments, the Export and Import utilities do not serve as the primary backup, but rather as a supplement to a physical backup. However, they do provide extra protection and can eliminate the need for certain physical recoveries. The topics covered in this chapter will be an important part of the test and can be equally beneficial in the workplace. The Export and Import utilities are a valuable supplement to any backup and recovery plan; they can prevent the need for a complete recovery and reduce downtime when user or developer errors occur in databases.
Exam Essentials

Understand the uses of the Export and Import utilities.  The Export and Import utilities can be used to recover database objects, reorganize databases for better performance, upgrade a database, or move data from one database to another.

Understand how to perform an export.  To perform an export, you must be familiar with the commands involved. You should also know that the two ways to run an export are through user interaction and by using a PARFILE.

Understand how to perform an import.  To perform an import, you must be familiar with the commands involved. You should also know that the two ways to run an import are through user interaction and by using a PARFILE.

Identify the guidelines for utilizing the Export and Import utilities.  When you are using the Export and Import utilities, there are three major guidelines you should follow: your data should be somewhat static, you should limit logical references, and you should make sure your data sets are small to medium sized (with the exception of TSPITR).

Describe the TSPITR concept.  Tablespace point-in-time recovery (TSPITR) combines physical recovery techniques to restore individual data files with an export of metadata through a clone database. It is mainly used in large databases, where a complete recovery would be very time consuming.

Describe the parameters associated with TSPITR.  The TRANSPORT_TABLESPACE and DATAFILES parameters are unique to the use of the transportable tablespace features.
Key Terms
Before you take the exam, be certain you are familiar with the following terms: block corruption
Review Questions

1. What type of database utility performs a logical backup?

A. Export
B. Import
C. Cold backup
D. Hot backup

2. What type of database utility reads the dump file into the database?

A. Export
B. Import
C. SQL*Loader
D. Forms

3. What is the default dump file named?

A. export.dmp
B. expdat.dmp
C. expdata.dmp
D. export_data.dmp

4. Review the following export command and determine the incorrect
syntax.

exp
users=ar/trar2
conventional=false
tables=ra_customer_deductions

A. The first line is incorrect.
B. The second line is incorrect.
C. The third line is incorrect.
D. The fourth line is incorrect.
5. What types of backup are required to use TSPITR? (Choose all that apply.)

A. Logical
B. Physical
C. Import
D. Export

6. What is the name of the parameter that reads the parameter file?

A. PARAMETER FILE
B. PARFILE
C. PARAFILE
D. PAR-FILE

7. What is the other database called when you are using TSPITR?

A. Secondary database
B. Recovery database
C. Cloned database
D. Backup database

8. What is the name of the parameter that is used specifically for TSPITR?

A. DATABASE
B. TRANSPORT_TABLESPACE
C. CONTROL
D. COMPRESS

9. Which export type is the fastest?

A. Complete
B. Cumulative
C. Direct-path
D. Conventional-path

10. What is the complex recovery method that uses the Export and Import utilities and is designed for large database objects?

A. Full Export
B. Full=Y
C. TPSTIR
D. TSPITR

11. What is the export command that can be used in conjunction with database reorganization to improve performance?

A. COMPRESS_EXTENT=Y
B. COMPRESS_EXT=Y
C. COMPRESS=Y
D. COMPEXT=Y

12. Which of the following is a true statement about the Export and Import utilities?

A. They cannot be used to upgrade a database.
B. They can be used to upgrade a database.
C. Only the Export utility can be used to upgrade the database.
D. Only the Import utility can be used to upgrade the database.

13. Which of the following is a correct statement about how you would use the Export utility on a table without using any export keyword options?

A. The Export utility loads data into a database.
B. The Import utility extracts data from a database.
C. The Export utility performs a full SELECT of a table and then dumps the data into a binary file.
D. The Export utility performs a full SELECT of a table and then dumps the data into an ASCII file.

14. Which of the following export keyword options selects a subset of a table?

A. SUBSET
B. SELECT
C. WHERE
D. QUERY

15. Which of the following export keyword options performs a complete database export?

A. ALL=Y
B. COMPLETE=Y
C. TOTAL=Y
D. FULL=Y
Answers to Review Questions

1. A. The Export utility performs the logical backup of the Oracle database.

2. B. The Import utility is responsible for reading in the dump file.

3. B. The expdat.dmp file is the default dump filename.

4. C. Conventional-path export is the default. The line conventional=false is invalid syntax.

5. A, B, D. Both logical and physical database backups are performed with TSPITR, and exports are logical backups. Import is not a backup; it is a recovery utility in this context.

6. B. PARFILE is the name of the parameter that specifies the parameter file.

7. C. The cloned database is the other database that is used when you are recovering with TSPITR.

8. B. The TRANSPORT_TABLESPACE parameter is used specifically for TSPITR. It is used to identify the tablespace that needs to be recovered.

9. C. A direct-path export is the fastest. Complete and cumulative exports are data-volume dependent.

10. D. The tablespace point-in-time recovery (TSPITR) is an Export and Import recovery method that is designed for large database objects.

11. C. The COMPRESS=Y command can be used to compress extents in a table when a table or database is reorganized for performance reasons.

12. B. When used together, both the Export and Import utilities can be used to upgrade a database from one version to another.

13. C. With no keyword options, the Export utility performs a full SELECT of a table and then dumps the data into a binary file.

14. D. The QUERY keyword option specifies a SELECT clause used to export a subset of a table.

15. D. The FULL=Y keyword option performs a complete database export.
Using SQL*Loader to Load Data

ORACLE9i: DBA FUNDAMENTALS II EXAM OBJECTIVES COVERED IN THIS CHAPTER:

Demonstrate usage of direct-load insert operations.
Describe the usage of SQL*Loader.
Perform basic SQL*Loader operations.
List guidelines for using SQL*Loader and direct-load insert.
Exam objectives are subject to change at any time without prior notice and at Oracle’s sole discretion. Please visit Oracle’s Certification website (http://www.oracle.com/education/ certification/) for the most current exam objectives listing.
The Oracle SQL*Loader utility is a tool designed to load external data into the Oracle database. This utility loads data from an external file format into tables within an Oracle database. SQL*Loader provides three primary methods for loading data: a conventional load, a direct-path load, and an external-path load. Each of these load methods is designed for a different use. We will also demonstrate a Data Manipulation Language (DML) statement called direct-path insert. This is not actually part of SQL*Loader, but it is similar to the SQL*Loader direct-path load method. Finally, we will list some guidelines that explain the best ways to use SQL*Loader; we will outline each topic and then briefly discuss each guideline. It is important to master the topics covered in this chapter as you prepare for the test. In addition, this information has practical application in the workplace: SQL*Loader can be a valuable utility for multiple uses, including moving data from one database to another, making backups of tables, and performing maintenance operations. Familiarity with this tool is a valuable asset to any DBA.
Using Direct-Load Insert Operations
The direct-load insert operation is not actually a function of the SQL*Loader utility. Instead, this operation is a DML command, which can be performed within the database using SQL*Plus. The direct-load insert has similarities to the SQL*Loader direct-path load, which we will discuss in more detail in the next section, “Using SQL*Loader.” Both the direct-load insert and SQL*Loader direct-path load will bypass the buffer cache and
write directly to blocks within the tables you are loading. This functionality allows both of these operations to be extremely fast. Direct-load insert comes in two flavors: serial direct-load insert and parallel direct-load insert. Serial direct-load insert is the normal operation that uses one server process to insert data beyond the high-water mark. Serial direct-load insert can be used on unpartitioned, partitioned, and subpartitioned tables. Below is an example of serial direct-load insert. (To minimize redo generation, the target table can first be placed in NOLOGGING mode with ALTER TABLE t1_new NOLOGGING.)

SQL> INSERT /*+ APPEND */ INTO t1_new SELECT * FROM t1;
SQL> COMMIT;

The second type of direct-load insert is called parallel direct-load insert. Parallel direct-load insert is the same DML insert operation as serial direct-load insert, but in this case, the statement or table is put into parallel mode. This implies that the database must be configured for parallel operations, which means that you must have parallel query slaves configured in your initialization parameters. (This topic is covered in more detail in OCP: Oracle9i Performance Tuning Study Guide, by Joseph C. Johnson [Sybex, 2002].) Furthermore, you must enable parallel DML for the session in which you will be performing the parallel direct-load insert. You do this with the ALTER SESSION ENABLE PARALLEL DML command. After you have issued this command, you must either use a hint or place the table to be inserted into in parallel mode. A hint is written as a multiple-line comment with a plus symbol, such as /*+ PARALLEL (t1_new, 4) */. A hint is a mechanism that influences the explain path of a query so that the query performs in a certain way. Alternatively, you can alter the table T1_NEW to have a degree of parallelism of 4 with the following command: ALTER TABLE t1_new PARALLEL (DEGREE 4). The parallel direct-load insert causes multiple parallel server processes to break up the insert into multiple temporary segments, and the parallel coordinator then merges the temporary segments into the table. Below is an example of a parallel direct-load insert statement.

SQL> ALTER SESSION ENABLE PARALLEL DML;
SQL> INSERT /*+ PARALLEL (t1_new, 4) */ INTO t1_new SELECT * FROM t1;
SQL> COMMIT;
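As noted above, the hint can be replaced by placing the table itself in parallel mode. A minimal sketch of that variant, using the same hypothetical tables t1 and t1_new:

SQL> ALTER SESSION ENABLE PARALLEL DML;
SQL> ALTER TABLE t1_new PARALLEL (DEGREE 4);
SQL> INSERT INTO t1_new SELECT * FROM t1;
SQL> COMMIT;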
Using SQL*Loader

SQL*Loader is an Oracle utility designed to load external data into the Oracle database. SQL*Loader is often used in nightly extracts from non-Oracle-based systems with which your database may interface. It has become the tool of choice for loading data into the Oracle database in order to build or refresh data warehouses and data marts. This popularity results from its speed and versatility on large data sets from both Oracle and non-Oracle sources. Because of SQL*Loader's parsing capabilities, the external files it works with can be in various formats. As a result of this versatility, you can use SQL*Loader to do the following:
Load data from multiple data files or input files during the same load session.
Load data into multiple tables during the same load session.
Combine multiple input records into a logical record for loading.
Specify the character set of the data.
Accept input fields of fixed or variable length.
Selectively load data (you can load records based on the records’ values).
Manipulate the data before loading it using SQL functions.
Generate unique, sequential key values in specified columns.
Use the operating system’s filesystem to access the data files.
Load data from disk, tape, or named pipe.
Append data to and replace existing data in the tables.
Load data directly into data blocks, bypassing the buffer cache.
Use secondary data files for loading large binary objects (LOBs) and collections.
There are three main file structures that you will need to be familiar with when you are using SQL*Loader: the control file; log files, which include bad logs, discard logs, and general logs; and input data, or data files. These file structures will be discussed in detail in this section.
SQL*Loader also uses three methods to load data into the database—the conventional load, the direct-path load, and the external-path load—and three other methods in parallel mode to further improve performance—parallel conventional load, intersegment concurrency with direct-path load, and intrasegment concurrency with direct-path load. Both sets of these methods are also discussed in the following pages.
Control Files, Log Files, and Input Data or Data Files

SQL*Loader's three main components are the control file, the log files, and the input data, or data files. The control file is the logic behind the data load; it determines how and what data will be loaded. The log files include the bad, the discard, and the general log files. The bad log documents records that were rejected by the Oracle database or SQL*Loader because they have an invalid format or unique key violations, or because they have a required field that is null. The discard log contains records that were discarded in the load process because they didn't meet the control file load criteria. The general log file contains a detailed summary of the load process. The data file, or input data, is the raw data that will be loaded. Figure 15.1 displays the SQL*Loader process utilizing the logs, the data files, and the control file.

FIGURE 15.1   SQL*Loader input and output files

(The figure shows the data file and control file feeding SQL*Loader's field processing and WHEN clause filtering. Rejected records are written to the bad file, discarded records to the discard file, and load information to the general log file; the loaded data is passed to the Oracle server and stored in the database. Load data can be stored in a data file (.dat file, or INFILE) or in the control file; storing data in the control file is typically used for one-time loads or test situations.)
Control Files

The control file is a fairly detailed component of SQL*Loader that is responsible for most of the functionality in the load process. The control file does just what its name implies: it controls the SQL*Loader process. The control file performs the following functions in the SQL*Loader process:
Shows you where to find the data
Supplies the data format
Houses configuration and setup information
Tells you how to manipulate data
Actual load data can be kept in the control file, but this should only be done for sample or test loads. The example below shows a control file that contains no input data; an external file (invoice_header.dat), which is designated with the INFILE parameter, is used to reference the input data. The line numbers to the left are used to identify the lines we will be discussing in more detail.

1  -- Invoice Header Sample Control File
2  -- Created 10-19-01
3  LOAD DATA
4  INFILE 'invoice_header.dat'
5  BADFILE 'invoice_header.bad'
6  DISCARDFILE 'invoice_header.dsc'
7  REPLACE
8  INTO TABLE invoice_header
9  WHEN SALESPERSON (100) = 'EDI'
10 FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
11 TRAILING NULLCOLS
12 ( COMPNO decimal external
   , INV_NO decimal external
   , DISCOUNT_RATE decimal external ":discount_rate * .90"
   , DUEDATE date "SYYYYMMDDHH24MISS"
   , INVDATE date "SYYYYMMDDHH24MISS"
   , CUST_NO char
   , CUST_CAT char NULLIF cust_cat=BLANKS
   , CO_OBJ decimal
   )
Control files have many options; the above example demonstrates many of these features. Let's walk through each of the features and capabilities identified by the line numbers in this example.

Lines 1, 2  These lines are a comment section in the control file; such a section is designated with double hyphens (--).

Line 3  The LOAD DATA statement in this line tells SQL*Loader that this is the beginning of a data load. If you are continuing from an aborted process, you could use the CONTINUE LOAD DATA statement here instead.

Line 4  The INFILE keyword specifies the data that should be read from an input file; in this example, the input file is invoice_header.dat.

Line 5  The BADFILE keyword specifies the name of the file that will store the rejected records.

Line 6  The DISCARDFILE keyword specifies the name of the file that will store the discarded records.

Line 7  The REPLACE keyword is one of the options you can use when you want to replace the existing data in a table with the data that you are loading into it. In this case, the existing data is deleted before the new data is loaded. Alternative keywords to use in similar situations are APPEND, for a table that is not empty but in which you want to retain the data that is present, and INSERT, for a table that is empty.

Line 8  The INTO TABLE keyword allows you to identify the table (invoice_header in this example), its fields, and their data types.

Line 9  The WHEN clause evaluates one or more field conditions to TRUE or FALSE. If the conditions evaluate to TRUE, the record is loaded; otherwise, the record is not loaded.

Line 10  The FIELDS TERMINATED BY clause specifies the field delimiter, or the character that separates the fields within the INFILE file invoice_header.dat. The OPTIONALLY ENCLOSED BY clause says that fields can also be enclosed by double quotes ("").

Line 11  The TRAILING NULLCOLS clause tells SQL*Loader to treat any columns that are not present in the record as null columns.
Line 12 The remainder of the lines between parentheses () contain the field list that provides details about column formats in the table being loaded.
Log Files

The SQL*Loader log files consist of the bad files, the discard files, and the general log files. Each of these log files records information from the SQL*Loader activities at a different part of the load process. The bad and discard files store their information in load-data format, so these files could be loaded again if desired. The general log is an actual log file that contains information about the overall SQL*Loader process. The input or data file is the actual data that will be loaded into the table. We will discuss each of these files in more detail in the upcoming sections.

The Bad Files

Bad files are the first of the three log files generated by the SQL*Loader utility. These files contain the records that were rejected by the Oracle database or that did not meet the format criteria in the control file. These logs are created so that they can be used in the reload process if needed. They are most useful when you have large input files: a bad file is typically much smaller than the input file, so problems that occurred in the load process are much easier to identify in the smaller file.

Discard Files

Discard files are created by SQL*Loader only if you have designated that they should be created; this step is optional. These log files contain the records that did not match any of the record selection criteria and hence were excluded from the load. This wasn't because they were unacceptable to the load, as was the case with the data in the bad file; instead, these records were filtered out properly according to the way the control file had been set up.

General Log Files

These files store information about the load process in general. This information is used to determine when the SQL*Loader process was performed and how it went. Below is a list of the information that is covered in the general log file.
DEFAULTIF or NULLIF keyword usage specifications (optional)
RECNUM, SEQUENCE, CONSTANT, or EXPRESSION keyword usage specifications (optional)
DATETIME or INTERVAL specifications (optional) followed by the designated masking
Data record errors
Discarded records
Record load count
Rejected record count
Null field record count
Bind size
Records loaded per partition
Input Data or Data Files

Input data, a data file, or an input file all describe the same file: one that contains the data that will be loaded into the database table or external table. This file can be in various formats. A common format that is easy to work with is called comma delimited. A comma-delimited data file uses commas to separate the fields within the file. When the data file is loaded into the database, the data between each pair of commas is loaded into a column in the table. Whatever the format of the data file, SQL*Loader has robust parsing capabilities and is capable of loading most formats, as the brief sample below illustrates.
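For instance, here is a hypothetical two-record comma-delimited data file for a table with employee number, name, and hire date columns; each comma-separated value maps to one column of the target table:

7369,SMITH,19801217
7499,ALLEN,19810220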
Conventional Load, Direct-Path Load, and External-Path Load

There are three methods to load data with SQL*Loader: the conventional load, the direct-path load, and the external-path load. The conventional load is the default and will load data in much the same way as insert
statements are performed in SQL*Plus. The direct-path load provides similar functionality to the conventional load but is designed to improve on its performance. Lastly, the external-path load is a specialized load designed so that SQL*Loader can work with external tables.
Conventional Load

Conventional load is the first of the three methods you can use to load data with SQL*Loader. A conventional load is much slower than a direct-path load. The conventional load builds a bind array, and when this bind array is full, a SQL insert statement is executed. This method is comparable to the one Oracle uses to process normal DML statements such as inserts. Conventional load is the default method that SQL*Loader uses.
Direct-Path Load

The direct-path load is initiated by using the DIRECT=TRUE keyword on the command line. This method is much faster than the conventional load. The direct-path load does not use a bind array; instead it uses the direct-path API to load data more directly into the database without the overhead of standard SQL command processing. Because the data actually bypasses the normal SQL processing layers, Oracle must load this data into blocks that are unused or are above the high-water mark of the table. The high-water mark is the last used block in the table, whether there is data in the block or not. Table 15.1 compares conventional loads and direct-path loads, and Figure 15.2 displays the functional differences between these two methods.

TABLE 15.1   Conventional Load versus Direct-Path Load

Conventional Load                           Direct-Path Load
Uses standard commits to save data          Uses data saves
Always generates redo information           Generates redo information only when
                                            certain conditions are met
Allows users to modify the table data       Locks users out so that they cannot
during the load process or have active      make changes or have active
transactions                                transactions
Allows SQL functions to be used in the      Does not allow SQL functions to be
control file                                used in the control file
The direct-path load functions in a manner similar to the direct-path insert. Each control file and input data file is loaded into a temporary segment, which is then merged into the table above the high-water mark. See Figure 15.2, which shows the SQL*Loader direct-path load in comparison to the conventional load.

FIGURE 15.2   SQL*Loader overview of direct-path load versus conventional load
External-Path Load

The external-path load is specialized to allow the SQL*Loader utility to be used on external tables. By default, the external-path load is attempted in parallel if the table is made up of multiple data files. External tables are tables that are stored outside the database and have special restrictions (for example, they are read-only). External tables are used primarily as temporary staging tables whose contents can be loaded into normal database tables at some point. Let's go through a brief overview of the components necessary for creating an external table. The command used to create an external table is very similar to a standard table creation command; it is CREATE TABLE ... ORGANIZATION EXTERNAL. In addition, a logical directory that designates the location of the files that make up the external table needs to be created, with a command such as CREATE DIRECTORY EXTERNAL_TAB_DIR AS '/u01/ora9i/ext_files'. Once these steps have been completed, the schema that will own the table will need read and/or write privileges granted on the external directory, which in our case is EXTERNAL_TAB_DIR.
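Putting these pieces together, here is a minimal sketch. The table name emp_ext, its columns, the data file name emp.dat, and the grantee test are hypothetical; the directory name and path are taken from the example above.

SQL> CREATE DIRECTORY external_tab_dir AS '/u01/ora9i/ext_files';
SQL> GRANT READ, WRITE ON DIRECTORY external_tab_dir TO test;
SQL> CREATE TABLE emp_ext (
       empno  NUMBER(4),
       ename  VARCHAR2(10)
     )
     ORGANIZATION EXTERNAL (
       TYPE ORACLE_LOADER
       DEFAULT DIRECTORY external_tab_dir
       ACCESS PARAMETERS (
         RECORDS DELIMITED BY NEWLINE
         FIELDS TERMINATED BY ','
       )
       LOCATION ('emp.dat')
     );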
Adjusting the High-Water Mark with Direct-Path Load

When you are using SQL*Loader's direct-path load method, you insert data directly into the data blocks within a table. When you do this, however, Oracle cannot identify whether the existing data blocks contain data or not. As a result, every time you use direct-path load, Oracle must put the data in above the high-water mark of the table. This is not a problem if you are prepared for the space usage in the table to increase and such usage is part of your growth plan. If you aren't prepared, the table will grow rapidly each time you load the data, and it will consume unnecessary space. If you are reloading data into a table and replacing the old data, you may want to truncate the table first in order to reset the high-water mark. This will prevent your table from growing by the size of the data load each time the table is loaded.
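As a minimal illustration, using the invoice_header table and control file names from this chapter's examples, resetting the high-water mark before a replacement load takes a single statement:

SQL> TRUNCATE TABLE invoice_header;

oracle@octilli:> sqlldr test/test control=invoice_header.ctl direct=true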
SQL*Loader Parallel Load Methods

SQL*Loader can be run in parallel to further improve the performance of large data loads. There are three main variations of parallel loads: parallel conventional load, intersegment concurrency with direct-path load, and intrasegment concurrency with direct-path load. Here we explore each of these load methods. (Note that the data file is specified with the DATA keyword; the LOAD keyword specifies a number of records, as the usage listing later in this chapter shows.)

Parallel conventional load  A parallel conventional load is performed by issuing multiple SQL*Loader commands, each with its own control file and input data file, all against the same table. The input data is logically split on record boundaries. For example, records 1 through 100 are loaded using the inv.dat input file, and records 101 through 200 are loaded using the inv2.dat input file.

oracle@octilli:> sqlldr test/test control=inv.ctl data=inv.dat
oracle@octilli:> sqlldr test/test control=inv2.ctl data=inv2.dat

Intersegment concurrency with direct-path load  Intersegment concurrency with direct-path load is performed by using direct-path load to load into multiple tables, or multiple partitions within a table, at the same time. This method is performed in the same way as the parallel conventional load, but it adds the DIRECT=TRUE keyword and uses different tables. Notice that the first load below uses the inv control and data files and the second uses the orders control and data files, each loading a different table.

oracle@octilli:> sqlldr test/test control=inv.ctl data=inv.dat direct=true
oracle@octilli:> sqlldr test/test control=orders.ctl data=orders.dat direct=true

Intrasegment concurrency with direct-path load  Intrasegment concurrency with direct-path load is performed by using direct-path load to load data into a single table or partition. This is done by placing the DIRECT=TRUE and PARALLEL=TRUE options on the command line. In this method, parallel server processes load the data into temporary segments and then merge them into the individual segment.

oracle@octilli:> sqlldr test/test control=inv.ctl data=inv.dat direct=true parallel=true
The basic SQL*Loader functions are initiated by executing the sqlldr command at the OS command prompt. Here is an example in Unix that shows all of the commands and options available to the SQL*Loader utility.

oracle@octilli:/oracle/admin/tst9/adhoc > sqlldr

SQL*Loader: Release 9.0.1.0.0 - Production on Tue Oct 16 21:11:51 2001

(c) Copyright 2001 Oracle Corporation. All rights reserved.
Usage: SQLLOAD keyword=value [,keyword=value,...]

Valid Keywords:

userid -- ORACLE username/password
control -- Control file name
log -- Log file name
bad -- Bad file name
data -- Data file name
discard -- Discard file name
discardmax -- Number of discards to allow (Default all)
skip -- Number of logical records to skip (Default 0)
load -- Number of logical records to load (Default all)
errors -- Number of errors to allow (Default 50)
rows -- Number of rows in conventional path bind array or between direct path data saves (Default: Conventional path 64, Direct path all)
bindsize -- Size of conventional path bind array in bytes (Default 256000)
silent -- Suppress messages during run (header,feedback,errors,discards,partitions)
direct -- use direct path (Default FALSE)
parfile -- parameter file: name of file that contains parameter specifications
parallel -- do parallel load (Default FALSE)
file -- File to allocate extents from
skip_unusable_indexes -- disallow/allow unusable indexes or index partitions (Default FALSE)
skip_index_maintenance -- do not maintain indexes, mark affected indexes as unusable (Default FALSE)
commit_discontinued -- commit loaded rows when load is discontinued (Default FALSE)
readsize -- Size of Read buffer (Default 1048576)
external_table -- use external table for load; NOT_USED, GENERATE_ONLY, EXECUTE (Default NOT_USED)
columnarrayrows -- Number of rows for direct path column array (Default 5000)
streamsize -- Size of direct path stream buffer in bytes (Default 256000)
multithreading -- use multithreading in direct path
resumable -- enable or disable resumable for current session (Default FALSE)
resumable_name -- text string to help identify resumable statement
resumable_timeout -- wait time (in seconds) for RESUMABLE (Default 7200)

PLEASE NOTE: Command-line parameters may be specified either by position or by keywords. An example of the former case is 'sqlldr scott/tiger foo'; an example of the latter is 'sqlldr control=foo userid=scott/tiger'. One may specify parameters by position before
but not after parameters specified by keywords. For example, 'sqlldr scott/tiger control=foo logfile=log' is allowed, but 'sqlldr scott/tiger control=foo log' is not, even though the position of the parameter 'log' is correct.

oracle@octilli:/oracle/admin/tst9/adhoc >

SQL*Loader can be run from the command line, with the parameters specified on that line. There are many variations, but we will demonstrate some common examples to illustrate the basic uses. This first example uses 10 records that are contained in the invoice_header.dat input file and loaded with the invoice_header.ctl control file. Below are examples that show the control file first, then the input file containing the load data (only the first two records are shown), and finally the actual sqlldr command.

load data
infile invoice_header.dat
into table invoice_header
fields terminated by ',' optionally enclosed by '"'
trailing nullcols
( COMP_NO decimal external
, INV_NO decimal external
, CUST_NO decimal external
, DUE_DATE date "SYYYYMMDDHH24MISS"
, INV_DATE date "SYYYYMMDDHH24MISS"
, CUST_NAME char
, CUST_CAT char
, CUST_OBJ decimal external
, SALESMAN char
, CUST_REF char
)

20,141596,1154427,20000925000000,20000827000000,CUSTONE,20,,EDI,09020
20,141597,1153954,20000925000000,20000827000000,SUPERSTOR,20,,EDI,09020

oracle@octilli: > sqlldr test/test control=invoice_header.ctl

Commit point reached - logical record count 9
Commit point reached - logical record count 10
oracle@octilli: >

Next, we will demonstrate how to use the direct-path load method to load the same data. We will use the same control file and the same table as in the previous example.

oracle@octilli: > sqlldr test/test control=invoice_header.ctl direct=true

SQL*Loader: Release 9.0.1.0.0 - Production on Tue Oct 16 21:11:51 2001

(c) Copyright 2001 Oracle Corporation. All rights reserved.
Load completed - logical record count 10. oracle@octilli: >
Guidelines for Using SQL*Loader and Direct-Load Insert
In order to use SQL*Loader effectively, you need to consider the following guidelines:

Use a parameter file to specify commonly used command-line options.  This is a good choice for repeatable loads that can be scheduled. The parameter file is similar to the parameter file used with the Export and Import utilities. With the commands in a parameter file, the DBA does not need to interactively reply to prompts when the utility is executed.

Place data in the control file only for small, one-time loads.  Keeping data in the control file restricts the reuse of the control file for new data sets, because the raw data must be combined or concatenated with the control file each time it is loaded. This method is primarily used to load sample or test data; it is more cumbersome than keeping the data separate in a data file.

Optimize performance by allocating sufficient initial space.  When you optimize in this manner, you prevent dynamic extent allocation, which slows the load process.

Optimize performance by presorting data on the large indexes.  Optimizing in this manner alleviates the work that must be performed in the temporary segments in order for the indexes to be rebuilt.

Optimize performance by using different files for temporary segments when performing parallel loads.  You should use different data files to represent the location of your temporary segments for each parallel sqlldr command-line execution. This is done by using the FILE option to designate the data file where the temporary segments are created during the parallel load operation.
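A minimal sketch of the parameter-file guideline, using a hypothetical parameter file name that follows the invoice example above; sqlldr accepts the PARFILE keyword just as the exp and imp utilities do:

oracle@octilli:> cat invoice.par
userid=test/test
control=invoice_header.ctl
data=invoice_header.dat
direct=true
oracle@octilli:> sqlldr parfile=invoice.par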
Summary

The Oracle SQL*Loader utility is a tool designed to load external data into the Oracle database. There are three methods of loading data when you are using SQL*Loader: the conventional load, the direct-path load, and the external-path load. The direct-path insert DML statement that is run in SQL*Plus is similar to the direct-path load in SQL*Loader; this method has both serial and parallel options. There are some basic SQL*Loader activities that can help you become familiar with the uses of the SQL*Loader tool, and there are several commands, formats, and methods that you can use to run SQL*Loader. In addition, there are several usage guidelines for SQL*Loader. The topics covered in this chapter will be covered on the test and are valuable in real-world situations. SQL*Loader and direct-load insert can be used for a variety of purposes: SQL*Loader is very useful for bulk loading data from one database to another, and the direct-load insert operation is useful when you are making table backups within the database or moving data from one table to another. Both of these tools can be valuable assets for the DBA.
Exam Essentials

Understand direct-load insert operations.  Know that direct-load insert is a DML statement that passes data from one table to another within the Oracle database. When you use this DML statement, you bypass the buffer cache and load directly into the data blocks of the targeted tables.

Know the files used with SQL*Loader.  Be able to list the files used with SQL*Loader (the control file, the bad log, the discard log, the general log, and the input data, or data file) and make sure you understand the purpose of each of these files.

Know the methods SQL*Loader uses to load data.  SQL*Loader uses the conventional load, the direct-path load, and the external-path load to load data. The direct-path load is used to improve performance, the conventional load is used for normal operations, and the external-path load is a specialized load option used only for external tables.

Identify the parallel SQL*Loader methods for loading data.  Be able to pick out the differences between a parallel conventional load, an intrasegment concurrency with direct-path load, and an intersegment concurrency with direct-path load.

Understand how to use SQL*Loader.  Be sure you are familiar with the command-line references you will need in order to execute SQL*Loader. Be able to use a command such as sqlldr username/password control=control.ctl at the OS prompt.

Understand the guidelines for efficiently using SQL*Loader.  Be familiar with the guidelines that must be followed to use SQL*Loader efficiently: using parameter files for common commands, refraining from keeping data in the control file, avoiding dynamic space allocation, alleviating sorting in the index process, and distributing the temporary segments over different data files with the SQL*Loader FILE keyword.
Key Terms
Before you take the exam, be certain you are familiar with the following terms: bad log
Review Questions 1. What insert statement will bypass the SQL buffer cache so that data
may be loaded? A. SQL*Loader direct-path load B. SQL*Loader conventional load C. Direct-load insert D. SQL*Loader external table 2. Which of the following methods causes the target table to be in
parallel mode when it is using the direct-path insert statement? (Choose all that apply.) A. Using the ALTER TABLE command B. Using multiple direct-path load SQL*Loader statements C. Using hints with the PARALLEL clause D. Using one SQL*Loader direct-path load statement 3. Which of the following is the SQL*Loader file structure that is
responsible for formatting the data to be loaded? A. Bad file B. Format file C. Control file D. Discard file 4. Which log file contains cumulative information about the number
of records loaded in a SQL*Loader run? A. Bad log B. General log C. Discard log D. Control file
5. What log file contains records that were rejected by the database? A. Bad log B. General log C. Discard log D. Control file 6. What log file contains records that were filtered out of the load due to
clauses in the control file? A. Bad log B. General log C. Discard log D. Control file 7. Which file contains the data that gets loaded in the SQL*Loader run?
(Choose all that apply.) A. Input file B. Control file C. Data file D. Log file 8. Which SQL*Loader load keyword deletes data in the table before loading? A. APPEND B. DELETE C. REPLACE D. INSERT
9. What SQL*Loader technique allows parallel processing? A. Running multiple sqlldr command lines on the same table with
different input files B. Using the PARALLEL hint C. Using the ALTER TABLE command on the target table D. Using the PARALLEL command on the sqlldr command line 10. What SQL*Loader method cannot load into clustered tables? A. Conventional load B. Direct-path load C. Using the ALTER TABLE command on the target table D. Parallel load hints 11. Which SQL*Loader method uses bind arrays? A. Direct-path load B. Conventional load C. Direct-path insert D. Parallel load hints 12. Which SQL*Loader method cannot have active transactions during
the load process? A. Conventional load B. Direct-path load C. Data stored in the input file only D. Data stored in the control file only
13. What must you do to enable direct-path insert? (Choose all
that apply.) A. Use the ALTER SESSION ENABLE PARALLEL DML; command. B. Use an INDEX hint. C. Use a PARALLEL hint. D. Run multiple direct-path inserts at the same time. 14. Which of the following is a true statement regarding SQL functions
and SQL*Loader? A. SQL functions cannot be used in a conventional load. B. SQL functions can be used in a direct-path load. C. SQL functions cannot be used in a direct-path load. D. SQL functions can be used in both conventional and direct-path
loads. 15. Which of the following is a true statement regarding active transac-
tions or changes when you are using SQL*Loader? A. No active transactions or changes can be performed on a conven-
tionally loaded table. B. No active transactions or changes can be performed on a direct-
path loaded table. C. Active transactions or changes can be performed on a direct-path
loaded table. D. Active transactions or changes cannot be performed on a direct-
Answers to Review Questions 1. C. The direct-load insert statement will bypass the buffer cache while
it performs an insert. SQL*Loader direct-path load will also bypass the buffer cache, but it is not an insert statement. 2. A, C. Both the ALTER TABLE command and hints used with the
PARALLEL clause will allow the table to be placed in parallel. 3. C. The control file is responsible for formatting the data to be loaded. 4. B. The general log contains cumulative information about the number
of records that were loaded. 5. A. The bad log contains records that were rejected by the database. 6. C. The discard log contains data that was filtered out of the load as
designed in the control file. 7. A, B, C. The input file, or data file, and the control file can contain
data that is loaded in the SQL*Loader run. 8. C. The REPLACE command deletes the current data in a table before it inserts new data. 9. A. Running multiple sqlldr command line entries with multiple
data files will cause parallel loads. 10. B. The direct-path load cannot load data into a clustered table. 11. B. The conventional load uses bind arrays to temporarily store the
data before inserting into the database. 12. B. Direct-path load cannot have active transactions taking place on
the table you are loading. This is because there is a lock placed on the table and the data is directly loaded into blocks.
13. A, C. To enable direct-path insert, the session performing the DML must have parallel DML enabled. Second, you must either use a PARALLEL hint or use the ALTER TABLE command to place the target table in parallel mode (see the sketch following these answers). 14. C. No SQL functions can be used in the control file when you are
using a direct-path load. This is because a direct-path load bypasses standard SQL statement processing for performance reasons. 15. B. Active transactions or changes cannot be performed on a direct-
path loaded table. Users are prevented from making changes.
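To tie together answers 2 and 13, a hedged sketch of a parallel direct-load insert session might look like the following; the orders and orders_history table names are hypothetical:

   ALTER SESSION ENABLE PARALLEL DML;

   INSERT /*+ PARALLEL(orders_history, 4) */ INTO orders_history
     SELECT * FROM orders;
   COMMIT;

With parallel DML enabled for the session and the PARALLEL hint (or an ALTER TABLE ... PARALLEL setting) in place, the insert portion of the statement runs as a direct-path operation.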
A Application layer A layer of the Oracle Net stack that interacts with the user. This layer accepts commands and returns data. archived logs Also known as offline redo logs. Logs that are copies of the online redo logs and are saved to another location before the online copies are reused. ARCHIVELOG mode A mode of database operation. When the Oracle database is run in ARCHIVELOG mode, the online redo log files are copied to another location before they are overwritten. These archived log files can be used for point-in-time recovery of the database. They can also be used for analysis. archiver process (ARCn) Performs the copying of the online redo log files to archived log files. ARCn See archiver process (ARCn). ATL See automated tape library (ATL).
automated tape library (ATL) A tape device that can interface with RMAN and can automatically store and retrieve tapes via tape media software. automatic archiving The automatic creation of archived logs after the appropriate redo logs have been switched. The LOG_ARCHIVE_START parameter must be set to TRUE in the init.ora file for automatic archiving to take place. automatic channel allocation This type of channel allocation is performed by setting the RMAN configuration at the RMAN command prompt. This is done by using the CONFIGURE DEFAULT DEVICE or CONFIGURE DEVICE command. AVAILABLE The RMAN command used to make a backup set available or accessible in the RMAN repository.
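As a hedged illustration of the automatic channel allocation entry above, persistent defaults are set at the RMAN prompt; the disk format string is a hypothetical example:

   RMAN> CONFIGURE DEFAULT DEVICE TYPE TO DISK;
   RMAN> CONFIGURE CHANNEL DEVICE TYPE DISK FORMAT '/backup/%U';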
B BACKUP The RMAN command used to perform a backup that creates a backup set. backup and recovery strategy The backup and recovery plan for an organization's databases, applications, and systems that is formalized and agreed upon by the required groups in the organization. BACKUP CONTROLFILE TO TRACE A command that makes an ASCII backup of the binary control file, which can then be executed as a script to re-create the binary control file. The backup control file is dumped as a user trace file, which can be viewed, edited, and run as a script after you edit out the comments and miscellaneous trace information. backup piece A physical object that stores data files, control files, or archived logs and resides within a backup set. backup script A script written in one of various OS scripting languages, such as the Korn shell or C shell in Unix environments. backup set A logical object that stores one or more physical backup pieces containing data files, control files, or archived logs. Backup sets must be processed with the RESTORE command before these files are usable. bad log A SQL*Loader log that documents records that were rejected by the Oracle database or SQL*Loader because they had an invalid format or unique key violations, or because they had a required field that was null. bequeath connection A connection type in which control is passed directly to a dedicated spawned process or a dispatched process. No redirection is required. block The smallest unit of storage in an Oracle database. Data is stored in the database in blocks. The block size is defined at the time of database creation and is a multiple of the operating system block size. block corruption A block within the database that is corrupt.
bounded time recovery Instance recovery in which the DBA controls, or puts bounds on, the time it takes for an instance to recover after instance failure.
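As a reminder of how the BACKUP CONTROLFILE TO TRACE entry above is used in practice, the command is issued from SQL*Plus:

   ALTER DATABASE BACKUP CONTROLFILE TO TRACE;

The resulting script is written to a user trace file in the directory named by the USER_DUMP_DEST parameter.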
C cancel-based recovery A type of incomplete recovery that is stopped when the DBA executes a CANCEL command during a manual recovery operation. cataloging Storing information in the Recovery Manager catalog. This is done by issuing the RMAN CATALOG command. CHANGE The RMAN command used to change the status of a backup set to either AVAILABLE or UNAVAILABLE. change-based recovery A type of incomplete recovery that is stopped by a change number designated when the recovery is initiated. change vector See redo log entry. channel allocation Allocating a physical device to be associated with the server session.
checkpoint The process of updating the SCN in all the data files and control files in the database in conjunction with all necessary data blocks in the data buffers being written to disk. This is done for the purposes of ensuring database consistency and synchronization. checkpointing See checkpoint. checkpoint process (CKPT) The checkpoint process updates the headers of data files and control files; the actual blocks are written to the file by the DBWn process. CKPT See checkpoint process (CKPT). closed backup A backup that occurs when the target database is closed or shut down. This means that the target database is not available for use during this type of backup. This is also referred to as an offline or cold backup. cold backup See closed backup.
commit To save or permanently store the results of a transaction to the database. complete recovery A recovery in which all redo is applied so that the database is brought up to the point of failure with no loss of committed transactions. consistent backup A backup of a target database that is mounted but not opened and was shut down with either the SHUTDOWN IMMEDIATE or SHUTDOWN NORMAL option, but not with SHUTDOWN ABORT. The database files are stamped with the same SCN at the same point in time. This occurs during a cold backup of the database, and no recovery is needed. control file (database) The file that stores the RMAN repository information and records the physical information about the database. The control file contains the database name and timestamp of database creation, along with the name and location of every data file and redo log file. control file (SQL*Loader) The logic behind the data load that determines how and what data will be loaded. This is the SQL*Loader control file, not to be confused with the database control file. conventional-path export The standard export that goes through the SQL-evaluation layer. conventional load The conventional load is the default load process for SQL*Loader. The conventional load builds a bind array, and when this bind array is full, a SQL insert statement is executed. COPY The RMAN command that performs an image copy. CROSSCHECK The RMAN command used to compare the RMAN repository to the stored media backups. cross checking The process of comparing the RMAN repository with the media backups that are stored, using the CROSSCHECK command. cumulative incremental backup Backs up only the data blocks that have changed since the most recent backup of the next lowest level—n – 1 or lower (with n being the existing level of backup). current online redo logs The online redo logs that are currently being written to by the LGWR process.
D database The physical structure that stores the actual data. The Oracle server consists of the database and the instance. database buffer cache The area of memory that caches the database data. It holds the recent blocks that are read from the database data files. database buffers See data block buffers. database writer process (DBWn) The DBWn process is responsible for writing the changed database blocks from the SGA to the data file. There can be up to 10 database writer processes (DBW0 through DBW9). data block See block. data block buffers Memory buffers containing data blocks that get flushed to disk if modified and committed. data dictionary A collection of database tables and views that contain metadata about the database, its structures, its privileges, and its users. Oracle accesses the data dictionary frequently during the parsing of SQL statements. data file (database) The data files in a database contain all the database data. A data file can belong to only one database and tablespace. data file (SQL*Loader) The data file, input data, and infile all describe the same file, one that contains the raw data that is loaded in SQL*Loader. DBVERIFY utility An Oracle utility used to determine whether data files have corrupt blocks. DBWn See database writer process (DBWn).
dedicated server Type of connection in which every client connection has an associated dedicated server process on the machine where the Oracle server exists. degree of parallelism The number of parallel processes you choose to enable for a particular parallel activity such as recovery.
differential incremental backup A type of backup that backs up only data blocks modified since the most recent backup at the same level or lower. direct-load insert A faster method of adding rows to a table from existing tables by using the INSERT INTO … SELECT … statement. Direct-load insert bypasses the buffer cache and writes the data blocks directly to the data files. direct-path export The type of export that bypasses the SQL-evaluation layer, creating significant performance gains. direct-path load The type of SQL*Loader load that is initiated by using the DIRECT=TRUE keyword. This load process loads the data into blocks directly above the high-water mark and is designed for performance. dirty buffers The blocks in the database buffer cache that are changed but are not yet written to the disk. disaster recovery Recovery of a database that has been entirely destroyed due to fire, earthquake, flood, or some other disastrous situation. discard log The log that stores the records that are discarded by SQL*Loader because the control file load criteria were not met. disk failure See media (disk) failure. dispatchers Processes in an Oracle Shared Server environment that are responsible for managing requests from one or more client connections. distributed transactions Transactions that occur in remote databases.
dump file The file where the logical backup is stored. This file is created by the Export utility and read by the Import utility. dynamic service registration The ability of an Oracle instance to automatically register its existence with a listener.
E Export (exp) utility A utility that Oracle uses to unload (export) data to external files in a binary format. The Export utility can export the definitions of all objects in the database. It also makes logical backups of the database.
external-path load The specialized SQL*Loader load method designed for external tables. external table A table that resides in a file outside of the Oracle database but is accessible through the database via SQL. extproc The default name of the callout process that is used when executing external procedures from Oracle.
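As a brief, hedged illustration of the Export utility entry above, a full export and a matching import might be run from the OS prompt; the user, password, and file names are hypothetical:

   exp scott/tiger file=full.dmp full=y log=exp.log
   imp scott/tiger file=full.dmp full=y log=imp.log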
F firewall Generally, a combination of hardware and software that is used to control network traffic and prevent intruders from compromising corporate network security. full backup A type of backup that backs up all the data blocks in the data files, modified or not. full resynchronization A resynchronization that records all physical changes to the database, including control files, data files, and redo logs. In addition, changed records are updated in the RMAN repository during this process.
G general log The log file that contains a detailed summary of all aspects of the load process. Generic Connectivity One of the Heterogeneous Services offered by Oracle that allows for connectivity solutions based on third-party connection options such as OLEDB and ODBC.
H header block The first block in a data file; it contains information about the data file, such as size information, transactional usage, and checkpoint information.
Heterogeneous Services A facility that provides the ability to communicate with non-Oracle databases and services. high-water mark (HWM) The maximum number of blocks ever used by the table. The high-water mark is not reset when you delete rows. hint A comment within a SQL statement, beginning with a plus symbol, that influences the execution plan of a query. host The name of the physical machine on which the Oracle server is located. This can be an Internet Protocol (IP) address or a real name that is resolved via some external naming solution, such as DNS. hostnaming method A names resolution method for small networks that minimizes the amount of configuration work the DBA must perform. hot backup Also called an opened, or online, backup. Occurs when the database is open and a physical file copy of the data files associated with each tablespace is made (the tablespace is placed into backup mode with the ALTER TABLESPACE [BEGIN/END] BACKUP commands).
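As a hedged sketch of the hot backup entry above, the tablespace is placed in backup mode, its data files are copied at the OS level, and backup mode is then ended; the USERS tablespace and file path are hypothetical:

   ALTER TABLESPACE users BEGIN BACKUP;
   -- copy the tablespace's data files at the OS level, for example:
   -- host cp /u01/oradata/users01.dbf /backup/users01.dbf
   ALTER TABLESPACE users END BACKUP;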
I image copies Copies of data files, control files, or archived logs, either individually or as a whole database. These copies are not stored in an RMAN format; they are stored in a standard file format, much like any file stored on disk. Import utility An Oracle utility used to read and import data from the file created by the Export utility. A selective import can be performed using the appropriate parameters. The Import utility reads the logical backups generated by the export. incarnation A reference to a target database in the recovery catalog. incomplete recovery A form of recovery that doesn't completely recover the database to the point of failure. The three types of incomplete recovery are cancel-based, time-based, and change-based. Incomplete recovery requires a RESETLOGS.
inconsistent backup A backup of the target database that is conducted when it is opened but has crashed prior to mounting, or when it was shut down with the SHUTDOWN ABORT option prior to mounting. In this type of backup, the database files are stamped with different SCNs, which occurs during a hot backup of the database. Recovery is needed. incremental backup A type of backup that backs up only the data blocks in the data files that were modified since the last incremental backup. There are two types of incremental backups: differential and cumulative. init.ora The parameter file that contains the parameters required for instance startup. instance The memory structures and background processes of the Oracle server.
instance failure An abnormal shutdown of the Oracle database that then requires that the latest online redo log be applied to the database when it restarts to assure database consistency. instance recovery The automatic recovery of an Oracle database instance that results from an instance failure or an abrupt shutdown of the database. interactive export An export in which the user responds to prompts from the Export utility to perform various actions. intersegment concurrency with direct-path load A type of parallel direct-path load performed by using direct-path load to load into multiple tables or partitions within a table at the same time. intrasegment concurrency with direct-path load A type of parallel direct-path load performed by using direct-path load to load data into a single table or partition. IP-filtering firewall Type of firewall that monitors the network packet traffic on IP networks and filters out packets that either originated or did not originate from specific groups of machines.
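Drawing on the cumulative and differential incremental backup entries above, a hedged sketch of the corresponding RMAN commands:

   BACKUP INCREMENTAL LEVEL 0 DATABASE;            # baseline
   BACKUP INCREMENTAL LEVEL 1 DATABASE;            # differential (default)
   BACKUP INCREMENTAL LEVEL 1 CUMULATIVE DATABASE; # cumulative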
J Java Database Connectivity Connectivity solution used to connect Java-based applications to a database server.
L Large Pool An optional area in the SGA used for specific database operations, such as backup, recovery, or the User Global Area (UGA) space, when using an MTS configuration. LGWR See log writer process (LGWR).
List commands RMAN commands that perform simple queries of the catalog to tell what has been done to date. listener Server-side process that is responsible for listening and establishing connections to an Oracle server based on a client connection request. listener.ora Configuration file for the Oracle listener located on the Oracle server. load balancing Ability of the Oracle listener to balance the number of connections between a group of dispatcher processes in an Oracle Shared Server environment. locally managed tablespace A tablespace that manages the extent allocation and de-allocation information through bitmaps in its associated data files. localnaming method Names resolution method that relies on resolving an Oracle Net service name via a physical file, the tnsnames.ora file. LOG_ARCHIVE_DEST An init.ora parameter that determines the destination of the archived logs. Cannot be used in conjunction with LOG_ARCHIVE_DEST_n. LOG_ARCHIVE_DEST_n An init.ora parameter that determines the other destinations of the archived logs, remote or local. This parameter supports up to 10 locations, n being a number from 1 through 10. Only one of these destinations can be remote. Cannot be used with LOG_ARCHIVE_DEST or LOG_ARCHIVE_DUPLEX_DEST. LOG_ARCHIVE_DUPLEX_DEST An init.ora parameter that determines the duplexed, or second, destination of archived logs in a two-location archived log configuration. Cannot be used in conjunction with LOG_ARCHIVE_DEST_n. LOG_ARCHIVE_START An init.ora parameter that enables automatic archiving. log buffers Memory buffers that contain the entries that get written to the online redo log files. log file A file to which the status of the operation is written when utilities such as SQL*Loader or Export or Import are being used. logging The recording of DML statements, creation of new objects, and other changes in the redo logs. This process also records significant events, such as starting and stopping the listener, along with certain kinds of network errors. logical backup Reads data in the database and stores it in an Export file to create a snapshot of all data in the database. Cannot be used in conjunction with incomplete recovery. logical objects Objects that do not exist outside of the database, such as tables, indexes, sequences, and views. logical structures The database structures that are seen by the user. Tablespaces, segments, extents, blocks, tables, and indexes are all examples of logical structures. LogMiner A utility that can be used to analyze the redo log files. It can provide a fix for logical corruption by building redo and undo SQL statements from the contents of the redo logs. LogMiner is a set of PL/SQL packages and dynamic performance views. log sequence number A sequence number assigned to each redo log file. log writer process (LGWR) Responsible for writing the redo log buffer entries (change vectors) to the online redo log files. lsnrctl Command line utility used to control and monitor the Oracle listener process. lsnrctl services Command used to view information about what services a particular Oracle listener is listening for. lsnrctl stop Command to stop the default or currently selected Oracle listener.
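Pulling together the LOG_ARCHIVE_* entries above, a hedged init.ora fragment that enables automatic archiving to two destinations; the directory path and service name are hypothetical:

   LOG_ARCHIVE_START = TRUE
   LOG_ARCHIVE_DEST_1 = 'LOCATION=/u01/oradata/arch'
   LOG_ARCHIVE_DEST_2 = 'SERVICE=standby_db'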
M manual archiving The execution of commands to create archived logs. Archived logs are not automatically created after redo log switching. manual channel allocation This type of channel allocation is performed any time you issue the ALLOCATE CHANNEL command. A manual command for allocating a channel is ALLOCATE CHANNEL TYPE DISK. mean time to recovery (MTTR) The mean (average) time needed to recover a database from a certain type of failure. media (disk) failure A physical disk failure, or one that occurs when the database files cannot be accessed by the instance. media management library (MML) or Media Management Layer A tape media library that allows RMAN to interface with a tape hardware vendor's tape backup device. Also referred to as Media Management Layer. media recovery A recovery operation that results from a media (disk) failure. middleware Software and hardware that sits between a client and the Oracle server. Middleware can serve a variety of functions, such as load balancing, security, and application-specific business logic processing. MML See media management library (MML). MTTR See mean time to recovery (MTTR). multiplexing Oracle's mechanism for writing to more than one copy of the redo log file or control file. This process involves mirroring, or making duplicate copies. Multiplexing ensures that even if you lose one member of the redo log group or one control file, you can recover using the other one. In RMAN, multiplexing also refers to interspersing blocks from several Oracle data files within a backup set.
N NAMES.DIRECTORY_PATH An entry found in the sqlnet.ora file that defines the net service name search method hierarchy for a client.
National Language Support (NLS) Enables Oracle to store and retrieve information in a format and language that can be understood by users anywhere in the world. The database character set and various other parameters are used to enhance this capability. net service name The name of an Oracle service on a network. This is the name the user enters when referring to an Oracle service. Network Program Interface (NPI) A layer in the Oracle Net stack found on the Oracle server that is responsible for server-to-server communications. NLS See National Language Support (NLS). NOARCHIVELOG mode A mode of database operation whereby the redo log files are not preserved for recovery or analysis purposes. nologging In this process, the recording of DML statements, creation of new objects, and other changes in the redo logs does not occur; therefore, changes are unrecoverable until the next physical backup. non-current online redo logs Online redo logs that are not in the current or active group being written to. non-media failures Failures that occur for reasons other than disk failure. The types of failure that make up this group are statement failure, process failure, instance failure, and user error. NPI See Network Program Interface (NPI). n-tier architecture A network architecture involving at least three computers, typically a client computer, a middle-tier computer, and a database server.
O online redo logs Redo logs that are being written to by the LGWR process at some point in time and have not been archived. OPA See Oracle Protocol Adapters (OPA) layer. opened backup See hot backup. Open Systems Interconnection (OSI) A widely accepted model that defines how data communications are carried out across a network. OPI See Oracle Program Interface (OPI) layer.
Optimal Flexible Architecture (OFA) A standard of presenting the optimal way to set up an Oracle database. It includes guidelines for creating database file locations for better performance and management. Oracle Advanced Security An optional package offered by Oracle that enhances and extends the security capabilities of the standard Oracle server configuration. Oracle Call Interface (OCI) layer A layer of the Oracle Net stack that is responsible for all of the SQL processing that occurs between a client and the Oracle server. Oracle Connection Manager An optional middleware feature from Oracle that provides multiplexing, Network Access Control, and Cross Connectivity–Protocol Connectivity. Oracle Enterprise Manager (OEM) A DBA system management tool that performs a wide variety of DBA tasks, including running the RMAN utility in GUI mode, managing different components of Oracle, and administering the databases at one location. ORACLE_HOME The environment variable that defines the location where the Oracle software is installed. Oracle Net Foundation layer A layer of the Oracle Net Stack that shields both the client and server from the complexities of network communications and is based on the Transparent Network Substrate (TNS). Oracle Program Interface (OPI) layer A layer of the Oracle Net Stack residing on the server. For every request made from the client, the Oracle Program Interface is responsible for sending the appropriate response back to the client.
Oracle Protocol Adapters (OPA) layer A layer of the Oracle Net stack that maps the Oracle Net Foundation layer functions to the analogous functions in the underlying protocol. Oracle Recovery Manager (RMAN) The Recovery Manager utility, which is automated and is responsible for the backup and recovery of Oracle databases. Oracle Shared Server A connection configuration that enhances the scalability of the Oracle server. Shared Server is an optional configuration of the Oracle server that allows the server to support a larger number of concurrent connections without increasing physical resource requirements. ORACLE_SID The environment variable that defines the database instance name. If you are not using Net8, connections are made to this database instance by default. OSI See Open Systems Interconnection (OSI).
Oracle Transparent Gateway A connectivity solution that seamlessly extends the reach of Oracle to non-Oracle data stores, which allows you to treat non-Oracle data sources as if they were part of the Oracle environment.
P parallel conventional load This type of conventional load is performed by issuing multiple SQL*Loader commands, each with its own control file and input data file, all against the same table. parallel direct-load insert This type of direct load is the same DML insert operation as serial direct-load insert, but in this case, the statement or table is put into parallel mode. PARALLEL_MAX_SERVERS An init.ora parameter that determines the maximum number of parallel query processes at any given time. parallel query processes Oracle background processes that each process a portion of a query. Each parallel query process runs on a separate CPU. parameter file (Export, Import, SQL*Loader) A file that stores command-line parameters for the utility, one per line. parameter file (init.ora) A file with parameters used to configure memory, database file locations, and limits for the database. This file is read when the database is started. PARFILE The parameter file that stores options for Export, Import, or SQL*Loader. partial resynchronization In a partial resynchronization, RMAN reads the current control file to update modified information, but it does not resynchronize the metadata about the database physical schema, such as data files, tablespaces, redo threads, rollback segments produced when the database is open, and online redo logs. PGA See Program Global Area (PGA). physical backup A copy of all the Oracle database files, including the data files, control files, redo logs, and init.ora files. physical structure The database structure used to store the actual data and operation of the database. Data files, control files, and redo log files constitute the physical structure of the database. ping A TCP/IP utility that is used to check basic network connectivity between two computers. PMON See process monitor process (PMON). port A listening location used by TCP/IP. Ports are used to name the ends of logical connections, which carry conversations between two computers in a TCP/IP network. process A daemon, or background program, that performs certain tasks. process failure The abnormal termination of an Oracle process.
process monitor process (PMON) Performs recovery of failed user processes. This is a mandatory process and is started by default when the database is started. It frees up all the resources held by the failed processes. Program Global Area (PGA) An area of memory in which information about each client session is maintained. This information includes bind variables, cursor information, and the client’s sort area.
proxy-based firewall A firewall that prevents information from outside the firewall from flowing directly into the corporate network. The firewall acts as a gatekeeper, inspecting packets and sending only the appropriate information through to the corporate network.
R RAID See Redundant Array of Inexpensive Disks (RAID). read-only tablespace A tablespace that allows only read activity, such as SELECT statements, and is available only for querying. The data is static and doesn't change. No write activity (for example, INSERT, UPDATE, and DELETE statements) is allowed. Read-only tablespaces need to be backed up only once. read-write tablespace A tablespace that allows both read and write activity, including SELECT, INSERT, UPDATE, and DELETE statements. This is the default tablespace mode. RECOVER DATABASE This RMAN command is used to determine the necessary archived logs to be applied to the database during the recovery process. recovery The process that consists of starting the database and making it consistent using a complete or partial backup copy of some of the physical structures of the database. recovery catalog Information stored in a database used by the RMAN utility to back up and restore databases. Recovery Manager (RMAN) See Oracle Recovery Manager (RMAN). redirect connection A connection that requires the Oracle listener to send information back to a client about the location of the appropriate port to which to connect. redo buffers See log buffers. redo log buffer The area in the SGA that records all changes to the database. The changes are known as redo log entries, or change vectors, and are used to reapply the changes to the database in case of a failure. redo log entry A record of a change made to the database, also known as a change vector; redo log entries are written to the redo log buffer and then to the redo logs.
redo logs The redo log buffers from the SGA are periodically copied to the redo logs. Redo logs are critical to database recovery. They record all changes to the database, whether the transactions are committed or rolled back. Redo logs are classified as online redo logs or offline redo logs (also called archived logs), which are simply copies of online redo logs. There are also current redo logs that are being actively written to, and non-current redo logs that are not actively being written to. redo record A group of change vectors. Redo entries record data that you can use to reconstruct all changes made to the database, including the rollback segments. Redundant Array of Inexpensive Disks (RAID) The storage of data on multiple disks for fault tolerance and to protect against individual disk crashes. If one disk fails, then that disk can be rebuilt from the other disks. RAID has many variations for redundantly storing the data on separate disks, the most popular of which are termed RAID 0 through 5. refuse packet A packet sent via TCP/IP that acknowledges the refusal of some network request. registering The process of using the REGISTER command, which is required so that RMAN can store information about the target database in the recovery catalog. Report commands These RMAN commands provide queries of the catalog that are more detailed than lists and that tell you what may need to be done. request queue A location in the SGA in an Oracle Shared Server environment in which the dispatcher process places client requests. The shared server processes then process these requests. RESETLOGS The process that resets the redo log files' sequence number. resetting Updating the recovery catalog for a target database that has been opened with the ALTER DATABASE OPEN RESETLOGS command. response queue The location in the SGA in an Oracle Shared Server environment where a shared server places a completed client request. The dispatcher process then picks up the completed request and sends it back to the client. restore To copy backup files to disk from the backup location. RESTORE DATABASE The RMAN command that is responsible for retrieving the database backup and converting it from the RMAN format back to the OS-specific file format on disk. resynchronization The process of updating the recovery catalog with either physical or logical information (or both) about the target database. reusable section The information stored within the target database's control file that is used to back up and recover the target database. RMAN See Oracle Recovery Manager (RMAN). RMAN-based backup Performed by the Oracle Recovery Manager utility, which is part of the Oracle software. RMAN repository The information necessary for RMAN to function, stored in the recovery catalog or, if the recovery catalog is not used, in the control files. RMAN scripts Scripts that use RMAN commands and that can be stored on the filesystem or within the recovery catalog. roll back To undo a transaction from the database. roll forward and roll backward process Applying all the transactions, committed or not committed, to the database and then undoing all uncommitted transactions. row chaining Storing a row in multiple blocks because the entire row cannot fit in one block. Usually this happens when the table has LONG or LOB columns. Oracle recommends using CLOB instead of LONG because the LONG data type is being phased out. row migration Moving a row from one block to another during an update operation because there is not enough free space available to accommodate the updated row.
S serial direct-load insert The direct-load insert operation that uses one server process to insert data beyond the high-water mark. This is the default for direct-load insert. server process A background process that takes requests from the user process and applies them to the Oracle database. session A job or a task that Oracle manages. When you log in to the database by using SQL*Plus or any tool, you start a session. SET UNTIL [TIME/CHANGE/CANCEL] The clause in RMAN that is necessary to perform an incomplete recovery by causing the recovery process to terminate on a timestamp or SCN, or to be manually cancelled. SGA See System Global Area (SGA). Shared Global Area See System Global Area (SGA). shared server processes Processes in an Oracle Shared Server configuration that are responsible for actually executing the client requests. single point of failure A point of failure that can bring down the whole database. single-tier architecture A network architecture in which a client is directly connected to a server via some type of hard wire link, such as a serial line. SMON See system monitor process (SMON). spfile.ora The server parameter file that stores persistent parameters that are required for instance startup, including those that are modified after the database is started. SQL*Loader A utility used to load data into Oracle tables from text files. statement failure Syntactic errors in the construction of a SQL statement. static service registration The inputting of service name information directly into the listener.ora file via Oracle Net Manager. structure Either a physical or logical object that is part of the database, such as a file or a database object. System Area Network (SAN) Two or more computers that communicate over a short distance via a high-speed connection. An example would be a configuration of web servers with high-speed connections to database servers. system change number (SCN) A unique number generated at the time of a COMMIT, acting as an internal counter to the Oracle database, and used for recovery and read consistency. System Global Area (SGA) A memory area in the Oracle instance that is shared by all users.
system monitor process (SMON) Performs instance recovery at database startup by using the online redo log files. SMON is also responsible for cleaning up temporary segments in the tablespaces that are no longer used and for coalescing the contiguous free space in the tablespaces.
T tablespace A logical storage structure at the highest level. A tablespace can have many segments that may be used for data, index, sorting (temporary), or rollback information. The data files are directly related to tablespaces. A segment can belong to only one tablespace. tablespace point-in-time recovery (TSPITR) A type of recovery whereby logical and physical backups are combined to recover a tablespace to a different point in time from the rest of the database. TAG This command is used to assign a meaningful logical name to backups or image copies. target database The database that will be backed up.
third mirror A method of performing a copy of a mirrored disk at the hardware level.
time-based recovery A type of incomplete recovery that is stopped by a point in time designated when the recovery is initiated. TNS_ADMIN An environmental variable in Unix and a Registry setting in Windows NT that defines the directory path of the Oracle Net files. tnsnames.ora The name of the physical file that is used to resolve an Oracle Net service name when you are using the localnaming resolution method. tnsping An Oracle-supplied utility used to test basic connectivity from an Oracle client to an Oracle listener. tracing Process that records all events that occur on a network, even when an error does not happen. This facility can be enabled at the client, the middle-tier, or the server location. transaction Any change, addition, or deletion of data. transportable tablespace A feature that was introduced in Oracle8i whereby a tablespace belonging to one database can be copied to another database. TSPITR See tablespace point-in-time recovery (TSPITR).
Two-Task Common (TTC) layer A layer in the Oracle Net stack that is responsible for negotiating any datatype or character set differences between the client and the server. two-tier architecture A network architecture that is characterized by a client computer and a back-end server that communicate using some type of network protocol, such as TCP/IP.
U UNAVAILABLE The RMAN command used to make a backup set unavailable or not accessible in the RMAN repository. unregistering Removes the information necessary to back up the database. This task is not performed in the RMAN utility; instead, it is performed by executing a stored procedure as the recovery catalog's schema owner.
UNTIL CANCEL The clause in the RECOVER command that designates cancel-based recovery. UNTIL CHANGE The clause in the RECOVER command that designates change-based recovery. UNTIL TIME The clause in the RECOVER command that designates time-based recovery. user error An unintentional, harmful action on a database—such as deletion of data or dropping of tables—by a user. User Global Area (UGA) An area in the System Global Area (SGA) used to keep track of session-specific information in an Oracle Shared Server environment. user-managed backup A backup that consists of any custom backup; such a backup is usually performed with an OS script, such as a Unix shell script or a DOS-based batch script.
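The UNTIL clauses above can be combined with RESETLOGS in a hedged sketch of a user-managed incomplete recovery run from SQL*Plus; the timestamp is a hypothetical example:

   RECOVER DATABASE UNTIL TIME '2002-06-15:14:30:00';
   ALTER DATABASE OPEN RESETLOGS;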
V virtual circuit The shared memory segment utilized by the dispatcher to manage communications between the client and the Oracle server. The shared server processes use the virtual circuits to send and receive information to the appropriate dispatcher process. Virtual Interface (VI) protocol A lightweight network communication protocol that places the messaging burden on high-speed network hardware and removes it from the sending and receiving computer hardware.
W whole database backup A backup that gets the complete physical image of an Oracle database, such as the data files, control files, redo logs, and init.ora files.