This course focuses on those features of Oracle Database 11g that are applicable to database administration. Previous experience with Oracle databases (particularly Oracle Database 10g) is required for a full understanding of many of the new features. Hands-on practices emphasize exploring functionality rather than testing knowledge.
Overview
This course is designed to introduce you to the new features of Oracle Database 11g that are applicable to the work usually performed by database administrators and related personnel. The course does not attempt to provide every detail about a feature or cover aspects of a feature that were available in previous releases (except when defining the context for a new feature or comparing past behavior with current behavior). Consequently, the course is most useful to you if you have already administered other versions of Oracle databases, particularly Oracle Database 10g. Even with this background, you should not expect to be able to implement all of the features discussed in the course without supplemental reading, especially the Oracle Database 11g documentation.
The course consists of instructor-led lessons and demonstrations, plus many hands-on practices that allow you to see for yourself how certain new features behave. As with the course content in general, these practices are designed to introduce you to the fundamental aspects of a feature. They are not intended to test your knowledge of unfamiliar syntax or to provide an opportunity for you to examine every nuance of a new feature. The length of this course precludes such activity. Consequently, you are strongly encouraged to use the provided scripts to complete the practices rather than struggle with unfamiliar syntax.
Oracle Database 10g: New Features for Administrators I-2
Oracle Database Innovation 30 years of sustained innovation…
Slide timeline of Oracle innovations (newest to oldest): Audit Vault, Database Vault, Grid Computing, Self-Managing Database, XML Database, Oracle Data Guard, Real Application Clusters, Flashback Query, Virtual Private Database, Built-in Java VM, Partitioning Support, Built-in Messaging, Object Relational Support, Multimedia Support, Data Warehousing Optimizations, Parallel Operations, Distributed SQL & Transaction Support, Cluster and MPP Support, Multi-version Read Consistency, Client/Server Support, Platform Portability, Commercial SQL Implementation
Oracle Database Innovation
As a result of an early focus on innovation, Oracle has maintained the lead in the industry with a huge number of trend-setting products. The continued focus on Oracle’s key development areas has led to a number of industry firsts, from the first commercial relational database, to the first portable tool set and UNIX-based client/server applications, to the first multimedia database architecture.
Customer Testimonials
“Oracle customers are highly satisfied with its Real Application Clusters and Automatic Storage Management when pursuing scale-out strategies.”
Mark Beyer, Gartner December 2006
“By consolidating with Oracle grid computing on Intel/Linux, we are witnessing about a 50% reduction in costs with increased performance.”
Tim Getsay, Assistant Vice Chancellor Management Information Systems Vanderbilt University
Customer Testimonials
Managing service level objectives is an ongoing challenge. Users expect fast, secure access to business applications 24x7, and Information Technology managers have to deliver without increasing costs and resources. The manageability features in Oracle Database 11g are designed to help organizations easily manage Infrastructure Grids and deliver on their users’ service level expectations. Oracle Database 11g introduces more self-management, automation, and advisors that help reduce management costs while increasing the performance, scalability, and security of business applications around the clock.
Enterprise Grid Computing
Oracle Database 10g was the first database designed for grid computing. Oracle Database 11g consolidates and extends Oracle’s unique ability to deliver the benefits of grid computing. Oracle Infrastructure Grids fundamentally changed the way data centers look and operate, transforming data centers from silos of isolated system resources to shared pools of servers and storage. Oracle’s unique grid architecture enables all types of applications to scale out server and storage capacity on demand. By clustering low-cost commodity server and storage modules on Infrastructure Grids, organizations are able to improve user service levels, reduce downtime, and make more efficient use of their IT resources. Oracle Database 11g furthers the adoption of grid computing by offering:
• Unique scale-out technology with a single database image
• Lower server and storage costs
• Increased availability and scalability
Oracle Database 11g: Focus Areas
• Manageability
• Availability
• Performance
• Business Intelligence and Data Warehousing
• Security
Oracle Database 11g: Focus Areas
Oracle’s Infrastructure Grid technology enables Information Technology systems to be built out of pools of low-cost servers and storage that deliver the highest quality of service in terms of manageability, high availability, and performance. Oracle’s existing grid capabilities are extended in the areas listed on the slide, making your databases more manageable.
Manageability: New manageability features and enhancements increase DBA productivity, reduce costs, minimize errors, and maximize quality of service through change management, additional management automation, and fault diagnosis.
Availability: New high availability features further reduce the risk of downtime and data loss, including additional disaster recovery offerings, important high availability enhancements to Automatic Storage Management, support for online database patching, improved online operations, and more.
Performance: Many innovative new performance capabilities are offered, including SecureFiles, compression for OLTP, Real Application Clusters optimizations, query result caching, TimesTen enhancements, and more.
Oracle Database 11g: Focus Areas
• Information Management
– Content Management
– XML
– Oracle Text
– Spatial
– Multimedia and Medical Imaging
• Application Development
– PL/SQL
– .NET
– PHP
– SQL Developer
Oracle Database 11g: Focus Areas Oracle’s Infrastructure Grid provides the additional functionality needed to manage all information in the enterprise with robust security, information lifecycle management, and integrated business intelligence analytics to support fast and accurate business decisions at the lowest cost.
Management Automation Oracle Database 11g continues the effort begun in Oracle9i and carried on through Oracle Database 10g to dramatically simplify and ultimately fully automate the tasks that DBAs need to perform. New in Oracle Database 11g is Automatic SQL Tuning with self-learning capabilities. Other new capabilities include automatic, unified tuning of both SGA and PGA memory buffers and new advisors for partitioning, database repair, streams performance, and space management. Enhancements to the Oracle Automatic Database Diagnostic Monitor (ADDM) give it a better global view of performance in Oracle Real Application Clusters (RAC) environments and improved comparative performance analysis capabilities.
Self-managing Database: Oracle Database 10g
Self-management is an ongoing goal for the Oracle Database. Oracle Database 10g marked the beginning of a huge effort to make the database easier to use. With Oracle Database 10g, the focus of self-management was primarily on performance and resources.
Self-managing Database: The Next Generation
Manage Performance and Resources Manage Change Manage Fault
Self-managing Database: The Next Generation
Oracle Database 11g adds two more important axes to the overall self-management goal: change management and fault management.
Suggested Additional Courses
• Oracle Database 11g: Real Application Clusters
• Oracle Database 11g: Data Guard Administration
• Oracle Enterprise Manager 11g Grid Control
Suggested Additional Courses For more information about key grid computing technologies used by Oracle products, you can take additional courses (listed in the slide) from Oracle University.
Further Information
For more information about topics that are not covered in this course, refer to the following: • Oracle Database 11g: New Features eStudies – http://www.oracle.com/education/library A comprehensive series of self-paced online courses covering all new features in great detail
• Oracle by Example series: Oracle Database 11g – http://otn.oracle.com/obe/obe11gdb/index.html
Suggested Schedule
The lessons in this guide are arranged in the order you will probably study them in class. The lessons are grouped into topic areas, but they are also organized by other criteria, including the following:
• A feature is introduced in an early lesson and then referenced in later lessons.
• Topics alternate between difficult and easy to facilitate learning.
• Lessons are supplemented with hands-on practices throughout the course to provide regular opportunities for students to explore what they are learning.
If your instructor teaches the class in the sequence in which the lessons are printed in this guide, then the class should run approximately as shown in the schedule. Your instructor may vary the order of the lessons, however, for a number of valid reasons. These include:
• Customizing material for a specific audience
• Covering a topic in a single day instead of splitting the material across two days
• Maximizing the use of course resources (such as hardware and software)
Oracle Database 11g: New Features for Administrators 1 - 1
Objectives
After completing this lesson, you should be able to: • Install Oracle Database 11g • Upgrade your database to Oracle Database 11g • Use online patching
Oracle Database 11g Installation Changes
• Minor modifications to the install flow. New screens for:
– Turning off secure configuration in the seed database
– Setting the out-of-box memory target
– Specifying the database character set
– Modifications to OS authentication to support SYSASM
• Addition of new products to the install:
– SQL Developer
– Movement of APEX from companion CD to main CD
– Warehouse Builder (server-side pieces)
– Oracle Configuration Management (OCM)
– New Transparent Gateways
Oracle Database 11g Installation Changes
The following components were part of Oracle Database 10g release 2 (10.2) and are not available for installation with Oracle Database 11g:
• iSQL*Plus
• Oracle Workflow
• Oracle Data Mining Scoring Engine
• Oracle Enterprise Manager Java console
Oracle Database 11g Installation Changes
• Minor changes to the clusterware installation – Support for block devices for storage of OCR and Voting Disks – Ship “fix-up” scripts with the product
• Support for upgrade of XE databases directly to 11g • Better conformance to OFA in the installation – Prompt for ORACLE_BASE explicitly – Warnings in the alert log when ORACLE_BASE isn’t set
Oracle Database 11g Installation Changes
In Oracle Database 11g, Oracle Universal Installer prompts you to specify the Oracle base. The Oracle base you provide during the installation is logged in the local inventory. You can share this Oracle base across all of the Oracle homes you create on the system. Oracle recommends that you share one Oracle base for all of the Oracle homes created by a user. Each Oracle home has a corresponding Oracle base. Oracle Universal Installer has a list box where you can edit or select the Oracle base. The installer derives the default Oracle home from the Oracle base location you provide in the list box. However, you can change the default Oracle home by editing the location.
The following changes were made in Oracle Database 11g with respect to Oracle base to make it Optimal Flexible Architecture (OFA) compliant:
• ORACLE_BASE is a recommended environment variable. However, this variable will become mandatory in future releases.
• By default, Oracle base and Oracle Clusterware home are at the same directory level during the Oracle Clusterware installation. You should not create Oracle Clusterware home under Oracle base; doing so results in an error.
• Oracle recommends that you create the flash recovery area and data file location under Oracle base. In Oracle Database 10g, the default locations for the flash recovery area and data files are one level above the Oracle home directory. In Oracle Database 11g, however, Oracle base is the starting point for the default flash recovery area and data file locations. Oracle nevertheless recommends that you keep the flash recovery area and the data file location on separate disks.
Oracle Database Upgrade Enhancements
• Pre-Upgrade Information Tool
• Simplified Upgrade
• Upgrade performance enhancement
• Post-Upgrade Status Tool
Oracle Database Upgrade Enhancements
Oracle Database 11g release 1 (11.1) continues to make improvements that simplify manual upgrades, upgrades performed using Database Upgrade Assistant (DBUA), and downgrades. DBUA provides the following enhancements for single-instance databases:
• Support for improvements to the pre-upgrade tool in the areas of space estimation, initialization parameters, statistics gathering, and new warnings
• The catupgrd.sql script performs all upgrades and the catdwgrd.sql script performs all downgrades, for both patch releases and major releases
• DBUA can automatically take multi-CPU systems into account to perform parallel object recompilation
• Errors are now collected as they are generated during the upgrade and displayed by the Post-Upgrade Status Tool for each component
Pre-Upgrade Information Tool
• SQL script, utlu111i.sql, that analyzes the database to be upgraded
• Checks for parameter settings that may cause the upgrade to fail and generates warnings
• Runs in the context of the old server and old database
• Provides guidance and warnings based on Oracle Database 11g Release 1 upgrade requirements
• Supplies information to DBUA so it can automatically perform any required actions
Pre-Upgrade Information Tool The pre-upgrade information tool analyzes the database to be upgraded. It is a SQL script that ships with Oracle Database 11g release 1 (11.1), and must be run in the environment of the database being upgraded. This tool displays warnings about possible upgrade issues with the database. It also displays information about required initialization parameters for Oracle Database 11g release 1 (11.1).
Pre-Upgrade Analysis
The Pre-Upgrade Information Tool checks for:
• Database version and compatibility
• Redo log size
• Updated initialization parameters (for example, SHARED_POOL_SIZE)
• Deprecated and obsolete initialization parameters
• Components in the database (JAVAVM, Spatial, and so on)
• Tablespace estimates
– Increase in total size
– Additional allocation for AUTOEXTEND ON
– SYSAUX tablespace
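If you want to preview some of these checks by hand before running the tool, queries like the following (a sketch, run as SYSDBA in the database to be upgraded) mirror what utlu111i.sql inspects:

```sql
-- Redo log sizes (the tool warns about logs smaller than 4 MB)
SELECT group#, bytes/1024/1024 AS size_mb FROM v$log;

-- Deprecated initialization parameters currently set
SELECT name, value FROM v$parameter WHERE isdeprecated = 'TRUE';

-- Components installed in the database, and their status
SELECT comp_name, version, status FROM dba_registry;
```

These queries only approximate the tool's checks; running utlu111i.sql itself remains the authoritative pre-upgrade analysis.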
Simplified Upgrade
• Upgrade driven from the contents of the component registry (DBA_REGISTRY view) • Single top-level script, catupgrd.sql, upgrades all components in the database using the information in the DBA_REGISTRY view • Supports re-run of catupgrd.sql, if necessary
Startup Upgrade
STARTUP UPGRADE mode suppresses normal upgrade errors:
• Previously STARTUP MIGRATE (in Oracle9i Release 2)
• Only real errors are spooled
• Automatically handles setting system parameters that can otherwise cause problems during upgrade:
– Turns off job queues
– Disables system triggers
– Allows AS SYSDBA connections only
Startup Upgrade STARTUP UPGRADE enables you to open a database based on an earlier Oracle Database release. It also restricts logons to AS SYSDBA sessions, disables system triggers, and performs additional operations that prepare the environment for the upgrade (some of which are listed on the slide).
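As a minimal sketch of the sequence (assuming the default script location under the new Oracle home, abbreviated ? in SQL*Plus):

```sql
SQL> CONNECT / AS SYSDBA
SQL> SHUTDOWN IMMEDIATE
SQL> STARTUP UPGRADE
-- Job queues and system triggers are now disabled, and only
-- AS SYSDBA connections are accepted.
SQL> @?/rdbms/admin/catupgrd.sql
```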
Upgrade Performance Enhancement
Parallel recompilation of invalid PL/SQL database objects on multiprocessor systems:
• utlrp.sql can now exploit multiple CPUs to reduce the time required to recompile any stored PL/SQL and Java code.
Upgrade Performance Enhancement
The utlrp.sql script is a wrapper based on the UTL_RECOMP package. UTL_RECOMP provides a more general recompilation interface, including options to recompile objects in a single schema. See the documentation for the UTL_RECOMP package for more details. By default, utlrp.sql invokes the utlprp.sql script with 0 as the degree of parallelism for recompilation. This means that UTL_RECOMP automatically determines the appropriate level of parallelism based on the Oracle parameters CPU_COUNT and PARALLEL_THREADS_PER_CPU. If the parameter is 1, sequential recompilation is used.
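As a sketch, you can either let the wrapper script pick the degree of parallelism or call the package directly with an explicit thread count:

```sql
-- Default: degree of parallelism derived from CPU_COUNT and
-- PARALLEL_THREADS_PER_CPU
SQL> @?/rdbms/admin/utlrp.sql

-- Or recompile invalid objects with an explicit degree (here, 4 threads)
SQL> EXECUTE UTL_RECOMP.RECOMP_PARALLEL(4);
```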
Post-Upgrade Status Tool
Run utlu111s.sql to display the results of the upgrade • Error logging now provides more information per component • Reviews the status of each component and lists the elapsed time • Provides information about invalid/incorrect component upgrades • Run this tool after the upgrade completes to see errors and check the status of the components
Post-Upgrade Status Tool The Post-Upgrade Status Tool provides a summary of the upgrade at the end of the spool log. It displays the status of the database components in the upgraded database and the time required to complete each component upgrade. Any errors that occur during the upgrade are listed with each component and must be addressed. Run utlu111s.sql to display the results of the upgrade.
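For example, after the upgrade completes you might run the tool and then cross-check its findings against the component registry:

```sql
SQL> @?/rdbms/admin/utlu111s.sql

-- Cross-check the component status reported by the tool
SQL> SELECT comp_name, version, status FROM dba_registry;
```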
Rerun the Upgrade
Oracle Database 11.1 Upgrade Status Utility           03-18-2007

Component                        Status    Version
Oracle Server                    VALID     11.1.0.4.0
JServer JAVA Virtual Machine     VALID     11.1.0.4.0
Oracle Workspace Manager         VALID     11.1.0.4.0
Oracle Enterprise Manager        VALID     11.1.0.4.0
Oracle XDK                       VALID     11.1.0.4.0
Oracle Text                      VALID     11.1.0.4.0
Oracle XML Database              VALID     11.1.0.4.0
Oracle Database Java Packages    VALID     11.1.0.4.0
Oracle interMedia                VALID     11.1.0.4.0
Spatial                          INVALID   11.1.0.4.0
  ORA-04031: unable to allocate 4096 bytes of shared memory
  ("shared pool","java/awt/FrameSYS","joxlod exec hp",":SGAClass")
  ORA-06512: at "SYS.DBMS_JAVA", line 704
Rerun the Upgrade
The Post-Upgrade Status Tool should report VALID status for all components at the end of the upgrade. As shown on the slide, the report returns INVALID for the Spatial component because of the ORA-04031 error. In this case, you should fix the problem; then running utlrp.sql might change the status to VALID without rerunning the entire upgrade. Check the DBA_REGISTRY view after running utlrp.sql. If that does not fix the problem, or if you see UPGRADING status, the component upgrade did not complete. Resolve the problem and rerun catupgrd.sql after a SHUTDOWN IMMEDIATE followed by a STARTUP UPGRADE.
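A sketch of this recovery sequence, assuming the underlying problem (here, shared pool sizing) has already been corrected:

```sql
SQL> @?/rdbms/admin/utlrp.sql
SQL> SELECT comp_name, status FROM dba_registry;

-- If a component still shows INVALID or UPGRADING, rerun the upgrade:
SQL> SHUTDOWN IMMEDIATE
SQL> STARTUP UPGRADE
SQL> @?/rdbms/admin/catupgrd.sql
```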
Prepare to Upgrade
1. Become familiar with the features of Oracle Database 11g Release 1
2. Determine the upgrade path
3. Choose an upgrade method
4. Choose an OFA-compliant Oracle home directory
5. Prepare a backup and recovery strategy
6. Develop a test plan to test your database, applications, and reports
Prepare to Upgrade
Before you upgrade your database, you should perform the following steps:
1. Become familiar with the features of Oracle Database 11g release 1 (11.1).
2. Determine the upgrade path to the new release.
3. Choose an upgrade method.
4. Choose an Oracle home directory for the new release.
5. Prepare a backup and recovery strategy.
6. Develop a testing plan.
Oracle Database 11g Release 1 Upgrade Paths
• Direct upgrade to 11g is supported from 9.2.0.4 or higher, 10.1.0.2 or higher, and 10.2.0.1 or higher.
• If you are not at one of these versions, you need to perform a “double-hop” upgrade.
• For example:
– 7.3.4 -> 9.2.0.8 -> 11.1
– 8.1.7.4 -> 9.2.0.8 -> 11.1
Oracle Database 11g Release 1 Upgrade Paths The path that you must take to upgrade to Oracle Database 11g release 1 (11.1) depends on the release number of your current database. It might not be possible to upgrade directly from your current version of Oracle Database to the latest version. Depending on your current release, you might be required to upgrade through one or more intermediate releases to upgrade to Oracle Database 11g release 1 (11.1). For example, if the current database is running release 8.1.6, then follow these steps: 1. Upgrade release 8.1.6 to release 8.1.7A using the instructions in Oracle8i Migration Release 3 (8.1.7). 2. Upgrade release 8.1.7A to 9.2.0.8 using the instructions in Oracle9i Database Migration Release 2 (9.2). 3. Upgrade release 9.2.0.8 to Oracle Database 11g release 1 (11.1) using the instructions in this lesson.
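To determine your starting point, you can query the current release before choosing a path, for example:

```sql
SQL> SELECT version FROM v$instance;

-- or, for the full banner of each installed component
SQL> SELECT banner FROM v$version;
```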
Choose an Upgrade Method
• Database Upgrade Assistant (DBUA) – Automated GUI tool that interactively steps the user through the upgrade process and configures the database to run with Oracle Database 11g Release 1
• Manual Upgrade – Use SQL*Plus to perform any necessary actions to prepare for the upgrade, run the upgrade scripts and analyze the upgrade results
• Export-Import – Use Data Pump or original Export/Import
Choose an Upgrade Method
Oracle Database 11g release 1 (11.1) supports the following tools and methods for upgrading a database to the new release:
• Database Upgrade Assistant (DBUA) provides a graphical user interface (GUI) that guides you through the upgrade of a database. DBUA can be launched during installation by Oracle Universal Installer, or you can launch DBUA as a standalone tool at any time in the future. DBUA is the recommended method for performing a major release upgrade or patch release upgrade.
• Manual upgrade using SQL scripts and utilities provides a command-line upgrade of a database.
• The Export and Import method uses the Oracle Data Pump Export and Import utilities, available as of Oracle Database 10g release 1 (10.1), or the original Export and Import utilities, to perform a full or partial export from your database, followed by a full or partial import into a new Oracle Database 11g release 1 (11.1) database. Export/Import can copy a subset of the data, leaving the database unchanged.
• The CREATE TABLE AS SELECT SQL statement copies data from a database into a new Oracle Database 11g release 1 (11.1) database. Data copying can copy a subset of the data, leaving the database unchanged.
• Advantages
– Automates all tasks
– Performs both release and patch set upgrades
– Supports RAC, single instance, and ASM
– Informs the user of and fixes upgrade prerequisites
– Automatically reports errors found in spool logs
– Provides a complete HTML report of the upgrade process
– Command-line interface allows ISVs to automate
• Disadvantages
– Offers less control over individual upgrade steps
Sample Test Plan
• Make a clone of your production system using Enterprise Manager • Upgrade test database to latest version • Update COMPATIBLE to latest version • Run your applications, reports, and legacy systems • Ensure adequate performance by comparing metrics gathered before and after upgrade • Tune queries or problem SQL statements • Update any necessary database parameters
Performing a Manual Upgrade - 1
1. Install Oracle Database 11g Release 1 in new ORACLE_HOME 2. Analyze the existing database – Use rdbms/admin/utlu111i.sql with existing server – SQL> spool pre_upgrade.log – SQL> @utlu111i
3. Adjust redo logs and tablespace sizes if necessary 4. Copy existing initialization files to new ORACLE_HOME and make adjustments as recommended 5. Shutdown immediate, backup, then switch to the new ORACLE_HOME
Note: catuppst.sql is the post-upgrade script that performs remaining upgrade actions that do not require that the database be open in UPGRADE mode. It can be run at the same time utlrp.sql is being run.
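Putting the steps together, the upgrade session itself might look like the following sketch (assuming the default script locations under the new Oracle home):

```sql
SQL> CONNECT / AS SYSDBA
SQL> SPOOL upgrade.log
SQL> STARTUP UPGRADE
SQL> @?/rdbms/admin/catupgrd.sql   -- shuts the database down when it completes
SQL> STARTUP
SQL> @?/rdbms/admin/utlu111s.sql   -- Post-Upgrade Status Tool
SQL> @?/rdbms/admin/catuppst.sql   -- remaining post-upgrade actions
SQL> @?/rdbms/admin/utlrp.sql      -- recompile invalid objects
SQL> SPOOL OFF
```

The spool file (upgrade.log here, a name chosen for illustration) is what the Post-Upgrade Status Tool summarizes at the end.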
Now you are ready to use Oracle Database 11g Release 1! • Perform any required post-upgrade steps • Make additional post-upgrade adjustments to initialization parameters • Test your applications and tune performance • Finally, set initialization parameter COMPATIBLE to 11.1 to make full use of Oracle Database 11g Release 1 features • 10.0.0 is the minimum compatibility required for 11.1
Downgrading a Database - 1
1. Major release downgrades are supported back to 10.2 and 10.1
2. You can only downgrade to the release from which you upgraded
3. Shut down and start up the instance in DOWNGRADE mode:
SQL> startup downgrade
4. Run the downgrade script which automatically determines the version of the database and calls the specific component scripts – –
SQL> SPOOL downgrade.log SQL> @catdwgrd.sql
5. Shut down the database immediately after the downgrade script ends.
Database Upgrade Assistant (DBUA)
• DBUA is a GUI and command-line tool for performing database upgrades
• Uses a wizard interface
– Automates the upgrade process
– Simplifies detecting and handling upgrade issues
• Supported Releases for 11g – 9.2, 10.1 and 10.2
• Patchset Upgrades – Supported 10.2.0.3 onwards
• Support the following database types
– Single instance
– Real Application Clusters
– Automatic Storage Management
Key DBUA Features - 1
• Upgrade Scripts – Runs all necessary scripts to perform the upgrade
• Progress – Displays upgrade progress at a component level
• Configuration Checks – Automatically makes appropriate adjustments to initialization parameters – Checks for adequate resources such as SYSTEM tablespace size, rollback segments size, redo log size – Checks disk space for auto extended datafiles – Creates mandatory SYSAUX tablespace – Space Usage summary in SpaceUsage.txt
Key DBUA Features - 2
• Recoverability – Performs a backup of the database before upgrade – If needed can restore the database after upgrade
• Pre-Upgrade Summary – Prior to upgrade provides summary of all actions to be taken – Wizard warns user about any issues found – Provides space analysis information for backup – Applies required changes to network configuration files
Key DBUA Features - 3 • Configuration files – Creates init.ora and spfile in new ORACLE_HOME – Updates network configurations – Uses OFA compliant locations – Updates database information on Oracle Internet Directory
• Oracle Enterprise Manager – Allows you to setup and configure EM DB Control – Allows you to register database with EM Grid Control – If EM is in use upgrades EM repository and makes necessary configuration changes
• Logging and tracing
– Writes detailed trace and logging files (ORACLE_BASE/cfgtoollogs/dbua/<sid>/upgradeNN)
Key DBUA Features - 4
• Real Application Clusters – All nodes are upgraded – All configuration files are upgraded
• Minimizing Downtime – Speeds up upgrade by disabling archiving – Recompiles packages in parallel – User interaction is not required after upgrade starts
• Security features – Locks new users in the upgraded database
Command Line Syntax
When invoked with the -silent command line option, DBUA operates in silent mode. In silent mode, DBUA does not present a user interface. It also writes any messages (including information, errors, and warnings) to a log file in ORACLE_HOME/cfgtoollogs/dbua/SID/upgraden, where n is the number of upgrades that DBUA has performed as of this upgrade. For example, the following command upgrades a database named ORCL in silent mode:
dbua -silent -dbName ORCL &
Here is a list of important options you can use:
• -backupLocation directory: Specifies a directory in which to back up your database before the upgrade starts
• -postUpgradeScripts script [, script ] ...: Specifies a comma-delimited list of SQL scripts. Specify complete path names. The scripts are executed at the end of the upgrade.
• -initParam parameter=value [, parameter=value ] ...: Specifies a comma-delimited list of initialization parameter values of the form name=value
• -emConfiguration {CENTRAL|LOCAL|ALL|NOBACKUP|NOEMAIL|NONE}: Specifies Oracle Enterprise Manager management options
Note: For more information about these options, refer to the Oracle Database Upgrade Guide.
Using DBUA to Upgrade Your Database
Complete the following steps to upgrade a database using the DBUA graphical user interface:
On Linux or UNIX platforms, enter the dbua command at a system prompt in the Oracle Database 11g release 1 (11.1) environment. The DBUA Welcome screen appears. Click Next.
If an Automatic Storage Management (ASM) instance is detected on the system, then the Upgrade Operations page appears with options to upgrade a database or an ASM instance. If no ASM instance is detected, then the Databases screen appears.
At the Upgrade Operations page, select Upgrade a Database. This operation upgrades a database to Oracle Database 11g release 1 (11.1). Oracle recommends that you upgrade the database and ASM in separate DBUA sessions, in separate Oracle homes. Click Next.
Choose Database to Upgrade and Diagnostic Destination
Choose Database to Upgrade and Diagnostic Destination
The Databases screen appears. Select the database you want to upgrade from the Available Databases table. You can select only one database at a time. If you do not see the database that you want, then make sure an entry with the database name exists in the oratab file in the etc directory. If you are running DBUA from a user account that does not have SYSDBA privileges, then you must enter the user name and password credentials to enable SYSDBA privileges for the selected database. Click Next.
DBUA analyzes the database, performing the following pre-upgrade checks and displaying warnings as necessary:
• Redo log files whose size is less than 4 MB. If such files are found, then DBUA gives the option to drop them and create new redo log files.
• Obsolete or deprecated initialization parameters.
When DBUA finishes its checks, the Diagnostic Destination screen appears. Do one of the following:
• Accept the default location for your diagnostic destination.
• Enter the full path to a different diagnostic destination in the Diagnostic Destination field, or click Browse to select a diagnostic destination.
Click Next.
Moving Database Files If you are upgrading a single-instance database, then the Move Database Files screen appears. If you are upgrading an Oracle Real Application Clusters database, then the Move Database Files screen does not appear. Select one of the following options: • Do Not Move Database Files as Part of Upgrade • Move Database Files during Upgrade If you choose to move database files, then you must also select one of the following: • File System: Your database files are stored on the host file system. • Automatic Storage Management (ASM): Your database files are stored on ASM storage, which must already exist on your system. If you do not have an ASM instance, you can create one using DBCA and then restart DBUA. Click Next.
Database File Locations The Database File Locations screen appears. Select one of the following options: • Use Common Location for All Database Files. If you choose to have all of your database files in one location, then you must also do one of the following: - Accept the default location for your database files - Enter the full path to a different location in the Database Files Location field - Click Browse and select a different location for your database files • Use Oracle-Managed Files. If you choose to use Oracle-Managed Files for your database files, then you must also do one of the following: - Accept the default database area - Enter the full path to a different database area in the Database Area field - Click Browse and select a different database area • Use a Mapping File to Specify Location of Database Files. This option enables you to specify different locations for your database files. A sample mapping file is available in the logging location. You can edit the property values of the mapping file to specify a different location for each database file. Click Next.
Recovery Configuration The Recovery Configuration screen allows you to designate a Flash Recovery Area for your database. If you selected Move Database Files during Upgrade, or if an Oracle Express Edition database is being upgraded to Oracle Enterprise Edition, then a Flash Recovery Area must be configured. If a Flash Recovery Area is already configured, then the current settings are retained, but the screen still appears so that you can override these values. Click Next.
Management Options and Database Credentials If no other database is already being monitored with Enterprise Manager, then the Management Options screen appears. At the Management Options screen, you have the option of setting up your database so it can be managed with Enterprise Manager. Before you can register the database with Oracle Enterprise Manager Grid Control, an Oracle Enterprise Manager Agent must be configured on the host computer. To set up your database to be managed with Enterprise Manager, select Configure the Database with Enterprise Manager and then select one of the proposed options. Click Next. The Database Credentials screen appears. Choose one of the proposed options and click Next.
Network Configuration If DBUA detects that multiple listeners are configured, then the Network Configuration for the Database screen appears. The Network Configuration screen has two tabs. The Listeners tab is displayed if you have more than one listener. The Directory Service tab is displayed if you have directory services configured. On the Listeners tab, select one of the following options:
• Register this database with all the listeners
• Register this database with selected listeners only
If you choose to register selected listeners only, then you must select the listeners you want in the Available Listeners list and use the arrow buttons to move them to the Selected Listeners list. If you want to register your database with a directory service, then click the Directory Service tab. On the Directory Service tab, select one of the following options:
• Yes, register the database: Selecting this option enables client computers to connect to this database without a local name file (tnsnames.ora) and also enables them to use the Oracle Enterprise User Security feature.
• No, don't register the database
If you choose to register the database, then you must also provide a user distinguished name (DN) in the User DN field and a password for that user in the Password field. An Oracle wallet is created as part of database registration. It contains credentials suitable for password authentication between this database and the directory service. Enter a password in the Wallet Password and Confirm Password fields. Click Next.
Recompile Invalid Objects The Recompile Invalid Objects screen appears. Select Recompile invalid objects at the end of upgrade if you want DBUA to recompile all invalid PL/SQL modules after the upgrade is complete. This ensures that you do not experience performance issues later, as you begin using your newly upgraded database. If you have multiple CPUs, then you can reduce the time it takes to perform this task by taking advantage of parallel processing on your available CPUs. In this case, DBUA automatically adds an additional section to the Recompile Invalid Objects screen, determines the number of CPUs you have available, and provides a recommended degree of parallelism, which determines how many parallel processes are used to recompile your invalid PL/SQL modules. Specifically, DBUA sets the degree of parallelism to one less than the number of CPUs you have available. You can adjust this default value by selecting a new value from the Degree of Parallelism menu. Select Turn off Archiving and Flashback logging for the duration of upgrade to reduce the time required to complete the upgrade. If the database is in ARCHIVELOG or flashback logging mode, then DBUA gives you the choice of turning them off for the duration of the upgrade. If you choose this option, Oracle recommends that you perform an offline backup immediately after the upgrade. Click Next.
Database Backup and Space Checks The Backup screen appears. Select Backup database if you want DBUA to back up your database for you. Oracle strongly recommends that you back up your database before starting the upgrade. If errors occur during the upgrade, you might be required to restore the database from the backup. If you use DBUA to back up your database, then it makes a copy of all your database files in the directory you specify in the Backup Directory field. DBUA performs this cold backup automatically after it shuts down the database and before it begins the upgrade procedure. The cold backup does not compress your database files, and the backup directory must be a valid file system path. You cannot specify a raw device for the cold backup files. In addition, DBUA creates a batch file in the specified directory. You can use this batch file to restore the database files:
• On Windows operating systems, the file is called db_name_restore.bat.
• On Linux or UNIX platforms, the file is called db_name_restore.sh.
If you choose not to use DBUA for your backup, then Oracle assumes you have already backed up your database using your own backup procedures. Click Next.
Note: If you decide to use DBUA to back up your database, DBUA checks that you have enough space before the backup is taken.
Database Upgrade Summary The Summary screen appears. The Summary screen shows the following information about the upgrade before it starts: • Name, version, and Oracle home of the old and new databases • Database backup location, available space, and space required • Warnings ignored • Database components to be upgraded • Initialization parameters changes • Database files location • Listener registration Check all of the specifications. Then do one of the following: • Click Back if anything is incorrect until you reach the screen where you can correct it. • Click Finish if everything is correct.
Upgrade Progress and Results The Progress screen appears, and DBUA begins the upgrade. During the upgrade, you might encounter error messages with Ignore and Abort choices; other errors must be addressed accordingly. If an error is severe and cannot be handled during the upgrade, then you have the following choices:
• Click Ignore to ignore the error and proceed with the upgrade. You can fix the problem later, restart DBUA, and complete the skipped steps.
• Click Abort to terminate the upgrade process. If a database backup was taken by DBUA, then it asks whether you want to restore the database. After the database has been restored, you must correct the cause of the error and restart DBUA to perform the upgrade again. If you do not want to restore the database, then DBUA leaves the database in its present state so that you can proceed with a manual upgrade.
After the upgrade has completed, the following message is displayed on the Progress screen: Upgrade is complete. Click "OK" to see the results of the upgrade. Click OK. The Upgrade Results screen appears. It displays a description of the original and upgraded databases and the changes made to the initialization parameters. The screen also shows the directory where various log files are stored after the upgrade. You can examine these log files to obtain more details about the upgrade process. Click Restore Database if you are not satisfied with the upgrade results.
Best Practices - 1
• The three T’s: TEST, TEST, TEST – Test the upgrade – Test the application(s) – Test the recovery strategy
• Functional Testing – Clone your production database on a machine with similar resources – Use DBUA for your upgrade – Run your application and tools to ensure they work
Best Practices – 1 Perform the planned tests on the current database and on the test database that you upgraded to Oracle Database 11g release 1 (11.1). Compare the results, noting anomalies. Repeat the test upgrade as many times as necessary. Test the newly upgraded test database with existing applications to verify that they operate properly with the new Oracle database. You might also test enhanced functionality by adding available Oracle Database features. However, first make sure that the applications operate in the same manner as they did in the current database. Functional testing is a set of tests in which new and existing features and functions of the system are tested after the upgrade. Functional testing includes all database, networking, and application components. The objective of functional testing is to verify that each component of the system functions as it did before upgrading and to verify that new functions are working properly. Create a test environment that does not interfere with the current production database. Practice upgrading the database using the test environment. The best upgrade test, if possible, is performed on an exact copy of the database to be upgraded, rather than on a downsized copy or test data. Do not upgrade the actual production database until after you successfully upgrade a test subset of this database and test it with applications, as described in the next step. The ultimate success of your upgrade depends heavily on the design and execution of an appropriate backup strategy.
Best Practices - 2
• Performance Testing
– Gather AWR or Statspack baselines during various workloads
– Gather sample performance metrics after upgrade
– Compare metrics before and after upgrade to catch issues
– Upgrade production systems only after performance and functional goals have been met
• Pre-Upgrade Analysis
– Run DBUA without clicking Finish, or run utlu111i.sql, to get a pre-upgrade analysis
– Read general and platform-specific release notes to catch special cases
Best Practices – 2 Performance testing of the new Oracle database compares the performance of various SQL statements in the new Oracle database with the statements' performance in the current database. Before upgrading, you should understand the performance profile of the application under the current database. Specifically, you should understand the calls the application makes to the database server. For example, if you are using Oracle Real Application Clusters, and you want to measure the performance gains realized from using cache fusion when you upgrade to Oracle Database 11g release 1 (11.1), then make sure you record your system's statistics before upgrading. For that, you can use various V$ views or AWR/Statspack reports.
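As one hedged sketch of the baseline-gathering advice above, you could preserve an AWR baseline covering a representative pre-upgrade workload. The snapshot IDs and baseline name below are hypothetical; pick snapshots that bracket your own peak period:

```sql
-- Hypothetical snapshot IDs (100, 110) and baseline name.
BEGIN
  DBMS_WORKLOAD_REPOSITORY.CREATE_BASELINE(
    start_snap_id => 100,
    end_snap_id   => 110,
    baseline_name => 'pre_upgrade_peak');
END;
/
```

After the upgrade, comparing an AWR report for this baseline with a post-upgrade period helps catch regressions early.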
Best Practices - 3
• Automate your upgrade – Use DBUA in command line mode for automating your upgrade – Useful for upgrading a large number of databases
• Logging – For manual upgrade, spool upgrade results and check logs for possible issues – DBUA can also do this for you
• Automatic conversion from 32 bit to 64 bit database software • Check for sufficient space in SYSTEM, UNDO, TEMP and redo log files
Best Practices - 3 If you are installing 64-bit Oracle Database 11g release 1 (11.1) software but were previously using a 32-bit Oracle Database installation, then the database is automatically converted to 64-bit during a patch release or major release upgrade to Oracle Database 11g release 1 (11.1). You must increase initialization parameters affecting the system global area, such as SGA_TARGET and SHARED_POOL_SIZE, to support 64-bit operation.
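For example, the parameter increases mentioned above might look like the following. The sizes are purely illustrative assumptions; choose values appropriate for your own system:

```sql
-- Hypothetical sizes for a 64-bit instance; adjust to your workload.
ALTER SYSTEM SET sga_target = 2G SCOPE=SPFILE;
ALTER SYSTEM SET shared_pool_size = 512M SCOPE=SPFILE;
-- Restart the instance for the SPFILE-scoped changes to take effect.
```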
Best Practices - 4
• Use Optimal Flexible Architecture (OFA) – Offers best practices for locating your database files, configuration files, and ORACLE_HOME
• Use new features
– Migrate to the CBO from the RBO
– Automatic management features for SGA, undo, PGA, and so on
– Use AWR/ADDM to diagnose performance issues
– Consider using the SQL Tuning Advisor
– Change the COMPATIBLE and OPTIMIZER_FEATURES_ENABLE parameters to enable new optimizer features
Best Practices – 4 Oracle recommends the Optimal Flexible Architecture (OFA) standard for your Oracle Database installations. The OFA standard is a set of configuration guidelines for efficient and reliable Oracle databases that require little maintenance. OFA provides the following benefits: • Organizes large amounts of complicated software and data on disk to avoid device bottlenecks and poor performance • Facilitates routine administrative tasks, such as software and data backup functions, which are often vulnerable to data corruption • Alleviates switching among multiple Oracle databases • Adequately manages and administers database growth • Helps to eliminate fragmentation of free space in the data dictionary, isolates other fragmentation, and minimizes resource contention. If you are not currently using the OFA standard, then switching to the OFA standard involves modifying your directory structure and relocating your database files.
Best Practices - 5
• Use Enterprise Manager Grid Control to manage your enterprise – Use EM to setup new features and try them out – EM provides complete manageability solution for Databases, Applications, Storage, Security, Networks
• Collect object and system statistics to improve plans generated by the CBO
• Check for invalid objects in the database before upgrading
– SQL> SELECT owner, object_name, object_type, status FROM dba_objects WHERE status = 'INVALID';
Best Practices – 5 When upgrading to Oracle Database 11g release 1 (11.1), optimizer statistics are collected for dictionary tables that lack statistics. This statistics collection can be time consuming for databases with a large number of dictionary tables, but statistics gathering occurs only for those tables that lack statistics or are significantly changed during the upgrade. To decrease the amount of downtime incurred when collecting statistics, you can collect statistics before performing the actual database upgrade. As of Oracle Database 10g release 1 (10.1), Oracle recommends that you use the DBMS_STATS.GATHER_DICTIONARY_STATS procedure to gather dictionary statistics, in addition to gathering statistics on the component schemas (SYS, SYSMAN, XDB, and so on) using the DBMS_STATS.GATHER_SCHEMA_STATS procedure.
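The pre-upgrade statistics gathering described above can be sketched as follows, run as a suitably privileged user:

```sql
-- Gather dictionary statistics before the upgrade.
BEGIN
  DBMS_STATS.GATHER_DICTIONARY_STATS;
END;
/

-- Optionally gather statistics for individual component schemas.
BEGIN
  DBMS_STATS.GATHER_SCHEMA_STATS('SYS');
END;
/
```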
Best Practices- 6
• Avoid upgrading in a crisis – Keep up with security alerts – Keep up with critical patches needed for your applications – Keep track of de-support schedules
• Always upgrade to latest supported version of the RDBMS • Make sure patchset is available for all your platforms • Data Vault Option needs to be turned off for upgrade
Best Practices- 6 If you have enabled Oracle Database Vault, then you must disable it before upgrading the database, and enable it again when the upgrade is finished.
Deprecated Features in 11g Release 1
• Oracle Ultra Search • Java Development Kit (JDK) 1.4 • CTXXPATH index
Deprecated Features in 11g Release 1 The slide lists Oracle Database features deprecated in Oracle Database 11g release 1 (11.1). They are supported in this release for backward compatibility, but Oracle recommends that you migrate away from these deprecated features:
• Oracle Ultra Search
• Java Development Kit (JDK) 1.4: Oracle recommends that you use JDK 5.0 instead; JDK 1.4 remains fully supported in this release for backward compatibility.
• CTXXPATH index: Oracle recommends that you use XMLIndex instead.
Important Initialization Parameter Changes
• USER_DUMP_DEST
• DIAGNOSTIC_DEST
• BACKGROUND_DUMP_DEST
• CORE_DUMP_DEST
• UNDO_MANAGEMENT not set implies AUTO mode
To migrate to automatic undo management:
1. Set UNDO_MANAGEMENT=MANUAL
2. Execute your workload
3. Execute the DBMS_UNDO_ADV.RBU_MIGRATION function
4. Create an undo tablespace based on the returned size
5. Set UNDO_MANAGEMENT=AUTO
Important Initialization Parameter Changes The DIAGNOSTIC_DEST initialization parameter replaces the USER_DUMP_DEST, BACKGROUND_DUMP_DEST, and CORE_DUMP_DEST parameters. Starting with Oracle Database 11g, the default location for all trace information is defined by DIAGNOSTIC_DEST which defaults to $ORACLE_BASE/diag. For more information about diagnostics, refer to the Diagnostics lesson in this course. A newly installed Oracle Database 11g instance defaults to automatic undo management mode, and if the database is created with Database Configuration Assistant, an undo tablespace is automatically created. A null value for the UNDO_MANAGEMENT initialization parameter now defaults to automatic undo management. It used to default to manual undo management mode in earlier releases. You must therefore use caution when upgrading a previous release to Oracle Database 11g.
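A minimal SQL*Plus sketch of working with the new parameter (the path shown is a hypothetical ORACLE_BASE):

```sql
-- Inspect the current diagnostic destination.
SHOW PARAMETER diagnostic_dest

-- Optionally point it elsewhere; /u01/app/oracle is an assumed path.
ALTER SYSTEM SET diagnostic_dest = '/u01/app/oracle' SCOPE=BOTH;
```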
Important Initialization Parameter Changes (Continued)
To migrate to automatic undo management, perform the following steps:
1. Set UNDO_MANAGEMENT=MANUAL.
2. Start the instance again and run through a standard business cycle to obtain a representative workload.
3. After the standard business cycle completes, run the following function to collect the undo tablespace size:
DECLARE
  utbsiz_in_MB NUMBER;
BEGIN
  utbsiz_in_MB := DBMS_UNDO_ADV.RBU_MIGRATION;
END;
/
This function runs a PL/SQL procedure that provides information on how to size your new undo tablespace based on the configuration and usage of the rollback segments in your system. The function returns the sizing information directly.
4. Create an undo tablespace of the required size and turn on automatic undo management by setting UNDO_MANAGEMENT=AUTO or by removing the parameter.
Note: For RAC configurations, repeat these steps on all instances.
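Step 4 might look like the following sketch, assuming the RBU_MIGRATION function reported a size of roughly 500 MB. The tablespace name, file path, and size are hypothetical:

```sql
CREATE UNDO TABLESPACE undotbs2
  DATAFILE '/u01/app/oracle/oradata/orcl/undotbs2_01.dbf' SIZE 500M;
ALTER SYSTEM SET undo_tablespace = undotbs2 SCOPE=SPFILE;
ALTER SYSTEM SET undo_management = AUTO SCOPE=SPFILE;
-- Restart the instance so the SPFILE-scoped settings take effect.
```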
Direct NFS Client Overview
[Slide diagram: In Oracle Database 10g, the Oracle RDBMS kernel accesses NFS storage through a platform-specific kernel NFS driver, which the DBA must tune through optional generic and specific configuration parameters; behavior varies across platforms and there are many parameters to tune. In Oracle Database 11g, the NFS client is implemented inside the Oracle RDBMS kernel itself.]
Direct NFS Client Overview Direct NFS is implemented as a Direct Network File System client within the Oracle RDBMS kernel, in the Oracle Disk Manager library. NAS-based storage systems use the Network File System to access data. In Oracle Database 10g, NAS storage devices are accessed using the kernel network file system driver provided by the operating system, which requires specific configuration settings to ensure its efficient and correct use with Oracle Database. The following are the major problems that arise in correctly specifying these configuration parameters:
• NFS clients are very inconsistent across platforms and vary across operating system releases.
• With more than 20 parameters to tune, manageability is impacted.
Oracle Direct NFS implements the NFS version 3 protocol within the Oracle RDBMS kernel. The following are the main advantages of implementing Oracle Direct NFS:
• It enables complete control over the input-output path to Network File Servers. This results in predictable performance, simpler configuration management, and superior diagnosability.
• Its operations avoid the kernel NFS layer bottlenecks and resource limitations. However, the kernel is still used for network communication modules.
• It provides a common NFS interface for Oracle for potential use on all host platforms and supported NFS servers.
• It enables improved performance through load balancing across multiple connections to NFS servers and deep pipelines of asynchronous input-output operations with improved concurrency.
Direct NFS Configuration
1. Mount all expected mount points using the kernel NFS driver.
Direct NFS Configuration By default, Direct NFS attempts to serve mount entries found in /etc/mtab. No other configuration is required. You can optionally use oranfstab to specify additional Oracle-specific options to Direct NFS. For example, you can use oranfstab to specify additional paths for a mount point, as shown in the example on the slide. When oranfstab is placed in $ORACLE_HOME/dbs, its entries are specific to a single database. However, when oranfstab is placed in /etc, it is global to all Oracle databases and, hence, can contain mount points for all Oracle databases. Direct NFS looks for mount point entries in the following order: $ORACLE_HOME/dbs/oranfstab, /etc/oranfstab, and /etc/mtab. It uses the first matching entry as the mount point. In all cases, Oracle requires that mount points be mounted by the kernel NFS system even when they are being served through Direct NFS. Oracle verifies kernel NFS mounts by cross-checking entries in oranfstab with operating system NFS mount points. If a mismatch exists, then Direct NFS logs an informational message and does not serve the NFS server. Complete the following procedure to enable Direct NFS:
1. Make sure NFS mount points are mounted by your kernel NFS client. The file systems to be used through ODM NFS should be mounted and available over regular NFS mounts in order for Oracle to retrieve certain bootstrapping information. The mount options used in mounting the file systems are not relevant.
Direct NFS Configuration (Continued)
2. Optionally create an oranfstab file with the following attributes for each NFS server to be accessed using Direct NFS:
• Server: The NFS server name.
• Path: Up to four network paths to the NFS server, specified either by IP address or by name, as displayed using the ifconfig command. The Direct NFS client performs load balancing across all specified paths. If a specified path fails, then Direct NFS reissues I/Os over any remaining paths.
• Export: The exported path from the NFS server.
• Mount: The local mount point for the NFS server.
3. Oracle Database uses the ODM library libnfsodm10.so to enable Direct NFS. To replace the standard ODM library with the ODM NFS library, complete the following steps:
• Change directory to $ORACLE_HOME/lib.
• Enter the following commands:
cp libodm10.so libodm10.so_stub
ln -s libnfsodm10.so libodm10.so
Use one of the following methods to disable the Direct NFS client:
• Remove the oranfstab file.
• Restore the stub libodm10.so file by reversing the process you completed in step 3.
• Remove the specific NFS server or export paths in the oranfstab file.
Note:
• If you remove an NFS path that Oracle Database is using, then you must restart the database for the change to be effective.
• If Oracle Database is unable to open an NFS server using Direct NFS, then Oracle Database uses the platform's operating system kernel NFS client. In this case, the kernel NFS mount options must be set up correctly. Additionally, an informational message is logged in the Oracle alert and trace files indicating that Direct NFS could not be established.
• With the current ODM architecture, at any given time there can be only one active ODM implementation per instance: using the NFS ODM in an instance precludes any other ODM implementation.
• The Oracle files resident on the NFS server that are served by the Direct NFS client are also accessible through the operating system kernel NFS client. The usual considerations for maintaining integrity of the Oracle files apply in this situation.
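An oranfstab entry of the kind described above might look like the following sketch. The server name, addresses, and paths are hypothetical; verify the exact attribute syntax against your release's documentation:

```
server: mynfsserver1
path: 192.168.1.1
path: 192.168.1.2
export: /vol/oradata1 mount: /u01/oradata1
```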
Monitoring Direct NFS Use the following views for Direct NFS management: • V$DNFS_SERVERS: Shows a table of servers accessed using Direct NFS. • V$DNFS_FILES: Shows a table of files currently open using Direct NFS. • V$DNFS_CHANNELS: Shows a table of open network paths (or channels) to servers for which Direct NFS is providing files. • V$DNFS_STATS: Shows a table of performance statistics for Direct NFS.
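For example, once Direct NFS is serving files, you could spot-check it with queries such as the following. The column selections are a sketch; confirm the column names in your release's reference documentation:

```sql
-- Which NFS servers is Direct NFS talking to?
SELECT svrname, dirname FROM v$dnfs_servers;

-- Which database files are currently open through Direct NFS?
SELECT filename, filesize FROM v$dnfs_files;
```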
Online Patching Overview
Online Patching provides the ability to:
• install
• enable
• disable
a bug fix or diagnostic patch on a running Oracle instance.
Online Patching Overview Online Patching provides the ability to install, enable, and disable a bug fix or diagnostic patch on a live, running Oracle instance.
Installing an Online Patch
• Applying an online patch does not require instance shutdown, relink of the oracle binary, or instance restart. • OPatch can be used to install or uninstall an online patch. • OPatch detects conflicts between two online patches, as well as between an online patch and a conventional patch.
Installing an Online Patch Unlike traditional patching mechanisms, applying an online patch does not require instance shutdown or restart. As with traditional patching, you can use OPatch to install an online patch. You can determine whether a patch is an online patch by using one of the following commands:
opatch query -is_online_patch <patch location>
opatch query <patch location> -all
Note: The patched code is shipped as a dynamic/shared library, which is then mapped into memory by each oracle process.
Online Patching Benefits
• No downtime and no interruption of business
• Very fast install/uninstall times
• Integrated with OPatch:
– conflict detection
– listed in patch inventory
– works in RAC environments
• Even though the on-disk oracle binary is unchanged, online patches persist across instance shutdown and startup
• Even though the on-disk oracle binary is unchanged, online patches persist across instance shutdown and startup.
Online Patching Benefits You do not have to shut down your database instance while you apply the online patch. Unlike conventional patching, online patching is very fast to install and uninstall. Because online patching uses OPatch, you get all the benefits that you already have with conventional patching that uses OPatch. No matter how long or how many times you shut down your database, an online patch always persists across instance shutdown and startup.
Conventional Patching and Online Patching Conventional patching basically requires a shutdown of your database instance. Online patching does not require any downtime: applications can keep running while you install or uninstall an online patch.
Online Patching Considerations
• Online patches may not be available on all platforms. Currently available on: – Linux x86 – Linux x86-64 – Solaris SPARC64.
• Some extra memory is consumed. Exact amount depends on: – Size of the patch – Number of concurrently running oracle processes. – The minimum amount of memory is approximately 1 OS page per running oracle process.
Online Patching Considerations One operating system (OS) page is typically 4 KB on Linux x86 and 8 KB on Solaris SPARC64. Assuming an average of a thousand oracle processes running at the same time, that represents around 4 MB of extra memory for a small online patch.
Online Patching Considerations
• There may be a small delay (a few seconds) before every oracle process installs or uninstalls an online patch.
• Not all bug fixes and diagnostic patches are available as online patches.
• Use online patches in situations where downtime is not feasible.
• When downtime is possible, you should install all relevant bug fixes as conventional patches.
Online Patching Considerations The vast majority of diagnostic patches are available as online patches. For bug fixes, it really depends on their nature.
Using Online Patching
• Shops where downtime is extremely inconvenient or impossible (24x7 operations)
• Bugs with an unknown cause that require a series of one or more diagnostic patches
Using Online Patching A very nice use case for online patching is when you hit a bug with an unknown cause. Oracle Support provides one or more diagnostic patches that can be installed quickly to narrow down the cause of the problem.
Summary
In this lesson, you should have learned how to: • Install Oracle Database 11g • Upgrade your database to Oracle Database 11g • Use online patching
After completing this lesson, you should be able to:
• Set up ASM fast mirror resync
• Use ASM preferred mirror read
• Understand scalability and performance enhancements
• Set up ASM disk group attributes
• Use the SYSASM role
• Use the new manageability options for the CHECK, MOUNT, and DROP commands
• Use the md_backup, md_restore, and repair ASMCMD extensions
Without ASM Fast Mirror Resync ASM offlines a disk whenever it is unable to complete a write to an extent allocated to the disk, while writing at least one mirror copy of the same extent on another disk if ASM redundancy is used by the corresponding disk group. With Oracle Database 10g, ASM assumes that an offline disk contains only stale data and therefore it does not read from such disks anymore. Shortly after a disk is put offline, ASM drops it from the disk group by recreating the extents allocated to the disk on the remaining disks in the disk group using redundant extent copies. This process is a relatively costly operation, and may take hours to complete. If the disk failure is only a transient failure, such as failures of cables, host bus adapters, or controllers, or disk power supply interruptions, you have to add the disk back again once the transient failure is fixed. However, adding the dropped disk back to the disk group incurs an additional cost of migrating extents back onto the disk.
Oracle Database 11g: New Features for Administrators 2 - 3
[Slide diagram — ASM Fast Mirror Resync Overview: (1) ASM redundancy is used; primary and secondary extents are mirrored across disks. (2) A disk access failure takes a disk offline. (4) When the disk is again accessible, only the modified extents need to be resynced.]
ASM Fast Mirror Resync Overview ASM fast mirror resync significantly reduces the time required to resynchronize a disk after a transient failure. When a disk goes offline following a transient failure, ASM tracks the extents that are modified during the outage. When the transient failure is repaired, ASM can quickly resynchronize only the ASM disk extents that have been affected during the outage. This feature assumes that the content of the affected ASM disks has not been damaged or modified. When an ASM disk path fails, the ASM disk is taken offline but not dropped if you have set the DISK_REPAIR_TIME attribute for the corresponding disk group. The setting for this attribute determines the duration of a disk outage that ASM tolerates while still being able to resynchronize after you complete the repair. Note: The tracking mechanism uses one bit for each modified allocation unit, which makes it very efficient.
Oracle Database 11g: New Features for Administrators 2 - 4
Using EM to Perform Fast Mirror Resync In Enterprise Manager (EM), when you take an ASM disk offline, you are asked to confirm the operation. On the Confirmation page, you can override the default disk repair time. Similarly, you can view disks by failure group and choose a particular failure group to take offline.
Oracle Database 11g: New Features for Administrators 2 - 5
Using EM to Perform Fast Mirror Resync Similarly, you can online disks using Enterprise Manager.
Oracle Database 11g: New Features for Administrators 2 - 6
Setting Up ASM Fast Mirror Resync
ALTER DISKGROUP dgroupA SET ATTRIBUTE 'DISK_REPAIR_TIME'='3H';
ALTER DISKGROUP dgroupA OFFLINE DISKS IN FAILGROUP controller2 DROP AFTER 5H;
ALTER DISKGROUP dgroupA ONLINE DISKS IN FAILGROUP controller2 POWER 2 WAIT;
ALTER DISKGROUP dgroupA DROP DISKS IN FAILGROUP controller2 FORCE;
Setting Up ASM Fast Mirror Resync You set up this feature on a per-disk-group basis. You can do so after disk group creation by using the ALTER DISKGROUP command. Use the following command to enable ASM fast mirror resync:
ALTER DISKGROUP dgroupA SET ATTRIBUTE 'DISK_REPAIR_TIME' = '2D4H30M';
After you repair the disk, run the ALTER DISKGROUP ... ONLINE DISK SQL statement. This statement brings the repaired disk back online to enable writes, so that no new writes are missed. It also starts a procedure to copy all of the extents that are marked as stale from their redundant copies. You cannot apply the ONLINE statement to already dropped disks. You can view the current attribute values by querying the V$ASM_ATTRIBUTE view. You can determine the time left before ASM drops an offlined disk by querying the REPAIR_TIMER column of either V$ASM_DISK or V$ASM_DISK_IOSTAT. In addition, a row corresponding to a disk resync operation appears in V$ASM_OPERATION with the OPERATION column set to SYNC.
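The monitoring views mentioned above can be queried as shown below. The view and column names are those given in the notes; the group number predicate is illustrative and assumes the disk group of interest is group 1.

```sql
-- Check current disk group attributes (DISK_REPAIR_TIME among them)
SELECT name, value
FROM   v$asm_attribute
WHERE  group_number = 1;

-- Time left before ASM drops an offlined disk
SELECT name, mode_status, repair_timer
FROM   v$asm_disk
WHERE  group_number = 1;

-- Watch a resync operation in progress
SELECT group_number, operation, state
FROM   v$asm_operation
WHERE  operation = 'SYNC';
```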
Oracle Database 11g: New Features for Administrators 2 - 7
Setting Up ASM Fast Mirror Resync (Continued) You can also use the ALTER DISKGROUP ... OFFLINE DISK SQL statement to bring ASM disks offline manually for preventive maintenance. With this command, you can specify a timer that overrides the one defined at the disk group level. After you complete the maintenance, use the ALTER DISKGROUP ... ONLINE DISK statement to bring the disk back online. If you cannot repair a failure group that is in the offline state, you can use the ALTER DISKGROUP DROP DISKS IN FAILGROUP command with the FORCE option. This ensures that the data originally stored on these disks is reconstructed from redundant copies and stored on other disks in the same disk group. Note: The repair timer elapses only while the disk group is mounted. Also, changing the value of DISK_REPAIR_TIME does not affect disks that were previously taken offline. The default setting of 3.6 hours for DISK_REPAIR_TIME should be adequate for most environments.
Oracle Database 11g: New Features for Administrators 2 - 8
ASM Preferred Mirror Read Overview When you configure ASM failure groups, ASM in Oracle Database 10g always reads the primary copy of a mirrored extent. It may be more efficient for a node to read from a failure group extent that is closest to the node, even if it is a secondary extent. This is especially true in extended (stretch) cluster configurations, where reading from a local copy of an extent provides improved performance. With Oracle Database 11g, you can do this by configuring preferred mirror reads using the new initialization parameter ASM_PREFERRED_READ_FAILURE_GROUPS to specify a list of failure group names. The disks in those failure groups become the preferred read disks for that instance, so every node can read from its local disks. This results in higher efficiency and performance and reduced network traffic. The setting for this parameter is instance specific.
Oracle Database 11g: New Features for Administrators 2 - 9
ASM Preferred Mirror Read Setup
Setup
ASM_PREFERRED_READ_FAILURE_GROUPS=DATA.SITEA
On first instance
ASM_PREFERRED_READ_FAILURE_GROUPS=DATA.SITEB
On second instance
Monitor
SELECT preferred_read FROM v$asm_disk; SELECT * FROM v$asm_disk_iostat;
ASM Preferred Mirror Read Setup To configure this feature, set the new ASM_PREFERRED_READ_FAILURE_GROUPS initialization parameter. This parameter takes a string containing a comma-separated list of failure group names. Each failure group name should be prefixed with its disk group name and a '.' character. The parameter is dynamic and can be modified using the ALTER SYSTEM command at any time. An example is shown on the slide. This initialization parameter is valid only for ASM instances. In a stretch cluster, the failure groups specified in this parameter should contain only the disks that are local to the corresponding instance. The new PREFERRED_READ column has been added to the V$ASM_DISK view. Its format is a single character: if the disk belongs to a preferred read failure group, the value of this column is Y. To identify specific performance issues with ASM preferred read failure groups, use the V$ASM_DISK_IOSTAT view. This view displays disk I/O statistics for each ASM client. If this view is queried from a database instance, only the rows for that instance are shown.
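Putting the slide's two-site example together, the commands might look as follows. The disk group and failure group names (DATA, SITEA, SITEB) follow the slide; the instance names +ASM1 and +ASM2 are illustrative assumptions.

```sql
-- On the ASM instance at site A (SID value is an assumption)
ALTER SYSTEM SET asm_preferred_read_failure_groups = 'DATA.SITEA' SID = '+ASM1';

-- On the ASM instance at site B
ALTER SYSTEM SET asm_preferred_read_failure_groups = 'DATA.SITEB' SID = '+ASM2';

-- Verify which disks are preferred read disks for this instance
SELECT path, failgroup, preferred_read
FROM   v$asm_disk
WHERE  group_number = 1;
```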
Oracle Database 11g: New Features for Administrators 2 - 10
Enterprise Manager ASM Configuration Page You can specify a set of disks as preferred disks for each ASM instance by using Enterprise Manager. The preferred read attributes are instance specific. In Oracle Database 11g, the Preferred Read Failure Groups field (asm_preferred_read_failure_groups) has been added to the configuration page. The setting takes effect only before the disk group is mounted or when the disk group is created, and it applies only to newly opened files or to a newly loaded extent map for a file.
Oracle Database 11g: New Features for Administrators 2 - 11
ASM Preferred Mirror Read - Best Practice Two sites / Normal redundancy
ASM Preferred Mirror Read - Best Practice In practice, only a limited number of disk group configurations make sense in a stretch cluster. A good configuration takes into account both the performance and the availability of a disk group in a stretch cluster. Here are some possible examples: For a two-site stretch cluster, a normal redundancy disk group should have only two failure groups, and all disks local to one site should belong to the same failure group. Also, at most one failure group should be specified as a preferred read failure group by each instance. If there are more than two failure groups, ASM may not mirror a virtual extent across both sites. Furthermore, if a site containing more than one failure group were to go down, it would take the disk group down as well. If the disk group to be created is a high redundancy disk group, at most two failure groups should be created on each site with its local disks, with both local failure groups specified as preferred read failure groups for the local instance. For a three-site stretch cluster, a high redundancy disk group with three failure groups should be used. This enables ASM to guarantee that each virtual extent has a mirror copy local to each site and that the disk group is protected against a catastrophic disaster at any of the three sites.
Oracle Database 11g: New Features for Administrators 2 - 12
ASM Scalability and Performance Enhancements
• Extent sizes grow automatically according to file size • ASM supports variable extent sizes to: – Raise the maximum possible file size – Reduce memory utilization in the shared pool
• No administration is needed, apart from a manual rebalance in cases of significant fragmentation
ASM Scalability and Performance Enhancements ASM Variable Size Extents is an automated feature that enables ASM to support larger files while improving memory usage efficiency. In Oracle Database 11g, ASM supports variable extent sizes of 1, 4, 16, and 64 MB. ASM uses a predetermined number of extents of each size: as soon as a file crosses a certain size threshold, the next extent size is used. An ASM file can begin with 1 MB extents, and as the file's size increases, the extent size increases to 4, 16, or 64 MB based on predefined file size thresholds. With this feature, fewer extent pointers are needed to describe the file, and less memory is required to manage the extent maps in the shared pool, which would otherwise have been prohibitive in large file configurations. Extent size can vary both across files and within files. Variable Size Extents also enable you to deploy Oracle databases using ASM that are several hundred terabytes, or even several petabytes, in size. The management of variable size extents is completely automated and requires no manual administration.
Oracle Database 11g: New Features for Administrators 2 - 13
ASM Scalability and Performance Enhancements (Continued) However, external fragmentation may occur when a large number of noncontiguous small extents have been allocated and freed, and no contiguous large extents are available. A defragmentation operation is integrated into every rebalance operation, so as a DBA you always have the option of defragmenting a disk group by executing a rebalance operation. Nevertheless, this should rarely be necessary, because ASM also automatically performs defragmentation during extent allocation if an extent of the desired size is unavailable. This can make some allocation operations take longer. Note: This feature also enables much faster file opens because of the significant reduction in the amount of memory that is required to store file extents.
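As the notes state, any rebalance doubles as a defragmentation pass. A minimal sketch follows; the disk group name dgroupA and the power level are illustrative:

```sql
-- Trigger a rebalance of the disk group; defragmentation is
-- performed as part of the operation
ALTER DISKGROUP dgroupA REBALANCE POWER 4;

-- Track the progress of the rebalance
SELECT group_number, operation, state, power, est_minutes
FROM   v$asm_operation;
```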
Oracle Database 11g: New Features for Administrators 2 - 14
ASM Scalability In Oracle Database 11g
ASM imposes the following limits: • 63 disk groups • 10,000 ASM disks • 4 petabyte per ASM disk • 40 exabyte of storage • 1 million files per disk group • Maximum file size: – External redundancy: 140 PB – Normal redundancy: 42 PB – High redundancy: 15 PB
ASM imposes the following limits:
• 63 disk groups in a storage system
• 10,000 ASM disks in a storage system
• 4 petabytes maximum storage for each ASM disk
• 40 exabytes maximum storage for each storage system
• 1 million files for each disk group
• Maximum file sizes depend on the redundancy type of the disk group used: 140 PB for external redundancy (currently greater than the maximum possible database file size), 42 PB for normal redundancy, and 15 PB for high redundancy
Note: In Oracle Database 10g, the maximum ASM file size for external redundancy was 35 TB.
Oracle Database 11g: New Features for Administrators 2 - 15
SYSASM Overview • SYSASM role to manage ASM instances, avoiding overlap between DBAs and storage administrators
SQL> CONNECT / AS SYSASM
SQL> CREATE USER ossysasmusername IDENTIFIED BY passwd;
SQL> GRANT SYSASM TO ossysasmusername;
SQL> CONNECT ossysasmusername/passwd AS SYSASM;
SQL> DROP USER ossysasmusername;
• SYSDBA will be deprecated for ASM administration: – Oracle Database 11g Release 1 behaves as in 10g – In future releases, SYSDBA privileges will be restricted in ASM instances
SYSASM Overview This feature introduces a new SYSASM role that is specifically intended for performing ASM administration tasks. Using the SYSASM role instead of the SYSDBA role improves security by separating ASM administration from database administration. As of Oracle Database 11g Release 1, the OS group for SYSASM and SYSDBA is the same, and the default installation group for SYSASM is dba. In a future release, separate groups will have to be created, and SYSDBA users will be restricted in ASM instances. Currently, as a member of the dba group, you can connect to an ASM instance using the first statement above. You also have the option of using the combination of the CREATE USER and GRANT SYSASM SQL statements from an ASM instance to create a new SYSASM user. This is possible as long as the name of the user is an existing OS user name. These commands update the password file of each ASM instance and do not need the instance to be up and running. Similarly, you can revoke the SYSASM role from a user using the REVOKE command, and you can drop a user from the password file using the DROP USER command. Note: With Oracle Database 11g Release 1, if you log in to an ASM instance as SYSDBA, warnings are written in the corresponding alert.log file.
Oracle Database 11g: New Features for Administrators 2 - 16
Using EM to Manage ASM Users EM allows you to manage the users who access the ASM instance through remote connection (using password file authentication). These users are used exclusively for the ASM instance. You only have this functionality when connected as the SYSASM user. It is hidden if you connect as SYSDBA or SYSOPER users. When you click the Create button, the Create User page is displayed. When you click the Edit button the Edit User page is displayed. By clicking the Delete button, you can delete the created users. Note: Oracle Database 11g adds the SYSASM role to the ASM instance login page.
Oracle Database 11g: New Features for Administrators 2 - 17
ASM Disk Group Compatibility
• Compatibility of each disk group is separately controllable: – ASM compatibility controls ASM metadata on disk structure – RDBMS compatibility controls minimum client level – Useful with heterogeneous environments
• Setting disk group compatibility is irreversible
[Slide diagram: a DB instance can use an ASM disk group, served by an ASM instance, when the database's COMPATIBLE parameter is >= the disk group's COMPATIBLE.RDBMS attribute.]
Oracle Database 11g: New Features for Administrators 10 - 9
Deleting Backed Up Files
1. Configuring a deletion policy:
CONFIGURE ARCHIVELOG DELETION POLICY TO BACKED UP 2 TIMES TO DEVICE TYPE sbt;
Deleting Backed Up Files 1. Assume that you have an archived redo log deletion policy as shown in step 1. 2. The DELETE ... ARCHIVELOG command deletes all archived logs that meet the requirements of the configured deletion policy, which specifies that they must be backed up twice to tape. The DELETE INPUT and DELETE OBSOLETE commands work in the same way. 3. The third example assumes that you have two archiving destinations set: /arch1 and /arch2. The command backs up one archived redo log for each unique sequence number. For example, if archived redo log 1000 is in both directories, RMAN backs up only one copy of this log. The DELETE INPUT clause with the ALL keyword specifies that RMAN should delete all archived redo logs from both archiving directories after the backup. With the configuration in step 1, the DELETE INPUT clause will not delete an archived redo log until it has been backed up twice to tape.
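The three steps described above can be sketched as RMAN commands. The policy and commands follow the notes; the sbt device is assumed to be a configured tape channel:

```sql
-- 1. Deletion policy: archived logs must be backed up twice to tape
CONFIGURE ARCHIVELOG DELETION POLICY TO BACKED UP 2 TIMES TO DEVICE TYPE sbt;

-- 2. Delete only those archived logs that satisfy the policy
DELETE ARCHIVELOG ALL;

-- 3. Back up one copy of each log sequence from /arch1 and /arch2,
--    then delete every copy that the policy allows to go
BACKUP DEVICE TYPE sbt ARCHIVELOG ALL DELETE ALL INPUT;
```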
Oracle Database 11g: New Features for Administrators 10 - 10
Duplicating a Database
• With network (no backups required) • Including customized SPFILE • Via Enterprise Manager or RMAN command line
Duplicating a Database Prior to Oracle Database 11g, you could create a duplicate database with RMAN for testing or for standby. It required the source database, a backup copy on the source host or on tape, a copy of the backup on the destination system, and the destination database itself. Oracle Database 11g greatly simplifies this process. You can instruct the source database to make online image copies and archived log copies directly to the auxiliary instance by using Enterprise Manager or the FROM ACTIVE DATABASE clause of the RMAN DUPLICATE command. The database files come from a TARGET (source) database and are copied over an inter-instance network connection to a destination (AUXILIARY) instance. RMAN then uses a "memory script" (one that is contained only in memory) to complete recovery and open the database.
Oracle Database 11g: New Features for Administrators 10 - 11
Active Database Duplication Usage Notes for Active Database Duplication: • Oracle Net must be aware of the source and destination databases. The FROM ACTIVE DATABASE clause implies network action. • If the source database is open, it must have archive logging enabled. • If the source database is in the mounted state (and is not a standby database), it must have been shut down cleanly. • Availability of the source database is not affected by active database duplication, but the source database instance does provide the CPU cycles and network bandwidth. Enterprise Manager Interface In Enterprise Manager, select Data Movement > Clone Database.
Oracle Database 11g: New Features for Administrators 10 - 12
Active Database Duplication Usage Notes for Active Database Duplication Password files are copied to the destination. The destination must have the same SYS user password as the source. In other words, at the beginning of the active database duplication process, both databases (source and destination) must have password files. When you duplicate a standby database, the password file from the primary database overwrites the current (temporary) password file on the standby database. When you use the command line and do not duplicate for a standby database, you need to use the PASSWORD clause (with the FROM ACTIVE DATABASE clause of the RMAN DUPLICATE command).
Oracle Database 11g: New Features for Administrators 10 - 13
Customize Destination Options Prior to Oracle Database 11g, the SPFILE parameter file was not copied, because it requires alterations appropriate for the destination environment. You had to copy the SPFILE to the new location, edit it, and specify it when starting the instance in NOMOUNT mode or on the RMAN command line, to be used before opening the newly copied database. With Oracle Database 11g, you provide your list of parameters and desired values, and the system sets them. The most obvious parameters are those whose values contain a directory specification. All parameter values that match your choice (with the exception of the DB_FILE_NAME_CONVERT and LOG_FILE_NAME_CONVERT parameters) are replaced. Note the case sensitivity of parameters: the case must match for PARAMETER_VALUE_CONVERT. With the FILE_NAME_CONVERT parameters, pattern matching is OS specific. This functionality is equivalent to pausing the database duplication after restoring the SPFILE and issuing ALTER SYSTEM SET commands to modify the parameter file (before the instance is mounted). The example shows how to clone a database on the same host and in the same Oracle home, with different top-level disk locations: the source directories are under u01, the destination directories under u31. You need to confirm your choice.
Oracle Database 11g: New Features for Administrators 10 - 15
Database Duplication: Job Run The example of the Job Run page shows the following steps:
1. Source Preparation
2. Create Control File
3. Destination Directories Creation
4. Copy Initialization and Password Files
* Skip Copy or Transfer Controlfile
5. Destination Preparation
6. Duplicate Database
* Skip Creating Standby Controlfile
* Skip Switching Clone Type
7. Recover Database
8. Add Temporary Files
9. Add EM Target
10. Cleanup Source Temporary Directory
Oracle Database 11g: New Features for Administrators 10 - 19
The RMAN DUPLICATE Command
DUPLICATE TARGET DATABASE TO aux
FROM ACTIVE DATABASE
SPFILE
  PARAMETER_VALUE_CONVERT '/u01', '/u31'
  SET SGA_MAX_SIZE = '200M'
  SET SGA_TARGET = '125M'
  SET LOG_FILE_NAME_CONVERT = '/u01','/u31'
DB_FILE_NAME_CONVERT '/u01','/u31';
The RMAN DUPLICATE Command The example assumes that you have previously connected to both the source (TARGET) and the destination (AUXILIARY) instance, and that the two instances have a common directory structure but different top-level disks. The destination instance uses automatically configured channels. • This RMAN DUPLICATE command duplicates an open database. • The FROM ACTIVE DATABASE clause indicates that you are not using backups (it implies network action) and that the target is either open or mounted. • The SPFILE clause indicates that the SPFILE will be restored and modified before the database is opened. • Each repeating SET clause essentially issues an ALTER SYSTEM SET param = value SCOPE=SPFILE command. You can provide as many of these as necessary. Prerequisites: The AUXILIARY instance • Is in NOMOUNT state, having been started with a minimal pfile. • The pfile requires only the DB_NAME and REMOTE_LOGIN_PASSWORDFILE parameters. • The password file must exist and have the same SYS user password as the target. • The directory structure must be in place with the proper permissions. • Connect to the AUXILIARY instance using a net service name, as the SYS user.
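The prerequisites above can be sketched as follows. This is a minimal, illustrative setup: the instance name aux, the file paths, and the password are assumptions, not values from the course.

```sql
-- Minimal pfile for the auxiliary instance (e.g. initaux.ora):
--   db_name=aux
--   remote_login_passwordfile=exclusive
--
-- Password file with the same SYS password as the target,
-- created at the OS level (illustrative):
--   orapwd file=$ORACLE_HOME/dbs/orapwaux password=oracle
--
-- Then start the auxiliary instance in NOMOUNT state from SQL*Plus:
STARTUP NOMOUNT PFILE='/u01/app/oracle/dbs/initaux.ora';
```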
Oracle Database 11g: New Features for Administrators 10 - 20
Duplicating a Standby Database
DUPLICATE TARGET DATABASE
FOR STANDBY
FROM ACTIVE DATABASE
SPFILE
  PARAMETER_VALUE_CONVERT '/u01', '/u31'
  SET "DB_UNIQUE_NAME"="FOO"
  SET SGA_MAX_SIZE = "200M"
  SET SGA_TARGET = "125M"
  SET LOG_FILE_NAME_CONVERT = '/u01','/u31'
DB_FILE_NAME_CONVERT '/u01','/u31';
Duplicating a Standby Database The example assumes that you are connected to the target and auxiliary instances and that the two environments have the same disk and directory structure. The FOR STANDBY clause initiates the creation of a standby database without using backups. The example uses "u01" as the disk of the source and "u31" as the top-level destination directory. All parameter values that match your choice (with the exception of the DB_FILE_NAME_CONVERT and LOG_FILE_NAME_CONVERT parameters) are replaced in the SPFILE.
Oracle Database 11g: New Features for Administrators 10 - 21
RMAN Multi-Section Backups Overview
Multi-Section backups: • Created by RMAN • With your specified size value • Processed independently (serial or in parallel) • Producing multi-piece backup sets
RMAN Multi-Section Backups Overview Oracle data files can be up to 128 TB in size. In prior versions, the smallest unit of RMAN backup was an entire file, which is not practical with such large files. In Oracle Database 11g, if you specify the SECTION SIZE option, RMAN can break up large files into sections and back up and restore these sections independently. Each file section is a contiguous range of blocks in a file. Each file section can be processed independently, either serially or in parallel. Backing up a file in separate sections improves performance and allows large file backups to be restarted. A multi-section backup job produces a multi-piece backup set; each piece contains one section of the file. All sections of a multi-section backup, except perhaps the last, are the same size. There is a maximum of 256 sections per file. Tip: You should not apply a high degree of parallelism to back up a large file that resides on a small number of disks. This feature is built into RMAN; no installation is required beyond the normal installation of Oracle Database 11g. COMPATIBLE must be set to at least 11.0, because earlier releases cannot restore multi-section backups. In Enterprise Manager, select Availability > Backup Settings > Backup Set (tabbed page).
Oracle Database 11g: New Features for Administrators 10 - 22
Using RMAN Multi-Section Backups
BACKUP and VALIDATE DATAFILE command option: SECTION SIZE <integer> [K | M | G]
Using RMAN Multi-Section Backups The BACKUP and VALIDATE DATAFILE commands accept a new option: SECTION SIZE <integer> [K | M | G]. Specify your planned size for each backup section. The option is available at both the backup-command and backup-spec level, so you can apply different section sizes to different files in the same backup job. Viewing metadata about your multi-section backups: • The V$BACKUP_SET and RC_BACKUP_SET views have a MULTI_SECTION column that indicates whether this is a multi-section backup. • The V$BACKUP_DATAFILE and RC_BACKUP_DATAFILE views have a SECTION_SIZE column that specifies the number of blocks in each section of a multi-section backup. Zero means a whole-file backup.
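A hedged example of the option and the views described above; the data file number and section size are illustrative:

```sql
-- Back up data file 7 in 32 GB sections; with multiple channels
-- allocated, the sections can be backed up in parallel
BACKUP DATAFILE 7 SECTION SIZE 32G;

-- Validation supports the same option
VALIDATE DATAFILE 7 SECTION SIZE 32G;

-- Was the backup multi-section, and how many blocks per section?
SELECT s.multi_section, d.section_size
FROM   v$backup_set s
JOIN   v$backup_datafile d
       ON s.set_stamp = d.set_stamp
      AND s.set_count = d.set_count;
```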
Oracle Database 11g: New Features for Administrators 10 - 23
Creating Archival Backups with EM If you have business requirements to keep records for a long time, you can use RMAN to create a self-contained archival backup of the database or tablespaces. RMAN does not apply the regular retention policies to this backup. Place your archival backup in a long-term storage area other than the flash recovery area. To keep a backup for a long time, perform the following steps in Enterprise Manager: 1. Select Availability > Schedule Backup > Schedule Customized Backup. 2. Follow the steps of the Schedule Customized Backup wizard until you are on the Settings page. 3. Click Override Current Settings > Policy. In the Override Retention Policy section, you can select to keep a backup for a specified number of days. A restore point is generated based on the backup job name. RMAN Syntax: KEEP {FOREVER|UNTIL TIME 'SYSDATE + n'} RESTORE POINT
Backups created with the KEEP option include the SPFILE, control files, and archived redo log files required to restore the backup. The backup is a snapshot of the database at a point in time and can be used to restore the database to another host.
Oracle Database 11g: New Features for Administrators 10 - 24
Creating Archival Backups with RMAN
Specifying the KEEP clause when the database is online includes both data file and archived log backup sets:
KEEP {FOREVER | UNTIL TIME [=] 'date_string'} | NOKEEP
[RESTORE POINT rsname]
Creating Archival Backups with RMAN Prior to Oracle Database 11g, if you needed to preserve an online backup for a specified amount of time, RMAN assumed that you might want to perform point-in-time recovery for any time within that period, and it retained all the archived logs for that period unless you specified NOLOGS. However, you may have a requirement simply to keep the backup (and whatever is needed to keep it consistent and recoverable) for a specified amount of time, for example, for two years. With Oracle Database 11g, you can use the KEEP option to generate archival database backups that satisfy business or legal requirements. The KEEP option is an attribute of the backup set (not of the individual backup piece) or copy. The KEEP option overrides any configured retention policy for this backup. You can retain archival backups so that they are considered obsolete after a specified time (KEEP UNTIL) or never (KEEP FOREVER). The KEEP FOREVER clause requires the use of a recovery catalog. The RESTORE POINT clause creates a restore point in the control file that assigns a name to the specific SCN to which this backup can be restored. RMAN includes the data files, the archived log files (only those needed to recover an online backup), and the relevant autobackup files. All of these files must go to the same media family (or group of tapes) and have the same KEEP attributes.
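An archival backup as described above might be created as follows; the tag, retention period, and restore point name are illustrative values, not from the course:

```sql
-- Keep this backup, and everything needed to restore it, for two
-- years, regardless of the configured retention policy
BACKUP DATABASE
  TAG quarterly
  KEEP UNTIL TIME 'SYSDATE + 730'
  RESTORE POINT fy08q1;
```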
Oracle Database 11g: New Features for Administrators 10 - 25
Managing Archival Database Backups
1. Archiving a database backup:
CONNECT TARGET /
CONNECT CATALOG rman/rman@catdb
CHANGE BACKUP TAG 'consistent_db_bkup' KEEP FOREVER;
2. Changing the status of a database copy:
CHANGE COPY OF DATABASE CONTROLFILE NOKEEP;
Managing Archival Database Backups The CHANGE command changes the exemption status of a backup or copy in relation to the configured retention policy. For example, you can specify CHANGE ... NOKEEP to make a backup that is currently exempt from the retention policy eligible for OBSOLETE status. The first example changes a consistent backup into an archival backup, which you plan to store offsite. Because the database is consistent and therefore requires no recovery, you do not need to save archived redo logs with the backup. The second example specifies that any long-term image copies of data files and control files should lose their exempt status and so become eligible to be obsolete according to the existing retention policy.
Deprecated clauses: KEEP [LOGS | NOLOGS]
Preferred syntax: KEEP RESTORE POINT
Note: The RESTORE POINT option is not valid with CHANGE. You cannot use CHANGE ... UNAVAILABLE or KEEP for files stored in the flash recovery area.
Oracle Database 11g: New Features for Administrators 10 - 26
Managing Recovery Catalogs
Managing recovery catalogs: 1. Create the recovery catalog. 2. Register your target databases in the recovery catalog. 3. If desired, merge recovery catalogs. NEW 4. If needed, catalog any older backups. 5. If needed, create virtual recovery catalogs for specific users. NEW 6. Protect the recovery catalog.
Managing Recovery Catalogs 1. Create the recovery catalog. 2. Register your target databases in the recovery catalog. This step enables RMAN to store metadata for the target databases in the recovery catalog. 3. If desired, you can also use the IMPORT CATALOG command to merge recovery catalogs. 4. If needed, catalog any older backups whose records are no longer stored in the target control file. 5. If needed, create virtual recovery catalogs for specific users and determine the metadata to which they are permitted access. For more details, see the lesson titled Security New Features. 6. Protect the recovery catalog by including it in your backup and recovery strategy. The recovery catalog contains metadata about RMAN operations for each registered target database. The catalog includes the following types of metadata: • Data file and archived redo log backup sets and backup pieces • Data file copies • Archived redo logs and their copies • Tablespaces and data files on the target database • Stored scripts, which are named user-created sequences of RMAN commands • Persistent RMAN configuration settings
Oracle Database 11g: New Features for Administrators 10 - 27
Managing Recovery Catalogs (continued) The enrolling of a target database in a recovery catalog for RMAN use is called registration. The recommended practice is to register all of your target databases in a single recovery catalog. For example, you can register the prod1, prod2, and prod3 databases in a single catalog owned by the catowner schema in the catdb database. The owner of a centralized recovery catalog, which is also called the base recovery catalog, can grant or revoke restricted access to the catalog to other database users. All metadata is stored in the base catalog schema. Each restricted user has full read-write access to his own metadata, which is called a virtual private catalog. The recovery catalog obtains crucial RMAN metadata from the control file of each registered target database. The resynchronization of the recovery catalog ensures that the metadata that RMAN obtains from the control files is current. You can use a stored script as an alternative to a command file for managing frequently used sequences of RMAN commands. The script is stored in the recovery catalog rather than on the file system. A local stored script is associated with the target database to which RMAN is connected when the script is created, and can only be executed when you are connected to this target database. A global stored script can be run against any database registered in the recovery catalog. You can use a recovery catalog in an environment in which you use or have used different versions of the database. As a result, your environment can have different versions of the RMAN client, recovery catalog database, recovery catalog schema, and target database. You can now merge one recovery catalog (or metadata for specific databases in the catalog) into another recovery catalog for ease of management.
The IMPORT CATALOG Command
With the IMPORT CATALOG command, you import the metadata from one recovery catalog schema into a different catalog schema. If you created catalog schemas of different versions to store metadata for multiple target databases, this command enables you to maintain a single catalog schema for all databases.
1. RMAN must be connected to the destination recovery catalog (for example, the cat111 schema), which is the catalog into which you want to import catalog data. This is the first step in all examples above.
The basic syntax is:
IMPORT CATALOG <connect string>
  [DBID = <dbid> [, <dbid>,…]]
  [DB_NAME = <dbname> [, <dbname>,…]];
To guarantee undo retention for a tablespace, use the following command:
SQL> ALTER TABLESPACE undotbs1 RETENTION GUARANTEE;
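A hedged sketch of the IMPORT CATALOG command in use: cat111 is the destination schema named in the text, while cat102 and srcdb are illustrative names for a source catalog schema and its database.

```sql
-- Connect to the destination catalog first, then import
RMAN> CONNECT CATALOG cat111@catdb
RMAN> IMPORT CATALOG cat102@srcdb;                  -- merge the whole source catalog
RMAN> IMPORT CATALOG cat102@srcdb DB_NAME = prod2;  -- or only one database's metadata
```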
To return a guaranteed undo tablespace to its normal setting, use the following command: SQL> ALTER TABLESPACE undotbs1 RETENTION NOGUARANTEE;
Backup Optimization
Prior to Oracle Database 11g, RMAN had two ways of eliminating blocks from the backup piece, which apply only to full backups:
• Null block compression: Blocks that have never been used are not backed up.
• Unused block compression: Blocks that are not currently in use are not backed up.
Backup Optimization (continued)
In Oracle Database 11g, undo data that is not needed for transaction recovery (for example, undo for committed transactions) is not backed up. The benefit is reduced overall backup time and storage, because undo that applies to committed transactions is no longer backed up. This optimization is automatically enabled.
Preparing Your Database for Flashback
To enable flashback features for an application, you must perform these tasks:
• Create an undo tablespace with enough space to keep the required data for flashback operations. The more often users update the data, the more space is required. The database administrator usually calculates the space requirement. If you are uncertain about your space requirements, you can start with an automatically extensible undo tablespace, observe it through one business cycle (for example, one or two days), collect undo block information with the V$UNDOSTAT view, calculate your space requirements, and use them to create an appropriately sized fixed undo tablespace. (The calculation formula is in the Oracle Database Administrator's Guide.)
• By default, Automatic Undo Management is enabled. If needed, enable Automatic Undo Management, as explained in the Oracle Database Administrator's Guide.
• For a fixed-size undo tablespace, the Oracle database automatically tunes the system to give the undo tablespace the best possible undo retention.
• For an automatically extensible undo tablespace (the default), the Oracle database retains undo data to satisfy, at a minimum, the retention period needed by the longest-running query and the undo retention threshold specified by the UNDO_RETENTION parameter.
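The undo block information mentioned above can be collected with a query like the following. This is a sketch of the sizing approach; the exact formula is in the Administrator's Guide, and the column names used here are those of V$UNDOSTAT.

```sql
-- Peak undo generation rate (undo blocks per second) observed so far
SELECT MAX(undoblks / ((end_time - begin_time) * 86400)) AS undo_blks_per_sec
FROM   v$undostat;

-- Rough size estimate: desired retention (seconds)
--                      x peak rate (blocks/second) x DB block size.
-- The block size can be read from the db_block_size initialization parameter.
```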
Preparing Your Database for Flashback (continued)
You can query V$UNDOSTAT.TUNED_UNDORETENTION to determine the amount of time for which undo is retained for the current undo tablespace. Setting the UNDO_RETENTION parameter does not guarantee that unexpired undo data is not overwritten. If the system needs more space, the Oracle database can overwrite unexpired undo with more recently generated undo data.
• Specify the RETENTION GUARANTEE clause for the undo tablespace to ensure that unexpired undo data is not discarded.
• Grant flashback privileges to users, roles, or applications that need to use flashback features.
To satisfy long retention requirements, create a flashback data archive.
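Granting flashback privileges, as mentioned in the second bullet, can be sketched as follows. The hr.regions table appears later in this lesson; the grantee scott is an illustrative name.

```sql
-- Allow a user to run flashback queries against a specific table
GRANT FLASHBACK ON hr.regions TO scott;
GRANT SELECT    ON hr.regions TO scott;

-- Or grant broader flashback access
GRANT FLASHBACK ANY TABLE TO scott;
GRANT EXECUTE ON dbms_flashback TO scott;
```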
Flashback Data Archive
Original data in buffer cache
Undo data
DML operations Background process collects and writes original data to a flashback data archive
Flashback Data Archive
A flashback data archive allows you to automatically track and archive the data in tables enabled for flashback data archive. This ensures that flashback queries obtain SQL-level access to the versions of database objects without getting a snapshot-too-old error. A flashback data archive provides the ability to track and store all transactional changes to a "tracked" table over its lifetime. It is no longer necessary to build this intelligence into your application. Flashback data archives are useful for compliance, audit reports, data analysis, and decision support systems.
The flashback data archive background process starts with the database. A flashback data archive consists of one or more tablespaces, or parts thereof. You can have multiple flashback data archives, each configured with a retention duration. Based on your retention requirements, you should create different flashback data archives: for example, one for all records that must be kept for two years and another for all records that must be kept for five years. The database automatically purges all historical information on the day after the retention period expires.
Flashback Data Archive Process
1. Create the Flashback Data Archive.
2. Specify the default Flashback Data Archive.
3. Enable the Flashback Data Archive.
4. View Flashback Data Archive data.
Flashback Data Archive Process The first step is to create a Flashback Data Archive. A Flashback Data Archive consists of one or more tablespaces. You can have multiple Flashback Data Archives. Second, you can specify a default Flashback Data Archive for the system. A Flashback Data Archive is configured with retention time. Data archived in the Flashback Data Archive is retained for the retention time. Third, you can enable flashback archiving (and then disable it again) for a table. While flashback archiving is enabled for a table, some DDL statements are not allowed on that table. By default, flashback archiving is off for any table. Last, you can examine the Flashback Data Archives. There are static data dictionary views that you can query for information about Flashback Data Archives.
Flashback Data Archive Scenario
Using a flashback data archive to access historical data:

-- 1. Create the Flashback Data Archive
CREATE FLASHBACK ARCHIVE DEFAULT fla1
  TABLESPACE tbs1 QUOTA 10G RETENTION 5 YEAR;

-- 2. Specify the default Flashback Data Archive
ALTER FLASHBACK ARCHIVE fla1 SET DEFAULT;

-- 3. Enable Flashback Data Archive
ALTER TABLE inventory FLASHBACK ARCHIVE;
ALTER TABLE stock_data FLASHBACK ARCHIVE;

-- 4. View Flashback Data Archive data
SELECT product_number, product_name, count
FROM   inventory AS OF TIMESTAMP
       TO_TIMESTAMP('2007-01-01 00:00:00', 'YYYY-MM-DD HH24:MI:SS');
Flashback Data Archive Scenario
You create a Flashback Data Archive with the CREATE FLASHBACK ARCHIVE statement.
• You can optionally specify the default Flashback Data Archive for the system. If you omit this option, you can still make this Flashback Data Archive the default later.
• You need to provide the name of the Flashback Data Archive.
• You need to provide the name of the first tablespace of the Flashback Data Archive.
• You can specify the maximum amount of space that the Flashback Data Archive can use in the tablespace. The default is unlimited. Unless your space quota on the first tablespace is unlimited, you must specify this value, or else error ORA-55621 ensues.
• You need to provide the retention time (the number of days that Flashback Data Archive data for the table is guaranteed to be stored).
In step 1 of the example, a default Flashback Data Archive named fla1 is created that uses up to 10 GB of tablespace tbs1 and whose data is retained for five years.
In the second step, the default Flashback Data Archive is specified. By default, the system has no Flashback Data Archive. You can set it in one of two ways:
1. Specify the name of an existing Flashback Data Archive in the SET DEFAULT clause of the ALTER FLASHBACK ARCHIVE statement.
2. Include DEFAULT in the CREATE FLASHBACK ARCHIVE statement when you create a Flashback Data Archive.
In the third step, Flashback Data Archive is enabled. If Automatic Undo Management is disabled, you receive error ORA-55614 when you try to modify the table.
Flashback Data Archive Scenario (continued)
To enable flashback archiving for a table, include the FLASHBACK ARCHIVE clause in either the CREATE TABLE or ALTER TABLE statement. In the FLASHBACK ARCHIVE clause, you can specify the Flashback Data Archive where the historical data for the table will be stored. The default is the default Flashback Data Archive for the system. To disable flashback archiving for a table, specify NO FLASHBACK ARCHIVE in the ALTER TABLE statement.
The last statement shown in the previous slide shows how to retrieve the inventory of all items at the beginning of the year 2007.
Continuing the previous examples:
• Example 4 adds up to 5 GB of the TBS3 tablespace to the FLA1 flashback data archive.
• Example 5 changes the retention time for the FLA1 flashback data archive to two years.
• Example 6 purges all historical data older than one day from the FLA1 flashback data archive. Normally, purging is done automatically on the day after your retention time expires. You can also override this for ad hoc cleanup.
• Example 7 drops the FLA1 flashback data archive and its historical data, but not its tablespaces.
With the ALTER FLASHBACK ARCHIVE command, you can:
• Change the retention time of a flashback data archive
• Purge some or all of its data
• Add, modify, and remove tablespaces
Note: Removing all tablespaces of a flashback data archive causes an error.
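Examples 4 through 7 referenced above appear only on the slide and are not reproduced in these notes. Based on the descriptions, they would look roughly like the following sketch (fla1 and tbs3 are the names used in the text):

```sql
-- Example 4: add up to 5 GB of tablespace TBS3 to the archive
ALTER FLASHBACK ARCHIVE fla1 ADD TABLESPACE tbs3 QUOTA 5G;

-- Example 5: change the retention time to two years
ALTER FLASHBACK ARCHIVE fla1 MODIFY RETENTION 2 YEAR;

-- Example 6: purge all historical data older than one day
ALTER FLASHBACK ARCHIVE fla1
  PURGE BEFORE TIMESTAMP (SYSTIMESTAMP - INTERVAL '1' DAY);

-- Example 7: drop the archive and its history (tablespaces remain)
DROP FLASHBACK ARCHIVE fla1;
```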
Flashback Data Archive
Some scenarios in which you may wish to use a flashback data archive:
• To access historical data
• To generate reports
• For Information Lifecycle Management (ILM)
• For auditing
• To recover data
• To enforce digital shredding
Viewing Flashback Data Archives
You can use the static data dictionary views to view tracked tables and flashback data archive metadata. To access the USER_FLASHBACK views, you need table ownership privileges. For the others, you need SYSDBA privileges.
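A sketch of querying the dictionary views mentioned above (the DBA_* variants, so SYSDBA privileges are assumed; verify the column lists against the Oracle Database Reference):

```sql
-- Archives defined in the database
SELECT flashback_archive_name, retention_in_days, status
FROM   dba_flashback_archive;

-- Tablespaces backing each archive
SELECT flashback_archive_name, tablespace_name, quota_in_mb
FROM   dba_flashback_archive_ts;

-- Tables tracked by an archive
SELECT owner_name, table_name, flashback_archive_name
FROM   dba_flashback_archive_tables;
```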
Flashback Data Archive DDL Restrictions
Using any of the following DDL statements on a table enabled for Flashback Data Archive causes error ORA-55610:
• An ALTER TABLE statement that does any of the following:
  – Drops, renames, or modifies a column
  – Performs partition or subpartition operations
  – Converts a LONG column to a LOB column
  – Includes an UPGRADE TABLE clause, with or without an INCLUDING DATA clause
Guidelines
• You can use the DBMS_FLASHBACK.ENABLE and DBMS_FLASHBACK.DISABLE procedures to enable and disable the Flashback Data Archives.
• Use Flashback Query, Flashback Version Query, or Flashback Transaction Query in SQL code that you write, for convenience.
• To obtain an SCN to use later with a flashback feature, you can use the DBMS_FLASHBACK.GET_SYSTEM_CHANGE_NUMBER function.
• To compute or retrieve a past time to use in a query, use a function return value as a timestamp or SCN argument. For example, add an INTERVAL value to, or subtract one from, the value of the SYSTIMESTAMP function.
• To ensure database consistency, always perform a COMMIT or ROLLBACK operation before querying past data.
• Remember that all flashback processing uses the current session settings, such as national language and character set, not the settings that were in effect at the time being queried.
• To query past data at a precise time, use an SCN. If you use a timestamp, the actual time queried might be up to 3 seconds earlier than the time you specify. Oracle Database uses SCNs internally and maps them to timestamps at a granularity of 3 seconds.
• You cannot retrieve past data from a dynamic performance (V$) view. A query on such a view always returns current data. However, you can perform queries on past data in static data dictionary views, such as *_TABLES.
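The SCN and interval techniques from these guidelines can be sketched as follows. The hr.regions table is used later in this lesson; the ten-minute interval is an arbitrary illustration.

```sql
-- Capture the current SCN for later flashback use
VARIABLE scn NUMBER
EXECUTE :scn := DBMS_FLASHBACK.GET_SYSTEM_CHANGE_NUMBER;

-- Query past data using interval arithmetic on SYSTIMESTAMP
SELECT region_name
FROM   hr.regions AS OF TIMESTAMP (SYSTIMESTAMP - INTERVAL '10' MINUTE);

-- Or query as of the captured SCN (precise; no 3-second mapping granularity)
SELECT region_name FROM hr.regions AS OF SCN :scn;
```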
Flashback Transaction
• Setting up flashback transaction prerequisites
• Stepping through a possible workflow
• Using the Flashback Transaction Wizard
• Querying transactions with and without dependencies
• Choosing back-out options and flashing back transactions
• Reviewing the results
Prerequisites
To use this functionality, supplemental logging must be enabled and the correct privileges established. For example, the HR user in the HR schema decides to use Flashback Transaction for the REGIONS table. The SYSDBA performs the following setup steps in SQL*Plus:

ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;
ALTER DATABASE ADD SUPPLEMENTAL LOG DATA (PRIMARY KEY) COLUMNS;
GRANT EXECUTE ON dbms_flashback TO hr;
GRANT SELECT ANY TRANSACTION TO hr;
Flashing Back a Transaction
• You can flash back a transaction with Enterprise Manager or the command line.
• EM uses the Flashback Transaction Wizard, which calls the DBMS_FLASHBACK.TRANSACTION_BACKOUT procedure with the NOCASCADE option.
• If the PL/SQL call finishes successfully, the transaction does not have any dependencies, and the single transaction is backed out successfully.
Flashing Back a Transaction
Security privileges: To flash back or back out a transaction, that is, to create a compensating transaction, you must have the SELECT, FLASHBACK, and DML privileges on all affected tables.
Conditions of use
• Transaction back-out is not supported across conflicting DDL.
• Transaction back-out inherits data type support from LogMiner. See the Oracle Database 11g documentation for supported data types.
Recommendations
• When you discover the need for a transaction back-out, the sooner you start the back-out operation, the better the performance. Large redo logs and high transaction rates result in slower transaction back-out operations.
• Provide a transaction name for the back-out operation to facilitate later auditing. If you do not provide a transaction name, it is automatically generated for you.
Possible Workflow
• Viewing data in a table • Discovering a logical problem • Using Flashback Transaction – – – –
Performing a query Selecting a transaction Flashing back a transaction (with no conflicts) Choosing other back-out options (if conflicts exists)
Possible Workflow
Assume that several transactions occurred, as indicated below:

connect hr/hr
INSERT INTO hr.regions VALUES (5,'Pole');
COMMIT;
UPDATE hr.regions SET region_name='Poles' WHERE region_id = 5;
UPDATE hr.regions SET region_name='North and South Poles' WHERE region_id = 5;
COMMIT;
INSERT INTO hr.countries VALUES ('TT','Test Country',5);
COMMIT;

connect sys/<password> as sysdba
ALTER SYSTEM ARCHIVE LOG CURRENT;
Viewing Data To view the data in a table in Enterprise Manager, select Schema > Tables. While viewing the content of the HR.REGIONS table, you discover a logical problem. Region 5 is misnamed. You decide to immediately address this issue.
Flashback Transaction Wizard
In Enterprise Manager, select Schema > Tables > HR.REGIONS, then select "Flashback Transaction" in the Actions drop-down list and click Go. This invokes the Flashback Transaction Wizard for your selected table. The Flashback Transaction: Perform Query page is displayed. Select the appropriate time range and add query parameters. (The more specific you can be, the shorter the search of the Flashback Transaction Wizard.)
In Enterprise Manager, Flashback Transaction and LogMiner are seamlessly integrated (as this page demonstrates). Without Enterprise Manager, use the DBMS_FLASHBACK.TRANSACTION_BACKOUT procedure, which is described in the PL/SQL Packages and Types Reference. Essentially, you pass an array of transaction IDs as the starting point of the dependency search. For example:

CREATE TYPE XID_ARRAY AS VARRAY(100) OF RAW(8);

CREATE OR REPLACE PROCEDURE TRANSACTION_BACKOUT(
  numberOfXIDs NUMBER,     -- number of transactions passed as input
  xids         XID_ARRAY,  -- the list of transaction IDs
  options      NUMBER DEFAULT NOCASCADE,    -- how to handle dependent transactions
  timeHint     TIMESTAMP DEFAULT MINTIME    -- time hint on the transaction start
);
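A command-line invocation of the procedure above can be sketched as follows. The XID value shown is hypothetical; obtain real transaction IDs from FLASHBACK_TRANSACTION_QUERY or LogMiner.

```sql
DECLARE
  xids sys.xid_array;
BEGIN
  -- hypothetical XID for illustration only
  xids := sys.xid_array('0500090011220000');

  -- back out the single transaction, failing if dependencies exist
  DBMS_FLASHBACK.TRANSACTION_BACKOUT(1, xids, DBMS_FLASHBACK.NOCASCADE);

  -- review the result, then COMMIT or ROLLBACK the compensating transaction
END;
/
```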
Flashback Transaction Wizard (Continued) The Flashback Transaction: Select Transaction page displays the transactions according to your previously entered specifications. First, display the transaction details to confirm that you are flashing back the correct transaction. Then select the offending transaction and continue with the wizard.
Flashback Transaction Wizard (continued)
The Flashback Transaction Wizard now generates the undo script and flashes back the transaction, but it gives you control over committing this flashback. Before you commit the transaction, you can use the Execute SQL area at the bottom of the Flashback Transaction: Result page to view what the result of your COMMIT will be.
Finishing Up
On the Flashback Transaction: Review page, click the "Show Undo SQL Script" button to view the compensating SQL commands. Click Finish to commit your compensating transaction.
Choosing Other Back-out Options
The TRANSACTION_BACKOUT procedure checks dependencies, such as:
• Write-after-write (WAW)
• Primary and unique constraints
A transaction can have a WAW dependency, which means a dependent transaction updates or deletes a row that has been inserted or updated by an earlier transaction. This can occur, for example, in a master/detail relationship of primary (or unique) and mandatory foreign key constraints.
To understand the difference between the NONCONFLICT_ONLY and NOCASCADE_FORCE options, assume that the T1 transaction changes rows R1, R2, and R3, and the T2 transaction changes rows R1, R3, and R4. In this scenario, both transactions update row R1, so it is a "conflicting" row. The T2 transaction has a WAW dependency on the T1 transaction.
With the NONCONFLICT_ONLY option, R2 and R3 are backed out, because there is no conflict and it is assumed that you know best what to do with the R1 row. With the NOCASCADE_FORCE option, all three rows (R1, R2, and R3) are backed out.
Note: This screenshot is not part of the workflow example; it shows additional details of a more complex situation.
Choosing Other Back-out Options (continued)
The Flashback Transaction Wizard works as follows:
• If the DBMS_FLASHBACK.TRANSACTION_BACKOUT procedure with the NOCASCADE option fails (because there are dependent transactions), you can change the recovery options.
• With the NONCONFLICT_ONLY option, non-conflicting rows within a transaction are backed out, which implies that database consistency is maintained (although transaction atomicity is broken for the sake of data repair).
• If you want to forcibly back out the given transactions, without paying attention to the dependent transactions, use the NOCASCADE_FORCE option. The server simply executes the compensating DML commands for the given transactions in reverse order of their commit times. If no constraints break, you can proceed to commit the changes; otherwise, roll back.
• To initiate the complete removal of the given transactions and all their dependents in a post-order fashion, use the CASCADE option.
(Note: This screenshot is not part of the workflow example; it shows additional details of a more complex situation.)
Final Steps Without EM
After choosing your back-out option, the dependency report is generated in the DBA_FLASHBACK_TXN_STATE and DBA_FLASHBACK_TXN_REPORT views.
• Review the dependency report, which shows all transactions that were backed out.
• Commit the changes to make them permanent.
• Or roll back to discard the changes.
Final Steps Without EM
The DBA_FLASHBACK_TXN_STATE view contains the current state of a transaction: whether it is alive in the system or effectively backed out. This view is atomically maintained with the compensating transaction. For each compensating transaction, there can be multiple rows, where each row provides the dependency relation between the transactions that have been compensated by the compensating transaction.
The DBA_FLASHBACK_TXN_REPORT view provides detailed information about all compensating transactions that have been committed in the database. Each row in this view is associated with one compensating transaction. For a detailed description of these views, see the Oracle Database Reference.
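Reviewing the dependency report from SQL*Plus can be sketched as follows. The column names are assumptions based on the Oracle Database Reference; verify them there before relying on this query.

```sql
-- Which transactions were backed out, and by which compensating transaction
SELECT xid, compensating_xid, backout_mode
FROM   dba_flashback_txn_state;

-- Detailed report for each committed compensating transaction
SELECT compensating_xid, compensating_txn_name
FROM   dba_flashback_txn_report;
```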
LogMiner
• Powerful audit tool for Oracle databases • Direct access to redo logs • User interfaces: – SQL command-line – Graphical User Interface (GUI)
LogMiner
What you already know: LogMiner is a powerful audit tool for Oracle databases, allowing you to easily locate changes in the database, enabling sophisticated data analyses, and providing undo capabilities to roll back logical data corruptions or user errors. LogMiner directly accesses the Oracle redo logs, which are complete records of all activities performed on the database, and the associated data dictionary. The tool offers two interfaces: a SQL command-line interface and a GUI.
What is new: Enterprise Manager Database Control now has an interface for LogMiner. In prior releases, administrators were required to install and use the standalone Java console for LogMiner. With this new interface, administrators have a task-based, intuitive approach to using LogMiner, which improves its manageability. In Enterprise Manager, select Availability > View and Manage Transactions.
LogMiner supports the following activities:
• Specifying query parameters
• Stopping the query and showing partial results, if the query takes a long time
• Partial querying, then showing the estimated complete query time
• Saving the query result
• Re-mining or refining the query based on initial results
• Showing transaction details, dependencies, and the compensating "undo" SQL script
• Flashing back and committing the transaction
Querying Transactions
If you need, for example, to report on the lifecycle of a specific column or research transaction details, you may not know the specific transaction ID. So your first step is to query the redo stream (done internally either in transaction tables or with LogMiner). In Enterprise Manager, select Availability > Browse Transactions. Specify a timeframe and either the username or the table in question to start a query. The Start Time field defaults to the start time of the online log file. You have these basic options:
• If you know at least one table involved in the transaction, you must provide either a time range or an SCN range as additional filter criteria.
• If you know the username, but not the table, you may want to know what else that user did in this time frame.
Refining the Query
You can refine the query with advanced query options. Click Advanced Query on the LogMiner page to specify additional column values and/or additional LogMiner WHERE clauses, such as:
WHERE session_info=
This matches all transactions initiated from the host. Click the Info icon to view all LogMiner options. You can select different combinations to form a WHERE clause. Once the WHERE clause is formed, you can edit it further by typing directly in the WHERE clause text box.
For example, if you want to find transactions that modified a certain column, choose REDO_VALUE, the column name, and "is present". If you then want to refine the query further to show all transactions where the changed value is more than twice the original value, you can specify a WHERE clause like this one:
WHERE DBMS_LOGMNR.MINE_VALUE(REDO_VALUE, 'HR.EMPLOYEES.SALARY') > 2*DBMS_LOGMNR.MINE_VALUE(UNDO_VALUE, 'HR.EMPLOYEES.SALARY');
Reviewing Transactions
Once you click Continue on the first LogMiner page, you see the Processing: Mining Transactions page. It displays, among other things, how many transactions were found and the approximate time to complete the operation. You can stop the query at any time and review the results found so far.
Reviewing Transactions You can review transaction details. Flashback Transaction is covered earlier in this lesson. Click OK to return to the "LogMiner Results" page.
Summary
In this lesson, you should have learned how to:
• Describe transactions and undo
• Describe undo backup optimization
• Prepare your database for flashback
• Create, change, and drop a flashback data archive
• View flashback data archive metadata
• Set up flashback transaction prerequisites
• Query transactions with and without dependencies
• Choose back-out options and flash back transactions
• Use EM LogMiner
• Review transaction details
11g Infrastructure Grid: Server Manageability 12 - 1
Objectives
After completing this lesson, you should be able to:
• Set up the Automatic Diagnostic Repository
• Use the Support Workbench
• Run health checks
• Use the SQL Repair Advisor
Self-Managing Database: Oracle Database 10g
Self-management is an ongoing goal for the Oracle database. Oracle Database 10g marked the beginning of a major effort to make the database easier to use. With Oracle Database 10g, the focus of self-management was on performance and resources.
Self-managing Database: The Next Generation
Manage Performance and Resources Manage Change Manage Fault
Self-Managing Database: The Next Generation
Oracle Database 11g adds two more important axes to the overall self-management goal: change management and fault management. In this lesson, we concentrate on the fault management capabilities introduced in Oracle Database 11g.
Oracle Database 11g R1 Fault Management
Goal: Reduce time to resolution
Change assurance and automatic health checks
Oracle Database 11g R1 Fault Management The goals of the fault diagnosability infrastructure are the following: • Detecting problems proactively • Limiting damage and interruptions after a problem is detected • Reducing problem diagnostic time • Reducing problem resolution time • Simplifying customer interaction with Oracle Support
Ease Diagnosis: Automatic Diagnostic Workflow
An always-on, in-memory tracing facility enables database components to capture diagnostic data upon first failure for critical errors. A special repository, called the Automatic Diagnostic Repository, is automatically maintained to hold diagnostic information about critical error events. This information can be used to create incident packages to be sent to Oracle Support Services for investigation. Here is a possible workflow for a diagnostic session:
1. An incident causes an alert to be raised in EM.
2. The DBA views the alert via the EM Alert page.
3. The DBA drills down to incident and problem details.
4. The DBA or Oracle Support Services decides or asks for that information to be packaged and sent to Oracle Support Services via MetaLink. The DBA can add files to the data to be packaged automatically.
Automatic Diagnostic Repository
(Slide diagram: the DIAGNOSTIC_DEST parameter and the Support Workbench supersede the BACKGROUND_DUMP_DEST and CORE_DUMP_DEST parameters.)
Automatic Diagnostic Repository (ADR) The ADR is a file-based repository for database diagnostic data such as traces, incident dumps and packages, the alert log, health monitor reports, core dumps, and more. It has a unified directory structure across multiple instances and multiple products stored outside of any database. It is therefore available for problem diagnosis when the database is down. Beginning with Oracle Database 11g R1, the database, Automatic Storage Management (ASM), Cluster Ready Services (CRS), and other Oracle products or components store all diagnostic data in the ADR. Each instance of each product stores diagnostic data underneath its own ADR home directory. For example, in a Real Application Clusters environment with shared storage and ASM, each database instance and each ASM instance has a home directory within the ADR. ADR's unified directory structure, consistent diagnostic data formats (UTS) across products and instances, and a unified set of tools enable customers and Oracle Support to correlate and analyze diagnostic data across multiple instances. Starting with Oracle Database 11g R1, the traditional …_DUMP_DEST initialization parameters are ignored. The ADR root directory is known as the ADR base. Its location is set by the DIAGNOSTIC_DEST initialization parameter. If this parameter is omitted or left null, the database sets DIAGNOSTIC_DEST upon startup as follows: If environment variable ORACLE_BASE is set, DIAGNOSTIC_DEST is set to $ORACLE_BASE. If environment variable ORACLE_BASE is not set, DIAGNOSTIC_DEST is set to $ORACLE_HOME/log.
Automatic Diagnostic Repository (ADR)
Within the ADR base, there can be multiple ADR homes, where each ADR home is the root directory for all diagnostic data for a particular instance of a particular Oracle product or component. The location of an ADR home for a database is shown in the above graphic.
Also, two alert files are now generated. One is textual, exactly like the alert file used with previous releases of the Oracle database, and is located under the TRACE directory of each ADR home. In addition, an alert message file conforming to the XML standard is stored in the ALERT subdirectory inside the ADR home. You can view the alert log in text format (with the XML tags stripped) with Enterprise Manager and with the ADRCI utility.
The graphic on the slide shows you the directory structure of an ADR home. The INCIDENT directory contains multiple subdirectories, where each subdirectory is named for a particular incident and contains dumps pertaining only to that incident. The HM directory contains the checker run reports generated by the Health Monitor. There is also a METADATA directory that contains important files for the repository itself. You can compare this to a database dictionary. This dictionary can be queried using ADRCI.
The ADR Command Interpreter (ADRCI) is a utility that enables you to perform all of the tasks permitted by the Support Workbench, but in a command-line environment. ADRCI also enables you to view the names of the trace files in the ADR, and to view the alert log with XML tags stripped, with and without content filtering. In addition, you can use V$DIAG_INFO to list some important ADR locations.
ADRCI: The ADR Command-Line Tool

• Allows interaction with the ADR from the OS prompt
• Can invoke IPS from the command line instead of EM
• DBAs should nevertheless use the EM Support Workbench:
  – It leverages the same toolkit/libraries that ADRCI is built upon
  – Easy-to-follow GUI

ADRCI> show incident
ADR Home = /u01/app/oracle/product/11.1.0/db_1/log/diag/rdbms/orcl/orcl:
*****************************************************************************
INCIDENT_ID  PROBLEM_KEY                            CREATE_TIME
-----------  -------------------------------------  -----------------------------
1681         ORA-600_dbgris01:1,_addr=0xa9876541    17-JAN-07 09.17.44.843125000…
1682         ORA-600_dbgris01:12,_addr=0xa9876542   18-JAN-07 09.18.59.434775000…
2 incident info records fetched
ADRCI>
ADRCI: The ADR Command-Line Tool
ADRCI is a command-line tool that is part of the fault diagnosability infrastructure introduced in Oracle Database 11g. ADRCI enables you to:
• View diagnostic data within the Automatic Diagnostic Repository (ADR)
• Package incident and problem information into a zip file for transmission to Oracle Support
ADRCI has a rich command set, and can be used in interactive mode or within scripts. In addition, ADRCI can execute scripts of ADRCI commands in the same way that SQL*Plus executes scripts of SQL and PL/SQL commands. There is no need to log in to ADRCI, because the data in the ADR is not intended to be secure; ADR data is secured only by operating system permissions on the ADR directories. The easiest way to package and otherwise manage diagnostic data is with the Support Workbench of Oracle Enterprise Manager. ADRCI provides a command-line alternative to most of the functionality of the Support Workbench, and adds capabilities such as listing and querying trace files. The above example shows an ADRCI session that lists all open incidents stored in the ADR.
Note: For more information about ADRCI, refer to the Oracle Database Utilities guide.
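As a sketch of the trace-file listing capability mentioned above (the commands are as documented for ADRCI in Oracle Database 11g; the name pattern is illustrative):

```text
adrci> show tracefile -t
adrci> show tracefile %mmon% -t
```

The first command lists all trace files in the current ADR home ordered by modification time; the second restricts the listing to file names matching the given pattern.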
11g Infrastructure Grid: Server Manageability 12 - 9
V$DIAG_INFO
SQL> SELECT * FROM V$DIAG_INFO;
NAME
---------------------
Diag Enabled
ADR Base
ADR Home
Diag Trace
Diag Alert
Diag Incident
Diag Cdump
Health Monitor
Default Trace File
Active Problem Count
Active Incident Count
V$DIAG_INFO The V$DIAG_INFO view lists all important ADR locations: • ADR Base: Path of ADR base • ADR Home: Path of ADR home for the current database instance • Diag Trace: Location of the text alert log and background/foreground process trace files • Diag Alert: Location of an XML version of the alert log • … • Default Trace File: Path to the trace file for your session. SQL Trace files are written here.
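For example, to retrieve a single location rather than the full listing (a minimal sketch; the value returned depends on your environment):

```sql
-- Find where the current session's trace file is written
SELECT value
FROM   v$diag_info
WHERE  name = 'Default Trace File';
```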
11g Infrastructure Grid: Server Manageability 12 - 10
Location for Diagnostic Traces
The above table describes the different classes of trace data and dumps that reside in both Oracle Database 10g and Oracle Database 11g. With Oracle Database 11g, there is no distinction between foreground and background trace files: both types of files go into the $ADR_HOME/trace directory, and all non-incident traces are stored in that TRACE subdirectory. This is the main difference from previous releases, where critical error information was dumped into the corresponding process trace files; starting with Oracle Database 11g, incident dumps are placed in files separate from the normal process trace files.
Note: The main difference between a trace and a dump is that a trace is a continuous output, such as when SQL tracing is turned on, whereas a dump is a one-time output in response to an event such as an incident. Also, a core is a binary memory dump that is port-specific.
11g Infrastructure Grid: Server Manageability 12 - 11
Viewing the Alert Log Using Enterprise Manager You can view the alert log with a text editor, with Enterprise Manager, or with the ADRCI utility. To view the alert log with Enterprise Manager: 1. Access the Database Home page in Enterprise Manager. 2. Under Related Links, click Alert Log Contents. The View Alert Log Contents page appears. 3. Select the number of entries to view, and then click Go.
11g Infrastructure Grid: Server Manageability 12 - 12
Viewing the Alert Log Using ADRCI

adrci>> show alert -tail

ADR Home = /u01/app/oracle/diag/rdbms/orcl/orcl:
*************************************************************************
2007-04-16 22:10:50.756000 -07:00
ORA-1654: unable to extend index SYS.I_H_OBJ#_COL# by 128 in tablespace SYSTEM
2007-04-16 22:21:20.920000 -07:00
Thread 1 advanced to log sequence 400
Current log# 3 seq# 400 mem# 0: +DATA/orcl/onlinelog/group_3.266.618805031
Current log# 3 seq# 400 mem# 1: +DATA/orcl/onlinelog/group_3.267.618805047
…
Thread 1 advanced to log sequence 401
Current log# 1 seq# 401 mem# 0: +DATA/orcl/onlinelog/group_1.262.618804977
Current log# 1 seq# 401 mem# 1: +DATA/orcl/onlinelog/group_1.263.618804993
DIA-48223: Interrupt Requested - Fetch Aborted - Return Code [1]
adrci>>

adrci>> SHOW ALERT -P "MESSAGE_TEXT LIKE '%ORA-600%'"

ADR Home = /u01/app/oracle/diag/rdbms/orcl/orcl:
*************************************************************************
adrci>>
Viewing the Alert Log Using ADRCI
You can also use ADRCI to view the contents of your alert log file. Optionally, you can change the current ADR home: use the SHOW HOMES command to list all ADR homes, and the SET HOMEPATH command to change the current ADR home. Ensure that operating system environment variables such as ORACLE_HOME are set properly, and then enter the following command at the operating system prompt: adrci. The utility starts and displays its prompt as shown on the slide. Then use the SHOW ALERT command. To limit the output, you can look at the last records using the -TAIL option. This displays the last portion of the alert log (about 20 to 30 messages), and then waits for more messages to arrive in the alert log. As each message arrives, it is appended to the display, which enables you to perform live monitoring of the alert log. Press CTRL-C to stop waiting and return to the ADRCI prompt. You can also specify the number of lines to print. In addition, you can filter the output of SHOW ALERT, as shown in the bottom example on the slide, where only alert log messages that contain the string 'ORA-600' are displayed.
Note: ADRCI allows you to spool the output to a file, exactly as in SQL*Plus.
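The steps above can be sketched as a single ADRCI session (the home path and line count are illustrative):

```text
adrci> show homes
adrci> set homepath diag/rdbms/orcl/orcl
adrci> spool /tmp/alert_extract.txt
adrci> show alert -p "MESSAGE_TEXT LIKE '%ORA-600%'"
adrci> spool off
adrci> show alert -tail 50
```

The final command prints the last 50 alert log lines and then continues to display new messages until you press CTRL-C.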
11g Infrastructure Grid: Server Manageability 12 - 13
Problems and Incidents

[Diagram] A critical error creates a problem in the ADR, identified by a Problem ID and described by a Problem Key. Incidents (each with its own Incident ID) are created for the problem, automatically or manually, subject to flood control. Each incident carries a status that moves through the states Collecting, Ready, Tracking, Data-Purged, and Closed, with automatic transitions between some states; MMON auto-purges expired incident data. The DBA reviews traces for non-critical errors, and a package is built from the ADR to be sent to Oracle Support.
Problems and Incidents
To facilitate diagnosis and resolution of critical errors, the fault diagnosability infrastructure introduces two concepts for Oracle Database: problems and incidents.
• A problem is a critical error in the database. Problems are tracked in the ADR. Each problem is identified by a unique problem ID and has a problem key, which is a set of attributes that describe the problem. The problem key includes the ORA error number, error parameter values, and other information. Here is a possible list of critical errors:
- All internal errors: ORA-60x errors
- All system access violations (SEGV, SIGBUS)
- ORA-4020 (Deadlock on library object), ORA-8103 (Object no longer exists), ORA-1410 (Invalid ROWID), ORA-1578 (Data block corrupted), ORA-29740 (Node eviction), ORA-255 (Database is not mounted), ORA-376 (File cannot be read at this time), ORA-4030 (Out of process memory), ORA-4031 (Unable to allocate more bytes of shared memory), ORA-355 (The change numbers are out of order), ORA-356 (Inconsistent lengths in change description), ORA-353 (Log corruption), ORA-7445 (Operating system exception)
• An incident is a single occurrence of a problem. When a problem occurs multiple times, as is often the case, an incident is created for each occurrence. Incidents are tracked in the ADR. Each incident is identified by a numeric incident ID, which is unique within an ADR home.
11g Infrastructure Grid: Server Manageability 12 - 14
Problems and Incidents (Continued)
When an incident occurs, the database makes an entry in the alert log, gathers diagnostic data about the incident (a stack trace, the process state dump, and other dumps of important data structures), tags the diagnostic data with the incident ID, and stores the data in an ADR subdirectory created for that incident. Each incident has a problem key and is mapped to a single problem. Two incidents are considered to have the same root cause if their problem keys match. Large amounts of diagnostic information can be created very quickly if a large number of sessions hit the same critical error, yet diagnostic information is needed for only a small number of those incidents. That is why the ADR provides flood control, so that only a certain number of incidents under the same problem can be dumped in a given time interval. Note that flood-controlled incidents still create incident records; they only skip the dump actions. By default, only five dumps per hour are allowed for a given problem. You can view a problem as a set of incidents that are perceived to have the same symptoms. The main reason for introducing this concept is to make it easier for users to manage errors on their systems. For example, a symptom that occurs 20 times should be reported to Oracle only once. Mostly, you will manage problems rather than incidents, using IPS to package a problem to be sent to Oracle Support. Most commonly, incidents are created automatically when a critical error occurs. However, you can also create an incident manually, via the GUI provided by the EM Support Workbench. Manual incident creation is mostly done when you want to report problems that are not accompanied by critical errors raised inside the Oracle code. As time goes by, more and more incidents accumulate in the ADR. A retention policy allows you to specify how long to keep the diagnostic data.
ADR incidents are controlled by two different policies:
• The incident metadata retention policy controls how long the metadata is kept. This policy has a default setting of one year.
• The incident files and dumps retention policy controls how long generated dump files are kept. This policy has a default setting of one month.
You can change these settings using the Incident Package Configuration link on the EM Support Workbench page. Inside the RDBMS component, MMON is responsible for automatically purging expired ADR data.
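These two policies correspond to the ADR control parameters LONGP_POLICY (metadata) and SHORTP_POLICY (files and dumps), expressed in hours, which you can also view and change from ADRCI. A minimal sketch (the values shown are illustrative, not recommendations):

```text
adrci> show control
adrci> set control (SHORTP_POLICY = 360)
adrci> set control (LONGP_POLICY = 4380)
```

Here SHOW CONTROL displays the current retention settings for the current ADR home, and the SET CONTROL commands change the short and long retention policies to 15 days and 6 months, respectively.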
11g Infrastructure Grid: Server Manageability 12 - 15
Problems and Incidents (Continued)
The status of an incident reflects its state. An incident can be in any one of the following states:
• Collecting: The incident has been newly created and is in the process of collecting diagnostic information. In this state, the incident data can be incomplete and should not be packaged, and should be viewed with discretion.
• Ready: The data collection phase has completed. The incident is now ready to be used for analysis, or to be packaged and sent to Oracle Support.
• Tracking: The DBA is working on the incident and prefers that it be kept in the repository indefinitely. You must manually change the incident status to this value.
• Closed: The incident is in a completed state. In this state, the ADR can elect to purge the incident after it passes its retention period.
• Data-Purged: The associated files have been removed from the incident. In some cases, even if the incident files are still physically present, it is not advisable to look at them because they can be in an inconsistent state. Note that the incident metadata itself remains valid for viewing.
If an incident has been in either the Collecting or the Ready state for more than twice its retention length, the incident automatically moves to the Closed state. You can manually purge incident files. For simplicity, problem metadata is internally maintained by the ADR: a problem is automatically created when the first incident with its problem key occurs, and the problem metadata is removed after its last incident is removed from the repository.
Note: It is not possible to disable automatic incident creation for critical errors.
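You can inspect an incident's status and other metadata from ADRCI; a sketch (the incident ID is illustrative):

```text
adrci> show incident -mode detail -p "incident_id=1681"
```

The detail mode displays the incident's status, problem key, create time, and the dump files associated with it.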
11g Infrastructure Grid: Server Manageability 12 - 16
Incident Packaging Service (IPS)
• Uses rules to correlate all relevant dumps and traces from ADR for a given problem and allow you to package them to ship to Oracle Support • Rules can involve files that were generated around the same time, associated with the same client, same error codes, etc. • DBAs can explicitly add/edit or remove files before packaging • Access IPS through either EM or ADRCI
Incident Packaging Service
With the incident packaging service (IPS), you can automatically and easily gather all diagnostic data pertaining to a critical error (traces, dumps, health check reports, SQL test cases, and more) and package the data into a zip file suitable for transmission to Oracle Support. Because all diagnostic data relating to a critical error is tagged with that error's incident number, you do not have to search through trace files, dump files, and so on to determine which files are required for analysis; the incident packaging service identifies all required files automatically and adds them to the package.
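A minimal command-line sketch of this workflow using the ADRCI IPS commands (the problem ID and target directory are illustrative):

```text
adrci> ips create package problem 1 correlate basic
adrci> ips generate package 1 in /tmp
```

The first command creates a logical package for problem 1 with basic correlation of related files; the second gathers the referenced files into a physical zip file under /tmp, ready for upload.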
11g Infrastructure Grid: Server Manageability 12 - 17
• An incident package is a logical structure inside the ADR representing one or more problems
• A package is a zip file containing dump information related to an incident package
• By default, only the first three and last three incidents of each problem are included in an incident package
• You can generate complete or incremental zip files

[Diagram: ADR home directory tree — alert, cdump, incpkg/pkg_1, …]
Incident Packages
To upload diagnostic data to Oracle Support Services, you first collect the data into an incident package. When you create an incident package, you select one or more problems to add to it. The Support Workbench then automatically adds to the incident package the incident information, trace files, and dump files associated with the selected problems. Because a problem can have many incidents (many occurrences of the same problem), by default only the first three and last three incidents for each problem are added to the incident package. You can change this default on the Incident Packaging Configuration page, accessible from the Support Workbench page. After the incident package is created, you can add any type of external file to it, remove selected files from it, or edit selected files in it to remove sensitive data. An incident package is a logical construct only, until you create a physical file from its contents. That is, an incident package starts out as a collection of metadata in the ADR. As you add and remove incident package contents, only the metadata is modified. When you are ready to upload the data to Oracle Support Services, you invoke a Support Workbench or ADRCI function that gathers all the files referenced by the metadata, places them into a zip file, and then uploads the zip file to MetaLink.
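The editing and incremental-generation steps described above can also be sketched in ADRCI (the IDs and paths are illustrative):

```text
adrci> ips add incident 1682 package 1
adrci> ips add file /home/oracle/app_notes.txt package 1
adrci> ips finalize package 1
adrci> ips generate package 1 in /tmp incremental
```

Adding incidents or external files modifies only the package metadata; the INCREMENTAL keyword then produces a zip file containing only data that is new or changed since the last generation of the same package.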
11g Infrastructure Grid: Server Manageability 12 - 18
EM Support Workbench Overview
• Wizard that guides you through the process of handling problems
• You can perform the following tasks with the Support Workbench:
– View details on problems and incidents
– Run health checks
– Generate additional diagnostic data
– Run advisors to help resolve problems
– Create and track service requests through MetaLink
– Generate incident packages
– Close problems once resolved
EM Support Workbench Overview The Support Workbench is an Enterprise Manager wizard that helps you through the process of handling critical errors. It displays incident notifications, presents incident details, and enables you to select incidents for further processing. Further processing includes running additional health checks, invoking the incident packaging service (IPS) to package all diagnostic data about the incidents, adding SQL test cases and selected user files to the package, filing a technical assistance request (TAR) with Oracle Support, shipping the packaged incident information to Oracle Support, and tracking the TAR through its lifecycle. You can perform the following tasks with the Support Workbench: • View details on problems and incidents. • Manually run health checks to gather additional diagnostic data for a problem. • Generate additional dumps and SQL test cases to add to the diagnostic data for a problem. • Run advisors to help resolve problems. • Create and track a service request through MetaLink, and add the service request number to the problem data. • Collect all diagnostic data relating to one or more problems into an incident package and then upload the incident package to Oracle Support Services. • Close the problem when the problem is resolved.
11g Infrastructure Grid: Server Manageability 12 - 19
Oracle Configuration Manager
Enterprise Manager Support Workbench uses Oracle Configuration Manager to upload the physical files generated by IPS to MetaLink. If Oracle Configuration Manager is not installed or properly configured, the upload may fail. In this case, a message is displayed with the path to the incident package zip file and a request that you upload the file to Oracle Support manually. You can upload manually through MetaLink. During Oracle Database 11g installation, Oracle Universal Installer displays a special Oracle Configuration Manager Registration screen, shown above. On that screen, you need to select the Enable check box and accept the license agreement before you can enter your Customer Identification Number (CSI), your MetaLink account username, and your country code. If you do not configure Oracle Configuration Manager, you can still manually upload incident packages to MetaLink.
Note: For more information about Oracle Configuration Manager, see the Oracle Configuration Manager Installation and Administration Guide, available at the following URL: http://www.oracle.com/technology/documentation/oem.html
11g Infrastructure Grid: Server Manageability 12 - 20
EM Support Workbench Roadmap

[Diagram] 1. View critical error alerts in Enterprise Manager → 2. View problem details → 3. Gather additional diagnostic information → 4. Create a service request → 5. Package and upload diagnostic data to Oracle Support → 6. Track the SR and implement repairs → 7. Close incidents
EM Support Workbench Roadmap The above graphic is a summary of the tasks that you complete to investigate, report, and in some cases, resolve a problem using Enterprise Manager Support Workbench: 1. Start by accessing the Database Home page in Enterprise Manager, and reviewing critical error alerts. Select an alert for which to view details. 2. Examine the problem details and view a list of all incidents that were recorded for the problem. Display findings from any health checks that were automatically run. 3. Optionally, run additional health checks and invoke the SQL Test Case Builder, which gathers all required data related to a SQL problem and packages the information in a way that enables the problem to be reproduced at Oracle Support. 4. Create a service request with MetaLink and optionally record the service request number with the problem information. 5. Invoke a wizard that automatically packages all gathered diagnostic data for a problem and uploads the data to Oracle Support. Optionally edit the data to remove sensitive information before uploading. 6. Optionally maintain an activity log for the service request in the Support Workbench. Run Oracle advisors to help repair SQL failures or corrupted data. 7. Set status for one, some, or all incidents for the problem to Closed.
11g Infrastructure Grid: Server Manageability 12 - 21
View Critical Error Alerts in Enterprise Manager
You begin the process of investigating problems (critical errors) by reviewing critical error alerts on the Database Home page. To view critical error alerts, access the Database Home page in Enterprise Manager. From the Home page, you can look at the Diagnostic Summary section, from where you can click the Active Incidents link if there are incidents. You can also use the Alerts section and look for critical alerts flagged as incidents. When you click the Active Incidents link, you are taken to the Support Workbench page, from where you can retrieve details about all problems and corresponding incidents. From there, you can also retrieve all Health Monitor checker runs and created packages.
Note: The tasks described in this section are all Enterprise Manager–based. You can also accomplish all of these tasks with the ADRCI command-line utility and PL/SQL package procedures. See Oracle Database Utilities for more information on the ADRCI utility.
11g Infrastructure Grid: Server Manageability 12 - 22
View Problem Details
From the Problems sub-page on the Support Workbench page, click the ID of the problem you want to investigate. This takes you to the corresponding Problem Details page. On this page, you can see all incidents that are related to your problem. You can associate your problem with a MetaLink service request and bug number. In the Investigate and Resolve section of the page, the Self Service sub-page has direct links to the operations you can perform on this problem. In the same section, the Oracle Support sub-page has direct links to MetaLink. The Activity Log sub-page shows you the system-generated operations that have occurred on your problem so far, and allows you to add your own comments while investigating your problem. From the Incidents sub-page, you can click a related incident ID to get to the corresponding Incident Details page.
11g Infrastructure Grid: Server Manageability 12 - 23
View Incident Details
Once on the Incident Details page, the Dump Files sub-page appears and lists all corresponding dump files. You can then click the eyeglasses icon for a particular dump file to view the file content with its various sections.
11g Infrastructure Grid: Server Manageability 12 - 24
View Incident Details
On the Incident Details page, click Checker Findings to view the Checker Findings sub-page. This page displays findings from any health checks that were automatically run when the critical error was detected. Most of the time, you can select one or more findings and invoke an advisor to fix the issue.
11g Infrastructure Grid: Server Manageability 12 - 25
Create a Service Request
Before you can package and upload diagnostic information for the problem to Oracle Support, you must create a service request. To create a service request, you need to go to MetaLink first. MetaLink can be accessed directly from the Problem Details page when you click the Go to MetaLink button in the Investigate and Resolve section of the page. Once on MetaLink, log in and create a service request in the usual manner. When done, you can record that service request number with your problem; this is entirely optional and is for your reference only. In the Summary section, click the Edit button that is adjacent to the SR# label, and in the window that opens, enter the SR#, and then click OK.
11g Infrastructure Grid: Server Manageability 12 - 26
Package and upload diagnostic data to Oracle Support
Package and upload diagnostic data to Oracle Support Support Workbench provides two methods for creating and uploading an incident package: the Quick Packaging method and the Advanced Packaging method. The example on the slide shows you how to use Quick Packaging. Quick Packaging is a more automated method with a minimum of steps. You select a single problem, provide an incident package name and description, and then schedule the incident package upload, either immediately or at a specified date and time. Support Workbench automatically places diagnostic data related to the problem into the incident package, finalizes the incident package, creates the zip file, and then uploads the file. With this method, you do not have the opportunity to add, edit, or remove incident package files or add other diagnostic data such as SQL test cases. To package and upload diagnostic data to Oracle Support: 1. On the Problem Details page, in the Investigate and Resolve section, click Quick Package. The Create New Package page of the Quick Packaging wizard appears. 2. Enter a package name and description. 3. If you did not record the service request number in the previous task, enter it here. 4. Click Next, and then proceed with the remaining pages of the Quick Packaging wizard. Click Submit on the Review page to upload the package.
11g Infrastructure Grid: Server Manageability 12 - 27
Track the SR and Implement Repairs
After uploading diagnostic information to Oracle Support, you might perform various activities to track the service request and implement repairs. Among these activities are the following:
• Add an Oracle bug number to the problem information. To do so, on the Problem Details page, click the Edit button that is adjacent to the Bug# label. This is for your reference only.
• Add comments to the problem activity log. To do so, complete the following steps:
1. Access the Problem Details page for the problem.
2. Click Activity Log to display the Activity Log sub-page.
3. In the Comment field, enter a comment, and then click Add Comment. Your comment is recorded in the activity log.
• Respond to a request by Oracle Support to provide additional diagnostics. Your Oracle Support representative might provide instructions for gathering and uploading additional diagnostics.
11g Infrastructure Grid: Server Manageability 12 - 28
Track the SR and Implement Repairs From the Incident Details page, you can run an Oracle advisor to implement repairs. Access the suggested advisor in one of the following ways: • In the Self-Service tab of the Investigate and Resolve section of the Problem Details page. • On the Checker Findings sub-page of the Incident Details page as shown on the slide. The advisors that help you repair critical errors are: • Data Recovery Advisor: Corrupted blocks, corrupted or missing files, and other data failures. • SQL Repair Advisor: SQL statement failures.
11g Infrastructure Grid: Server Manageability 12 - 29
Close Incidents and Problems When a particular incident is no longer of interest, you can close it. By default, closed incidents are not displayed on the Problem Details page. All incidents, whether closed or not, are purged after 30 days. You can disable purging for an incident on the Incident Details page. To close incidents: 1. Access the Support Workbench home page. 2. Select the desired problem, and then click View. The Problem Details page appears. 3. Select the incidents to close and then click Close. A confirmation page appears. 4. Click Yes on the Confirmation page to close your incident.
11g Infrastructure Grid: Server Manageability 12 - 30
Incident Packaging Configuration
As already seen, you can configure various aspects of retention rules and package generation. Using Support Workbench, you can access the Incident Packaging Configuration page from the Related Links section of the Support Workbench page by clicking the Incident Package Configuration link. Here are the parameters you can change:
• Incident Metadata Retention Period: Metadata is information about the data; for incidents, it is the incident time, ID, size, problem, and so forth. Data is the actual contents of an incident, such as traces.
• Cutoff Age for Incident Inclusion: Only incidents no older than this value are included for packaging. If the cutoff age is 90, for instance, the system includes only the incidents from the last 90 days.
• Leading Incidents Count: For every problem included in a package, the system selects a certain number of incidents from the beginning (leading) and the end (trailing) of the problem. For example, if the problem has 30 incidents, and the leading incident count is 5 and the trailing incident count is 4, the system includes the first 5 incidents and the last 4 incidents.
• Trailing Incidents Count: See above.
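The same parameters can be viewed and changed from ADRCI; a sketch (the parameter ID and value are illustrative, so run the show command first to find the ID you need):

```text
adrci> ips show configuration
adrci> ips set configuration 4 180
```

IPS SHOW CONFIGURATION lists each packaging parameter with its ID, current value, and description; IPS SET CONFIGURATION changes the parameter with the given ID to the given value.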
11g Infrastructure Grid: Server Manageability 12 - 31
Incident Packaging Configuration (Continued)
• Correlation Time Proximity: This parameter is the exact time interval that defines "happened at the same time." There is a concept of incidents and problems correlated to a given incident or problem: that is, which problems seem to have a connection with the problem in question. One criterion for correlation is time: find the incidents that happened at the same time as the incidents in a problem.
11g Infrastructure Grid: Server Manageability 12 - 32
Custom Packaging: Create New Package
Custom Packaging is a more manual method than Quick Packaging, but it gives you greater control over the incident package contents. You can create a new incident package with one or more problems, or you can add one or more problems to an existing incident package. You can then perform a variety of operations on the new or updated incident package, including:
• Adding or removing problems or incidents
• Adding, editing, or removing trace files in the incident package
• Adding or removing external files of any type
• Adding other diagnostic data such as SQL test cases
• Manually finalizing the incident package and then viewing its contents to determine whether you must edit or remove sensitive data, or remove files to reduce the package size
With the Custom Packaging method, you create the zip file and request upload to Oracle Support as two separate steps. Each of these steps can be performed immediately or scheduled for a future date and time. To package and upload a problem with Custom Packaging:
1. In the Problems sub-page at the bottom of the Support Workbench home page, select the first problem that you want to package, and then click Package.
2. On the Package: Select packaging mode page, select the Custom Packaging option, and then click Continue.
3. The Custom Packaging: Select Package page appears. To create a new incident package, select the Create New Package option, enter an incident package name and description, and then click OK. To add the selected problems to an existing incident package, select the Select from Existing Packages option.
11g Infrastructure Grid: Server Manageability 12 - 33
Custom Packaging: Manipulate Incident Package
On the Customize Package page, you get confirmation that your new package has been created. This page displays the incidents that are contained in the incident package, plus a selection of packaging tasks to choose from. You run these tasks against the new incident package or the updated existing incident package. As you can see from the slide, you can exclude or include incidents or files, as well as perform many other tasks.
11g Infrastructure Grid: Server Manageability 12 - 34
Custom Packaging: Finalize Incident Package
Finalizing an incident package adds correlated files from other components, such as the Health Monitor, to the package. Recent trace files and log files are also included in the package. You can finalize a package by clicking the Finish Contents Preparation link in the Packaging Tasks section, as shown on the slide. A confirmation page is displayed that lists all files that will be part of the physical package.
11g Infrastructure Grid: Server Manageability 12 - 35
Custom Packaging: Generate Package
Once your incident package has been finalized, you can generate the package file. Go back to the corresponding package page and click Generate Upload File. The Generate Upload File page appears. There, select the Full or Incremental option to generate a full or an incremental incident package zip file. For a full incident package zip file, all the contents of the incident package (original contents and all correlated data) are always added to the zip file. For an incremental incident package zip file, only the diagnostic information that is new or modified since the last time you created a zip file for the same incident package is added. When done, select the schedule and click Submit. If you scheduled the generation immediately, a Processing page appears until packaging is finished, followed by a Confirmation page where you can click OK.
Note: The Incremental option is unavailable if a physical file has never been created for the incident package.
11g Infrastructure Grid: Server Manageability 12 - 36
Custom Packaging: Upload Package Once you have generated the physical package, you can go back to the Customize Package page, from where you can click the View/Send Uploaded Files link in the Packaging Tasks section. This takes you to the View/Send Upload Files page, from where you can select your package and click the Send to Oracle button. The Send to Oracle page appears. There, you can enter the service request number for your problem and choose a Schedule. You can then click Submit.
Viewing and Modifying Incident Packages Once a package is created, you can always modify it through customization. For example, go to the Support Workbench page and click the Packages tab. This takes you to the Packages subpage. From this page, you can select a package and delete it, or click the package link to go to the Package Details page. There, you can click Customize to go to the Customize Package page, from where you can manipulate your package by adding or removing problems, incidents, or files.
Create User-Reported Problems Critical errors generated internally to the database are automatically added to the Automatic Diagnostic Repository (ADR) and tracked in the Support Workbench. However, there may be a situation in which you want to manually add a problem that you noticed to the ADR so that you can put that problem through the Support Workbench workflow. An example of such a situation is when the performance of the database, or of a particular query, suddenly and noticeably degrades. Support Workbench includes a mechanism for you to create and work with such a user-reported problem. To create a user-reported problem, go to the Support Workbench page and click the Create User-Reported Problem link in the Related Links section. This takes you to the Create User-Reported Problem page, where you are asked to run a corresponding advisor before continuing. This is only necessary if you are not sure about your problem. However, if you already know exactly what is going on, select the issue that best describes the type of problem you are encountering and click Continue with Creation of Problem. By clicking this button, you basically create a pseudo problem inside Support Workbench. This allows you to manipulate the problem using the previously seen Support Workbench workflow for handling critical errors. You end up on a Problem Details page for your issue. Note that at first the problem does not have any diagnostic data associated with it. At this point, you need to create a package and upload the necessary trace files by customizing that package, as described previously.
Invoking IPS Using ADRCI
• IPS CREATE PACKAGE [ INCIDENT | PROBLEM | PROBLEM KEY | SECONDS | TIME ]
• IPS ADD NEW INCIDENTS
• IPS SET CONFIGURATION
Invoking IPS Using ADRCI Creating a package is a two-step process: you first create the logical package, and then generate the physical package as a zip file. Both steps can be done using ADRCI commands. To create a logical package, the IPS CREATE PACKAGE command is used. There are several variants of this command that allow you to choose the contents: • IPS CREATE PACKAGE creates an empty package. • IPS CREATE PACKAGE PROBLEMKEY creates a package based on a problem key. • IPS CREATE PACKAGE PROBLEM creates a package based on a problem ID. • IPS CREATE PACKAGE INCIDENT creates a package based on an incident ID. • IPS CREATE PACKAGE SECONDS creates a package containing all incidents generated from the specified number of seconds ago until now. • IPS CREATE PACKAGE TIME creates a package based on the specified time range. It is also possible to add contents to an existing package. For instance: • IPS ADD INCIDENT PACKAGE adds an incident to an existing package. • IPS ADD FILE PACKAGE adds a file inside ADR to an existing package.
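As a sketch of how these commands fit together, the following ADRCI session creates a logical package from an incident and then adds a second incident to it. The incident IDs and the package number are hypothetical; ADRCI assigns the package number when the package is created.

```
adrci> ips create package incident 17060
adrci> ips add incident 17061 package 1
adrci> ips show package 1
```

The IPS SHOW PACKAGE command lets you verify the package contents before finalizing and generating it.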
Invoking IPS Using ADRCI (Continued) IPS COPY copies files between the ADR repository and the external file system. It has two forms: • IN FILE copies an external file into ADR, associating it with an existing package and, optionally, an incident. • OUT FILE copies a file from ADR to a location outside ADR. IPS COPY is essentially used to copy a file out, edit it, and copy it back into ADR. IPS FINALIZE is used to finalize a package for delivery, which means that other components, such as Health Monitor, are called to add their correlated files to the package. Recent trace files and log files are also included in the package. If required, this step is run automatically when a package is generated. To generate the physical file, the IPS GENERATE PACKAGE command is used. The syntax is IPS GENERATE PACKAGE package_id IN path [COMPLETE | INCREMENTAL], which generates a physical zip file for an existing logical package. The file name contains either COM for complete or INC for incremental, followed by a sequence number that is incremented each time a zip file is generated. IPS SET CONFIGURATION is used to set IPS rules. Note: Refer to the Oracle Database Utilities guide for more information about ADRCI.
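For instance, a session that edits a trace file inside a package and then produces the physical zip file might look like the following sketch. The package number, incident ID, file names, and paths are illustrative; replace <ADR_HOME> with your actual ADR home path.

```
adrci> ips copy out file <ADR_HOME>/trace/orcl_ora_1234.trc to /tmp/orcl_ora_1234.trc
adrci> ips copy in file /tmp/orcl_ora_1234.trc to <ADR_HOME>/trace/orcl_ora_1234.trc overwrite package 1 incident 17060
adrci> ips finalize package 1
adrci> ips generate package 1 in /tmp complete
```

The explicit IPS FINALIZE step is optional here, since IPS GENERATE PACKAGE finalizes the package automatically when required.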
Health Monitor Overview Beginning with Release 11g, Oracle Database includes a framework called Health Monitor for running diagnostic checks on various components of the database. Health Monitor checks examine various components of the database, including files, memory, transaction integrity, metadata, and process usage. These checkers generate reports of their findings as well as recommendations for resolving problems. Health Monitor checks can be run in two ways: • Reactive: The fault diagnosability infrastructure can run Health Monitor checks automatically in response to critical errors. • Manual: As a DBA, you can manually run Health Monitor health checks using either the DBMS_HM PL/SQL package or the Enterprise Manager interface. On the slide, you can see some of the checks that Health Monitor can run. For a complete description of all possible checks, look at V$HM_CHECK. These health checks fall into one of two categories: • DB-online: These checks can be run while the database is open (that is, in OPEN mode or MOUNT mode). • DB-offline: In addition to being runnable while the database is open, these checks can also be run when the instance is available and the database itself is closed (that is, in NOMOUNT mode).
Health Monitor Overview (Continued) After a checker has run, it generates a report of its execution. This report contains information about the checker's findings, including the priority (low, high, or critical) of each finding, descriptions of the findings and their consequences, and basic statistics about the execution. Health Monitor generates reports in XML and stores them in ADR. You can view these reports using V$HM_RUN, DBMS_HM, ADRCI, or Enterprise Manager. Note: The Redo Check and the Database Cross Check are DB-offline checks. All other checks are DB-online checks. There are around 25 checks you can run.
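As a sketch, you can list the checks available to you, and whether each can also run while the database is closed, with a query along these lines (the INTERNAL_CHECK filter excludes checks reserved for internal use):

```sql
SELECT name, offline_capable
FROM   v$hm_check
WHERE  internal_check = 'N';
```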
Running Health Checks Manually: EM Example Enterprise Manager provides an interface for running Health Monitor checkers. You can find this interface in the Checkers tab on the Advisor Central page. The page lists each checker type; you run a checker by clicking it and then clicking OK on the corresponding checker page after you have entered the parameters for the run. This is illustrated on the slide, where you run the Data Block Checker manually. Once a check is completed, you can view the corresponding checker run details by selecting the checker run from the Results table and clicking Details. Checker runs can be reactive or manual. On the Findings subpage, you can see the various findings and corresponding recommendations extracted from V$HM_RUN, V$HM_FINDING, and V$HM_RECOMMENDATION. If you click View XML Report on the Runs subpage, you can view the run report in XML format. Viewing the XML report in Enterprise Manager generates the report if it has not yet been generated in your ADR. You can then view the report using ADRCI without needing to generate it.
Running Health Checks Manually: PL/SQL Example
SQL> exec dbms_hm.run_check('Database Dictionary Check', 'mycheck', 0, 'TABLE_NAME=tab$');
SQL> set long 100000
SQL> select dbms_hm.get_run_report('mycheck') from dual;
The query returns an XML report (abbreviated here) for run mycheck of the Database Dictionary Check: run mode MANUAL, status COMPLETED, input parameter TABLE_NAME=tab$. The report contains a Dictionary Inconsistency finding with status FAILURE/OPEN and priority CRITICAL, with the message "invalid column number 7 on Object tab$"; the object is flagged Failed and Damaged, and "Object SH.JFVTEST is referenced".
Running Health Checks Manually: PL/SQL Example You can use the DBMS_HM.RUN_CHECK procedure to run a health check. To call RUN_CHECK, supply the name of the check found in V$HM_CHECK, a name for the run (this is just a label used to retrieve reports later), and the corresponding set of input parameters for controlling its execution. You can view these parameters using the V$HM_CHECK_PARAM view. In the above example, you want to run a Database Dictionary Check for the TAB$ table. You call this run MYCHECK, and you do not want to set any timeout for this check. Once the check has executed, you call the DBMS_HM.GET_RUN_REPORT function to get the report extracted from V$HM_RUN, V$HM_FINDING, and V$HM_RECOMMENDATION. The output clearly shows you that a critical error was found in TAB$: this table contains an entry for a table with an invalid number of columns. Furthermore, the report gives you the name of the damaged table referenced in TAB$. When you call the GET_RUN_REPORT function, it generates the XML report file in the HM directory of your ADR. For the above example, the file is called HMREPORT_mycheck.hm. Note: Refer to the Oracle Database PL/SQL Packages and Types Reference for more information on DBMS_HM.
Viewing HM Reports Using the ADRCI Utility You can create and view Health Monitor checker reports using the ADRCI utility. To do that, ensure that operating system environment variables such as ORACLE_HOME are set properly, and then enter the following command at the operating system command prompt: adrci. The utility starts and displays its prompt as shown on the slide. Optionally, you can change the current ADR home. Use the SHOW HOMES command to list all ADR homes, and the SET HOMEPATH command to change the current ADR home. You can then enter the SHOW HM_RUN command to list all the checker runs registered in the ADR repository and visible from V$HM_RUN. Locate the checker run for which you want to create a report and note the checker run name using the corresponding RUN_NAME field. The REPORT_FILE field contains a filename if a report already exists for this checker run. Otherwise, you can generate the report using the CREATE REPORT HM_RUN command as shown on the slide. To view the report, use the SHOW REPORT HM_RUN command.
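Continuing the mycheck example from the previous pages, a minimal ADRCI session to list the runs, generate the report, and view it would look like this (the run name is the one used in the earlier example):

```
adrci> show hm_run
adrci> create report hm_run mycheck
adrci> show report hm_run mycheck
```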
SQL Repair Advisor Overview You run the SQL Repair Advisor after a SQL statement fails with a critical error that generates a problem in ADR. The advisor analyzes the statement and in many cases recommends a patch to repair the statement. If you implement the recommendation, the applied SQL patch circumvents the failure by causing the query optimizer to choose an alternate execution plan for future executions. This is done without changing the SQL statement itself. Note: In case no workaround is found by the SQL Repair Advisor, you are still able to package the incident files and send the corresponding diagnostic data to Oracle Support.
Accessing SQL Repair Advisor Using EM There are basically two ways to access the SQL Repair Advisor from Enterprise Manager. The first and easiest way is when you are alerted in the Diagnostic Summary section of the database home page. Following a SQL statement crash that generates an incident in ADR, you are automatically alerted through the Active Incidents field. You can click the corresponding link to get to the Support Workbench Problems page, from where you can click the corresponding problem ID link. This takes you to the Problem Details page, from where you can click the SQL Repair Advisor link in the Investigate and Resolve section of the page.
Accessing SQL Repair Advisor Using EM If the SQL statement crash incident is no longer active, you can always go to the Advisor Central page from where you can click the SQL Advisors link and choose the Click here to go to Support Workbench link in the SQL Advisor section of the SQL Advisors page. This takes you directly to the Problem Details page where you can click the SQL Repair Advisor link in the Investigate and Resolve section of the page. Note: To access the SQL Repair Advisor in case of non-incident SQL failures, you can either go to the SQL Details page or SQL Worksheet.
Using SQL Repair Advisor from EM Once on the SQL Repair Advisor: SQL Incident Analysis page, specify a Task Name, a Task Description, and a Schedule. Once done, click Submit to schedule a SQL diagnostic analysis task. If you specified Immediately, you end up on the Processing: SQL Repair Advisor Task page that shows you the various steps of the task execution.
Using SQL Repair Advisor from EM Once the SQL Repair Advisor task has executed, you are taken to the SQL Repair Results page for that task. On this page, you can see the corresponding recommendations and, especially, whether a SQL patch was generated to fix your problem. If that is the case, as shown on the slide, you can select the statement for which you want to apply the generated SQL patch and click View. This takes you to the Repair Recommendations for SQL ID page, from where you can ask the system to implement the SQL patch by clicking Implement after selecting the corresponding findings. You then get a confirmation of the implementation, and you can execute your SQL statement again.
Using SQL Repair Advisor from PL/SQL
declare
  rep_out clob;
  t_id    varchar2(50);
begin
  t_id := dbms_sqldiag.create_diagnosis_task(
            sql_text     => 'delete from t t1 where t1.a = ''a'' and rowid <> (select max(rowid) from t t2 where t1.a = t2.a and t1.b = t2.b and t1.d = t2.d)',
            task_name    => 'sqldiag_bug_5869490',
            problem_type => DBMS_SQLDIAG.PROBLEM_TYPE_COMPILATION_ERROR);
  dbms_sqltune.set_tuning_task_parameter(t_id, '_SQLDIAG_FINDING_MODE',
            dbms_sqldiag.SQLDIAG_FINDINGS_FILTER_PLANS);
  dbms_sqldiag.execute_diagnosis_task(t_id);
  rep_out := dbms_sqldiag.report_diagnosis_task(t_id, DBMS_SQLDIAG.TYPE_TEXT);
  dbms_output.put_line('Report : ' || rep_out);
end;
/
Using SQL Repair Advisor from PL/SQL You can also invoke the SQL Repair Advisor directly from PL/SQL. After you are alerted about an incident SQL failure, you can create a SQL Repair Advisor task using the DBMS_SQLDIAG.CREATE_DIAGNOSIS_TASK function, as illustrated on the slide. You need to specify the SQL statement for which you want the analysis to be done, as well as a task name and the problem type you want to analyze (possible values are PROBLEM_TYPE_COMPILATION_ERROR and PROBLEM_TYPE_EXECUTION_ERROR). You can then set parameters for the created task using the DBMS_SQLTUNE.SET_TUNING_TASK_PARAMETER procedure. Once you are ready, you execute the task using the DBMS_SQLDIAG.EXECUTE_DIAGNOSIS_TASK procedure. Finally, you can get the task report using the DBMS_SQLDIAG.REPORT_DIAGNOSIS_TASK function. In the above example, it is assumed that the report asks you to implement a SQL patch to fix the problem. You can then use the DBMS_SQLDIAG.ACCEPT_SQL_PATCH procedure to implement the SQL patch.
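As a sketch of that final step, assuming the task above produced a patch recommendation and that the task is owned by SYS, accepting the patch might look like:

```sql
begin
  dbms_sqldiag.accept_sql_patch(
    task_name  => 'sqldiag_bug_5869490',
    task_owner => 'SYS');
end;
/
```

After the patch is accepted, re-running the original statement causes the optimizer to use the alternate plan recorded in the patch.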
Viewing, Disabling, or Removing a SQL Patch After you apply a SQL patch with the SQL Repair Advisor, you may want to view it to confirm its presence, disable it, or remove it. One reason to remove a patch is if you install a later release of Oracle Database that fixes the problem that caused the failure in the nonpatched SQL statement. To view, disable/enable, or remove a SQL patch, access the Server page in Enterprise Manager and click the SQL Plan Control link in the Query Optimizer section of the page. This takes you to the SQL Plan Control page. From there, click the SQL Patch tab. From the resulting SQL Patch subpage, locate the desired patch by examining the associated SQL statement. Select it, and apply the corresponding task: Disable, Enable, or Delete.
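Outside Enterprise Manager, you can also inspect patches through the DBA_SQL_PATCHES dictionary view and remove one with DBMS_SQLDIAG; the patch name below is illustrative (use the NAME value returned by the query):

```sql
SELECT name, status, created
FROM   dba_sql_patches;

begin
  dbms_sqldiag.drop_sql_patch('SYS_SQLPTCH_001');
end;
/
```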
Database Repair Advisor
• Oracle provides outstanding tools for repairing problems
 – Lost files, corrupt blocks, etc.
• Analyzing the underlying problem and choosing the right solution is often the biggest component of downtime
• Analyzes failures based on symptoms
 – E.g. "Open failed" because datafiles missing
Intelligent Resolution: Database Repair Advisor Data Recovery Advisor: Enterprise Manager integrates with database health checks and RMAN to display data corruption problems, assess the extent of a problem (critical, high priority, low priority), describe its impact, recommend repair options, conduct a feasibility check of the customer-chosen option, and automate the repair process. Note: For more information about the Database Repair Advisor, refer to the corresponding lesson in this course.
Summary
In this lesson, you should have learned how to: • Set up the Automatic Diagnostic Repository • Use the Support Workbench • Run health checks • Use the SQL Repair Advisor
Oracle Database 11g: New Features for Administrators 13 - 1
Objectives
After completing this lesson, you should be able to: • Describe your options for repairing data failure • Use the new RMAN data repair commands: – List failures – Receive repair advice – Repair failure
• Perform proactive failure checks • Query the Data Recovery Advisor views
Repairing Data Failures
• Data Guard provides failover to a standby database, so that your operations are not affected by downtime. • Data Recovery Advisor, a new feature in Oracle Database 11g, analyzes failures based on symptoms and determines repair strategies: – Aggregation of multiple failures for efficient repair – Presenting a single, recommended repair option – Performing automatic repairs
• The Flashback technology protects the lifecycle of a row and assists in repairing logical problems.
Repairing Data Failures A "data failure" is a missing, corrupted, or inconsistent data file, log file, control file, or other file whose content the Oracle instance cannot access. When your database has a problem, analyzing the underlying cause and choosing the correct solution is often the biggest component of downtime. Oracle Database 11g offers several new and enhanced tools for analyzing and repairing database problems. • Data Guard, by allowing you to fail over to a standby database (that has its own copy of the data), allows you to continue operation if the primary database experiences a data failure. Then, after failing over to the standby, you can take the time to repair the failed database (the old primary) without worrying about the impact on your applications. There are many enhancements to Data Guard, which are addressed in separate lessons. • Data Recovery Advisor is a built-in tool that automatically diagnoses data failures and reports the appropriate repair option. If, for example, Data Recovery Advisor discovers many bad blocks, it recommends restoring the entire file rather than repairing individual blocks. It thus assists you in performing the correct repair for a failure. You can either repair a data failure manually or request Data Recovery Advisor to execute the repair for you. This decreases the amount of time needed to recover from a failure.
Repairing Data Failures (continued) You can use the Flashback technology to repair logical problems. • Flashback Archive maintains persistent changes of table data for a specified period of time, allowing you to access the archived data. • Flashback Transaction allows you to back out of a transaction and all conflicting transactions with a single click. For more details, see the lesson titled "Using Flashback and LogMiner". What you already know: • RMAN automates data file media recovery (a common form of recovery that protects against logical and physical failures) and block media recovery (that recovers individual blocks rather than a whole data file). For more details, see the lesson titled "Using RMAN Enhancements". • Automatic Storage Management (ASM) protects against storage failures.
Data Recovery Advisor
• Fast detection, analysis, and repair of failures
• Downtime and runtime failures
• Minimizing disruptions for users
• User interfaces:
 – EM GUI interface
 – RMAN command line
Functionality of the Data Recovery Advisor The Data Recovery Advisor automatically gathers data failure information when an error is encountered. In addition, it can proactively check for failures. In this mode, it can potentially detect and analyze data failures before a database process discovers the corruption and signals an error. (Note that repairs are always under human control.) Data failures can be very serious. For example, if your log files are missing, you cannot start your database. Some data failures (like block corruptions in data files) are not catastrophic, in that they do not take the database down or prevent you from starting the Oracle instance. The Data Recovery Advisor handles both cases: the one in which you cannot start up the database (because some required database files are missing, inconsistent, or corrupted) and the one in which file corruptions are discovered during runtime. The preferred way to address serious data failures is to first fail over to a standby database, if you are in a Data Guard configuration. This allows users to come back online as soon as possible. Then you need to repair the primary cause of the data failure, which, fortunately, does not impact your users.
User Interfaces The Data Recovery Advisor is available from Enterprise Manager (EM) Database Control and Grid Control. When failures exist, select Availability > Perform Recovery. You can also use it via the RMAN command-line. For example: rman target / nocatalog Supported Database Configurations In the current release, Data Recovery Advisor supports single-instance databases. Oracle Real Application Clusters databases are not supported. Data Recovery Advisor cannot use blocks or files transferred from a standby database to repair failures on a primary database. Also, you cannot use Data Recovery Advisor to diagnose and repair failures on a standby database. However, the Data Recovery Advisor does support failover to a standby database as a repair option (as mentioned above).
Data Recovery Advisor
1. Assess data failures
2. List failures by severity
3. Advise on repair
4. Choose and execute repair
5. Perform proactive checks
Data Recovery Advisor The automatic diagnostic workflow in Oracle Database 11g performs the workflow steps for you. With the Data Recovery Advisor, you only need to initiate an advise and a repair. 1. Health Monitor automatically executes checks and logs failures and their symptoms as "findings" into the Automatic Diagnostic Repository (ADR). For more details on Health Monitor, see the Diagnostics eStudy. 2. The Data Recovery Advisor consolidates findings into failures. It lists the results of previously executed assessments with failure severity (critical or high). 3. When you ask for repair advice on a failure, the Data Recovery Advisor maps failures to automatic and manual repair options, checks basic feasibility, and presents you with the repair advice. 4. You can choose to execute a repair manually or request the Data Recovery Advisor to do it for you. 5. In addition to the automatic, primarily "reactive" checks of the Health Monitor and Data Recovery Advisor, Oracle recommends additionally using the VALIDATE command as a "proactive" check.
Data Failures Data failures are detected by checks, which are diagnostic procedures that assess the health of the database or its components. Each check can diagnose one or more failures, which are mapped to a repair. Checks can be reactive or proactive. When an error occurs in the database, "reactive checks" are automatically executed. You can also initiate "proactive checks", for example, by executing the VALIDATE DATABASE command. In Enterprise Manager, select Availability > Perform Recovery.
Listing Data Failures On the Perform Recovery page, click Perform Automated Repair. This example shows how the Data Recovery Advisor lists data failures and details. Activities that you can initiate from the Data Recovery Advisor page include advising on, classifying, and closing failures. The RMAN LIST FAILURE command can also display data failures and details. Failure assessments are not initiated here; they were previously executed and stored in the ADR. Failures are listed in decreasing priority order: CRITICAL, HIGH, and LOW. Failures with the same priority are listed in increasing timestamp order.
Listing of Data Failures
The RMAN LIST FAILURE command lists previously executed failure assessments.
Syntax: LIST FAILURE [ ALL | CRITICAL | HIGH | LOW | CLOSED | failnum[,failnum,…] ] [ EXCLUDE FAILURE failnum[,failnum,…] ] [ DETAIL ]
Listing of Data Failures The RMAN LIST FAILURE command lists failures. If the target instance uses a recovery catalog, it can be in STARTED mode; otherwise it must be in MOUNTED mode. To learn more about the syntax: • failnum: Number of the failure to display repair options for • ALL: List failures of all priorities. • CRITICAL: List failures of CRITICAL priority and OPEN status. These failures require immediate attention because they make the whole database unavailable (for example, a missing control file). • HIGH: List failures of HIGH priority and OPEN status. These failures make a database partly unavailable or unrecoverable, so they should be repaired quickly (for example, missing archived redo logs). • LOW: List failures of LOW priority and OPEN status. Failures of low priority can wait until more important failures are fixed. • CLOSED: List only closed failures. • EXCLUDE FAILURE: Exclude the specified list of failure numbers from the list. • DETAIL: List failures by expanding the consolidated failure. For example, if there are multiple block corruptions in a file, the DETAIL option lists each one of them. See the Oracle Database Backup and Recovery Reference for details on command syntax.
Example of Listing Data Failures [oracle1@stbbv06 orcl]$ rman Recovery Manager: Release 11.1.0.3.0 - Beta on Wed Dec 20 11:22:10 2006 Copyright (c) 1982, 2006, Oracle. All rights reserved. RMAN> connect target sys/oracle@orcl connected to target database: ORCL (DBID=1137451268) using target database control file instead of recovery catalog RMAN> list failure all; List of Database Failures ========================= Failure ID Priority Status Time Detected Summary ---------- -------- --------- ------------- ------5 HIGH OPEN 20-DEC-06 one or more datafiles are missing RMAN> list failure detail; List of Database Failures ========================= Failure ID Priority Status Time Detected Summary ---------- -------- --------- ------------- ------5 HIGH OPEN 20-DEC-06 one or more datafiles are missing List of child failures for parent failure ID 5 Failure ID Priority Status Time Detected Summary ---------- -------- --------- ------------- ------8 HIGH OPEN 20-DEC-06 datafile 5: '/u01/app/oracle/oradata/orcl/example01.dbf' is missing Impact: tablespace EXAMPLE is unavailable RMAN>
Classifying and Closing Failures
RMAN CHANGE FAILURE command: • Changing failure priority (except for CRITICAL) • Closing one or more failures Example: RMAN> change failure 5 priority low; List of Database Failures ========================= Failure ID Priority Status Time Detected Summary ---------- -------- --------- ------------- ------5 HIGH OPEN 20-DEC-06 one or more datafiles are missing Do you really want to change the above failures (enter YES or NO)? yes changed 1 failures to LOW priority
Classifying and Closing Failures This command is used to change failure priority or close one or more failures. Syntax:
CHANGE FAILURE
 { ALL | CRITICAL | HIGH | LOW | failnum[,failnum,…] }
 [ EXCLUDE FAILURE failnum[,failnum,…] ]
 { PRIORITY {CRITICAL | HIGH | LOW} |
   CLOSE }     – change the status of the failure(s) to closed
 [ NOPROMPT ]  – do not ask the user for confirmation
A failure priority can be changed only from HIGH to LOW and from LOW to HIGH. It is an error to change the priority level of CRITICAL failures. (One reason why you may wish to change a failure from HIGH to LOW is to avoid seeing it in the default output of the LIST FAILURE command.) Open failures are closed implicitly when a failure is repaired. However, you can also explicitly close a failure. This involves a reevaluation of all other open failures, because some of them might become irrelevant as a result of the failure closure. By default, the command asks the user to confirm a requested change.
Advising on Repair
RMAN ADVISE FAILURE command:
• Displaying summary of input failure list
• Including warning, if new failures appeared in ADR
• Displaying manual checklist
• Listing a single recommended repair option
General repair options:
 – No-data-loss repair
 – Data-loss repair
Advising on Repair The RMAN ADVISE FAILURE command displays a recommended repair option for the specified failures. If this command is executed from within Enterprise Manager, then Data Guard is presented as a repair option. (This is not the case if the command is executed directly from the RMAN command line.) The ADVISE FAILURE command prints a summary of the input failures. The command implicitly closes all open failures that are already fixed. The default behavior (when no option is used) is to advise on all the CRITICAL and HIGH priority failures that are recorded in the Automatic Diagnostic Repository. If a new failure has been recorded in the ADR since the last LIST FAILURE command, this command includes a WARNING before advising on all CRITICAL and HIGH failures. Two general repair options are implemented: no-data-loss and data-loss repairs. Syntax: ADVISE FAILURE [ ALL | CRITICAL | HIGH | LOW | failnum[,failnum,…] ] [ EXCLUDE FAILURE failnum [,failnum,…] ]
Advising on Repair On the Data Recovery Advisor page, click the Advise button. When the Data Recovery Advisor generates an automated repair option, it generates a script that shows you how RMAN plans to repair the failure. If you do not want the Data Recovery Advisor to repair the failure automatically, you can use this script as a starting point for your manual repair. The OS location of the script is printed at the end of the command output. You can examine this script, customize it (if needed), and also execute it manually if, for example, your audit trail requirements recommend such an action.
Advising on Repair
When the Data Recovery Advisor generates a manual checklist, it considers two types of failures:
• Failures that require human intervention, such as a connectivity failure when a disk cable is not plugged in.
• Failures that are repaired faster if you can undo a previous erroneous action. For example, if you renamed a datafile by mistake, it is faster to rename it back than to initiate RMAN restoration from backup.
The Data Recovery Advisor displays this page as part of its "advise" process. Select the "Manual Actions Were Performed" checkbox if you have already executed a manual repair option.
Command Line Example

RMAN> advise failure;

List of Database Failures
=========================
Failure ID Priority Status    Time Detected Summary
---------- -------- --------- ------------- -------
5          HIGH     OPEN      20-DEC-06     one or more datafiles are missing

List of child failures for parent failure ID 5
Failure ID Priority Status    Time Detected Summary
---------- -------- --------- ------------- -------
8          HIGH     OPEN      20-DEC-06     datafile 5: '/u01/app/oracle/oradata/orcl/example01.dbf' is missing
  Impact: tablespace EXAMPLE is unavailable

analyzing automatic repair options; this may take some time
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=117 device type=DISK
analyzing automatic repair options complete

Manual Checklist
================
1. If file /u01/app/oracle/oradata/orcl/example01.dbf was unintentionally renamed or moved, restore it.

Automated Repair Options
========================
Option Strategy     Repair Description
------ ------------ ------------------
1      no data loss Restore and recover datafile 5.
  Repair script: /u01/app/oracle/diag/rdbms/orcl/orcl/hm/reco_2979128860.hm

RMAN>
Executing Repairs
This command should be used after an ADVISE FAILURE command in the same RMAN session. By default (with no option), the command uses the single recommended repair option of the last ADVISE FAILURE execution in the current session. If none exists, the REPAIR FAILURE command initiates an implicit ADVISE FAILURE command. By default, you are asked to confirm the command execution, because you may be requesting substantial changes that take time to complete. During execution of a repair, the output of the command indicates what phase of the repair is being executed. After completing the repair, the command closes the failure. You cannot run multiple concurrent repair sessions; however, concurrent REPAIR … PREVIEW sessions are allowed.
• PREVIEW: Do not execute the repairs; instead, display the previously generated RMAN script with all repair actions and comments.
• NOPROMPT: Do not ask for confirmation.
Example of Repairing a Failure

RMAN> repair failure preview;

Strategy     Repair script
------------ -------------
no data loss /u01/app/oracle/diag/rdbms/orcl/orcl/hm/reco_2537574800.hm

contents of repair script:
  # restore and recover datafile
  sql 'alter database datafile 5 offline';
  restore check readonly datafile 5;
  recover datafile 5;
  sql 'alter database datafile 5 online';

RMAN> repair failure;

Strategy     Repair script
------------ -------------
no data loss /u01/app/oracle/diag/rdbms/orcl/orcl/hm/reco_2537574800.hm

contents of repair script:
  # restore and recover datafile
  sql 'alter database datafile 5 offline';
  restore check readonly datafile 5;
  recover datafile 5;
  sql 'alter database datafile 5 online';

Do you really want to execute the above repair (enter YES or NO)? y
executing repair script

sql statement: alter database datafile 5 offline

Starting restore at 21-DEC-06
using channel ORA_DISK_1
channel ORA_DISK_1: starting datafile backup set restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_DISK_1: restoring datafile 00005 to /u01/app/oracle/oradata/orcl/example01.dbf
channel ORA_DISK_1: reading from backup piece /u01/app/oracle/flash_recovery_area/ORCL/backupset/2006_12_20/o1_mf_nnndf_BACKUP_ORCL_000004_1_2rm4v9dj_.bkp
channel ORA_DISK_1: piece handle=/u01/app/oracle/flash_recovery_area/ORCL/backupset/2006_12_20/o1_mf_nnndf_BACKUP_ORCL_000004_1_2rm4v9dj_.bkp tag=BACKUP_ORCL_000004_122006114740
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: restore complete, elapsed time: 00:00:15
Finished restore at 21-DEC-06
Example of Repairing a Failure (continued)

Starting recover at 21-DEC-06
using channel ORA_DISK_1
starting media recovery
archived log for thread 1 with sequence 5 is already on disk as file /u01/app/oracle/flash_recovery_area/ORCL/archivelog/2006_12_20/o1_mf_1_5_2rm50clp_.arc
archived log for thread 1 with sequence 6 is already on disk as file /u01/app/oracle/flash_recovery_area/ORCL/archivelog/2006_12_20/o1_mf_1_6_2rmsgwyo_.arc
archived log for thread 1 with sequence 7 is already on disk as file /u01/app/oracle/flash_recovery_area/ORCL/archivelog/2006_12_20/o1_mf_1_7_2rnbosby_.arc
archived log for thread 1 with sequence 8 is already on disk as file /u01/app/oracle/flash_recovery_area/ORCL/archivelog/2006_12_21/o1_mf_1_8_2rnyc4c5_.arc
archived log for thread 1 with sequence 9 is already on disk as file /u01/app/oracle/flash_recovery_area/ORCL/archivelog/2006_12_21/o1_mf_1_9_2rolp2b4_.arc
archived log for thread 1 with sequence 10 is already on disk as file /u01/app/oracle/flash_recovery_area/ORCL/archivelog/2006_12_21/o1_mf_1_10_2rp2gg32_.arc
archived log for thread 1 with sequence 11 is already on disk as file /u01/app/oracle/flash_recovery_area/ORCL/archivelog/2006_12_21/o1_mf_1_11_2rpllvqk_.arc
archived log file name=/u01/app/oracle/flash_recovery_area/ORCL/archivelog/2006_12_20/o1_mf_1_5_2rm50clp_.arc thread=1 sequence=5
archived log file name=/u01/app/oracle/flash_recovery_area/ORCL/archivelog/2006_12_20/o1_mf_1_6_2rmsgwyo_.arc thread=1 sequence=6
archived log file name=/u01/app/oracle/flash_recovery_area/ORCL/archivelog/2006_12_20/o1_mf_1_7_2rnbosby_.arc thread=1 sequence=7
archived log file name=/u01/app/oracle/flash_recovery_area/ORCL/archivelog/2006_12_21/o1_mf_1_8_2rnyc4c5_.arc thread=1 sequence=8
archived log file name=/u01/app/oracle/flash_recovery_area/ORCL/archivelog/2006_12_21/o1_mf_1_9_2rolp2b4_.arc thread=1 sequence=9
media recovery complete, elapsed time: 00:00:01
Finished recover at 21-DEC-06

sql statement: alter database datafile 5 online

repair failure complete

RMAN>
Executing Repairs In Enterprise Manager, the Data Recovery Advisor leads you to this page. The job scheduler initiates the execution of the RMAN repair script.
Executing Repairs The Data Recovery Advisor displays this page. In the preceding example, a successful repair is completed.
Data Recovery Advisor Views
Querying dynamic data dictionary views:
• V$IR_FAILURE: Listing of all failures, including closed ones (result of the LIST FAILURE command)
• V$IR_MANUAL_CHECKLIST: Listing of manual advice (result of the ADVISE FAILURE command)
• V$IR_REPAIR: Listing of repairs (result of the ADVISE FAILURE command)
Data Recovery Advisor Views See the Oracle Database Reference for details on the dynamic data dictionary views that the Data Recovery Advisor uses.
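The failure list shown earlier can also be retrieved with a plain SQL query; a sketch against V$IR_FAILURE (run LIST FAILURE first so the view is populated; the exact column list is documented in the Oracle Database Reference):

```sql
SELECT failure_id, priority, status, description
FROM   v$ir_failure
WHERE  status = 'OPEN'
ORDER  BY failure_id;
```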
Best Practice: Proactive Checks
Invoking proactive health checks of the database and its components:
• Health Monitor or the RMAN VALIDATE DATABASE command
• Checking for logical and physical corruption
• Findings logged in the ADR
Best Practice: Proactive Checks
For very important databases, you may want to execute additional proactive checks (possibly daily, during off-peak periods). You can schedule periodic health checks through Health Monitor or by using the RMAN VALIDATE command. In general, when a reactive check detects failures in a database component, you may want to execute a more complete check of the affected component. The RMAN VALIDATE DATABASE command is used to invoke health checks for the database and its components. It extends the existing VALIDATE BACKUPSET command. Any problem detected during validation is displayed to you. Problems initiate the execution of a failure assessment. If a failure is detected, it is logged into the Automatic Diagnostic Repository (ADR) as a finding. You can use the LIST FAILURE command to view all failures recorded in the repository. The VALIDATE command supports validation of individual backup sets and data blocks. In a physical corruption, the database does not recognize the block at all. In a logical corruption, the contents of the block are logically inconsistent. By default, the VALIDATE command checks for physical corruption only. You can specify CHECK LOGICAL to check for logical corruption as well.
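To add the logical corruption checks to such a proactive run, the CHECK LOGICAL clause can be combined with VALIDATE; a minimal sketch:

```sql
RMAN> VALIDATE CHECK LOGICAL DATABASE;
```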
Best Practice: Proactive Checks (continued)
Block corruptions can be divided into interblock corruption and intrablock corruption. In intrablock corruption, the corruption occurs within the block itself and can be either physical or logical. In interblock corruption, the corruption occurs between blocks and can only be logical. The VALIDATE command checks for intrablock corruption only.
Example:

RMAN> validate database;

Starting validate at 21-DEC-06
using channel ORA_DISK_1
channel ORA_DISK_1: starting validation of datafile
channel ORA_DISK_1: specifying datafile(s) for validation
input datafile file number=00001 name=/u01/app/oracle/oradata/orcl/system01.dbf
input datafile file number=00002 name=/u01/app/oracle/oradata/orcl/sysaux01.dbf
input datafile file number=00005 name=/u01/app/oracle/oradata/orcl/example01.dbf
input datafile file number=00003 name=/u01/app/oracle/oradata/orcl/undotbs01.dbf
input datafile file number=00004 name=/u01/app/oracle/oradata/orcl/users01.dbf
channel ORA_DISK_1: validation complete, elapsed time: 00:00:15

List of Datafiles
=================
File Status Marked Corrupt Empty Blocks Blocks Examined High SCN
---- ------ -------------- ------------ --------------- --------
1
channel ORA_DISK_1: starting validation of datafile
channel ORA_DISK_1: specifying datafile(s) for validation
including current control file for validation
including current SPFILE in backup set
channel ORA_DISK_1: validation complete, elapsed time: 00:00:01

List of Control File and SPFILE
===============================
File Type
Setting Corruption-Detection Parameters
You can use the DB_ULTRA_SAFE parameter for easy manageability. It affects the default values of the following parameters:
• DB_BLOCK_CHECKING, which initiates checking of database blocks. This check can often prevent memory and data corruption. (Default: FALSE; recommended: FULL)
• DB_BLOCK_CHECKSUM, which initiates the calculation and storage of a checksum in the cache header of every data block when writing it to disk. Checksums assist in detecting corruption caused by underlying disks, storage systems, or I/O systems. (Default: TYPICAL; recommended: TYPICAL)
• DB_LOST_WRITE_PROTECT, which initiates checking for "lost writes." A data block lost write occurs when the I/O subsystem signals the completion of a block write that has not actually been completely written to persistent storage. With a physical standby database, such lost writes on the primary can be detected when the redo is applied on the standby. (Default: TYPICAL; recommended: TYPICAL)
If you set any of these parameters explicitly, your values remain in effect. The DB_ULTRA_SAFE parameter changes only the default values for these parameters.
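A sketch of enabling the umbrella parameter (DB_ULTRA_SAFE is static, so it must be set in the spfile and takes effect after an instance restart; DATA_ONLY and DATA_AND_INDEX are its non-OFF settings):

```sql
ALTER SYSTEM SET db_ultra_safe = DATA_AND_INDEX SCOPE = SPFILE;

-- After restarting the instance, verify the derived defaults:
SHOW PARAMETER db_block_checking
SHOW PARAMETER db_block_checksum
SHOW PARAMETER db_lost_write_protect
```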
Setting Corruption-Detection Parameters (continued) Depending on your system's tolerance for block corruption, you can intensify the checking for block corruption. Enabling the DB_ULTRA_SAFE parameter (default: OFF) results in increased system overhead, because of these more intensive checks. The amount of overhead is related to the number of blocks changed per second; so it cannot be easily quantified. For a 'high-update' application, you can expect a significant increase in CPU, likely in the ten to twenty percent range, but possibly higher. This overhead can be alleviated by allocating additional CPUs.
Summary
In this lesson, you should have learned how to:
• Describe your options for repairing data failure
• Use the new RMAN data repair commands:
  – List failures
  – Receive repair advice
  – Repair failures
• Perform proactive failure checks
• Query the Data Recovery Advisor views
Objectives
After completing this lesson, you should be able to:
• Configure the password file to use case-sensitive passwords
• Encrypt a tablespace
• Create a virtual private catalog for RMAN
• Configure fine-grained access to network services
Secure Password Support
More secure password support. Passwords:
• May be longer (up to 50 characters)
• Are case sensitive
• May contain more characters
• Use a more secure hash algorithm
• Use salt in the hash algorithm
Usernames are still Oracle identifiers (up to 30 characters, case insensitive)
Secure Password Support
You must use more secure passwords to meet the demands of compliance with various security and privacy regulations. Passwords that are very short or that are formed from a limited set of characters are susceptible to brute force attacks. Longer passwords drawn from a larger character set are much more difficult to guess or find. In Oracle Database 11g, passwords are handled differently than in previous versions:
• Passwords may be longer. Up to 50 characters are allowed.
• Passwords are case sensitive. Uppercase and lowercase characters are now different characters when used in a password.
• Passwords may contain special characters and multibyte characters. In previous versions of the database, only the '$', '_', and '#' special characters were allowed in a password without quoting it.
• Passwords are always passed through a hash algorithm and then stored as a user credential. When the user presents a password, it is hashed and compared to the stored credential. In Oracle Database 11g, the hash algorithm is the public SHA-1 algorithm rather than the proprietary algorithm used in previous versions of the database. SHA-1 is a stronger algorithm, producing a 160-bit hash.
• Passwords always use salt. A hash function always produces the same output for the same input. Salt is a unique (random) value that is added to the input to ensure that the stored credential is unique.
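Case sensitivity is governed by the SEC_CASE_SENSITIVE_LOGON initialization parameter (TRUE by default in 11g). A small sketch with a hypothetical account:

```sql
ALTER SYSTEM SET sec_case_sensitive_logon = TRUE;

-- appuser is a hypothetical account; the two connect attempts below
-- differ only in password case, which now matters.
CREATE USER appuser IDENTIFIED BY "Welcome1";
GRANT CREATE SESSION TO appuser;
-- CONNECT appuser/Welcome1   succeeds
-- CONNECT appuser/welcome1   fails with an invalid-credentials error
```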
Automatic Secure Configuration
Oracle Database 11g installs and creates the database with certain security features recommended by the CIS (Center for Internet Security) benchmark. The CIS-recommended configuration is more secure than the 10gR2 default installation, yet open enough to allow the majority of applications to run successfully. Many customers have already adopted this benchmark. Some recommendations of the CIS benchmark may be incompatible with some applications.
Password Configuration
By default:
• Default password profile is enabled
• Account is locked after 10 failed login attempts
In upgrade:
• Passwords are case insensitive until changed
• Passwords become case sensitive by ALTER USER
On creation:
• Passwords are case sensitive
Secure Default Configuration
When creating a custom database using the Database Configuration Assistant (DBCA), you can specify the Oracle Database 11g default security configuration. By default, if a user tries to log in to an Oracle database multiple times using an incorrect password, the database delays each login after the third try. This protection applies to attempts made from different IP addresses or multiple client connections. The delay gradually increases before the user can try another password, up to a maximum of about ten seconds. The default password profile is enabled with the following settings:
PASSWORD_LIFE_TIME       180
PASSWORD_GRACE_TIME      7
PASSWORD_REUSE_TIME      UNLIMITED
PASSWORD_REUSE_MAX       UNLIMITED
FAILED_LOGIN_ATTEMPTS    10
PASSWORD_LOCK_TIME       1
PASSWORD_VERIFY_FUNCTION NULL
When an Oracle Database 10g database is upgraded, passwords are case insensitive until the ALTER USER… command is used to change the password. When a database is newly created, passwords are case sensitive by default.
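After an upgrade, you can see which accounts still carry only the old password version with a query against DBA_USERS; the PASSWORD_VERSIONS column lists 10G, 11G, or both:

```sql
SELECT username, password_versions
FROM   dba_users
ORDER  BY username;

-- An account showing only '10G' remains case insensitive until
-- its password is changed (scott is illustrative):
ALTER USER scott IDENTIFIED BY "NewPassw0rd";
```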
Enable Built-in Password Complexity Checker
Execute the utlpwdmg.sql script to create the password verify function: SQL> CONNECT / as SYSDBA SQL> @?/rdbms/admin/utlpwdmg.sql
Alter the default profile: ALTER PROFILE DEFAULT LIMIT PASSWORD_VERIFY_FUNCTION verify_function_11g;
Enable Built-in Password Complexity Checker
The verify_function_11g function is a sample PL/SQL function that can be easily modified to enforce the password complexity policies at your site. This function does not require special characters to be embedded in the password. Both verify_function_11g and the older verify_function are included in the utlpwdmg.sql file. To enable password complexity checking, create a verification function owned by SYS. Use one of the supplied functions or modify one of them to meet your requirements. The example shows the use of the utlpwdmg.sql script. With no modification, the script creates verify_function_11g. The verify_function_11g function checks that the password: contains at least 8 characters, contains at least one number and one alphabetic character, and differs from the previous password by at least 3 characters. The function also checks that the password is not: a username or a username appended with a number from 1 to 100, a username reversed, a server name or a server name appended with 1-100, or one of a set of well-known and common passwords such as 'welcome1', 'database1', 'oracle123', or 'oracle' appended with 1-100.
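With the verify function attached to the default profile, a non-compliant password change is rejected; for example (scott is illustrative):

```sql
-- 'short1' has fewer than 8 characters, so the change fails with
-- ORA-28003: password verification for the specified password failed
ALTER USER scott IDENTIFIED BY short1;
```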
Managing Default Audits
Review audit logs:
• Default audit options cover important security privileges
Archive audit records:
• Export
• Copy to another table
Remove archived audit records
Managing Default Audits
Review the audit logs. By default, auditing is enabled in Oracle Database 11g for certain privileges that are very important to security. The audit trail is recorded in the database AUD$ table by default; the AUDIT_TRAIL parameter is set to DB. For most sites, these audits should not have a large impact on database performance.
Archive audit records. To retain audit records, export them using Data Pump export, or use a SELECT statement to capture a set of audit records into a separate table.
Remove archived audit records. Remove audit records from the SYS.AUD$ table after review and archive. Audit records take up space in the SYSTEM tablespace. If the SYSTEM tablespace cannot grow and there is no more space for audit records, errors are generated for each audited statement. Because CREATE SESSION is one of the audited privileges, no new sessions could be created.
Note: The SYSTEM tablespace is created with the AUTOEXTEND ON option, so it grows as needed until there is no more space available on the disk.
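A minimal sketch of the archive-then-purge cycle described above (the archive table name and the date cutoff are illustrative):

```sql
-- Copy older audit records into a hypothetical archive table...
CREATE TABLE audit_archive_2006 AS
  SELECT * FROM sys.aud$
  WHERE  ntimestamp# < TO_TIMESTAMP('2007-01-01','YYYY-MM-DD');

-- ...then remove the archived rows from the live audit trail.
DELETE FROM sys.aud$
WHERE  ntimestamp# < TO_TIMESTAMP('2007-01-01','YYYY-MM-DD');
COMMIT;
```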
Managing Default Audits (continued)
The following privileges are audited for all users on success and failure, and by access:
CREATE EXTERNAL JOB
CREATE ANY JOB
GRANT ANY OBJECT PRIVILEGE
EXEMPT ACCESS POLICY
CREATE ANY LIBRARY
GRANT ANY PRIVILEGE
DROP PROFILE
ALTER PROFILE
DROP ANY PROCEDURE
ALTER ANY PROCEDURE
CREATE ANY PROCEDURE
ALTER DATABASE
GRANT ANY ROLE
CREATE PUBLIC DATABASE LINK
DROP ANY TABLE
ALTER ANY TABLE
CREATE ANY TABLE
DROP USER
ALTER USER
CREATE USER
CREATE SESSION
AUDIT SYSTEM
ALTER SYSTEM
Adjust Security Settings
When you create a database using the DBCA tool, you are offered a choice of security settings:
• Keep the enhanced 11g default security settings (recommended). These settings include enabling auditing and the new default password profile.
• Revert to pre-11g default security settings. To disable a particular category of enhanced settings for compatibility purposes, choose from the following:
  - Revert audit settings to pre-11g defaults
  - Revert password profile settings to pre-11g defaults
These settings can also be changed after the database is created, by using DBCA. Secure permissions on the software are always set; they are not affected by the user's choice for the Security Settings option.
Setting Security Parameters
Restrict release of server information:
• SEC_RETURN_SERVER_RELEASE
Protect against DoS attacks:
• SEC_PROTOCOL_ERROR_FURTHER_ACTION
• SEC_PROTOCOL_ERROR_TRACE_ACTION
Protect against old protocol attacks:
• SEC_DISABLE_OLDER_ORACLE_RPCS
Protect against brute force attacks:
• SEC_MAX_FAILED_LOGIN_ATTEMPTS
Setting Security Parameters
A set of new parameters has been added in Oracle Database 11g to enhance the default security of the database. These parameters are systemwide and static.
Restrict release of server information. The new SEC_RETURN_SERVER_RELEASE parameter reduces the amount of information about the server that is available to the client. When it is set to TRUE, the full banner is displayed. When it is set to FALSE, a limited generic banner is displayed. (This parameter does not yet function in the 11.1.0.4 beta.)
Protect against denial-of-service (DoS) attacks. The two parameters shown specify the actions to be taken when the database receives bad packets from a client, the assumption being that the bad packets come from a possibly malicious client. The SEC_PROTOCOL_ERROR_FURTHER_ACTION parameter specifies what action is taken with the client connection: continue, drop the connection, or delay accepting requests. The SEC_PROTOCOL_ERROR_TRACE_ACTION parameter specifies a monitoring action: NONE, TRACE, LOG, or ALERT.
Protect against old protocol attacks. Older, less secure protocols are a vector for attacks. If these older protocols are not being used by applications in your database, disable them by setting the SEC_DISABLE_OLDER_ORACLE_RPCS parameter to TRUE.
Protect against brute force attacks. The new SEC_MAX_FAILED_LOGIN_ATTEMPTS initialization parameter, which has a default setting of 10, causes a connection to be dropped automatically after the specified number of failed attempts. This parameter is enforced even when the password profile is not enabled.
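A sketch of setting some of these static parameters (the values shown are illustrative; consult the Oracle Database Reference for the permitted value lists):

```sql
ALTER SYSTEM SET sec_protocol_error_trace_action   = LOG         SCOPE = SPFILE;
ALTER SYSTEM SET sec_protocol_error_further_action = '(DELAY,3)' SCOPE = SPFILE;
ALTER SYSTEM SET sec_max_failed_login_attempts     = 5           SCOPE = SPFILE;
```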
Setting Database Administrator Authentication
Use a password file with case-sensitive passwords.
Enable strong authentication for administrator roles:
• Grant the administrator role in the directory
• Use Kerberos tickets
• Use certificates with SSL
Setting Database Administrator Authentication
The database administrator must always be authenticated. In Oracle Database 11g, there are new methods that make administrator authentication more secure and centralize the administration of these privileged users.
Use case-sensitive passwords with a password file for remote connections:
orapwd file=orapworcl entries=5 ignorecase=N
If your concern is that the password file might be vulnerable or that the maintenance of many password files is a burden, strong authentication can be implemented instead:
• Grant the OSDBA or OSOPER role in Oracle Internet Directory.
• Use Kerberos tickets.
• Use certificates over SSL.
To use any of these strong authentication methods, the LDAP_DIRECTORY_SYSAUTH initialization parameter must be set to YES. Set this parameter to NO to disable the use of strong authentication methods. Authentication through Oracle Internet Directory or through Kerberos can also provide centralized administration or single sign-on. If the password file is configured, it is checked first. The user may also be authenticated by the local OS by being a member of the OSDBA or OSOPER group.
Setup Directory Authentication for Administrative Users
1. Create the user in the directory.
2. Grant the SYSDBA or SYSOPER role to the user.
3. Set the LDAP_DIRECTORY_SYSAUTH parameter in the database.
4. Check that the LDAP_DIRECTORY_ACCESS parameter is set to PASSWORD or SSL.
5. Test the connection:
$ sqlplus fred/t%3eEGQ@orcl AS SYSDBA
Setup Directory Authentication for Administrative Users To enable the Oracle Internet Directory (OID) server to authorize SYSDBA and SYSOPER connections: 1. Configure the administrative user by using the same procedures you would use to configure a typical user. 2. In OID, grant SYSDBA or SYSOPER to the user for the database the user will administer. 3. Set the LDAP_DIRECTORY_SYSAUTH initialization parameter to YES. When set to YES, the LDAP_DIRECTORY_SYSAUTH parameter enables SYSDBA and SYSOPER users to authenticate to the database, by a strong authentication method. 4. Ensure that the LDAP_DIRECTORY_ACCESS initialization parameter is not set to NONE. The possible values are PASSWORD or SSL. 5. Afterwards, the administrative user can log in by including the net service name in the CONNECT statement. For example, for Fred to log on as SYSDBA if the net service name is orcl: CONNECT fred/t%3eEGQ@orcl AS SYSDBA
Note: If the database is configured to use a password file for remote authentication, the password file will be checked first.
Setup Kerberos Authentication for Administrative Users
1. Create the user in the Kerberos domain.
2. Configure OID for Kerberos authentication.
3. Grant the SYSDBA or SYSOPER role to the user in OID.
4. Set the LDAP_DIRECTORY_SYSAUTH parameter in the database.
5. Set the LDAP_DIRECTORY_ACCESS parameter.
6. Test the connection:
$ sqlplus /@orcl AS SYSDBA
Setup Kerberos Authentication for Administrative Users To enable Kerberos to authorize SYSDBA and SYSOPER connections: 1. Configure the administrative user by using the same procedures you would use to configure a typical user. For more information on configuring Kerberos authentication, see the Oracle Database Advanced Security Administrator’s Guide 11g. 2. Configure OID for Kerberos authentication. See Oracle Database Enterprise User Administrator's Guide 11g Release 1 3. In OID, grant SYSDBA or SYSOPER to the user for the database the user will administer. 4. Set the LDAP_DIRECTORY_SYSAUTH initialization parameter to YES. When set to YES, the LDAP_DIRECTORY_SYSAUTH parameter enables SYSDBA and SYSOPER users to authenticate to the database, by a strong authentication method. 5. Ensure that the LDAP_DIRECTORY_ACCESS initialization parameter is not set to NONE. This will be set to either PASSWORD or SSL 6. Afterwards, the administrative user can log in by including the net service name in the CONNECT statement. For example, to log on as SYSDBA if the net service name is orcl: CONNECT /@orcl AS SYSDBA
Setup SSL Authentication for Administrative Users
1. Configure the client to use SSL.
2. Configure the server to use SSL.
3. Configure OID for SSL user authentication.
4. Grant SYSOPER or SYSDBA to the user.
5. Set the LDAP_DIRECTORY_SYSAUTH parameter in the database.
6. Test the connection:
$ sqlplus /@orcl AS SYSDBA
Setup SSL Authentication for Administrative Users
To enable SYSDBA and SYSOPER connections using certificates and SSL (for more information on configuring SSL authentication, see the Oracle Database Advanced Security Administrator's Guide 11g):
1. Configure the client to use SSL:
• Set up the client wallet and user certificate. Update the wallet location in sqlnet.ora.
• Configure the Oracle net service name to include server distinguished names and use TCP/IP with SSL in tnsnames.ora.
• Configure TCP/IP with SSL in listener.ora.
• Set the client SSL cipher suites and the required SSL version, and set SSL as an authentication service in sqlnet.ora.
2. Configure the server to use SSL:
• Enable SSL for your database listener on TCPS and provide a corresponding TNS name.
• Store your database PKI credentials in the database wallet.
• Set the LDAP_DIRECTORY_ACCESS initialization parameter to SSL.
3. Configure OID for SSL user authentication. See Oracle Database Enterprise User Administrator's Guide 11g Release 1.
4. In OID, grant SYSDBA or SYSOPER to the user for the database the user will administer.
5. Set the LDAP_DIRECTORY_SYSAUTH initialization parameter to YES. When set to YES, this parameter enables SYSDBA and SYSOPER users to authenticate to the database by a strong authentication method.
6. Afterwards, the administrative user can log in by including the net service name in the CONNECT statement. For example, to log on as SYSDBA if the net service name is orcl: CONNECT /@orcl AS SYSDBA
Transparent Data Encryption
• Support for Log Miner
• Support for Logical Standby
• Tablespace encryption
• Hardware-based master key protection
Transparent Data Encryption Several new features enhance the capabilities of Transparent Data Encryption, and build on the same infrastructure.
TDE and Log Miner
Log Miner supports Transparent Data Encryption (TDE) encrypted columns.
Restrictions:
• The wallet holding the TDE master keys must be open
• Hardware security modules are not supported
• User-held keys are not supported
TDE and Log Miner
With Transparent Data Encryption (TDE), encrypted column data is encrypted in the data files, the undo segments, and the redo logs. Oracle logical standby depends on Log Miner's ability to transform redo logs into SQL statements for SQL Apply. Log Miner has been enhanced to support TDE, and this enhancement provides the ability to support TDE on a logical standby database. The wallet containing the master keys for TDE must be open for Log Miner to decrypt the encrypted columns. The database instance must be mounted to open the wallet; therefore, Log Miner cannot populate V$LOGMNR_CONTENTS to support TDE if the database instance is not mounted. Log Miner populates V$LOGMNR_CONTENTS for tables with encrypted columns, displaying the column data unencrypted for rows involved in DML statements. Note that this is not a security violation: TDE is a file-level encryption feature, not an access control feature; it does not prohibit DBAs from looking at encrypted data. In Oracle Database 11g, Log Miner does not support TDE with a hardware security module (HSM) for key storage. User-held keys for TDE are PKI public and private keys supplied by the user for use as TDE master keys; user-held keys are not supported by Log Miner.
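The prerequisite sequence can be sketched as follows (the wallet password and archived log name are placeholders):

```sql
-- The instance must be at least mounted so the wallet can be opened.
ALTER SYSTEM SET ENCRYPTION WALLET OPEN IDENTIFIED BY "wallet_pwd";

-- With the wallet open, Log Miner can decrypt TDE columns.
EXECUTE DBMS_LOGMNR.ADD_LOGFILE('/u01/arch/log_1_5.arc');
EXECUTE DBMS_LOGMNR.START_LOGMNR(options => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG);
SELECT sql_redo FROM v$logmnr_contents WHERE ROWNUM <= 10;
EXECUTE DBMS_LOGMNR.END_LOGMNR;
```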
TDE and Logical Standby
Logical Standby database with TDE:
• Wallet on the standby is a copy of the wallet on the primary
• Master key may be changed only on the primary
• Wallet open and close commands are not replicated
• Table key may be changed on the standby
• Table encryption algorithm may be changed on the standby
TDE and Logical Standby
The same wallet is required for both databases. The wallet must be copied from the primary database to the standby database every time the master key is changed using ALTER SYSTEM SET ENCRYPTION KEY IDENTIFIED BY <wallet_password>. An error is raised if the DBA attempts to change the master key on the standby database. If an auto-login wallet is not used, the wallet must be opened on the standby. Wallet open and close commands are not replicated on the standby. A different password can be used to open the wallet on the standby: the wallet owner can change the password to be used for the copy of the wallet on the standby. The DBA can change the encryption key or the encryption algorithm of a replicated table at the logical standby; this does not require a change to the master key or the wallet. The operation is performed with:
ALTER TABLE table_name REKEY USING '3DES168';
There can be only one algorithm per table; changing the algorithm at the table level changes the algorithm for all of its encrypted columns. A column on the standby can use a different algorithm than on the primary, or no encryption at all. To change the table key, the guard setting must be lowered to NONE. TDE can be used on local tables in the logical standby independently of the primary, provided the encrypted columns are not replicated to the standby.
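The rekey procedure described above can be sketched as follows; the table name is hypothetical, and the guard is lowered only for the duration of the change:

```sql
-- Illustrative sketch: change the table key of a replicated table on the
-- logical standby. The table name (scott.emp) is a hypothetical example.
ALTER DATABASE GUARD NONE;                    -- lower the guard to allow the change
ALTER TABLE scott.emp REKEY USING '3DES168';  -- new table key and algorithm
ALTER DATABASE GUARD ALL;                     -- restore the guard
```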
Using Tablespace Encryption
Create an encrypted tablespace 1. Create or open the encryption wallet SQL> ALTER SYSTEM SET ENCRYPTION KEY IDENTIFIED BY "welcome1";
2. Create a tablespace with the encryption keywords
SQL> CREATE TABLESPACE encrypt_ts
  2>   DATAFILE '$ORACLE_HOME/dbs/encrypt.dat' SIZE 100M
  3>   ENCRYPTION USING '3DES168'
  4>   DEFAULT STORAGE (ENCRYPT);
Tablespace Encryption Tablespace encryption is based on block-level encryption that encrypts on write and decrypts on read. The data is not encrypted in memory, so the only encryption penalty is associated with I/O. The SQL access paths are unchanged and all data types are supported. To use tablespace encryption, the encryption wallet must be open. The CREATE TABLESPACE command has an ENCRYPTION clause that sets the encryption properties, and an ENCRYPT storage parameter that causes the encryption to be used. You specify USING 'encrypt_algorithm' to indicate the name of the algorithm to be used. Valid algorithms are 3DES168, AES128, AES192, and AES256; the default is AES128. You can view the properties in the V$ENCRYPTED_TABLESPACES view. The encrypted data is protected during operations such as JOIN and SORT. This means that the data is safe when it is moved to temporary tablespaces. Data in undo and redo logs is also protected. Restrictions: • Temporary and undo tablespaces cannot be encrypted (selected blocks are encrypted). • BFILEs and external tables are not encrypted. • Transporting tablespaces across platforms of different endianness is not supported. • The key for an encrypted tablespace cannot be changed at this time. A workaround is to create a new tablespace with the desired properties and move all objects to it.
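The workaround in the last restriction can be sketched as follows; the tablespace, table, and index names are hypothetical:

```sql
-- Illustrative sketch, not from the course: "rekey" a tablespace by creating
-- a new encrypted tablespace and moving the objects into it.
CREATE TABLESPACE encrypt_ts2
  DATAFILE '$ORACLE_HOME/dbs/encrypt2.dat' SIZE 100M
  ENCRYPTION USING 'AES256'
  DEFAULT STORAGE (ENCRYPT);

ALTER TABLE hr.employees MOVE TABLESPACE encrypt_ts2;
-- Moving a table invalidates its indexes, which must be rebuilt:
ALTER INDEX hr.emp_emp_id_pk REBUILD TABLESPACE encrypt_ts2;
```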
Hardware Security Module
Encrypt and decrypt operations are performed on the hardware security module
Hardware Security Module A hardware security module (HSM) is a physical device that provides secure storage for encryption keys. It also provides secure computational space (memory) to perform encryption and decryption operations. HSM is a more secure alternative to the Oracle wallet. Transparent data encryption can use HSM to provide enhanced security for sensitive data. An HSM is used to store the master encryption key used for transparent data encryption. The key is secure from unauthorized access attempts as the HSM is a physical device and not an operating system file. All encryption and decryption operations that use the master encryption key are performed inside the HSM. This means that the master encryption key is never exposed in insecure memory. There are several vendors that provide Hardware Security Modules. The vendor must supply the appropriate libraries.
Using a Hardware Security Module with TDE 1. Decrypt encrypted data before switching to HSM 2. Configure sqlnet.ora ENCRYPTION_WALLET_LOCATION=(SOURCE=(METHOD=HSM))
3. Copy the PKCS#11 library to the correct path 4. Set up the HSM 5. Generate a master encryption key for HSM-based encryption ALTER SYSTEM SET ENCRYPTION KEY IDENTIFIED BY user_Id:password
Using an HSM involves an initial setup of the HSM device and configuring transparent data encryption to use the HSM. Once the initial setup is done, the HSM can be used just like an Oracle software wallet. The steps for configuring and using a hardware security module are:
• Decrypt encrypted data before switching to the HSM.
• Set the ENCRYPTION_WALLET_LOCATION parameter in sqlnet.ora:
ENCRYPTION_WALLET_LOCATION=(SOURCE=(METHOD=HSM))
• Copy the PKCS#11 library to its correct path.
• Set up the HSM.
• Generate a master encryption key for HSM-based encryption:
ALTER SYSTEM SET ENCRYPTION KEY IDENTIFIED BY user_Id:password
• Ensure that the HSM is accessible.
Encryption for LOB Columns
CREATE TABLE test1 (doc CLOB ENCRYPT USING 'AES128')
  LOB(doc) STORE AS SECUREFILE (CACHE NOLOGGING);
• LOB encryption is allowed only for SECUREFILE LOBS • All LOBs in the LOB column are encrypted • LOBs can be encrypted on per-column or per-partition basis – Allows for the co-existence of SECUREFILE and BASICFILE LOBs
Encryption for LOB Columns Oracle Database 11g introduces a completely reengineered large object (LOB) data type that dramatically improves performance, manageability, and ease of application development. This SecureFiles implementation (of LOBs) offers advanced, next-generation functionality such as intelligent compression and transparent encryption. The encrypted data in SecureFiles is stored in place and is available for random reads and writes. You must create the LOB with the SECUREFILE parameter, with encryption enabled (ENCRYPT) or disabled (DECRYPT, the default) on the LOB column. The current TDE syntax is used for extending encryption to LOB data types. The LOB implementation from prior versions is still supported for backward compatibility and is now referred to as BasicFiles. If you add a LOB column to a table, you can specify whether it should be created as SecureFiles or BasicFiles; the default LOB type is BasicFiles to ensure backward compatibility. Valid algorithms are 3DES168, AES128, AES192, and AES256. The default is AES192. Note: For further discussion of SecureFiles, see the “Managing Storage” lesson.
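Adding an encrypted SecureFiles LOB column to an existing table follows the same syntax; this sketch uses hypothetical table and column names:

```sql
-- Illustrative sketch: add an encrypted SecureFiles LOB column to an
-- existing table. Table (documents) and column (body) are hypothetical.
ALTER TABLE documents ADD (body CLOB ENCRYPT USING 'AES192')
  LOB(body) STORE AS SECUREFILE;
```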
Using Kerberos Enhancements
• Use stronger encryption algorithms (no action required) • Interoperability between MS KDC and MIT KDC (no action required) • Longer principal names CREATE USER KRBUSER IDENTIFIED EXTERNALLY AS '[email protected]';
Kerberos Enhancements The Oracle client Kerberos implementation now makes use of stronger encryption algorithms such as 3DES and AES in place of DES, which makes using Kerberos more secure. The Kerberos authentication mechanism in Oracle Database now supports the following encryption types: • DES3-CBC-SHA (DES3 algorithm in CBC mode with HMAC-SHA1 as checksum) • RC4-HMAC (RC4 algorithm with HMAC-MD5 as checksum) • AES128-CTS (AES algorithm with 128-bit key in CTS mode with HMAC-SHA1 as checksum) • AES256-CTS (AES algorithm with 256-bit key in CTS mode with HMAC-SHA1 as checksum) The Kerberos implementation has been enhanced to interoperate smoothly with Microsoft and MIT Key Distribution Centers. The Kerberos principal name can now contain more than 30 characters; it is no longer restricted by the number of characters allowed in a database user name. If the Kerberos principal name is longer than 30 characters, use: CREATE USER KRBUSER IDENTIFIED EXTERNALLY AS '[email protected]';
Managing TDE with Enterprise Manager Using Enterprise Manager, the administrator can open and close the wallet, move the location of the wallet, and generate a new master key. The example shows that TDE options are part of the Create Table and Edit Table processes. Table encryption options allow you to choose the encryption algorithm and salt; the table key can also be reset. The other place where TDE changed the management pages is Export and Import Data. If TDE is configured, the wallet is open, and the table to be exported has encrypted columns, the export wizard offers data encryption. The same arbitrary key (password) that was used on export must be provided on import in order to import any encrypted columns. A partial import that does not include tables containing encrypted columns does not require the password.
Managing Tablespace Encryption with Enterprise Manager
Managing Tablespace Encryption with Enterprise Manager You can manage tablespace encryption from the same console that you use to manage transparent data encryption. Once encryption has been enabled for the database, the DBA can set the encryption property of a tablespace on the Edit Tablespace page or when creating a new tablespace.
Managing Virtual Private Database With Enterprise Manager 11g, you can now manage Virtual Private Database policies from the console: you can enable, disable, add, and drop policies. The console also allows you to manage application contexts (the application context page is not shown).
Managing Label Security with Database Control Oracle Label Security (OLS) management is integrated with Enterprise Manager Database Control: the database administrator can manage OLS from the same console that is used for managing the database instances, listeners, and host. The differences between Database Control and Grid Control are minimal. OLS management is likewise integrated with Enterprise Manager Grid Control, where the administrator manages OLS from the same console used for managing database instances, listeners, and other targets.
Managing Label Security with Oracle Internet Directory
Label Security with OID Oracle Label Security policies can now be created and stored in Oracle Internet Directory using Enterprise Manager, and then propagated to one or more databases. A database subscribes to a policy, making the policy available to the database, and the policy can then be applied to tables and schemas in the database. Label authorizations can be assigned to enterprise users in the form of profiles.
Enterprise Users / Enterprise Manager The functionality of Enterprise Security Manager has been integrated into Enterprise Manager, which allows you to create and configure enterprise domains, enterprise roles, user schema mappings, and proxy permissions. Databases can be configured for enterprise user security after they have been registered with OID; the registration is performed through the DBCA tool. Enterprise users and groups can also be configured for enterprise user security. The creation of enterprise users and groups can be done through Delegated Administration Services (DAS). Administrators for the database can be created and given the appropriate roles in OID through Enterprise Manager. Enterprise Manager allows you to manage enterprise users and roles, schema mappings, domain mappings, and proxy users.
Enterprise Manager Security Management Security management has been integrated into Enterprise Manager. Oracle Label Security, application contexts, and Virtual Private Database, previously administered through the Oracle Policy Manager tool, are now managed through Enterprise Manager. Enterprise user security is also now managed through Enterprise Manager instead of a separate tool. A graphical interface for managing transparent data encryption has been added.
Enterprise Manager Policy Manager
Enterprise Manager Policy Manager Enterprise Manager Policy Manager allows you to compare your database configuration against a set of Oracle best practices. These best practices are aligned with CIS and PCI requirements.
Oracle Audit Vault Enhancements
• Hardened Streams configuration • DML/DDL capture on the SYS schema • Capture actions against the SYS, SYSTEM, and CTXSYS schemas • Capture changes to SYS.AUD$ and SYS.FGA_LOG$
Oracle Audit Vault Enhancements Oracle Audit Vault provides auditing in a heterogeneous environment. Audit Vault consists of a secure database to store and analyze audit information from various sources, such as databases and OS audit trails. Oracle Streams is an asynchronous information-sharing infrastructure that facilitates sharing of events within a database or from one database to another. Events can be DML or DDL changes happening in a database; these events are captured by Streams implicit capture and are propagated to a queue in a remote database, where they are consumed by a subscriber, typically the Streams apply process. Oracle Streams can already capture all DML on participating tables and all DDL in the database. Streams has been enhanced to capture the events that change the database audit trail and forward that information to Audit Vault. The transfer and collection configuration has also been hardened: the configuration of Audit Vault is driven entirely from the Audit Vault instance, and audit sources require only an initial configuration to be enabled.
RMAN Security Enhancements Backup shredding is a key management feature that allows the DBA to delete the encryption key of transparently encrypted backups without physical access to the backup media. The encrypted backups are rendered inaccessible if the encryption key is destroyed. This does not apply to password-protected backups. Configure backup shredding with: CONFIGURE ENCRYPTION EXTERNAL KEY STORAGE ON; or: SET ENCRYPTION EXTERNAL KEY STORAGE ON;
The default setting is OFF, so backup shredding is not enabled. No new command is needed to shred a backup; use: DELETE FORCE;
Virtual Private Catalog The RMAN catalog has been enhanced to support virtual private RMAN catalogs for groups of databases and users. The catalog owner creates the base catalog and grants RECOVERY_CATALOG_OWNER to the owner of the virtual catalog. The catalog owner can either grant access to registered databases to the virtual catalog owner or grant REGISTER to the virtual catalog owner. The virtual catalog owner can then connect to the catalog for a particular target or register a target database. Once the virtual private catalog is configured, the virtual private catalog owner uses it just like a standard base catalog. This feature allows consolidation of RMAN repositories while maintaining a separation of responsibilities. The catalog owner can access all the registered database information in the catalog, and can see a listing of all registered databases with the SQL*Plus command: SELECT DISTINCT db_name FROM DBINC;
Using RMAN Virtual Private Catalog
1. Create an RMAN base catalog RMAN> CONNECT CATALOG catowner/oracle@catdb; RMAN> CREATE CATALOG;
2. Grant RECOVERY_CATALOG_OWNER to the VPC owner SQL> CONNECT SYS/oracle@catdb AS SYSDBA SQL> GRANT RECOVERY_CATALOG_OWNER TO vpcowner;
3. Grant REGISTER to the VPC owner RMAN> CONNECT CATALOG catowner/oracle@catdb; RMAN> GRANT REGISTER DATABASE TO vpcowner;
or grant CATALOG FOR DATABASE to the VPC owner RMAN> GRANT CATALOG FOR DATABASE db10g TO vpcowner;
Using RMAN Virtual Private Catalog The RMAN catalog has been enhanced. You create virtual private RMAN catalogs for groups of databases and users. 1. The catalog owner creates the base catalog. 2. The DBA on the catalog database creates the user that will own the virtual private catalog and grants RECOVERY_CATALOG_OWNER to the owner of the virtual catalog. 3. The catalog owner can grant access for previously registered databases to the virtual catalog owner or grant REGISTER to the virtual catalog owner. The GRANT CATALOG command is: GRANT CATALOG FOR DATABASE prod1, prod2 TO vpcowner;
The GRANT REGISTER command is: GRANT REGISTER DATABASE TO vpcowner;
The virtual catalog owner can then connect to the catalog for a particular target or register a target database. Once the virtual private catalog is configured the virtual private catalog owner uses it just like a standard base catalog.
Using RMAN Virtual Private Catalog (cont)
4. Create a virtual catalog for 11g clients RMAN> CONNECT CATALOG vpcowner/oracle@catdb; RMAN> CREATE VIRTUAL CATALOG;
or create a virtual catalog for pre-11g clients SQL> CONNECT vpcowner/oracle@catdb SQL> exec catowner.dbms_rcvcat.create_virtual_catalog;
5. Register a not previously cataloged database RMAN> CONNECT TARGET / CATALOG vpcowner/oracle@catdb; RMAN> REGISTER DATABASE;
6. Use the virtual catalog RMAN> CONNECT TARGET / CATALOG vpcowner/oracle@catdb; RMAN> BACKUP DATABASE;
Using RMAN Virtual Private Catalog (continued) 4. Create a virtual private catalog. • If the target database is an Oracle Database 11g database and the RMAN client is an 11g client, you can use the RMAN command: CREATE VIRTUAL CATALOG;
• If the target database is Oracle Database 10g Release 2 or earlier, using a compatible client, you must execute the supplied procedure from SQL*Plus: base_catalog_owner.dbms_rcvcat.create_virtual_catalog;
Finally, connect to the catalog using the VPC owner login and use it as a normal catalog. This feature allows consolidation of RMAN repositories while maintaining a separation of responsibilities. The catalog owner can access all the registered database information in the catalog, and can see a listing of all registered databases with the SQL*Plus command: SELECT DISTINCT db_name FROM DBINC;
The virtual catalog owner can see only the databases that have been granted to it. If the catalog owner has not been granted SYSDBA or SYSOPER on the target database, then most RMAN operations cannot be performed by the catalog owner.
Managing Fine-Grained Access to External Network Services 1. Create an ACL and its privileges
BEGIN
  DBMS_NETWORK_ACL_ADMIN.CREATE_ACL (
    acl         => 'us-oracle-com-permissions.xml',
    description => 'Permissions for oracle network',
    principal   => 'SCOTT',
    is_grant    => TRUE,
    privilege   => 'connect');
END;
Managing Fine-Grained Access to External Network Services The network utility family of PL/SQL packages, such as UTL_TCP, UTL_INADDR, UTL_HTTP, UTL_SMTP, and UTL_MAIL, allows Oracle users to make network callouts from the database using raw TCP or higher-level protocols built on raw TCP. Previously, a user either did or did not have the EXECUTE privilege on these packages, and there was no control over which network hosts were accessed. The new DBMS_NETWORK_ACL_ADMIN package allows fine-grained control using access control lists (ACLs) implemented by XML DB. The first step is to create an access control list (ACL): a list of users and privileges held in an XML file. The XML document named in the acl parameter is relative to the /sys/acls/ folder in XML DB. In the example, SCOTT is granted connect. The username is case-sensitive in the ACL and must match the username of the session. There are only resolve and connect privileges; the connect privilege implies resolve. Optional parameters can specify a start and end timestamp for these privileges. To add more users and privileges to this ACL, use the ADD_PRIVILEGE procedure.
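For example, additional principals can be added to the same ACL with ADD_PRIVILEGE; the user name here is hypothetical:

```sql
-- Illustrative sketch: grant an additional user the resolve privilege in
-- the ACL created above. The principal (HR) is a hypothetical example.
BEGIN
  DBMS_NETWORK_ACL_ADMIN.ADD_PRIVILEGE (
    acl       => 'us-oracle-com-permissions.xml',
    principal => 'HR',
    is_grant  => TRUE,
    privilege => 'resolve');
END;
/
```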
Managing Fine-Grained Access to External Network Services 2. Assign an ACL to one or more network hosts
BEGIN
  DBMS_NETWORK_ACL_ADMIN.ASSIGN_ACL (
    acl        => 'us-oracle-com-permissions.xml',
    host       => '*.us.oracle.com',
    lower_port => 80,
    upper_port => null);
END;
Managing Fine-Grained Access to External Network Services Assign an ACL to one or more network hosts. The ASSIGN_ACL procedure associates the ACL with a network host and, optionally, a port or range of ports. In the example, the host parameter uses a wildcard character in the host name to assign the ACL to all the hosts of a domain. The use of wildcards affects the order of precedence for the evaluation of the ACLs: fully qualified host names with ports are evaluated before host names with ports, fully qualified host names are evaluated before partial domain names, and subdomains are evaluated before the top-level domain. Multiple hosts can be assigned to the same ACL, and multiple users can be added to the same ACL in any order after the ACL has been created.
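The resulting assignments and privileges can be inspected through the data dictionary; a quick sketch:

```sql
-- Illustrative sketch: review network ACL assignments and the privileges
-- granted in each ACL using the 11g dictionary views.
SELECT host, lower_port, upper_port, acl
  FROM dba_network_acls;

SELECT acl, principal, privilege, is_grant
  FROM dba_network_acl_privileges;
```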
Summary
In this lesson, you should have learned how to: • Configure the password file to use case-sensitive passwords • Encrypt a tablespace • Create a virtual private catalog for RMAN • Configure fine-grained access to network services
Practice # Overview: Using Security Features This practice covers the following topics: • Configuring the password file to use case-sensitive passwords • Encrypting a tablespace • Creating and using a virtual private catalog • Performing RMAN operations as SYSOPER
SecureFiles Overview This feature introduces a completely reengineered large object (LOB) data type that dramatically improves performance, manageability, and ease of application development. The new implementation also offers advanced, next-generation functionality such as intelligent compression and transparent encryption. This feature significantly strengthens the native content management capabilities of Oracle Database. SecureFiles: API The enhanced SecureFiles LOB APIs provide seamless access to both SecureFiles and BasicFiles LOBs. With this feature, you need to learn only one set of APIs, irrespective of the LOB implementation you are using. The benefit of this feature is greater ease of use: the new APIs are an extension of the old APIs, so no relearning is required.
SecureFiles Overview (continued) SecureFiles: Compression This feature allows you to explicitly compress SecureFiles to gain disk, I/O, and redo logging savings. The benefits of this feature are: • Reduced costs due to more efficient utilization of space • Improved performance of SecureFiles, as compression reduces I/O and redo logging (at some CPU expense). As CPUs get faster and clustered computing becomes more ubiquitous, it is safer to err on the side of more CPU as long as it saves I/O and redo logging to disk. SecureFiles: Data Path Optimization This feature entails a number of performance optimizations for SecureFiles, including: • Dynamic use of CACHE and NOCACHE, to avoid polluting the buffer cache with large SecureFiles LOBs • SYNC and ASYNC, to take advantage of the COMMIT NOWAIT BATCH semantics of transaction durability • Write gather caching, similar to the dirty write caches of file servers. This write gathering amortizes the cost of space allocation, inode updates, and redo logging, and enables large I/Os to disk. • New DLM locking semantics for SecureFiles LOB blocks: Oracle Database no longer uses a cache fusion lock for each block of the SecureFiles LOB. Instead, it amortizes the cost of going to the DLM by covering all SecureFiles LOB blocks with a single DLM lock. This feature moves LOB performance close to that of other file systems. SecureFiles: Deduplication Oracle Database can now automatically detect duplicate SecureFiles LOB data and conserve space by storing only one copy. This feature implements disk storage, I/O, and redo logging savings for SecureFiles. SecureFiles: Encryption This feature introduces a new, more efficient encryption facility for SecureFiles. The encrypted data is now stored in place and is available for random reads and writes. The benefit of this feature is enhanced data security.
SecureFiles: Inodes New storage structures for SecureFiles have been designed and implemented in this release to support high-performance (low-latency, high-throughput, concurrent, space-optimized) transactional access to large object data. In addition to improving basic data access, the new storage structures support rich functionality, all with minimal performance cost, such as: • Implicit compression and encryption • Data sharing • User-controlled versioning Note: The COMPATIBLE initialization parameter must be set to 11.1 or higher to use SecureFiles. The BasicFiles (previous LOB) format is still supported under 11.1 compatibility. There is no downgrade capability after 11.1 is set.
Benefits of SecureFile LOBs
SecureFiles:
• Are easy to use
• Provide superior performance over BasicFile LOBs:
– Fewer tunable storage parameters
– Reduced fragmentation
– Improved DML performance
– Improved read/write performance
– Efficient reuse of free space
• Offer SecureFile retention modes:
– Flashback mode
– AUTO mode
– No Retention mode
• Enable tuning of space management performance
Benefits of SecureFile LOBs The new SecureFiles space management model eliminates most of the drawbacks of basic LOBs. SecureFiles are easy to use and provide improved performance when compared to LOBs in earlier database releases. The key benefits of SecureFiles are: Fewer tunable storage parameters: There is no requirement to specify parameters such as FREEPOOLS, FREELIST GROUPS, PCTVERSION, RETENTION, and CHUNK. FREEPOOLS and FREELIST GROUPS were provided as a hint to specify the concurrency in RAC; the former could be altered only offline, and the latter could not be altered at all. These parameters are no longer available. Fragmentation: In prior releases, CHUNK provided the ability to batch I/O. The size of the chunk was user-specifiable, the default being the tablespace block size. In Oracle Database 11g, CHUNK is a hidden concept, totally managed by the database. The chunk sizes vary dynamically based on factors such as the size of the LOB and the availability of space in the segment. By using variable-sized chunks, internal fragmentation is minimized. The second type of fragmentation that impacted I/O performance in prior releases was the fragmentation of the LOB instantiation: the LOB was allocated in too many small chunks, or the chunks were not co-located on disk. This had a tremendous impact on I/O by increasing seek times and by not maximizing I/O bandwidth through prefetching of the several fragments. With the new LOB storage, chunks vary in size from the block size up to 64 MB, and a best effort is made to place contiguous data in physically adjacent locations on disk.
Benefits of SecureFile LOBs Performance of DML operations: The performance improvement of the space search is many-fold: • Separating committed and uncommitted data into different data structures avoids having to verify the transactional state of chunks before consuming them. • Deletes are several times faster, since the time to delete is not proportional to the size of the LOB instantiation; rather, it depends on the number of LOB instantiations freed. • Space management activities such as updating the metadata structures are avoided by batching the allocation of metadata structures. • In-memory statistics on concurrency levels help better distribute heat on data structures. • In-memory statistics on space usage patterns help make better decisions in doing proactive space management. • An in-memory fast space allocator tries to dispense free space by reading from the SGA; when sufficient space is kept free, allocations are made with zero block reads. • A background process tries to maintain free space in the segments by pre-allocating space based on segment growth. Performance of reads/writes: Delayed allocation of space is used to improve data co-location and minimize fragmentation of LOBs. Data is cached in the write gather cache before the space layer is called to allocate space on disk. This reduces the number of calls for metadata management and significantly improves the ability to use large chunks of space. During segment growth, when requests come serially, the server allocates chunks that are physically adjacent. Efficient reuse of free space: The undo generated on LOB columns is huge. The undo is not copied to the undo tablespace, because that would entail a huge I/O performance overhead due to writing to the redo logs and undo segments. Rollback of such transactions would also suffer a performance impact due to reading from the undo segments and copying all the data back into the LOB segment.
Oracle uses a shadow-paging technique to provide transaction recovery and complete recovery. In the new storage scheme, updated data is left in the original blocks, new blocks are allocated to contain the changes, and pointers in the metadata are updated to reflect the change. During transaction recovery, it is sufficient to flip the pointers in the metadata. Because metadata blocks are transactionally managed, complete recovery on the metadata blocks reveals the right LOB blocks. In the new storage architecture, updates involving smaller pieces of data are made as in-place updates. The freed space in the LOB segment is ordered on "freed time," and a FIFO-based reuse mechanism is used to reclaim the oldest committed free space first. • Flashback mode: When the database is in flashback mode, the space requirements for LOB undo retained in the LOB segment can be very high. The user can restrict the space usage for the LOB segment by either specifying a limit on the LOB segment size or specifying a minimum duration for which the undo should be retained in the LOB segment. • AUTO mode: In this mode, the goal of space management is to provide complete recovery for LOBs. Flashback is not guaranteed. LOB_UNDO_RETENTION is computed as the maximum query length for the LOB segment. MIN(AUM_UNDO_RETENTION, LOB_UNDO_RETENTION) is used to retain the undo in the LOB segment. • No Retention mode: Provides no retention of versioned space, for benchmarking purposes and for users who do not anticipate flashback or complete recovery. Committed undo is not retained and can be reclaimed in any order. Monitoring to tune space management performance: One of the many reasons the prior LOB implementation regressed in performance was that it lacked self-adjustment of system resources (memory, space, and so on) to adapt to dynamic workloads.
Previously, a LOB segment neither pre-allocated space under high DML activity nor released space when there was no DML activity for a long time. Another example: previously, when LOBs became fragmented during concurrent workloads, there was no way to defragment them to improve their read performance.
Enabling SecureFiles Storage
SecureFiles storage can be enabled:
• Using the DB_SECUREFILE initialization parameter, with the values:
– ALWAYS | PERMITTED | NEVER | IGNORE
• Using the ALTER SESSION | SYSTEM command:
SQL> ALTER SYSTEM SET db_securefile = 'ALWAYS';
Enabling SecureFiles Storage
The DB_SECUREFILE initialization parameter allows DBAs to determine the usage of SecureFiles. Valid values are:
• PERMITTED: Allow SecureFiles to be created (the default).
• NEVER: Disallow new SecureFiles from being created.
• ALWAYS: Force all LOBs created going forward to be SecureFiles.
• IGNORE: Disallow SecureFiles and ignore any errors that would otherwise be caused by forcing BasicFiles with SecureFiles options.
If NEVER is specified, any LOBs that are specified as SecureFiles are created as BasicFiles. All SecureFiles-specific storage options and features (for example, compression, encryption, and deduplication) cause an exception. The BasicFiles defaults are used for any storage options not specified. If ALWAYS is specified, all LOBs created in the system are created as SecureFiles. The LOB must be created in an Automatic Segment Space Management (ASSM) tablespace; otherwise an error occurs. Any BasicFiles storage options specified are ignored. The SecureFiles defaults for all storage can be changed using the ALTER SYSTEM command, as shown above.
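As a quick sketch of the two scopes mentioned above (the parameter name is real; querying V$PARAMETER assumes the usual SELECT privilege on that view):

```sql
-- Restrict SecureFiles creation for the current session only
ALTER SESSION SET db_securefile = 'NEVER';

-- Verify the effective setting
SELECT name, value
FROM   v$parameter
WHERE  name = 'db_securefile';
```

Note that changing the parameter affects only LOBs created from that point on; existing LOBs are not converted.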
Oracle Database 11g: New Features for Administrators 15 - 7
Creating SecureFiles LOB

CREATE TABLE func_spec (
  id number, doc CLOB ENCRYPT USING 'AES128')
  LOB(doc) STORE AS SECUREFILE
  (DEDUPLICATE LOB CACHE NOLOGGING);

CREATE TABLE test_spec (
  id number, doc CLOB)
  LOB(doc) STORE AS SECUREFILE
  (COMPRESS HIGH KEEP_DUPLICATES CACHE NOLOGGING);

CREATE TABLE design_spec (id number, doc CLOB)
  LOB(doc) STORE AS SECUREFILE (ENCRYPT);

CREATE TABLE design_spec (id number, doc CLOB ENCRYPT)
  LOB(doc) STORE AS SECUREFILE;
Creating SecureFiles LOB
You create a SecureFiles LOB when the storage keyword SECUREFILE appears in the CREATE TABLE statement with a LOB column. If the SECUREFILE keyword is not used, or if the BASICFILE keyword is used, a basic LOB (as in prior releases) is created. BASICFILE is the default storage.
The slide shows examples of creating SecureFiles. In the first example, you create a table called FUNC_SPEC to store documents as SecureFiles. You specify that duplicates should not be stored for the LOB, that the LOB should be cached when read, and that redo should not be generated when updates are performed on the LOB. In addition, you specify that the documents stored in the DOC column should be encrypted using the AES128 encryption algorithm. KEEP_DUPLICATES is the opposite of DEDUPLICATE, and can also be used in an ALTER statement.
In the second example, you create a table called TEST_SPEC that stores documents as SecureFiles. For this table you specify that duplicates may be stored, and that the LOBs should be stored in compressed format and cached but not logged. The default compression level is MEDIUM. The compression algorithm is implemented on the server side, which allows random reads and writes to LOB data. This property can also be changed via ALTER statements.
Creating SecureFiles LOB (Continued)
The third and fourth examples on the slide are semantically identical; the difference is purely syntactic. The first version of the statement uses the new ENCRYPT option within the SECUREFILE clause. The second version uses the ENCRYPT keyword directly after the column type. Both versions use TDE to encrypt the corresponding column.
Note: For a full description of the options available for the CREATE TABLE statement, see the Oracle Database SQL Reference.
SecureFiles Key Parameters
• CHUNKSIZE: Deprecated
• PCTVERSION: Does not apply to SecureFiles
• MAXSIZE: Specify the maximum segment size
• RETENTION: Specify the retention policy to use:
– MAX: Keep old versions until MAXSIZE is reached.
– MIN: Keep old versions at least MIN seconds.
– AUTO: Default
– NONE: Reuse old versions as much as possible
SecureFiles Key Parameters
CHUNKSIZE is deprecated. Although you can still specify it, it is no longer used internally. PCTVERSION no longer applies to SecureFiles. MAXSIZE is a physical storage attribute for your SecureFiles; it specifies the maximum segment size at the corresponding storage clause level. RETENTION is an overloaded keyword. Its meaning for SecureFiles is the following:
• MAX starts reclaiming old versions once the segment MAXSIZE is reached.
• MIN keeps old versions for at least the specified amount of time.
• AUTO, the default setting, is essentially a trade-off between space and time that is determined automatically.
• NONE reuses old versions as much as possible.
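A minimal sketch combining these parameters; the table name and values are illustrative, and the exact placement of MAXSIZE inside the STORAGE clause should be checked against the SQL Reference:

```sql
CREATE TABLE contracts (
  id  NUMBER,
  txt CLOB)
LOB(txt) STORE AS SECUREFILE (
  RETENTION MIN 3600            -- keep old versions at least one hour
  STORAGE (MAXSIZE 2G));       -- cap the size of the LOB segment
```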
Altering SecureFiles
Disable deduplication:
ALTER TABLE t1 MODIFY LOB(a) (KEEP_DUPLICATES);
Enable compression on SecureFiles:
ALTER TABLE t1 MODIFY LOB(a) (COMPRESS HIGH);
Enable compression on SecureFiles within a single partition:
ALTER TABLE t1 MODIFY PARTITION p1 LOB(a) (COMPRESS HIGH);
Enable encryption using 3DES168:
ALTER TABLE t1 MODIFY (a CLOB ENCRYPT USING '3DES168');
Altering SecureFiles
DEDUPLICATE/KEEP_DUPLICATES: The DEDUPLICATE option allows you to specify that LOB data that is identical in two or more rows in a LOB column should share the same data blocks. The opposite of this is KEEP_DUPLICATES. Oracle uses a secure hash index to detect duplication and combines LOBs with identical content into a single copy, reducing storage and simplifying storage management. VALIDATE performs a byte-by-byte comparison of the SecureFiles LOB with the SecureFiles LOB that has the same secure hash value, to verify that they match before finalizing deduplication. The LOB keyword is optional and is for syntactic clarity only.
COMPRESS/NOCOMPRESS: Enables or disables LOB compression. All LOBs in the LOB segment are altered with the new setting.
ENCRYPT/DECRYPT: Turns LOB encryption using TDE on or off. All LOBs in the LOB segment are altered with the new setting. A LOB segment can be altered only to enable or disable LOB encryption. That is, ALTER cannot be used to update the encryption algorithm or the encryption key; these can be updated using the ALTER TABLE REKEY syntax. Encryption is done at the block level, as the last step. This gives the best performance (the smallest possible amount of data is encrypted) when combined with other options.
RETENTION: Altering RETENTION affects only space created after the ALTER TABLE statement is executed.
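Because ALTER cannot change the encryption algorithm or key in place, a rekey is done with the separate REKEY syntax, sketched below (table name taken from the slide examples):

```sql
-- Re-encrypt the encrypted columns of T1 with a new key,
-- switching the algorithm at the same time
ALTER TABLE t1 REKEY USING '3DES168';
```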
Storing SecureFiles
The centerpiece of the disk structures for new LOBs is the inode, so named for its semantic similarity to inodes in traditional file systems. A LOB inode is a self-contained description of the user data in a LOB. Self-contained means that, except for dictionary-level information, the inode does not refer to data in other segments in order to describe and access a LOB. Additionally, the LOB inode is transportable: when new LOB tablespaces are moved across platforms, the information in a LOB inode is either converted per the dictates of the transportable tablespace infrastructure, or is already stored in a platform-independent format and requires no conversion.
Both the LOB inode and the LOB user data are stored in block format in a LOB segment. A LOB segment is a new segment type composed of a new type of transaction-managed block. New LOB segments are like automatic segment space management (ASSM) segments, but have additional space-management structures that are designed to make the allocation and de-allocation of large chunks of contiguous disk space extremely fast.
The LOB inode is a highly structured, variable-sized entity that describes coarse-grained user-level properties of a LOB, such as:
• Byte and character length
• User-data checksum or hash
• Presence or absence of compression and encryption, the specific algorithms used (if any), and the presence or absence of user-controlled versions of the LOB
The LOB inode also describes fine-grained storage for the user-data blocks of a LOB, such as the LOB map.
Storing SecureFiles (Continued)
The goal of the LOB map is to allow the inode layer to efficiently access any random byte offset within the user data, in addition to the more obvious requirement of mapping and accessing all bytes in the LOB. The LOB map is a hybrid of different types of persistent data structures, each optimized for a different range of logical offsets in the LOB. It is important to note that the LOB map is not a user-visible structure.
User data itself is stored in transaction-managed blocks in the LOB segment, and contiguous ranges of such blocks are grouped into physical and logical units called chunks. As with the LOB map, chunks are visible only to the Inode and Space Management layers: within the Inode layer, chunks are the entities that the LOB map describes; within the Space Management layer, chunks are the ranges of disk blocks allocated and de-allocated as a unit. As far as the user of the LOB is concerned, there are no preferred sizes, granularities, or alignments for any data access; the Inode layer internally chooses the most appropriate chunking granularities for the user data based on available resources.
Note: These chunks are distinct, in almost every respect, from the chunks that comprise LOBs available in prior releases.
The Inode layer provides the following functionality:
• Supports all existing functionality and access APIs for both old and new LOBs
• Provides improved performance for SecureFiles LOBs
• Implements new SecureFiles functionality:
- Compression
- Encryption
- Varying-width encoding
- Hashing
- Versioning
- Sharing
Note: LOB indexes are no longer created for SecureFiles.
Accessing SecureFiles Metadata
The data layer interface is exactly the same as with BasicFiles!
Accessing SecureFiles Metadata
For accessing SecureFiles data itself, you use exactly the same interface as with BasicFiles.
DBMS_LOB Package: LOBs inherit the LOB column settings for deduplication, encryption, and compression, which can also be configured at a per-LOB level using the LOB locator API. However, the LONG API cannot be used to configure these LOB settings. You must use the following DBMS_LOB package additions for these features:
• DBMS_LOB.GETOPTIONS: Settings can be obtained using this function. An integer corresponding to a predefined constant based on the option type is returned.
• DBMS_LOB.SETOPTIONS: This procedure sets features on a per-LOB basis, overriding the default LOB settings. It incurs a round trip to the server to make the changes persistent.
• DBMS_LOB.GET_DEDUPLICATE_REGIONS: This procedure outputs a collection of records identifying the deduplicated regions in a LOB. LOB-level deduplication contains only a single deduplicated region.
DBMS_SPACE.SPACE_USAGE: The existing SPACE_USAGE procedure is overloaded to return information about LOB space usage. It returns the amount of disk space in blocks used by all the LOBs in the LOB segment. This procedure can be used only on tablespaces created with ASSM, and does not treat LOB chunks belonging to BasicFiles as used space.
Note: For further details, see the Oracle Database PL/SQL Packages and Types Reference.
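A hedged sketch of the per-LOB API described above; the table FUNC_SPEC and column DOC come from the earlier example (row id 1 is assumed to exist), and the constant name OPT_COMPRESS follows the documented DBMS_LOB API:

```sql
SET SERVEROUTPUT ON
DECLARE
  l_doc CLOB;
  l_opt PLS_INTEGER;
BEGIN
  SELECT doc INTO l_doc FROM func_spec WHERE id = 1;
  -- Read the compression setting of this individual LOB
  l_opt := DBMS_LOB.GETOPTIONS(l_doc, DBMS_LOB.OPT_COMPRESS);
  DBMS_OUTPUT.PUT_LINE('Compression option: ' || l_opt);
END;
/
```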
Record-oriented SecureFile LOB for XML Index Improvement
Record-oriented SecureFile LOB for XML Index Improvement
Oracle Database 11g supports partial update in the form of delta update. The DBMS_LOB package, OCI, and other APIs are extended to support new piece-wise update calls. LOB reorganization occurs automatically during an update call when the server determines that a reorganization operation is more beneficial than a delta update; this is fully transparent to the client, for whom the piece-wise update call merely appears slow in that case. LOB delta updates support CLOB and NCLOB columns. The API for CLOB and NCLOB is the same as for BLOB, except that the offset fields are interpreted as character offsets.
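A sketch of one of the new piece-wise calls, DBMS_LOB.FRAGMENT_INSERT; the table, column, and row come from the earlier examples and are assumptions:

```sql
DECLARE
  l_doc CLOB;
BEGIN
  SELECT doc INTO l_doc FROM func_spec WHERE id = 1 FOR UPDATE;
  -- Insert 5 characters at character offset 10 without
  -- rewriting the rest of the LOB (a delta update)
  DBMS_LOB.FRAGMENT_INSERT(l_doc, 5, 10, 'NEW: ');
  COMMIT;
END;
/
```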
Migrating to SecureFiles
There are two recommended methods for migrating BasicFiles to SecureFiles: partition exchange and online redefinition.
Partition Exchange:
• Needs additional space equal to the largest of the partitions in the table.
• Can maintain indexes during the exchange.
• Can spread the workload out over several smaller maintenance windows.
• Requires that the table or partition be offline to perform the exchange.
Online Redefinition (recommended best practice):
• No need to take the table or partition offline.
• Can be done in parallel.
• Requires additional storage equal to the entire table and all LOB segments.
• Any global indexes must be rebuilt.
If you want to upgrade your BasicFiles to SecureFiles, use the methods typically used to upgrade data (for example, CTAS/ITAS, online redefinition, export/import, column-to-column copy, or using a view and a new column). Most of these solutions require twice the disk space used by the data in the input LOB column. However, partitioning and performing these actions partition by partition may lower the disk space required.
SecureFile Migration Example

create table tab1 (id number not null, c clob)
partition by range(id)
(partition p1 values less than (100) tablespace tbs1 lob(c) store as lobp1,
 partition p2 values less than (200) tablespace tbs2 lob(c) store as lobp2,
 partition p3 values less than (300) tablespace tbs3 lob(c) store as lobp3);

-- Insert your data, then create the interim table as SecureFiles:
create table tab1_tmp (id number not null, c clob)
partition by range(id)
(partition p1 values less than (100) tablespace tbs1 lob(c) store as securefile,
 partition p2 values less than (200) tablespace tbs2 lob(c) store as securefile,
 partition p3 values less than (300) tablespace tbs3 lob(c) store as securefile);

declare
  error_count pls_integer := 0;
begin
  dbms_redefinition.start_redef_table('scott','tab1','tab1_tmp','id id, c c');
  dbms_redefinition.copy_table_dependents('scott','tab1','tab1_tmp',1,
                                          true,true,true,false,error_count);
  dbms_redefinition.finish_redef_table('scott','tab1','tab1_tmp');
end;
/
SecureFile Migration Example
The example above can be used to migrate BasicFiles to SecureFiles LOBs. First, you create your table using BasicFiles; the example uses a partitioned table. Then you insert data into the table. Next, you create an interim table that has the same number of partitions, and the same columns and types, but this time using SecureFiles. The last step is to redefine your table using the previously created interim table.
SecureFile Monitoring
All the same mechanisms: • *_LOBS / *_LOB_PARTITIONS / *_PART_LOBS – New SECUREFILE column
• SYS_USER_SEGS / SYS_DBA_SEGS – New SECUREFILE segment subtype – New RETENTION column – New MINRETENTION column for RETENTION MIN
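The new columns can be queried as sketched below; this assumes the documented *_LOBS and *_SEGMENTS dictionary views and the objects created in the earlier examples:

```sql
-- Which LOB columns are SecureFiles?
SELECT table_name, column_name, securefile
FROM   user_lobs;

-- Segment subtype and retention settings
SELECT segment_name, segment_subtype, retention
FROM   user_segments
WHERE  segment_type = 'LOBSEGMENT';
```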
Objectives
After completing this lesson, you should be able to: • Describe and use the enhanced online table redefinition and materialized views • Describe finer grained dependency management • Describe and use the enhanced PL/SQL recompilation mechanism • Use enhanced DDL – Apply the improved table lock mechanism – Create and use invisible indexes
Objectives
After completing this lesson, you should be able to: • Use PL/SQL result cache • Create Bitmap join indexes for IOT • Describe System Managed Domain Indexes • Use automatic Native PL/SQL and Java Compilation • Use Client query Cache
Online Table Redefinition Enhancements When a table is redefined online, it is accessible to both queries and DML during much of the redefinition process. The process is enhanced in Oracle Database 11g to support tables with materialized views and view logs. In addition, online redefinition supports triggers with the FOLLOWS or PRECEDES clause, which establishes an ordering dependency between the triggers. Also, PL/SQL and dependent objects are not invalidated after a redefinition, unless they are logically affected. You can redefine a table online with the Enterprise Manager Reorganize Objects wizard or with the DBMS_REDEFINITION package. Note: You can access the Reorganize Objects wizard from the Schema sub-page.
Online Redefinition Wizard
In prior database versions, a table could not be redefined if it had a materialized view (MV) or MV log. In Oracle Database 11g, you can redefine tables with materialized views and MV logs. You can clone the materialized view log onto the interim table, just like triggers, indexes, and other similar dependent objects. At the end of the redefinition, rowid logs are invalidated. Initially, all dependent materialized views need to do a complete refresh. This enhancement saves you the effort and time of dropping and re-creating the materialized views and the materialized view logs. Note that for materialized view logs and queue tables, online redefinition is restricted to changes in physical properties. No horizontal or vertical sub-setting is permitted, nor are any column transformations. (The only valid value for the column mapping string is NULL.)
Redefinition and Materialized View
The example shows redefinition of the HR.LOCATION_MV materialized view and the HR.MLOG$_LOCATIONS view log, based on the HR.LOCATIONS table:
1. Invoke the Reorganize Objects wizard.
2. Select all database objects related to HR.LOCATIONS.
3. This example uses default options.
4. The Reorganize Objects wizard analyzes the space needed and displays an impact report.
Continuing with the Example:
5. Schedule the reorganization for immediate execution.
6. Review the Script Summary and Full Script. (You may wish to save the full script.)
Continuing with the Example:
7. Submit the job.
8. Verify its successful execution.
Best practice tip: Start the redefinition process before the start of the downtime, and use the downtime to complete the redefinition.
Steps in Redefining a Table Using PL/SQL
1. Choose the redefinition method.
2. Use the DBMS_REDEFINITION.CAN_REDEF_TABLE procedure to verify that the table can be redefined.
3. Create an empty interim table without indexes.
4. Use the DBMS_REDEFINITION.START_REDEF_TABLE procedure to start the redefinition.
5. Create indexes on the interim table.
6. Use DBMS_REDEFINITION.COPY_TABLE_DEPENDENTS to copy dependent objects onto the interim table.
7. Check for errors in the DBA_REDEFINITION_ERRORS view.
8. Use the DBMS_REDEFINITION.FINISH_REDEF_TABLE procedure to complete the redefinition.
9. Drop the interim table.
Steps in Redefining a Table Using PL/SQL
1. Choose the redefinition method: by key (primary key or pseudo-primary key) or by rowid (if no key is available).
2. Verify that the table is a candidate for online redefinition with the CAN_REDEF_TABLE procedure.
3. Create an empty interim table (in the same schema as the table to be redefined) with the desired logical and physical attributes, but without indexes. Optionally, as a best practice, if you are redefining a large table and want to improve the performance of the next step by running it in parallel, issue the following statements:
ALTER SESSION FORCE PARALLEL DML PARALLEL ;
ALTER SESSION FORCE PARALLEL QUERY PARALLEL ;
4. Start the redefinition process by calling the START_REDEF_TABLE procedure. If you did not define indexes in step 3, the initial copy uses direct path inserts and does not have to maintain indexes at this point, which is a performance benefit.
5. Create any indexes and other dependent objects on the interim table.
6. Copy dependent objects of the original table onto the interim table with the COPY_TABLE_DEPENDENTS procedure. This procedure clones and registers dependent objects of the base table, such as triggers, indexes, materialized view logs, grants, and constraints. It does not clone already registered dependent objects.
Steps in Redefining a Table using PL/SQL (Continued) 7. Query the DBA_REDEFINITION_ERRORS view to check for errors. Optionally and best practice: Synchronize the interim and the original tables periodically with the SYNC_INTERIM_TABLE procedure. Perform a final synchronization before completing the redefinition. 8. Complete the redefinition with the FINISH_REDEF_TABLE procedure. 9. Drop the interim table. The following are the end results of the redefinition process: • The original table is redefined with the columns, indexes, constraints, grants, triggers, and statistics of the interim table. • Dependent objects that were registered, either explicitly through the REGISTER_DEPENDENT_OBJECT procedure or implicitly through the COPY_TABLE_DEPENDENTS procedure, are renamed automatically, so that dependent object names on the redefined table are the same as before redefinition. If no registration is done or no automatic copying is done, then you must manually rename the dependent objects. • The referential constraints involving the interim table now involve the redefined table and are enabled. • Any indexes, triggers, materialized view logs, grants, and constraints defined on the original table (prior to redefinition) are transferred to the interim table and are dropped when the user drops the interim table. Any referential constraints involving the original table before the redefinition now involve the interim table and are disabled. • PL/SQL procedures and dependent objects are invalidated, if they are logically affected by the redefinition. They are automatically revalidated whenever they are used next. Note: The revalidation can fail if the logical structure of the table was changed as a result of the redefinition process.
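The step sequence above can be condensed into the following sketch; schema SCOTT and tables TAB1/TAB1_TMP are assumed, as in the migration example in this lesson:

```sql
DECLARE
  error_count PLS_INTEGER := 0;
BEGIN
  -- Step 2: verify that the table can be redefined (by primary key)
  DBMS_REDEFINITION.CAN_REDEF_TABLE('scott', 'tab1',
      DBMS_REDEFINITION.CONS_USE_PK);
  -- Step 4: start the redefinition
  DBMS_REDEFINITION.START_REDEF_TABLE('scott', 'tab1', 'tab1_tmp');
  -- Step 6: clone dependent objects onto the interim table
  DBMS_REDEFINITION.COPY_TABLE_DEPENDENTS('scott', 'tab1', 'tab1_tmp',
      DBMS_REDEFINITION.CONS_ORIG_PARAMS,
      TRUE, TRUE, TRUE, FALSE, error_count);
  -- Optional best practice: a final synchronization
  DBMS_REDEFINITION.SYNC_INTERIM_TABLE('scott', 'tab1', 'tab1_tmp');
  -- Step 8: complete the redefinition
  DBMS_REDEFINITION.FINISH_REDEF_TABLE('scott', 'tab1', 'tab1_tmp');
END;
/
```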
More Precise Dependency Metadata
Recording of additional, finer-grained dependency metadata, for example:
• Prior to Oracle Database 11g, adding column D to table T invalidated the dependent objects.
• Starting in Oracle Database 11g, adding column D to table T does not impact view V and does not invalidate the dependent objects.
Starting with Oracle Database 11g, the database records more precise dependency metadata. This is called fine-grained dependency tracking, and it ensures that dependent objects are not invalidated without a logical requirement. Earlier Oracle Database releases record dependency metadata (for example, that PL/SQL unit P depends on PL/SQL unit F, or that view V depends on table T) with the precision of the whole object. This means that dependent objects are sometimes invalidated without logical requirement. For example, if view V depends only on columns A and B in table T, and column D is added to table T, the validity of view V is not logically affected. Nevertheless, before Oracle Database Release 11.1, view V is invalidated by the addition of column D to table T. With Oracle Database Release 11.1, adding column D to table T does not invalidate view V. Similarly, if procedure P depends only on elements E1 and E2 within a package, adding element E99 to the package does not invalidate procedure P. Reducing the invalidation of dependent objects in response to changes to the objects on which they depend increases application availability, both in the development environment and during online application upgrades.
Fine-Grain Dependency Management
Adding a column to a table no longer impacts dependent views and does not invalidate the dependent objects. • Dependencies are tracked automatically • Requires no configuration
CREATE VIEW NEW_EMPLOYEES AS
  SELECT LAST_NAME
  FROM EMPLOYEES
  WHERE EMPLOYEE_ID > 20;
(Slide diagram: a cross-unit reference points from the dependent unit to the parent unit.)
Fine-Grain Dependency Management
In Oracle Database 11g, the database records more precise dependency metadata. This fine-grained dependency tracking ensures that dependent objects are not invalidated without a logical requirement. Oracle Database 11g dependencies are tracked at the element level within a unit. Element-based dependency tracking covers the following:
• Dependency of a single-table view on its base table
• Dependency of a PL/SQL program unit (package specification, package body, or subprogram) on the following:
- Other PL/SQL program units
- Tables
- Views
A cross-unit reference creates a dependency from the unit making the reference (the dependent unit, for example the NEW_EMPLOYEES view above) to the unit being referenced (the parent unit, for example the EMPLOYEES table). Dependencies are always tracked automatically by the PL/SQL and SQL compilers. This mechanism is available out of the box and does not require any configuration. Reducing the invalidation of dependent objects in response to changes to the objects on which they depend increases application availability.
Fine-Grain Dependency Benefit Example

1) CREATE TABLE t (col_a NUMBER, col_b NUMBER, col_c NUMBER);
   CREATE VIEW v AS SELECT col_a, col_b FROM t;

   SELECT ud.name, ud.type, ud.referenced_name, ud.referenced_type, uo.status
   FROM user_dependencies ud, user_objects uo
   WHERE ud.name = uo.object_name AND ud.name = 'V';

   NAME   TYPE   REFERENCED_NAME   REFERENCED_TYPE   STATUS
   ------ ------ ----------------- ----------------- -------
   V      VIEW   T                 TABLE             VALID

2) ALTER TABLE t ADD (col_d VARCHAR2(20));

   SELECT ud.name, ud.type, ud.referenced_name, ud.referenced_type, uo.status
   FROM user_dependencies ud, user_objects uo
   WHERE ud.name = uo.object_name AND ud.name = 'V';

   NAME   TYPE   REFERENCED_NAME   REFERENCED_TYPE   STATUS
   ------ ------ ----------------- ----------------- -------
   V      VIEW   T                 TABLE             VALID
Fine-Grain Dependency Benefit Example
In the first example above, table T is created with three columns: COL_A, COL_B, and COL_C. A view named V is created based on columns COL_A and COL_B of table T. The dictionary views show that view V depends on table T and that its status is VALID.
In the second example, table T is altered: a new column named COL_D is added. The dictionary views still report the view V as valid, because element-based dependency tracking recognizes that columns COL_A and COL_B were not modified, so the view does not need to be invalidated.
Fine-Grain Dependency Benefit Example CREATE PACKAGE pkg IS PROCEDURE p1; END pkg; / CREATE PROCEDURE p IS BEGIN pkg.p1(); END; / CREATE OR REPLACE PACKAGE pkg IS PROCEDURE p1; PROCEDURE unheard_of; END pkg; / SELECT status FROM user_objects WHERE object_name = 'P'; STATUS -------VALID
Fine-Grain Dependency Benefit Example
In the example shown above, you create a package named PKG that declares a procedure P1. Another procedure, named P, invokes PKG.P1. The definition of the package PKG is then modified, and another subroutine is added to the package declaration. When you query the USER_OBJECTS dictionary view for the status of procedure P, it is still valid, because the element you added to the definition of PKG is not referenced by procedure P.
Usage Guidelines

Original:
CREATE OR REPLACE PACKAGE PACK1 IS
  FUNCTION FUN1 RETURN VARCHAR2;
  FUNCTION FUN2 RETURN VARCHAR2;
  PROCEDURE PR1 (V1 VARCHAR2);
END;

No invalidation (item added at the end):
CREATE OR REPLACE PACKAGE PACK1 IS
  FUNCTION FUN1 RETURN VARCHAR2;
  FUNCTION FUN2 RETURN VARCHAR2;
  PROCEDURE PR1 (V1 VARCHAR2);
  PROCEDURE PR2 (V1 VARCHAR2);
END;

Partial invalidation (item added in the middle):
CREATE OR REPLACE PACKAGE PACK1 IS
  FUNCTION FUN1 RETURN VARCHAR2;
  FUNCTION FUN2 RETURN VARCHAR2;
  FUNCTION FUN3 RETURN VARCHAR2;
  PROCEDURE PR1 (V1 VARCHAR2);
  PROCEDURE PR2 (V1 VARCHAR2);
END;
Usage Guidelines to Reduce Invalidation
1. Add items to the end of a package to avoid changing the slot numbers or entry-point numbers of existing top-level elements.
2. Avoid SELECT *, table%ROWTYPE, and INSERT with no column names in PL/SQL units to allow for the ADD COLUMN functionality without invalidation.
3. Use views or synonyms to provide a layer of indirection between PL/SQL code and tables. The CREATE OR REPLACE VIEW command does not invalidate dependent views and PL/SQL units if the view's new rowtype matches the old rowtype (this behavior is available starting in Oracle Database 10g Release 2).
4. Likewise, the CREATE OR REPLACE SYNONYM command does not invalidate PL/SQL dependents if the old table and the new table have the same rowtype and privilege grants. Views and synonyms allow you to evolve tables independently of the code in your application.
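Guideline 3 can be sketched as follows; the view name and the added column are illustrative:

```sql
-- PL/SQL code references the view, never the table directly
CREATE OR REPLACE VIEW emp_api AS
  SELECT employee_id, last_name
  FROM   employees;

-- Adding a column later does not change the view's rowtype,
-- so code that depends on EMP_API stays valid
ALTER TABLE employees ADD (badge_color VARCHAR2(10));
```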
Minimizing Dependent PL/SQL Recompilation
• After DDL commands • After online table redefinition • Transparent enhancement
In prior database versions, all directly and indirectly dependent views and PL/SQL packages were invalidated after an online redefinition or other DDL operations. These views and PL/SQL packages are automatically recompiled whenever they are next invoked. If there are many dependent PL/SQL packages and views, the cost of the revalidation or recompilation can be significant. In Oracle Database 11g, views, synonyms, and other table-dependent objects (with the exception of triggers) that are not logically affected by the redefinition are not invalidated. For example, if referenced column names and types are the same after the redefinition, the objects are not invalidated. This optimization is transparent; that is, it is turned on by default. Another example: if the redefinition drops a column, only those procedures and views that reference the column are invalidated; the other dependent procedures and views remain valid. Note that all triggers on a table being redefined are invalidated (because the redefinition can potentially change the internal column numbers and data types), but they are automatically revalidated with the next DML execution against the table.
Serializing Locks
• Oracle Database 11g allows DDL commands to wait for DML locks.
• The DDL_LOCK_TIMEOUT parameter can be set at the system or session level.
• Values: 0 to 1000000 (in seconds)
– 0: NOWAIT (the default)
– 1000000: a very long WAIT
Serializing Locks
You can limit the time that DDL commands wait for DML locks by setting the DDL_LOCK_TIMEOUT parameter at the system or session level. This initialization parameter defaults to 0, that is, NOWAIT, which ensures backward compatibility. The range of values is 0 to 1000000 (in seconds). The maximum value of 1000000 seconds allows the DDL statement to wait a very long time (about 11.5 days) for the DML lock. If the lock is not acquired before the timeout expires, your application should handle the timeout accordingly.
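A minimal sketch; the added column is illustrative:

```sql
-- Let DDL in this session wait up to 30 seconds for DML locks
-- instead of failing immediately with ORA-00054
ALTER SESSION SET ddl_lock_timeout = 30;

ALTER TABLE hr.jobs ADD (notes VARCHAR2(100));
```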
Locking Tables Explicitly
Useful for adding a column (without a default value) to a table that is frequently updated
• Wait for up to 10 seconds for a DML lock: LOCK TABLE hr.jobs IN EXCLUSIVE MODE WAIT 10;
• Do not wait if another user already has locked the table: LOCK TABLE hr.employees IN EXCLUSIVE MODE NOWAIT;
• Lock a table that is accessible through the remote_db database link: LOCK TABLE hr.employees@remote_db IN SHARE MODE;
Locking Tables Explicitly
DDL commands require exclusive locks on internal structures. If these locks are unavailable when a DDL command is issued, the DDL command fails, even though it might have succeeded if issued a fraction of a second later. The WAIT option allows a DDL command to wait for its locks for a specified period of time before failing. The LOCK TABLE command has new syntax that lets you specify the maximum number of seconds the statement should wait to obtain a DML lock on the table: LOCK TABLE … IN lockmode MODE [NOWAIT | WAIT integer]
Specify NOWAIT if you want the database to return control to you immediately. If the specified table, partition, or table subpartition is already locked by another user, the database returns a message. Use the WAIT clause to indicate that the LOCK TABLE statement should wait up to the specified number of seconds to acquire a DML lock. There is no limit on the value of the integer. If you specify neither NOWAIT nor WAIT, the database waits indefinitely until the table is available, locks it, and returns control to you. When the database is executing DDL statements concurrently with DML statements, a timeout or deadlock can sometimes occur. The database detects such timeouts and deadlocks and returns an error.
Sharing Locks
The following commands no longer acquire exclusive locks (X), but shared exclusive locks (SX). The benefit is that DML can continue while the DDL executes. This change is transparent, that is, there is no syntax change:
– CREATE INDEX ONLINE
– CREATE MATERIALIZED VIEW LOG
– ALTER TABLE ENABLE CONSTRAINT NOVALIDATE
Sharing Locks
In highly concurrent environments, the requirement to acquire an exclusive lock, for example at the end of an online index creation or rebuild, could lead to a spike of waiting DML operations and, therefore, a short drop and spike in system usage. While this is not an overall problem for the database, such an anomaly in system usage could trigger operating-system alarm levels. This feature eliminates the need for exclusive locks when creating or rebuilding an index online.
Invisible Indexes
• Index is altered as not visible to the optimizer: ALTER INDEX ind1 INVISIBLE;
• Optimizer considers this index for this statement: SELECT /*+ index(TAB1 IND1) */ COL1 FROM TAB1 WHERE …;
• Optimizer will always consider the index: ALTER INDEX ind1 VISIBLE;
• Creating an index as invisible initially: CREATE INDEX IND1 ON TAB1(COL1) INVISIBLE;
Invisible Indexes
Oracle Database 11g allows you to create and alter indexes as invisible. An invisible index is maintained by DML operations, but it is not used by the optimizer during queries unless the query includes a hint that names the index. Using invisible indexes, you can:
• Test the removal of an index before dropping it
• Use temporary index structures for operations or modules of an application without affecting the overall application, for example during an application upgrade process
When an index is invisible, the optimizer generates plans that do not use the index. If there is no discernible drop in performance, you can then drop the index. If some queries show benefit from the index, you can make the index visible again, thus avoiding the effort of dropping the index and then having to re-create it. You can also create an index initially as invisible, perform testing, and then determine whether to make the index available. You can query the VISIBILITY column of the *_INDEXES data dictionary views:

SELECT INDEX_NAME, VISIBILITY
FROM   USER_INDEXES
WHERE  INDEX_NAME = 'IND1';

INDEX_NAME   VISIBILITY
----------   ----------
IND1         VISIBLE
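A related session-level switch, OPTIMIZER_USE_INVISIBLE_INDEXES, lets a single test session consider invisible indexes without adding hints to every statement. A minimal sketch:

```sql
-- Hide the index from the optimizer database-wide
ALTER INDEX ind1 INVISIBLE;

-- In a test session only, let the optimizer consider invisible indexes
ALTER SESSION SET OPTIMIZER_USE_INVISIBLE_INDEXES = TRUE;

-- Confirm the index state from the data dictionary
SELECT index_name, visibility
FROM   user_indexes
WHERE  index_name = 'IND1';
```

This way the rest of the application keeps ignoring the index while one session evaluates it.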
Query Result Cache
• Cache the result of a query or query block for future reuse
• Cache is used across statements and sessions unless it is stale
• Benefits:
  – Scalability
  – Reduction of memory usage
Query Result Cache The Query Result Cache enables explicit caching of query result sets and query fragments in database memory. The cached result set data is transparently kept consistent with any changes done on the server side. Applications see improved performance for queries which have a cache hit and avoid round trips to the server for the sending of the query and fetching of the results. A separate shared memory pool is now used for storing and retrieving the cached results. Query retrieval from the query result cache is faster than re-running the query. Frequently executed queries see significant performance improvements when using the query result cache. The query results stored in the cache become invalid when data in the database objects being accessed by the query is modified. Note: Each node in a RAC configuration has a private result cache. The decision to use the result cache feature is a cluster wide decision. For more information on using result caches in a RAC configuration, please see the Oracle Database 11g Real Application Clusters documentation.
Setting up Query Result Cache
• Set at database level using the RESULT_CACHE_MODE initialization parameter. Values are:
  – AUTO: The optimizer determines which results are to be stored in the cache based on repetitive executions
  – MANUAL: Use the RESULT_CACHE hint to specify results to be stored in the cache
  – FORCE: All results are stored in the cache
• Set at table level: ALTER TABLE employees RESULT_CACHE (MODE AUTO);
Setting up Query Result Cache
The query optimizer manages the result cache mechanism depending on the setting of the RESULT_CACHE_MODE parameter in the initialization parameter file. You can use this parameter to determine whether or not the optimizer automatically sends the results of queries to the result cache. You can set the RESULT_CACHE_MODE parameter at the system, session, and table level. The possible parameter values are AUTO, MANUAL, and FORCE. When set to AUTO, the optimizer determines which results are to be stored in the cache based on repetitive executions. When set to MANUAL (the default), you must specify, by using the RESULT_CACHE hint, that a particular result is to be stored in the cache. When set to FORCE, all results are stored in the cache. The Query Result Cache can also be set at the table level using CREATE or ALTER statements. The syntax follows:
CREATE/ALTER TABLE [<schema>.]… [RESULT_CACHE (MODE {AUTO|MANUAL|FORCE})]
Setting the result cache mode at the table level ensures that whenever a query retrieves data from this table, the result is automatically stored in the result cache.
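As a sketch of the non-default levels (the SALES table name is illustrative, not part of the course schema):

```sql
-- Cache all query results for the current session only
ALTER SESSION SET RESULT_CACHE_MODE = FORCE;

-- Or annotate a single table so queries against it are cached,
-- using the table-level syntax shown in the notes above
ALTER TABLE sales RESULT_CACHE (MODE FORCE);
```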
Using the RESULT_CACHE Hint

SELECT /*+ RESULT_CACHE */ department_id, AVG(salary)
FROM employees
GROUP BY department_id;

(The execution plan for this statement includes a RESULT CACHE operation.)

SELECT /*+ NO_RESULT_CACHE */ department_id, AVG(salary)
FROM employees
GROUP BY department_id;
Using the RESULT_CACHE Hint
If you wish to use the query result cache and the RESULT_CACHE_MODE initialization parameter is set to MANUAL, you must explicitly specify the RESULT_CACHE hint in your query. This introduces the ResultCache operator into the execution plan for the query. When you execute the query, the ResultCache operator looks up the result cache memory to check whether the result for the query already exists in the cache. If it exists, the result is retrieved directly out of the cache. If it does not yet exist in the cache, the query is executed, and the result is returned as output and also stored in the result cache memory. If the RESULT_CACHE_MODE initialization parameter is set to AUTO or FORCE, and you do not wish to store the result of a query in the result cache, you must use the NO_RESULT_CACHE hint in your query. For example, when the RESULT_CACHE_MODE value equals FORCE in the initialization parameter file and you do not wish to use the result cache for the EMPLOYEES table, use the NO_RESULT_CACHE hint. Note: Use of the [NO_]RESULT_CACHE hint takes precedence over the parameter settings.
Managing the Query Result Cache
The following initialization parameters can be used to manage the Query result cache • RESULT_CACHE_MAX_SIZE parameter – Sets the memory allocated to the result cache – Result cache is disabled if you set the value to 0.
• RESULT_CACHE_MAX_RESULT – Sets maximum cache memory for a single result – Defaults to 5%
• RESULT_CACHE_REMOTE_EXPIRATION – Sets the expiry time for query result cache – Defaults to 0
Managing the Query Result Cache
You can alter various parameter settings in the initialization parameter file to manage the query result cache of your database. By default, the database allocates memory for the result cache in the Shared Pool inside the SGA. The memory size allocated to the result cache depends on the memory size of the SGA as well as the memory management system.
• You can change the memory allocated to the result cache by setting the RESULT_CACHE_MAX_SIZE parameter. The result cache is disabled if you set the value to 0.
• Use the RESULT_CACHE_MAX_RESULT parameter to specify the maximum amount of cache memory that can be used by any single result. The default value is 5%, but you can specify any percent value between 1 and 100. This parameter can be implemented at the system and session level.
• Use the RESULT_CACHE_REMOTE_EXPIRATION parameter to specify the time (in number of minutes) for which a result that accesses remote database objects remains valid. The default value is 0.
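The three parameters can be sketched as follows (the sizes and percentages are illustrative values, not recommendations; check your SGA sizing first):

```sql
-- Reserve 32 MB for the result cache
ALTER SYSTEM SET RESULT_CACHE_MAX_SIZE = 32M SCOPE = BOTH;

-- No single result may occupy more than 10% of the cache
ALTER SYSTEM SET RESULT_CACHE_MAX_RESULT = 10;

-- Results referencing remote objects stay valid for 15 minutes
ALTER SYSTEM SET RESULT_CACHE_REMOTE_EXPIRATION = 15;
```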
Using the DBMS_RESULT_CACHE Package
Use the DBMS_RESULT_CACHE package to:
• Manage memory allocation for the query result cache
• View the status of the cache
• Retrieve statistics on the cache memory usage (create the report):
EXECUTE DBMS_RESULT_CACHE.MEMORY_REPORT
• Remove all existing results and clear cache memory (flush the cache)
Using the DBMS_RESULT_CACHE Package The DBMS_RESULT_CACHE package provides statistics, information, and operators that enable you to manage memory allocation for the query result cache. You can use the DBMS_RESULT_CACHE package to perform various operations such as viewing the status of the cache, retrieving statistics on the cache memory usage, and flushing the cache. For example, to view the memory allocation statistics, use the following SQL procedure: SQL> set serveroutput on SQL> execute dbms_result_cache.memory_report
The output of this command will be similar to the following:

Result Cache Memory Report
[Parameters]
Block Size          = 1024 bytes
Maximum Cache Size  = 720896 bytes (704 blocks)
Maximum Result Size = 35840 bytes (35 blocks)
[Memory]
Total Memory = 46284 bytes [0.036% of the Shared Pool]
... Fixed Memory = 10640 bytes [0.008% of the Shared Pool]
... State Object Pool = 2852 bytes [0.002% of the Shared Pool]
... Cache Memory = 32792 bytes (32 blocks) [0.025% of the Shared Pool]
....... Unused Memory = 30 blocks
....... Used Memory = 2 blocks
........... Dependencies = 1 blocks
........... Results = 1 blocks
............... SQL = 1 blocks
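Two other commonly used entry points of the package are STATUS (a function reporting whether the cache is enabled) and FLUSH (which empties the cache), sketched here:

```sql
-- Report the current state of the result cache (for example, ENABLED)
SELECT DBMS_RESULT_CACHE.STATUS FROM dual;

-- Discard all cached results and release the cache memory
EXECUTE DBMS_RESULT_CACHE.FLUSH;
```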
Viewing Result Cache Dictionary Information
The following views provide information about the query result cache:
• (G)V$RESULT_CACHE_STATISTICS: Lists the various cache settings and memory usage statistics.
• (G)V$RESULT_CACHE_MEMORY: Lists all the memory blocks and the corresponding statistics.
• (G)V$RESULT_CACHE_OBJECTS: Lists all the objects (cached results and dependencies) along with their attributes.
• (G)V$RESULT_CACHE_DEPENDENCY: Lists the dependency details between the cached results and dependencies.
Viewing Result Cache Dictionary Information Note: For further information please see the Oracle Database Reference Guide.
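As a sketch, the statistics view can be used to gauge how often the cache is actually hit (the statistic names quoted here are the ones exposed by V$RESULT_CACHE_STATISTICS):

```sql
-- Compare cache hits ("Find Count") with results built
-- from scratch ("Create Count Success")
SELECT name, value
FROM   v$result_cache_statistics
WHERE  name IN ('Find Count', 'Create Count Success');
```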
OCI Client Query Cache
• Extends server-side query caching to client side memory • Ensures better performance by eliminating round trips to the server • Leverages client-side memory • Improves server scalability by saving server CPU resources • Result cache is automatically refreshed if the result set is changed on the server • Particularly good for lookup tables
OCI Client Query Cache
You can enable caching of query result sets in client memory with the OCI Client Query Cache in Oracle Database 11g. The cached result set data is transparently kept consistent with any changes done on the server side. Applications leveraging this feature see improved performance for queries that have a cache hit. Additionally, a query serviced by the cache avoids round trips to the server for sending the query and fetching the results. Server CPU that would have been consumed processing the query is saved, thus improving server scalability. Before using the client-side query cache, determine whether your application will benefit from this feature. Client-side caching is useful when you have applications that produce repeatable result sets, small result sets, static result sets, or frequently executed queries.
Using Client Side Query Cache You can use client-side query caching by: • Setting initialization parameters – CLIENT_RESULT_CACHE_SIZE – CLIENT_RESULT_CACHE_LAG
Using Client Side Query Cache
The following two parameters can be set in your initialization parameter file:
• CLIENT_RESULT_CACHE_SIZE: A nonzero value enables the client result cache. This is the maximum size of the client per-process result set cache in bytes. All OCI client processes get this maximum size; it can be overridden by the OCI_RESULT_CACHE_MAX_SIZE parameter.
• CLIENT_RESULT_CACHE_LAG: Maximum time (in milliseconds) since the last round trip to the server before the OCI client query execute makes a round trip to get any database changes related to the queries cached on the client.
A client configuration file is optional and overrides the cache parameters set in the server initialization parameter file. Parameter values can be part of a sqlnet.ora file. When these parameter values are specified, OCI client caching is enabled for OCI client processes using the configuration file:
• OCI_RESULT_CACHE_MAX_RSET_SIZE/ROWS: Maximum size of any result set in bytes/rows in the per-process query cache.
OCI applications can use application hints to force result cache storage. This overrides the deployment-time settings of ALTER TABLE/ALTER VIEW. The application hints can be:
• SQL hints /*+ result_cache */ and /*+ no_result_cache */
• OCIStmtExecute() modes. These override both SQL hints and ALTER TABLE/ALTER VIEW annotations.
Note: To use this feature, your applications must be relinked with release 11.1 or higher client libraries and be connected to a release 11.1 or higher server.
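A minimal server-side sketch, assuming the instance uses an SPFILE (both parameters are static, so a restart is required; the values are illustrative):

```sql
-- Enable a 10 MB per-process client result cache
ALTER SYSTEM SET CLIENT_RESULT_CACHE_SIZE = 10M SCOPE = SPFILE;

-- Allow cached client results to lag the server by at most 3 seconds
ALTER SYSTEM SET CLIENT_RESULT_CACHE_LAG = 3000 SCOPE = SPFILE;
```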
PL/SQL Function Cache • Stores function results in cache, making them available to other sessions. • Uses the Query Result Cache
PL/SQL Function Cache
Starting in Oracle Database 11g, you can use the PL/SQL cross-session function result caching mechanism. This caching mechanism provides you with a language-supported and system-managed means for storing the results of PL/SQL functions in the shared global area (SGA), which is available to every session that runs your application. The caching mechanism is both efficient and easy to use, and it relieves you of the burden of designing and developing your own caches and cache-management policies. Oracle Database 11g provides the ability to mark a PL/SQL function to indicate that its result should be cached to allow lookup, rather than recalculation, on the next access with the same parameter values. This function result cache saves significant space and time. This is done transparently using the input parameters as the lookup key. The cache is system-wide, so all distinct sessions invoking the function benefit. If the result for a given set of parameters changes, you can use constructs to invalidate the cache entry so that it will be properly recalculated on the next access. This feature is especially useful when the function returns a value that is calculated from data selected from schema-level tables. For such uses, the invalidation constructs are simple and declarative. You can include syntax in the source text of a PL/SQL function to request that its results be cached and, to ensure correctness, that the cache be purged when any of a list of tables experiences DML. When a particular invocation of the result-cached function is a cache hit, the function body is not executed; instead, the cached value is returned immediately.
Using PL/SQL Function Cache • Include the RESULT_CACHE option in the function declaration section of a package or function definition • Optionally include the RELIES_ON clause to specify any tables or views on which the function results depend CREATE OR REPLACE FUNCTION productName (prod_id NUMBER, lang_id VARCHAR2) RETURN NVARCHAR2 RESULT_CACHE RELIES_ON (product_descriptions) IS result VARCHAR2(50); BEGIN SELECT translated_name INTO result FROM product_descriptions WHERE product_id = prod_id AND language_id = lang_id; RETURN result; END;
Using PL/SQL Function Cache
In the example shown above, the function productName has result caching enabled through the RESULT_CACHE option in the function declaration. In this example, the RELIES_ON clause identifies the PRODUCT_DESCRIPTIONS table, on which the function results depend. Usage notes:
• If function execution results in an unhandled exception, the exception result is not stored in the cache.
• The body of a result-cached function executes:
  - The first time a session on this database instance calls the function with these parameter values
  - When the cached result for these parameter values is invalid. A cached result becomes invalid when any database object specified in the RELIES_ON clause of the function definition changes.
  - When the cached result for these parameter values has aged out. If the system needs memory, it might discard the oldest cached values.
  - When the function bypasses the cache
• The function should not have any side effects.
• The function should not depend on session-specific settings.
• The function should not depend on session-specific application contexts.
PL/SQL Function Cache Considerations
PL/SQL Function Cache cannot be used when: • The function is defined in a module that has invoker's rights or in an anonymous block. • The function is a pipelined table function. • The function has OUT or IN OUT parameters. • The function has IN parameter of the following types: BLOB, CLOB, NCLOB, REF CURSOR, collection, object, or record. • The function's return type is: BLOB, CLOB, NCLOB, REF CURSOR, object, record or collection with one of the preceding unsupported return types.
Bitmap join index for IOT Oracle Database 11g extends bitmap join index support to Index Organized Tables (IOTs). A join index is an index on table T1 built for a column of a different table T2 via a join. Therefore, the index provides access to rows of T1 based on columns of the table T2. Join indexes can be used to avoid actual joins of tables or can reduce the volume of data to be joined by performing restrictions in advance. Bitmap join indexes are space-efficient and can speed up queries via bitwise operations. As in the case of Bitmap Indexes, these IOTs have an associated Mapping Table. Since IOT rows may change their position due to DML or index reorganization operations, the bitmap join index cannot rely on the physical row identifiers of the IOT rows. Instead the row identifier of the mapping table associated with the IOT will be used.
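The mechanism can be sketched as follows. The ORDERS_IOT and CUSTOMERS tables, their columns, and the index name are all hypothetical; the key point is the MAPPING TABLE clause on the IOT and the FROM/WHERE join clauses of the bitmap index, with the join key being the primary key of the dimension table:

```sql
-- IOT created with a mapping table, so bitmap (join) indexes can
-- reference stable mapping-table rowids instead of moving IOT rows
CREATE TABLE orders_iot (
  order_id NUMBER PRIMARY KEY,
  cust_id  NUMBER,
  amount   NUMBER
) ORGANIZATION INDEX MAPPING TABLE;

-- Bitmap join index on the IOT, keyed on a column of the joined table;
-- CUST_ID must be the primary (or unique) key of CUSTOMERS
CREATE BITMAP INDEX orders_by_region_bjx
  ON orders_iot (c.region)
  FROM orders_iot o, customers c
  WHERE o.cust_id = c.cust_id;
```

A query restricting orders by customer region can then use the bitmap join index instead of performing the join at run time.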
Automatic “Native” Compilation
• 100+% faster for pure PL/SQL or Java code • 10% – 30% faster for typical transactions with SQL • PL/SQL – Just one parameter - On / Off – No need for C compiler – No file system DLLs
• Java
  – Just one parameter – On / Off
  – JIT "on the fly" compilation
  – Transparent to user (asynchronous, in background)
  – Code stored to avoid recompilations
Automatic "Native" Compilation
PL/SQL Native Compilation: The Oracle executable generates native dynamically linked libraries (DLLs) directly from the PL/SQL source code without needing a third-party C compiler. In Oracle Database 11g, the DLL is stored canonically in the database catalog; when it is needed, the Oracle executable loads it directly from the catalog without needing to stage it first on the file system. The execution speed of natively compiled PL/SQL programs will never be slower than in Oracle Database 10g and may be improved in some cases by as much as an order of magnitude. PL/SQL native compilation is automatically available with Oracle Database 11g. No third-party software (neither a C compiler nor a DLL loader) is needed.
Java Native Compilation: Enabled by default and similar to the JDK JIT, this feature compiles Java in the database natively and transparently without the need of a C compiler. The JIT runs as an independent session in a dedicated Oracle server process. There is at most one compiler session per database instance; it is Oracle RAC-aware and amortized over all Java sessions. This feature brings two major benefits to Java in the database: increased performance of pure Java execution in the database, and ease of use, because it is activated transparently, without an explicit command, when Java is executed in the database. Because this feature removes the need for a C compiler, there are also cost and licensing savings.
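The "just one parameter" for PL/SQL is PLSQL_CODE_TYPE; a minimal sketch (the procedure name MY_PROC is illustrative):

```sql
-- Compile subsequently created PL/SQL units natively in this session
ALTER SESSION SET PLSQL_CODE_TYPE = NATIVE;

-- Recompile an existing unit natively, keeping its other settings
ALTER PROCEDURE my_proc COMPILE PLSQL_CODE_TYPE = NATIVE REUSE SETTINGS;

-- Check how stored units were compiled (NATIVE or INTERPRETED)
SELECT name, type, plsql_code_type
FROM   user_plsql_object_settings;
```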
Adaptive Cursor Sharing

SELECT … FROM … WHERE Job = :B1
Adaptive Cursor Sharing
In many cases, one optimizer plan may not be appropriate for all bind values. In Oracle Database 11g, cursor sharing has been enhanced so that the optimizer peeks at bind values during plan selection and takes ranges of safe values into account when evaluating cursor shareability. This enables you to leverage cursor sharing more commonly while preserving bind-variable-specific plan optimizations for shared statements. In the example above, assume that a query retrieves information from EMPLOYEES based on a bind variable. In case 1, if the bind variable value at hard parse is "CLERK", five out of six records are selected, so the execution plan is a full table scan. In case 2, if "VP" is the bind variable value at hard parse, one out of the six records is selected, and the execution plan may be an index lookup. Therefore, instead of the execution plan being reused for each value of the bind variable, the optimizer looks at the selectivity of the data and determines a different execution plan to retrieve the data. The benefits of adaptive cursor sharing are:
• The optimizer shares the plan when bind variable values are "equivalent."
• Plans are marked with a selectivity range. If current bind values fall within the range, they use the same plan.
• The optimizer creates a new plan if bind variable values are not equivalent.
• The optimizer generates a new plan for each selectivity range.
• The optimizer avoids expensive table scans and index searches based on selectivity criteria, thus speeding up data retrieval.
Adaptive Cursor Sharing Views
The following views provide information about Adaptive Cursor Sharing usage:
• V$SQL: Two new columns show whether a cursor is bind-sensitive or bind-aware.
• V$SQL_CS_HISTOGRAM: Shows the distribution of the execution count across the execution history histogram.
• V$SQL_CS_SELECTIVITY: Shows the selectivity ranges stored for every predicate containing a bind variable and whose selectivity is used in the cursor sharing checks.
• V$SQL_CS_STATISTICS: Shows execution statistics of a cursor using different bind sets.
Adaptive Cursor Sharing Views
Determining whether a query is bind-aware is handled automatically, without any user input. However, information about what is going on is exposed through V$ views so that the DBA can diagnose any problems. Two new columns have been added to V$SQL:
• IS_BIND_SENSITIVE: Indicates whether a cursor is bind-sensitive, value YES | NO. A query for which the optimizer peeked at bind variable values when computing predicate selectivities, and where a change in a bind variable value may lead to a different plan, is called bind-sensitive.
• IS_BIND_AWARE: Indicates whether a cursor is bind-aware, value YES | NO. A cursor in the cursor cache that has been marked to use extended cursor sharing is called bind-aware.
V$SQL_CS_HISTOGRAM: Shows the distribution of the execution count across the three-bucket execution history histogram.
V$SQL_CS_SELECTIVITY: Shows the selectivity ranges stored in a cursor for every predicate containing a bind variable and whose selectivity is used in the cursor sharing checks. It contains the text of the predicates and the selectivity range low and high values.
V$SQL_CS_STATISTICS: Use this view to find out whether executing the cursor with a different bind set, other than the ones used to build it, hinders performance. This view is populated with the information stored for the peeked bind set, and contains information for other bind sets only when running under diagnostic mode. The PEEKED column contains YES if the bind set was used to build the cursor, and NO otherwise.
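A diagnostic sketch: after running the same statement several times with bind values of very different selectivity, the extra child cursors become visible in V$SQL (the `/* acs_demo */` marker is an illustrative tag, not required syntax):

```sql
-- One row per child cursor; a bind-aware statement typically shows
-- several children with IS_BIND_SENSITIVE = 'Y'
SELECT sql_id, child_number, is_bind_sensitive, is_bind_aware, executions
FROM   v$sql
WHERE  sql_text LIKE 'SELECT /* acs_demo */%';
```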
Temporary Tablespace Shrink • Sort segment extents are managed in memory once physically allocated. • This method can be an issue after big sorts are done. • To release physical space from your disks, you can shrink temporary tablespaces: – Locally-managed temporary tablespaces – Online operation CREATE TEMPORARY TABLESPACE temp TEMPFILE 'tbs_temp.dbf' SIZE 600m REUSE AUTOEXTEND ON MAXSIZE UNLIMITED EXTENT MANAGEMENT LOCAL UNIFORM SIZE 1m; ALTER TABLESPACE temp SHRINK SPACE [KEEP 200m]; ALTER TABLESPACE temp SHRINK TEMPFILE 'tbs_temp.dbf';
Temporary Tablespace Shrink
Huge sorting operations can cause a temporary tablespace to grow considerably. For performance reasons, once a sort extent is physically allocated, it is then managed in memory to avoid physical deallocation later. As a result, you can end up with a huge tempfile that stays on disk until it is dropped. One possible workaround is to create a new, smaller temporary tablespace, set it as the default temporary tablespace for users, and then drop the old tablespace. The disadvantage is that this procedure requires that no active sort operations are running at the time the old temporary tablespace is dropped. Starting with Oracle Database 11g Release 1, you can use the ALTER TABLESPACE SHRINK SPACE command to shrink a temporary tablespace, or the ALTER TABLESPACE SHRINK TEMPFILE command to shrink one tempfile. For both commands, you can specify the optional KEEP clause, which defines the lower bound that the tablespace/tempfile can be shrunk to. If you omit the KEEP clause, the database attempts to shrink the tablespace/tempfile as much as possible (to the total space of all currently used extents), as long as other storage attributes are satisfied. This operation is done online. However, if some currently used extents are allocated above the shrink estimation, the system waits until they are released to finish the shrink operation.
Note: The ALTER DATABASE TEMPFILE RESIZE command generally fails with ORA-03297 because the tempfile contains used data beyond the requested RESIZE value. As opposed to ALTER TABLESPACE SHRINK, the ALTER DATABASE command does not try to deallocate sort extents once they are allocated.
DBA_TEMP_FREE_SPACE
• Lists temporary space usage information.
• Central point for temporary tablespace space usage

Column name: Description
TABLESPACE_NAME: Name of the tablespace
TABLESPACE_SIZE: Total size of the tablespace, in bytes
ALLOCATED_SPACE: Total allocated space, in bytes, including space that is currently allocated and used and space that is currently allocated and available for reuse
FREE_SPACE: Total free space available, in bytes, including space that is currently allocated and available for reuse and space that is currently unallocated
DBA_TEMP_FREE_SPACE This dictionary view reports temporary space usage information at tablespace level. The information is derived from various existing views.
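A simple sketch of querying the view, converting the byte columns to megabytes:

```sql
SELECT tablespace_name,
       tablespace_size / 1024 / 1024 AS size_mb,
       allocated_space / 1024 / 1024 AS allocated_mb,
       free_space      / 1024 / 1024 AS free_mb
FROM   dba_temp_free_space;
```

A large gap between ALLOCATED_SPACE and the space actually in use is the typical signal that a temporary tablespace shrink would release disk space.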
Tablespace Option for Creating Temporary Table
• Specify which temporary tablespace to use for your global temporary tables. • Decide proper temporary extent size. CREATE TEMPORARY TABLESPACE temp TEMPFILE 'tbs_temp.dbf' SIZE 600m REUSE AUTOEXTEND ON MAXSIZE UNLIMITED EXTENT MANAGEMENT LOCAL UNIFORM SIZE 1m; CREATE GLOBAL TEMPORARY TABLE temp_table (c varchar2(10)) ON COMMIT DELETE ROWS TABLESPACE temp;
Tablespace Option for Creating Temporary Table
Starting with Oracle Database 11g Release 1, you can specify a TABLESPACE clause when you create a global temporary table. If no tablespace is specified, the global temporary table is created in your default temporary tablespace. In addition, indexes created on the temporary table are created in the same temporary tablespace as the temporary table. This option allows you to choose a proper extent size that reflects your sort-specific usage, especially when you have several types of temporary space usage.
Real-Time Query and Physical Standby Databases
In previous database releases, when you opened a physical standby database read-only, redo application stopped. Oracle Database 11g allows you to use a physical standby database for queries while redo is applied to it. This enables you to use a physical standby database for disaster recovery and to offload work from the primary database during normal operation. In addition, this feature provides a loosely coupled read-write clustering mechanism for OLTP workloads when configured as follows:
• Primary database: Recipient of all update traffic
• Several readable standby databases: Used to distribute the query workload
The physical standby database can be opened read-only only if all the files have been recovered up to the same system change number (SCN); otherwise, the open fails.
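The sequence on the standby can be sketched as follows (run on the physical standby; this assumes standby redo logs are configured so that real-time apply with USING CURRENT LOGFILE is possible):

```sql
-- Stop redo apply so the standby can be opened
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;

-- Open the standby for queries
ALTER DATABASE OPEN READ ONLY;

-- Restart real-time redo apply; the database stays open for queries
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE
  USING CURRENT LOGFILE DISCONNECT FROM SESSION;
```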
Summary
In this lesson, you should have learned how to: • Describe and use the enhanced online table redefinition and materialized views • Describe finer grained dependency management • Use enhanced DDL – Apply the improved table lock mechanism – Create and use invisible indexes