MCITP Developer Microsoft® SQL Server™ 2005 Database Solutions Design Study Guide (70-441)
Victor Isakov
Wiley Publishing, Inc.
Acquisitions and Development Editor: Maureen Adams
Technical Editor: Marilyn Miller-White
Production Editor: Rachel Gunn
Copy Editor: Kim Wimpsett
Production Manager: Tim Tate
Vice President and Executive Group Publisher: Richard Swadley
Vice President and Executive Publisher: Joseph B. Wikert
Vice President and Publisher: Neil Edde
Permissions Editor: Shannon Walters
Media Development Specialist: Kit Malone
Book Designers: Judy Fung, Bill Gibson
Compositor: Craig Woods, Happenstance Type-O-Rama
Proofreader: Nancy Riddiough
Indexer: Nancy Guenther
Cover Designer: Ryan Sneed

Copyright © 2006 by Wiley Publishing, Inc., Indianapolis, Indiana
Published simultaneously in Canada
ISBN-13: 978-0-470-04052-2
ISBN-10: 0-470-04052-1

No part of this publication may be reproduced, stored in a retrieval system or transmitted in any form or by any means, electronic, mechanical, photocopying, recording, scanning or otherwise, except as permitted under Sections 107 or 108 of the 1976 United States Copyright Act, without either the prior written permission of the Publisher, or authorization through payment of the appropriate per-copy fee to the Copyright Clearance Center, 222 Rosewood Drive, Danvers, MA 01923, (978) 750-8400, fax (978) 646-8600. Requests to the Publisher for permission should be addressed to the Legal Department, Wiley Publishing, Inc., 10475 Crosspoint Blvd., Indianapolis, IN 46256, (317) 572-3447, fax (317) 572-4355, or online at http://www.wiley.com/go/permissions.

Limit of Liability/Disclaimer of Warranty: The publisher and the author make no representations or warranties with respect to the accuracy or completeness of the contents of this work and specifically disclaim all warranties, including without limitation warranties of fitness for a particular purpose. No warranty may be created or extended by sales or promotional materials. The advice and strategies contained herein may not be suitable for every situation. This work is sold with the understanding that the publisher is not engaged in rendering legal, accounting, or other professional services. If professional assistance is required, the services of a competent professional person should be sought. Neither the publisher nor the author shall be liable for damages arising herefrom. The fact that an organization or Website is referred to in this work as a citation and/or a potential source of further information does not mean that the author or the publisher endorses the information the organization or Website may provide or recommendations it may make. Further, readers should be aware that Internet Websites listed in this work may have changed or disappeared between when this work was written and when it is read.

For general information on our other products and services or to obtain technical support, please contact our Customer Care Department within the U.S. at (800) 762-2974, outside the U.S. at (317) 572-3993 or fax (317) 572-4002.

Wiley also publishes its books in a variety of electronic formats. Some content that appears in print may not be available in electronic books.

Library of Congress Cataloging-in-Publication Data is available from the publisher.

TRADEMARKS: Wiley, the Wiley logo, and the Sybex logo are trademarks or registered trademarks of John Wiley & Sons, Inc. and/or its affiliates, in the United States and other countries, and may not be used without written permission. Microsoft and SQL Server are trademarks or registered trademarks of Microsoft Corporation in the United States and/or other countries. All other trademarks are the property of their respective owners. Wiley Publishing, Inc., is not associated with any product or vendor mentioned in this book.

10 9 8 7 6 5 4 3 2 1
To Our Valued Readers:

Thank you for looking to Sybex for your Microsoft SQL Server 2005 certification exam prep needs. The Sybex team at Wiley is proud of its reputation for providing certification candidates with the practical knowledge and skills needed to succeed in the highly competitive IT workplace. Just as Microsoft Learning is committed to establishing measurable standards for certifying individuals who design and maintain SQL Server 2005 systems, Sybex is committed to providing those individuals with the skills needed to meet those standards.

The author and editors have worked hard to ensure that the Study Guide you hold in your hands is comprehensive, in-depth, and pedagogically sound. We're confident that this book will exceed the demanding standards of the certification marketplace and help you, the SQL Server 2005 certification candidate, succeed in your endeavors.

As always, your feedback is important to us. If you believe you've identified an error in the book, please visit the Customer Support section of the Wiley web site. Or if you have general comments or suggestions, feel free to drop me a line directly at
[email protected]. At Sybex we’re continually striving to meet the needs of individuals preparing for certification exams. Good luck in pursuit of your SQL Server 2005 certification!
Neil Edde Vice President & Publisher Sybex, an Imprint of John Wiley & Sons
I’d like to dedicate this book to Paula Verkhivker, for your spirit, courage, energy and “feistiness.” And your skill, of course. ;oP
Acknowledgments

I hate writing books. But I am passionate about training, especially SQL Server training, having trained thousands of people around the globe over the past decade or so. So, first and foremost I thank all of those students I have had the privilege of teaching for their enthusiasm and particularly for their endless questions, which have tested my knowledge (and patience at times, I'll admit) to the fullest.

I had the privilege of being involved in different capacities with Microsoft Learning in Redmond in the development of both the SQL Server 2005 certification and Microsoft Official Curriculum (MOC) instructor-led training (ILT) and e-learning courseware. I saw firsthand the dedication and the hours put in by members of the team. Thanks go to Annie, Barbara, Ben, Chris, Colin, David, Ed, Kareena, Karl, Katalin, Ken, Mindy, Priya, Rebecca, and Sangeeta. Drinks are on me at the Taphouse Grill in Bellevue when I am over next!

Unlike a lot of IT professionals I know, I certainly do not know everything about SQL Server, especially with the broad scope of this particular book, which touches on pretty much every technology that makes up SQL Server 2005. Consequently, I thank the following MCTs for their contributions:

Ted Malone Ted—MCT, MCSE, MCDBA, MCTS: SQL Server, MCTS: .NET Framework 2.0 Windows Applications, MCPD: Windows Developer, MCITP: Database Developer, MCITP: Database Administrator—has been working with Microsoft SQL Server since the days of OS/2 and has developed database systems ranging from small to extremely large. Ted is currently the principal software architect for Configuresoft, a Colorado Springs, Colorado, software development firm that specializes in delivering enterprise management tools for Fortune 1000 corporations. He is based in Colorado.

Matthew Roche Matthew—OCP, MCT, MCSD, MCDBA, MCSE, MCSA, MCPD: EAD, MCTS: SQL Server 2005, MCTS: .NET Framework 2.0 Distributed Applications, MCTS: Web Developer, MCTS: .NET Framework 2.0 Windows Applications, MCTS: Windows Developer, MCTS: .NET Framework 2.0 Web Applications, MCTS: Enterprise Application Developer, MCITP: Database Developer, MCITP: Database Administrator—is the founder and chief software architect of Integral Thought & Memory. He is a software architecture consultant specializing in Microsoft SQL Server and Microsoft .NET distributed applications. Matthew has more than 10 years' training experience and 20 years' programming experience, including the current generation of Microsoft .NET tools and languages. He has also contributed to the development of numerous Microsoft Learning projects. He is based in Syracuse, New York.

Christian Lefter Christian—MCT, MCSA, MCAD, MCSD .NET, MCTS: SQL Server 2005, MCITP: Database Developer, MCITP: Database Administrator—is a former SQL Server developer, database administrator, and trainer, and he is currently the CEO of MicroTraining, a consulting and training company. In his spare time, he is a technical reviewer, an author, and the leader of two user groups (ITBoard and Romanian SQL Server User Group). He is based in Bucharest, Romania.
Aaron Johal Aaron—BSc (Information Technology), MCT, MCSE, MCAD, MCSD .NET, MCTS: SQL Server 2005, MCITP: Database Developer, MCITP: Database Administrator—is a training consultant with QA IQ who has delivered a considerable part of the SQL Server curriculum and related techniques and technologies for all versions of the product. When he has time, he likes to help the wider SQL Server community and has delivered a number of presentations at conferences such as SQL PASS. He is based in London, England.

Steve Jones Steve Jones has been working with SQL Server for more than a decade, starting with v4.2 on OS/2, and has enjoyed the new features and capabilities of every version since. After working as a DBA and developer for a variety of companies, Steve founded SQLServerCentral.com along with Brian Knight and Andy Warren in 2001. SQLServerCentral.com has grown into a wonderful SQL Server community that provides daily articles and questions on all aspects of SQL Server to over 300,000 members. Starting in 2004, Steve became the full-time editor of the community and ensures it continues to evolve into the best resource possible for SQL Server professionals. Over the last decade, Steve has written more than 200 articles about SQL Server for SQLServerCentral.com, the SQL Server Standard magazine, SQL Server Magazine, and Database Journal. Steve has spoken at the PASS Summits, where SQLServerCentral.com sponsors an opening reception every year. He is the author of a prior book on SQL Server 2000 as well as MCITP Developer: Microsoft SQL Server 2005 Database Solutions Design (Sybex, 2006).

If you ever have the chance to sit in on a course run by any of these esteemed gentlemen and SQL Server scholars, I highly recommend that you do so!

On a more personal note, I also thank Natasha Fiodoroff, Eugene Deefholts, Tim Haslett, Kevin Dunn, Steve Commins, James Squire, and Chimay Pères Trappistes for their inspiration in writing this book.

Welcome to the world, Sophie Emma Yalisheff! You have some pretty funky parents. Make sure Victor, Veronica, and little Alex look after you. I promise to as well. Welcome also to "Bert" Deefholts! Eugene and (especially) Kate are going to make fabulous parents. Hope to celebrate your birth and England's World Cup victory on July 11th! (Eugene said that, Kate, not me!)

To Iris Friedrichs: in all of my travels around the world, you have proven to be one of the most beautiful souls I have encountered. I have yet to meet a more generous and empathetic person. I hope your schatzemaus is looking after you!

To Marion Siekierski, wherever this book finds you:

This Very Body the Buddha
This Very Place the Lotus Paradise

And finally, and most important, to Marc, Larissa, Natalie, and Alex. There is no need for words! All the very best!
About the Author

Victor Isakov—LLB / BSc (Computer Science), CTT, MCT, MCSE, MCDBA, MCTS: SQL Server 2005, MCITP: Database Developer, MCITP: Database Administrator—is a database architect and Microsoft Certified Trainer based in Sydney, Australia. Although he has a strong operating system and networking background, he specializes in SQL Server, providing consulting and training services to various organizations in the public, private, and NGO sectors globally. He runs the SQL Server User Group in Sydney and has a website dedicated to SQL Server, and training in particular, at www.SQLServerSessions.com. Victor has been involved in different capacities with, and regularly presents at, various international events and conferences such as Code Camp, Microsoft TechEd, SQL Connections, and SQL PASS. He has written a number of books about SQL Server and worked closely with Microsoft to develop the new generation of SQL Server 2005 certification and Microsoft Official Curriculum for both instructor-led training and e-learning courses. He also writes articles regularly for www.devx.com and searchsqlserver.com.

Victor loves to travel and tries to spend as much time as he can hiking in remote parts of the world and exploring new places and cultures. He is an avid scuba diver and loves wreck diving and diving with the larger pelagics, namely, sharks, whales, and especially whale sharks at Ningaloo Reef in Western Australia or great white sharks off South Australia. He is passionate about shark conservation and platypode. Back home he enjoys sailing in the Sydney Harbour, beers, the annual Big Day Out in Sydney, beers, live music, beers, opera at the Opera House (Iris, you have a permanent invitation, remember?), and many of the great restaurants in Sydney.

Victor specializes in designing, refactoring, and documenting database solutions; providing training; and performance tuning existing systems. He can be contacted via [email protected].
Contents at a Glance

Introduction                                                          xxi
Assessment Test                                                       xxx
Chapter 1   Designing a Database Solution                               1
Chapter 2   Designing Database Objects                                 47
Chapter 3   Performance Tuning a Database Solution                    131
Chapter 4   Securing a Database Solution                              183
Chapter 5   Designing Database Testing and Code Management Procedures 263
Chapter 6   Designing a Web Services Solution                         301
Chapter 7   Designing Messaging Services for a Database Solution      347
Chapter 8   Designing a Reporting Services Solution                   417
Chapter 9   Designing Data Integration Solutions                      477
Chapter 10  Case Studies                                              555
Glossary                                                              579
Index                                                                 593
Contents

Introduction xxi
Assessment Test xxx

Chapter 1  Designing a Database Solution  1
  Designing a Logical Database  2
  Understanding Normalization  3
  Understanding Denormalization  6
  Designing the Physical Entities  8
  Designing Attributes  9
  Designing Entities  19
  Designing Entity Relationships  28
  Summary  31
  Exam Essentials  32
  Review Questions  33
  Answers to Review Questions  44

Chapter 2  Designing Database Objects  47
  Designing the Objects That Define Data  48
  Creating Partitioned Tables  48
  Designing Indexes  53
  Designing Objects That Retrieve Data and Extend Functionality  61
  Exploring Views  63
  Designing Indexed Views  67
  Designing T-SQL User-Defined Functions  71
  Designing CLR User-Defined Functions  77
  Designing CLR User-Defined Aggregates  79
  Designing Stored Procedures  80
  Designing Objects That Perform Actions  83
  Designing DML Triggers  84
  Designing DDL Triggers  90
  Designing WMI Triggers  95
  SQL Server Agent  104
  Summary  120
  Exam Essentials  120
  Review Questions  121
  Answers to Review Questions  128

Chapter 3  Performance Tuning a Database Solution  131
  Establishing Performance Objectives  132
  Monitoring SQL Server Performance  133
  Monitoring Proactively  133
  Understanding the Factors Affecting SQL Server Performance  137
  Evaluating the Tools for Monitoring Performance  139
  Using Windows Tools  139
  Using SQL Server Tools  141
  Choosing a Monitoring Tool  158
  Correlating a Trace with Windows Performance Log Data Using SQL Profiler  159
  Detecting and Responding to Performance Problems  165
  Troubleshooting Memory Problems  166
  Troubleshooting CPU Problems  169
  Troubleshooting I/O Bottlenecks  172
  Troubleshooting tempdb Problems  173
  Troubleshooting Poorly Running Queries  174
  Summary  175
  Exam Essentials  176
  Review Questions  177
  Answers to Review Questions  181

Chapter 4  Securing a Database Solution  183
  Designing an Application Solution to Support Security  184
  A Brief History of SQL Server Security  185
  Securing the SQL Server Solution  189
  Designing and Implementing Application Security  192
  Designing the Database to Enable Auditing  193
  Designing Database Security  208
  Defining the Security Model  210
  Defining Login Access Requirements  212
  Defining Database Access Requirements  214
  Specifying Database Object Security Permissions  228
  Specifying Database Objects Used to Maintain Security  233
  Defining Schemas to Manage Object Ownership  237
  Designing an Execution Context Strategy  240
  Summary  253
  Exam Essentials  253
  Review Questions  255
  Answers to Review Questions  260

Chapter 5  Designing Database Testing and Code Management Procedures  263
  Designing a Unit Test Plan for a Database  264
  Assessing Unit Test Components  265
  Developing Appropriate Performance Profiles  267
  Testing Performance Profiles  268
  Designing Tests for Query Performance  269
  Creating a Plan for Deploying a Database  272
  Selecting an Appropriate Deployment Technique  272
  Designing Database Deployment Scripts  285
  Controlling Source Code  287
  Understanding the Benefits of Source Control  287
  Implementing SCM Techniques  288
  Summary  291
  Exam Essentials  291
  Review Questions  292
  Answers to Review Questions  298

Chapter 6  Designing a Web Services Solution  301
  "A Brief History of Distributed Application Development"  302
  Introducing XML Web Services  303
  Using Native XML Web Services in SQL Server  307
  What's Going on Under the Hood?  308
  Building an XML Web Service Using SQL Server  309
  Using the Service  314
  Looking at the WSDL  314
  Building a Client Application Using Visual Studio 2005  316
  Reviewing What You've Learned  323
  Creating and Configuring Native XML Web Services  325
  Using the CREATE ENDPOINT Statement  325
  Using the ALTER ENDPOINT Statement  328
  Using the DROP ENDPOINT Statement  328
  Querying Endpoint Metadata  328
  Reserving a Namespace: Running Side-by-Side with IIS  329
  Implementing Security with Native XML Web Services  332
  Implementing Authorization  332
  Implementing Authentication  334
  Implementing Security Best Practices for Native XML Web Services  335
  Using Alternative Technologies for SQL Server and Web Services  335
  Native XML Web Services vs. SQLXML  335
  Native XML Web Services vs. ASMX Web Services  336
  Implementing Best Practices for Native XML Web Services  337
  Summary  339
  Exam Essentials  339
  Review Questions  340
  Answers to Review Questions  344

Chapter 7  Designing Messaging Services for a Database Solution  347
  Understanding Service-Oriented Architecture  349
  Understanding the Components of SOAs  349
  Understanding SOA in the Microsoft World  350
  Designing Service Broker Solutions for Asynchronous Applications  350
  Understanding the Service Broker Conversation Architecture  351
  Putting It All Together: A Sample Service Broker Application  357
  Developing Applications for SQL Server Notification Services  362
  Understanding the Notification Services Application Architecture  363
  Building Notification Services Applications  365
  Configuring a Notification Services Instance  385
  Deploying Notification Services Instances  388
  Looking at Notification Services Sample Applications  392
  Using SQL Server Database Mail  393
  Understanding the Database Mail Architecture  395
  Summary  407
  Exam Essentials  408
  Review Questions  409
  Answers to Review Questions  414

Chapter 8  Designing a Reporting Services Solution  417
  Understanding Your Options for Developing Reports  419
  Understanding the Report Model  419
  Building a Report Model  420
  Deploying a Report Model  429
  Using a Report Model  429
  Using the SSRS Report Wizard to Design Reports  431
  Using Microsoft Visual Studio to Develop Reporting Services Reports  437
  Understanding Report Layout and Rendering Options  445
  Working with Report Data Regions  445
  Grouping Report Data  446
  Understanding How Reports Are Rendered at Runtime  450
  Controlling User Interaction within Reports  451
  Working with Graphical Report Elements  452
  Working with Report Style Elements  452
  Working with Nongraphical Report Elements  454
  Creating SSRS Reports Using the Microsoft Report Builder Utility  455
  Optimizing Report Execution  458
  Caching Reports  460
  Creating Report Snapshots  461
  Using Report Manager  463
  Delivering Reports  464
  Configuring Email Support  465
  Creating Subscriptions  465
  Summary  467
  Exam Essentials  467
  Review Questions  469
  Answers to Review Questions  474

Chapter 9  Designing Data Integration Solutions  477
  Using SQL Server Integration Services  478
  Understanding the Package Structure  487
  Understanding the SSIS Architecture  503
  Testing and Debugging SSIS Packages  506
  Using Alternate Data Integration Technologies  526
  Using the SQL Server Agent  527
  Using the BCP Utility  528
  Using the BULK INSERT Statement  532
  Using Linked Servers  534
  Using Ad Hoc Distributed Queries  538
  Using Replication  541
  Summary  546
  Exam Essentials  546
  Review Questions  547
  Answers to Review Questions  552

Chapter 10  Case Studies  555
  Case Study 1: United Nations  557
    Existing Environment  557
    Business Requirements  558
    Technical Requirements  558
    Review Questions  559
    Answers to Review Questions  562
  Case Study 2: Energy Trading  563
    Existing Environment  563
    Business Requirements  563
    Technical Requirements  564
    Review Questions  565
    Answers to Review Questions  567
  Case Study 3: Nigerian Census  568
    Existing Environment  568
    Business Requirements  569
    Technical Requirements  569
    Review Questions  570
    Answers to Review Questions  572
  Case Study 4: Shark Conservation  573
    Existing Environment  573
    Business Requirements  574
    Technical Requirements  574
    Review Questions  575
    Answers to Review Questions  577

Glossary  579
Index  593
Table of Exercises

Exercise 1.1  Generating a Database Diagram  8
Exercise 1.2  Creating a T-SQL User-Defined Data Type  16
Exercise 1.3  Turning on CLR Integration in SQL Server 2005  17
Exercise 1.4  Calculating Page Density  20
Exercise 2.1  Creating Table Partitions  50
Exercise 2.2  Creating and Working with Views  65
Exercise 2.3  Creating Indexed Views  70
Exercise 2.4  Modifying a Stored Procedure's DDL  82
Exercise 2.5  Working with INSTEAD OF Triggers  88
Exercise 2.6  Working with DDL Triggers  93
Exercise 2.7  Working with WMI Event Alerts  100
Exercise 3.1  Correlating a Trace with Windows Performance Log Data  159
Exercise 4.1  Auditing Changes to Objects through DDL Triggers  198
Exercise 4.2  Explicitly Changing the Execution Context  243
Exercise 4.3  Performing a Multitude of Security Tasks  249
Exercise 5.1  Deploying a Database Using the Copy Database Wizard  276
Exercise 6.1  Creating a Stored Procedure  310
Exercise 6.2  Stopping IIS  312
Exercise 6.3  Creating a Windows Forms Web Service Client Application  317
Exercise 6.4  Testing SQL Server and IIS  330
Exercise 7.1  Configuring SQL Server Database Mail  396
Exercise 8.1  Grouping Data in SSRS  446
Exercise 9.1  Creating a Simple SSIS Package  480
Exercise 9.2  Creating a Not-So-Simple SSIS Package  507
Exercise 9.3  Using the BCP Utility  531
Exercise 9.4  Using the BULK INSERT Statement  533
Exercise 9.5  Using the OPENROWSET Statement  540
Introduction

Welcome to what is probably the most difficult exam in the current range of SQL Server 2005 exams available, the "Designing Database Solutions by Using Microsoft SQL Server 2005" 70-441 exam. It is such a difficult exam because it has both a breadth and a depth that do not exist in the other SQL Server 2005 exams. It is not a DBA exam that focuses on a solution to a particular event or set of circumstances. And it is not a development exam asking you what syntax to use for a particular requirement. It is a design exam that will ask you what the best solution is, given a large set of business and technical requirements in the context of a particular environment. There will be many requirements, and they might conflict with each other at times, as in the real world. It is up to you to untangle these various requirements so as to be able to recommend the best solution.

Designing database solutions can be a bit of an art form, requiring years of experience, a good theoretical background, and a broad knowledge of peripheral technologies and software development techniques. Consequently, I have tried to focus on best practices and database design techniques, giving alternatives and various considerations as appropriate, instead of simply giving you syntax and "a set of rules to follow." I have had the good fortune and privilege over the past 10-odd years of working on a variety of database solutions for numerous public and private sector organizations, each with their own set of requirements, budgets, problems, and politics. I have brought those experiences and lessons learned from the field to this book. You'd be surprised at how poorly a database solution can be designed, given a simple set of requirements. Grab me for a beer if you ever see me, and I will gladly fill you in. In addition, I hope this book will go beyond helping you pass the exam and will give you some practical advice on how to develop a database solution based on Microsoft SQL Server 2005. Microsoft SQL Server 2005 has some amazing technology that you can leverage as a database architect or database developer, and I hope I can help you become more aware of the options you have when designing a database solution.

The "Designing Database Solutions by Using Microsoft SQL Server 2005" exam is based purely on case studies; however, you will notice that the chapters in this book do not end in case studies. I did this by design, predominantly because of the way the chapters were structured: I could not ask database design questions at the end of each chapter that would incorporate concepts from a number of chapters. Instead, I have written questions at the end of each chapter that test and solidify what was taught in that chapter, and I have written an entire chapter at the end of the book that comprises four case studies testing you on the concepts covered by the entire book.

You will also notice that I have provided either the full or partial syntax of the various relevant Transact-SQL statements and commands. Although I have at times explained some of the key clauses in more detail, that is not always the case. I think it is useful to get exposure to the syntax and the various options available, since you will more
likely be tested on this rather than the graphical tools available. So make sure you understand the syntax and look up SQL Server 2005 Books Online as required. Last, I would recommend you familiarize yourself with the differences in features between the various editions of SQL Server 2005.
Introducing the Microsoft Certified IT Professional

Since the inception of its certification program, Microsoft has certified millions of people. Over the years, Microsoft has learned what it takes to help people show their skills through certification. Based on that experience, Microsoft has introduced a new generation of certifications:
Microsoft Certified Technology Specialist (MCTS)
Microsoft Certified IT Professional (MCITP)
Microsoft Certified Professional Developer (MCPD)
Microsoft Certified Architect (MCA)
The MCITP certification program is designed to test your knowledge as an IT professional on a broader scale than that of the MCTS certification program. The MCTS certification program is an initial certification designed to test your skills and knowledge in a basic fashion on one product. To become an MCITP, you must first obtain an MCTS certification in your particular skill area. The MCITP program has three SQL Server 2005 certifications: database developer, database administrator, and business intelligence developer. Each of these focuses on a different aspect of SQL Server 2005, in recognition that the jobs requiring SQL Server 2005 skills are diverse, with varying needs that often do not overlap. For example, a database developer may never need to know how to back up a database or configure the memory used by SQL Server. Those skills are more appropriate for a database administrator and aren't tested as heavily. Instead, the focus for this certification is on using the development tools and structuring queries.
How to Become Certified as an MCITP: Database Developer

Database developers are typically employed by mid-sized to large organizations and are in high demand. They are responsible for designing and implementing relational database models and database storage objects. They also program the database solution by using various database objects such as user-defined functions, triggers, and stored procedures. They are also responsible for retrieving and modifying data using T-SQL queries and optimizing existing queries. Microsoft Certified IT Professional: Database Developer (MCITP: Database Developer) is the premier certification for database designers and developers. The MCITP: Database Developer credential demonstrates that you can design a secure, stable, enterprise database solution by using Microsoft SQL Server 2005. The MCITP: Database Developer certification requires an individual to pass two examinations as well as hold an MCTS: SQL Server 2005 certification. As mentioned, the two exams are 70-441 and 70-442 and require extensive training in SQL Server 2005 to complete. In the past, the exams
were structured to test knowledge of products by remembering specifications and with a minimum of extrapolation to real-world business situations. Often people memorized answers from "exam cram" books or from "brain dump" websites and were able to pass the exams. This book focuses on the skills you should have to pass the first exam, 70-441, while another Sybex book, MCITP Developer: Microsoft SQL Server 2005 Data Access Design and Optimization Study Guide (70-442), focuses on the second exam. I hope you not only use this book as a resource in exam preparation but also as a reference as you seek to develop solutions for your employer. The new Microsoft certifications have been designed to protect the integrity of its certification programs. They require extensive knowledge of the product and real-world experience to answer the questions being asked. You'll need troubleshooting skills gained from actually using the product to solve the problems on the examination.
Make sure you take a Microsoft Skills Assessment for SQL Server 2005 to help you focus your exam preparation, at http://assessment.learning.microsoft.com/test/home.asp.
This book is part of a series from Wiley that is designed to help you focus your preparation for the exams, but these books do not take the place of real-world experience and hands-on use of SQL Server 2005. Be sure you have actually used most of the features described in this book prior to registering for the exam. Visit the Wiley website at www.wiley.com for information about resources for the other exams.
Registering for the Exam

You can take the Microsoft exams at any of more than 1,000 Authorized Prometric Testing Centers (APTCs) and VUE testing centers around the world. For the location of a testing center near you, call Prometric at 800-755-EXAM (755-3926), or call VUE at 888-837-8616. Outside the United States and Canada, contact your local Prometric or VUE registration center. Find out the number of the exam you want to take (70-441 for the "Designing Database Solutions by Using Microsoft SQL Server 2005" exam), and then register with Prometric or VUE. At this point, you will be asked for advance payment for the exam. The exams vary in price depending on the country in which you take them. You can schedule exams up to six weeks in advance or as late as one working day prior to the date of the exam. You can cancel or reschedule your exam if you contact the center at least two working days prior to the exam. Same-day registration is available in some locations, subject to space availability. Where same-day registration is available, you must register a minimum of two hours before test time.
You can also register for your exams online at www.prometric.com or www.vue.com.
When you schedule the exam, you will be provided with instructions regarding appointment and cancellation procedures, information about ID requirements, and information about the testing center location. In addition, you will receive a registration and payment confirmation letter from Prometric or VUE. Microsoft requires certification candidates to accept the terms of a nondisclosure agreement before taking certification exams.
Taking the "Designing Database Solutions by Using Microsoft SQL Server 2005" Exam

The "Designing Database Solutions by Using Microsoft SQL Server 2005" exam covers concepts and skills related to the design and implementation of a database solution using SQL Server 2005. It emphasizes the following elements of design:

Designing database testing and code management procedures
Designing a performance benchmarking strategy
Designing a database
  Designing the logical database design
  Designing the physical database design
Designing database security
Designing database objects
Designing an application solution for SQL Server 2005
Designing an auditing strategy
Developing applications that use SQL Server Support Services
  SQL Server Integration Services
  SQL Server Notification Services
  SQL Server Reporting Services
  SQL Server Web Services
This exam will test your knowledge of how you can best use SQL Server 2005 to meet the business requirements of an organization in a cost-effective manner. This exam requires a decision-making ability in all of the areas listed previously and an understanding of the implication of each of those decisions. To pass the test, you need to fully understand these topics. Careful study of this book, along with hands-on experience, will help you prepare for this exam.
Microsoft provides exam objectives to give you a general overview of possible areas of coverage on the Microsoft exams. Keep in mind, however, that exam objectives are subject to change at any time without prior notice and at Microsoft’s sole discretion. Please visit Microsoft’s Learning website (www.microsoft.com/learning) for the most current listing of exam objectives.
Types of Exam Questions

In an effort to both refine the testing process and protect the quality of its certifications, Microsoft has focused its Windows 2000, XP, and Server 2003 exams on real experience and hands-on proficiency. The test places a greater emphasis on your past working environments and responsibilities and less emphasis on how well you can memorize. In fact, Microsoft says an MCTS candidate should have at least one year of hands-on experience.
The 70-441 exam covers a set of precise objectives. I have written this book around these objectives and requirements for the Microsoft exam. When you take the exam, you will see approximately 52 questions, although the number of questions might be subject to change. At the end of the exam, you will get your exam score, showing your level of knowledge on each topic, and your total score with a pass or a fail.
Exam questions may be in a variety of formats. Depending on which exam you take, you'll see multiple-choice questions, select-and-place questions, and prioritize-a-list questions:

Multiple-choice questions Multiple-choice questions come in two main forms. One is a straightforward question followed by several possible answers, of which one or more is correct. The other type of multiple-choice question is more complex and based on a specific scenario. The scenario may focus on several areas or objectives.

Select-and-place questions Select-and-place exam questions involve graphical elements that you must manipulate to successfully answer the question. A typical diagram will show computers and other components next to boxes that contain the text Place here. The labels for the boxes represent various computer roles on a network, such as a print server and a file server. Based on information given for each computer, you are asked to select each label and place it in the correct box. You need to place all the labels correctly. No credit is given for the question if you correctly label only some of the boxes.

Prioritize-a-list questions In the prioritize-a-list questions, you might be asked to put a series of steps in order by dragging items from boxes on the left to boxes on the right and placing them in the correct order. One other type requires that you drag an item from the left and place it under an item in a column on the right.

Case studies In most of its design exams, Microsoft has opted to present a series of case studies, some often quite lengthy and followed with a series of questions as to how certain design criteria will be used. Questions following the case studies may employ any of the previous formats and methodologies.
For more information about the various exam question types, refer to www.microsoft.com/learning.
Microsoft will regularly add and remove questions from the exams. This is called item seeding. It is part of the effort to make it more difficult for individuals to merely memorize exam questions that previous test takers gave them.
Tips for Taking the Exam

Here are some general tips for achieving success on your certification exam:
Arrive early at the exam center so you can relax and review your study materials. During this final review, you can look over tables and lists of exam-related information.
Read the questions carefully. Don’t be tempted to jump to an early conclusion. Make sure you know exactly what the question is asking. This is especially true for the 70-441 exam, which is based on case studies. Remember, all the pieces you need to answer the question are somewhere in the “Business Requirements” section, the “Technical Requirements” section, or even the “Existing Environment” section.
For questions you’re not sure about, use a process of elimination to get rid of the obviously incorrect answers first. This improves your odds of selecting the correct answer when you need to make an educated guess.
However, I have always given my students the tip of trusting their initial "gut" instinct when they are unsure. I certainly find that the more I reread a question I am unsure about, the more I doubt my initial choice, even though it invariably ends up being correct.
What's in the Book?

When writing this book, I took into account not only what you need to know to pass the exam but also what you need to know to take what you've learned and apply it in the real world. Each book contains the following:

Objective-by-objective coverage of the topics you need to know Each chapter lists the objectives covered in that chapter.
The topics covered in this book map directly to Microsoft’s official exam objectives. Each exam objective is covered completely.
Assessment test Directly following this introduction is an assessment test that you should take before starting to read the book. It is designed to help you determine how much you already know about SQL Server 2005. Each question is tied to a topic discussed in the book. Using the results of the assessment test, you can figure out the areas where you need to focus your study. Of course, I do recommend you read the entire book.
Exam essentials To highlight what you learn, essential topics appear at the end of each chapter. This "Exam Essentials" section briefly highlights the topics that need your attention as you prepare for the exam.

Glossary Throughout each chapter, you will be introduced to important terms and concepts you will need to know for the exam. These terms appear in italics within the chapters, and at the end of the book, a detailed glossary defines these terms, as well as other general terms you should know.

Review questions, complete with detailed explanations Each chapter is followed by 20 review questions that test what you learned in the chapter. The questions are written with the exam in mind, meaning they are designed to have the same look and feel as what you'll see on the exam. Question types are just like the exam, including multiple-choice, select-and-place, and prioritize-a-list questions.

Hands-on exercises In each chapter, you'll find exercises designed to give you the important hands-on experience that is critical for your exam preparation. The exercises support the topics of the chapter, and they walk you through the steps necessary to perform a particular function.

Case studies Because reading a book isn't enough for you to learn how to apply these topics in your everyday duties, I have provided case studies in special sidebars. These explain when and why a particular solution would make sense in a working environment you'd actually encounter. You will notice that not every chapter contains case studies. This is because the nature of the material covered in a specific chapter may not lend itself to a case study situation. By contrast, when you take the actual exam, you will notice that the case studies and their subsequent questions are not as compartmentalized by a small group of objectives as I have done in these chapters in order to assure that you have mastered the relatively narrow topics. The end result is that I have mixed a variety of methodologies to give you the maximum opportunities to test your own knowledge of the material covered by the objective. I model the actual exam more closely in the case studies on the CD for those wanting a more "exam-like" environment.

Interactive CD Every Sybex book in the Study Guide series comes with a CD complete with additional questions, flashcards for use with a PC or an interactive device, a Windows simulation program, and the book in electronic format. Details appear in the following section.
What's on the Book's CD?

This new member of the best-selling Study Guide series includes quite an array of training resources. The CD offers numerous simulations, bonus exams, and flashcards to help you study for the exam. I have also included the complete contents of the book in electronic form. More specifically, you'll find the following resources on the book's CD:

The Sybex ebook Many people like the convenience of being able to carry their whole study guide on a CD. They also like being able to search the text via computer to find specific information quickly and easily. For these reasons, I've included the entire contents of this book on
the CD in Portable Document Format (PDF). I've also included Adobe Acrobat Reader, which provides the interface for the PDF contents as well as the search capabilities.

The Sybex test engine This is a collection of multiple-choice questions that will help you prepare for your exam. You'll find the following sets of questions:
A series of case study–based exam questions designed to simulate the actual live exam.
All the questions from the study guide, presented in a test engine for your review. You can review questions by chapter or by objective, or you can take a random test.
The assessment test.
Sybex flashcards for PCs and handheld devices The “flashcard” style of question offers an effective way to quickly and efficiently test your understanding of the fundamental concepts covered in the exam. The Sybex flashcards consist of more than 150 questions presented in a special engine developed specifically for the Study Guide series. Because of the high demand for a product that will run on handheld devices, a version of the flashcards that you can take with you on your handheld device has been developed.
How Do You Use This Book?

This book provides a solid foundation for the serious effort of preparing for the exam. To best benefit from this book, you may want to use the following study method:

1. Read each chapter carefully. Do your best to fully understand the information.

2. Complete all hands-on exercises in the chapter, referring to the text as necessary so you understand each step you perform. Install the evaluation version of SQL Server, and get some experience with the product.

   Use an evaluation version of SQL Server Enterprise Edition instead of Express Edition because Express Edition does not have all the features discussed in this book. You can download the evaluation version from www.microsoft.com/sql.

3. Answer the review questions at the end of each chapter. If you prefer to answer the questions in a timed and graded format, you can answer the chapter review questions from the CD that accompanies this book instead.

4. Note which questions you did not understand, and study the corresponding sections of the book again.

5. Make sure you complete the entire book.

6. Before taking the exam, go through the review questions, case studies, flashcards, and so on, included on the CD accompanying this book.

7. Unfortunately, there is no substitute for experience. In fact, the exam is designed for people who have had experience with SQL Server 2005 in the enterprise. So, try to get your hands dirty with the technology.
To learn all the material covered in this book, you will need to study regularly and with discipline. Try to set aside the same time every day to study, and select a comfortable and quiet place in which to do it, although I prefer Foo Fighters, Nine Inch Nails, Velvet Revolver, or any great blues.
For some great Australian bands, listen to the Hoodoo Gurus or Australian Crawl. For a "smoother" sound, try the Badloves. Brilliant!
If you work hard, you will be surprised at how quickly you learn this material. Again, good luck!
Hardware and Software Requirements

You should verify that your computer meets the minimum requirements for installing SQL Server 2005. I suggest your computer meet or exceed the recommended requirements for a more orgasmic experience. Table I.1 details the minimum requirements for SQL Server 2005 editions on 32-bit computers.

TABLE I.1: SQL Server 2005 Editions and Minimum Operating System Requirements

Edition      Operating System Version and Edition
Enterprise   Windows XP with Service Pack 2 or later; Windows 2000 Server with Service Pack 4 or later; Windows 2003 Server: Standard, Enterprise, or Datacenter editions with Service Pack 1 or later; Windows Small Business Server 2003 with Service Pack 1 or later; Windows 2000 Professional with Service Pack 4 or later
Standard     Windows XP with Service Pack 2 or later; Windows 2000 Server with Service Pack 4 or later; Windows 2003 Server: Standard, Enterprise, or Datacenter editions with Service Pack 1 or later; Windows Small Business Server 2003 with Service Pack 1 or later; Windows 2000 Professional with Service Pack 4 or later
Workgroup    Windows XP with Service Pack 2 or later; Windows 2000 Server with Service Pack 4 or later; Windows 2003 Server: Standard, Enterprise, or Datacenter editions with Service Pack 1 or later; Windows Small Business Server 2003 with Service Pack 1 or later; Windows 2000 Professional with Service Pack 4 or later
Developer    Windows XP with Service Pack 2 or later; Windows 2000 Server with Service Pack 4 or later; Windows 2003 Server: Standard, Enterprise, or Datacenter editions with Service Pack 1 or later; Windows Small Business Server 2003 with Service Pack 1 or later; Windows 2000 Professional with Service Pack 4 or later
Assessment Test

1. You have developed a database solution on a developer server and are ready to deploy the solution on a production server. The database on the developer server has test data and is 10GB in size. The database on the production server will be initially sized at 5GB. You don't need the test data to exist on the production server. What is the quickest way of deploying the database on the production server?

A. Back up the database on the development server, and restore the backup on the production server. Reduce the database to 5GB in size.
B. Detach the database from the development server, copy the files to the production server, and attach the database files.
C. Create a new database on the production server. Generate T-SQL DDL scripts from the development server, and execute them on the production server.
D. Detach the database from the development server, copy the files to the production server, and attach the database files. Delete the data using T-SQL statements.
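If you want to try the detach/attach workflow that several of these options reference, here is a minimal sketch; the database name and file paths are hypothetical, not taken from the question:

-- Detach the database on the development server (hypothetical name).
EXEC sp_detach_db @dbname = 'SalesDB';

-- After copying the .mdf and .ldf files to the production server,
-- attach them there (hypothetical paths).
CREATE DATABASE SalesDB
ON (FILENAME = 'D:\Data\SalesDB.mdf'),
   (FILENAME = 'D:\Data\SalesDB_log.ldf')
FOR ATTACH;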
2. You have developed an application that runs against your SQL Server database solution. As part of your testing, you want to capture the network activity between the client application and SQL Server. You will use this information for analysis and potentially to fine-tune the database solution. You also need to be able to replay this network activity against an identical SQL Server instance located at the head office. What is the best tool to use?

A. The SQL Server 2005 Upgrade Advisor
B. The SQL Server Surface Area Configuration tool
C. The Database Engine Tuning Advisor
D. SQL Server Profiler
3. Your manager wants to be notified whenever a developer modifies a table's schema within the database solution. What database objects would you use to meet your manager's requirements? (Choose two.)

A. Use a DDL trigger.
B. Use a DML trigger.
C. Use the sp_sendmail stored procedure.
D. Use the sp_send_dbmail stored procedure.
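If you want to experiment with the trigger mechanics behind this question, the following is a minimal sketch of a database-scoped trigger that fires whenever a table's schema is altered; the trigger name is hypothetical, and the notification call is deliberately left as a comment for you to fill in:

-- Fires for every ALTER TABLE statement in the current database.
CREATE TRIGGER trg_AuditTableChanges
ON DATABASE
FOR ALTER_TABLE
AS
BEGIN
    -- EVENTDATA() returns an XML document describing the event,
    -- including the login name and the T-SQL statement issued.
    DECLARE @EventData XML;
    SET @EventData = EVENTDATA();
    PRINT CONVERT(NVARCHAR(4000), @EventData);
    -- A notification call would go here.
END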
4. Your database solution consists of two databases: SalesDB and ForecastDB. A view called [MySales], created and owned by Olya, depends on another view called [MyForecast] that she has created in ForecastDB. Her boss, Victor, who has access only to the SalesDB database, needs access to her [MySales] view, and she has granted him SELECT permissions on the [MySales] view. What else do you need to do to ensure Victor can use her view? (Choose all that apply.)

A. Turn on the database option DB_CHAINING.
B. Turn on the database option TRUSTWORTHY.
C. Grant Victor SELECT permission to the [MyForecast] view.
D. Make Victor a member of the db_ddladmin fixed database role in [SalesDB].
5. You are designing a new table for your marketing department that will be used to store vendor data. The schema for the [Vendor] table is as follows:
Name       Data Type
GUID       UNIQUEIDENTIFIER
Vendor     NVARCHAR(50)
Address    NVARCHAR(50)
City       NVARCHAR(20)
State      NCHAR(3)
PostCode   NCHAR(9)
Country    NVARCHAR(20)
Phone      INT
The [Vendor] table will have a clustered index on the GUID column and a nonclustered index on the Phone field. Marketing expects to have about 100,000 rows in the [Vendor] table. How much space should you allow for the [Vendor] database object?

A. About 17,000KB
B. About 19,500KB
C. About 33,300KB
D. About 35,700KB
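By the way, you do not have to rely on arithmetic alone for questions like this; if you load a representative number of rows into a test copy of the table, SQL Server will report the space actually consumed. A minimal sketch, assuming the table has been created and populated in a scratch database:

-- Report the space used by the [Vendor] table and its indexes.
EXEC sp_spaceused 'Vendor';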
6. You're designing a database solution for your sales department. Join performance is critical. The [Product] table contains 666 products. With more than 10,000,000 customers, you are anticipating a lot of invoices. It is expected that 99 percent of customers will be placing an order for only one product per invoice. Your database design is as follows:

CREATE TABLE [Product] (
    [ProductId] SMALLINT NOT NULL,
    [Product] VARCHAR(20) NOT NULL,
    [Price] MONEY NOT NULL,
    [StockLevel] INT NOT NULL,
    [StockValue] AS ([Price] * [StockLevel]),
    CONSTRAINT [PK_Product] PRIMARY KEY CLUSTERED (ProductId)
)

CREATE TABLE [Invoice] (
    [InvoiceNumber] INT NOT NULL,
    [InvoiceDate] DATETIME NOT NULL,
    [CustomerId] INT NOT NULL,
    CONSTRAINT [PK_Invoice] PRIMARY KEY CLUSTERED (InvoiceNumber)
)

CREATE TABLE [InvoiceDetails] (
    [InvoiceNumber] INT NOT NULL REFERENCES Invoice(InvoiceNumber),
    [ProductId] SMALLINT NOT NULL REFERENCES Product(ProductId),
    [SalesQuantity] TINYINT NOT NULL,
    [SalesPrice] MONEY NOT NULL
)

What indexes should you create to improve performance? (Choose all that apply.)

A. CREATE CLUSTERED INDEX [CLI_InvoiceNumber] ON InvoiceDetails(InvoiceNumber)
B. CREATE CLUSTERED INDEX [CLI_ProductId] ON InvoiceDetails(ProductId)
C. CREATE CLUSTERED INDEX [CLI_SalesQuantity] ON InvoiceDetails(SalesQuantity)
D. CREATE NONCLUSTERED INDEX [NCI_InvoiceNumber] ON InvoiceDetails(InvoiceNumber)
E. CREATE NONCLUSTERED INDEX [NCI_ProductId] ON InvoiceDetails(ProductId)
F. CREATE NONCLUSTERED INDEX [NCI_SalesQuantity] ON InvoiceDetails(SalesQuantity)

7. You are architecting a distributed database solution that will use the Internet, so you are concerned about security. You need to allow a number of remote sites to access a number of stored procedures through HTTP so you can retrieve small amounts of secure lookup data. What solution should you architect?

A. Set up an FTP service. Create an SSIS package that periodically exports the lookup data to CSV files in the FTP site. Require remote sites to access the FTP site, and retrieve the data using an FTP password.
B. Install IIS on the SQL Server. Develop a web solution that will allow the remote sites to retrieve data through the stored procedures.
C. Install IIS on a separate server. Develop a web solution that will allow the remote sites to retrieve data through the stored procedures.
D. Develop a web services solution on the SQL Server that will allow the remote sites to retrieve data through the stored procedures.
8. You are developing a database solution for the University of New South Wales in Sydney, Australia. Because of the large number of students and marks awarded, you have decided to take advantage of partitioning. You have decided to partition the table into students who have failed and students who have passed. Students can get one of the following awards:

Fail (F): 0–45
Pass terminating (PT): 46–47
Pass conceded: 48–49
Pass: 50–64
Credit: 65–74
Distinction: 75–84
High distinction: 85–100

You will have three partitions: one for students who have failed, one for students who have achieved a pass or credit, and one for distinctions. What partition function should you use?

A. CREATE PARTITION FUNCTION PF_Mark (TINYINT) AS RANGE LEFT FOR VALUES (45,74)
B. CREATE PARTITION FUNCTION PF_Mark (TINYINT) AS RANGE LEFT FOR VALUES (46,75)
C. CREATE PARTITION FUNCTION PF_Mark (TINYINT) AS RANGE RIGHT FOR VALUES (45,74)
D. CREATE PARTITION FUNCTION PF_Mark (TINYINT) AS RANGE RIGHT FOR VALUES (46,75)
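If the difference between RANGE LEFT and RANGE RIGHT is hazy, test a boundary value yourself; the $PARTITION function reports which partition a given value lands in. A minimal sketch with a single illustrative boundary (the function name is hypothetical and unrelated to the question's values):

-- With RANGE LEFT, each boundary value belongs to the partition on its left.
CREATE PARTITION FUNCTION PF_Demo (TINYINT)
AS RANGE LEFT FOR VALUES (50);
GO
SELECT $PARTITION.PF_Demo(50);  -- returns 1: the boundary stays in partition 1
SELECT $PARTITION.PF_Demo(51);  -- returns 2
GO
DROP PARTITION FUNCTION PF_Demo;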
9. You are designing a database solution for a book distributor. The schema of the table is as follows. The [PublisherDescription] field is an industry standard that is populated by the book's publisher, and the [Review] column is used to concatenate reviews from various sources that have been scrubbed and converted as free-form text.

CREATE TABLE [Title] (
    [TitleId] SMALLINT NOT NULL,
    [ISBN10] CHAR(10) NOT NULL,
    [ISBN13] CHAR(13) NOT NULL,
    [Title] VARCHAR(20) NOT NULL,
    [Price] MONEY NOT NULL,
    ...
    [PublisherDescription] XML NOT NULL,
    [Review] VARCHAR(MAX) NOT NULL,
    CONSTRAINT [TitleId] PRIMARY KEY CLUSTERED (TitleId)
)
Information workers who will be querying the table want to be able to search for words or phrases in the [Review] column. What kind of index should you create to improve performance?

A. Create a clustered index on the column.
B. Create a full-text index on the column.
C. Create a nonclustered index on the column.
D. Create an XML index on the column.

10. You are designing a database solution that will be using XML web services. What kind of endpoint do you have to configure?

A. Database mirroring
B. Service Broker
C. SOAP
D. T-SQL

11. You are architecting a database solution for FIFA for the World Cup in 2006, which involves a Reporting Services component. Information workers at FIFA need to be able to develop and run ad hoc reports. What utility should they use?

A. Report Manager
B. Report Builder
C. Report Designer
D. Model Designer

12. You have hired a new junior DBA/developer named Marg. She needs to be able only to add users to your database solution. What statement should you execute to give her the capabilities to fulfill her job?

A. EXEC sp_addrolemember 'db_accessadmin', 'Marg'
B. EXEC sp_addrolemember 'db_ddladmin', 'Marg'
C. EXEC sp_addrolemember 'db_owner', 'Marg'
D. EXEC sp_addrolemember 'db_securityadmin', 'Marg'

13. You are creating a database that will store weather data. The main table contains a date field and a field for every state of the United States; the state field will potentially store a weather satellite photo of that state. Each photo is less than 8KB in size. For any given day, all the fields might not be populated because of satellite problems or inclement weather. What data type should you use for the weather satellite photos?

A. BINARY(8000)
B. VARCHAR(8000)
C. SQL_VARIANT
D. VARBINARY(MAX)
14. You have successfully developed a distributed database solution that will track scientific data for platypus colonies in Australia. The database solution uses Service Broker to asynchronously communicate and process the research data. The database solution needs to be deployed at the various research sites, and you have sent them the files that make up the PlatypusWorks database. After attaching the database, the DBAs notice problems with the Service Broker. What should the DBAs at the research sites do to ensure the Service Broker components work correctly?

A. Run the SQL Server Surface Area Configuration tool.
B. Run the SQL Server Configuration Manager.
C. Run the ALTER DATABASE PlatypusWorks SET ENABLE_BROKER statement.
D. Run the DBCC CHECKDB statement.

15. You are developing a Reporting Services solution based on an OLTP database. You are currently developing a critical report, so performance is critical. The indexing strategy seems to be sufficient. The report is based on a [Customers] table that contains more than 6,900,000 rows. The table contains a BIT field indicating the gender of the customer. The report needs to be filtered on a number of SARGs. The gender is rarely one of those SARGs. How can you optimize the report?

A. Use a subreport.
B. Limit the data via a filter in the report.
C. Limit the data via a WHERE clause in the underlying DML.
D. Create a nonclustered index on the gender field.
E. Use a matrix.

16. You are designing a Notification Services application for a hospital that will be used to page doctors. The hospital has a tight budget. The database solution will be deployed on a server with two CPUs and 4GB of RAM running Windows 2003 x64 Standard Edition. You want to ensure that the event distributor can handle the throughput and does not become the bottleneck. What edition of SQL Server should you use?

A. Express Edition
B. Developer Edition
C. Standard Edition
D. Enterprise Edition

17. You need to load a number of legacy CSV files into your new database solution. What techniques could you use to load these files? (Choose all that apply.)

A. Use the BCP command.
B. Use the DTSRUN command.
C. Use the BULK INSERT statement.
D. Use the OPENXML statement.
E. Use the sp_addlinkedserver stored procedure.
18. You need to script the database solution on your production SQL Server to save it in your source control solution. What is the easiest way of doing this?

A. Use the sp_help stored procedure.
B. Use SQL Server Management Studio.
C. Use the SQL Server Configuration Manager.
D. Use the sp_helptext stored procedure.
E. Write a program that uses the DMO.
F. Write a program that uses the SMO.
19. You are working on a census database for a European Union project in Nigeria. A census was performed in 2006, and 130 million records have been captured. The database solution now requires the collected data to be separated into 37 databases, one for each state of Nigeria, each of which is destined to be a data mart. You are designing an SSIS package that will read from the main table that contains the 130 million records and needs to send the rows to the correct state database. What SSIS package transformation should you use?

A. Use a Copy Column transformation.
B. Use a Data Conversion transformation.
C. Use a Conditional Split transformation.
D. Use a Derived Column transformation.

20. You are performing some benchmarking on your development server. You want to create a SQL Server trace via SQL Server Profiler that will tell you how long unit test queries take to run. What SQL Server Profiler template should you use?

A. TSQL
B. TSQL_SPs
C. TSQL_Duration
D. TSQL_Replay
E. Tuning
Answers to Assessment Test

1. C. Creating a new 5GB database on the production server and executing T-SQL DDL scripts from the development server is the quickest way of deploying a database with no test data at the correct size. See Chapter 5 for more information.

2. D. SQL Server Profiler is designed to capture network activity between a client application and a SQL Server instance. The captured trace file can be replayed against a SQL Server instance. See Chapter 3 for more information.

3. A, D. You should develop a DDL trigger that sends an email using the sp_send_dbmail system stored procedure. DML triggers fire only for INSERT, UPDATE, and DELETE statements. You should not use the sp_sendmail system stored procedure because it will be deprecated. See Chapters 2 and 7 for more information.

4. A. You need to turn on database chaining to ensure that ownership chaining will work between the databases. Victor cannot be given SELECT permission to the [MyForecast] view because he does not exist in the ForecastDB database. See Chapter 4 for more information.

5. D. Each row will take up a maximum of (16 + 100 + 100 + 40 + 6 + 18 + 40 + 4) = 324 bytes. You can fit (8,060 / 324) = 24 rows per 8KB page. So, the table will take up (100,000 / 24) = 4,167 8KB pages, or 33,336 KB. The nonclustered index row is (4 + 16) = 20 bytes; allowing for row overhead, you can fit 335 rows per 8KB page. Therefore, the index will take up (100,000 / 335) = 298 8KB pages, or 2,384 KB. So the table will take up about 35,700 KB (33,336 KB + 2,384 KB = 35,720 KB). See Chapter 1 for more information.
6. B, D. You need an index on both the [InvoiceNumber] and [ProductId] columns of the [InvoiceDetails] table. A clustered index on the [ProductId] column will ensure smaller nonclustered indexes. See Chapter 2 for more information.

7. D. A web services solution allows you to build an HTTP interface without a reliance on IIS. SQL Server stored procedures can be exposed to the outside world through HTTP endpoints. See Chapter 6 for more information.

8. A. The LEFT range gives you three partitions: 0–45, 46–74, and 75–100. See Chapter 2 for more information.

9. B. Full-text indexes are designed to capture the significant words in free-form text or unstructured data. XML indexes are designed to index XML data. Clustered and nonclustered indexes are designed to index structured data. A clustered index cannot be created on this table because one already exists. See Chapter 2 for more information.

10. C. Only SOAP endpoints are used for XML web services. See Chapter 6 for more information.

11. B. Report Builder enables users to create ad hoc reports. Report Manager is a web tool that manages the content of the report server database. Report Designer creates reports. Model Designer builds models for ad hoc reporting. See Chapter 8 for more information.

12. A. The db_accessadmin role will allow her to add new users to the database. See Chapter 4 for more information.
13. D. The VARBINARY(MAX) field will allow you to take advantage of overflow pages so the total size of the row can exceed SQL Server’s 8,060-byte limit. See Chapter 1 for more information.

14. C. The Service Broker gets disabled whenever a database is restored or attached. The SET ENABLE_BROKER statement will enable the Service Broker. See Chapter 7 for more information.

15. C. Limiting the data in the WHERE clause will minimize the amount of data consumed by the result set. See Chapter 8 for more information.

16. C. SQL Server 2005 Standard Edition supports a single event distributor and a maximum of three threads, which should be sufficient. See Chapter 7 for more information.

17. A, C. The BCP command and BULK INSERT statement are designed to load files such as CSV files into SQL Server tables. See Chapter 9 for more information.

18. B. SQL Server Management Studio has the ability to script a database quickly and easily. The sp_helptext stored procedure will script only one object. See Chapter 5 for more information.

19. C. You can use a Conditional Split transformation to route data to different tables, depending on the data. See Chapter 9 for more information.

20. C. The TSQL_Duration template has all the information you need to determine how long queries take to complete. See Chapter 2 for more information.
Chapter 1

Designing a Database Solution

MICROSOFT EXAM OBJECTIVES COVERED IN THIS CHAPTER:

Design a logical database.
Design a normalized database.
Optimize the database design by denormalizing.
Design data flow architecture.
Design table width.
Design an application solution that uses appropriate database technologies and techniques.
Design a solution for storage of XML data in the database.
Design objects that define data.
Design user-defined data types.
Design tables that use advanced features.
Design attributes.
Decide whether to persist an attribute.
Specify domain integrity by creating attribute constraints.
Choose appropriate column data types and sizes.
Design entities.
Define entities.
Define entity integrity.
Normalize tables to reduce data redundancy.
Establish the appropriate level of denormalization.
Design entity relationships (ERs).
Specify ERs for referential integrity.
Specify foreign keys.
Create programmable objects to maintain referential integrity.
Designing databases is becoming somewhat of a lost art form. Unfortunately, not enough energy is spent on the initial database design. Considering the life span of your database solution and the dependency of this solution on the fundamental database design, you should commit the necessary time to designing a framework that fulfills your current requirements and that can evolve as these requirements change. You should invest in the process of a formal logical database design, followed by a review, before you translate that model into your physical database design. In this chapter, I will go through the various issues you should consider in both the logical and physical database designs.

Database developers are invariably concerned about performance. Although performance is important, it should not be your primary concern at this stage in the implementation of your database solution. At this stage in your database solution’s life cycle, you should be more concerned about data integrity, so I will go through the different types of data integrity before covering the various ways in which you can enforce your data integrity.

SQL Server 2005 has a great range of features to take advantage of, so make sure you are aware of the product’s capabilities. This will enable you to deliver an optimal database solution that will keep everyone happy: the developers, the database administrators (DBAs), management, and those “ever-demanding” users.
Designing a Logical Database

Designing a database solution customarily begins with a logical database design. This logical database design cycle generally involves the following processes:
Determining the data to be stored
Determining the data relationships
Logically structuring the data
When designing the logical database model, it is important to keep the users in mind. (Unfortunately, they are quite important!) So, your logical database model should be organized into logical entities that are easily understood and maintained. Appropriate naming conventions are also important, although beyond the scope of this chapter. The logical database model should reduce data repetition, and it typically does this through a process known as normalization.
Understanding Normalization

Simplified, normalization is all about eliminating duplicate data in a relational database design. Although you can adopt a formal process, you will find that most database developers tend to naturally design normalized databases instead.
The Most Difficult Question: What Is Normalization? How many developers or DBAs do you meet who can remember the database fundamentals, such as “Codd’s Twelve Rules,” functional dependencies, and transitive dependencies, and are able to define normalization and the five normal forms? I’ve been training SQL Server worldwide for more than 10 years, and students generally struggle. Out of curiosity, I returned to my lecture notes from the University of New South Wales to see how my lecturer, Geoff Whale, defined normalisation (we spell it differently in Australia): “Normalisation—simply ‘common sense.’” Thanks, Geoff….
Basically, normalization is about converting an entity into tables of progressively smaller degree and cardinality until you reach an optimum level of decomposition, so that no (or little) data redundancy exists. Removing redundancy achieves a number of benefits. First, you save space by storing a data element only once. Second, it is then easier to maintain data consistency because you have only one instance of the data element to maintain. However, it is important to understand the disadvantage of normalization: you need to join tables again to retrieve related data. Join operations are among the most expensive operations in any relational database management system (RDBMS), including SQL Server. So, it is possible to “overnormalize,” but I’ll talk more about that in the “Understanding Denormalization” section.

The formal normalization process typically starts with unnormalized data and progressively goes through a number of classifications called normal forms (NFs). Several normal forms exist; in fact, six normal forms have been formulated to date, with variations such as the Boyce-Codd and domain/key normal forms. Generally, however, third normal form (3NF) is sufficient for most database solutions and is what you strive for in an initial, good, logical database design.

You’ll now go through the first three normal forms using, as an example, a customer order form for which you need to design a database model, as shown in Figure 1.1. You can also represent the customer order form, using Victor’s Notation, as follows:

Order(OrderNumber, CustomerNumber, Name, PassportNumber, Address, Region, PostCode, Country, OrderDate (ProductNumber, Product, Quantity, UnitPrice))
FIGURE 1.1 Customer order form
Achieving First Normal Form

A table is considered to be in first normal form (1NF) if it meets the following conditions:
Every column is atomic; it cannot be further decomposed into more subcolumns.
It is a valid table, so you separate any repeating groups or multivalued columns.
A unique key has been identified for each row. This primary key is typically denoted as being underlined.
All attributes are functionally dependent on all or part of the key.
So in this example, you need to decompose the [Name] attribute and separate the repeating group representing the OrderDetail tuple:

Order(OrderNumber, CustomerNumber, FirstName, LastName, PassportNumber, Address, Region, PostCode, Country, OrderDate)
OrderDetail(OrderNumber, ProductNumber, Product, Quantity, UnitPrice)
Figure 1.2 shows the end result.

FIGURE 1.2 1NF
Achieving Second Normal Form

The normal forms are cumulative, so on top of being in 1NF, a table is in second normal form (2NF) when all nonkey attributes are fully functionally dependent on the entire key. So, you are effectively identifying and getting rid of partial dependencies. Look for composite keys and where an attribute might be dependent on only one of the key columns. In this case, the [Product] and [UnitPrice] attributes depend only on [ProductNumber], not on the ([OrderNumber], [ProductNumber]) combination. So to meet 2NF, you need to separate these attributes into a distinct entity:

Order(OrderNumber, OrderDate, CustomerNumber, FirstName, LastName, PassportNumber, Address, Region, PostCode, Country)
OrderDetail(OrderNumber, ProductNumber, Quantity)
Product(ProductNumber, Product, UnitPrice)
Figure 1.3 represents this graphically.

FIGURE 1.3 2NF
Achieving Third Normal Form

Once in 2NF, you now look for transitive dependencies. “What, Victor? In plain English, please!” Basically, you need to ensure that no nonkey attribute depends on another nonkey attribute. In this case, the [FirstName], [LastName], [PassportNumber], [Address], [Region], [PostCode], and [Country] attributes all depend on the [CustomerNumber] attribute, not on the [OrderNumber] attribute. You are basically ensuring that an entity holds only the information related to it. This table will finally be in 3NF:

Order(OrderNumber, CustomerNumber, OrderDate)
Customer(CustomerNumber, FirstName, LastName, PassportNumber, Address, Region, PostCode, Country)
OrderDetail(OrderNumber, ProductNumber, Quantity)
Product(ProductNumber, Product, UnitPrice)
Figure 1.4 shows how you have created a separate entity to store the customer data.
40521.book Page 6 Monday, August 14, 2006 8:04 AM
FIGURE 1.4 3NF
Once your logical database design is in 3NF, you have the foundation for a good database solution. The next phase is typically to start the physical database design and design your entities. However, at this stage in the book, it is appropriate for me to discuss denormalization techniques.
Understanding Denormalization

Denormalization is the process of taking a normalized database design and bringing back levels of data redundancy to improve database performance. As discussed, a fully normalized database design increases the number of join operations that SQL Server has to perform, and locking adds further overhead when you are accessing multiple tables to retrieve the required data. By deliberately recombining tables or duplicating data, you can reduce the amount of work SQL Server has to perform because there are fewer joins to perform and locks to maintain. Additionally, some denormalization techniques can dramatically reduce the amount of calculation that SQL Server has to perform, thereby reducing processor resources.

The obvious disadvantage of denormalization is that you have multiple instances of the same data, which will lead to update anomalies unless you design some mechanism to maintain the denormalized data. This critical design issue depends on two factors. First, the denormalized data has to be up-to-date. Second, you must consider the volatility of the denormalized data in your database solution. Ideally, denormalized data should be nonvolatile. If your denormalized data needs to be maintained in real time, then you will probably have to take advantage of Data Manipulation Language (DML) triggers. On the other hand, if the data does not have to be 100 percent accurate, you can perhaps get away with running a scheduled Transact-SQL (T-SQL) script after-hours via a SQL Server Agent job.

You can denormalize using different techniques. Figure 1.5 illustrates a foreign key being replicated to reduce the number of joins that SQL Server will have to perform. In the denormalized design, you do not need to join the [OrderDetail] table to the [Product] table just to retrieve [DistributorNumber], so SQL Server has one less join to perform. This is the normalized version:

OrderDetail(OrderNumber, ProductNumber, Quantity)
Product(ProductNumber, Product, UnitPrice)
Distributor(DistributorNumber, Distributor, Address, State, PostCode, Country)

And this is the denormalized version:
OrderDetail(OrderNumber, ProductNumber, DistributorNumber, Quantity)
Product(ProductNumber, Product, UnitPrice)
Distributor(DistributorNumber, Distributor, Address, State, PostCode, Country)
[DistributorNumber] is a good candidate for denormalization since it is typically nonvolatile. You could maintain the denormalized data via a DML trigger without too much performance overhead. Figure 1.6 shows an example of a calculated year-to-date sales column being added to the [Product] table because it is a calculation that users frequently need in their reports. So instead of having to run a T-SQL query that would have to use the SUM() aggregate function and GROUP BY clause, users can simply look up the [YTDSales] column. Not only will this avoid SQL Server having to perform a join and consequently lock the table, it could substantially improve performance if you had millions of records across which the year-to-date sales calculation would have to be made.

FIGURE 1.5 Duplicating the foreign key to reduce the number of joins

FIGURE 1.6 Denormalization: deriving aggregate data to improve performance
This is the normalized version:

OrderDetail(OrderNumber, ProductNumber, Quantity)
Product(ProductNumber, Product, UnitPrice)

And this is the denormalized version:

OrderDetail(OrderNumber, ProductNumber, Quantity)
Product(ProductNumber, Product, UnitPrice, YTDSales)
The important issue here is how you’ll maintain the [YTDSales] column. This depends on your user requirements. If the data needs to be 100 percent accurate at all times, then you have no choice but to implement a DML trigger. On the other hand, if it is not important for the value to be up-to-date in real time, then you could schedule an after-hours T-SQL job that updates the [YTDSales] column at the end of each day. So when deciding on the potential level of denormalization in your database design, you need to weigh the benefits against the overhead of maintaining the denormalized data so that data consistency is not lost. You can achieve this only with thorough knowledge of your actual data and the specific business requirements of the users. And understanding how SQL Server’s engine works doesn’t hurt either!
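If you do need real-time accuracy, a DML trigger along the following lines would do the job. This is a minimal sketch only: it assumes the denormalized [YTDSales] column from Figure 1.6, reuses the running example’s table names, and recalculates the value for just the products touched by each statement.

CREATE TRIGGER [trg_OrderDetail_YTDSales]
ON [OrderDetail]
AFTER INSERT, UPDATE, DELETE
AS
BEGIN
    SET NOCOUNT ON ;
    -- Recalculate [YTDSales] only for products affected by this statement.
    UPDATE p
    SET [YTDSales] = ISNULL(
        (SELECT SUM(od.[Quantity] * p.[UnitPrice])
         FROM [OrderDetail] AS od
         JOIN [Order] AS o ON o.[OrderNumber] = od.[OrderNumber]
         WHERE od.[ProductNumber] = p.[ProductNumber]
           AND YEAR(o.[OrderDate]) = YEAR(GETDATE())), 0)
    FROM [Product] AS p
    WHERE p.[ProductNumber] IN
        (SELECT [ProductNumber] FROM inserted
         UNION
         SELECT [ProductNumber] FROM deleted) ;
END ;
GO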
Designing the Physical Entities

Once you have designed the logical database model, it is now time to turn your attention to the physical implementation of that model. The physical database design generally involves the following processes:
Determining the data types to be used by the attributes
Designing the physical table to represent the logical entities
Designing the entity integrity mechanisms
Designing the relational integrity mechanisms
I always believe it’s useful to know where you’re heading. So, it is constructive to see what your end goal is when designing the physical database model. SQL Server Management Studio easily enables you to draw a database diagram of your physical database design that shows the physical tables that make up the database, the attributes, and the properties of the tables and the relationships between them (see Exercise 1.1).

EXERCISE 1.1: Generating a Database Diagram

1. Open SQL Server Management Studio, and connect using Windows Authentication.
2. In Object Explorer, expand Server > Databases > AdventureWorks > Database Diagrams.
3. If prompted, select Yes when asked whether you want to create one or more of the support objects required to use database diagramming.
4. Right-click the Database Diagrams folder, and select New Database Diagram.
5. Select all tables in the Add Table dialog box, and click the Add button.
6. Click the Close button.
7. Examine the physical database model in the database diagram window.
When implementing the physical model, you always need to take into account the RDBMS engine you are using. The same logical database model might end up being implemented differently on SQL Server 2005 versus Microsoft Access or even SQL Server 2000.
Database Design Is an Iterative Process In 2000 I was involved in analyzing a database used in the Australian wool industry. As usual, the database had a number of performance problems. In this particular instance, after examining the database design, I found that no constraints were defined at all. Upon further investigation, I determined this database had been upgraded originally from SQL Server 6.5 to SQL Server 7.0 and, finally, to the then-current SQL Server 2000. At no stage did anyone examine the database design to see whether they could leverage any new features of the latest version of SQL Server. Your database design should be flexible and constantly reevaluated to ensure you are maximizing the potential of the SQL Server database engine on which you are running. In the case of upgrading to SQL Server 2005, you should be looking at redesigning your entities to take advantage of features such as the XML data type, support for large object data types, persisted columns, partitions, and so on.
Designing Attributes

Every RDBMS will have its own combination of data types it natively supports. The major new additions for SQL Server 2005 are native support for XML data and the “dreaded” common language runtime (CLR) user-defined data types. In fact, SQL Server 2005 has a number of options as far as user-defined data types are concerned, but you will examine them in Chapter 2. You are primarily concerned with native system-supplied data types here.
Understanding Types of Data

The modern database engine has to support a number of types of data in today’s demanding business environments:
Structured
Semistructured
Unstructured
It is important to understand the characteristics and differences of data types so as to be able to correctly choose the appropriate data type for the attributes in your database design.
Structured Data

Structured data can be organized in semantic groups or entities. Similar entities are grouped together by using relations or classes because they have the same descriptions or attributes. Structured data tends to have a well-defined, well-known, and typically rigid schema. This is the type of data that an RDBMS typically excels at storing and managing and traditionally is what developers have all been using over the last decade. Structured data provides various advantages. You can normalize the data into various tables and thereby reduce redundancy, as discussed. Structured data also tends to lend itself to efficient storage and optimized access and is easy to define data integrity and business rules against.
Semistructured Data

Semistructured data represents data whose schema is not well defined or may evolve over a period of time. Although semistructured data is often organized in semantic entities where similar entities are grouped together, entities in the same group may not have the same attributes. Extensible Markup Language (XML) is a widely used form of semistructured data. It excels as a means of storing emails and business-to-business (B2B) documents that tend to be semistructured. Semistructured data is suitable, as indicated, when the structure of the data is not known completely or is likely to change significantly in the future. However, XML also has a few disadvantages. Every element of data has to be marked with tags (which represent the schema), and this increases storage space; therefore, it is nowhere near as efficient as structured data. Searching semistructured data is a little more complex because the schema is potentially different for each record.
Unstructured Data

Unstructured data has no schema whatsoever, so it is typically unpredictable. Examples of unstructured data include free-form text, Microsoft Office documents, and images. The advantage of unstructured data is that you can change the format of the data at any time without any major database redesign. However, it is difficult to search using “traditional” techniques. SQL Server provides support for such Binary Large Objects (BLOBs) and has a “revamped” full-text indexing search engine to facilitate searches on free-form text.
Choosing Data Types and Sizes

From Databases 1.01: “Choose the smallest data type possible that can contain the range of values for that particular attribute.” It’s as simple as that. Mostly, anyway! You still need to take into account how SQL Server 2005 stores data rows when creating your table schemas, but generally this advice is sound. Table 1.1 shows all the system-supplied native data types supported by SQL Server 2005, the range of values they support, and how much space they consume on disk.
TABLE 1.1 Native SQL Server 2005 Data Types

Data Type | Range | Storage
BIT | 0 to 1 | 1 byte per eight BIT fields.
TINYINT | 0 to 255 | 1 byte.
SMALLINT | –32,768 to 32,767 | 2 bytes.
INT | –2,147,483,648 to 2,147,483,647 | 4 bytes.
REAL | –3.40E+38 to –1.18E–38, 0, and 1.18E–38 to 3.40E+38 | 4 bytes.
SMALLMONEY | –214,748.3648 to 214,748.3647 | 4 bytes.
SMALLDATETIME | January 1, 1900, through June 6, 2079 | 4 bytes.
BIGINT | –9,223,372,036,854,775,808 to 9,223,372,036,854,775,807 | 8 bytes.
MONEY | –922,337,203,685,477.5808 to 922,337,203,685,477.5807 | 8 bytes.
DATETIME | January 1, 1753, through December 31, 9999 | 8 bytes.
TIMESTAMP | Generally used as a means of version-stamping rows | 8 bytes.
FLOAT(n) | –1.79E+308 to –2.23E–308, 0, and 2.23E–308 to 1.79E+308 | Variable, depends on (n): 1–24 is 4 bytes; 25–53 is 8 bytes.
UNIQUEIDENTIFIER | Globally unique identifier (GUID) value matching the xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx mask | 16 bytes.
DECIMAL(p, s), NUMERIC(p, s) | Depends on fixed precision (p) and scale (s); –10^38 + 1 through 10^38 – 1 | Variable, depends on precision (p): 1–9 is 5 bytes; 10–19 is 9 bytes; 20–28 is 13 bytes; 29–38 is 17 bytes.
BINARY(n) | 1 to 8,000 bytes | (n) bytes.
CHAR(n) | 1 to 8,000 characters | (n) bytes.
NCHAR(n) | 1 to 4,000 Unicode characters | (n * 2) bytes.
VARBINARY(n) | 1 to 8,000 bytes | Variable; storage is the actual data length plus 2 bytes.
VARCHAR(n) | 1 to 8,000 characters | Variable; storage is the actual data length plus 2 bytes.
NVARCHAR(n) | 1 to 4,000 Unicode characters | Variable; storage is the actual data length plus 2 bytes.
SQL_VARIANT | 1 to 8,000 bytes | Variable; maximum storage of 8,016 bytes.
VARCHAR(MAX), NVARCHAR(MAX), VARBINARY(MAX), XML, IMAGE, TEXT, NTEXT | Up to 2GB | Variable; maximum storage of 2,147,483,647 bytes.
When choosing appropriate data types for your attributes, keep the following recommendations in mind:
Generally avoid using the FLOAT and REAL data types because they are imprecise. Remember, floating-point data is approximate and consequently cannot be represented exactly. In other words, what you put in might not be what you get out.
The DATETIME data type includes the time, down to an accuracy of 3.33 milliseconds. The SMALLDATETIME data type includes it down to 1 second. Be careful when working with these data types because you always need to take the time component into account. The best solution if you are interested in storing just the date is to zero out the time component programmatically, such as with a DML trigger (see the sketch following this list).
I am pretty sure an American National Standards Institute (ANSI) DATE data type has been one of the most requested features for SQL Server in the last 10-plus years. Not XML…. Not CLR…. The first person who explains to me satisfactorily why Microsoft has not yet implemented this basic feature will get lunch and drinks on me!
Be careful with using the VARCHAR(MAX), NVARCHAR(MAX), and VARBINARY(MAX) data types because they might slow down performance. You will examine this in more detail shortly when you learn how SQL Server 2005 stores data rows.
A single BIT field will still take up a byte, because SQL Server has to byte-align all fields. You might be better off implementing such an attribute as a more meaningful CHAR(1) field. For example, you might require a [Sex] field in the [Customer] table, which you are considering implementing as a BIT field. However, if it is the only BIT field in the table, you might consider implementing it as a CHAR(1) field because it will take up the same amount of space and the M and F values will be easier to interpret.
The NCHAR, NVARCHAR, and NTEXT data types all use 2 bytes to store data because they use Unicode encoding (the Unicode UCS-2 character set), which is useful for storing multilingual character data. However, it does take up twice the amount of space, which can affect performance.
Use the UNIQUEIDENTIFIER data type to store GUIDs, which, although not guaranteed to be unique, can be considered unique because the total number of possible keys (2^128, or 3.4028 × 10^38) is so large that the possibility of the same number being generated twice is slight. Some developers use GUIDs as pseudorandom numbers. Be careful of using UNIQUEIDENTIFIER columns with clustered indexes; at 16 bytes they make a wide clustering key, so such indexes have limited value. You use the NEWID() SQL Server system function to generate the GUID value.
You should use the VARBINARY data type to store BLOBs such as Microsoft Word and Adobe Acrobat documents, images such as JPEG files, and the like.
Use the VARCHAR and NVARCHAR data types to store free-form text. The problem with free-form text is that it cannot be efficiently queried by using traditional SQL Server B-Tree indexes. In this case, you should investigate how full-text search and full-text indexes have been implemented in SQL Server 2005.
The NTEXT, TEXT, and IMAGE data types will be removed in a future version of Microsoft SQL Server. Use NVARCHAR, VARCHAR, and VARBINARY instead.
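As an example of zeroing out the time component mentioned in the list above, the classic SQL Server 2005 idiom counts whole days since the zero date and adds them back. The trigger below is a minimal sketch; the [Order] table, its [OrderDate] column, and the trigger name are hypothetical.

-- DATEADD/DATEDIFF truncation: strips the time portion of a DATETIME.
CREATE TRIGGER [trg_Order_DateOnly]
ON [Order]
AFTER INSERT, UPDATE
AS
BEGIN
    SET NOCOUNT ON ;
    UPDATE o
    SET [OrderDate] = DATEADD(DAY, DATEDIFF(DAY, 0, o.[OrderDate]), 0)
    FROM [Order] AS o
    JOIN inserted AS i ON i.[OrderNumber] = o.[OrderNumber]
    -- The guard avoids rewriting rows that are already date-only.
    WHERE o.[OrderDate] <> DATEADD(DAY, DATEDIFF(DAY, 0, o.[OrderDate]), 0) ;
END ;
GO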
The system-supplied native data types have not really changed significantly since SQL Server 2000. You can still take advantage of user-defined data types. The major new additions to SQL Server 2005 are native support for XML and CLR user-defined data types.
XML Data Type

Sigh…how many arguments have I had over this one? As with all technology, it’s all in the implementation and not the technology itself. The benefits of XML are many. XML provides a text-based means to describe and apply a hierarchical structure to information, be it a record, list, or tree. Being based on text, XML is simultaneously computer- and human-readable and unencumbered by licenses or restrictions. It supports Unicode and so natively supports multiple languages. All these features make it a great mechanism for the following:

Data exchange as a transport mechanism
  Business-to-business (B2B)
  Business-to-consumer (B2C)
  Application-to-application (A2A)
Document management
  Office XML documents
  Extensible HTML (XHTML)
Messaging
  Simple Object Access Protocol (SOAP)
  Really Simple Syndication (RSS)
Middle-tier collaboration
The million-dollar question of course is, what do you do with XML inside SQL Server? When working with XML data, you need to choose whether you should use the native XML data type, use some BLOB data type, or shred the XML into relational columns. Your considerations depend on a number of factors:
Whether you need to preserve document order and structure, or fidelity. The various data types will store the XML data differently.
Whether you want to further query your XML data at a finer grain. The frequency of querying the XML data will be a factor.
Whether you need to modify your XML data at a finer grain. You will also need to consider how often the XML data will be modified.
Whether you want to speed up XML queries by indexing. You can create XML indexes on XML fields. SQL Server can index all tags, values, and paths for an XML field. I will cover this in more detail in Chapter 2.
Whether you have a schema for your XML data.
Whether your database solution requires system catalog views to administer your XML data and schemas.
If you shred an XML document and then later reconstitute it, SQL Server does not guarantee that it will be identical.
SQL Server 2005 implements the ISO SQL-2003 standard XML data type and supports both untyped and typed XML. Microsoft recommends using the untyped XML in these cases:
When you do not have a schema for your XML data
When you do not want SQL Server to validate the XML data, although you have the schema

Microsoft recommends using typed XML data in these cases:
When you have schemas for your XML data and you want SQL Server to validate the XML data accordingly
When you want to take advantage of storage and querying compilation that comes with the typed XML data
So, have I answered the question? Well...not exactly. I think of XML as being similar to “nuclear weaponry”: just because you have it doesn’t mean you should use it. It depends on your business requirements more than anything else. But if you require ad hoc modeling of semistructured data, where an object’s properties are sparsely populated, the schema is changing, or multivalued properties do not fit into your traditional relational schema, then you should strongly consider using XML.
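As a brief sketch of what typed XML looks like in practice (the schema collection name and its content here are hypothetical, purely for illustration):

-- Register an XML schema, then type a column against it; SQL Server
-- validates the XML on every insert and update.
CREATE XML SCHEMA COLLECTION [ReviewSchema] AS
N'<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema">
    <xs:element name="review" type="xs:string" />
</xs:schema>' ;
GO
CREATE TABLE [TitleReview] (
    [TitleId] SMALLINT NOT NULL,
    [ReviewXml] XML ([ReviewSchema]) NULL -- typed XML column
) ;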
T-SQL User-Defined Data Types

SQL Server 2005 allows you to create T-SQL user-defined data types, which are basically aliases to existing system-supplied native data types. Most new database solutions don’t implement these alias types because “they do not bring anything new to the table.” However, the following are still good reasons to take advantage of them:
When you are porting a database solution from another RDBMS that uses a data type that SQL Server does not natively support, they allow the database creation scripts to run seamlessly. Alternatively, use them if you are implementing a database solution on different RDBMS engines, such as when creating a [BOOLEAN] alias type so as to be compatible with Microsoft Access.
They are a great way of standardizing data types for commonly used attributes, such as [Name], [LastName], and [PostCode], to ensure compatibility.
If you create any user-defined data types inside the model system database, those data types will be automatically generated in any new databases that you subsequently create. This represents a great technique of implementing “standards” inside an organization.
The partial syntax for creating a user-defined data type is as follows: CREATE TYPE [ schema_name. ] type_name { FROM base_type [ ( precision [ , scale ] ) ] [ NULL | NOT NULL ] | EXTERNAL NAME assembly_name [ .class_name ] } [ ; ]
You should no longer use the sp_addtype system stored procedure to create user-defined data types. This system stored procedure will be removed in a future version of Microsoft SQL Server. Use CREATE TYPE instead.
So if you decided to create a user-defined data type for sex inside your database, you would execute the following T-SQL code: CREATE TYPE [dbo].[Sex] FROM CHAR (1) NOT NULL ; GO
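Once created, the alias type is used like any native type. The following usage sketch is illustrative only; the table and the check constraint narrowing the domain to M and F are my assumptions, not part of the running example.

CREATE TABLE [Patient] (
    [PatientId] INT NOT NULL,
    [Sex] [dbo].[Sex] NOT NULL
        CONSTRAINT [CK_Patient(Sex)] CHECK ([Sex] IN ('M', 'F'))
) ;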
So let’s go through a very simple example of creating a T-SQL user-defined data type (see Exercise 1.2).

EXERCISE 1.2: Creating a T-SQL User-Defined Data Type

1. Open SQL Server Management Studio, and connect using Windows Authentication.
2. In Object Explorer, expand Server > System Databases > tempdb > Programmability > Types.
3. Right-click User-Defined Data Types, and click New User-Defined Data Type.
4. Enter Sex in the Name text box.
5. Select Char in the Data Type drop-down list.
6. Change the Length value to 1.
7. Click the Allow NULLs check box.
8. Click the OK button.
CLR User-Defined Data Types

A new feature of SQL Server 2005 is the support for CLR user-defined data types. The capability of going beyond the traditional native data types of SQL Server is a powerful feature as far as extensibility is concerned, but you need to consider the impact on performance, because these data types will perform more slowly. You will also probably need to write your own functions to manipulate your CLR user-defined data types because the native SQL Server 2005 functions will have limited functionality. Generally, you should try to use the native data types provided by SQL Server 2005. Consider implementing CLR user-defined data types when you have complex requirements such as a need for multiple elements and/or certain behavior; however, again, you could potentially use the XML data type. CLR user-defined data types are well suited to the following:
Geospatial data
Custom date, time, currency, and extended numeric data
Complex data structures such as arrays
Custom encoded or encrypted data
Having said all that, you will probably never have to implement CLR user-defined data types, which is probably for the best (but I sleep better at night knowing they are there). To create a CLR user-defined data type, follow this procedure:

1. Code and build the assembly that defines the CLR user-defined data type using a language that supports the Microsoft .NET Framework, such as Microsoft Visual C# and Microsoft Visual Basic .NET.
2. Register the assembly in SQL Server 2005 using the T-SQL CREATE ASSEMBLY statement, which will copy the assembly into the database.
3. Create the CLR user-defined data type within the database by using the T-SQL CREATE TYPE statement I covered previously.
By default, to reduce the surface area of attack, the CLR integration feature of SQL Server 2005 is turned off. You will need to enable it to create CLR user-defined data types. Turning on CLR integration in SQL Server 2005 is done through the Surface Area Configuration tool, which we cover in more detail in Chapter 4. Let’s turn on the CLR functionality of your SQL Server 2005 instance in Exercise 1.3.

EXERCISE 1.3: Turning on CLR Integration in SQL Server 2005

1. Open SQL Server Surface Area Configuration.
2. Click Surface Area Configuration for Features.
3. Click CLR Integration.
4. Click the Enable CLR Integration check box.
5. Click the OK button.
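If you prefer T-SQL, or you need to script the change, the same setting can be flipped with the documented sp_configure option:

-- Enable CLR integration without the GUI tool.
EXEC sp_configure 'clr enabled', 1 ;
RECONFIGURE ;
GO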
Designing Domain Integrity

Domain (or column) integrity controls the values that are valid for a particular column. You can enforce domain integrity through a number of mechanisms:
The data type assigned to that column
Whether the column definition allows NULL values
Procedural code in an insert or update DML trigger
A foreign key constraint that references a set of primary key values in the parent table
A check constraint defined on that column
You should no longer use database rule objects to enforce domain integrity because they are being deprecated in later versions of SQL Server. They are available only for backward compatibility in SQL Server 2005.
Basically, you should use a check constraint to maintain domain integrity whenever possible because, like the other constraints, they are fast. The partial syntax for creating a check constraint is as follows:

ALTER TABLE [ database_name .[ schema_name ] . | schema_name . ] table_name
    [ WITH { CHECK | NOCHECK } ]
    ADD CONSTRAINT constraint_name
    CHECK (logical_expression)
So if you decided to create a check constraint that would ensure that the [PassportNumber] column of the [Customer] table matched a pattern, you would execute the following T-SQL code:

ALTER TABLE [Customer]
ADD CONSTRAINT [CK_Customer(PassportNumber)]
CHECK ([PassportNumber] LIKE '%[0-9][0-9][0-9][0-9][0-9][0-9][0-9]') ;
GO
The problem with using check constraint statements is that they are limited in the following ways:
They can reference other columns only in the same table.
They cannot call subqueries directly.
So for more complex domain integrity requirements, you will have to resort to some of the other mechanisms of enforcement.
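One common workaround is to wrap the lookup logic in a scalar user-defined function, because a check constraint may call a function even though it cannot contain a subquery directly. The following is a hedged sketch that assumes a hypothetical [Country] lookup table:

CREATE FUNCTION [dbo].[fn_CountryExists] (@Country VARCHAR(20))
RETURNS BIT
AS
BEGIN
    -- Returns 1 when the value exists in the [Country] lookup table.
    IF EXISTS (SELECT 1 FROM [Country] WHERE [Country] = @Country)
        RETURN 1 ;
    RETURN 0 ;
END ;
GO
ALTER TABLE [Customer]
ADD CONSTRAINT [CK_Customer(Country)]
CHECK ([dbo].[fn_CountryExists]([Country]) = 1) ;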
Designing Entities

Once you have determined the appropriate data types for the attributes of your entities, you are ready to create the physical database model. This process typically involves the following:
Defining the entities
Defining entity integrity
Defining referential integrity
By now you should be done with most of the design work, so it’s more a case of being familiar with the T-SQL syntax for creating a table and the various options. Well…not quite. You still might need to redesign your entity to take into account how the SQL Server 2005 storage engine physically stores the data rows in your database files, because this can have a dramatic impact on performance. Without being overly concerned with database files, extents, and pages (although it would help dramatically, so off you go to research those terms more!), it is sufficient to say that the basic unit of input/output (I/O) in SQL Server 2005 is an 8KB page.
Understanding How SQL Server Stores Data Rows

SQL Server 2005 stores data rows sequentially after the page’s 96-byte header, as shown in Figure 1.7. A row offset table starts at the end of the page and contains one entry for each row located on that page. Each entry keeps track of the beginning of the row relative to the start of the page. The entries in the row offset table are in reverse sequence from the rows on the page.
FIGURE 1.7 SQL Server data page structure (the page header, followed by data rows and free space, with the row offset table at the end of the page)
Generally speaking, a row cannot span a page boundary. So, a table can store a maximum of 8,060 bytes per row. However, SQL Server 2005 relaxes this restriction for the VARCHAR, NVARCHAR, VARBINARY, SQL_VARIANT, and CLR user-defined data types. SQL Server 2005 does this by taking advantage of a special ROW_OVERFLOW_DATA allocation unit that is used to store these overflow columns, tracking them via a special 24-byte pointer in the original data row. When the row wants to grow beyond the page limit, SQL Server automatically moves one or more of these variable-length columns to the ROW_OVERFLOW_DATA allocation unit. If subsequent DML operations shrink the data row, SQL Server dynamically moves the column back to the original data row.

Consequently, you need to take into account the percentage of rows likely to overflow, the frequency with which they will be queried (as they will be slower), and the frequency at which these rows will be modified (because this will also slow performance). You might be better off redesigning your schema by storing these columns in a separate table that has a one-to-one relationship with the original table.

It is also a good idea to calculate the page density, or how much of your 8KB page SQL Server 2005 will be using, because it might indicate a need to redesign your database. Ideally, you want to see a small percentage of the page not being used by SQL Server. In Exercise 1.4, you’ll learn how to calculate page density.
EXERCISE 1.4: Calculating Page Density

1. Calculate the record size of the table, assuming the maximum for variable-length data types, using the data type lengths in Table 1.1.

Column | Data Type
WhaleID | INT
WhaleName | NVARCHAR(20)
WhaleGender | CHAR(1)
Transmitter | UNIQUEIDENTIFIER
DateTagged | SMALLDATETIME
WhaleSpecies | VARCHAR(20)
LastSighting | SMALLDATETIME

2. Divide 8,096 (the space available for data on an 8KB page after the 96-byte page header) by the record size from step 1.
3. Round the result from step 2 down to an integer to find the number of rows that will fit per page.
4. Multiply the record size calculated from step 1 by the integer from step 3 to determine how much space will be consumed by the data rows.
5. Subtract the figure from step 4 from 8,096 to determine the amount of space on each data page that SQL Server will not be able to use.
So, if your record size is 3,000 bytes, you would have more than 2,000 bytes of free space on each page that SQL Server 2005 could not use. This would represent more than 25 percent wasted space on each page, which is quite a high figure. In this case, I would talk to the developers to see how you could redesign the table schema.
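Applying the exercise steps to the whale-tracking table, and ignoring per-row overhead such as the row header and null bitmap just as the steps do, the arithmetic works out as follows:

Record size: 4 (INT) + 42 (NVARCHAR(20): 20 × 2 + 2) + 1 (CHAR(1)) + 16 (UNIQUEIDENTIFIER) + 4 (SMALLDATETIME) + 22 (VARCHAR(20): 20 + 2) + 4 (SMALLDATETIME) = 93 bytes
Rows per page: 8,096 / 93 = 87.05, rounded down to 87 rows
Space consumed: 87 × 93 = 8,091 bytes, leaving only 5 bytes per page unusable

That is excellent page density, in contrast to the 3,000-byte example above.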
Defining Entities

SQL Server 2005 supports up to two billion tables per database and 1,024 columns per table. The number of rows and total size of a table are virtually unlimited. As discussed, the maximum number of bytes per row is 8,060, although this restriction is relaxed for the VARCHAR(n), NVARCHAR(n), VARBINARY(n), and SQL_VARIANT data types, which can take advantage of overflow pages. Don’t forget that the length of each one of these columns must still fall within the limit of 8,000 bytes, but their combined widths may exceed the 8,060-byte limit in a table.

You can use the CREATE TABLE T-SQL statement to create entities in SQL Server 2005. For all the SQL Server 2005 exams, you should familiarize yourself with the CREATE TABLE syntax and the various options available.

CREATE TABLE
    [ database_name . [ schema_name ] . | schema_name . ] table_name
    ( { <column_definition> | <computed_column_definition> }
        [ <table_constraint> ] [ ,...n ] )
    [ ON { partition_scheme_name ( partition_column_name ) | filegroup
        | "default" } ]
    [ { TEXTIMAGE_ON { filegroup | "default" } } ]
[ ; ]
The <column_definition> options allow you to control the columns that make up the table schema and their data types. It is recommended that you always explicitly control nullability instead of relying on any default behavior of SQL Server or your integrated development environment (IDE).

<column_definition> ::=
column_name <data_type>
    [ COLLATE collation_name ]
    [ NULL | NOT NULL ]
    [
        [ CONSTRAINT constraint_name ] DEFAULT constant_expression ]
    | [ IDENTITY [ ( seed ,increment ) ] [ NOT FOR REPLICATION ] ]
    [ ROWGUIDCOL ]
    [ <column_constraint> [ ...n ] ]
The <data_type> options control the schema to which the data type belongs. If the type_schema_name is not specified, SQL Server will reference the type_name in the following order:
The SQL Server system data type
The default schema of the current user in the current database
The [dbo] schema in the current database
<data_type> ::=
[ type_schema_name . ] type_name
    [ ( precision [ , scale ] | max |
        [ { CONTENT | DOCUMENT } ] xml_schema_collection ) ]
The <column_constraint> options allow you to define the various data integrity types through constraints. I’ll cover these various constraints later in this chapter. A lot of developers prefer using the ALTER TABLE statement after the initial CREATE TABLE statement because it is easier to read and possibly reflects their design methodology.

<column_constraint> ::=
[ CONSTRAINT constraint_name ]
{   { PRIMARY KEY | UNIQUE }
        [ CLUSTERED | NONCLUSTERED ]
        [ WITH FILLFACTOR = fillfactor
          | WITH ( <index_option> [ , ...n ] ) ]
        [ ON { partition_scheme_name ( partition_column_name )
            | filegroup | "default" } ]
  | [ FOREIGN KEY ]
        REFERENCES [ schema_name . ] referenced_table_name [ ( ref_column ) ]
        [ ON DELETE { NO ACTION | CASCADE | SET NULL | SET DEFAULT } ]
        [ ON UPDATE { NO ACTION | CASCADE | SET NULL | SET DEFAULT } ]
        [ NOT FOR REPLICATION ]
  | CHECK [ NOT FOR REPLICATION ] ( logical_expression )
}
The <computed_column_definition> options allow you to create a column that is based on a valid T-SQL expression. One of the more important new features to improve performance in SQL Server 2005 is the capability of persisting a computed column in your table schema.

<computed_column_definition> ::=
column_name AS computed_column_expression
[ PERSISTED [ NOT NULL ] ]
[   [ CONSTRAINT constraint_name ]
    { PRIMARY KEY | UNIQUE }
        [ CLUSTERED | NONCLUSTERED ]
        [ WITH FILLFACTOR = fillfactor
          | WITH ( <index_option> [ , ...n ] ) ]
  | [ FOREIGN KEY ]
        REFERENCES referenced_table_name [ ( ref_column ) ]
        [ ON DELETE { NO ACTION | CASCADE } ]
        [ ON UPDATE { NO ACTION } ]
        [ NOT FOR REPLICATION ]
  | CHECK [ NOT FOR REPLICATION ] ( logical_expression )
    [ ON { partition_scheme_name ( partition_column_name )
        | filegroup | "default" } ]
]
Deciding whether to persist a computed field depends entirely on your database solution and environment. You should take a number of considerations into account, however:
The extra space taken up by the computed field
How frequently users will query the computed field
Whether the computed field will be part of the result set, the SARG, or both
The same considerations of density and selectivity that you would have with an index
How frequently the data that the computed field is based on changes
How volatile the table is in general
For an online transaction processing (OLTP) environment where your data changes frequently, you’ll be less inclined to persist computed fields because of the overhead on the database engine of having to maintain the computed values.
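As a minimal sketch of the syntax, reusing the inventory idea from the assessment test (the table and column names are illustrative):

-- The computed [StockValue] column is stored on disk and recalculated
-- only when [Price] or [StockLevel] changes.
CREATE TABLE [ProductStock] (
    [ProductId] SMALLINT NOT NULL,
    [Price] MONEY NOT NULL,
    [StockLevel] INT NOT NULL,
    [StockValue] AS ([Price] * [StockLevel]) PERSISTED
) ;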
Furthermore, you can create indexes on computed fields to improve performance when searching on the computed field, but you need to meet a number of requirements:
The computed field expression is deterministic and precise.
The computed field expression cannot evaluate to the IMAGE, NTEXT, or TEXT data type.
All functions referenced by the computed field have the same owner as the table.
A number of SET options are met.
For a complete list of requirements, look up the “Creating Indexes on Computed Columns” topic in SQL Server 2005 Books Online.
The <table_constraint> options are similar to the <column_constraint> options discussed previously:

<table_constraint> ::=
[ CONSTRAINT constraint_name ]
{   { PRIMARY KEY | UNIQUE }
        [ CLUSTERED | NONCLUSTERED ]
        (column [ ASC | DESC ] [ ,...n ] )
        [ WITH FILLFACTOR = fillfactor
          | WITH ( <index_option> [ , ...n ] ) ]
        [ ON { partition_scheme_name (partition_column_name)
            | filegroup | "default" } ]
  | FOREIGN KEY ( column [ ,...n ] )
        REFERENCES referenced_table_name [ ( ref_column [ ,...n ] ) ]
        [ ON DELETE { NO ACTION | CASCADE | SET NULL | SET DEFAULT } ]
        [ ON UPDATE { NO ACTION | CASCADE | SET NULL | SET DEFAULT } ]
        [ NOT FOR REPLICATION ]
  | CHECK [ NOT FOR REPLICATION ] ( logical_expression )
}
The <index_option> options allow you to control some fine-tuning mechanisms for indexes that might be implemented during table creation. Generally, you should leave the defaults alone unless you have a specific requirement.

<index_option> ::=
{   PAD_INDEX = { ON | OFF }
  | FILLFACTOR = fillfactor
  | IGNORE_DUP_KEY = { ON | OFF }
  | STATISTICS_NORECOMPUTE = { ON | OFF }
  | ALLOW_ROW_LOCKS = { ON | OFF }
  | ALLOW_PAGE_LOCKS = { ON | OFF }
}
So in this particular example, you would execute the following T-SQL code to create the [Customer] table:

CREATE TABLE [Customer] (
    [CustomerNumber] INT NOT NULL,
    [FirstName] VARCHAR(20) NULL,
    [LastName] VARCHAR(20) NULL,
    [PassportNumber] CHAR(8) NULL,
    [Address] VARCHAR(50) NULL,
    [Region] VARCHAR(20) NULL,
    [PostCode] VARCHAR(10) NULL,
    [Country] VARCHAR(20) NULL
)
Designing Entity Integrity

Simplified, entity (or table) integrity in relational database theory requires that all rows in a table be uniquely identified via an identifier known as a primary key. A primary key can potentially be a single column or a combination of columns.
Sometimes several columns could potentially be the primary key, such as in the case of a [Customer] table that has both a [CustomerNumber] column and a [PassportNumber] column, both of which are guaranteed to have a value and be unique. These columns are referred to as candidate keys.
When deciding on your primary key, you need to determine whether you are going to use a natural key or a surrogate key. A natural key is a candidate key that naturally has a logical relationship with the rest of the attributes in the row. A surrogate key, on the other hand, is an additional key value that is artificially generated by SQL Server, typically implemented as an integer column with the identity property. The advantage of using natural keys is that they already exist, and since they have a logical relationship with the rest of the attributes, indexes defined on them will be used by both user searches and join operations to speed up performance. Obviously, you don’t need to add a new, unnatural column to your entity, which would take up additional space. The main disadvantage of natural keys is that they might change if your business requirements change. For example, if you have a primary key based on the [CustomerNumber] field and it subsequently changes from a numeric to an alphanumeric field, then you will need to change both the data type of the [CustomerNumber] field and all related tables where the [CustomerNumber] field is used as a foreign key. This might not be an easy task for a high-availability SQL Server solution running in a 24/7 environment. Another, potentially important consideration might exist if you have a compound or composite natural key or a very wide key. This can have a dramatic impact on performance if you have chosen to create a clustered index on such a wide natural key, because it will also impact the size of all your nonclustered indexes. Likewise, join operations using a nonclustered index on a wide natural key will not perform as well as a smaller surrogate key. The main advantage of a surrogate key is that it acts as an efficient index, which is created by the developer typically as an integer field, managed by SQL Server through the identity property, and never seen by the user. Therefore, a surrogate key makes a good potential candidate for a clustered index because of its small size. Since the users don’t actually work with the surrogate key, there is no need to cascade update operations either.
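As a minimal sketch of the surrogate key approach (the table name and the identity seed and increment values here are illustrative assumptions), the identity property does the generation for you:
CREATE TABLE [CustomerAccount] (
    -- Surrogate key: small, artificial, generated by SQL Server
    [CustomerAccountId] INT IDENTITY(1,1) NOT NULL,
    -- Natural candidate key kept as an ordinary attribute
    [CustomerNumber]    INT NOT NULL,
    [AccountName]       VARCHAR(50) NULL
) ;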
You will find a number of people who believe that a primary key value, once inserted, should never be modified. Like Fox Mulder, “I want to believe….” In any case, the concept of a surrogate key works well in this case, because users never work with it directly.
You can implement a primary key, or the effect of a primary key, in a number of ways. The techniques in SQL Server include the following:
Using a primary key constraint
Using a unique constraint
Using a unique index
Using an insert and update DML trigger that verifies that the primary key value being modified is unique
All of these will have the same effect, ensuring that entity integrity is maintained by disallowing duplicates in the primary key column or columns. However, you should generally use a primary key constraint. Maintaining entity integrity programmatically through procedural
code in triggers or elsewhere is considered error prone (in other words, you might goof it up!), involves more work, and is slower, since it works at a “higher” level of the SQL Server engine. Constraints are so much easier to work with, and they perform quicker…so use them!
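For completeness, here is a hedged sketch of the unique index technique from the earlier list; it enforces the same uniqueness, but applications and tools cannot infer a primary key from it:
-- Enforces entity integrity without declaring a constraint
CREATE UNIQUE NONCLUSTERED INDEX [UX_Customer(CustomerNumber)]
ON [Customer](CustomerNumber) ;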
You can define a primary key constraint only on a column that does not allow NULLs. If your key column needs to allow a NULL value, you will need to resort to a unique constraint instead, which permits a single NULL.
The other advantage of using a primary key constraint is that it highlights the “importance” of the field on which it’s defined, so many third-party and Microsoft applications will be able to derive the primary key from the primary key constraint, as opposed to relying on naming conventions or indexes. The partial syntax for creating a primary key or unique constraint is as follows:
ALTER TABLE [ database_name . [ schema_name ] . | schema_name . ] table_name
ADD CONSTRAINT constraint_name
PRIMARY KEY | UNIQUE
[ CLUSTERED | NONCLUSTERED ]
(column_name)
[ WITH ( index_options ) ]
So if you decided to create a primary key on the [CustomerNumber] column and a candidate key on the [PassportNumber] column of the [Customer] table, you would execute the following T-SQL code:
ALTER TABLE [Customer] ADD CONSTRAINT [PK_Customer(CustomerNumber)]
PRIMARY KEY (CustomerNumber) ;
GO
ALTER TABLE [Customer] ADD CONSTRAINT [UQ_Customer(PassportNumber)]
UNIQUE (PassportNumber) ;
By default, SQL Server 2005 will create a clustered index for the primary key constraint in the previous T-SQL script. This might not be optimal for your particular table. Generally, I recommend that you don’t rely on such default behavior and always explicitly script all such options.
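For example, if you anticipate clustering the table on another column, you could script the primary key explicitly as nonclustered instead. This is a sketch of the explicit form only, not a recommendation for the [Customer] table specifically:
ALTER TABLE [Customer]
ADD CONSTRAINT [PK_Customer(CustomerNumber)]
PRIMARY KEY NONCLUSTERED (CustomerNumber) ;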
It is a highly recommended practice to have a primary key defined on all tables in a relational database. The main reason, of course, is that it will ensure entity integrity; however, it also makes life a lot easier for developers—and especially for contractors coming onto your site to help sort out the mess the developers have made. Ha! Be aware that there can also be performance issues and other unexpected implications in the future.
Importance of Primary Key Constraints
Just this week in Sydney, I had to analyze a large travel agent database solution that had been developed overseas. There were performance issues, as always…. By examining the sysindexes table, I determined that out of the 700-plus tables in the database, only 60 percent of them had primary keys defined. Of course, the client had been trying to implement transactional replication, which relies upon the primary key being defined. Without a primary key, SQL Server can replicate tables only via a snapshot, which can be particularly slow and expensive. One of my recommendations was to contact the developers and ask them to provide scripts that would create the “missing” primary keys. So in this particular instance, a primary key constraint not only had an impact on entity integrity but also on performance. You should always have a good reason not to implement a primary key constraint. A very, very, very good reason!
Designing Entity Relationships
Relationships…I’ve implemented many. “The fundamental assumption of the relational database model is that all data is represented as mathematical n-ary relations, an n-ary relation being a subset of the Cartesian product of n sets.” What the…??? Simply put, relationships naturally come about as a consequence of the normalization process and the resultant multiple entities. It is critical that you enforce and implement these relationships correctly in any SQL Server database solution. Developers, DBAs, and users are always primarily concerned about performance. Although performance is important, data integrity and especially referential integrity should take precedence.
Maintaining Referential Integrity
Referential integrity ensures that the relationships between the primary keys (referenced table) and foreign keys (referencing table) are always maintained; otherwise, you end up with orphaned records. Consequently, a row in a referenced table cannot be deleted, and a primary key can’t be changed, if a foreign key refers to that row. In some database implementations, orphaned records are acceptable. For example, it might be unimportant who you sold a particular item to, but it is important to keep a record of the transaction for accounting or taxation purposes—think drug or arms dealers, or perhaps a database used to collect web traffic for a website where user session information quickly multiplies. Another important decision point is whether you want SQL Server to automatically cascade update and delete operations. This feature has been available at the database engine level since SQL Server 2000.
A great way to lose referential integrity is to implement referential integrity at the application layer. It just plain doesn’t work—trust me!
IT in ET
In 2001 I was contracted by the United Nations to analyze a civil registry system for a developing nation that had been developed by overseas developers. (Wow! I got developing, developed, and developers in the one sentence. In any case, it was no laughing matter.) The developers had implemented what little referential integrity there was in the application layer. Needless to say, the database had substantial referential integrity loss. This resulted because of a poor application design, lack of database constraints, and no security, which allowed users to modify the data directly. The end result translated to a potential waste of money in excess of $300,000 USD, a system that was “almost useless,” and a mammoth data cleansing undertaking that took me months to perform, with no guarantee of 100 percent success. All of this could easily have been avoided by using a foreign key constraint with cascading actions.
So when deciding on what technique you should use for referential integrity, you have two options at the database engine level in SQL Server 2005: constraints or programmatic objects.
Maintaining Referential Integrity Using Foreign Key Constraints
Since SQL Server 2000, the database engine has had the capability to automatically cascade update and delete operations. Consequently, given the ease of implementation and the performance, you should generally implement referential integrity between two tables through a foreign key constraint.
Another advantage of using a foreign key constraint over other means is that various database modeling and reporting tools can automatically determine the relationships, instead of trying to determine relationships through naming conventions or indexes.
The partial syntax for creating a foreign key constraint is as follows:
ALTER TABLE [ database_name . [ schema_name ] . | schema_name . ] table_name
[ WITH { CHECK | NOCHECK } ]
ADD CONSTRAINT constraint_name
FOREIGN KEY (column_name)
REFERENCES [ schema_name . ] referenced_table_name [ ( ref_column ) ]
[ ON DELETE { NO ACTION | CASCADE | SET NULL | SET DEFAULT } ]
[ ON UPDATE { NO ACTION | CASCADE | SET NULL | SET DEFAULT } ]
If you decided to create a foreign key on the [CustomerNumber] column of the [Orders] table that references the [Customer] table, you would execute the following T-SQL code:
ALTER TABLE [Orders] ADD CONSTRAINT [FK_Orders(CustomerNumber)]
FOREIGN KEY (CustomerNumber)
REFERENCES [Customer](CustomerNumber)
ON UPDATE CASCADE
ON DELETE CASCADE ;
Notice that this cascades both delete and update operations in this particular case. This decision is purely based on your business requirements.
A common mistake I see being made time and time again is a lack of indexes on foreign key columns. Unlike with the primary key constraint, SQL Server 2005 does not create any index on the foreign key column. Considering, as discussed, that join operations can be expensive for the database engine and can run frequently in most SQL Server solutions, it is highly recommended that you index your foreign keys. The real art here is to determine the best type of index to use!
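Continuing the example, a minimal sketch of indexing the foreign key column follows; a plain nonclustered index is assumed here, and the best index type for your workload may differ:
CREATE NONCLUSTERED INDEX [NCI_Orders(CustomerNumber)]
ON [Orders](CustomerNumber) ;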
One major limitation with constraints is that they apply only within a database. So if your referential integrity requirements are between two tables that reside in different databases, you cannot use a foreign key constraint. You will have to implement such a cascading action through a DML trigger. Another major limitation of the foreign key constraint is that it does not support foreign keys with multiple cascade paths and cycles. Likewise, you will most likely have to implement cascading actions in relationships that are potentially cyclic through a DML trigger.
Maintaining Referential Integrity Using Procedural Code
In certain scenarios, as highlighted previously, you will not be able to implement referential integrity through a foreign key constraint. Instead, you will have to take advantage of the programmable objects supported by SQL Server 2005. A DML trigger is ideal for maintaining complex referential integrity requirements that cannot be met by a foreign key constraint. When writing such triggers, your developers should be conscious that the triggers will be firing for all update, insert, and delete operations, so performance is paramount. Make sure they implement the trigger efficiently!
SQL Server 2005 supports a number of triggers, which you will examine in Chapter 2. At this stage, it is sufficient to concentrate on how to maintain referential integrity through a DML after trigger. The partial syntax for creating a DML after trigger is as follows:
CREATE TRIGGER [ schema_name . ]trigger_name
ON { table | view }
[ WITH <dml_trigger_option> [ ,...n ] ]
{ FOR | AFTER | INSTEAD OF }
{ [ INSERT ] [ , ] [ UPDATE ] [ , ] [ DELETE ] }
[ WITH APPEND ]
[ NOT FOR REPLICATION ]
AS { sql_statement [ ; ] [ ,...n ] | EXTERNAL NAME <method specifier> }

<dml_trigger_option> ::=
[ ENCRYPTION ]
[ EXECUTE AS Clause ]
So if you had to cascade a delete operation between the [Customer] table and a related [Orders] table that happens to be in a separate database (called [SalesDB] in this instance), you would write the following T-SQL code:
CREATE TRIGGER [trg_Customer_CascadeDelete] ON [Customer]
AFTER DELETE
AS
DELETE [SalesDB].[dbo].[Orders]
WHERE [CustomerNumber] IN
    (SELECT [CustomerNumber] FROM deleted) ;
GO
Note that the rows removed by the DELETE operation are exposed to the trigger through the deleted pseudotable, not the inserted pseudotable.
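A companion sketch for cascading updates of the key across databases follows. For simplicity it assumes that only one [Customer] row has its [CustomerNumber] changed per UPDATE statement; multirow key updates would need extra logic to match old values to new values:
CREATE TRIGGER [trg_Customer_CascadeUpdate] ON [Customer]
AFTER UPDATE
AS
IF UPDATE(CustomerNumber)
BEGIN
    -- Single-row assumption: one old value in deleted, one new in inserted
    UPDATE [SalesDB].[dbo].[Orders]
    SET [CustomerNumber] = (SELECT [CustomerNumber] FROM inserted)
    WHERE [CustomerNumber] IN (SELECT [CustomerNumber] FROM deleted) ;
END
GO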
Summary
The first technique you learned in this chapter was how to create a logical database design through the process of normalization. You learned the importance of 3NF in your initial database design. I then introduced the benefit of denormalization, which brings redundancy into your logical database. I discussed the trade-offs: denormalized data takes extra space and needs to be maintained. You then examined the issues you face when choosing the appropriate data types for the attributes of your entities. You learned the various data types natively supported by SQL Server 2005 and examined the new XML and CLR user-defined data types in detail.
Next, I discussed domain integrity as well as the different SQL Server objects that help maintain the domain of a particular attribute. You then learned how to physically implement your entities by using the CREATE TABLE statement. A further explanation of how SQL Server stores data rows physically clarified the impact your database design has on minimizing the space wasted on the SQL Server data pages. I then discussed the importance of entity integrity and talked about how to implement it primarily via a primary key constraint. You then learned the importance of referential integrity in a relational database and how to implement it depending on your requirements.
Exam Essentials
Understand 3NF. You should understand the process of normalization and be able to normalize a set of entities to 3NF.
Know how to appropriately use the XML data type. Make sure you know how SQL Server 2005 implements the XML data type and where it is appropriate to use the XML data type in your database design.
Understand the syntax for creating tables. SQL Server 2005 supports a rich number of options when creating tables. Ensure you understand the various options and how to use them.
Know the different ways of implementing domain integrity. You can implement domain integrity using different techniques. Make sure you know which technique is appropriate for the given requirements.
Know the different ways of implementing referential integrity. You can employ a number of mechanisms to achieve referential integrity. Make sure you understand the different mechanisms and when to choose the correct implementation.
Review Questions
1. You are designing the [Product] table for a large warehouse database that will stock more than 100,000 different product IDs, called SKUs. Performance is paramount in this table. You have the possibility of amalgamating with other warehouse databases in the future, so you have decided to implement a surrogate key. The table should keep track of [SKU], [ProductName], and other related fields. What T-SQL commands should you run? (Choose all that apply.)
A. Run: CREATE TABLE [Product] ( SKU VARCHAR(10) NOT NULL, Product VARCHAR(20) NOT NULL, ... )
B. Run: ALTER TABLE [Product] ADD CONSTRAINT [UQ_Product(SKU)] UNIQUE (SKU)
C. Run: ALTER TABLE [Product] ADD CONSTRAINT [PK_Product(ProductId)] PRIMARY KEY (ProductId)
D. Run: CREATE TABLE [Product] ( ProductID INT IDENTITY (1,1), SKU VARCHAR(10) NOT NULL, Product VARCHAR(20) NOT NULL, ... )
E. Run: CREATE TABLE [Product] ( ProductID UNIQUEIDENTIFIER DEFAULT NEWID(), SKU VARCHAR(10) NOT NULL, Product VARCHAR(20) NOT NULL, ... )
F. Run: ALTER TABLE [Product] ADD CONSTRAINT [PK_Product(SKU)] PRIMARY KEY (SKU)
G. Run: ALTER TABLE [Product] ADD CONSTRAINT [UQ_Product(ProductId)] UNIQUE (ProductId)
2. You are designing a database for the electricity market in Australia. The market has a [DispatchInterval] field that will be used as a primary key and that uses the following mask: YYYYMMDD###. The last three digits correspond to five-minute intervals, so there are 288 values a day. What data type should you use?
A. INT
B. BIGINT
C. NUMERIC(11,0)
D. FLOAT
3. You are developing a civil registry database for the Divided Nations nongovernmental organization that needs to keep track of demographic information about the population for the past 200 years. The main table contains millions of rows, so performance is critical. What data type should you use for the [DateOfBirth] field?
A. Use a CHAR(10) data type to store dates using the DD-MM-YYYY format.
B. Use the SMALLDATETIME data type.
C. Use the DATETIME data type.
D. Use a CLR user-defined data type.
4. What objects in SQL Server 2005 can be used to maintain domain integrity? (Choose all that apply.)
A. PRIMARY KEY CONSTRAINT
B. FOREIGN KEY CONSTRAINT
C. CHECK CONSTRAINT
D. Data type
E. DML trigger
F. NONCLUSTERED INDEX
G. Nullability (NULL/NOT NULL)
5. You are designing a Sales database where the tables contain millions of records and there are thousands of users, so performance is paramount. One of the most common reports will be returning Store, Product, and SUM(Quantity). You decide to denormalize the following database schema to eliminate the need for join operations:
Product(ProductId, Product, Price)
Store(StoreId, Store, Address, PostCode, State, Country)
Sales(SalesId, StoreId, SalesDate, Status)
SalesDetail(SalesId, ProductId, Quantity)
What should you do?
A. Add the [Product] column to the [SalesDetail] table.
B. Add the [Store] column to the [Sales] table.
C. Add the [Store] column to the [SalesDetail] table.
D. Add the [Product] column to the [Sales] table.
6. You are designing a database for a university and want to ensure that query performance will be optimal between the following two tables that will be frequently joined:
What should you do to improve query performance?
A. Create an index on the [Student].[StudentNo] column.
B. Create a unique constraint on the [Student].[StudentNo] column.
C. Create an index on the [Grade].[StudentNo] column.
D. Create a unique constraint on the [Grade].[StudentNo] column.
7. You are designing a table for a large multinational weapons manufacturer that needs to store the Coordinated Universal Time (UTC) date and time, down to the millisecond, of when a missile is launched. What is the easiest data type to implement for such a field?
A. Use a CLR user-defined data type.
B. Use the DATETIME data type.
C. Use a T-SQL user-defined data type.
D. Use a BINARY data type.
8. You are designing a large sales database where performance is critical. You know that your sales personnel will be running queries constantly throughout the working hours to find out the year-to-date sales for a particular product. They want the data returned to be up-to-date. You have more than 690,000 products and generate approximately 10,000 invoices daily. Your database design is as follows:
CREATE TABLE [Product] (
    [ProductId] SMALLINT NOT NULL,
    [Product] VARCHAR(20) NOT NULL,
    [Price] MONEY NOT NULL,
    [StockLevel] INT NOT NULL
)
CREATE TABLE [Invoice] (
    [InvoiceNumber] INT NOT NULL,
    [InvoiceDate] DATETIME NOT NULL,
    [CustomerId] INT NOT NULL
)
CREATE TABLE [InvoiceDetails] (
    [InvoiceNumber] INT NOT NULL,
    [ProductId] SMALLINT NOT NULL,
    [SalesQuantity] TINYINT NOT NULL,
    [SalesPrice] MONEY NOT NULL
)
What two actions should you perform?
A. Use ALTER TABLE [Invoice] ADD [YTDSales] INT.
B. Use ALTER TABLE [Product] ADD [YTDSales] INT.
C. Maintain [YTDSales] using a DML trigger.
D. Maintain [YTDSales] using a T-SQL script that is scheduled to run after-hours.
9. You need to decide which data type to use to store legal documents that natively have an XML format. The legal documents will need to be able to be searched. Government regulations stipulate that the legal documents cannot be altered by any party once submitted. What data type should you use?
A. Use a CHAR(8000) data type.
B. Use a TEXT data type.
C. Use a VARCHAR(MAX) data type.
D. Use an XML data type.
10. You need to ensure that you do not lose referential integrity between these two tables that have a one-to-one relationship and that SQL Server automatically cascades update but not delete operations:
CREATE TABLE [Product] (
    [WareHouseId] TINYINT NOT NULL,
    [ProductId] INT NOT NULL,
    [ProductName] VARCHAR(20) NOT NULL,
    [Price] NCHAR(10) NOT NULL,
    [StockLevel] TINYINT NOT NULL
)
GO
ALTER TABLE [Product] ADD CONSTRAINT [PK_Product] PRIMARY KEY (WareHouseId, ProductId)
CREATE TABLE [ProductDetails] (
    [WarehouseId] TINYINT NULL,
    [ProductId] INT NULL,
    [ProductDescription] XML NULL,
    [ProductPhoto] VARBINARY(MAX) NULL
)
GO
What should you execute?
A. Execute: ALTER TABLE [ProductDetails] ADD CONSTRAINT [FK_ProductDetails_Product] FOREIGN KEY (ProductId) REFERENCES Product (ProductId) ON UPDATE CASCADE ON DELETE CASCADE
B. Execute: ALTER TABLE [ProductDetails] ADD CONSTRAINT [FK_ProductDetails_Product] FOREIGN KEY (WarehouseId, ProductId) REFERENCES Product (WareHouseId, ProductId) ON UPDATE CASCADE ON DELETE CASCADE
C. Execute: ALTER TABLE [ProductDetails] ADD CONSTRAINT [FK_ProductDetails_Product] FOREIGN KEY (WarehouseId, ProductId) REFERENCES Product (WareHouseId, ProductId) ON UPDATE CASCADE ON DELETE NO ACTION
D. Execute: ALTER TABLE [ProductDetails] ADD CONSTRAINT [FK_ProductDetails_Product] FOREIGN KEY (ProductId) REFERENCES Product (ProductId) ON UPDATE CASCADE ON DELETE NO ACTION
11. What normal form should you typically aim for when designing the initial relational database design?
A. First normal form (1NF)
B. Second normal form (2NF)
C. Third normal form (3NF)
D. Fourth normal form (4NF)
E. Fifth normal form (5NF)
12. Whenever a programmer is fired from the company and their record deleted from the [Employee] table of the [HumanResources] database, you need to ensure that all related records are deleted from child tables in the [HumanResources] database. What is the quickest way to achieve this?
A. Use a check constraint.
B. Use a rule.
C. Use a DML trigger that will automatically delete the related records.
D. Use a foreign key constraint with the cascade delete option to automatically cascade delete operations.
13. What objects in SQL Server 2005 can be used to maintain entity integrity? (Choose all that apply.)
A. PRIMARY KEY CONSTRAINT
B. FOREIGN KEY CONSTRAINT
C. CHECK CONSTRAINT
D. UNIQUE CONSTRAINT
E. DML trigger
14. You need to decide which data type to use to store technical documents that natively have an XML format. The technical documents will often be searched, so performance is important. The schema of the technical documents is expected to change over time. What data type should you use?
A. Use a VARCHAR(MAX) data type.
B. Use a CHAR(8000) data type.
C. Use a TEXT data type.
D. Use an XML data type.
15. You have designed your logical database model as shown here. What T-SQL statements should you execute to create the tables? (Choose all that apply in the correct order.)
A. Execute: ALTER TABLE [Grade] ADD CONSTRAINT [FK_Grade_Student] FOREIGN KEY(StudentNo) REFERENCES [Student] (StudentNo)
B. Execute: CREATE TABLE [Student] ( [StudentNo] INT NOT NULL, [Name] NCHAR(20) NOT NULL, [Surname] NCHAR(20) NOT NULL, [DOB] SMALLDATETIME NOT NULL, [Address] NCHAR(50) NOT NULL, [State] NCHAR(20) NOT NULL, [PostCode] NCHAR(20) NOT NULL, [Country] NCHAR(20) NOT NULL )
C. Execute: CREATE TABLE [Student] ( [StudentNo] INT NOT NULL, [Name] NCHAR(20) NOT NULL, [Surname] NCHAR(20) NOT NULL, [DOB] SMALLDATETIME NOT NULL, [Address] NCHAR(50) NOT NULL, [State] NCHAR(20) NULL, [PostCode] NCHAR(20) NULL, [Country] NCHAR(20) NOT NULL )
D. Execute: CREATE TABLE [Grade] ( [StudentNo] INT , [SubjectNo] INT , [Session] TINYINT , [Mark] NUMERIC(3, 2), [Grade] CHAR (1) )
E. Execute: ALTER TABLE [Student] ADD CONSTRAINT [UQ_Student] UNIQUE (StudentNo)
F. Execute: ALTER TABLE [Grade] ADD CONSTRAINT [PK_Grade] PRIMARY KEY (StudentNo, SubjectNo)
G. Execute: CREATE TABLE [Grade] ( [StudentNo] INT NOT NULL, [SubjectNo] INT NOT NULL, [Session] TINYINT NOT NULL, [Mark] NUMERIC(3, 2) NULL, [Grade] CHAR (1) NULL )
H. Execute: ALTER TABLE [Grade] ADD CONSTRAINT [PK_Grade] PRIMARY KEY (StudentNo)
I. Execute: ALTER TABLE [Student] ADD CONSTRAINT [PK_Student] PRIMARY KEY (StudentNo)
J. Execute: ALTER TABLE [Grade] ADD CONSTRAINT [PK_Grade] PRIMARY KEY (SubjectNo)
16. You want to ensure that the [Name], [LastName], and [PhoneNumber] fields are implemented in a standard way across all new SQL Server database solutions in your enterprise. What should you do? (Choose two.)
A. Use a T-SQL user-defined data type for the [Name], [LastName], and [PhoneNumber] fields.
B. Create the user-defined types in the tempdb database.
C. Use a CLR user-defined data type for the [Name], [LastName], and [PhoneNumber] fields.
D. Create the user-defined data types in the model database.
17. You are capacity planning your database design and need to determine how much space will be required for the [ProductDescription] table. You predict that the table will contain 6,900,000 products. The table schema is as follows:
CREATE TABLE [ProductDescription] (
    [ProductId] INT,
    [EnglishPrice] SMALLMONEY,
    [EnglishDescription] NCHAR(450),
    [RussianPrice] SMALLMONEY,
    [RussianDescription] NCHAR(450),
    [GermanPrice] SMALLMONEY,
    [GermanDescription] NCHAR(450)
)
How much space will the table consume?
A. 55,200,000KB
B. 27,600,000KB
C. 18,400,000KB
D. 13,800,000KB
18. You are designing a sales database and want to ensure that a salesperson cannot issue an invoice for more products than are currently in stock. Your database design is as follows:
CREATE TABLE [Product] (
    [ProductId] SMALLINT NOT NULL,
    [Product] VARCHAR(20) NOT NULL,
    [Price] MONEY NOT NULL,
    [StockLevel] INT NOT NULL
)
CREATE TABLE [Invoice] (
    [InvoiceNumber] INT NOT NULL,
    [InvoiceDate] DATETIME NOT NULL,
    [CustomerId] INT NOT NULL
)
CREATE TABLE [InvoiceDetails] (
    [InvoiceNumber] INT NOT NULL,
    [ProductId] SMALLINT NOT NULL,
    [SalesQuantity] TINYINT NOT NULL,
    [SalesPrice] MONEY NOT NULL
)
What should you use to ensure that salespeople do not issue an invoice for more products than currently in stock?
A. Use a check constraint.
B. Use a DML trigger.
C. Use a foreign key constraint.
D. Use a primary key constraint.
19. You need to ensure that whenever an employee is deleted from the [Employee] table of the [HumanResources] database, all related records are deleted from the [Marketing] database. What is the quickest way to achieve this?
A. Use a check constraint.
B. Use a DML trigger that will automatically delete the related records.
C. Use a foreign key constraint with the cascade delete option to automatically cascade delete operations.
D. Use a rule.
20. You are helping to design a database for a Swedish bank called Darrenbank. Tobias, the chief information officer (CIO), explains they require a field called [CreditRisk] in the [Customer] table. This [CreditRisk] field is based on a complex calculation that involves 69 fields from the [Customer] table. The [CreditRisk] field will generate a value ranging from 1 to 5. The [Customer] table has more than 200 fields and contains more than 307,000 records. Tobias has indicated that the data changes “infrequently” and that “performance is important.” How do you implement the [CreditRisk] field? (Choose all that apply.)
A. Create an index on the [CreditRisk] field.
B. Create the [CreditRisk] field.
C. Create the [CreditRisk] field as a computed field.
D. Create the [CreditRisk] field as a persisted, computed field.
E. Create a DML trigger to maintain the [CreditRisk] field.
Answers to Review Questions
1. D, C, B. Option D correctly implements an efficient surrogate key. Option C correctly implements the surrogate key, and Option B correctly implements the natural key. Option E is inappropriate because it uses the 16-byte UNIQUEIDENTIFIER, which is generally a poor choice for a primary key because it slows down performance. Option A is incorrect because it does not implement a surrogate key. If the database needs to be amalgamated in the future, it is a simple matter of altering the table to make a compound primary key to facilitate amalgamation.
2. B. The BIGINT data type is the smallest data type that can hold [DispatchInterval]. The INT data type is not large enough. The NUMERIC(11,0) data type consumes a byte more than BIGINT. The FLOAT data type is imprecise and is not inherently a cardinal.
3. C. The DATETIME data type will be able to accommodate your domain. The SMALLDATETIME data type will not accommodate your domain because it does not allow dates before 1900. The CHAR(10) data type will consume two more bytes than the DATETIME data type, which will impact performance. A CLR user-defined data type will not perform as well as a native SQL Server data type in this instance.
4. B, C, D, E, G. Foreign key constraints, check constraints, data types, nullability, and DML triggers all can help maintain domain integrity, or the set of values that are valid for a column. Primary key constraints help maintain entity integrity. Nonclustered indexes generally improve performance.
5. C. By adding the [Store] column to the [SalesDetail] table, you have eliminated the need for the query to access the [Sales] and [Store] tables.
6. C. Creating an index on the [Grade].[StudentNo] column will improve join performance because SQL Server 2005 does not automatically create indexes on foreign keys. Creating an index on the [Student].[StudentNo] field will worsen performance because there is already an index on that field via the primary key constraint, which means more indexes for SQL Server 2005 to maintain. Creating a unique constraint on the [Student].[StudentNo] or [Grade].[StudentNo] column will prevent valid data from being inserted into the [Grade] table.
7. A. None of the SQL Server–supplied data types stores the date, time, and time zone information natively. Although a BINARY data type could in fact potentially store such information, it would be difficult to implement and use. A CLR user-defined data type would easily be able to store this customized data.
8. B, C. You should add the [YTDSales] column to the [Product] table because there will be only one instance of maintaining the denormalized data and no need for a join operation. Although maintaining performance is critical, the sales personnel need the information to be accurate. Performance should not be impacted because you are changing only a small percentage of rows throughout the day on a static table. The DML trigger would have to increment only the existing [YTDSales] value, creating a minimal impact on performance.
9. C. Since the document cannot be changed, you cannot use the XML data type because it does not guarantee that the inserted XML document will be the same as the retrieved XML document. A CHAR(8000) data type takes up too much needless space, and the table will grow beyond SQL Server’s page limit. The TEXT data type is being deprecated and should not be used.
10. C. Option C correctly implements referential integrity using the compound foreign key constraint and cascades the update operation only. Options A and D do not implement the compound foreign key correctly. Option B does not cascade the operations correctly.
11. C. When creating your initial database design, you should typically aim for 3NF.
12. D. You should always use a foreign key constraint over any other means to maintain referential integrity if possible because they are the quickest and easiest to implement. You could use a DML trigger, but that would involve more coding, would be slower, and would require more skills from your developers.
13. A, D, E. You can use a primary key constraint, unique constraint, and DML trigger to enforce uniqueness and therefore entity integrity. Foreign key constraints enforce mainly referential integrity; check constraints can maintain domain integrity.
14. D. The XML data type will give you the most flexibility, allowing the documents to be indexed to improve query performance and facilitate a changing schema. None of the other data types supports any schema natively, nor any indexing within the SQL Server 2005 engine.
15. C, G, I, F, A. Options C, G, I, F, and A correctly implement the database schema. Options B and D do not correctly implement the NULLs. Options H and J do not correctly implement the composite primary key. Option E implements a unique constraint, not a primary key constraint.
16. A, D. T-SQL user-defined data types were designed for implementing such “standards” in multiple databases. By creating them in the model database, you will ensure that all future user databases will have the new user-defined data types. You should use CLR user-defined data types for much more complex requirements. Use the tempdb database only as a temporary global workspace.
17. B. A data row will consume 2,716 bytes (4 + 4 + 900 + 4 + 900 + 4 + 900). A page has 8,092 bytes of free space. Therefore, a page can hold only two rows (8,092 / 2,716 = 2.98). Consequently, the table will consume 3,450,000 pages (6,900,000 / 2) or 27,600,000KB (3,450,000 * 8KB).
18. B. You can write a DML trigger to look up a product’s [StockLevel] from the [Product] table and ensure that the [SalesQuantity] being entered is less than that amount before allowing a record to be inserted into [InvoiceDetails]. A check constraint cannot look up a value in another table. A foreign key constraint can look up a value only in a related table. A primary key constraint maintains only entity integrity.
19. B. You can easily write a DML trigger to maintain cross-database referential integrity. A constraint works only within a database, so a foreign key constraint would not be appropriate in this particular instance.
20. D. Creating a persisted, computed field finds the best balance between improving performance without creating overhead on the system. Options B and E will be slower than creating a persisted computed column. There is no point in creating an index (Option A) because the data is not selective enough, even if it is searched on. Just creating a computed field (Option C) will not improve performance because each time the field is retrieved, a complex calculation will have to be done. Considering the small extra space required (a TINYINT), it is better to persist the field, and Tobias has indicated, in the subtle ways that Swedes do, that the data changes “infrequently.”
Chapter 2
Designing Database Objects
MICROSOFT EXAM OBJECTIVES COVERED IN THIS CHAPTER:
Design a logical database.
Optimize queries by creating indexes.
Design index-to-table-size ratio.
Design a logical database.
Design objects that define data.
Design user-defined data types.
Design tables that use advanced features.
Design indexes.
Specify indexed views to meet business requirements.
Design objects that retrieve data.
Design views.
Design user-defined functions.
Design stored procedures.
Design objects that extend the functionality of a server.
Design scalar user-defined functions to extend the functionality of the server.
Design CLR user-defined aggregates.
Design stored procedures to extend the functionality of the server.
Design objects that perform actions.
Design DML triggers.
Design DDL triggers.
Design WMI triggers.
Design stored procedures to perform actions.
Design data distribution.
Design SQL Server Agent alerts.
After creating the initial database design, you typically start designing the database objects that will reference the database tables. SQL Server 2005 has a rich object model, so as a database architect you should familiarize yourself with the whole gamut of options. Always investigate the new features of the latest version of SQL Server and whether they are appropriate to your database solution. New features are commonly a manifestation of user requests; they might represent new functionality that was requested by experienced database administrators (DBAs) and developers. In other words, Microsoft does not introduce new features for no reason, so keep an open mind. On the other hand, don’t fall into the trap of always choosing the new features when designing your database objects. Evaluate or reevaluate your choices with every new release. This chapter covers the various objects at your disposal when designing the object layer above your database tables.
Designing the Objects That Define Data
Chapter 1 focused on how to create tables and the associated options you have when designing the objects that store data. In the following section, I will finish covering your table design options by covering an exciting new feature of SQL Server 2005: table partitioning. As the size of the data in your database tables grows, performance will degrade because of the amount of data and how long it takes the database engine to search and retrieve the data that the queries are requesting. So, it is appropriate to examine indexes, whose primary purpose is to improve query performance in databases.
Creating Partitioned Tables
Chapter 1 covered a number of advanced table features you can use to enhance the functionality and performance of your tables. To recap, the advanced table features include the following:
Computed fields
Persisted fields
SQL Server 2005 introduces a new feature to improve the performance and manageability of large tables. Database architects now have the additional options of partitioning tables (and partitioning indexes).
Table (and index) partitioning is available only in the SQL Server 2005 Enterprise and Developer Editions.
In SQL Server 2000 you were able to create partitioned views to improve the performance of very large databases (VLDBs). They sort of worked, so the capability of truly partitioning your tables (and indexes) in SQL Server 2005 is a welcome addition. By partitioning data within your tables, you can take advantage of multiple central processing units (CPUs), multiple disk drives, and SQL Server’s concurrent architecture. Partitioning tables allows for easier management of data because you are fundamentally grouping data. Additionally, you can move data between partitions quickly because you are not physically moving data but modifying the metadata used by the partitioned tables.
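As a hedged illustration of that metadata-only movement (the table names here are assumptions, and both tables must have identical schemas on the same filegroup), SQL Server 2005 can switch a partition into an empty table:
-- Moves all rows in partition 1 of [Sales] into [SalesArchive]
-- by updating metadata rather than physically copying data
ALTER TABLE [Sales] SWITCH PARTITION 1 TO [SalesArchive] ;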
Partitioned views built on tables that are local to one server are included in Microsoft SQL Server 2005 for backward compatibility only and are in the process of being deprecated.
When implementing your partition design, you must follow this methodology:
1. Create a partition function that basically defines how the table (or index) will be partitioned. The syntax for creating partition functions is as follows:
CREATE PARTITION FUNCTION partition_function_name ( input_parameter_type )
AS RANGE [ LEFT | RIGHT ]
FOR VALUES ( [ boundary_value [ ,...n ] ] ) [ ; ]
2. Create a partition scheme that basically maps the partitions of the partition function created previously to filegroups. The syntax for creating partition schemes is as follows:
CREATE PARTITION SCHEME partition_scheme_name
AS PARTITION partition_function_name
[ ALL ] TO ( { file_group_name | [ PRIMARY ] } [ ,...n ] ) [ ; ]
3. Create a table (or index) using the partition scheme created previously. The partial syntax for creating the table is as follows:
CREATE TABLE
    [ database_name . [ schema_name ] . | schema_name . ] table_name
    ( { <column_definition> | <computed_column_definition> }
    [ <table_constraint> ] [ ,...n ] )
    [ ON { partition_scheme_name ( partition_column_name )
    | filegroup | "default" } ]
...
You can use the $PARTITION function to retrieve the partition number to where a column value would be mapped.
All the previous steps might not be intuitive, so Exercise 2.1 will demonstrate how to create table partitions.
This exercise will work only with the Enterprise and Developer Editions of SQL Server 2005.
EXERCISE 2.1
Creating Table Partitions
You want to improve performance and gain the administrative benefits of multiple database files and filegroups through table partitions. So, you need to create some new database files and associate them with some filegroups you will use for partitioning.
1. Open SQL Server Management Studio, and connect using Windows Authentication.
2. Click the New Query toolbar button to open a new query window.
3. Type the following Transact-SQL (T-SQL) code, and execute it:
USE AdventureWorks ;
GO
-- Create new filegroups
ALTER DATABASE AdventureWorks ADD FILEGROUP Filegroup1
ALTER DATABASE AdventureWorks ADD FILEGROUP Filegroup2
ALTER DATABASE AdventureWorks ADD FILEGROUP Filegroup3
GO
-- Create new files
ALTER DATABASE AdventureWorks
ADD FILE (
    NAME = File1,
    FILENAME = 'c:\File1.ndf',
    SIZE = 1MB
) TO FILEGROUP Filegroup1
GO
ALTER DATABASE AdventureWorks
ADD FILE (
    NAME = File2,
    FILENAME = 'c:\File2.ndf',
    SIZE = 1MB
) TO FILEGROUP Filegroup2
GO
ALTER DATABASE AdventureWorks
ADD FILE (
    NAME = File3,
    FILENAME = 'c:\File3.ndf',
    SIZE = 1MB
) TO FILEGROUP Filegroup3
GO
4. Next you need to define the partition function that will be used. In this case, say you have decided to partition the data based on the A–H, I–Q, and R–Z ranges. Once you have defined your partition function, you associate the partitions with the underlying filegroups. Type the following T-SQL code, and execute it:
-- Create partition function
CREATE PARTITION FUNCTION PartitionFunction (NVARCHAR(50))
AS RANGE RIGHT FOR VALUES ('I', 'R')
GO
-- Create partition scheme
CREATE PARTITION SCHEME PartitionScheme
AS PARTITION PartitionFunction
TO (Filegroup1, Filegroup2, Filegroup3)
GO
5. Let’s test the table partitions you have created. First you need to create a table that uses the partition scheme you have created. Then you need to insert data into the table. So, type the following T-SQL code, and execute it:
-- Create partitioned table
CREATE TABLE [Customers] (
    CustomerNumber INT IDENTITY(1,1),
    CustomerName NVARCHAR(50),
    -- Partitioning column; its type must match the partition
    -- function's parameter type
    CustomerSurname NVARCHAR(50)
) ON PartitionScheme (CustomerSurname)
GO
-- Insert data
INSERT Customers VALUES ('Larissa', 'Isakov')
INSERT Customers VALUES ('Marc', 'Isakov')
INSERT Customers VALUES ('Paula', 'Verkhivker')
INSERT Customers VALUES ('Tim', 'Haslett')
INSERT Customers VALUES ('Iris', 'Friedrichs')
INSERT Customers VALUES ('Eugene', 'Deefholts')
INSERT Customers VALUES ('Marion', 'Siekierski')
INSERT Customers VALUES ('Victor', 'Yalisheff')
INSERT Customers VALUES ('Olga', 'Kats')
INSERT Customers VALUES ('Natasha', 'Fiodoroff')
INSERT Customers VALUES ('Kevin', 'Dunn')
6. Let’s check out the underlying partition! Type the following T-SQL code, and execute it:
-- Retrieve partition information
SELECT *,
    $PARTITION.PartitionFunction(CustomerSurname) AS [Partition]
FROM Customers
7. To clean up your SQL Server instance, type the following T-SQL code, and execute it:
-- Clean up
DROP TABLE Customers
GO
DROP PARTITION SCHEME PartitionScheme
DROP PARTITION FUNCTION PartitionFunction
GO
ALTER DATABASE AdventureWorks REMOVE FILE File1
ALTER DATABASE AdventureWorks REMOVE FILE File2
ALTER DATABASE AdventureWorks REMOVE FILE File3
ALTER DATABASE AdventureWorks REMOVE FILEGROUP Filegroup1
ALTER DATABASE AdventureWorks REMOVE FILEGROUP Filegroup2
ALTER DATABASE AdventureWorks REMOVE FILEGROUP Filegroup3
GO
Designing Indexes
Another SQL Server book…another chapter on indexes…sigh…. The fundamentals of indexes have not changed since SQL Server 7.0 (since SQL Server 6.0 really, although that can be debated because there have been some minor architectural changes). But you really should focus on the functional side of indexing because you are predominantly concerned about database design. Indexes are fundamentally structures internal to SQL Server, called B-Trees, which improve performance because they allow the database engine to access the data quicker by traversing the B-Tree. It’s as simple as that! It helps to understand what a B-Tree is, so Figure 2.1 shows an index on the LastName field of a [Customers] table. The top of our B-Tree is known as a root (Tobias, I know you’re laughing at that), and the bottom-most level is known as the leaf level.
FIGURE 2.1 SQL Server index
[B-Tree diagram: a root-level index page points to intermediate-level index pages, whose key entries in turn point to the leaf level; the leaf level references the data pages of the [Customers] table, shown with CustomerID, FirstName, LastName, and PhoneNumber columns.]
The disadvantages of indexes are that they consume space and can potentially slow down the performance of your Data Manipulation Language (DML) operations, especially if you create too many of them, because SQL Server has to “maintain” the B-Tree structures. So, designing indexes is about finding a balance between the performance gains and overhead on the system. Ultimately, and this is a common mistake, the most important factors are your users and their data usage patterns. I know you don’t want to hear this, but the point is, you might have created what you think is the world’s best indexing strategy, but it’s based on assumptions about what data you think your users will be accessing. Generally, you should not be overly concerned about designing your indexes when designing your database. You really should be designing your indexing strategy after determining your users’ data usage patterns. In the real world this rarely happens, though; a basic indexing strategy is usually incorporated into the initial database design. The syntax for creating an index is as follows:
CREATE [ UNIQUE ] [ CLUSTERED | NONCLUSTERED ] INDEX index_name
    ON <object> ( column [ ASC | DESC ] [ ,...n ] )
    [ INCLUDE ( column_name [ ,...n ] ) ]
    [ WITH ( <relational_index_option> [ ,...n ] ) ]
    [ ON { partition_scheme_name ( column_name )
         | filegroup_name
         | default } ]
[ ; ]

<object> ::=
{ [ database_name. [ schema_name ] . | schema_name. ] table_or_view_name }

<relational_index_option> ::=
{ PAD_INDEX = { ON | OFF }
| FILLFACTOR = fillfactor
| SORT_IN_TEMPDB = { ON | OFF }
| IGNORE_DUP_KEY = { ON | OFF }
| STATISTICS_NORECOMPUTE = { ON | OFF }
| DROP_EXISTING = { ON | OFF }
| ONLINE = { ON | OFF }
| ALLOW_ROW_LOCKS = { ON | OFF }
| ALLOW_PAGE_LOCKS = { ON | OFF }
| MAXDOP = max_degree_of_parallelism }
You can create an index incorporating multiple columns, but you still have the 16-column limit. This has not changed from previous versions. Why? I don’t know.
For completeness’ sake, the syntax for creating an XML index is as follows:
CREATE [ PRIMARY ] XML INDEX index_name
    ON <object> ( xml_column_name )
    [ USING XML INDEX xml_index_name
        [ FOR { VALUE | PATH | PROPERTY } ] ]
    [ WITH ( <xml_index_option> [ ,...n ] ) ]
[ ; ]

<object> ::=
{ [ database_name. [ schema_name ] . | schema_name. ] table_name }

<xml_index_option> ::=
{ PAD_INDEX = { ON | OFF }
| FILLFACTOR = fillfactor
| SORT_IN_TEMPDB = { ON | OFF }
| STATISTICS_NORECOMPUTE = { ON | OFF }
| DROP_EXISTING = { ON | OFF }
| ALLOW_ROW_LOCKS = { ON | OFF }
| ALLOW_PAGE_LOCKS = { ON | OFF }
| MAXDOP = max_degree_of_parallelism }
The main option to be aware of is the type of index you’re creating, clustered or nonclustered; you will look at this shortly. Realistically, you’ll probably never have to implement this, but you should be familiar with what the FILLFACTOR option does. Basically, the FILLFACTOR option stipulates how full each 8KB page that SQL Server uses to store rows is made when the index is created.
To get a deeper understanding of where to use the FILLFACTOR and page splits, look up the “Fill Factor” topic in SQL Server Books Online.
The reason not to use all the available space in a page while creating an index is to lessen the impact on the performance of future DML operations. So, you would implement a FILLFACTOR setting only for indexes on volatile tables where you anticipate or are experiencing heavy online transaction processing (OLTP) activity and suspect that is the cause of performance problems.
A FILLFACTOR setting of 0 or 100 is identical.
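As a minimal sketch (the 80 percent figure and the index name are illustrative assumptions, not recommendations), a fill factor is specified like this:
-- Leaves roughly 20 percent of each leaf page free to absorb future
-- inserts and updates without immediate page splits
CREATE NONCLUSTERED INDEX [NCI_Customers(LastName)]
ON [Customers](LastName)
WITH (FILLFACTOR = 80) ;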
SQL Server 2005 includes a new option when creating indexes to improve performance. I will discuss this INCLUDE option shortly when you examine nonclustered indexes.
Creating Clustered Indexes
Creating a clustered index on a table has the effect of rearranging the data in your table so that the index order and the physical order are one and the same. Structurally this is because the leaf level of your clustered index is the actual data rows that make up the table.
A table without a clustered index on it is known as a heap.
So, if you create a clustered index based on the [LastName] field of your [Customers] table, your table will be physically stored in order of the customer’s surnames. Alternatively, if you create a clustered index on the [FirstName] field, all the customers will be stored in order of their first names. Simple! Because the clustered index determines the physical order in which the table is stored, you can have only one clustered index per table. After all, the clustered index is the table. Clustered indexes work well when the data is highly selective and when the data is dense.
Think of selectivity as the degree of uniqueness of your data. In other words, the [LastName] field should have a higher selectivity when compared to the [FirstName] field.
Density refers more to the number of duplicates in your data. Therefore, the “Smith” value will be denser than the “Isakov” value in the [LastName] field. Certainly in Australia, perhaps not in Russia.
This functionally means clustered indexes work well for point queries, range queries, join operations (which are really either a point or a range query), or pretty much most queries. The following example shows a clustered index being created on the [LastName] field of the [Customers] table:
CREATE CLUSTERED INDEX [CLI_LastName] ON [Customers](LastName)
Creating Nonclustered Indexes
Nonclustered indexes are the default in SQL Server. They are separate B-Tree structures from the table. Consequently, you can have more than one nonclustered index. In fact, you should be able to create 249 nonclustered indexes on a table (I’ve never tried it).
Don’t create 249 nonclustered indexes on a table!
Nonclustered indexes work well when the data is highly selective, but they have limited value when the data is dense. Therefore, an inappropriate nonclustered index might have all the overhead with no performance benefits. Functionally nonclustered indexes work well in point queries but have limited value for range queries. Why? Well, simply put, the data is not “clustered” together but all over the place, unlike with clustered indexes.
Document Your Indexes
I recall being asked to performance tune/troubleshoot a real-time auctioning system that was experiencing performance problems. It was the usual story and the usual suspects. They had engaged a number of contractors over the past year or so to tune their database solution. To my amusement I discovered that certain key columns in key tables had four or even more nonclustered indexes on them. In other words, different contractors had come in to tune the database solution and, after listening to how the system worked, had decided to create nonclustered indexes on the key fields. The problem was that they had not checked whether those fields were already indexed. SQL Server doesn’t care. It assumes you know what you’re doing. As long as all the nonclustered index names within a database are unique, it will quite happily create more nonclustered indexes on the same column or combination of columns.
The following example shows a nonclustered index being created on the [FirstName] field of the [Customers] table:
CREATE NONCLUSTERED INDEX [NCI_FirstName] ON [Customers](FirstName)
You also have the capability of creating nonclustered indexes on multiple columns. They are commonly referred to as composite or compound indexes. There are two primary reasons for doing this. The first is to facilitate queries that have search arguments (SARGs) based on multiple columns from the same table. The second is to reduce the overall number of indexes that SQL Server 2005 has to maintain. Instead of creating separate nonclustered indexes on the [LastName] and [FirstName] fields because they are frequently searched, you should consider creating a composite index on the [LastName, FirstName] combination. The following example shows a nonclustered index being created on multiple fields of the [Customers] table:
CREATE NONCLUSTERED INDEX [NCI_CompoundIndex] ON [Customers](LastName, FirstName)
When creating nonclustered indexes, remember that the more columns you add, the less likely the index will be used because it is getting too wide. So, keep the index-to-table-size ratio in mind. Of course there are exceptions, such as covering indexes, and you will examine them shortly.
SQL Server 2005 has a new index option that allows you to improve nonclustered indexes by including nonkey columns in the index Data Definition Language (DDL). The columns included by the INCLUDE clause are stored only in the leaf levels of the index and consequently are not subject to the 16-column limit. This allows for the creation of larger, covering indexes. These are the other restrictions:
You must define one key column.
You can define a maximum of 1,023 columns.
You cannot repeat included columns in the INCLUDE list.
You cannot define columns in both the nonclustered index and the INCLUDE list.
The following example shows a nonclustered index being created on the [Country] field of the [Customers] table that includes three more fields:

CREATE NONCLUSTERED INDEX [NCI_Country]
ON [Customers](Country)
INCLUDE (Address, Region, Postcode)
Did I mention covering indexes earlier? They represent the real art of indexing in SQL Server. The idea behind a covering index is to create an index that can cover important queries, thereby avoiding the need for the queries to go to the underlying table.
Covering Indexes

To illustrate the concept of covering indexes, let's assume you have some sort of a [Customers] table that has a large number of columns, say, more than 500, that contain demographical and other statistical information. The partial table definition is as follows:

CREATE TABLE [Customers] (
    [CustomerID]     INT         NOT NULL,
    [Name]           VARCHAR(20) NULL,
    [SurName]        VARCHAR(20) NULL,
    [PhoneNumber]    VARCHAR(20) NULL,
    [FaxNumber]      VARCHAR(20) NULL,
    [PassportNumber] CHAR(8)     NULL,
    [Address]        VARCHAR(50) NULL,
    [Region]         VARCHAR(20) NULL,
    [PostCode]       VARCHAR(10) NULL,
    [Country]        VARCHAR(20) NULL,
    [Sex]            VARCHAR(20) NULL,
    [MarriageStatus] VARCHAR(20) NULL,
    [Salary]         VARCHAR(20) NULL,
    [Dogs]           VARCHAR(20) NULL,
    [Cats]           VARCHAR(20) NULL,
    [Platypus]       VARCHAR(20) NULL,
    [Kids]           VARCHAR(20) NULL,
    ...
    CONSTRAINT [PK_CustomerNumber] PRIMARY KEY CLUSTERED ([CustomerID])
)

This table has more than 20,000,000 records, so as you would expect, queries against this table are going to be slow. Call center personnel frequently want to call up a particular customer, so they run the following queries:

SELECT * FROM [Customers] WHERE CustomerID = @ID
SELECT * FROM [Customers] WHERE SurName = @SurName
SELECT * FROM [Customers] WHERE SurName = @SurName AND Name = @Name

You want to improve performance, so you could create two separate indexes for the [SurName] and [Name] fields. To reduce the number of indexes, a better choice might be to create a single composite index on the [SurName] and [Name] fields that also carries the [PhoneNumber] field at the leaf level:

CREATE INDEX [NCI_SurName_Name]
ON [Customers](SurName, Name)
INCLUDE (PhoneNumber)

This will most likely improve performance for the original set of queries. But you can substantially improve performance further by reworking the queries to the following:

SELECT CustomerID, Name, SurName, PhoneNumber
FROM [Customers] WHERE CustomerID = @ID

SELECT CustomerID, Name, SurName, PhoneNumber
FROM [Customers] WHERE SurName = @SurName

SELECT CustomerID, Name, SurName, PhoneNumber
FROM [Customers] WHERE SurName = @SurName AND Name = @Name

Why? Well, you have now created a covering index! In other words, all of the "unknowns" requested by the query, in this case the telephone number, are located within the B-Tree of the nonclustered index (and the [CustomerID] clustering key travels with every nonclustered index entry). SQL Server does not need to go through the additional step of traversing the table. Considering the underlying size of the table, you have substantially improved the performance because your queries can be serviced by a much smaller B-Tree. And it's all about service!
Designing Objects That Retrieve Data and Extend Functionality

One of the key decisions when designing your database security will be choosing at what layer you want to control data access. This decision determines the database objects that need to be designed and created to provide access to your data. Generally, you do not want your users to access the base tables directly, so you need to create a layer of database objects above the base tables for your users to access instead. When designing this data access layer for your database solution, you'll generally resort to the following objects:
Entities You can take advantage of explicitly created views and table-valued user-defined functions (UDFs) to allow relational DML access. This allows you to control security in the least restrictive fashion because your users will be able to connect to these entities using their favorite applications, such as Microsoft Excel and Microsoft Access.

Procedures Objects such as stored procedures and user-defined functions are more restrictive than entities, but they give you finer control over what happens in your database and what data users can access.

Most database solutions use a combination of both entities and procedures to create this data access layer above the base tables. But you could, as an example for a more secure database solution, allow people to insert, update, delete, and retrieve data only through stored procedures.
Several tools are available that create this data access layer of objects above the base tables. One that is proving to be popular, for obvious reasons, is CodeSmith, which you can find at http://www.codesmithtools.com.
Now, you will not always want to adopt this particular strategy of creating a data access layer because it depends on the complexity of the database solution. For smaller database solutions, the investment in time and effort probably isn’t worth your while. However, this strategy has a number of benefits in more complex database solutions:
It is easier to implement and manage your database security because it involves only one layer of management.
It is easier to predict the resultant effective security because you don’t have to worry about what’s going on at the base table layer.
An additional benefit is that it allows you to modify the underlying table structures with a minimal impact on the higher-level objects.
The Benefits of a Data Access Layer

In 2004–2005 I was involved in a major project for a major financial institution in Sydney, refactoring a VLDB that was being used for trading energy in the Australian market. The database contained more than 300 tables, not to mention the various views and stored procedures above those tables. This particular database solution was a perfect candidate for a data access layer because it effectively created an interface, or a layer of abstraction, between the procedural objects (such as stored procedures and user-defined functions) and the data on which those objects depend. So by creating a data layer through views, it allowed the underlying table structure to be modified (or for partitioning to be implemented) whilst minimizing the recoding of the high-level objects. Take time in your initial database design to implement a data access layer because you will save a lot more time over the life of the database solution.
In the following sections, you'll examine the various database objects at your disposal. As you go through these database objects, you'll see a number of options in their DDL syntax. A number of DDL options exist, but two of them are applicable to virtually all objects:

ENCRYPTION The ENCRYPTION option encrypts the DDL code used to create the database objects.

SCHEMABINDING The SCHEMABINDING option binds the object to the schema definitions of the objects it references. This has the effect of preventing the referenced objects from being dropped or altered unless the schema-bound object is dropped first.
Don't forget to respecify these options when you change a database object's DDL in subsequent ALTER statements; otherwise, they will be lost.
Exploring Views

Views are virtual tables that are supported by virtually all relational database systems. What you are basically doing with views is wrapping a T-SQL query into an entity. Typically views don't have any performance overhead, so use them! Views have not substantially changed since the dimmest history of SQL Server's past, although SQL Server 2000 introduced the ability to materialize them. You will look at indexed views shortly. These are some reasons for taking advantage of views:

Implement the data access layer As discussed previously, views are great for implementing part of your data access layer. You can create a layer of abstraction from the tables for the information workers to access through views. These views can have friendlier entity and column names, so information workers will be happier. Plus, developers will have more scope to change the underlying tables with less impact on the information workers.

Hide complexity Your information workers are not necessarily adept at writing T-SQL code. And they don't necessarily need to be. Additionally, you might have a complex database design. Views can be a means of hiding that complexity from the information workers because they can join tables, generate aggregate summary data, or link to data in other databases or even servers. So, information workers can use them in canned reports.

Offer security Chapter 4 will cover security in more detail, but it is common practice when designing security to allow database users to access the tables through views only.

Views would not be too useful if they were not updateable. And they wouldn't support Codd's sixth rule! However, views do have some restrictions:
You can update only one of the tables that make up the underlying view at a time. Therefore, if your view references multiple tables and you issue a DML statement that manipulates columns in more than one of those tables, your DML statement will fail and an error will be generated.
A way to get around this is to implement an INSTEAD OF trigger on the view that “rewrites” the single DML operation into multiple DML statements.
Updates to certain columns of views will not be allowed, as you would expect. Examples of these columns include the following:
Computed/calculated columns
Aggregated columns
Columns affected by the GROUP BY, HAVING, and DISTINCT clauses

If you attempt to insert a record into a view that in turn attempts to perform a partial insert into the table where the unreferenced columns don't have default values defined or NULLs allowed, the DML operation will fail.
Other restrictions exist, so use your common sense. The syntax for creating views has not changed substantially since the SQL Server 6.0 days, as you can see:

CREATE VIEW [ schema_name . ] view_name [ (column [ ,...n ] ) ]
[ WITH <view_attribute> [ ,...n ] ]
AS select_statement [ ; ]
[ WITH CHECK OPTION ]

<view_attribute> ::=
{
    [ ENCRYPTION ]
    [ SCHEMABINDING ]
    [ VIEW_METADATA ]
}
The WITH CHECK OPTION stipulates that all DML operations executed against the view must conform to the criteria set within the view's SELECT statement. That query also has some restrictions. The SELECT statement cannot have the following:
A reference to a temporary table or table variable
The INTO keyword
The OPTION clause
The ORDER BY clause (unless the TOP clause is also used)
Be careful with using the ORDER BY clause within view definitions; in fact, I would argue against using it altogether. I was quite excited when I discovered you can use TOP 100 PERCENT in view definitions to overcome the limitation of the ORDER BY clause. But since then, my excitement has abated, especially since the behavior has changed in SQL Server 2005 and there is no longer a guaranteed sort order. (Check out the public SQL Server newsgroups.) Use the ORDER BY clause outside the view definition!
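For example, rather than baking the sort into the view, a query against a hypothetical [viw_CustomerList] view would apply it at query time, where the ordering is guaranteed:

SELECT LastName, FirstName
FROM [viw_CustomerList]
ORDER BY LastName, FirstName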
In Exercise 2.2, you'll create and work with views.

EXERCISE 2.2

Creating and Working with Views

You want to create a view that represents a new price list. Company policy dictates that all the objects that directly access the tables must be encrypted so as to not reveal the schema to the higher-level objects.
1. Open SQL Server Management Studio, and connect using Windows Authentication.

2. Click the New Query toolbar button to open a new query window, and enter the following:

USE AdventureWorks ;
GO
CREATE VIEW [viw_ProductPriceList]
WITH ENCRYPTION
AS
SELECT ProductId, Name, ProductNumber, ListPrice
FROM Production.Product
GO

3. Let's see whether you can see the DDL code that makes up the view. Type the following T-SQL code, and execute it:

sp_helptext [viw_ProductPriceList]
GO

You should see the following message: "The text for object 'viw_ProductPriceList' is encrypted."
4. You should test the view, of course, to make sure it returns the expected result set. Type the following T-SQL code, and execute it:

SELECT * FROM [viw_ProductPriceList]
GO

5. Management now requires a new view to be created so they can analyze the impact of a 25 percent increase in price. Type the following T-SQL code, and execute it:

CREATE VIEW [viw_ProductNewPriceList]
AS
SELECT ProductId, Name, ProductNumber,
       (ListPrice * 1.25) AS NewListPrice
FROM Production.Product
GO

6. So again, you should check the view. Type the following T-SQL code, and execute it:

SELECT * FROM [viw_ProductNewPriceList]

7. During the analysis, management notices that a number of products have no price. So they decide to update the products with no prices to a minimum price of $50. Type the following T-SQL code, and execute it:

UPDATE [viw_ProductNewPriceList]
SET NewListPrice = $50
WHERE NewListPrice = $0

Because the [NewListPrice] field is a calculated field, the update operation will fail. You should see the following error message:

Msg 4406, Level 16, State 1, Line 2
Update or insert of view or function 'viw_ProductNewPriceList' failed because it contains a derived or constant field.

8. You could in theory write an INSTEAD OF trigger that would rewrite the UPDATE statement, if you decided that was appropriate.

9. Don't delete the views you've created because you'll use them in the next exercise.
Designing Indexed Views

Ahhh…indexed views! As a database architect, this is perhaps my favorite feature (together with partitioning). You have examined the benefits of both views and indexes. In this section, you will look at combining the two. The problem with views is that SQL Server needs to regenerate the result set whenever views are queried. This can be an expensive operation, particularly where calculations on large data sets are involved. A common requirement in modern database engines is the ability to materialize views. As with girlfriends, indexed views are very, very fussy, so there is a lot of setup work. But once you've implemented them, you'll be well pleased! So get those flowers. Oops! I mean make sure you understand the requirements of indexed views.
Actually, Kim Wimpsett, the copyeditor for this tome, has informed me during the review that “Victor, girls prefer diamonds these days. Tell everyone.” Crikey! I’d better raise my contract rates. Shows how much I know about women. Hope I know a bit more about SQL Server. Thanks for the insight into the wiley female mind. (Pun intended.)
You must be asking whether they’re worth your while. Definitely! Indexed views are probably really designed for VLDB and enterprise environments, and that’s where they come into their own. They are particularly useful for joins and aggregations that process large data sets because they effectively materialize the aggregations themselves. However, remember the caveat about volatile OLTP systems where an indexed view can do more harm than good because of SQL Server having to maintain the indexed view on the fly.
To clarify some common misconceptions, indexed views are available in all editions of SQL Server 2005. However, SQL Server 2005 Enterprise Edition (and Developer Edition) can automatically use them; you need to provide the NOEXPAND optimizer hint with all other editions.
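As a sketch, on the other editions a query would need to reference the indexed view explicitly and supply the hint (using the indexed view you will build in Exercise 2.3 as an example):

SELECT ProductId, NewListPrice
FROM [viw_ProductNewPriceList] WITH (NOEXPAND)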
SQL Server 2005 is less restrictive than SQL Server 2000 as far as what columns can participate in an indexed view. You can now include the following:

Scalar aggregates You can include scalar aggregates, including SUM and COUNT_BIG without GROUP BY.

Scalar expressions and UDFs You can include scalar expressions and user-defined functions.

Persisted imprecise columns You can include persisted computed columns that are based on a FLOAT or REAL.

CLR data types You can include common language runtime (CLR) user-defined type columns, or expressions based on those columns, provided that the columns or expressions are deterministic and precise, persisted, or both.
CLR user-defined aggregates cannot be used in an indexed view.
When implementing an indexed view, you generally have to do the following:

1. Ensure the correct options have been set for creating the indexed view.

2. Verify the view definition is deterministic.

3. Create the indexed view.

4. Ensure the correct options have been set for creating the unique clustered index.

5. Create the unique clustered index on the view.

6. Create any other nonclustered indexes if required.
When creating the view that is subsequently going to be materialized, keep in mind the following considerations. The following SET options must be ON:
ANSI_NULLS
QUOTED_IDENTIFIER
The view must be created with the WITH SCHEMABINDING option.
All tables and user-defined functions referenced in the view must be referenced using their two-part name.
The view cannot reference other views.
All base tables referenced by the view must exist in the same database.
To get around this limitation, you might be able to implement indexed views in each separate database and then combine the resulting indexed views to get at least some of the performance benefits.
All base tables referenced by the view must have the same owner as the view.
All UDFs referenced in the view must have been created with the SCHEMABINDING option.
All UDFs referenced in the view must be deterministic.
CLR UDFs can appear only in the SELECT statement.
CLR UDFs cannot reference the clustered index key.

The SELECT statement used in the view cannot have the following:
* syntax to indicate all columns
A table column name repeated in the SELECT list
An expression on a column used in the GROUP BY clause
An expression on the results of an aggregate
A derived table
A common table expression (CTE)
Rowset functions
UNION, EXCEPT, and INTERSECT operators
Again, to get around the UNION limitation, you might be able to get away with creating multiple indexed views and then unioning them together.
Subqueries
Outer or self joins
The TOP clause
The ORDER BY clause
The DISTINCT keyword
The AVG, COUNT, MAX, MIN, STDEV, STDEVP, VAR, and VARP aggregate functions
COUNT_BIG(*) is allowed.
A SUM function that references a nullable expression
A CLR user-defined aggregate function
The full-text predicates CONTAINS or FREETEXT
The COMPUTE or COMPUTE BY clause
The CROSS APPLY or OUTER APPLY operators
Join or table hints
For GROUP BY clauses, the SELECT list must use the COUNT_BIG(*) expression.
When creating the clustered index that materializes the view, the following SET options must be ON:
ANSI_NULLS
ANSI_PADDING
ANSI_WARNINGS
CONCAT_NULL_YIELDS_NULL
QUOTED_IDENTIFIER
NUMERIC_ROUNDABORT must be OFF (which is the default).
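As a minimal sketch, a script that materializes a view might therefore start with the required session settings before issuing the CREATE INDEX statement:

SET ANSI_NULLS ON
SET ANSI_PADDING ON
SET ANSI_WARNINGS ON
SET CONCAT_NULL_YIELDS_NULL ON
SET QUOTED_IDENTIFIER ON
SET NUMERIC_ROUNDABORT OFF
GO
-- The unique clustered index that materializes the view goes here,
-- for example: CREATE UNIQUE CLUSTERED INDEX ... ON view_name(key_column)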
I know this seems like a lot to remember. But so was your girlfriend's name initially and all of her friends' names. But as I said, in the end it is worth the pain, no? Now let's move on to Exercise 2.3 and create some indexes on those views you created earlier.

EXERCISE 2.3

Creating Indexed Views

(This exercise depends on Exercise 2.2, so please ensure you have completed it.)

Management has received complaints that the performance is poor when querying [viw_ProductNewPriceList]. Through analysis you have determined that the problem stems from the fact that most SARGs are based on the [NewListPrice] field. Consequently, you have decided to materialize this calculated field in the view through an index. The first step you will have to take is to create a unique clustered index.
1. Click the New Query toolbar button to open a new query window, and enter this:

CREATE UNIQUE CLUSTERED INDEX CI
ON [viw_ProductNewPriceList](ProductId)

You should get the following error message:

Msg 1939, Level 16, State 1, Line 1
Cannot create index on view 'viw_ProductNewPriceList' because the view is not schema bound.

2. You need to correct the [viw_ProductNewPriceList] view, so type the following T-SQL code, and execute it:

ALTER VIEW [viw_ProductNewPriceList]
WITH SCHEMABINDING
AS
SELECT ProductId, Name, ProductNumber,
       (ListPrice * 1.25) AS NewListPrice
FROM Production.Product
GO
3. Now you can create the clustered index and the nonclustered index, so type the following T-SQL code, and execute it:

CREATE UNIQUE CLUSTERED INDEX CI
ON [viw_ProductNewPriceList](ProductId)
GO
CREATE NONCLUSTERED INDEX NCI
ON [viw_ProductNewPriceList](NewListPrice)
GO

4. Great! You have created an indexed view. Users should experience faster performance when querying the view on the [NewListPrice] column. Type the following T-SQL code and execute it to clean up your SQL Server instance:

-- Clean up
DROP VIEW [viw_ProductNewPriceList]
GO
DROP VIEW [viw_ProductPriceList]
GO
Designing T-SQL User-Defined Functions

T-SQL UDFs were a welcome addition in SQL Server 2000 because they allow developers to go beyond the built-in system functions by creating their own functions. They also allow developers to create parameterized views, which help reduce the amount of development work required because you potentially have fewer objects to create. User-defined functions are primarily designed to return data, so they should not be making any changes to the database outside the scope of the function. Sure, you can perform operations on local cursors, modify data within locally created table variables, and perform other types of actions, as long as the scope is local. As with most things in life, this has exceptions; in this case, you can also execute extended stored procedures.
Don't forget the other restriction: you can use only deterministic functions within a user-defined function. Remember my comment earlier about exceptions? Well, with SQL Server 2005, you can use the following nondeterministic built-in system functions:
CURRENT_TIMESTAMP
GET_TRANSMISSION_STATUS
GETDATE
GETUTCDATE
@@CONNECTIONS
@@CPU_BUSY
@@DBTS
@@IDLE
@@IO_BUSY
@@MAX_CONNECTIONS
@@PACK_RECEIVED
@@PACK_SENT
@@PACKET_ERRORS
@@TIMETICKS
@@TOTAL_ERRORS
@@TOTAL_READ
@@TOTAL_WRITE
Understanding Deterministic and Nondeterministic Functions It is important to understand determinism because it establishes where you can use a function. Functions can be one of the following:
Strictly deterministic: A function is considered strictly deterministic if it always returns the same results for a given set of inputs.
Deterministic: A function is considered deterministic if it always returns the same results for a given set of inputs and database state.
Nondeterministic: A function is considered nondeterministic if it can return different results for a given set of inputs and database state.
The DATEADD() function is an example of a deterministic function, and the GETDATE() function obviously is nondeterministic. You can use deterministic functions only in indexed computed columns, indexed views, persisted computed columns, and user-defined functions.
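A quick way to see this rule in action is with persisted computed columns. The following is a throwaway sketch: the DATEADD()-based column can be persisted, whereas a GETDATE()-based column could not be:

CREATE TABLE [tbl_DeterminismDemo] (
    OrderDate DATETIME NOT NULL,
    -- Allowed: DATEADD() is deterministic
    DueDate AS DATEADD(dd, 7, OrderDate) PERSISTED
    -- Not allowed: a column such as
    --   Age AS DATEDIFF(yy, OrderDate, GETDATE()) PERSISTED
    -- would fail because GETDATE() is nondeterministic
)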
For the built-in functions of SQL Server 2005, determinism and strict determinism are equivalent.
SQL Server 2005 supports three types of user-defined functions:

Scalar functions Scalar-valued functions (SVFs) return a single value.

Table-valued functions Table-valued functions (TVFs) return a relational result set. They come in two forms: inline and multistatement.

You'll now go through the syntax and see some examples of these three types of T-SQL user-defined functions.
Scalar Functions Scalar-valued functions are basically T-SQL functions that have a single RETURN statement that returns (funnily enough) a single value. Scalar UDFs were a blessing when they came out because they allowed database developers to enrich the programming model by creating their own business-related functions. They could extend the functionality of SQL Server by developing their own set of functions. Developers typically have a standard library of UDFs that are potentially used across database solutions within the enterprise. These UDFs perform typical “calculations” such as calculating the number of working days between two dates, formatting strings, and so forth.
Don’t forget that if you create your UDFs in the model system database, this automatically creates them in all subsequent new databases.
The syntax for creating scalar functions is as follows:

CREATE FUNCTION [ schema_name. ] function_name
( [ { @parameter_name [ AS ][ type_schema_name. ] parameter_data_type
    [ = default ] } [ ,...n ] ] )
RETURNS return_data_type
    [ WITH <function_option> [ ,...n ] ]
    [ AS ]
    BEGIN
        function_body
        RETURN scalar_expression
    END
[ ; ]

<function_option> ::=
{
    [ ENCRYPTION ]
  | [ SCHEMABINDING ]
  | [ RETURNS NULL ON NULL INPUT | CALLED ON NULL INPUT ]
  | [ EXECUTE_AS_Clause ]
}
The following example shows a function that strips the time component from the DATETIME data type:

CREATE FUNCTION [dbo].[udf_Date] (@Date DATETIME)
RETURNS DATETIME
AS
BEGIN
    RETURN CONVERT(DATETIME,
                   CONVERT(CHAR(8),
                          (DATEPART(yy,@Date) * 10000) +
                          (DATEPART(mm,@Date) * 100) +
                           DATEPART(dd,@Date)))
END
GO
You can use this UDF in any T-SQL code wherever a valid expression is allowed, including a DEFAULT CONSTRAINT definition of a column. The following example shows the DDL for a table where the previous [udf_Date] UDF is used:

CREATE TABLE [Customers] (
    [CustomerNumber] INT           NOT NULL,
    [FirstName]      VARCHAR(20)   NULL,
    [LastName]       VARCHAR(20)   NULL,
    [PassportNumber] CHAR(8)       NULL,
    [Address]        VARCHAR(50)   NULL,
    [Region]         VARCHAR(20)   NULL,
    [PostCode]       VARCHAR(10)   NULL,
    [Country]        VARCHAR(20)   NULL,
    [DateAdded]      SMALLDATETIME NOT NULL DEFAULT dbo.udf_Date(GETDATE())
)
In this example, the date when the customer record was inserted will be automatically added.
Inline Table-Valued Functions

Inline table-valued functions return a TABLE data type through a single SELECT statement; consequently, you don't need a BEGIN ... END block. The syntax for creating inline table-valued functions is as follows:

CREATE FUNCTION [ schema_name. ] function_name
( [ { @parameter_name [ AS ] [ type_schema_name. ] parameter_data_type
    [ = default ] } [ ,...n ] ] )
RETURNS TABLE
    [ WITH <function_option> [ ,...n ] ]
    [ AS ]
    RETURN [ ( ] select_stmt [ ) ]
[ ; ]

<function_option> ::=
{
    [ ENCRYPTION ]
  | [ SCHEMABINDING ]
  | [ RETURNS NULL ON NULL INPUT | CALLED ON NULL INPUT ]
  | [ EXECUTE_AS_Clause ]
}
The following example shows an inline table-valued function that returns the stores, the cities they are located in, and their postal codes, given a particular region:

USE AdventureWorks ;
GO
CREATE FUNCTION [Sales].[udf_StoresInRegion]
    ( @Region nvarchar(50) )
RETURNS table
AS
RETURN
(
    SELECT DISTINCT S.Name AS Store, A.City, A.PostalCode
    FROM Sales.Store AS S
    JOIN Sales.CustomerAddress AS CA ON CA.CustomerID = S.CustomerID
    JOIN Person.Address AS A ON A.AddressID = CA.AddressID
    JOIN Person.StateProvince AS SP ON SP.StateProvinceID = A.StateProvinceID
    WHERE SP.Name = @Region
)
GO
What you have created in the previous TVF is a parameterized view. If you want to generate a list of all stores in the New South Wales region, you can execute the following:

SELECT * FROM Sales.udf_StoresInRegion('New South Wales')

For the Queensland region, you can use the following query:

SELECT * FROM Sales.udf_StoresInRegion('Queensland')
Traditionally, the only way to achieve this in earlier versions of SQL Server would have been to create multiple views, one for each region! Sure, you could create a single parameterized stored procedure that accepts the region as an input parameter, but you wouldn't have the flexibility of being able to select from the object directly, perform joins on it, and so forth.
Multistatement Table-Valued Functions

A multistatement table-valued function also returns a TABLE data type but requires a BEGIN ... END block to contain the series of T-SQL statements that ultimately returns the result set. The syntax for creating multistatement table-valued functions is as follows:

CREATE FUNCTION [ schema_name. ] function_name
( [ { @parameter_name [ AS ] [ type_schema_name. ] parameter_data_type
    [ = default ] } [ ,...n ] ] )
RETURNS @return_variable TABLE < table_type_definition >
    [ WITH <function_option> [ ,...n ] ]
    [ AS ]
    BEGIN
        function_body
        RETURN
    END
[ ; ]

<function_option> ::=
{
    [ ENCRYPTION ]
  | [ SCHEMABINDING ]
  | [ RETURNS NULL ON NULL INPUT | CALLED ON NULL INPUT ]
  | [ EXECUTE_AS_Clause ]
}
Multistatement table-valued functions offer the most functionality because you have the ability to use multiple T-SQL statements within them, so you can take advantage of local cursors and table variables to manipulate data before generating a result set via the RETURN statement.
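The following is a minimal sketch of a multistatement table-valued function (the staging logic is purely illustrative) that first seeds its table variable and then prunes it before returning the result set:

CREATE FUNCTION [dbo].[udf_CurrentProductsOver]
    ( @MinPrice MONEY )
RETURNS @Result TABLE (
    ProductID INT          NOT NULL,
    Name      NVARCHAR(50) NOT NULL,
    ListPrice MONEY        NOT NULL
)
AS
BEGIN
    -- Stage 1: seed the table variable
    INSERT @Result (ProductID, Name, ListPrice)
    SELECT ProductID, Name, ListPrice
    FROM Production.Product
    WHERE ListPrice >= @MinPrice ;

    -- Stage 2: manipulate the interim result set, for example
    -- removing products that are no longer sold
    DELETE FROM @Result
    WHERE ProductID IN (SELECT ProductID
                        FROM Production.Product
                        WHERE SellEndDate IS NOT NULL) ;

    RETURN
END
GO

You would query it just like the inline TVF shown earlier: SELECT * FROM dbo.udf_CurrentProductsOver($1000).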
Designing CLR User-Defined Functions

The introduction of the CLR in SQL Server 2005 allows you to create user-defined functions in any supported .NET programming language. You can use them to retrieve data, but they can be particularly powerful at extending the functionality of your SQL Server 2005 database solution. Creating CLR user-defined functions in SQL Server 2005 involves the following process:

1. Define the function as a static method of a class in a language supported by the .NET Framework.

2. Register the assembly in SQL Server 2005.

Assemblies are DLL files used in an instance of SQL Server to deploy functions, stored procedures, triggers, user-defined aggregates, and user-defined types that are written in managed code instead of T-SQL.

3. Create the function that references the assembly registered previously.
You use the CREATE ASSEMBLY statement for registering assemblies. The syntax for registering assemblies is as follows:

CREATE ASSEMBLY assembly_name
[ AUTHORIZATION owner_name ]
FROM { <client_assembly_specifier> | <assembly_bits> [ ,...n ] }
[ WITH PERMISSION_SET = { SAFE | EXTERNAL_ACCESS | UNSAFE } ]
[ ; ]
<client_assembly_specifier> ::=
    '[\\computer_name\]share_name\[path\]manifest_file_name'
  | '[local_path\]manifest_file_name'

<assembly_bits> ::= { varbinary_literal | varbinary_expression }
The most important factor when designing your database solution is to determine the appropriate permission set for the assembly. You will learn about the permission set options when you look at security and controlling the execution context of SQL Server 2005 modules in Chapter 4. You use the CREATE FUNCTION statement for actually creating the UDF. The syntax for creating CLR functions is as follows:

CREATE FUNCTION [ schema_name. ] function_name
( { @parameter_name [AS] [ type_schema_name. ] parameter_data_type
    [ = default ] } [ ,...n ] )
RETURNS { return_data_type | TABLE <clr_table_type_definition> }
    [ WITH <clr_function_option> [ ,...n ] ]
    [ AS ] EXTERNAL NAME <method_specifier>
[ ; ]

<method_specifier> ::= assembly_name.class_name.method_name

<clr_table_type_definition> ::= ( { column_name data_type } [ ,...n ] )

<clr_function_option> ::=
{
    [ RETURNS NULL ON NULL INPUT | CALLED ON NULL INPUT ]
  | [ EXECUTE_AS_Clause ]
}
As with T-SQL UDFs, CLR UDFs can be SVFs or TVFs. Got that?
The obvious question is, should you write user-defined functions in T-SQL or managed code? The answer is, as with most things in life, it depends. For the purposes of this discussion, we’ll take the programmer’s knowledge out of the equation. Little difference exists, relatively speaking, between SQL Server 2005’s implementation of T-SQL SVFs and CLR SVFs. So, your choice of language probably depends on what the SVF does. If your SVF needs to work “closer” with the data, use T-SQL. Otherwise, if it involves more calculations, string manipulations, and other computations that need a rich programming language and associated functions (think Regex), the CLR is the way to go.
You might be wondering about whether this has any performance implications. The short answer is, it depends. Predicting performance is difficult because it depends on your environment and what you are trying to do. Use your common sense. Sorry, I know that’s not what you wanted to hear….
TVFs are another matter. T-SQL TVFs materialize the result set through an intermediate table. Consequently, they support constraints and unique indexes, which can be extremely useful for manipulating large result sets. CLR TVFs are not materialized because they use a streaming mechanism. So, keep that bit of trivia in mind to impress the rest of the SQL gurus at the “local”!
Designing CLR User-Defined Aggregates

Another new feature of SQL Server 2005 that has been requested over the years is the ability to create custom aggregate functions. The native aggregate functions of SQL Server, such as SUM, AVG, MIN, and MAX, are fairly limiting. CLR user-defined aggregates, on the other hand, represent a powerful way of extending SQL Server 2005's aggregation capabilities. With the integration of the CLR, SQL Server 2005 allows you to create custom aggregate functions, so it is really up to your business requirements. You might want an aggregate function that takes into account your company holidays and weekends. Or you might want to aggregate strings into a comma-delimited list for reporting purposes. Creating CLR user-defined aggregates is virtually identical to the process of creating CLR user-defined functions. The syntax for creating CLR user-defined aggregates is as follows:

CREATE AGGREGATE [ schema_name . ] aggregate_name
    (@param_name <input_sqltype>)
RETURNS <return_sqltype>
EXTERNAL NAME assembly_name [ .class_name ]

<input_sqltype> ::=
    system_scalar_type | { [ udt_schema_name. ] udt_type_name }
<return_sqltype> ::=
    system_scalar_type | { [ udt_schema_name. ] udt_type_name }
You call CLR user-defined aggregate functions similarly to your T-SQL user-defined functions: by using the two-part name.
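For instance, assuming you had registered a hypothetical [dbo].[ConcatList] aggregate that builds a comma-delimited list of strings, a query might look like this:

SELECT TerritoryID,
       dbo.ConcatList(AccountNumber) AS Accounts -- hypothetical CLR aggregate
FROM Sales.Customer
GROUP BY TerritoryID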
For a simple example of a CLR user-defined aggregate, refer to the “Invoking CLR User-Defined Aggregate Functions” topic in SQL Server 2005 Books Online.
Designing Stored Procedures

Stored procedures are programming routines that can pretty much do anything, including return a tabular result set, invoke any of the DML and DDL statements available, and return output parameters and messages to the client. You cannot use them in scalar expressions as you can the user-defined functions you saw earlier. It's important to realize that the benefits of using stored procedures go beyond encapsulation and reuse of code. Without getting too involved in the details, using stored procedures in preference to ad hoc T-SQL code has performance benefits, primarily because of the way SQL Server can cache stored procedures in memory for reuse by subsequent callers. Another benefit of implementing stored procedures, which you'll examine in more detail in Chapter 4, is security. Stored procedures allow the database architect to strictly control what actions can be performed within a database. This can prove to be useful in web-based database solutions and in helping protect against SQL injection attacks. SQL Server 2005 supports several types of stored procedures:

T-SQL stored procedures T-SQL stored procedures are user-defined routines written using the T-SQL language.

System stored procedures System stored procedures are stored procedures, written by Microsoft, that come with SQL Server and are used to perform administrative tasks and query the database engine. The majority of them reside in the master system database. They typically have an sp_ prefix.
A great way to learn more about SQL Server’s internals and see examples of good (and bad) T-SQL programming is to examine the source code of the system stored procedures. You can use the sp_helptext system stored procedure to view the source code.
CLR stored procedures CLR stored procedures are routines written in any number of .NET languages that leverage the new CLR capabilities of SQL Server 2005.
Extended stored procedures Extended stored procedures are basically wrappers for externally called DLLs that run in SQL Server's address space. They typically have an xp_ prefix.
Extended stored procedures are being deprecated and will be removed in future versions of SQL Server. Avoid implementing extended stored procedures, and use CLR stored procedures instead.
The syntax for creating stored procedures is as follows:

CREATE { PROC | PROCEDURE } [schema_name.] procedure_name [ ; number ]
    [ { @parameter [ type_schema_name. ] data_type }
      [ VARYING ] [ = default ] [ OUT [ PUT ] ] ] [ ,...n ]
[ WITH <procedure_option> [ ,...n ] ]
[ FOR REPLICATION ]
AS { <sql_statement> [;][ ...n ] | <method_specifier> }
[;]

<procedure_option> ::=
    [ ENCRYPTION ]
    [ RECOMPILE ]
    [ EXECUTE_AS_Clause ]

<sql_statement> ::= { [ BEGIN ] statements [ END ] }

<method_specifier> ::= EXTERNAL NAME assembly_name.class_name.method_name
The option to watch out for in the DDL statement is the RECOMPILE option. As mentioned, one of the benefits of using stored procedures is that SQL Server 2005 caches the execution plans of stored procedures in a section of memory commonly referred to as the procedure cache. This translates to a performance benefit because subsequent callers to the same stored procedures can access these cached versions. The problem, however, is that in certain circumstances the cached version might not be optimal for the current caller. You have a number of techniques at your disposal for forcing explicit recompilation, as sketched after this list:
You can use the WITH RECOMPILE option in the DDL statement used to create the procedure, which will effectively ensure that the procedure is never cached (which is not technically correct, but don’t worry about that).
You can use the WITH RECOMPILE option when calling the stored procedure.
You can use the sp_recompile system stored procedure, which causes stored procedures and triggers to be recompiled when they are next executed.
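The following sketches illustrate each technique against a hypothetical [usp_GetOrders] procedure:

-- 1. At creation time: prevent plan reuse for every execution
CREATE PROCEDURE [usp_GetOrders] @CustomerID INT
WITH RECOMPILE
AS
SELECT * FROM Sales.SalesOrderHeader WHERE CustomerID = @CustomerID
GO

-- 2. At call time: recompile just this execution
EXEC [usp_GetOrders] @CustomerID = 1 WITH RECOMPILE

-- 3. Flag the procedure so it recompiles on its next execution
EXEC sp_recompile 'usp_GetOrders'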
For more information about the different techniques you can use to force the recompilation of stored procedures, see the “Recompiling Stored Procedures” topic in SQL Server 2005 Books Online.
Instead of showing an example of a stored procedure, Exercise 2.4 will show how to view and potentially modify an existing stored procedure.

EXERCISE 2.4

Modifying a Stored Procedure's DDL

You can easily alter an existing stored procedure in a database using SQL Server Management Studio.

1. Open SQL Server Management Studio, and connect using Windows Authentication.

2. In Object Explorer, expand Server > Databases > AdventureWorks > Programmability > Stored Procedures.

3. Right-click dbo.uspGetBillOfMaterials, and select Modify. You should see the DDL statement that was used to create the user-defined stored procedure.

4. If you wanted to change the stored procedure's DDL, you would modify the T-SQL code and execute it. In this case, however, you will just examine the DDL code, so close the query window when you're done looking.
Designing Objects That Perform Actions

Database solutions involve more than data retrieval. Invariably you also need to perform various actions. When designing these database objects, you have a number of types to use:

Service Broker applications The Service Broker component is a new technology in SQL Server 2005 that you can use to build service-oriented architecture–based applications. You will look at it in more detail in Chapter 7.

SQL Server Agent Although the SQL Server Agent is primarily designed for "administrative purposes" such as backups, you can leverage its capabilities as a database architect to perform actions as well. I will cover the SQL Server Agent's capabilities at the end of this chapter.

Stored procedures As you have seen, stored procedures are code modules that are explicitly executed.

Triggers As you will see, triggers are code modules that are implicitly executed.

In this section, you will concentrate more on designing triggers. Triggers are close cousins to stored procedures, so the implementation is similar. The difference is in how you invoke them. Stored procedures are explicitly invoked; SQL Server automatically invokes triggers as a result of an action occurring on a particular table. The major benefits of triggers are that they cannot be bypassed and they are transparent to the user. So, connected users are not even aware of these actions being run automatically in the background. Consequently, you can use triggers for the following:

Enforcing business rules Use them for complex business rules that cannot be enforced through nonprocedural means.

Enforcing data integrity In certain cases, you need to use triggers to enforce data integrity.
As you saw in Chapter 1, constraints apply only within a database. So, if you have any cross-database data integrity requirements, you need to use triggers.
Enforcing referential integrity For complex referential integrity (for example, with cyclic relationships), you need to resort to triggers.

Auditing Your business requires an audit trail or a notification of an event occurring.

Using the Recycle Bin Deleted data will be moved to a "recycle bin" table.

Heaps, heaps more... There are many more examples of where triggers can be used to achieve some functional requirement. Ultimately, because they are implemented through procedural code, virtually anything can be done. But remember to take into account their performance impact on DML operations.

SQL Server 2005 now supports two types of triggers:

DML triggers DML triggers fire whenever a DML operation occurs on a table.

DDL triggers DDL triggers fire whenever a DDL operation occurs in a database or on the server.
You can nest both DML and DDL triggers. Nesting basically means a trigger that modifies another table can spawn another trigger. This obviously brings up the potential problem of creating an infinite loop. Well, by default triggers are not recursive, but in any case there is a finite limit to the nesting level, which has not changed since the SQL Server 6.0 days.
You can nest DML and DDL triggers up to 32 levels.
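If you are worried about approaching that limit, a defensive sketch inside a trigger body might check the current depth with the TRIGGER_NESTLEVEL() function and bail out early (the threshold here is arbitrary):

IF TRIGGER_NESTLEVEL() > 5
    RETURN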
In some complex VLDB environments that I have worked on, there was a requirement to nest beyond the level limit supported by SQL Server. But I was able to overcome this limitation by redesigning the trigger architecture. And you can apply some other techniques. Nested triggers are turned on by default. You can turn this feature off, but be cautious because it applies globally to the entire SQL Server instance:

EXEC sp_configure 'nested triggers', 0
RECONFIGURE
You can disable both DML and DDL triggers, which can be a common requirement in database solutions. The syntax for disabling a trigger is as follows:

DISABLE TRIGGER { [ schema . ] trigger_name [ ,...n ] | ALL }
ON { object_name | DATABASE | ALL SERVER } [ ; ]
To disable all DDL triggers at the server level, you can execute the following:

DISABLE TRIGGER ALL ON ALL SERVER
GO
You can query the sys.triggers and sys.server_triggers catalog views to find out more information about the triggers that exist in your database solution and their states.
You’ll now look at the syntax and design considerations of DML and DDL triggers.
Designing DML Triggers DML triggers fire whenever an insert, update, or delete operation occurs on a table or view. You can define multiple triggers on the same table. However, remember that you can only partially control the execution order (you’ll learn more about this later). You can also define a trigger to fire multiple actions. DML triggers are versatile.
Implement only what needs to be done in triggers. Don’t forget that your trigger(s) will fire every time that DML operation occurs, which can potentially impact performance.
The syntax for creating DML triggers is as follows:

CREATE TRIGGER [ schema_name . ]trigger_name
ON { table | view }
[ WITH <dml_trigger_option> [ ,...n ] ]
{ FOR | AFTER | INSTEAD OF }
{ [ INSERT ] [ , ] [ UPDATE ] [ , ] [ DELETE ] }
[ WITH APPEND ]
[ NOT FOR REPLICATION ]
AS { sql_statement [ ; ] [ ...n ] | EXTERNAL NAME <method_specifier> }

<dml_trigger_option> ::=
    [ ENCRYPTION ]
    [ EXECUTE AS Clause ]

<method_specifier> ::= assembly_name.class_name.method_name
Determining the Order of Execution When designing your database solution, it is critical to understand the order of execution of triggers within the confines of the database. Figure 2.2 shows the order in which triggers are fired in relation to other events in a DML operation. You need to consider this execution order when designing your trigger architecture. Another important consideration is the order in which multiple triggers fire for a given action. You can’t really “fully” control the order in which triggers fire, in which case you might be better off recoding multiple triggers into a single, longer, sequential trigger.
You can use the sp_settriggerorder system stored procedure to set the first and last trigger.
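As a sketch, assuming two hypothetical update triggers on the same table, you could nominate which one fires first:

EXEC sp_settriggerorder
    @triggername = 'trg_AuditCustomer', -- hypothetical trigger
    @order = 'First',
    @stmttype = 'UPDATE'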
Creating AFTER DML Triggers

AFTER DML triggers have always been available with SQL Server. They are the default DML trigger type if there is no explicit declaration in the DDL. By default, DML triggers within a database cannot call themselves recursively. To turn on recursive triggers, you need to enable them at the database level using the following syntax:

ALTER DATABASE database_name SET RECURSIVE_TRIGGERS ON
AFTER DML triggers are your standard triggers that you use to perform the majority of your database actions. Don’t forget that these actions can pretty much do whatever you want within the databases, within other databases on the same SQL Server 2005 instance, and potentially within databases on other SQL Server instances.
FIGURE 2.2 Trigger execution order

[The figure is a flowchart: a transaction starts; any INSTEAD OF trigger fires before the DML operation (and can roll back the transaction or spawn nested triggers); constraints are then checked, with a violation rolling back the transaction; the AFTER trigger fires after the DML operation (and can also roll back); otherwise, the transaction is committed.]
The following example shows a delete trigger that automatically "moves" the deleted data into a table, thereby allowing the DBA to "undelete" data if required. (The CREATE SCHEMA statement is included here because AdventureWorks doesn't ship with an [Audit] schema; with it, the example runs as is.)

USE AdventureWorks ;
GO
-- Create a schema to hold the audit objects
CREATE SCHEMA [Audit] AUTHORIZATION dbo
GO
-- Create a table to store deleted customers
CREATE TABLE [Audit].[tbl_DeletedCustomer] (
    [CustomerID]    INT              NOT NULL,
    [TerritoryID]   INT              NULL,
    [AccountNumber] VARCHAR(10)      NOT NULL,
    [CustomerType]  NCHAR(1)         NOT NULL,
    [rowguid]       UNIQUEIDENTIFIER NOT NULL,
    [ModifiedDate]  DATETIME         NOT NULL,
    -- Extra auditing columns
    [HostName]      VARCHAR(50)      NOT NULL,
    [UserName]      VARCHAR(50)      NOT NULL,
    [DateDeletion]  SMALLDATETIME    NOT NULL
)
GO

-- Create "recycle bin" trigger
CREATE TRIGGER [trg_DeletedCustomer]
ON [Sales].[Customer]
FOR DELETE
AS
INSERT [Audit].[tbl_DeletedCustomer] (
    CustomerId, TerritoryId, AccountNumber, CustomerType,
    Rowguid, ModifiedDate, HostName, UserName, DateDeletion
)
SELECT CustomerId, TerritoryId, AccountNumber, CustomerType,
       Rowguid, ModifiedDate,
       HOST_NAME(),   -- Computer name
       SUSER_SNAME(), -- Login name
       GETDATE()      -- Date of deletion
FROM deleted
GO
INSTEAD OF DML Triggers SQL Server 2000 introduced INSTEAD OF DML triggers. They allow you to effectively rewrite your DML operations! This is because, unlike AFTER DML triggers, they execute before the actual DML operation. The other important difference is that INSTEAD OF DML triggers can be bound to views. This opens a world of opportunities. In SQL Server 2000, INSTEAD OF triggers represented a more flexible technique of implementing partitioned views, but you can use them for many other reasons to overcome view restrictions and other purposes. Exercise 2.5 will demonstrate the power of these types of triggers.
EXERCISE 2.5

Working with INSTEAD OF Triggers

You'll look at a simple example (a proof of concept, really) of where you might want to use an INSTEAD OF trigger.

1. Open SQL Server Management Studio, and connect using Windows Authentication.

2. Click the New Query toolbar button to open a new query window.

3. Type the following T-SQL code, and execute it:

USE tempdb ;
GO
-- Create underlying table
CREATE TABLE [tbl_Products] (
    ProductNumber INT         NOT NULL IDENTITY (1,1),
    Product       VARCHAR(50) NOT NULL,
    Price         MONEY       NULL
)
GO

-- Insert some data
INSERT [tbl_Products] VALUES ('Muffin', $69)
INSERT [tbl_Products] VALUES ('Pink Silly Cow', NULL)
INSERT [tbl_Products] VALUES ('Cherry Cheesecake', $150)
GO

-- Create view which calculates the GST price
CREATE VIEW [viw_Products]
AS
SELECT ProductNumber,
       Product,
       Price,
       (Price * 1.1) AS GSTPrice -- Goods & Services Tax
FROM [tbl_Products]
GO

-- Test the view
SELECT * FROM [viw_Products]
GO
4. Try to modify the [Product] column of the view. You should have no problems; type the following T-SQL code, and execute it:

UPDATE [viw_Products]
SET Product = 'Strawberry Cheesecake'
WHERE ProductNumber = 3

5. However, let's see what happens when you try to modify the [GSTPrice] field. Type the following T-SQL code, and execute it:

UPDATE [viw_Products]
SET GSTPrice = $110
WHERE ProductNumber = 2

As expected, you get the following error message because you cannot modify a derived/calculated field:

Msg 4406, Level 16, State 1, Line 2
Update or insert of view or function 'viw_Products' failed because it contains a derived or constant field.

6. So, a potential solution in this case (but not necessarily a good one) is to rewrite the DML operation on the fly. You can do this via an INSTEAD OF DML trigger. Type the following T-SQL code, and execute it:

CREATE TRIGGER [trg_ReUpdate]
ON [viw_Products]
INSTEAD OF UPDATE
AS
SET NOCOUNT ON
IF UPDATE(GSTPrice)
    UPDATE p -- Tibor will hate this ;o)
    SET Price = ((i.GSTPrice * 10) / 11)
    FROM tbl_Products AS p
    JOIN inserted AS i ON p.ProductNumber = i.ProductNumber
GO
7. Let's see what happens now if you try to update the [GSTPrice] field. Type the following T-SQL code, and execute it:

UPDATE viw_Products
SET GSTPrice = $110
WHERE ProductNumber = 2

8. You do not get the error message! You have modified the record successfully, so look at the view now. Type the following T-SQL code, and execute it:

SELECT * FROM [viw_Products]
GO
As you can see, you have in effect really modified the [Price] column, not the calculated [GSTPrice] column. Now, this is not necessarily a good idea, but the exercise showed how you can use INSTEAD OF DML triggers.
Designing DDL Triggers DDL triggers are a new addition in SQL Server 2005. Basically, DDL triggers fire whenever certain CREATE, ALTER, DROP, GRANT, DENY, REVOKE, and UPDATE STATISTICS statements execute.
For a more complete understanding of DDL triggers (and in fact WMI triggers as well, which I will be covering next), you should read about event notifications, which I will cover in Chapter 4.
Obviously, DDL triggers are geared more toward DBAs and security administrators. You will be looking at related security considerations in more detail in Chapter 4. The syntax for creating a DDL trigger is as follows:

CREATE TRIGGER trigger_name
ON { ALL SERVER | DATABASE }
[ WITH <ddl_trigger_option> [ ,...n ] ]
{ FOR | AFTER } { event_type | event_group } [ ,...n ]
AS { sql_statement [ ; ] [ ...n ] | EXTERNAL NAME <method_specifier> }
[ ; ]

<ddl_trigger_option> ::=
    [ ENCRYPTION ]
    [ EXECUTE AS Clause ]

<method_specifier> ::= assembly_name.class_name.method_name
Unlike DML triggers, DDL triggers have two separate scopes at which they can fire: Server scope These events occur at the SQL Server instance level. Database scope These events occur at each individual database instance level.
When working with DDL triggers, it is important to identify the correct scope at which you should be working.
You can create DDL triggers that fire whenever certain DDL operations occur at the server level. The following list shows the trigger events at this server level:
DDL_SERVER_LEVEL_EVENTS: CREATE_DATABASE, ALTER_DATABASE, DROP_DATABASE
DDL_ENDPOINT_EVENTS: CREATE_ENDPOINT, ALTER_ENDPOINT, DROP_ENDPOINT
DDL_SERVER_SECURITY_EVENTS
DDL_AUTHORIZATION_SERVER_EVENTS: ALTER_AUTHORIZATION_SERVER
DDL_GDR_SERVER_EVENTS: GRANT_SERVER, DENY_SERVER, REVOKE_SERVER
DDL_LOGIN_EVENTS: CREATE_LOGIN, ALTER_LOGIN, DROP_LOGIN
You can also define DDL triggers at the database scope. The following list shows the events against which you can define DDL triggers:
DDL_DATABASE_LEVEL_EVENTS
DDL_ASSEMBLY_EVENTS: CREATE_ASSEMBLY, ALTER_ASSEMBLY, DROP_ASSEMBLY
DDL_DATABASE_SECURITY_EVENTS
DDL_APPLICATION_ROLE_EVENTS: CREATE_APPLICATION_ROLE, ALTER_APPLICATION_ROLE, DROP_APPLICATION_ROLE
DDL_AUTHORIZATION_DATABASE_EVENTS: ALTER_AUTHORIZATION_DATABASE
DDL_CERTIFICATE_EVENTS: CREATE_CERTIFICATE, ALTER_CERTIFICATE, DROP_CERTIFICATE
DDL_GDR_DATABASE_EVENTS: GRANT_DATABASE, DENY_DATABASE, REVOKE_DATABASE
DDL_ROLE_EVENTS: CREATE_ROLE, ALTER_ROLE, DROP_ROLE
DDL_SCHEMA_EVENTS: CREATE_SCHEMA, ALTER_SCHEMA, DROP_SCHEMA
DDL_USER_EVENTS: CREATE_USER, DROP_USER, ALTER_USER
DDL_EVENT_NOTIFICATION_EVENTS: CREATE_EVENT_NOTIFICATION, DROP_EVENT_NOTIFICATION
DDL_FUNCTION_EVENTS: CREATE_FUNCTION, ALTER_FUNCTION, DROP_FUNCTION

DDL_PARTITION_EVENTS
DDL_PARTITION_FUNCTION_EVENTS: CREATE_PARTITION_FUNCTION, ALTER_PARTITION_FUNCTION, DROP_PARTITION_FUNCTION
DDL_PARTITION_SCHEME_EVENTS: CREATE_PARTITION_SCHEME, ALTER_PARTITION_SCHEME, DROP_PARTITION_SCHEME
DDL_PROCEDURE_EVENTS: CREATE_PROCEDURE, DROP_PROCEDURE, ALTER_PROCEDURE

DDL_SSB_EVENTS
DDL_CONTRACT_EVENTS: CREATE_CONTRACT, DROP_CONTRACT
DDL_MESSAGE_TYPE_EVENTS: CREATE_MSGTYPE, ALTER_MSGTYPE, DROP_MSGTYPE
DDL_QUEUE_EVENTS: CREATE_QUEUE, ALTER_QUEUE, DROP_QUEUE
DDL_SERVICE_EVENTS: CREATE_SERVICE, DROP_SERVICE, ALTER_SERVICE
DDL_REMOTE_SERVICE_BINDING_EVENTS: CREATE_REMOTE_SERVICE_BINDING, ALTER_REMOTE_SERVICE_BINDING, DROP_REMOTE_SERVICE_BINDING
DDL_ROUTE_EVENTS: CREATE_ROUTE, DROP_ROUTE, ALTER_ROUTE
DDL_SYNONYM_EVENTS: CREATE_SYNONYM, DROP_SYNONYM
DDL_TABLE_VIEW_EVENTS
DDL_INDEX_EVENTS: CREATE_INDEX, DROP_INDEX, ALTER_INDEX, CREATE_XML_INDEX
DDL_STATISTICS_EVENTS: CREATE_STATISTICS, UPDATE_STATISTICS, DROP_STATISTICS
DDL_TABLE_EVENTS: CREATE_TABLE, ALTER_TABLE, DROP_TABLE
DDL_VIEW_EVENTS: CREATE_VIEW, ALTER_VIEW, DROP_VIEW
DDL_TRIGGER_EVENTS: CREATE_TRIGGER, DROP_TRIGGER, ALTER_TRIGGER
DDL_TYPE_EVENTS: CREATE_TYPE, DROP_TYPE
DDL_XML_SCHEMA_COLLECTION_EVENTS: CREATE_XML_SCHEMA_COLLECTION, ALTER_XML_SCHEMA_COLLECTION, DROP_XML_SCHEMA_COLLECTION
You can’t use all DDL events in DDL triggers because some events are intended for asynchronous, nontransacted statements only. A good example is the CREATE DATABASE event, which cannot be used in a DDL trigger. In these cases, you have to use event notifications instead. As indicated, I will cover event notifications in more detail in Chapter 4.
Within the body of the trigger, you have access to a structure called EVENTDATA(). You can interrogate this to extract information about the DDL activity that just occurred.
A common question with DDL triggers is, what does the EVENTDATA() contain? The answer, as with most things, is an unsatisfactory, it depends. The structure is actually represented as Extensible Markup Language (XML) because it contains different elements depending on the scope being covered and the particular event being processed.
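As a minimal sketch of how you might interrogate this structure, consider the following database-scoped trigger (the trigger name is hypothetical, and while the elements shown are common to most DDL events, the exact elements available vary by event):

CREATE TRIGGER [trg_AuditDDL]
ON DATABASE
FOR DDL_DATABASE_LEVEL_EVENTS
AS
BEGIN
    -- EVENTDATA() returns an XML document describing the event
    DECLARE @data XML
    SET @data = EVENTDATA()
    -- Shred the document with XQuery
    PRINT @data.value('(/EVENT_INSTANCE/EventType)[1]', 'NVARCHAR(100)')
    PRINT @data.value('(/EVENT_INSTANCE/LoginName)[1]', 'NVARCHAR(100)')
    PRINT @data.value('(/EVENT_INSTANCE/TSQLCommand/CommandText)[1]',
                      'NVARCHAR(MAX)')
END
GO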
Implementing DDL triggers is a relatively straightforward process, as shown in Exercise 2.6.

EXERCISE 2.6
Working with DDL Triggers

Once you have deployed your database solution to your production environment, you do not want developers to be able to create tables in the database solution—no matter what. Consequently, you have decided to implement a DDL trigger to prevent the CREATE TABLE statement within the database.
1. Open SQL Server Management Studio, and connect using Windows Authentication.
2. Click the New Query toolbar button to open a new query window.
3. Type the following T-SQL code, and execute it:

USE AdventureWorks ;
GO
-- Create DDL trigger
CREATE TRIGGER [trg_UnauthorizedDDL]
ON DATABASE
FOR CREATE_TABLE
AS
RAISERROR ('*** Unauthorized DDL Operation. ***', 16, 1)
ROLLBACK TRANSACTION
GO
4. It's time to test the DDL trigger. Try to create a table within the database. Type the following T-SQL code, and execute it:

CREATE TABLE [tbl_Futility] (
    VeryLongColumnName INT,
    EvenLongerColumnName BIGINT
)
GO

Success! You should see the following error message:

Msg 50000, Level 16, State 1, Procedure trg_UnauthorizedDDL, Line 4
*** Unauthorized DDL Operation. ***
Msg 3609, Level 16, State 2, Line 1
The transaction ended in the trigger. The batch has been aborted.
5. To clean up your SQL Server instance, you will need to execute the following code (unless you are sitting in front of a mate's computer and want to be nasty, in which case you can leave the trigger in place):

-- Clean up
DROP TRIGGER [trg_UnauthorizedDDL] ON DATABASE
GO
Designing WMI Triggers

If you want to impress your mates at the local pub, ask them what the WMI triggers in SQL Server 2005 are. Chances are they will say there is no such thing, and you can win a few beers off them—unless they have read this book, of course. WMI triggers are really Windows Management Instrumentation (WMI) event alerts that are raised when a particular SQL Server–related event occurs that is being monitored by the WMI provider for server events, which in turn is monitored by the SQL Server Agent. Figure 2.3 shows this "convoluted" architecture, which involves quite a number of SQL Server components. As you can see, the SQL Server Agent acts as a WMI management application, issuing WQL queries to the WMI interface and responding accordingly.

FIGURE 2.3 WMI alert architecture
Understanding the Windows Management Instrumentation (WMI)

The WMI is Microsoft's Web-Based Enterprise Management (WBEM)–compliant implementation of the Common Information Model (CIM) initiative developed by the Distributed Management Task Force (DMTF). I used to love teaching this in my SMS 2.0 courses (although the WMI first appeared in the Windows NT 4.0 Option Pack and Windows NT 4.0 Service Pack 4). The WMI basically provides uniform support for systems and applications management. A key feature of the WMI, and fundamentally what I am talking about here, is its ability to notify a management component when a particular event occurs, such as a hardware or software event (or error). In this case, you are obviously more interested in the SQL Server namespace. For more information about the WMI, its background, and its purpose, I recommend reading http://msdn.microsoft.com/library/default.asp?url=/library/en-us/dnwmi/html/wmioverview.asp.
WQL stands for WMI Query Language, which is really a simplified version of the SQL language you know and love. It additionally has some WMI-specific extensions, as you would expect. If you are familiar with Microsoft’s System Management Server product, you should be extremely comfortable with the WMI and with WQL.
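As an illustration, the following WQL query would restrict an alert to CREATE TABLE events occurring in a single database (the database name is just an example; DatabaseName is one of the event properties exposed by the WMI provider for server events):

SELECT * FROM CREATE_TABLE
WHERE DatabaseName = 'AdventureWorks'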
The WMI provider for server events manages a WMI namespace for each instance of SQL Server 2005. The namespace has the \\.\root\Microsoft\SqlServer\ServerEvents\instance_name format. A default SQL Server 2005 instance has an instance_name of MSSQLSERVER. Two main categories of events make up the programming model for the WMI provider for server events: DDL events and SQL trace events. The following list represents the set of DDL events:
DDL_DATABASE_LEVEL_EVENTS
DDL_ASSEMBLY_EVENTS: CREATE_ASSEMBLY, ALTER_ASSEMBLY, DROP_ASSEMBLY
DDL_DATABASE_SECURITY_EVENTS
DDL_APPLICATION_ROLE_EVENTS: CREATE_APPLICATION_ROLE, ALTER_APPLICATION_ROLE, DROP_APPLICATION_ROLE
DDL_AUTHORIZATION_DATABASE_EVENTS: ALTER_AUTHORIZATION_DATABASE
DDL_CERTIFICATE_EVENTS: CREATE_CERTIFICATE, ALTER_CERTIFICATE, DROP_CERTIFICATE
DDL_GDR_DATABASE_EVENTS: GRANT_DATABASE, DENY_DATABASE, REVOKE_DATABASE
DDL_ROLE_EVENTS: CREATE_ROLE, ALTER_ROLE, DROP_ROLE
DDL_SCHEMA_EVENTS: CREATE_SCHEMA, ALTER_SCHEMA, DROP_SCHEMA
DDL_USER_EVENTS: CREATE_USER, DROP_USER, ALTER_USER
DDL_EVENT_NOTIFICATION_EVENTS: CREATE_EVENT_NOTIFICATION, DROP_EVENT_NOTIFICATION
DDL_FUNCTION_EVENTS: CREATE_FUNCTION, ALTER_FUNCTION, DROP_FUNCTION
DDL_PARTITION_EVENTS
DDL_PARTITION_FUNCTION_EVENTS: CREATE_PARTITION_FUNCTION, ALTER_PARTITION_FUNCTION, DROP_PARTITION_FUNCTION
DDL_PARTITION_SCHEME_EVENTS: CREATE_PARTITION_SCHEME, ALTER_PARTITION_SCHEME, DROP_PARTITION_SCHEME
DDL_PROCEDURE_EVENTS: CREATE_PROCEDURE, DROP_PROCEDURE, ALTER_PROCEDURE
DDL_SSB_EVENTS
DDL_CONTRACT_EVENTS: CREATE_CONTRACT, DROP_CONTRACT
DDL_MESSAGE_TYPE_EVENTS: CREATE_MSGTYPE, ALTER_MSGTYPE, DROP_MSGTYPE
DDL_QUEUE_EVENTS: CREATE_QUEUE, ALTER_QUEUE, DROP_QUEUE
DDL_SERVICE_EVENTS: CREATE_SERVICE, DROP_SERVICE, ALTER_SERVICE
DDL_REMOTE_SERVICE_BINDING_EVENTS: CREATE_REMOTE_SERVICE_BINDING, ALTER_REMOTE_SERVICE_BINDING, DROP_REMOTE_SERVICE_BINDING
DDL_ROUTE_EVENTS: CREATE_ROUTE, DROP_ROUTE, ALTER_ROUTE
DDL_SYNONYM_EVENTS: CREATE_SYNONYM, DROP_SYNONYM
DDL_TABLE_VIEW_EVENTS
DDL_INDEX_EVENTS: CREATE_INDEX, DROP_INDEX, ALTER_INDEX, CREATE_XML_INDEX
DDL_STATISTICS_EVENTS: CREATE_STATISTICS, UPDATE_STATISTICS, DROP_STATISTICS
DDL_TABLE_EVENTS: CREATE_TABLE, ALTER_TABLE, DROP_TABLE
DDL_VIEW_EVENTS: CREATE_VIEW, ALTER_VIEW, DROP_VIEW
DDL_TRIGGER_EVENTS: CREATE_TRIGGER, DROP_TRIGGER, ALTER_TRIGGER
DDL_TYPE_EVENTS: CREATE_TYPE, DROP_TYPE
DDL_XML_SCHEMA_COLLECTION_EVENTS: CREATE_XML_SCHEMA_COLLECTION, ALTER_XML_SCHEMA_COLLECTION, DROP_XML_SCHEMA_COLLECTION
DDL_SERVER_LEVEL_EVENTS: CREATE_DATABASE, ALTER_DATABASE, DROP_DATABASE
DDL_ENDPOINT_EVENTS: CREATE_ENDPOINT, ALTER_ENDPOINT, DROP_ENDPOINT
DDL_SERVER_SECURITY_EVENTS: ADD_ROLE_MEMBER, ADD_SERVER_ROLE_MEMBER, DROP_ROLE_MEMBER, DROP_SERVER_ROLE_MEMBER
DDL_AUTHORIZATION_SERVER_EVENTS: ALTER_AUTHORIZATION_SERVER
DDL_GDR_SERVER_EVENTS: GRANT_SERVER, DENY_SERVER, REVOKE_SERVER
DDL_LOGIN_EVENTS: CREATE_LOGIN, ALTER_LOGIN, DROP_LOGIN
The following list represents the set of SQL trace events:
TRC_CLR: ASSEMBLY_LOAD
TRC_DATABASE: DATA_FILE_AUTO_GROW, DATA_FILE_AUTO_SHRINK, DATABASE_MIRRORING_STATE_CHANGE, LOG_FILE_AUTO_GROW, LOG_FILE_AUTO_SHRINK
TRC_DEPRECATION: DEPRECATION_ANNOUNCEMENT, DEPRECATION_FINAL_SUPPORT
TRC_ERRORS_AND_WARNINGS: BLOCKED_PROCESS_REPORT, ERRORLOG, EVENTLOG, EXCEPTION, EXCHANGE_SPILL_EVENT, EXECUTION_WARNINGS, HASH_WARNING, MISSING_COLUMN_STATISTICS, MISSING_JOIN_PREDICATE, SORT_WARNINGS, USER_ERROR_MESSAGE
TRC_FULL_TEXT: FT_CRAWL_ABORTED, FT_CRAWL_STARTED, FT_CRAWL_STOPPED
TRC_LOCKS: DEADLOCK_GRAPH, LOCK_DEADLOCK, LOCK_DEADLOCK_CHAIN, LOCK_ESCALATION
TRC_OBJECTS: OBJECT_ALTERED, OBJECT_CREATED, OBJECT_DELETED
TRC_OLEDB: OLEDB_CALL_EVENT, OLEDB_DATAREAD_EVENT, OLEDB_ERRORS, OLEDB_PROVIDER_INFORMATION, OLEDB_QUERYINTERFACE_EVENT
TRC_PERFORMANCE: SHOWPLAN_ALL_FOR_QUERY_COMPILE, SHOWPLAN_XML, SHOWPLAN_XML_FOR_QUERY_COMPILE, SHOWPLAN_XML_STATISTICS_PROFILE
TRC_QUERY_NOTIFICATIONS: QN_DYNAMICS, QN_PARAMETER_TABLE, QN_SUBSCRIPTION, QN_TEMPLATE
TRC_SECURITY_AUDIT: AUDIT_ADD_DB_USER_EVENT, AUDIT_ADDLOGIN_EVENT, AUDIT_ADD_LOGIN_TO_SERVER_ROLE_EVENT, AUDIT_ADD_MEMBER_TO_DB_ROLE_EVENT, AUDIT_ADD_ROLE_EVENT, AUDIT_APP_ROLE_CHANGE_PASSWORD_EVENT, AUDIT_BACKUP_RESTORE_EVENT, AUDIT_CHANGE_AUDIT_EVENT, AUDIT_CHANGE_DATABASE_OWNER, AUDIT_DATABASE_MANAGEMENT_EVENT, AUDIT_DATABASE_OBJECT_ACCESS_EVENT, AUDIT_DATABASE_OBJECT_GDR_EVENT, AUDIT_DATABASE_OBJECT_MANAGEMENT_EVENT, AUDIT_DATABASE_OBJECT_TAKE_OWNERSHIP_EVENT, AUDIT_DATABASE_OPERATION_EVENT, AUDIT_DATABASE_PRINCIPAL_IMPERSONATION_EVENT, AUDIT_DATABASE_PRINCIPAL_MANAGEMENT_EVENT, AUDIT_DATABASE_SCOPE_GDR_EVENT, AUDIT_DBCC_EVENT, AUDIT_LOGIN, AUDIT_LOGIN_CHANGE_PASSWORD_EVENT, AUDIT_LOGIN_CHANGE_PROPERTY_EVENT, AUDIT_LOGIN_FAILED, AUDIT_LOGIN_GDR_EVENT, AUDIT_LOGOUT, AUDIT_SCHEMA_OBJECT_ACCESS_EVENT, AUDIT_SCHEMA_OBJECT_GDR_EVENT, AUDIT_SCHEMA_OBJECT_MANAGEMENT_EVENT, AUDIT_SCHEMA_OBJECT_TAKE_OWNERSHIP_EVENT, AUDIT_SERVER_ALTER_TRACE_EVENT, AUDIT_SERVER_OBJECT_GDR_EVENT, AUDIT_SERVER_OBJECT_MANAGEMENT_EVENT, AUDIT_SERVER_OBJECT_TAKE_OWNERSHIP_EVENT, AUDIT_SERVER_OPERATION_EVENT, AUDIT_SERVER_PRINCIPAL_IMPERSONATION_EVENT, AUDIT_SERVER_PRINCIPAL_MANAGEMENT_EVENT, AUDIT_SERVER_SCOPE_GDR_EVENT
TRC_SERVER: MOUNT_TAPE, SERVER_MEMORY_CHANGE, TRACE_FILE_CLOSE
TRC_STORED_PROCEDURE: SP_CACHEINSERT, SP_CACHEMISS, SP_CACHEREMOVE, SP_RECOMPILE
TRC_TSQL: SQL_STMTRECOMPILE, XQUERY_STATIC_TYPE
TRC_USER_CONFIGURABLE: USERCONFIGURABLE_0, USERCONFIGURABLE_1, USERCONFIGURABLE_2, USERCONFIGURABLE_3, USERCONFIGURABLE_4, USERCONFIGURABLE_5, USERCONFIGURABLE_6, USERCONFIGURABLE_7, USERCONFIGURABLE_8, USERCONFIGURABLE_9
Creating WMI event alerts is a relatively straightforward process through SQL Server Management Studio. The trick, of course, is to write the WQL query correctly. Figure 2.4 shows the WMI event alert that you will be creating in Exercise 2.7.
FIGURE 2.4 WMI alert
Without further ado, let's go through the exercise of creating a WMI event alert!
For SQL Server Agent to receive WMI events, SQL Server 2005 Service Broker must be enabled in msdb and AdventureWorks.
EXERCISE 2.7
Working with WMI Event Alerts

In this particular exercise, you want to create a WMI event alert that will capture deadlock information automatically to a [tbl_DeadlockEvents] table for further analysis as required. So, the first task you have to do is to create the table. The [DeadlockEventGraph] column will capture the XML document that shows the deadlock graph event's properties.
1. Open SQL Server Management Studio, and connect using Windows Authentication.
2. Click the New Query toolbar button to open a new query window.
3. Type the following T-SQL code, and execute it:

USE AdventureWorks ;
GO
-- Table to capture deadlock events.
CREATE TABLE [tbl_DeadlockEvents] (
    AlertDateTime DATETIME,
    DeadlockEventGraph XML
)
GO
4. The next task you have to perform is to create the WMI event alert. You will be using the SELECT * FROM DEADLOCK_GRAPH WQL query. Type the following T-SQL code, and execute it:

USE AdventureWorks ;
GO
-- Create SQL Agent Job
EXEC msdb.dbo.sp_add_job
    @job_name = 'Capture Deadlocks',
    @enabled = 1,
    @description = 'Capture DEADLOCK_GRAPH events'
GO
EXEC msdb.dbo.sp_add_jobstep
    @job_name = 'Capture Deadlocks',
    @step_name = 'Insert DEADLOCK_GRAPH event into [tbl_DeadlockEvents]',
    @step_id = 1,
    @subsystem = 'TSQL',
    @command = 'INSERT INTO [tbl_DeadlockEvents]
                (AlertDateTime, DeadlockEventGraph)
                VALUES (GETDATE(), ''$(WMI(TextData))'')',
    @database_name = 'AdventureWorks'
GO
EXEC msdb.dbo.sp_add_jobserver
    @job_name = 'Capture Deadlocks'
GO
-- Create WMI Event Alert
EXEC msdb.dbo.sp_add_alert
    @name = 'Respond to DEADLOCK_GRAPH',
    @wmi_namespace = '\\.\root\Microsoft\SqlServer\ServerEvents\MSSQLSERVER',
    @wmi_query = 'SELECT * FROM DEADLOCK_GRAPH',
    @job_name = 'Capture Deadlocks'
GO
5. It's time to test the WMI event alert by creating a deadlock. So, you need to open a new query window to start your first transaction. Click the New Query toolbar button to open a second query window.
6. Type the following T-SQL code, and execute it:

USE AdventureWorks ;
GO
-- Start first transaction
BEGIN TRANSACTION
SELECT * FROM Person.Contact WITH (TABLOCKX)
GO
7. Now you need to open another new query window to start the second transaction. Click the New Query toolbar button to open a third query window.
8. Type the following T-SQL code, and execute it:

USE AdventureWorks ;
GO
-- Start second transaction
BEGIN TRANSACTION
SELECT * FROM Person.Address WITH (TABLOCKX)
SELECT * FROM Person.Contact WITH (TABLOCKX)
GO
9. To create the deadlock, you have to switch back to the first transaction and try accessing a resource that has been exclusively locked by the second transaction. Switch back to the query window that was used to start the first transaction.
10. Type the following T-SQL code, and execute it:

-- Back to first transaction to create the deadlock
BEGIN TRANSACTION
SELECT * FROM Person.Address WITH (TABLOCKX)
GO

SQL Server 2005 should choose one of the two transactions as a victim, and you will get an error message similar to the following one:

Msg 1205, Level 13, State 51, Line 2
Transaction (Process ID 69) was deadlocked on lock resources with another process and has been chosen as the deadlock victim. Rerun the transaction.
11. Wait a while. By now SQL Server 2005 has written the deadlock graph to the [tbl_DeadlockEvents] table, so let's query it and examine the contents.
12. Close the two query windows you used to execute the two transactions. You should now be in the query window that was used to create the WMI event alert.
13. Type the following T-SQL code, and execute it (make sure you double-click the [DeadlockEventGraph] XML column):

-- Examine deadlock graph
SELECT * FROM [tbl_DeadlockEvents]
GO
14. Type the following T-SQL code to clean up your SQL Server instance, and execute it:

USE AdventureWorks ;
GO
-- Clean up
EXEC msdb.dbo.sp_delete_alert
    @name = 'Respond to DEADLOCK_GRAPH'
GO
EXEC msdb.dbo.sp_delete_job
    @job_name = 'Capture Deadlocks'
GO
DROP TABLE [tbl_DeadlockEvents]
GO
SQL Server Agent

As I indicated earlier, the SQL Server Agent is primarily designed for administrative purposes, such as backups. However, you can leverage its capabilities as a database architect or database designer to perform a variety of actions within your database solution, such as transferring data, emailing query results periodically, and running end-of-day batch processes. So the SQL Server Agent provides a very powerful and flexible framework for performing actions within a SQL Server solution. The SQL Server Agent basically has three core abilities that you can take advantage of:

Execute Scheduled Jobs The ability to execute a range of external and internal jobs provides a powerful platform for the DBA to create a self-monitoring environment. However, this ability can also be used by the developer to create a very rich database solution. It all depends on you, but the sky's the limit!

Generate Alerts SQL Server Agent has the ability to notify you whenever an event that you are interested in occurs. Although alerts are geared primarily toward the DBA, developers can define their own counters, which can then have thresholds set against them and alerts consequently triggered.

Notify Operators A core component of the SQL Server Agent framework is the ability to notify operators using different technologies whenever a particular executing job has succeeded or failed or an alert has occurred within the database solution.
Don't forget the SQL Server Agent! For some reason a lot of people want to use SSIS packages and other techniques or tools when a simple SQL Server Agent job will do. This adds needless complexity to the task at hand, and it might add administrative overhead and an extra point of failure.
Scheduling a Job Using SQL Server Agent

The ability of SQL Server Agent to run various jobs is an extremely powerful capability of SQL Server 2005. It's a matter of examining the various options available to you, as either the DBA or the database developer, so make sure you go through all of the dialog boxes. The SQL Server Agent provides the ability to execute a number of different types of tasks:

ActiveX Script You can execute various ActiveX scripts through the SQL Server Agent. SQL Server 2005 natively supports VBScript and JScript, although other scripting languages are supported if the corresponding "engine" is installed. Because you are executing a job external to SQL Server 2005, there will be security implications and a question of whether the SQL Server Agent will have enough rights to perform the action requested. By default, the process will run under the security context of the SQL Server Agent account.

Operating system (CmdExec) You can execute any external executable process or batch file via this type of job step. Again, there will be the same security considerations as with the ActiveX script jobs because you are executing an external job.

SQL Server Integration Services Package You can execute SQL Server Integration Services (SSIS) packages through the SQL Server Agent, which now provides extra options specifically for SSIS. I will cover SSIS in more detail in Chapter 9.

Transact-SQL script (T-SQL) You can execute any valid T-SQL batch via the SQL Server Agent framework. It's as easy as that.

There are also a number of Replication and SQL Server Analysis Services (SSAS) tasks. The Replication tasks are typically generated automatically by the various Replication wizards and utilities, but don't forget that you can tweak these replication tasks here, if required. I will cover Replication briefly in Chapter 9. When scheduling a job to run using the SQL Server Agent, you typically use the SQL Server Management Studio environment, although if you need to replicate a job to multiple SQL Server instances, you will want to create T-SQL scripts instead.
Don’t forget that you can get SQL Server Management Studio to generate a T-SQL script for every action that you perform through the graphical environment. This represents an easy way to generate T-SQL scripts (which can then be further customized) that you want to execute on multiple SQL Server instances.
So let’s have a look at the steps involved in scheduling a job through the SQL Server Agent in more detail.
Defining the Job

The first step involved in creating a SQL Server Agent job is to define the job itself. This involves giving it a unique name and providing some other basic information, such as a job category, a description of what the job does, and other relevant information.
Try to get into the habit of always providing a category and a description for every job. One of the more frustrating aspects of refactoring a database solution as a contractor is to unravel the mess of jobs created by various DBAs, developers and contractors over the years. The benefits of properly documenting the various jobs created should be self-evident, be it an enterprise or a small business environment.
Figure 2.5 shows how you define a job using the SQL Server Management Studio environment.

FIGURE 2.5 Defining a new SQL Server Agent job
To add a new job using a T-SQL script, you would use the sp_add_job system stored procedure:

sp_add_job [ @job_name = ] 'job_name'
 [ , [ @enabled = ] enabled ]
 [ , [ @description = ] 'description' ]
 [ , [ @start_step_id = ] step_id ]
 [ , [ @category_name = ] 'category' ]
 [ , [ @category_id = ] category_id ]
 [ , [ @owner_login_name = ] 'login' ]
 [ , [ @notify_level_eventlog = ] eventlog_level ]
 [ , [ @notify_level_email = ] email_level ]
 [ , [ @notify_level_netsend = ] netsend_level ]
 [ , [ @notify_level_page = ] page_level ]
 [ , [ @notify_email_operator_name = ] 'email_name' ]
 [ , [ @notify_netsend_operator_name = ] 'netsend_name' ]
 [ , [ @notify_page_operator_name = ] 'page_name' ]
 [ , [ @delete_level = ] delete_level ]
 [ , [ @job_id = ] job_id OUTPUT ]
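For example, the following minimal sketch defines a job with a description and a category (the job name and description are hypothetical; the category must already exist, and '[Uncategorized (Local)]' is one of the default categories):

EXEC msdb.dbo.sp_add_job
    @job_name = 'Nightly Batch Processing',
    @enabled = 1,
    @description = 'Runs the end-of-day batch process. Owner: DBA team.',
    @category_name = '[Uncategorized (Local)]'
GO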
Defining the Job Steps

Once you have defined your SQL Server Agent job, you then need to define the various steps that you would like the SQL Server Agent to perform and the order in which you would like them executed. The properties of the job step depend on the type of task you are configuring. Figure 2.6 shows the General page and the options available when defining a T-SQL job step using the SQL Server Management Studio environment.

FIGURE 2.6 Defining a new SQL Server Agent job step
Figure 2.7 shows the Advanced page and the options available when defining a job step. Notice the ability to create complex workflows by stipulating which job step to execute on either the success or failure of the executing job step.

FIGURE 2.7 Defining the SQL Server Agent job step's workflow
To add a new step using a T-SQL script, you would use the sp_add_jobstep system stored procedure:

sp_add_jobstep [ @job_id = ] job_id | [ @job_name = ] 'job_name'
 [ , [ @step_id = ] step_id ]
 { , [ @step_name = ] 'step_name' }
 [ , [ @subsystem = ] 'subsystem' ]
 [ , [ @command = ] 'command' ]
 [ , [ @additional_parameters = ] 'parameters' ]
 [ , [ @cmdexec_success_code = ] code ]
 [ , [ @on_success_action = ] success_action ]
 [ , [ @on_success_step_id = ] success_step_id ]
 [ , [ @on_fail_action = ] fail_action ]
 [ , [ @on_fail_step_id = ] fail_step_id ]
 [ , [ @server = ] 'server' ]
 [ , [ @database_name = ] 'database' ]
 [ , [ @database_user_name = ] 'user' ]
 [ , [ @retry_attempts = ] retry_attempts ]
 [ , [ @retry_interval = ] retry_interval ]
 [ , [ @os_run_priority = ] run_priority ]
 [ , [ @output_file_name = ] 'file_name' ]
 [ , [ @flags = ] flags ]
 [ , { [ @proxy_id = ] proxy_id | [ @proxy_name = ] 'proxy_name' } ]
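Continuing the earlier sketch, the following hypothetical job step runs a T-SQL command, retries on failure, and then quits the job reporting success or failure (the stored procedure name is illustrative):

EXEC msdb.dbo.sp_add_jobstep
    @job_name = 'Nightly Batch Processing',
    @step_name = 'Run end-of-day batch',
    @subsystem = 'TSQL',
    @command = 'EXEC dbo.usp_EndOfDayBatch',  -- hypothetical procedure
    @database_name = 'AdventureWorks',
    @on_success_action = 1,   -- 1 = quit the job reporting success
    @on_fail_action = 2,      -- 2 = quit the job reporting failure
    @retry_attempts = 3,
    @retry_interval = 5       -- minutes between retry attempts
GO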
Defining the Job Schedule

The third step is to define the frequency at which you want the job to execute. The ability to create a schedule that can be reused by multiple jobs was a welcome boon to DBAs when it was first introduced to SQL Server, as it reduced both the amount of work required to set up jobs and the complexity of managing them. The scheduling capabilities should be adequate for everyone but the fussiest of DBAs. Figure 2.8 shows how you define a new job schedule using the SQL Server Management Studio environment.

FIGURE 2.8 Creating a new SQL Server Agent schedule
Figure 2.9 shows how you attach an existing job schedule to a job using the SQL Server Management Studio environment.

FIGURE 2.9 Attaching a schedule to a SQL Server Agent job
To create a new schedule using a T-SQL script, you would use the sp_add_schedule system stored procedure:

sp_add_schedule [ @schedule_name = ] 'schedule_name'
 [ , [ @enabled = ] enabled ]
 [ , [ @freq_type = ] freq_type ]
 [ , [ @freq_interval = ] freq_interval ]
 [ , [ @freq_subday_type = ] freq_subday_type ]
 [ , [ @freq_subday_interval = ] freq_subday_interval ]
 [ , [ @freq_relative_interval = ] freq_relative_interval ]
 [ , [ @freq_recurrence_factor = ] freq_recurrence_factor ]
 [ , [ @active_start_date = ] active_start_date ]
 [ , [ @active_end_date = ] active_end_date ]
 [ , [ @active_start_time = ] active_start_time ]
 [ , [ @active_end_time = ] active_end_time ]
 [ , [ @owner_login_name = ] 'owner_login_name' ]
 [ , [ @schedule_uid = ] schedule_uid OUTPUT ]
 [ , [ @schedule_id = ] schedule_id OUTPUT ]
 [ , [ @originating_server = ] server_name ]
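For example, this minimal sketch creates a schedule that fires every day at 11:30 PM (the schedule name is hypothetical; a @freq_type of 4 means daily, and times are expressed in HHMMSS format):

EXEC msdb.dbo.sp_add_schedule
    @schedule_name = 'Daily 11:30 PM',
    @enabled = 1,
    @freq_type = 4,              -- daily
    @freq_interval = 1,          -- every 1 day
    @active_start_time = 233000  -- HHMMSS, so 11:30:00 PM
GO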
To attach an existing schedule to a job using a T-SQL script, you would use the sp_attach_schedule system stored procedure:

sp_attach_schedule
 { [ @job_id = ] job_id | [ @job_name = ] 'job_name' } ,
 { [ @schedule_id = ] schedule_id | [ @schedule_name = ] 'schedule_name' }
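Continuing the sketch, attaching the hypothetical schedule to the hypothetical job is a one-liner:

EXEC msdb.dbo.sp_attach_schedule
    @job_name = 'Nightly Batch Processing',
    @schedule_name = 'Daily 11:30 PM'
GO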
Optionally Defining the Notifications

You have the optional capability of notifying a predefined operator for the following conditions of the job's execution:
When the job completes
When the job fails
When the job succeeds
Figure 2.10 shows what actions can be performed upon the completion of a job using the SQL Server Management Studio environment.

FIGURE 2.10 Defining whom to notify upon a job's completion
Normally you would define the job notifications when you create the job using the sp_add_job system stored procedure I discussed earlier. If you want to change an existing job's notifications, you would use the sp_update_job system stored procedure instead:

sp_update_job [ @job_id = ] job_id | [ @job_name = ] 'job_name'
 [ , [ @notify_level_eventlog = ] eventlog_level ]
 [ , [ @notify_level_email = ] email_level ]
 [ , [ @notify_level_netsend = ] netsend_level ]
 [ , [ @notify_level_page = ] page_level ]
 [ , [ @notify_email_operator_name = ] 'email_name' ]
 [ , [ @notify_netsend_operator_name = ] 'netsend_operator' ]
 [ , [ @notify_page_operator_name = ] 'page_operator' ]
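For example, this sketch requests an email notification only when the job fails (a notify level of 2 means "on failure"; the operator name is hypothetical and must already exist):

EXEC msdb.dbo.sp_update_job
    @job_name = 'Nightly Batch Processing',
    @notify_level_email = 2,                   -- 2 = notify on failure
    @notify_email_operator_name = 'DBA Team'   -- hypothetical operator
GO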
Defining the Target Server

The final mandatory step is to specify the SQL Server instance on which the job should execute, a fairly straightforward procedure. Figure 2.11 shows how you can control the target server on which the job will execute.

FIGURE 2.11 Defining the job's target server
The sp_add_jobserver system stored procedure is used to control which SQL Server instance the job should execute on:

sp_add_jobserver [ @job_id = ] job_id | [ @job_name = ] 'job_name'
 [ , [ @server_name = ] 'server' ]
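For example, to run the hypothetical job from the earlier sketches on the local instance:

EXEC msdb.dbo.sp_add_jobserver
    @job_name = 'Nightly Batch Processing',
    @server_name = '(LOCAL)'   -- target the local SQL Server instance
GO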
Defining Alerts Using SQL Server Agent

Another important facet of the SQL Server Agent is its ability to generate alerts based on a number of different criteria within your database solution. The SQL Server Agent in SQL Server 2005 supports a number of different types of alerts, which I will cover shortly. Alerts are basically a response to an event that the DBA is typically interested in, such as a database's transaction log being full or almost full (say, over 90 percent full). The event depends on the type of SQL Server Agent alert. All the different types of alerts have the same response of either a job being executed, an operator being notified, or both. Figure 2.12 shows what options you have when responding to a SQL Server Agent alert being generated.

FIGURE 2.12 SQL Server Agent Alert response options
Likewise, there are a number of options that you can configure for all alerts. Figure 2.13 shows what options exist for a SQL Server Agent alert being generated.

FIGURE 2.13 SQL Server Agent Alert options
The Delay Between Responses setting is an important alert option because it prevents the SQL Server Agent from being overloaded when a particular event fires repeatedly in quick succession.
Let’s now have a look at the different types of SQL Server Agent Alerts that can be defined in SQL Server 2005.
SQL Server Event Alerts

SQL Server Event Alerts are based on the SQL Server error messages that can be generated by the SQL Server 2005 Database Engine and exposed through the sys.sysmessages system catalog. So it is a matter of becoming familiar with the various error messages that can be generated by SQL Server, their severity, and their message text. Figure 2.14 shows a SQL Server Event Alert being defined for when the AdventureWorks database's transaction log is full.
FIGURE 2.14 SQL Server Event Alert
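A quick way to start becoming familiar with these messages is to browse the catalog itself; this minimal sketch lists the English messages of severity 17 and higher:

SELECT error, severity, description
FROM sys.sysmessages
WHERE msglangid = 1033   -- English
  AND severity >= 17
ORDER BY error
GO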
Don't forget that as a database developer you can create your own custom error messages using the sp_addmessage system stored procedure:

sp_addmessage [ @msgnum = ] msg_id ,
 [ @severity = ] severity ,
 [ @msgtext = ] 'msg'
 [ , [ @lang = ] 'language' ]
 [ , [ @with_log = ] 'with_log' ]
 [ , [ @replace = ] 'replace' ]
Don't forget to agree on a convention in your enterprise for the range of error numbers above 50,000 that each application or team will use.
This basically allows you to define a SQL Server Event Alert based on your own user-defined messages. You will need to use the RAISERROR statement to generate the user-defined message:

RAISERROR ( { msg_id | msg_str | @local_variable }
 { , severity , state }
 [ , argument [ ,...n ] ] )
 [ WITH option [ ,...n ] ]
The RAISERROR statement is typically invoked in a stored procedure or trigger.
For SQL Server Event Alerts to work against your user-defined error message, the message must write to the Windows Event Log, either through its definition via the @with_log = 'TRUE' clause or at runtime through the WITH LOG clause.
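The following minimal sketch ties these pieces together (the message number and text are hypothetical; numbers from 50001 upward are available for user-defined messages):

EXEC sp_addmessage
    @msgnum = 50001,       -- hypothetical user-defined message number
    @severity = 16,
    @msgtext = 'Stock level for product %s has fallen below the threshold.',
    @with_log = 'TRUE'     -- written to the Windows Event Log, so an alert can fire
GO
-- Raise the message at runtime; the argument replaces %s
RAISERROR (50001, 16, 1, 'Bananas')
GO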
SQL Server Performance Condition Alerts

SQL Server Performance Condition Alerts are based on the SQL Server Performance Monitor object counters that get installed with a default installation of SQL Server 2005. I will cover the Performance Monitor tool and various SQL Server Performance Monitor counters in Chapter 3, but suffice it to say, these alerts are generally designed for the DBA. Figure 2.15 shows a SQL Server Performance Condition Alert being defined for when the AdventureWorks database's transaction log is 90 percent full. Now you could mistakenly move on to the next section of this chapter thinking that there is nothing for the developer here, but that is not the case. You can take advantage of a relatively unknown feature of SQL Server: the ability to create your own custom performance object counters. SQL Server 2005 lets you create up to 10 instances of these custom counters.
For more information on this feature of SQL Server 2005 you should read the “SQL Server, User Settable Object” topic in SQL Server 2005 Books Online.
FIGURE 2.15 SQL Server Performance Condition Alert
As an example, let's say that I work for a banana trader in Australia that services the lucrative restaurant market. Cyclone Larry, a Category 5 storm that hit Far North Queensland on March 18, 2006, destroyed 80 percent of the banana plantations. Consequently, demand for bananas and corresponding prices are expected to rise. Olga Kats, my manager, has asked me to monitor the stock level of bananas throughout the course of the trading day in the banana database and generate an alert when the stock level falls below 100 bananas. A potential solution is to create a trigger on the table that keeps track of the banana stock level by executing the EXECUTE sp_user_counter1 @BananasInStock statement whenever the [StockLevel] field is modified, as sketched in the code that follows. This then allows me to set up a monitoring solution using Performance Monitor. It also allows me to generate SQL Server Performance Condition Alerts whenever the banana stock level falls beneath the threshold, as shown in Figure 2.16. I deliberately chose a mundane, yet hopefully powerful, illustration of what can be done through user settable object counters. The price of bananas in Sydney is ridiculous at the moment, by the way, at over $16 per kilogram, so Olga was not able to have a banana dessert at the South American restaurant in Darling Harbour the other weekend. They should have implemented a SQL Server–based alerting system.
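A minimal sketch of such a trigger follows. The table and column names are hypothetical, and a real solution would scope the recalculation more carefully:

-- Assumes a hypothetical [BananaStock] table with [Product] and [StockLevel] columns
CREATE TRIGGER [trg_BananaStockCounter]
ON [BananaStock]
AFTER INSERT, UPDATE, DELETE
AS
BEGIN
    DECLARE @BananasInStock INT
    SELECT @BananasInStock = SUM([StockLevel])
    FROM [BananaStock]
    WHERE [Product] = 'Banana'
    -- Publish the value through the first user settable counter
    EXECUTE sp_user_counter1 @BananasInStock
END
GO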
WMI Event Alerts

I have already discussed WMI event alerts in detail earlier in this chapter, so I have included them here only for the sake of completeness. Figure 2.17 shows the WMI event alert you created earlier in Exercise 2.7.

FIGURE 2.16 Banana Crisis Alert
Defining Operators Using SQL Server Agent

Let's finish off with how you define operators in SQL Server Agent. It's a relatively straightforward exercise of providing the correct operator details. Figure 2.18 shows how you would create a SQL Server Agent operator using the SQL Server Management Studio environment. Alternatively, you could use the sp_add_operator system stored procedure:

sp_add_operator [ @name = ] 'name'
 [ , [ @enabled = ] enabled ]
 [ , [ @email_address = ] 'email_address' ]
 [ , [ @pager_address = ] 'pager_address' ]
 [ , [ @weekday_pager_start_time = ] weekday_pager_start_time ]
 [ , [ @weekday_pager_end_time = ] weekday_pager_end_time ]
 [ , [ @saturday_pager_start_time = ] saturday_pager_start_time ]
 [ , [ @saturday_pager_end_time = ] saturday_pager_end_time ]
 [ , [ @sunday_pager_start_time = ] sunday_pager_start_time ]
 [ , [ @sunday_pager_end_time = ] sunday_pager_end_time ]
 [ , [ @pager_days = ] pager_days ]
 [ , [ @netsend_address = ] 'netsend_address' ]
 [ , [ @category_name = ] 'category' ]
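For example, a minimal sketch (the operator name and email address are hypothetical):

EXEC msdb.dbo.sp_add_operator
    @name = 'DBA Team',
    @enabled = 1,
    @email_address = 'dba.team@adventure-works.com'
GO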
But why would you bother…
SQL Server Agent (Reprise)

As Motley Crüe sang in 1985, "Better use it before you lose it, You better use it don't throw it away, hey"… Well, I'm pretty sure we're not going to lose the SQL Server Agent. But I do know that database developers typically overlook this very lean and mean component of SQL Server when deciding which technology to use to fulfill some functional requirement, opting for the "sexier" technology that might be available. I prefer using the SQL Server Agent whenever possible since it is a tried and tested technology, having been around since the earliest versions of SQL Server. It has minimal overhead and powerful execution and scheduling capabilities. Additionally, it has fewer dependencies than other technologies, such as SSIS, and consequently fewer things that can go wrong. So use it!
FIGURE 2.17 WMI Event Alert

FIGURE 2.18 Defining a SQL Server Agent Operator
Summary

In this chapter, you initially looked at how to improve and scale database entities by using indexes and partitioning. You then examined the various objects supported by SQL Server 2005 to retrieve data and extend the capabilities of the database engine. You learned whether these objects return a relational result set or a scalar value, which dictates where they can be used. You also looked at the different types of triggers and the order in which they fire relative to the data modification operation. You learned where to use DML versus DDL triggers. Finally, you looked at the SQL Server Agent and how it is capable of executing scheduled jobs and generating alerts based on a number of different criteria.
Exam Essentials

Understand CLR aggregate functions. Familiarize yourself with the CLR aggregate functions that allow you to create custom aggregations.

Know how DML operations are executed. Make sure you understand how INSERT, UPDATE, and DELETE operations are performed by SQL Server and where constraints and the different types of triggers "fire."

Understand the difference between clustered and nonclustered indexes. You need to know when to implement a nonclustered index over a clustered index to optimize performance.

Understand the differences between stored procedures and functions. You need to know the functional differences between stored procedures and functions.

Understand T-SQL versus CLR code. You need to have a good understanding of when to use T-SQL over managed code in order to be able to identify the best implementation method.

Understand the difference between DML and DDL triggers. You need to know when to use DML triggers over DDL triggers.

Know how to design data access objects. Be familiar with the various pros and cons of using views, UDFs, and stored procedures to retrieve data.

Know the criteria for creating indexed views. Make sure you understand the benefits of indexed views and how to create them to improve performance.
Review Questions

1. You have designed and implemented a stored procedure whose parameters can vary dramatically. Developers are complaining that performance is inconsistent. Sometimes it works quickly; other times it doesn't. What strategy should you use?
A. Rewrite the stored procedure using the WITH RECOMPILE option.
B. Tell developers to always call the procedure using the WITH RECOMPILE option.
C. Execute sp_recompile.
D. Drop and create the stored procedure.

2. Which database objects can improve performance for objects that retrieve data? (Choose all that apply.)
A. Views
B. Table-valued CLR functions
C. Indexed views
D. Table-valued T-SQL functions
3. You want to create the following report:

Category   Products
Book       A Problem from Hell, Collapse, Wide Sargasso Sea, …
CD         Beyond Good and Evil, Contraband, The Mushroom Tapes…
DVD        Blue Velvet, Das Boot, Fallen, Hero, Lost Highway, …
…          …

The report is based on the following table:

CREATE TABLE [Product] (
    [WareHouseId] TINYINT NOT NULL,
    [ProductId] INT NOT NULL,
    [ProductName] VARCHAR(20) NOT NULL,
    [ProductCategory] VARCHAR(5) NOT NULL,
    [Price] NCHAR(10) NOT NULL,
    [StockLevel] TINYINT NOT NULL
)

How do you implement the Products column in the previous report?
A. Using the COALESCE system function
B. Using the NULLIF system function
C. Using a CLR aggregate
D. Using the GROUP BY clause
4. Which of the database objects can be bound to a view?
A. CHECK CONSTRAINT
B. INSTEAD OF DML trigger
C. DDL trigger
D. AFTER DML trigger
5. What is the order of operations during an INSERT statement on a table?
A. Any INSTEAD OF triggers defined are fired.
B. Any CONSTRAINTs defined are evaluated.
C. The INSERT statement occurs.
D. Any AFTER triggers defined are fired.

6. You are designing a database solution that requires your information workers to be able to write their own SELECT queries via applications such as Excel, Access, and others. You do not want the information workers to be able to access the base tables, so you want to create database objects based on the tables that will allow them to write their own queries with the maximum amount of flexibility. What database objects can you use? (Choose all that apply.)
A. Stored procedures
B. Views
C. Scalar user-defined functions
D. DML triggers
E. Table-valued user-defined functions
F. DDL triggers

7. Information workers want to be able to insert products into the following table:

CREATE TABLE [Product] (
    [WareHouseId] TINYINT NOT NULL,
    [ProductId] INT NOT NULL,
    [ProductName] VARCHAR(20) NOT NULL,
    [ProductCategory] VARCHAR(5) NOT NULL,
    [Price] NCHAR(10) NOT NULL,
    [StockLevel] TINYINT NOT NULL
)

Management has stipulated that they will not have access to the table and that all DML operations must be performed through views. Management also determines the ideal price at which products should be sold. Currently, information workers are working with the table through this view alone:

CREATE VIEW [IWProduct]
AS
SELECT WareHouseId, ProductId, ProductName, ProductCategory, Price,
       (Price * 1.5) AS IdealPrice, StockLevel
FROM Product

Information workers want to insert new data using the following statement:

INSERT IWProduct
VALUES (@WareHouseId, @ProductId, @ProductName, @ProductCategory,
        @Price, @IdealPrice, @StockLevel)

You want to ensure that information workers can insert new product data through views as required by management and that the database solution will work as before. What is the best solution?
A. Code an AFTER DML trigger that ensures the inserted [IdealPrice] is 50 percent greater than [Price].
B. Code an INSTEAD OF DML trigger that inserts the data ignoring [Price] and calculates a new value from [IdealPrice].
C. Code an AFTER DML trigger that inserts the data ignoring [IdealPrice].
D. Create a new view that does not have [IdealPrice] that will be used for inserts.
E. Alter the view, dropping the [IdealPrice] column.

8. You have implemented a database solution on one of your production servers that has gone live. You want to prevent developers from modifying the database schema. What statement do you run?
A. Use this:
CREATE TRIGGER [UnauthorizedDDL]
ON DATABASE
FOR DDL_DATABASE_LEVEL_EVENTS
AS
RAISERROR ('Unauthorized DDL.', 16, -1)
ROLLBACK
RETURN
GO
B. Use this:
CREATE TRIGGER [UnauthorizedDDL]
ON DATABASE
FOR DDL_DATABASE_LEVEL_EVENTS
AS
RAISERROR ('Unauthorized DDL.', 16, -1)
RETURN
GO
C. Use this:
CREATE TRIGGER [UnauthorizedDDL]
ON SERVER
FOR DENY_SERVER
AS
RAISERROR ('Unauthorized DDL.', 16, -1)
ROLLBACK
RETURN
GO
D. Use this:
CREATE TRIGGER [UnauthorizedDDL]
ON DATABASE
FOR CREATE_TABLE
AS
RAISERROR ('Unauthorized DDL.', 16, -1)
ROLLBACK
RETURN
GO

9. You have developed a database solution for a real-time auctioning system. It is OLTP intensive, and users are complaining about poor performance when trying to enter bids and alter bids. You have narrowed it down to an indexing problem. A trace has indicated excessive page splitting. How can you correct the problem?
A. Alter the index with FILLFACTOR = 70.
B. Drop the index.
C. Alter the index with FILLFACTOR = 100.
D. Alter the index with FILLFACTOR = 0.
E. Use UPDATE STATISTICS.
10. You are designing a database solution based on SQL Server 2005 Standard Edition. The core table contains more than 9,000,000,000 records, and you are concerned about performance. What strategy should you implement to maximize performance?
A. Implement table partitioning.
B. Implement partitioned views.
C. Implement table-valued user-defined functions.
D. Implement indexes with FILLFACTOR = 70.

11. You plan to implement a partitioned table that will use four files for the partitions. How many partition boundaries do you need to create?
A. 1
B. 2
C. 3
D. 4

12. Your database solution is required to move deleted customer records to a separate table. You want to achieve this using the quickest possible method. What should you do?
A. Write a stored procedure that will be called by users.
B. Write an AFTER DML trigger.
C. Write an INSTEAD OF DML trigger.
D. Write an extended stored procedure that will be invoked by users.

13. You want to prevent your database developers from creating database objects unless they get authorization from the developer manager, who needs to verify and test the T-SQL code before allowing it. What should you use to do this? (Choose all that apply.)
A. The ENABLE TRIGGER statement
B. The ALTER statement
C. DDL triggers
D. A CLR user-defined aggregate
E. The DISABLE TRIGGER statement
F. DML triggers
G. Stored procedures

14. You are designing a database solution based on SQL Server 2005 Enterprise Edition. The core table contains more than 9,000,000,001 records, and you are concerned about performance. What strategy should you implement to maximize performance?
A. Implement table partitioning.
B. Implement partitioned views.
C. Implement table-valued user-defined functions.
D. Implement indexes with FILLFACTOR = 70.
15. You are planning to implement partitioned tables as part of your database solution. What is the first step you must perform?
A. CREATE TABLE
B. CREATE PARTITION SCHEME
C. CREATE PARTITION
D. CREATE PARTITION FUNCTION

16. After implementing a set of five DML triggers on one table in your database solution, your testing phase has determined that the triggers aren't working because they are not firing in the right order. How can you correct the problem?
A. Rewrite your DML triggers as DDL triggers.
B. Use the sp_settriggerorder system stored procedure.
C. Rewrite the five triggers as one trigger instead.
D. Disable the triggers.

17. Which of the following functions are deterministic?
A. DATEADD
B. GETDATE
C. GUID
D. YEAR

18. You are designing a database for a university and want to ensure that query performance will be optimal between the following two tables that will be frequently joined. What should you do to maximize join performance?
A. Create a clustered index on the [Student].[StudentNo] column.
B. Create a nonclustered index on the [Student].[StudentNo] column.
C. Create a nonclustered index on the [Grade].[StudentNo] column.
D. Create a clustered index on the [Grade].[StudentNo] column.
19. You are designing your data access layer for your database solution and want to ensure that developers or DBAs cannot view the source code for the database objects. What option should you use when creating these objects?
A. ENCRYPTION
B. SCHEMABINDING
C. RECOMPILE
D. VIEW_METADATA

20. You are designing a VLDB for a call center. The main table will be the [Customers] table that contains more than 900,000,000 rows and more than 1,000 fields. The partial definition is as follows:

CREATE TABLE [Customers] (
    [CustId] SMALLINT NOT NULL,
    [Name] VARCHAR(20) NOT NULL,
    [Surname] VARCHAR(20) NOT NULL,
    [Salary] MONEY NOT NULL,
    [Phone] VARCHAR(20) NOT NULL,
    [Sex] BIT NOT NULL,
    [Married] BIT NOT NULL,
    ...
    CONSTRAINT [PK] PRIMARY KEY CLUSTERED (CustId)
)

Your call center operators telemarket from this database, calling innocent souls out of the blue to harass them with improbably good deals. This week they want to target single females. What indexes should you create on the previous table to optimize the query with the least overhead on SQL Server?
A. Use this:
CREATE NONCLUSTERED INDEX [NCI] ON Customers
(Salary, Married, Sex, Name, Surname, Phone)
B. Use this:
CREATE NONCLUSTERED INDEX [NCI] ON Customers
(Salary, Married, Sex) INCLUDE (Name, Surname, Phone)
C. Use this:
CREATE NONCLUSTERED INDEX [NCI] ON Customers
(Salary, Married, Sex, Surname, Name) INCLUDE (Phone)
D. Use this:
CREATE NONCLUSTERED INDEX [NCI] ON Customers
(Salary, Married, Sex)
Answers to Review Questions

1. A. Using the WITH RECOMPILE option in the stored procedure guarantees that the procedure will always be executed optimally.
2. C. Only indexed views will improve performance because they are materialized objects that have indexes on them.

3. C. You can write a CLR user-defined aggregate to perform the required string concatenation function.

4. B. Only INSTEAD OF DML triggers can be bound to views.

5. A, B, C, D. The order of operations is A, B, C, D.

6. B, E. Views and table-valued user-defined functions return a result set in a tabular format that can be accessed via a SELECT query. Stored procedures, scalar user-defined functions, DML triggers, and DDL triggers do not return data relationally and thus cannot be queried via the SELECT statement.

7. D. Option D meets all the business requirements. Options A and C will not work because the INSERT statement will fail and the trigger will never fire. Option B is invalid because users should not be able to modify a calculated field set by management. Option E will potentially break existing code, so the database solution will not work as before.

8. A. The trigger in Option A both detects and rolls back all DDL operations. Option B does not roll back the DDL operation. Option C has the wrong scope. Option D prevents only the creation of tables.

9. A. Excessive page splitting occurs as a result of not enough free space on the pages that make up the table. Using a fill factor of 70 percent ensures that each page has 30 percent free space, thereby postponing the need for splitting pages. A fill factor setting of 100 percent or 0 percent (which are equivalent) is more appropriate for a DSS environment.
10. B. SQL Server 2005 Standard Edition does not support table partitioning, so you need to implement partitioned views.

11. C. You need one less boundary than the number of partitions being used.

12. B. An AFTER DML trigger will be the quickest technique.

13. A, C, E. You can use DDL triggers to prevent developers from creating database tables. The DISABLE TRIGGER statement will need to be run when a developer has been authorized to create database objects and ENABLE TRIGGER to reenable the DDL triggers. The ALTER statement is used to change the definition of database objects. CLR user-defined aggregates are used to return data. DML triggers and stored procedures are used to perform actions.

14. A. Implementing table partitioning will maximize performance. You should avoid partitioned views because they have been superseded by table partitions.
15. D. The first step in creating a partitioned table is to create the partition function.

16. C. By rewriting the five triggers as one, you can explicitly control the order in which they get executed. The sp_settriggerorder system stored procedure will not work because it allows you to set only the first and last trigger that gets executed.

17. A, D. The DATEADD and YEAR functions are deterministic.

18. D. Creating a clustered index on the [Grade].[StudentNo] column will maximize join performance. A nonclustered index on [Grade].[StudentNo] will not be as efficient because there will be duplicate values (higher density). Any index on the [Student].[StudentNo] column will not help.

19. A. The ENCRYPTION option hides the object's source code by encrypting it.

20. B. The optimal index is Option B because it is a covering index. The indexes in Options A and C are unnecessarily wide. Option D is not efficient because the query will still have to go to the table's data pages.
Chapter 3
Performance Tuning a Database Solution

MICROSOFT EXAM OBJECTIVES COVERED IN THIS CHAPTER:
Create a performance baseline and benchmarking strategy for a database.
Establish performance objectives and capacity planning.
Create a strategy for measuring performance changes.
Create a plan for responding to performance changes.
Create a plan for tracking benchmark statistics over time.
Design data distribution.
Design SQL Server Agent alerts.
One of the important tasks for a database administrator is service monitoring, which allows you to monitor the health of a service in real time. For SQL Server, one thing will remain constant: services change, because a database management system is a dynamic environment. This chapter will show you how to face those changes and determine how they will affect your server's performance. You will learn how business objectives translate into performance objectives and several ways to transform them into actions. In addition, you will learn how to implement baselines and benchmarks and how to respond to performance problems. You will also learn how to use the tools from the Windows operating system and SQL Server 2005 for performance tuning and troubleshooting.
Establishing Performance Objectives

Performance objectives are typically dictated by business requirements and sometimes imposed through a service-level agreement (SLA). The SLA is a mechanism for the customer to demand a certain level of service, but at the same time it helps database administrators (DBAs) control and improve the service's environment. It can also justify requests for additional resources, such as staff, hardware, or software. Ultimately, it provides a common basis of expectation for all stakeholders. Regardless of whether they're imposed by an SLA, performance objectives will usually translate into requirements for response time. The following are several examples.

This is scenario 1:

Business objectives (excerpt) For a tax application, a specific tax calculation operation should complete in less than five seconds.

Performance objectives At the database level, the query or the stored procedure behind the tax calculation operation should complete in less than five seconds (for this scenario I will not consider the time consumed by the client application or by an intermediary tier, if any).

Possible actions Optimize the query, use the Database Engine Tuning Advisor to obtain index or indexed view recommendations, modify the structure of the underlying tables, check fragmentation for existing indexes, and so on.

This is scenario 2:

Business objectives (excerpt) An existing application stores Extensible Markup Language (XML) documents in table columns of the XML data type. Several queries retrieve various elements from the XML documents. The users of the application complain that the interrogation of the documents takes too long. The queries' response times should be improved.
Performance objectives Measure the existing response times of the queries, and optimize them to improve the response times. It is important to rely on objective metrics (the response times in this case) rather than on a user's opinion ("it feels slow").

Possible actions Measure the response times for the queries, add XML indexes for the XML columns, consider storing the XML documents in relational format, and so on.

This is scenario 3:

Business objectives (excerpt) You have to add a new application to a server that already hosts an application. You have to measure the performance impact of the new application.

Performance objectives Measure the impact on performance of the new application.

Possible actions Measure the performance counters for the server before and after installing the new application.
Monitoring SQL Server Performance

The primary goal of monitoring SQL Server 2005 is to establish its level of performance by gathering various metrics. Because of the complexity of a client-server architecture, the nature of a concurrent system, and the various components involved in a database solution, it is otherwise virtually impossible to guarantee a level of performance or determine whether a bottleneck exists. A great aspect of SQL Server 2005 is that it offers a large range of tools and commands to enable you to easily gather various metrics. The trick is what to gather! Generally speaking, you're trying to achieve two performance goals with any relational database management system (RDBMS): a fast response time and high throughput. Ultimately, these are subjective concepts, so it is important to liaise with the users of the system. There is no point in chasing bottlenecks for their own sake, because there will always be one in any system.
When monitoring SQL Server performance, it is important to know the theoretical limits of the various components of the system. You will not be able to get 100Mbps of throughput out of a 10Mbps network interface card (NIC)!
When monitoring SQL Server performance, you can take two approaches: monitoring proactively and managing exceptions. Both these strategies have their respective advantages, and you should carefully consider your database environment before implementing one. Realistically, however, you’ll find that you’ll use both techniques as required. Ideally, although this hardly ever happens in my experience because of time and resource constraints, you should adopt a proactive monitoring strategy; you will focus on that in this chapter.
Monitoring Proactively

Setting up a proactive monitoring solution requires more initial work but is ultimately worth the effort. For enterprise environments, where you typically have multiple instances of SQL Server running, you can replicate this initial work across all instances. A proactive monitoring solution generally involves the following three steps:

1. Establishing a baseline
2. Implementing benchmarks
3. Setting up ongoing monitoring

You get several advantages from setting up a proactive monitoring solution:
You can identify performance trends and anticipate necessary configuration changes. The increase in user activity, disk space usage, and memory usage may require changes in your server configuration to accommodate the increased load.
You can take advantage of SQL Server 2005’s dynamic configuration parameters to optimize performance. For example, you could allocate more central processing units (CPUs) to a particular SQL Server instance at the end of the week when you run compute-intensive batch operations.
You gain the ability to discover otherwise unknown performance problems by monitoring server components or database objects. For example, excessive query recompilations may indicate that queries need to be optimized. It is common for hardware to mask these sorts of problems until one day, when you least expect it, the system hits a threshold and performance degrades substantially because the hardware can no longer mask the problem.
You can troubleshoot and anticipate performance problems. Having components that run at near capacity—such as memory pressure, for example—may not indicate an immediate problem but can raise your awareness to a potential problem that requires close monitoring.
You can make better long-term decisions regarding your database solution. Don’t forget that few metrics mean anything by themselves. By gathering performance metrics over a period of time, you empower the relevant stakeholders to make better decisions to guarantee any SLAs in place.
This method also has a number of drawbacks, although they really have to do with more work for the DBA:
Proactive monitoring can be a time-consuming process, especially for a DBA responsible for many applications. This can be a serious problem.
It requires a deeper understanding of SQL Server’s client-server architecture and of how the applications work with the database. This can be a particular problem when you have purchased a third-party application.
The information gathered needs to be reviewed on an ongoing basis.
You’ll now take a closer look at the steps involved in creating a proactive monitoring solution, and I’ll make some recommendations as to what to monitor to get you started. However, don’t forget you will need to take into account your database environment.
Establishing a Baseline

The first step, establishing a performance baseline, consists of recording how your server behaves under normal conditions. You will use the baseline to identify initial problems, if any, and to serve as a comparison later in the ongoing monitoring process. A simple method of building your baseline is to use the Performance Logs and Alerts tool. You typically start with a larger set of performance counters to get a good snapshot of the server’s state. Later, for the ongoing monitoring, you can reduce the number of counters and increase the sampling interval. SQL Server 2005 supports a massive list of performance counters for monitoring, and it is worth your while to become familiar with them through Books Online. However, I recommend the following counters as a basis for building a baseline because they represent the more important counters:

Memory–Pages/sec  The rate at which pages are read from or written to the disk to resolve hard page faults.

Network Interface–Bytes Total/sec  The number of bytes sent and received over each network adapter.

PhysicalDisk–Disk Transfers/sec  The rate of read and write operations on the disk. You can define a counter for each physical disk on the server.

Processor–% Processor Time  The percentage of time that the processor is executing a nonidle thread. This counter is used as a primary indicator of CPU activity.

SQLServer:Access Methods–Full Scans/sec  The number of unrestricted full scans of base tables or indexes.

SQLServer:Buffer Manager–Buffer Cache Hit Ratio  The percentage of pages found in the buffer pool without having to read from disk.
The Buffer Cache Hit Ratio is calculated over time differently than in earlier versions of SQL Server, so it is more “accurate” now.
SQLServer:Databases–Log Growths  The total number of log growths for the selected database.

SQLServer:Databases Application Database–Percent Log Used  The percentage of space in the log that is in use. (Run this against your application database instance.)

SQLServer:Databases Application Database–Transactions/sec  The number of transactions started for the selected database.

SQLServer:General Statistics–User Connections  The number of users connected to the system.

SQLServer:Latches–Average Latch Wait Time  The average latch wait time (in milliseconds) for latch requests that had to wait.
SQLServer:Locks–Average Wait Time  The average amount of wait time (in milliseconds) for each lock request that resulted in a wait.

SQLServer:Locks–Lock Waits/sec  The number of lock requests that could not be satisfied immediately and required the caller to wait.

SQLServer:Locks–Number of Deadlocks/sec  The number of lock requests that resulted in a deadlock.

SQLServer:Memory Manager–Memory Grants Pending  The current number of processes waiting for a workspace memory grant.

SQLServer:User Settable–Query  Used to define an application-specific counter.
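The Windows counters in this list (Memory, Processor, and so on) must be collected with System Monitor or Performance Logs and Alerts, but the SQLServer counters can also be sampled from T-SQL through the sys.dm_os_performance_counters DMV (covered later in this chapter), which is handy for quick spot checks. Here is a minimal sketch:

-- Sample a few of the recommended baseline counters from inside SQL Server.
-- Note: ratio counters such as Buffer cache hit ratio must be divided by
-- their matching "base" counter to be meaningful.
SELECT object_name, counter_name, instance_name, cntr_value
FROM sys.dm_os_performance_counters
WHERE counter_name IN ('Full Scans/sec',
                       'User Connections',
                       'Memory Grants Pending');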
Implementing a Benchmark

The purpose of a benchmark is to measure the performance of your hardware and software in a controlled environment under a specific type of load. For a SQL Server 2005–based application, benchmarking will give you an idea of how your server will perform under various types of loads and usage. You can implement a benchmark in several ways:
Write custom Transact-SQL (T-SQL) scripts that reflect your typical SQL Server load. This can be a difficult process because it is tough to know how users will be working with your database solution.
Use the SQL Profiler to record database activity, and use the replay feature for simulating a load. This is superior to the previous technique because it should more typically reflect the usage patterns of your database solution. It is also much easier to do.
Use load generation tools such as those found in the SQL Server Resource Kit.
Use existing benchmarking software such as that available from the Transaction Processing Performance Council (TPC) or SAP.
Although it can be a somewhat esoteric exercise, it is still worth your while to visit the TPC’s website at http://www.tpc.org to see the various benchmarks and resources, such as the technical articles.
You will usually get the best results from a custom benchmarking solution designed specifically for your application. Don’t forget to involve all stakeholders, including the developers and users, so you can get a better understanding of how your application should behave and ultimately set correct performance expectations for all concerned.
Monitoring on an Ongoing Basis

The ongoing monitoring solution is similar to the baseline approach discussed previously. You typically use the same tool, Performance Logs and Alerts, but with a reduced set of performance counters. In this case, based on the previous list, you would monitor only the following counters:
Memory–Pages/sec
Network Interface–Bytes Total/sec
PhysicalDisk–Disk Transfers/sec
Processor–% Processor Time
SQLServer:Access Methods–Full Scans/sec
SQLServer:Buffer Manager–Buffer Cache Hit Ratio
SQLServer:Databases Application Database–Transactions/sec
SQLServer:General Statistics–User Connections
SQLServer:Latches–Average Latch Wait Time
SQLServer:Locks–Average Wait Time
SQLServer:Locks–Lock Timeouts/sec
SQLServer:Locks–Number of Deadlocks/sec
SQLServer:Memory Manager–Memory Grants Pending
Typically you should also increase the sampling interval, depending on your particular requirements and database environment. For online transaction processing (OLTP) systems, your sampling frequency could be several seconds, whereas in the case of a decision support system (DSS) environment, several minutes might be appropriate. Another useful component of ongoing monitoring is implementing alerts. You should set up your SQL Server instance so it notifies you when there are performance problems. You will look at how you can set up SQL Server to notify you of important events in Chapter 7, but you still need to use the Performance Logs and Alerts tool, in conjunction with the SQL Server Agent, as the basis. Another important facet of ongoing monitoring that is commonly overlooked is periodically reviewing the SQL Server log, the SQL Server Agent log, and the Windows event logs. You can view them together using SQL Server Management Studio.
Don’t let your SQL Server log grow out of control! One of the most annoying things when you have a performance crisis is waiting several minutes for the SQL Server log to load in whatever graphical user interface (GUI) tool you are using. Take advantage of the sp_cycle_errorlog system stored procedure to periodically restart the log without having to shut down and restart SQL Server.
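Cycling the log is a one-line call, which makes it easy to schedule; for example, a nightly SQL Server Agent job could simply run the following:

-- Close the current SQL Server error log and start a new one without
-- restarting the service. Scheduling this keeps individual log files small.
EXEC sp_cycle_errorlog;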
Understanding the Factors Affecting SQL Server Performance

When designing a monitoring strategy, you should keep in mind the factors that can affect performance. This is important because you need to correctly identify which factor is causing the performance problems before responding to it. In the case of enterprise environments, this might involve bringing in specialized information technology (IT) personnel. You can group the factors that determine SQL Server performance into the following categories:

Hardware resources  The hardware resources involve components such as the server, memory, CPUs, and disk array subsystem.
Network resources  The network infrastructure obviously involves the hardware side of your system, such as your NICs, switches, and the configuration of such. But don’t forget that it might also incorporate your Active Directory, potential certificate servers, and other network dependencies.

Operating system  I’ve always maintained that the DBA should also know as much about the operating system as they possibly can. Invariably you will find that performance issues are related to the underlying version of Windows. Gone are the days when you could fob it off, unfortunately!

Database applications  SQL Server 2005 is more than just an RDBMS; you need to be concerned about other components, such as full-text indexes, SQL Server Agent, Reporting Services (SSRS), and SQL Server Integration Services (SSIS), all of which can be the cause of performance problems and need to be investigated.

Client applications  The way client applications have been written and the underlying technology they use can have a dramatic impact on a database solution’s performance. As a DBA, it is typically difficult to troubleshoot such problems, although tools such as SQL Profiler can help. Commonly it is sufficient for a DBA to narrow down performance problems to the client application, in which case developers or external companies are responsible for diagnosing the problems.
These days I am more commonly involved in refactoring existing database solutions or troubleshooting performance problems. It would be great to work on a “green fields” project, so please don’t hesitate to email me in that particular case. We’ll design and implement it correctly from the start.
An effective monitoring strategy should evaluate the entire database environment, considering all the previous factors because they can all individually impact the performance. Sometimes the performance problem can be a number of steps removed from the RDBMS. For example, a web application can be poorly written and lead to poor performance of SQL Server because of excessive network traffic or index fragmentation at the database level.
The Art of Troubleshooting Performance Problems

A couple of years ago I was responsible for rearchitecting a database solution in the coal industry. This particular database solution included a web portal that was used internationally to monitor the coal market. Three months after the system went live, performance ground to a halt, to the point of the system being unusable. It was quite embarrassing for the company because they were using the latest hardware and had spent a lot of resources on the project.
The problem, as it turned out, was neither SQL Server nor hardware related. In this case, the culprit was the web application. The web developers had decided to log every click on the website to a set of auditing tables. Well, after three months, these auditing tables, which had a number of poor indexes, were so massive that they degraded the performance of the entire SQL Server solution. I was able to determine this bottleneck only by applying a systematic approach to troubleshooting performance. Once discovered, the solution was easy to implement.
Evaluating the Tools for Monitoring Performance

SQL Server 2005 and the Windows operating systems come with a complete set of tools for monitoring, troubleshooting, and tuning performance. The choice of tool depends on your particular needs; sometimes you’ll require a combination of tools. You should also consider factors such as trend analysis capability, the possibility of replaying captured events, whether you can use the tool for ad hoc monitoring, alert generation options, the presence of a graphical interface, and the option of using the tool within a custom application. It is critical to use the appropriate tool for the job at hand.
Using Windows Tools

The tools that come with the Windows operating system have not dramatically changed since Windows NT first came out in the early 1990s, although they have been enhanced through subsequent releases. The most commonly used tools include the following:

Task Manager  Displays information about programs and processes running on the server (or desktop computer). You can use it to get an immediate overview of your server’s performance.

System Monitor (PerfMon)  Tracks resource usage on Windows operating systems. You can use it to monitor different local or remote instances of SQL Server. The resource usage is measured using a set of performance counters.

Performance Logs and Alerts  Allows you to collect performance data automatically from local or remote computers.

Network Monitor Agent  Detects network problems and identifies network traffic patterns.

The primary tools you will use as a DBA are the System Monitor and the Performance Logs and Alerts tools, so I’ll discuss them in more detail.
Using System Monitor

System Monitor allows you to monitor the hardware resources and system services on your server. It allows you to define which performance objects and performance counters should be monitored for object instances, as well as the sampling interval. Just in case you are not familiar with the terminology, let’s go through a quick overview. Performance data generated by a system component is described as a performance object. For example, the Processor object represents a collection of performance data about the processor (or processors) of your system. This object has a number of metrics that can be monitored, referred to as performance counters, which in this example would include % Processor Time. Object instances refer to multiple instances of an object, such as in the case of a multiprocessor server. You can use System Monitor to investigate performance-related issues for SQL Server and the Windows operating system and for local and remote computers. Monitoring on a local computer can add performance overhead that you can reduce by monitoring fewer counters, by increasing the sample interval, and by logging data to another disk. Another option to reduce the performance overhead is to monitor from a remote computer. However, this option will add network traffic, which you can reduce by adding a network card. You can also use System Monitor to view data simultaneously from multiple computers, to create charts, or to export data from charts.
You should use charts for real-time, short-term monitoring. For longer periods, use performance logs.
Using the Performance Logs and Alerts Tool

The Performance Logs and Alerts tool allows you to log performance data and generate alerts for specific events. Additionally, it has the following capabilities:
The performance data is collected automatically, and because logging runs as a service, a user doesn’t need to be logged on.
Data can be collected in several formats such as comma-separated, tab-separated, binary log file format, and SQL Server database format.
You can use the Performance Logs and Alerts tool to view logged counter data.
Collected data can be viewed not just when the collection is stopped but also during the collection.
When setting up alerts, you can specify various actions such as sending a message, running a program, or adding an entry to the application log.
When writing to log files, such as when using the Performance Logs and Alerts tool, make sure you have sufficient space on the partition to which you are writing the performance data. I know consultants who have forgotten to turn off logging and subsequently “crashed” the server after the system partition ran out of space.
Using SQL Server Tools

The range of tools and commands available with SQL Server 2005 has grown considerably since the SQL Server 4.21a days, when SQL Server shipped on four floppy disks. So, it’s a matter of gaining experience with the various tools and commands and becoming familiar with their usage and the information they return. Don’t fall into the trap of using only the tools you are already familiar with. With any new release of SQL Server, you should reevaluate all the tools because they might have substantially changed.
Using SQL Server Profiler

SQL Profiler is a graphical tool for creating and managing traces. A trace captures event data such as T-SQL statements, the start of a stored procedure executing, a lock being acquired or released on a database object, or security permission checks against objects. You can save the captured data in a file or a table for later analysis. You can also use SQL Profiler for monitoring Analysis Services. Another advantage of SQL Profiler in SQL Server 2005 is the option to correlate a trace with Windows performance log data. You will learn how to do that in an exercise later in this chapter.
Using a SQL Trace

Another way to trace the activity of a SQL Server instance is by using system stored procedures. You can use the following stored procedures to manage a SQL trace:

sp_trace_create  Used to create a trace definition

sp_trace_setevent  Used for existing traces to add or remove an event or event column

sp_trace_setfilter  Used to apply a filter to a trace

sp_trace_setstatus  Used to modify the current state of a trace

sp_trace_generateevent  Used to create a user-defined event
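The following is a minimal sketch of how these procedures fit together; the file path is illustrative, and the event and column IDs (event 12 is SQL:BatchCompleted, column 1 is TextData, column 13 is Duration) are documented in Books Online:

-- The trace procedures are strict about parameter types, so declare
-- variables of the exact types they expect.
DECLARE @TraceID int, @MaxFileSize bigint, @On bit;
SET @MaxFileSize = 5;   -- megabytes
SET @On = 1;

-- Create the trace definition; SQL Server appends .trc to the filename.
EXEC sp_trace_create @TraceID OUTPUT, 0, N'C:\Traces\MyTrace', @MaxFileSize, NULL;

-- Capture TextData and Duration for SQL:BatchCompleted events.
EXEC sp_trace_setevent @TraceID, 12, 1, @On;
EXEC sp_trace_setevent @TraceID, 12, 13, @On;

-- Start the trace (1 = start; 0 = stop; 2 = close and delete the definition).
EXEC sp_trace_setstatus @TraceID, 1;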
Using Dynamic Management Views and Functions

Dynamic management views (DMVs) are new objects in SQL Server 2005 that expose detailed server state information such as current connections, locks, requests, tasks, and memory allocation. They make SQL Server a transparent server, enhancing the information you could previously obtain from virtual tables such as sysprocesses, from system stored procedures, from database console commands (DBCC), by debugging, or from dumps. They also expose aggregate statistical data that was not available in previous releases of SQL Server. SQL Server 2005 has two types of DMVs: server-scoped DMVs and database-scoped DMVs. Although predominantly envisaged for use by Microsoft Product Support Services (PSS), DMVs provide rich information about the current status of your SQL Server instance. This is the direction of the future, so you should get used to using DMVs in preference to older commands. Many DMVs exist and cover various components of SQL Server. You’ll note that DMVs use a naming convention that makes them easier to use.
The DMVs will change over time. Already with SQL Server 2005 Service Pack 1, Microsoft has introduced new DMVs to allow you to monitor query execution memory grant status. So, always make sure you go through the list of new features and improvements included in SQL Server service packs.
You’ll now look at the DMVs that are supported with the release-to-manufacturing (RTM) version of SQL Server 2005.
Service Broker DMVs

You can query the DMVs described in Table 3.1 to return more information about the Service Broker, which is covered in Chapter 7.

TABLE 3.1  Service Broker DMVs

sys.dm_broker_activated_tasks  Returns a row for each stored procedure activated by Service Broker.

sys.dm_broker_connections  Returns a row for each Service Broker network connection.

sys.dm_broker_forwarded_messages  Returns a row for each Service Broker message that an instance of SQL Server is in the process of forwarding.

sys.dm_broker_queue_monitors  Returns a row for each queue monitor in the instance. A queue monitor manages the activation for a queue.
Common Language Runtime DMVs

The DMVs described in Table 3.2 will return more information about the common language runtime (CLR) environment.

TABLE 3.2  CLR DMVs

sys.dm_clr_appdomains  Returns a row for each application domain in the server. An application domain (AppDomain) is a construct in the Microsoft .NET Framework CLR that is the unit of isolation for an application.

sys.dm_clr_loaded_assemblies  Returns a row for each managed user assembly loaded into the server address space.

sys.dm_clr_properties  Returns a row for each property related to CLR integration.

sys.dm_clr_tasks  Returns a row for all CLR tasks that are currently running. A T-SQL batch that contains a reference to a CLR routine creates a separate task for executing all the managed code in that batch. Multiple statements in the batch that require managed code execution use the same CLR task.
Database DMVs

You can use the DMVs described in Table 3.3 to report on the state of the database, returning rich information that the developer can use to tune the database environment.

TABLE 3.3  Database DMVs

sys.dm_db_file_space_usage  Returns space usage information for each file in the database.

sys.dm_db_index_operational_stats  Returns current low-level input/output (I/O), locking, latching, and access method activity for each partition of a table or index in the database.

sys.dm_db_index_physical_stats  Returns size and fragmentation information for the data and indexes of the specified table or view.

sys.dm_db_index_usage_stats  Returns counts of different types of index operations and the time each type of operation was last performed.

sys.dm_db_mirroring_connections  Returns a row for each connection established for database mirroring.

sys.dm_db_missing_index_columns  Returns information about database table columns that are missing an index. sys.dm_db_missing_index_columns is a dynamic management function.

sys.dm_db_missing_index_details  Returns detailed information about missing indexes.

sys.dm_db_missing_index_group_stats  Returns summary information about groups of missing indexes.

sys.dm_db_missing_index_groups  Returns information about what missing indexes are contained in a specific missing index group.

sys.dm_db_partition_stats  Returns page and row count information for every partition in the current database.

sys.dm_db_session_space_usage  Returns the number of pages allocated and deallocated by each session for the database.

sys.dm_db_task_space_usage  Returns page allocation and deallocation activity by task for the database.
Query DMVs

The DMVs described in Table 3.4 will give you more information about the queries that are executing in your SQL Server environment.

TABLE 3.4  Query DMVs

sys.dm_exec_background_job_queue  Returns a row for each query processor job that is scheduled for asynchronous (background) execution.

sys.dm_exec_background_job_queue_stats  Returns a row that provides aggregate statistics for each query processor job submitted for asynchronous (background) execution.

sys.dm_exec_cached_plans  Returns information about the query execution plans that are cached by SQL Server for faster query execution.

sys.dm_exec_connections  Returns information about the connections established to this instance of SQL Server and the details of each connection.

sys.dm_exec_cursors  Returns information about the cursors that are open in various databases.

sys.dm_exec_plan_attributes  Returns one row per attribute associated with the plan specified by the plan handle.

sys.dm_exec_query_optimizer_info  Returns detailed statistics about the operation of the SQL Server query optimizer.

sys.dm_exec_query_plan  Returns the showplan execution plan in XML format for a T-SQL batch whose query execution plan resides in the plan cache.

sys.dm_exec_query_stats  Returns aggregate performance statistics for cached query plans. The view contains one row per query plan, and the lifetime of the row is tied to the plan itself. When a plan is removed from the cache, the corresponding row is eliminated from this view.

sys.dm_exec_requests  Returns information about each request that is executing within SQL Server.

sys.dm_exec_sessions  Returns one row per authenticated session on Microsoft SQL Server.

sys.dm_exec_sql_text  Returns the text of the SQL statement that is given the sql_handle for that statement.
An initial query of sys.dm_exec_query_stats might produce inaccurate results if a workload is currently executing on the server. You can get more accurate results by rerunning the query.
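For example, you can combine sys.dm_exec_query_stats with the sys.dm_exec_sql_text dynamic management function to find the most expensive cached queries. This is a minimal sketch; the TOP (10) and the ordering by logical reads are arbitrary choices you can adapt:

-- Return the ten cached query plans with the most logical reads,
-- together with the text of the underlying SQL statement.
SELECT TOP (10)
    qs.total_logical_reads,
    qs.execution_count,
    st.text AS statement_text
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
ORDER BY qs.total_logical_reads DESC;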
Full-Text Index DMVs

You can use the DMVs described in Table 3.5 to find out more with respect to the full-text indexes and engine.

TABLE 3.5  Full-Text Index DMVs

sys.dm_fts_active_catalogs  Returns information about the full-text catalogs that have some population activity in progress on the server.

sys.dm_fts_crawl_ranges  Returns information about the specific ranges related to a full-text index population currently in progress.

sys.dm_fts_crawls  Returns information about the full-text index populations currently in progress.

sys.dm_fts_memory_buffers  Returns information about memory buffers belonging to a specific memory pool that are used as part of a full-text crawl or a full-text crawl range.

sys.dm_fts_memory_pools  Returns information about the memory pools used as part of a full-text crawl or a full-text crawl range.
I/O DMVs

The DMVs described in Table 3.6 will return I/O-related information.

TABLE 3.6  I/O DMVs

sys.dm_io_backup_tapes  Identifies the list of backup devices and the status of mount requests for backups.

sys.dm_io_cluster_shared_drives  Returns the drive name of the shared drives if the current server is a clustered server.

sys.dm_io_pending_io_requests  Returns a row for each pending I/O request in SQL Server.

sys.dm_io_virtual_file_stats  Returns I/O statistics for data and log files.
SQL Operating System DMVs

If you want to learn more about the SQL operating system, you should use the DMVs described in Table 3.7. Mind you, you had better brush up on your operating system theory.

TABLE 3.7  SQL Operating System DMVs

sys.dm_os_buffer_descriptors  Returns the buffer pool buffer descriptors that are being used by a database on an instance of SQL Server.

sys.dm_os_child_instances  Returns a row for each SQL Server Express user instance that has been created from the parent instance.

sys.dm_os_cluster_nodes  Returns a row for each node in the virtual server configuration.

sys.dm_os_hosts  Returns all the hosts currently registered in an instance of SQL Server. This view also returns the resources that are used by these hosts.

sys.dm_os_latch_stats  Returns information about all latch waits organized by class.

sys.dm_os_loaded_modules  Returns a row for each module loaded into the server address space.

sys.dm_os_memory_cache_clock_hands  Returns the status of each hand for a specific cache clock.

sys.dm_os_memory_cache_counters  Returns a snapshot of the health of a cache.

sys.dm_os_memory_cache_entries  Returns information about all entries in caches. Use this view to trace cache entries to their associated objects.

sys.dm_os_memory_cache_hash_tables  Returns a row for each active cache in the instance of SQL Server.

sys.dm_os_memory_clerks  Returns the set of all memory clerks that are currently active in the instance of SQL Server.

sys.dm_os_memory_objects  Returns memory objects that are currently allocated by SQL Server.

sys.dm_os_memory_pools  Returns a row for each object store in the instance of SQL Server.

sys.dm_os_performance_counters  Returns a row per performance counter maintained by the server.

sys.dm_os_schedulers  Returns one row per scheduler in SQL Server, where each scheduler is mapped to an individual processor.

sys.dm_os_stacks  Is used internally by SQL Server to keep track of debug information.

sys.dm_os_sys_info  Returns a miscellaneous set of useful information about the computer and about the resources available to and consumed by SQL Server.

sys.dm_os_tasks  Returns one row for each task that is active in the instance of SQL Server.

sys.dm_os_threads  Returns a list of all SQL Server operating system threads that are running under the SQL Server process.

sys.dm_os_virtual_address_dump  Returns information about a range of pages in the virtual address space of the calling process.

sys.dm_os_wait_stats  Returns information about the waits encountered by threads that are in execution.

sys.dm_os_waiting_tasks  Returns information about the wait queue of tasks that are waiting on some resource.

sys.dm_os_workers  Returns a row for every worker in the system.
Query Notification DMVs

The lone DMV described in Table 3.8 returns information about query notifications, which are covered in Chapter 7.

TABLE 3.8  Query Notification DMVs

sys.dm_qn_subscriptions  Returns information about the active query notification subscriptions in the server.
Replication DMVs

The set of DMVs described in Table 3.9 will return basic information about your replication environment.

TABLE 3.9  Replication DMVs

sys.dm_repl_articles  Returns information about database objects published as articles in a replication topology.

sys.dm_repl_schemas  Returns information about table columns published by replication.

sys.dm_repl_tranhash  Returns information about transactions being replicated in a transactional publication.

sys.dm_repl_traninfo  Returns information about each replicated transaction.
Transaction DMVs

The DMVs described in Table 3.10 will give more information about the transactions that are executing in your SQL Server environment.

TABLE 3.10  Transaction DMVs

sys.dm_tran_active_snapshot_database_transactions  Returns a virtual table for all active transactions that generate or potentially access row versions.

sys.dm_tran_active_transactions  Returns information about transactions for the SQL Server instance.

sys.dm_tran_current_snapshot  Returns a virtual table that displays all active transactions at the time the current snapshot transaction starts.

sys.dm_tran_current_transaction  Returns a single row that displays the state information of the transaction in the current session.

sys.dm_tran_database_transactions  Returns information about transactions at the database level.

sys.dm_tran_locks  Returns information about currently active lock manager resources.

sys.dm_tran_session_transactions  Returns correlation information for associated transactions and sessions.

sys.dm_tran_top_version_generators  Returns a virtual table for the objects that are producing the most versions in the version store.

sys.dm_tran_transactions_snapshot  Returns a virtual table for the sequence_number of transactions that are active when each snapshot transaction starts.

sys.dm_tran_version_store  Returns a virtual table that displays all version records in the version store.
Figure 3.1 shows the output of the sys.dm_os_memory_objects DMV.

FIGURE 3.1  Output of sys.dm_os_memory_objects

Using SQL Server Management Studio

The new management tool for SQL Server 2005 offers several improvements for monitoring:

Activity Monitor  Returns information about user connections and locks. Figure 3.2 shows the Activity Monitor available in SQL Server Management Studio.

FIGURE 3.2  Activity Monitor

Graphical execution plan  Shows a graphical and exportable representation of the query execution plan. Figure 3.3 shows the graphical execution plan.

Error logs  Contain information about events that occur on the Windows operating systems, as well as events in SQL Server, SQL Server Agent, and full-text search. SQL Server Management Studio allows you to view all error logs from a variety of sources, such as Windows, SQL Server, SQL Server Agent, or Database Mail, together.
Using DBCC Commands

The Database Console Commands (DBCC) commands are a legacy of the Sybase days but can still be used for a variety of purposes, such as enabling trace flags, performing maintenance tasks, and displaying various types of information. Table 3.11 shows a number of commonly used DBCC commands.
Did you know that Microsoft has changed what DBCC stands for? It is now short for Database Console Commands; many DBAs still incorrectly think it stands for Database Consistency Checker. Thanks for that bit of trivia, Paul. Sláinte!
FIGURE 3.3  Graphical execution plan
TABLE 3.11  Common DBCC Commands

DBCC CHECKDB  Checks and repairs database problems.

DBCC SHOW_STATISTICS  Returns the current distribution statistics for the specified table or view.

DBCC PROCCACHE  Displays information about the procedure cache.

DBCC OPENTRAN  Returns information about the oldest active transactions within the specified database.

DBCC LOGINFO  Shows the internal layout of the transaction log file, including which virtual log files (VLFs) are active.

DBCC SHOWCONTIG  Returns the level of internal and external fragmentation in a table or index.

DBCC TRACEON  Turns on trace flags.

DBCC TRACEOFF  Turns off trace flags.

DBCC USEROPTIONS  Returns the current SET options.

DBCC DROPCLEANBUFFERS  Removes all buffers from the buffer pool.

DBCC FREEPROCCACHE  Invalidates all elements from the procedure cache.

DBCC INDEXDEFRAG  Defragments indexes of a table or view.

DBCC FREESYSTEMCACHE  Releases all unused cache elements from all caches.

DBCC CLEANTABLE  Reclaims space for dropped variable-length columns and text columns.

DBCC UPDATEUSAGE  Reports and corrects page and row count inaccuracies in the database catalog views.

DBCC HELP  Returns syntactical information for the specified DBCC command.
Figure 3.4 shows the output of the DBCC SHOWCONTIG command.

FIGURE 3.4  Output of DBCC SHOWCONTIG
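As a quick illustration, here is a minimal sketch of two of the commands from Table 3.11; the table name is illustrative (it assumes the AdventureWorks sample database):

-- Report internal and external fragmentation for one table;
-- WITH TABLERESULTS returns a rowset instead of text messages.
DBCC SHOWCONTIG ('Sales.SalesOrderDetail') WITH TABLERESULTS;

-- Display the syntax of another DBCC command.
DBCC HELP ('CHECKDB');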
Be careful when using undocumented DBCC commands because they are officially unsupported. For a list of undocumented SQL Server DBCC commands, visit http://www.SQLServerSessions.com. Furthermore, some DBCC commands, such as DBCC SHOWCONTIG and DBCC INDEXDEFRAG, are deprecated and will be removed in a future release of SQL Server.
Using System Functions

SQL Server 2005 has several built-in statistical and system functions that display status information. Although not commonly used, they still provide some useful, typically global information. Table 3.12 shows a number of system functions available in SQL Server 2005.

TABLE 3.12  SQL Server 2005 System Functions

@@CONNECTIONS  Returns the number of attempted connections.

@@TOTAL_READ  Returns the number of disk reads by SQL Server.

@@PACK_SENT  Returns the number of packets written to the network by SQL Server.

fn_virtualfilestats  Returns I/O statistics for database files.
Figure 3.5 shows the output of the fn_virtualfilestats system function.
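The following minimal sketch shows both kinds of functions in action; passing NULL for both parameters of fn_virtualfilestats returns statistics for all files of all databases:

-- Global, instance-wide statistics accumulated since SQL Server last started.
SELECT @@CONNECTIONS AS AttemptedConnections,
       @@TOTAL_READ  AS DiskReads,
       @@PACK_SENT   AS PacketsSent;

-- I/O statistics for every file of every database.
SELECT * FROM sys.fn_virtualfilestats(NULL, NULL);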
Using Trace Flags

You can use SQL Server trace flags to enhance the information available for diagnostic or troubleshooting purposes, such as memory allocation, for example. Table 3.13 shows some documented SQL Server trace flags.

TABLE 3.13  SQL Server Trace Flags

1204  Returns the type of locks participating in the deadlock and the current command affected.

1211  Disables lock escalation based on memory pressure or the number of locks.

1224  Disables lock escalation based on the number of locks.

2528  Disables parallel checking of objects by DBCC CHECKDB, DBCC CHECKFILEGROUP, and DBCC CHECKTABLE.

3625  Limits the information returned in error messages.

FIGURE 3.5  Output of fn_virtualfilestats
Trace flags are typically set on and off via the DBCC TRACEON and DBCC TRACEOFF commands. Alternatively, you can use the –T SQL Server service startup option. Use the DBCC TRACESTATUS command to see which trace flags have been set.
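Here is a minimal sketch; the -1 argument applies the flag globally to all connections rather than only to the current session:

-- Enable deadlock reporting to the error log for all connections.
DBCC TRACEON (1204, -1);

-- List the trace flags that are currently enabled.
DBCC TRACESTATUS (-1);

-- Turn the flag off again.
DBCC TRACEOFF (1204, -1);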
Be careful when using undocumented trace flags because they are officially unsupported. For a list of undocumented SQL Server trace flags, visit http://www.SQLServerSessions.com.
Using the Database Engine Tuning Advisor

The Database Engine Tuning Advisor (DTA) is the replacement for the Index Tuning Wizard (ITW). It can analyze workloads (sets of T-SQL statements) and recommend changes to the physical design structures of a database, such as adding, removing, or modifying indexes,
indexed views, and partitions. Figure 3.6 shows the Database Engine Tuning Advisor interface and the advanced options available.

FIGURE 3.6  The Database Engine Tuning Advisor
The advisor is particularly useful for assessing the adequacy of indexes and picking up other problems when you have purchased a third-party database solution and therefore do not have development knowledge of the underlying database.
You can use the SQL Server 2005 DTA to tune SQL Server 2000 databases. In theory, it should do a better job than the SQL Server 2000 ITW because Microsoft has invested more research and development in the DTA.
Using System Stored Procedures

SQL Server has always come with a comprehensive set of system stored procedures that you can use to determine the state of your SQL Server instance and to troubleshoot and performance-tune your database solutions. These system stored procedures are typically located in the master system database.
If you want to learn more about SQL Server and become a real guru, I highly recommend browsing the source code of various system stored procedures where you’ll learn more about the internals of SQL Server, undocumented features, and good programming techniques. You’ll also see examples of poor programming techniques.
Table 3.14 describes a number of system stored procedures that you can use for performance tuning and troubleshooting problems.

TABLE 3.14  System Stored Procedures

sp_who  Returns information about current SQL Server users and processes. You can replace it with the sys.dm_exec_sessions and sys.dm_exec_requests DMVs.

sp_lock  Returns locking information such as the object ID, index ID, type of lock, and type of resource to which the lock applies. You can replace it with the sys.dm_tran_locks DMV.

sp_monitor  Returns statistical information such as CPU usage or I/O usage.
Figure 3.7 shows the output of the sp_monitor system stored procedure.

FIGURE 3.7  Output of sp_monitor
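As a minimal sketch of the DMV replacements mentioned in Table 3.14, the following query returns roughly the same information as sp_who, with the added ability to filter and join like any other view:

-- List user sessions together with any request they are currently running.
-- Session IDs of 50 and below are reserved for system processes.
SELECT s.session_id, s.login_name, s.host_name, s.status,
       r.command, r.wait_type, r.blocking_session_id
FROM sys.dm_exec_sessions AS s
LEFT JOIN sys.dm_exec_requests AS r
    ON r.session_id = s.session_id
WHERE s.session_id > 50;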
Using the Dedicated Administrator Connection

I’ll end your look at the monitoring tools with some words about the dedicated administrator connection (DAC). The DAC is a special diagnostic connection—typically invoked via the –A switch when using the SQLCMD command-line utility—for administrators to use when SQL
Server is not responding to standard connection requests. It’s designed to allow DBAs to get into an “unresponsive” SQL Server and perform diagnostic and troubleshooting commands.
By default on SQL Server 2005, the DAC is allowed only from a client running on the SQL Server instance. You will need to allow network connections via the SQL Server Surface Area Configuration tool; you will examine how to do this in more detail in Chapter 4.
Don’t forget to close the DAC, because only one is allowed per SQL Server instance. Otherwise, you will be blocking all other administrators from using it.
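As a minimal sketch, you would connect with sqlcmd -S YourServer -A (the server name is illustrative) and then run lightweight diagnostic queries such as the following:

-- Look for blocked or long-running requests from the DAC.
SELECT session_id, status, command, wait_type, blocking_session_id
FROM sys.dm_exec_requests
WHERE session_id > 50;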
Choosing a Monitoring Tool

Several factors can guide you in selecting a tool for performance monitoring besides the type of information you want to track. The following are the various considerations you should take into account when deciding on the appropriate tool to use.
Minimizing Performance Overhead

Using a tool to measure performance will result in performance overhead. How much overhead depends on the server’s power, the quantity of information collected, and the method of monitoring. For example, using System Monitor locally can add up to 18–20 percent performance overhead if you monitor a low-end system and add all the performance counters you can imagine. To avoid that performance penalty, you can reduce the number of counters, increase the sampling interval, and monitor from a remote computer. Remote monitoring will generate another type of overhead that you should be aware of; namely, it will increase the amount of network traffic generated. However, an additional network adapter will solve this problem. Another important tool that comes with a price, in terms of performance, is SQL Profiler. In certain circumstances it can consume up to 30 percent of your server’s power. How can you prevent this? Reduce the number of events, store the resulting trace in a file (instead of a table), use the SQL Trace stored procedures instead of the graphical interface, and so on. The new DMVs and functions can also be a source of overhead for your server in special situations, such as monitoring memory allocation per object or monitoring locking. Use Books Online (and of course common sense) to decide how to use them.
Addressing Trend Analysis Requirements

Another factor that can influence your decision is the capability of trend analysis. SQL Profiler and System Monitor are the perfect candidates for this purpose, though you can also use the DMVs and functions with a small additional effort. Don’t forget, however, that DMVs typically represent a snapshot of your system at a point in time, although some are cumulative. Make sure you are familiar with each DMV’s behavior by reading Books Online.
Addressing Automatic Monitoring Requirements

The need for automatic, unattended monitoring will often require that you use Performance Logs and Alerts instead of System Monitor, or SQL Trace instead of SQL Profiler. Although you can use a range of third-party tools, the SQL Server development team has provided an impressive array of functionality, so it’s more a matter of learning how to use the tools.
Considering Additional Factors

Several other factors can help you decide which tool should be used, including the presence of a graphical interface, the capability of generating alerts, the ability to replay captured events, and the option of embedding the tool within a custom application.
Correlating a Trace with Windows Performance Log Data Using SQL Profiler

One of the benefits of SQL Profiler is the option to correlate performance counter data (recorded with System Monitor or the Performance Logs and Alerts tool) with SQL Server or SQL Server 2005 Analysis Services events. Exercise 3.1 will show you how. You will create a deadlock and see it reflected in the SQLServer:Locks Number of Deadlocks/sec counter. You will first set up the performance log, start a trace, and after that run some simple T-SQL statements to create a deadlock. I want to mention that SQL Profiler can be of great help in detecting the cause of a deadlock. However, for this demonstration, you will use it just to record the T-SQL statements that will generate the deadlock.

EXERCISE 3.1: Correlating a Trace with Windows Performance Log Data

1. Use the Windows Start menu, and choose All Programs  Administrative Tools  Performance.

2. Expand Performance Logs and Alerts (in the Windows Performance tool), right-click Counter Logs, and click New Log Settings.

3. Type Deadlock Log as the name for the counter log, and click OK.

4. On the General tab, click Add Counters.

5. In the Performance Object box, select SQLServer:Locks.

6. Add the Number of Deadlocks/sec counter, and leave _Total selected in the list of instances.

7. Click Close.

8. Enter 1 as the value for the Interval box under Sample Data Every.
9. Click the Log Files tab, and choose Text File (Comma Delimited) from the Log File Type list (so you can share the log file among different versions of Windows or view the log files later with Microsoft Excel).

10. On the Schedule tab, specify Manually for both the Start Log and Stop Log options.

11. Click OK to create the performance log.

12. Click the Counter Logs node, right-click Deadlock Log, and select Start from the context menu. Leave the System Monitor console open.
13. From the Windows Start menu, choose All Programs  Microsoft SQL Server 2005, and click SQL Server Management Studio. For this exercise, leave SQL Server Management Studio open.

14. Connect to your SQL Server, and then from the Tools menu click SQL Server Profiler.

15. In the File menu (of SQL Profiler), select New Trace, and connect to your SQL Server.
16. On the General tab, enter Correlation Trace as the trace name, select the TSQL_Replay template, and check the Save to File box to specify the trace file location and filename (I used C:\Correlation Trace.trc). Click Run.

17. Switch back to SQL Server Management Studio.

18. Open a new query window, and run the following query:

USE tempdb ;
GO
CREATE TABLE Employee (
    EmployeeID INT,
    EmployeeName VARCHAR(64)
)
GO
CREATE TABLE Orders (
    OrderID INT,
    Amount INT
)
GO
INSERT INTO Employee VALUES(69, 'Paula Verkhivker')
GO
INSERT INTO Orders VALUES(1000, 200)
GO
19. Type the following statements in the query window but do not run the query yet:

USE tempdb ;
GO
BEGIN TRAN
UPDATE Employee
SET EmployeeName = 'Angelina Jolie'
WHERE EmployeeID = 69
WAITFOR DELAY '00:00:10'
UPDATE Orders
SET Amount = 300
WHERE OrderID = 1000
20. Open a new query window, and type the following statements:

USE tempdb ;
GO
BEGIN TRAN
UPDATE Orders
SET Amount = 350
WHERE OrderID = 1000
WAITFOR DELAY '00:00:10'
UPDATE Employee
SET EmployeeName = 'Lara Croft'
WHERE EmployeeID = 69
21. Run the query, and then switch to the first query window and run that query as well. This should create a deadlock. In one of the query windows you will get an error message: “Msg 1205, Level 13, State 45, Line 6. Transaction (Process ID 52) was deadlocked on lock resources with another process and has been chosen as the deadlock victim. Rerun the transaction.”
22. Switch to System Monitor, and stop the Deadlock Log (click the Counter Logs node, right-click Deadlock Log, and select Stop from the context menu).

23. Switch to SQL Profiler, stop the trace, and then close it.

24. From the File menu of SQL Profiler, select Open, and then click Trace File. Open the trace file you just created (C:\Correlation Trace.trc).

25. From the File menu, select Import Performance Data, and select the log file recorded previously (the default path and name are C:\PerfLogs\Deadlock Log_000001.csv).

26. In the Performance Counter Limit dialog window, check the instance name of your server.
27. You can play with the pointer both in the trace and in the Performance Data window to get a feel for the tool. The last thing I want to mention is that you can go directly to the maximum value of the counter (the value 1 in this case) by right-clicking the counter (you have just one counter) and selecting Go to Max Value.
Detecting and Responding to Performance Problems

As your application’s environment changes, its performance will change too. In time these changes, such as an increased number of user connections, fragmentation of indexes, and growth in data size, can cause performance problems. That is why it is critical to establish a baseline; only through this baseline can you determine what has changed and thus identify the cause of possible performance problems. As part of a performance-monitoring solution, you can develop a plan to detect and respond to performance problems.
I will explore several scenarios for how to identify performance bottlenecks and possible measures to solve them. First I’ll define what bottleneck means: a bottleneck is generally one component in a chain of components that has limited capacity and thus reduces the capacity of the whole chain. In software systems, bottlenecks are caused by several factors, such as insufficient resources, malfunctioning components, incorrectly configured resources, and workloads that are not distributed evenly. These are the major subsystems that reflect bottleneck areas that can affect SQL Server performance:
Memory
CPU
Disk I/O
The tempdb system database
Poorly running queries
Locking
In the following sections, you’ll go through these subsystems and learn how you can detect potential bottlenecks in them.
Troubleshooting Memory Problems

Insufficient memory is one of the major causes of SQL Server performance degradation because it generates excessive paging, increases I/O activity, and slows down the system. To diagnose and monitor memory problems, you should have a good understanding of the memory architecture in SQL Server 2005 as well as understand the concept of memory pressure (there are different types of memory pressure, such as internal and external, physical and virtual). In addition, you should understand how SQL Server reacts to each type of memory pressure and what corrective actions you can take. Having memory pressure is not necessarily the sign of a problem but may indicate that your SQL Server is running near its capacity and that memory errors could occur in the near future.
Recognizing Causes of Memory Problems

A number of signs could indicate your SQL Server is having memory problems. You could experience the following signs when you have insufficient memory:
SQL Server starts generating one of the error messages described in Table 3.15.
The system experiences intense I/O activity, because a lack of memory will typically result in intensive paging.
The system appears to be slow from a user’s point of view. (However, this tends to be a subjective metric, so you should not rely on it exclusively.)
Low values for the SQL Server:Buffer Manager–Buffer Cache Hit Ratio and SQL Server:Buffer Manager–Page Life Expectancy counters. Low values mean less than 90 percent for Buffer Cache Hit Ratio and less than 300 seconds for Page Life Expectancy.
TABLE 3.15  Error Messages

701  There is insufficient system memory to run this query.

802  There is insufficient memory available in the buffer pool.

8628  A timeout occurred while waiting to optimize the query. Rerun the query.

8645  A timeout occurred while waiting for memory resources to execute the query. Rerun the query.

8651  Could not perform the requested operation because the minimum query memory is not available. Decrease the configured value for the ‘min memory per query’ server configuration option.
Detecting Memory Problems

SQL Server 2005 comes with a variety of tools and commands that can help you detect memory problems. It’s a matter of becoming familiar with these different techniques. Monitoring the following performance counters will show memory pressure:
For Memory–Available Bytes (the number of bytes of memory available), when available memory drops into the 50–100MB range, you should investigate whether the system has memory problems. When the value is less than 10MB, it is a certain sign of external memory pressure.

For Process–Working Set (the amount of memory used by a process), if the value is consistently less than the amount of memory that is set by the ‘min server memory’ and ‘max server memory’ options, you need to identify the underlying root cause.
For SQL Server:Buffer Manager–Buffer Cache Hit Ratio (the percentage of pages found in the buffer cache), ideally the value will be greater than 90 percent.
For SQL Server:Buffer Manager–Page Life Expectancy (the number of seconds a page will remain in the buffer pool), a value less than 300 seconds can indicate problems.
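If you prefer T-SQL over System Monitor, you can read the same Buffer Manager counters through the sys.dm_os_performance_counters DMV. The following is a minimal sketch; note that on a named instance the object_name value starts with MSSQL$<instance> rather than SQLServer, and that Buffer Cache Hit Ratio must be divided by its base counter to yield a percentage:

-- Read memory-related Buffer Manager counters from T-SQL
SELECT [object_name], counter_name, cntr_value
FROM sys.dm_os_performance_counters
WHERE [object_name] LIKE '%Buffer Manager%'
  AND counter_name IN ('Buffer cache hit ratio',
                       'Buffer cache hit ratio base',
                       'Page life expectancy');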
Don’t get carried away with the SQL Server:Buffer Manager–Page Life Expectancy counter. In 2005 everyone was talking about it; you saw it at all the SQL Server conferences around the world. It was the counter of the year! Or was it the most misunderstood counter of the year? There was a lot of confusion about what values you should expect. The problem is that it is so dependent on your operational environment. So, be careful about drawing any conclusions by looking at that counter by itself. Use it more as a correlating factor.
Several memory-related DMVs can help you detect and analyze memory problems:
The sys.dm_os_memory_clerks DMV displays memory for various components such as CLR or extended stored procedures.
The sys.dm_os_ring_buffers DMV returns the content of internal ring buffers, which is particularly useful in seeing the internal memory notifications sent to SQL Server components, out-of-memory conditions, and so on.
Other DMVs. If you need to drill down into a particular problem, you can use the following DMVs: sys.dm_os_memory_objects, sys.dm_os_memory_cache_clock_hands, and sys.dm_os_cache_counters. The DBCC MEMORYSTATUS command will return a snapshot of the current memory status of Microsoft SQL Server. You will find it useful for troubleshooting memory allocation issues or out-of-memory errors.
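For example, the following sketch ranks memory clerks by their total allocations, which is often the quickest way to see whether the buffer pool, the procedure cache, or something like CLR is the major consumer (the column names shown are the SQL Server 2005 ones):

-- Top ten memory consumers by clerk type
SELECT TOP (10)
    [type],
    SUM(single_pages_kb + multi_pages_kb
        + virtual_memory_committed_kb + awe_allocated_kb) AS total_kb
FROM sys.dm_os_memory_clerks
GROUP BY [type]
ORDER BY total_kb DESC;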
You can find more details about how to use the DBCC MEMORYSTATUS command in the Knowledge Base article “How to use the DBCC MEMORYSTATUS command to monitor memory usage on SQL Server 2005” at http://support.microsoft.com/kb/907877.
The Task Manager Windows utility is a quick and easy tool to use. It’s similar to the use of the Memory: Available Bytes performance counter, and you can use the Task Manager’s Performance tab and check the Physical Memory section to get the available memory.
Resolving Memory Problems

Generally when you have memory pressure, the first task you need to perform is to determine whether it is related to SQL Server or to the operating system. You can follow these guidelines for solving memory problems:
Check for external memory pressure. If you lack physical memory, find major system memory consumers, such as unnecessary services, and try to eliminate them if possible. Otherwise, you have to consider adding more random access memory (RAM), although poor application design and inefficient indexing strategies can consume more memory. If the external pressure is due to a lack of virtual memory, consider increasing the swap file size and, if possible, again find and eliminate the major consumers of virtual memory.
Verify and correct, if possible, the SQL Server’s memory configuration such as the min memory per query, min/max server memory, and AWE enabled configuration options, as well as the Lock Pages in Memory privilege. This might be particularly relevant if you have multiple instances of SQL Server running on the same server.
Check for internal memory pressure. Identify major memory consumers inside SQL Server, and identify the cause of memory pressure, considering the existing workload, design issues, or other possible bottlenecks.
Troubleshooting CPU Problems

Consistently high CPU utilization may indicate the need for tuning your queries and ultimately the need for a CPU upgrade. However, before you decide to buy new hardware, you should determine the cause of CPU performance problems and find a possible resolution.
Generally, you will find that most SQL Servers are more likely to be I/O bound than compute bound. Given today’s CPU hardware and the relatively inexpensive cost of multiple CPUs or even multicore CPUs, the memory or I/O subsystems will generally be more likely candidates as bottlenecks, especially memory, because the amount of data (and to a degree the number of concurrent users) is almost always growing.
Detecting CPU Problems

Several tools, such as System Monitor (or Performance Logs and Alerts), Task Manager, or various DMVs and functions, can help you detect processor performance problems. Let’s start with the performance counters that allow you to determine the CPU usage:
The Processor– % Processor Time performance counter represents the amount of time your CPU is spending executing a nonidle thread. Your CPUs are generally considered to be a bottleneck if the counter is consistently greater than 80 percent. Please note that for multiprocessor systems you can monitor a separate instance of this counter for each processor. If you need the average value for all CPUs, you can use the System– % Total Processor Time counter.
The System– Processor Queue Length indicates the number of threads waiting for processor time. A value greater than 2 may indicate a CPU bottleneck.
Task Manager can tell you whether SQL Server is using the CPU or whether another process is using most of the CPU to the detriment of SQL Server. Another option to detect processor bottlenecks is to use DMVs and functions, specifically the following ones:
The sys.dm_os_schedulers DMV monitors the condition of schedulers or identifies runaway tasks. In this monitoring case, the runnable_tasks_count column is the important one. It represents the number of workers waiting to be scheduled on the runnable queue. If the value is frequently greater than zero, you have a CPU problem.
You can use the sys.dm_exec_query_stats DMV to get aggregate performance statistics for cached query plans. Two columns of this DMV can help you identify CPU-intensive queries: the execution_count column that represents the number of times the plan has been executed and the total_worker_time column that will give you the total amount of CPU time consumed by executions of a plan.
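Putting those two columns together, a minimal sketch for finding the most CPU-hungry cached queries looks like this (total_worker_time is reported in microseconds, and sys.dm_exec_sql_text is used here only to retrieve the query text):

-- Top ten cached queries by total CPU time
SELECT TOP (10)
    qs.total_worker_time / 1000 AS total_cpu_ms,
    qs.execution_count,
    (qs.total_worker_time / qs.execution_count) / 1000 AS avg_cpu_ms,
    st.[text] AS query_text
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
ORDER BY qs.total_worker_time DESC;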
Resolving CPU Problems

You could have many potential causes for CPU bottlenecks, and this is where your knowledge of your database solution becomes critical. It is typically important to talk to the developers of the client application to understand how it works with your SQL Server databases.
Excessive Compilations and Recompilations

Compilations and recompilations of query plans are potentially CPU intensive and can have various causes such as schema changes, statistics updates, deferred compiles, and various SET option changes, as well as changes to temporary tables and stored procedures created with the RECOMPILE query hint, to name just a few. Figure 3.8 gives a brief overview of how SQL Server 2005 decides whether to perform a recompilation.
FIGURE 3.8 Recompilation

[The flowchart shows that a cache lookup either succeeds or starts query compilation. During compilation, SQL Server loads all of the “interesting” statistics, refreshes any statistics that need refreshing, generates the query plan, and sets recompilation thresholds (RT) for all of the tables referenced in the query. Before execution, the plan is tested for correctness-related reasons (schema checks); an invalid schema, newer available statistics, or stale statistics each trigger a recompilation, after which query execution begins.]
Detecting excessive compilation and recompilation can sometimes be a bit complex, but you would typically use the following resources:
Use System Monitor to monitor the following counters:
SQL Server:SQL Statistics–Batch Requests/sec
SQL Server:SQL Statistics–SQL Compilations/sec
SQL Server:SQL Statistics–SQL Recompilations/sec
Use SQL Trace, and watch the SP:Recompile/SQL:StmtRecompile events.
Query the sys.dm_exec_query_optimizer_info DMV to get an idea of the time SQL Server has spent on optimization.
Query the sys.dm_exec_query_stats DMV to examine the number of plan generations and executions for queries.
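As a rough sketch of the first two DMV techniques, the following batch samples the optimizer statistics and the compilation counters. Keep in mind that cntr_value for the /sec counters is cumulative since the instance started, so you need to sample twice and compare the deltas to get a true rate, and that for the ‘elapsed time’ counter the value is an average per optimization:

-- How many optimizations the optimizer has done, and its elapsed time
SELECT counter, occurrence, value
FROM sys.dm_exec_query_optimizer_info
WHERE counter IN ('optimizations', 'elapsed time');

-- Compilations and recompilations relative to overall batch activity
SELECT counter_name, cntr_value
FROM sys.dm_os_performance_counters
WHERE [object_name] LIKE '%SQL Statistics%'
  AND counter_name IN ('Batch Requests/sec',
                       'SQL Compilations/sec',
                       'SQL Re-Compilations/sec');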
Resolving excessive compilations or recompilations can be quite difficult, especially if you aren’t familiar with the underlying T-SQL code or with how SQL Server 2005’s query optimizer works. For resolution, you can follow these recommendations:
Avoid changing SET options in stored procedures.
Change your T-SQL code to use table variables instead of temporary tables.
Take advantage of the KEEP PLAN query hint.
Don’t forget that you can use the Database Engine Tuning Advisor to see whether any indexing changes improve the compile time and the execution time for queries.
Inefficient Query Plans

An inefficient query plan can cause increased CPU consumption and usually can be detected comparatively easily. DMVs such as sys.dm_exec_query_stats can be particularly useful for detecting queries that are CPU intensive because of possibly inefficient query plans. Again, you can try using the Database Engine Tuning Advisor to tune the queries. Alternatively, you can try updating statistics for tables involved in your CPU-intensive queries, although SQL Server should be taking care of that for you.
Other Causes

Other potential reasons for excessive CPU usage that need to be investigated are poor cursor usage and incorrectly configured intraquery parallelism. Again, the more you know about the way your database solution has been architected and the SQL Server RDBMS engine, the more empowered you will be to efficiently determine the cause of excessive CPU usage.
For more information, you should read the excellent white paper titled “Troubleshooting Performance Problems in SQL Server 2005” at http://www.microsoft.com/technet/prodtechnol/sql/2005/tsprfprb.mspx.
Troubleshooting I/O Bottlenecks

To detect and solve I/O bottlenecks, you need to understand how they will manifest themselves, such as through slow response times, timeout error messages, and so on. It is also important to understand the various factors that can contribute to excessive I/O activity, such as paging, transaction log file operations, or heavy tempdb activity. Although you might have either disk or network I/O bottlenecks, you will generally find that disk I/O tends to be the sole problem, because disk drive technology has not dramatically improved in throughput over the past decade, unlike CPUs and network cards. Consequently, we will focus on disk I/O bottlenecks.
Detecting Disk I/O Bottlenecks

The detection of disk I/O bottlenecks has been well documented in various Windows and SQL Server resources, so you should already be familiar with the performance counters and what to watch. Don’t forget the new DMVs in SQL Server 2005. Use the following when trying to detect disk I/O bottlenecks:
You should monitor the following well-known performance counters:
The PhysicalDisk Object– % Disk Time counter represents the percentage of time a disk drive was busy servicing read or write requests. A value greater than 50 percent indicates a potential I/O bottleneck.
The PhysicalDisk Object– Avg. Disk Queue Length tracks the average number of physical read and write requests queued on the selected physical disk. Look out for a value greater than 2 to indicate an I/O problem.
The PhysicalDisk Object– Avg. Disk Sec/Read and Avg. Disk Sec/Write counters measure the average time, in seconds, of disk read and write operations. As a guideline, a value less than 10ms is good, a value of 10–20ms is OK, a value of 20–50ms is slow, and a value greater than 50ms indicates a serious I/O problem.
When monitoring these performance counters, do not forget to take into account the number of disk drives if you are using a redundant array of inexpensive disks (RAID), and adjust accordingly.
SQL Server 2005 now also has the following DMVs that you can use to monitor and troubleshoot I/O activity:
The sys.dm_os_wait_stats DMV can help you get latch wait statistics, which can help you identify I/O problems.
The sys.dm_io_virtual_file_stats DMV and sys.dm_io_pending_io_requests DMV can help you monitor pending I/O operations.
The sys.dm_exec_query_stats DMV can give you the number of logical and physical reads and writes for cached queries.
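For instance, the following minimal sketch uses sys.dm_io_virtual_file_stats to calculate the average I/O stall per read and write for every database file, which you can compare against the Avg. Disk Sec/Read and Avg. Disk Sec/Write guidelines given earlier:

-- Average stall (ms) per read and write for every database file
SELECT DB_NAME(vfs.database_id) AS database_name,
       vfs.[file_id],
       vfs.io_stall_read_ms / NULLIF(vfs.num_of_reads, 0) AS avg_read_stall_ms,
       vfs.io_stall_write_ms / NULLIF(vfs.num_of_writes, 0) AS avg_write_stall_ms
FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
ORDER BY vfs.io_stall_read_ms + vfs.io_stall_write_ms DESC;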
Resolving Disk I/O Bottlenecks

The ultimate cause of excessive disk I/O might have nothing to do with the actual disk drives but can be a consequence of your database design, user behavior, or lack of other resources. Consequently, you have a number of possible resolution methods:
Identifying the I/O-intensive queries and rewriting them
Checking and solving memory-related problems
Using faster disk drives
Using additional disk drives to distribute the I/O load
Moving the transaction log files to separate disk drives in intense OLTP database solutions
Moving tempdb onto a separate disk drive for database solutions that heavily utilize this system database
The tempdb system database is often overlooked in performance-tuning methodologies because developers and DBAs aren’t aware of how SQL Server utilizes it in their database solutions. Consequently, it is worth examining tempdb and related performance issues in more detail.
Troubleshooting tempdb Problems

In SQL Server 2005 the tempdb system database has become more important because of the ever-increasing number of features that rely on this system database. It is important to understand what components of SQL Server use this temporary workspace. The list includes the following:
The new SQL Server 2005 row versioning feature uses tempdb as its version store. A lot of other features utilize row versioning and consequently tempdb.
Bulk load operations on tables with triggers enabled use row versioning and thus take up space in tempdb.
Common table expression (CTE) queries utilize work tables that are created in tempdb for spool operations during execution.
Keyset-driven and static cursors use work tables that are generated in tempdb.
The Service Broker component uses tempdb for various reasons such as caching and preserving dialog context. Features that rely on Service Broker such as Database Mail, event notifications, and query notifications implicitly use tempdb.
DBCC CHECKDB uses tempdb work tables.
Creating and rebuilding indexes with the SORT_IN_TEMPDB option will use tempdb.
Online index operations use row versioning and therefore tempdb implicitly.
Large object (LOB) data type variables and parameters of types such as VARCHAR(MAX), NVARCHAR(MAX), VARBINARY(MAX), TEXT, NTEXT, IMAGE, and XML can use tempdb for storing values.
Multiple active result sets (MARS) can also use tempdb.
Queries that contain Data Manipulation Language (DML) statements can use internal objects to store intermediate results for hash joins, hash aggregates, or sorting.
Temporary tables and table variables both utilize tempdb. For some reason, many developers and DBAs have assumed that table variables reside in memory, which is completely wrong.
Tables returned in table-valued functions need temporary workspace.
Triggers in SQL Server 2005 now use tempdb.
So as you can see, many SQL Server components rely on the tempdb system database. Unfortunately, it can be quite difficult to predict its usage and therefore capacity plan correctly. Consequently, it is important to monitor tempdb in any new database solution to ensure that it has been configured correctly for the operating environment.
Detecting tempdb Problems

I don’t really have anything new to say in this section; detecting tempdb problems is a matter of using the various techniques, commands, and objects discussed previously. The tempdb database is ultimately just an instance of a database being used by SQL Server, albeit a special one. Again, you should be monitoring DMVs such as sys.dm_db_file_space_usage and sys.dm_db_task_space_usage, monitoring your disk I/O performance counters, and monitoring various database performance counters against the tempdb instance. Watch out for transaction log activity; that’s always a good start!
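As a starting point, this sketch summarizes where the pages in tempdb are going (free space, the version store, internal objects, or user objects) and then identifies the sessions currently allocating the most pages:

USE tempdb;
GO
-- How tempdb pages are currently being used (8KB pages converted to KB)
SELECT SUM(unallocated_extent_page_count) * 8 AS free_kb,
       SUM(version_store_reserved_page_count) * 8 AS version_store_kb,
       SUM(internal_object_reserved_page_count) * 8 AS internal_objects_kb,
       SUM(user_object_reserved_page_count) * 8 AS user_objects_kb
FROM sys.dm_db_file_space_usage;

-- Sessions allocating the most tempdb pages right now
SELECT session_id,
       SUM(user_objects_alloc_page_count) AS user_object_pages,
       SUM(internal_objects_alloc_page_count) AS internal_object_pages
FROM sys.dm_db_task_space_usage
GROUP BY session_id
ORDER BY internal_object_pages DESC;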
Resolving tempdb Problems

It’s all about separating the disk I/O in a SQL Server solution that heavily utilizes tempdb, although you can also adopt a number of programming techniques to stop tempdb from becoming a bottleneck:
Put the tempdb system database on a separate disk subsystem. Additionally, you could put it on an appropriate RAID array.
Allocate additional separate files to tempdb because SQL Server will then use more worker threads concurrently to service requests (see the sketch after this list).
Correctly capacity plan the amount of disk space required for tempdb, and preallocate that space to avoid automatic growth and shrinkage during production hours.
Eliminate unnecessary Data Definition Language (DDL) statements in stored procedures that use tempdb.
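The following sketch illustrates the first three points. The T: drive path, the second file’s name, and the 2GB sizes are purely illustrative and should come from your own capacity planning; tempdev is the default logical name of the primary tempdb data file:

-- Preallocate the primary tempdb data file to its planned size
ALTER DATABASE tempdb
MODIFY FILE (NAME = tempdev, SIZE = 2048MB);
GO
-- Add a second data file, ideally on a separate disk subsystem
ALTER DATABASE tempdb
ADD FILE (NAME = tempdev2,
          FILENAME = 'T:\tempdb\tempdev2.ndf',
          SIZE = 2048MB);
GO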
Troubleshooting Poorly Running Queries

Troubleshooting poorly running queries is a book in its own right and is beyond the scope of this chapter. However, it is important to mention for completeness’ sake. Don’t forget that SQL
Server represents a complex client-server architecture. So, all sorts of concurrency problems and inefficiently written queries are potentially being run against the database engine. Over a period of time, indexes might become fragmented, which might manifest itself as excessive disk I/O and poor memory utilization, so you have a lot more to consider. Poor indexes might result in inefficient locking by SQL Server or simply too many table scans. All this will result in poorly performing queries. So…what else can you do? Well, make life easier for yourself by taking advantage of the Database Engine Tuning Advisor and seeing what recommendations it makes. Monitor index fragmentation, usage, overhead, and hot spots by taking advantage of the following DMVs:
sys.dm_db_index_usage_stats
sys.dm_db_index_operational_stats
sys.dm_db_index_physical_stats
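For example, a minimal fragmentation check with sys.dm_db_index_physical_stats might look like the following; the AdventureWorks database name and the 30 percent/100 page thresholds are illustrative rules of thumb, not hard limits:

-- Indexes with noticeable fragmentation in AdventureWorks
SELECT OBJECT_NAME(ips.[object_id]) AS table_name,
       ips.index_id,
       ips.avg_fragmentation_in_percent,
       ips.page_count
FROM sys.dm_db_index_physical_stats(
         DB_ID('AdventureWorks'), NULL, NULL, NULL, 'LIMITED') AS ips
WHERE ips.avg_fragmentation_in_percent > 30
  AND ips.page_count > 100
ORDER BY ips.avg_fragmentation_in_percent DESC;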
Another set of DMVs to watch, which provide information about missing indexes that could enhance query performance, includes the following:
sys.dm_db_missing_index_group_stats
sys.dm_db_missing_index_groups
sys.dm_db_missing_index_details
sys.dm_db_missing_index_columns
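Joined together, these DMVs suggest candidate indexes; a sketch follows. Treat the output as input to the Database Engine Tuning Advisor rather than as a list of indexes to create blindly:

-- Missing-index suggestions, roughly ordered by estimated benefit
SELECT mid.statement AS table_name,
       mid.equality_columns,
       mid.inequality_columns,
       mid.included_columns,
       migs.user_seeks,
       migs.avg_user_impact
FROM sys.dm_db_missing_index_group_stats AS migs
JOIN sys.dm_db_missing_index_groups AS mig
     ON migs.group_handle = mig.index_group_handle
JOIN sys.dm_db_missing_index_details AS mid
     ON mig.index_handle = mid.index_handle
ORDER BY migs.avg_user_impact * migs.user_seeks DESC;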
The SQL Server team has gone out of its way to empower you with many resources to get the most out of SQL Server 2005, so use them!
Performance tuning SQL Server is a book in its own right. A large book! For more recommendations, don’t forget to read the “Database Performance” topic in Books Online.
Summary

In this chapter, you saw how to build a complete monitoring solution. You learned how and why to build a baseline, the role of benchmarks, how to identify several performance problems, and several ways to solve them. You also learned about the tools available in the Windows operating system and SQL Server 2005 for performance tuning.
Exam Essentials

Know how to create a performance baseline. Know how to implement a performance baseline using Performance Logs and Alerts and the counters that should be included in your baseline.

Be able to identify the performance objectives from business objectives. Understand how business objectives translate to performance objectives and how to implement the performance objectives to achieve the business requirements.

Know how to choose a tool for performance monitoring. Know the tools available for performance monitoring, their capabilities, and how to choose them for monitoring tasks using the business objectives and technical requirements.

Understand the factors that affect performance. Understand the factors that affect the performance of your application. It is important to know the components of SQL Server 2005 (Notification Services, Service Broker, replication, and so on), from what they do to best practices for performance. You also have to know how to identify performance problems and their possible cause.

Know how to respond to performance problems. Once you’ve identified the cause for performance problems, you should know how to respond and take action. For that, you need to have a good understanding of the server’s resources (memory, CPU, I/O, tempdb, and so on).
Review Questions

1. You want to be notified when one of your servers is running near capacity. What should you do?
A. Use SQL Profiler to monitor the server.
B. Create SQL Trace to monitor the server’s resources.
C. Use Performance Logs and Alerts.
D. Monitor the SQL Server log.

2. You install a new inventory application on a server that hosts a tax application. You want to measure the performance overhead added by the new application. What method should you use?
A. Use System Monitor to measure CPU and disk performance counters after you install the inventory application.
B. Use System Monitor to measure CPU and disk performance counters before you install the inventory application.
C. Use System Monitor to measure CPU and disk performance counters before and after you install the inventory application.
D. Use Task Manager to measure memory utilization for the inventory application.

3. You need to implement a performance-monitoring plan for an application. What indicators should be included in the baseline? (Choose all that apply.)
A. Memory– Pages/sec
B. Processor– % Processor Time
C. SQLServer:Access Methods–Full Scans/sec
D. SQLServer:Buffer Manager–Buffer Cache Hit Ratio
E. SQLServer:SQLErrors–Info Errors
F. Server–Files Open

4. You need to implement a performance-monitoring plan for an application. What information is not relevant for creating a baseline?
A. The number of users connected to the application
B. The completion time for a database backup operation
C. The number of deadlocks per second
D. The number of lock waits per second

5. What tool can you use to capture and replay users’ activity at the database level?
A. SQL Profiler
B. Activity Monitor
C. SQL Server Management Studio
D. Performance Logs and Alerts
6. You want to minimize the response time for a particular search function of an application. The search is based on a database query. Which tool can help you improve the query’s response time?
A. System Monitor
B. SQL Trace
C. Performance Logs and Alerts
D. Database Engine Tuning Advisor

7. You need to implement a performance-monitoring solution for an application. One business requirement specifies that the monitoring solution should record and store performance data automatically. You need to monitor both hardware utilization as well as database activity, especially the execution of stored procedures and user-defined functions. What tool or combination of tools should you use?
A. SQL Trace and Performance Logs and Alerts
B. Only System Monitor
C. Only SQL Profiler
D. Only SQL Trace

8. You want to add reporting capabilities for an existing application. The application is used 12 hours per day to record purchase information for a large supermarket. Performance is critical, so it is required that reports should not add overhead to the existing application. The reports will display statistic information and can use outdated data. What should you do?
A. Use a database snapshot for reporting.
B. Use transactional replication to maintain a copy of the database on a different server.
C. Use database mirroring in Synchronous mode to maintain a copy of the database on a different server.
D. Copy the database on a different server using replication and schedule replication to occur during nonbusiness hours.

9. Which of the following are true regarding SQL Profiler? (Choose all that apply.)
A. Can be used to replay events
B. Can be used to monitor Analysis Services
C. Provides a graphical interface
D. Can be used for ad hoc monitoring

10. You need to create a benchmark for a database application. What methods can you use? (Choose all that apply.)
A. Use SQL Profiler to record a trace. Later, use the replay feature of the same tool.
B. Create custom T-SQL scripts.
C. Use a load generation tool.
D. Use the Database Engine Tuning Advisor.
11. What is the first step in implementing a proactive performance-monitoring solution?
A. Create a baseline.
B. Create a benchmark.
C. Define alerts.
D. Create a unit test plan for stored procedures.

12. When implementing a proactive monitoring solution, what will you usually do?
A. Include more performance counters in the baseline, and reduce the number of counters for ongoing monitoring.
B. Start with fewer counters for the baseline, and increase them for ongoing monitoring.
C. Use the same counters for the baseline and for ongoing monitoring.
D. Include as many performance counters as you can for both the baseline and for ongoing monitoring.

13. Which of the following tools can you use to generate alerts?
A. SQL Server Agent
B. Activity Monitor
C. Performance Logs and Alerts
D. System Monitor–Charts

14. What feature of SQL Server 2005 allows you to diagnose the usage or issues for all the following: CPU, I/O, memory, indexes, locking, and network?
A. DMVs and functions
B. Catalog views
C. Activity Monitor
D. Network Monitor

15. In diagnosing resource bottlenecks, you can use the SQLServer:Buffer Manager object for what type of resource?
A. CPU
B. Network
C. tempdb
D. Memory

16. Which of the following features of SQL Server 2005 can affect tempdb performance? (Choose all that apply.)
A. MARS
B. Online index operations
C. Row versioning
D. Excessive usage of temporary tables
17. Which of the following are measures to improve the performance for slow-running queries?
A. Use the Database Engine Tuning Advisor.
B. Use SQL Profiler to record the response time for slow-running queries.
C. Drop the unnecessary indexes.
D. Solve resource bottlenecks, if any.

18. Which feature of SQL Server 2005 allows you to correlate database activity with performance counter data?
A. CLR integration
B. SQL Trace
C. Integration of SQL Profiler with System Monitor
D. DMVs and functions

19. Which of the following are possible causes for CPU bottlenecks? (Choose all that apply.)
A. Excessive recompilation
B. Database backup operations
C. Poor cursor implementation
D. Inefficient query plans
E. Table scans
F. Intraquery parallelism

20. Which of the following DMVs can help you identify the number of workers waiting to be scheduled on the runnable queue (and thus troubleshoot CPU problems)?
A. sys.dm_db_index_usage
B. sys.dm_exec_query_stats
C. sys.dm_exec_sql_text
D. sys.dm_os_schedulers
Answers to Review Questions

1. C. Using the Performance Logs and Alerts tool, you can set up alerts when hardware components are used to a certain level. SQL Profiler and SQL Trace do not have the ability to notify you in the case of this event occurring. The SQL Server log does not have this information, and it can’t generate alerts.

2. C. To measure the performance overhead, you need to monitor the server’s performance both before and after you install the new application. Measuring the performance overhead only before or only after will not give you the difference in resource utilization, which is what is required. Using Task Manager to monitor memory utilization will not by itself give you any meaningful metrics.

3. A, B, C, D. The Memory– Pages/sec, Processor– % Processor Time, SQLServer:Access Methods–Full Scans/sec, and SQLServer:Buffer Manager–Buffer Cache Hit Ratio counters collectively return SQL Server performance data that can be used to establish a baseline. The SQLServer:SQLErrors–Info Errors and Server–Files Open counters do not return performance-related information.

4. B. The completion time for a database backup or restore operation is irrelevant for a baseline. The rest of the options have a direct impact on performance or capacity/resource planning and consequently should make up the baseline.

5. A. SQL Profiler can capture and replay database events. The rest of the tools cannot capture a SQL Server trace or replay users’ activity at the database level. Activity Monitor will just show what is currently happening. SQL Server Management Studio is designed primarily to perform DBA tasks. The Performance Logs and Alerts tool captures only operating system and SQL Server performance counters.

6. D. The Database Engine Tuning Advisor can improve the query’s response time using index recommendations. The other tools by themselves cannot achieve that.

7. A. You need both tools (SQL Trace and Performance Logs and Alerts) to satisfy the requirements.

8. D. All the other options will add a performance overhead to the application.

9. A, B, C, D. All options are true.

10. A, B, C. SQL Profiler, custom T-SQL scripts, or a load generation tool can help you create benchmarks. You can also use the Database Engine Tuning Advisor for tuning queries but not for benchmarks.

11. A. The first step should be creating a baseline; without this initial baseline, it is impossible to proactively monitor any degradation in performance or resources over time.

12. A. You should limit the number of performance counters for ongoing monitoring and include more of them initially for the baseline.
13. A, C. Only the SQL Server Agent and Performance Logs and Alerts tools allow you to define alerts.

14. A. DMVs and functions allow you to access internal server state and statistical data.

15. D. You can use the SQLServer:Buffer Manager object to detect memory issues.

16. A, B, C, D. All options will use tempdb and will affect its performance.

17. A, C, D. You cannot use SQL Profiler by itself to improve a slow-running query. All the other options are valid methods to improve a query’s response time.

18. C. Integrating SQL Profiler with System Monitor allows synchronization of the database activity with performance counter data.

19. A, C, D, F. Excessive recompilation, poor cursor implementation, inefficient query plans, and intraquery parallelism are all possible causes for CPU bottlenecks. Table scans generally represent an I/O bottleneck, as do backup operations.

20. D. The sys.dm_os_schedulers DMV has the runnable_tasks_count column that represents the number of workers waiting to be scheduled on the runnable queue. The other DMVs return information about the database and query environment, not the SQL operating system.
Chapter 4

Securing a Database Solution

MICROSOFT EXAM OBJECTIVES COVERED IN THIS CHAPTER:
Design an application solution to support security.
Design and implement application security.
Design the database to enable auditing.
Design objects to manage user access.
Design data-level security that uses encryption
Design database security.
Define database access requirements.
Specify database object security permissions.
Define schemas to manage object ownership.
Specify database objects that will be used to maintain security.
Design an execution context strategy.
In discussions with Maureen Adams, my lovely editor to whom I owe a million apologies (more likely lunch at the Rainbow Room in Manhattan, but I hope she settles for a cold Aussie beer), we decided each chapter in this book would be 40–50 pages. Well…by the time you get to the end of this chapter, you’ll realize that it is more than 40 pages! This is an indication of the scope of the subject in this chapter: SQL Server security. A lot of material needs to be covered when discussing SQL Server security; in addition, it is helpful to understand the history of SQL Server security because subsequent versions have enhanced and/or overcome the deficiencies of prior versions. In the first half of the chapter, I will provide a brief overview of how the SQL Server security architecture has evolved. This will help you understand security concepts and terms. It will also help you understand a lot of legacy constructs and why the current architecture exists in the form it does. Otherwise, I will concentrate more on the SQL Server environment, including discussing how to secure it and how to set up an auditing solution. In the second half of the chapter, I will concentrate more on the database environment and how to set up database security for your particular database solution and business needs. Security is an important concept; remember the impact that Slammer had on the industry— SQL Server’s ubiquitous nature means you should spend the time and resources to get it right.
Designing an Application Solution to Support Security

Security has many facets, and in this chapter I will focus on features of SQL Server that allow you to control access, both user access and application access, to the database engine’s environment. SQL Server 2005 enables you to exploit features of the underlying operating system, which gives you more control over the environment than was available in previous versions of SQL Server. I will mention all the new security functionality so you can take advantage of these features in the latest version of the product. However, this also means you need some familiarity with the macro Windows environment of which you have a micro equivalent within SQL Server. I will not be concentrating on the operating system side of security, but it is important to understand how SQL Server security has evolved.
A Brief History of SQL Server Security

I doubt that a book called A Brief History of SQL Server Security would sell as well as Stephen Hawking’s bestseller, but I think he would be surprised by its popularity. Understanding your origins in any field is important, which is why it’s imperative to understand how SQL Server security has evolved. In fact, SQL Server’s security architecture has primarily evolved in response to demand and prior “deficiencies.” As you would expect, the security realm has a lot of terminology and concepts, which I’ll discuss in this section. There are many methods of authenticating users, and the first requirement is to have an authority that is capable of this authentication process. Originally, the highest level of authority within the SQL Server environment was the system administrator (sa) account. This account and all other accounts were stored within internal system tables using basic authentication. That is, they were neither encrypted (anyone with access to the tables could see them) nor signed (authorized by a higher authority). To represent some kind of authority, the sa account was the only account allowed to configure the server. Each database had a database owner (dbo) that by default mapped to the sa account. The dbo had the authority to execute and grant all statement permissions, which predominantly included your Data Definition Language (DDL) commands. When these statement permissions were exercised, the person who executed the statement to create an object became the database object owner (dboo). In most cases, developers were mapped to the dbo account, so the objects they created were owned by that same dbo account.
SQL Server 2005 has introduced two new terms in the security model. A principal represents an entity (such as an individual, group, or process) that can access a protected resource, which is called a securable.
However, if you had multiple developers creating various database objects with complex dependencies under their own authority, you ended up with broken ownership chains. Broken ownership chains increased the amount of work for both the database administrator (DBA) and the developers, because permissions were checked by SQL Server, and therefore needed to be managed, whenever an object owner changed in the dependency chain. DBAs were dissuaded from granting statement permissions directly to developers to prevent broken ownership chains; instead, they granted them dbo access, which created its own set of problems. Consequently, as you can see, problems existed at the top level in terms of authorization, flexibility, and security. Also, problems existed at the bottom level in terms of the owners of certain objects and the consequences. There was a need to ensure that the signatures were stored within the tables in an encrypted fashion, using a hash (digest) of the original value. There was also a need for login accounts from the Windows environment to be integrated within SQL Server so that duplicate login accounts did not need to be created. The idea of authorizing bodies was extended by the use of roles, which were used as preconfigured authenticators to enable users to do the same tasks as other users without having to be given individual permissions. Application roles were introduced to enable specific permissions to be granted to users because they were accessing the data via a specific application.
The latest progression with the security model has extended the features available within SQL Server to reduce the reliance on the operating system while giving the administrator the option of integrating with the operating system as and when required. The key to understanding the changes to the security features of SQL Server is to appreciate Microsoft’s desire to facilitate the design of a secure environment for any deployed software solution. The alternative, and less attractive, approach would have been to let the implementer worry about the security of the environment. For various reasons, this would not be a successful strategy for implementing a secure environment. Ultimately, databases are about storing and communicating data, and if you are going to do this in a secure way, then you need to extend the authority from the top down into the data. That requires a means of authenticating users, applications, services, and machines, and to do this, you can use certificates.
Understanding the Need for Certificates

Digital certificates are new database-level securables supported by SQL Server. At the highest level, the certificates will come from authenticating authorities. If you need to communicate between organizations, then you need to use these certificates. You can use the Microsoft Management Console (MMC), as shown in Figure 4.1, to see what certificates are installed on your server. You can see what higher authorities can be contacted for certification and which ones have authorized certain activities for the user, service, or machine. Figure 4.1 includes the certificates from the VeriSign organization. FIGURE 4.1
VeriSign certificates
If the communication is to occur purely between resources belonging to one organization, then you can get the benefits from a domain authenticating authority without needing to look elsewhere. Using Kerberos within the Windows environment means that at its core, the operating system has been designed to exploit certification as a means of security, and the highest level of authority is the domain controller at the top of the authorization tree. If the communication is going to occur only within the scope of the database engines within the organization, you can now use the certification feature that is available as part of the installation of the server, which uniquely identifies it as an authenticating authority. Various certificates are created for the various services that may potentially utilize them. You can see them by entering the following:

SELECT * FROM Master.sys.certificates
You will get a report that includes details of the following certificates, among others that may have been created since you installed the server:

##MS_SQLResourceSigningCertificate##
##MS_SQLReplicationSigningCertificate##
##MS_SQLAuthenticatorCertificate##
##MS_AgentSigningCertificate##
You can use ##MS_SQLResourceSigningCertificate## to create further certificates as follows:

IF EXISTS (
    SELECT * FROM sys.certificates
    WHERE name = N'sample_certificate'
)
    DROP CERTIFICATE sample_certificate
GO
CREATE CERTIFICATE sample_certificate
ENCRYPTION BY PASSWORD = 'Pa$$w0rd'
WITH SUBJECT = 'MySampleCertificate';
GO
Each authority can authorize resources to perform certain privileged activities. In addition, you can give permissions to some of the resources so they can use other resources. To do this, it may be sufficient that the authenticating authority exists and has the given privilege. Alternatively, the resource being given the privilege may want to authenticate other resources in turn. To achieve this, a hierarchy of certificates is involved. To extend the ability to authenticate a piece of code, for example, a user must create a certificate, get it authorized (signed) by a higher authorizing body, and provide the composite signature to the code to enable it to access appropriate resources such as data, the central processing unit (CPU), disk files, and so on.
These resources could be complex to manage if you don’t utilize the abstraction and classification of resources, which you do by using namespaces to group the resources. These namespaces are known within the database as schemas. Schemas are the bottom-up approach to help you design the final system.
Understanding the Need for Encryption

You also need the ability to encrypt the data, both when you store it and when you transfer it to networks while it is passing between databases, around the organization, and beyond. This is facilitated by Kerberos, which is built into the latest versions of the Windows operating systems (except for the Home Editions) and which, together with the certificates mentioned previously, underpins operating system features such as encrypted files and secure communication using public and private keys. SQL Server 2005 uses a hierarchical construct to encrypt data, with each layer encrypting the layer beneath it through corresponding certificates, asymmetric keys, and symmetric keys. Figure 4.2 shows this encryption hierarchy.
A detailed discussion of encryption is beyond the scope of this book, so make sure you read the “Encryption Hierarchy” topic in SQL Server 2005 Books Online to understand the different encryption mechanisms supported in SQL Server 2005. Make sure you understand where and when to use certificates, and asymmetric versus symmetric keys. FIGURE 4.2
SQL Server 2005 encryption hierarchy
[The figure shows three levels. Windows level: the Service Master Key is encrypted with DPAPI. SQL Server level: the Service Master Key. Database level: the Database Master Key encrypts certificates and asymmetric keys, which encrypt symmetric keys; symmetric keys can encrypt further symmetric keys and, ultimately, the data.]
Communication can occur in various ways, and you can encrypt different elements. For example, you can encrypt the complete data or just a hash of the data if a secure channel is possible. In addition, the communication may involve either one key to both encrypt and decrypt the data (symmetric encryption) or two keys, one to encrypt the data and the other to decrypt it (asymmetric encryption). These high- and low-level approaches offer you the maximum amount of control over the environment. It is akin to top-down and bottom-up designs. The top-down design enables you to simplify and understand the business environment; the bottom-up design allows you to apply predesigned technologies in an effective manner. It is important to have a solution space that is secure and remains that way for any applications you develop. You also need to be able to monitor what is happening in this solution space. This involves knowing what security features are available and using them both to apply and to monitor security so only acceptable activity occurs. To this end, in this chapter you will see what features are available to implement security, how they fit together, and how you can monitor the environment and use these features in a useful and sensible fashion.
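To make the hierarchy in Figure 4.2 concrete, here is a minimal sketch of encrypting a single value with a symmetric key that is itself protected by a certificate. All the names here are illustrative, and AES_256 requires Windows Server 2003 or later (you can substitute TRIPLE_DES on older hosts):

USE AdventureWorks;
GO
-- Database Master Key, protected by a password
CREATE MASTER KEY ENCRYPTION BY PASSWORD = 'Pa$$w0rd';
GO
-- A certificate that will protect the symmetric key
CREATE CERTIFICATE SalaryCert WITH SUBJECT = 'Salary encryption';
GO
CREATE SYMMETRIC KEY SalaryKey
WITH ALGORITHM = AES_256
ENCRYPTION BY CERTIFICATE SalaryCert;
GO
-- Encrypt a value, and then decrypt it again
OPEN SYMMETRIC KEY SalaryKey DECRYPTION BY CERTIFICATE SalaryCert;
DECLARE @Cipher VARBINARY(256);
SET @Cipher = EncryptByKey(Key_GUID('SalaryKey'), N'123456');
SELECT CONVERT(NVARCHAR(20), DecryptByKey(@Cipher)) AS decrypted_value;
CLOSE SYMMETRIC KEY SalaryKey;
GO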
Securing the SQL Server Solution

SQL Server 2005 has seen great enhancements in security and by default installs with a lot of features turned off to reduce what is referred to as the surface area of attack. This philosophy, which I have been teaching for a decade, is that you should not install or configure a component unless you will be using it. Again, you are reducing any potential “back doors” into your SQL Server environment. The first step in creating a secure SQL Server environment is ensuring that only the required features, services, and network protocols are configured on your SQL Server instance.
Actually, the first step really should be securing the operating system. For this Microsoft has provided tools such as the Microsoft Baseline Security Analyzer (MBSA), which you can find at http://www.microsoft.com/technet/ security/tools/mbsahome.mspx.
To help DBAs secure SQL Server 2005, Microsoft has released a tool called the SQL Server Surface Area Configuration (SSAC) tool, which helps ensure that the virtual and physical surface area used by the SQL Server instance is secure. Figure 4.3 shows the introductory screen of the SSAC. As you can see, this screen offers a number of options. You can connect to another SQL Server instance that’s local or remote.
You cannot remotely configure the Developer, Evaluation, and Express Editions of SQL Server 2005 via the SSAC tool because remote access is disabled through the default installation. You have to configure them locally.
Otherwise, you can configure either the various services and connections or the features of SQL Server. Figure 4.4 shows the interface for configuring the services and connections.
FIGURE 4.3
SSAC welcome screen
FIGURE 4.4
Configuring services and connections
Figure 4.5 shows the options for configuring the features. For enterprise customers, however, using this utility to manage hundreds of SQL Server 2005 instances would be unacceptable. Consequently, Microsoft has provided a command-line utility, SAC.EXE, that allows you to import and export the SQL Server 2005 surface area settings. Figure 4.6 shows the SAC.EXE options and the export process.
FIGURE 4.5
Configuring features
FIGURE 4.6
SAC.EXE utility
Designing and Implementing Application Security

After you have secured your SQL Server solution, it is time to move into the application space. Now I’m still talking about SQL Server 2005 here. Remember, the operating system considers SQL Server to be just another application. SQL Server 2005 has evolved from a simple database engine to a complex “something or other.” I don’t even know what to call it. But suffice it to say that it comes with a lot of peripheral technology and software components. As discussed, you should install software components only if you are using them because they represent a potential “back door” into the system. Sorry, Microsoft! So, before designing how you’re going to implement user-defined modules, you need to secure the SQL Server application space.
Securing Preconfigured Service Routines

The first set of software objects includes preconfigured processes created for you by the designers of the latest version of SQL Server. These processes, referred to as service routines, are the kind of features you might expect of an operating system, but they have been included within the latest version of SQL Server to provide tighter control over their secure use in the database environment. Figure 4.7 shows the various SQL Server 2005 components that make up the set of service routines that can be installed. Microsoft has designed the services, provided as a part of the installation, with security in mind. However, they rely on working alongside elements that lie outside their control. It is at these interfaces that security can become compromised. You can secure these interfaces using new features of the database engine and ensuring the custom code you generate is as secure as is appropriate for interfacing with the technical service routines, the business customer’s data, and the legal framework within which the business functions store, access, and transfer the business data. These custom routines come in various guises. FIGURE 4.7
SQL Server components that make up the service routines
Securing Custom Code Routines/User-Defined Modules

The next decision you have to make is which user-defined modules you’re going to implement, such as the following:
Procedures
Roles
Schemas
Web services
Figure 4.8 shows the various user-defined project modules you can use. This is what you’ll be looking at in the second half of the chapter (and other chapters, as the case may be). Let’s talk about something everyone should do but doesn’t. And no, it’s not brushing your teeth once a week.
Designing the Database to Enable Auditing

Auditing is an essential, critical part of your security, and many reasons exist for auditing data access. You can audit data access in various ways at the server level, database level, or individual object level. You can use various SQL Server 2005 features to monitor what is going on throughout the server environment. FIGURE 4.8
SQL Server 2005 programming modules
At the server level, you have a number of options such as configuring login and C2-level auditing, as shown in Figure 4.9. C2 is definitely showing its age and has been superseded by FIPS-140-2 Common Criteria. I suspect Microsoft has just “left it in there” with SQL Server 2005. It is not that commonly configured in the industry.
The general recommendation is to not turn on C2 auditing unless it is strictly required because of the overhead on SQL Server. The C2 logs are very, very verbose!
Don’t forget you can use a range of external products such as Microsoft Operations Manager, which relies on the Windows Management Instrumentation (WMI) interface and provides part of the solution.
Designing to Enable Auditing of Access to Data

Ideally, the process used to audit data access should be as close to the data as possible, which means you should use triggers. But there are no “select triggers.” Why? Well, the SELECT statement is not a logged operation, which is what triggers rely on. Since there is no equivalent of a “select trigger,” you can set up a trace via the sp_trace_create system stored procedure and its related procedures. Table 4.1 shows the system stored procedures you can use when setting up a trace.

TABLE 4.1 Trace Stored Procedures

Action                         System Procedure
Creating a trace               sp_trace_create
Adding events                  sp_trace_setevent
(Optional) Setting a filter    sp_trace_setfilter
Starting the trace             sp_trace_setstatus
Stopping the trace             sp_trace_setstatus
Closing the trace              sp_trace_setstatus
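As a minimal sketch, the following batch creates a server-side trace that records who ran which batch and when. The trace file path is hypothetical, and event 12 (SQL:BatchCompleted) with columns 1 (TextData), 11 (LoginName), and 14 (StartTime) is just one reasonable combination for auditing access:

DECLARE @TraceID INT
DECLARE @MaxFileSize BIGINT
DECLARE @On BIT
SET @MaxFileSize = 50
SET @On = 1

-- Create the trace in a stopped state
EXEC sp_trace_create @TraceID OUTPUT, 0, N'C:\Traces\AuditTrace', @MaxFileSize, NULL

-- Add the SQL:BatchCompleted event with the columns of interest
EXEC sp_trace_setevent @TraceID, 12, 1, @On
EXEC sp_trace_setevent @TraceID, 12, 11, @On
EXEC sp_trace_setevent @TraceID, 12, 14, @On

-- (Optional) filter to a single database name (column 35, AND, LIKE)
EXEC sp_trace_setfilter @TraceID, 35, 0, 6, N'AdventureWorks'

-- Start the trace; status 0 stops it, and status 2 closes it
EXEC sp_trace_setstatus @TraceID, 1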
An alternative is to use the SQL Server Profiler, which you saw how to use in Chapter 3. Figure 4.10 shows the Trace Properties dialog box for configuring the trace properties when setting up a trace.
FIGURE 4.9 C2 and login auditing

FIGURE 4.10 Trace Properties dialog box
Designing to Enable Auditing of Changes to Data

The ability to audit changes to your data has been available with SQL Server since it was first released. I am talking about Data Manipulation Language (DML) triggers, of course, which were covered in Chapter 2. Auditing changes to data through DML triggers is reasonably straightforward. After you create the auditing tables that will be used to store the auditing information, it’s simply a matter of coding the DML triggers, as you saw in Chapter 2.
Make sure you look up the “System Functions (Transact-SQL)” topic in SQL Server Books Online because it covers a range of system functions such as CURRENT_TIMESTAMP, HOST_NAME, SYSTEM_USER, and USER_NAME that are invaluable for writing auditing triggers.
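As a minimal sketch of that Chapter 2 technique, the following trigger audits changes to a hypothetical dbo.Account table (with an AccountID column), stamping each audit row with the system functions just mentioned:

-- Hypothetical audit table; dbo.Account is assumed to already exist
CREATE TABLE dbo.AccountAudit (
    AccountID  INT,
    Operation  CHAR(1),      -- 'I' = inserted row, 'D' = deleted row
    AuditTime  DATETIME,
    AuditHost  NVARCHAR(128),
    AuditUser  NVARCHAR(128)
)
GO
CREATE TRIGGER trg_Account_Audit
ON dbo.Account
AFTER INSERT, UPDATE, DELETE
AS
BEGIN
    -- An UPDATE logs both the old ('D') and new ('I') versions of the row
    INSERT INTO dbo.AccountAudit (AccountID, Operation, AuditTime, AuditHost, AuditUser)
    SELECT AccountID, 'I', CURRENT_TIMESTAMP, HOST_NAME(), SYSTEM_USER FROM inserted;

    INSERT INTO dbo.AccountAudit (AccountID, Operation, AuditTime, AuditHost, AuditUser)
    SELECT AccountID, 'D', CURRENT_TIMESTAMP, HOST_NAME(), SYSTEM_USER FROM deleted;
END
GO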
Designing to Enable Auditing of Changes to Objects

One of the shortcomings of earlier versions of SQL Server was that there was no simple way to audit DDL changes to database objects. The requirement to monitor and control changes to actual objects is something that has received much consideration in the latest version of SQL Server. Consequently, Microsoft introduced two new techniques, namely, DDL triggers and event notifications. DDL triggers can be either server scoped or database scoped, and they allow you to monitor changes to the underlying environment that contains the user data. At the server scope, they monitor changes that may affect all the databases. At the database scope, they monitor and control changes to a given database.
You can, of course, create a trace as well. In this case, you are looking for event number 118.
DDL Triggers

As you saw initially in Chapter 2, DDL triggers are a new addition to SQL Server 2005. Because DDL triggers fire in response to DDL statements being executed, they are perfect for auditing changes to objects. The syntax for DDL triggers is as follows:

CREATE TRIGGER trigger_name
ON { ALL SERVER | DATABASE }
[ WITH <ddl_trigger_option> [ ,...n ] ]
{ FOR | AFTER } { event_type | event_group } [ ,...n ]
AS { sql_statement [ ; ] [ ...n ] | EXTERNAL NAME < method specifier > [ ; ] }
In Chapter 2 I discussed the importance of identifying the DDL trigger scope and which T-SQL statement or batch fires the DDL trigger. For these requirements, the ON DATABASE
scope is the appropriate scope for auditing the database solution. The ON SERVER scope would not be appropriate because the scope of the activity detected would extend to the whole server environment. The following list shows the DDL trigger events that apply at the database scope:
DDL_DATABASE_LEVEL_EVENTS
DDL_ASSEMBLY_EVENTS: CREATE_ASSEMBLY, ALTER_ASSEMBLY, DROP_ASSEMBLY
DDL_DATABASE_SECURITY_EVENTS
DDL_CERTIFICATE_EVENTS: CREATE_CERTIFICATE, ALTER_CERTIFICATE, DROP_CERTIFICATE
DDL_USER_EVENTS: CREATE_USER, DROP_USER, ALTER_USER
DDL_ROLE_EVENTS: CREATE_ROLE, ALTER_ROLE, DROP_ROLE
DDL_APPLICATION_ROLE_EVENTS: CREATE_APPLICATION_ROLE, ALTER_APPLICATION_ROLE, DROP_APPLICATION_ROLE
DDL_SCHEMA_EVENTS: CREATE_SCHEMA, ALTER_SCHEMA, DROP_SCHEMA
DDL_GDR_DATABASE_EVENTS: GRANT_DATABASE, DENY_DATABASE, REVOKE_DATABASE
DDL_AUTHORIZATION_DATABASE_EVENTS: ALTER_AUTHORIZATION_DATABASE
DDL_EVENT_NOTIFICATION_EVENTS: CREATE_EVENT_NOTIFICATION, DROP_EVENT_NOTIFICATION
DDL_FUNCTION_EVENTS: CREATE_FUNCTION, ALTER_FUNCTION, DROP_FUNCTION
DDL_PARTITION_EVENTS
DDL_PARTITION_FUNCTION_EVENTS: CREATE_PARTITION_FUNCTION, ALTER_PARTITION_FUNCTION, DROP_PARTITION_FUNCTION
DDL_PARTITION_SCHEME_EVENTS: CREATE_PARTITION_SCHEME, ALTER_PARTITION_SCHEME, DROP_PARTITION_SCHEME
DDL_PROCEDURE_EVENTS: CREATE_PROCEDURE, DROP_PROCEDURE, ALTER_PROCEDURE
DDL_SSB_EVENTS
DDL_MESSAGE_TYPE_EVENTS: CREATE_MSGTYPE, ALTER_MSGTYPE, DROP_MSGTYPE
DDL_CONTRACT_EVENTS: CREATE_CONTRACT, DROP_CONTRACT
DDL_QUEUE_EVENTS: CREATE_QUEUE, ALTER_QUEUE, DROP_QUEUE
DDL_SERVICE_EVENTS: CREATE_SERVICE, DROP_SERVICE, ALTER_SERVICE
DDL_ROUTE_EVENTS: CREATE_ROUTE, DROP_ROUTE, ALTER_ROUTE
DDL_REMOTE_SERVICE_BINDING_EVENTS: CREATE_REMOTE_SERVICE_BINDING, ALTER_REMOTE_SERVICE_BINDING, DROP_REMOTE_SERVICE_BINDING
DDL_TABLE_VIEW_EVENTS
DDL_TABLE_EVENTS: CREATE_TABLE, ALTER_TABLE, DROP_TABLE
DDL_VIEW_EVENTS: CREATE_VIEW, ALTER_VIEW, DROP_VIEW
DDL_INDEX_EVENTS: CREATE_INDEX, DROP_INDEX, ALTER_INDEX, CREATE_XML_INDEX
DDL_STATISTICS_EVENTS: CREATE_STATISTICS, UPDATE_STATISTICS, DROP_STATISTICS
DDL_TRIGGER_EVENTS: CREATE_TRIGGER, DROP_TRIGGER, ALTER_TRIGGER
DDL_TYPE_EVENTS: CREATE_TYPE, DROP_TYPE
DDL_SYNONYM_EVENTS: CREATE_SYNONYM, DROP_SYNONYM
DDL_XML_SCHEMA_COLLECTION_EVENTS: CREATE_XML_SCHEMA_COLLECTION, ALTER_XML_SCHEMA_COLLECTION, DROP_XML_SCHEMA_COLLECTION

In Exercise 4.1, you’ll create a DDL trigger for auditing purposes.
EXERCISE 4.1
Auditing Changes to Objects through DDL Triggers In this exercise, you’ll create an audit table that will store the auditing information. You will audit the T-SQL statement, time, computer name, and user.
1. Use the Windows Start menu, and select All Programs ➢ Microsoft SQL Server 2005 ➢ SQL Server Management Studio.
2. Connect to your SQL Server 2005 environment.
3. Execute the following Transact-SQL (T-SQL) script:

USE AdventureWorks ;
GO
-- Create the AuditLog table
CREATE TABLE dbo.AuditLog (
    Command   NVARCHAR(1000),
    PostTime  NVARCHAR(24),
    HostName  NVARCHAR(100),
    LoginName NVARCHAR(100)
)
GO
4. Next you need to create the audit trigger that will write the required auditing information to the audit table created earlier, so execute the following T-SQL batch:

-- Create audit trigger
CREATE TRIGGER AuditOperations
ON DATABASE
FOR DDL_DATABASE_LEVEL_EVENTS
AS
DECLARE @Data XML
DECLARE @TSQL NVARCHAR(1000)
DECLARE @DateTime NVARCHAR(24)
DECLARE @SPID NVARCHAR(6)
DECLARE @HostName NVARCHAR(100)
DECLARE @LoginName NVARCHAR(100)

SET @Data = eventdata()
SET @TSQL = CONVERT(NVARCHAR(1000),
    @Data.query('data(//TSQLCommand//CommandText)'))
SET @DateTime = CONVERT(NVARCHAR(24),
    @Data.query('data(//PostTime)'))
SET @SPID = CONVERT(NVARCHAR(6),
    @Data.query('data(//SPID)'))
SET @HostName = HOST_NAME()
SET @LoginName = SYSTEM_USER

INSERT INTO dbo.AuditLog (Command, PostTime, HostName, LoginName)
VALUES (@TSQL, @DateTime, @HostName, @LoginName)
GO

Good—you are ready to test the trigger. In the next steps, you’ll execute a couple of T-SQL statements to see how it all works.
5. Execute the following T-SQL script:

-- Test the audit trigger
SELECT * FROM Production.Product
GO
UPDATE STATISTICS Production.Product
GO
CREATE TABLE dbo.Platypus(ArbitraryColumn INT)
GO
DROP TABLE dbo.Platypus
GO
6. It’s time to examine the auditing table you created to see what information has been written to it, so execute the following T-SQL batch:

-- Examine the auditing table
SELECT * FROM dbo.AuditLog
GO
You should see output similar to that shown here. Notice that you haven’t recorded the SELECT statements or the fact that the audit log itself has just been queried (which is common with auditing systems). DDL triggers are simply not designed to do that. This is why you need the alternative methods, which you examined earlier, for auditing when someone reads data or performs DML operations.
7. Execute the following T-SQL script, which will clean up your SQL Server environment:

-- Cleanup code
DROP TRIGGER AuditOperations ON DATABASE
GO
DROP TABLE dbo.AuditLog
GO
Event Notifications The other new feature of SQL Server 2005 that will help with auditing activity (or events) within your database solution is the event notifications subsystem. Unlike DDL triggers (which are closely tied to the relational database engine), event notifications rely on peripheral SQL Server 2005 technology. Because event notifications do not “piggyback” on the normal functioning of the database engine, their asynchronous architecture makes them less invasive. The syntax for creating event notifications in SQL Server 2005 is:

CREATE EVENT NOTIFICATION event_notification_name
ON { SERVER | DATABASE | QUEUE queue_name }
[ WITH FAN_IN ]
FOR { event_type | event_group } [ ,...n ]
TO SERVICE 'broker_service' ,
    { 'broker_instance_specifier' | 'current database' }
[ ; ]
Fundamentally, event notifications react to specific events within the database (or server) and send a message through the Service Broker service. Service Broker will be covered in Chapter 7, so at this stage it’s sufficient to say that the Service Broker facilitates asynchronous communication between the server being audited and the actions that form part of the auditing.
Service Broker As you can see, the Service Broker has a number of elements you need to understand. I’ll go through them briefly to help provide a context for their purpose with respect to event notifications:
A service is a set of Service Broker tasks that forms the fundamental framework for processing messages. In the case of event processing, some of the links are even predefined for you, such as the contract and the message type.
A contract forms part of a service; it is something you agree to offer as part of a service, and the service may consist of many contracts. In Service Broker terms, it is an agreement to process a message of a predefined type. In the case of event notification, the contract is predefined for you.
A message type is a predefined form that contains defined data points, which can be populated by a process to facilitate the request or provision of a service. In the case of event notifications, the message type is predefined for you.
A queue is a storage location for messages to process or messages that have been processed.
A route is a means by which the service will be provided; the service could be provided either locally or at a remote site.
Event notifications can also fire in response to a SQL Trace event.
Figure 4.11 shows the close relationship between event notifications and the Service Broker service in SQL Server Management Studio. So, where and why should you use event notifications? Well, if your solution requirements are more to monitor what is happening than to control events, you should consider using event notifications instead of DDL triggers because they’re “lightweight.” Because of their asynchronous architecture, they potentially do not have the same performance impact as triggers, which run in a transaction space and can heavily utilize the database’s transaction log. Table 4.2 shows the main differences between triggers and event notifications.

FIGURE 4.11 Event notification components in the Service Broker service
TABLE 4.2 Differences between Triggers and Event Notifications

Triggers: DDL triggers respond to DDL operations; DML triggers respond to DML operations.
Event notifications: Respond to DDL events and a subset of the SQL Trace events.

Triggers: Are processed synchronously, within transaction space.
Event notifications: Are processed asynchronously, outside transaction space.

Triggers: Can be rolled back.
Event notifications: Cannot be rolled back.

Triggers: Can run T-SQL or managed code.
Event notifications: Send XML messages to the Service Broker service.
To which events can event notifications react? As you would expect, quite a few exist, so you should take some time to become familiar with them. The following list represents the set of server-level events:
DDL_SERVER_LEVEL_EVENTS: CREATE_DATABASE, ALTER_DATABASE, DROP_DATABASE
DDL_ENDPOINT_EVENTS: CREATE_ENDPOINT, ALTER_ENDPOINT, DROP_ENDPOINT
DDL_SERVER_SECURITY_EVENTS: ADD_ROLE_MEMBER, ADD_SERVER_ROLE_MEMBER, DROP_ROLE_MEMBER, DROP_SERVER_ROLE_MEMBER
DDL_AUTHORIZATION_SERVER_EVENTS: ALTER_AUTHORIZATION_SERVER
DDL_GDR_SERVER_EVENTS: GRANT_SERVER, DENY_SERVER, REVOKE_SERVER
DDL_LOGIN_EVENTS: CREATE_LOGIN, ALTER_LOGIN, DROP_LOGIN
The following list represents the set of database-level events:
DDL_DATABASE_LEVEL_EVENTS
DDL_ASSEMBLY_EVENTS: CREATE_ASSEMBLY, ALTER_ASSEMBLY, DROP_ASSEMBLY
DDL_DATABASE_SECURITY_EVENTS
DDL_APPLICATION_ROLE_EVENTS: CREATE_APPLICATION_ROLE, ALTER_APPLICATION_ROLE, DROP_APPLICATION_ROLE
DDL_AUTHORIZATION_DATABASE_EVENTS: ALTER_AUTHORIZATION_DATABASE
DDL_CERTIFICATE_EVENTS: CREATE_CERTIFICATE, ALTER_CERTIFICATE, DROP_CERTIFICATE
DDL_GDR_DATABASE_EVENTS: GRANT_DATABASE, DENY_DATABASE, REVOKE_DATABASE
DDL_ROLE_EVENTS: CREATE_ROLE, ALTER_ROLE, DROP_ROLE
DDL_SCHEMA_EVENTS: CREATE_SCHEMA, ALTER_SCHEMA, DROP_SCHEMA
DDL_USER_EVENTS: CREATE_USER, DROP_USER, ALTER_USER
DDL_EVENT_NOTIFICATION_EVENTS: CREATE_EVENT_NOTIFICATION, DROP_EVENT_NOTIFICATION
DDL_FUNCTION_EVENTS: CREATE_FUNCTION, ALTER_FUNCTION, DROP_FUNCTION
DDL_PARTITION_EVENTS
DDL_PARTITION_FUNCTION_EVENTS: CREATE_PARTITION_FUNCTION, ALTER_PARTITION_FUNCTION, DROP_PARTITION_FUNCTION
DDL_PARTITION_SCHEME_EVENTS: CREATE_PARTITION_SCHEME, ALTER_PARTITION_SCHEME, DROP_PARTITION_SCHEME
DDL_PROCEDURE_EVENTS: CREATE_PROCEDURE, DROP_PROCEDURE, ALTER_PROCEDURE
DDL_SSB_EVENTS
DDL_CONTRACT_EVENTS: CREATE_CONTRACT, DROP_CONTRACT
DDL_MESSAGE_TYPE_EVENTS: CREATE_MSGTYPE, ALTER_MSGTYPE, DROP_MSGTYPE
DDL_QUEUE_EVENTS: CREATE_QUEUE, ALTER_QUEUE, DROP_QUEUE
DDL_SERVICE_EVENTS: CREATE_SERVICE, DROP_SERVICE, ALTER_SERVICE
DDL_REMOTE_SERVICE_BINDING_EVENTS: CREATE_REMOTE_SERVICE_BINDING, ALTER_REMOTE_SERVICE_BINDING, DROP_REMOTE_SERVICE_BINDING
DDL_ROUTE_EVENTS: CREATE_ROUTE, DROP_ROUTE, ALTER_ROUTE
DDL_SYNONYM_EVENTS: CREATE_SYNONYM, DROP_SYNONYM
DDL_TABLE_VIEW_EVENTS
DDL_INDEX_EVENTS: CREATE_INDEX, DROP_INDEX, ALTER_INDEX, CREATE_XML_INDEX
DDL_STATISTICS_EVENTS: CREATE_STATISTICS, UPDATE_STATISTICS, DROP_STATISTICS
DDL_TABLE_EVENTS: CREATE_TABLE, ALTER_TABLE, DROP_TABLE
DDL_VIEW_EVENTS: CREATE_VIEW, ALTER_VIEW, DROP_VIEW
DDL_TRIGGER_EVENTS: CREATE_TRIGGER, DROP_TRIGGER, ALTER_TRIGGER
DDL_TYPE_EVENTS: CREATE_TYPE, DROP_TYPE
DDL_XML_SCHEMA_COLLECTION_EVENTS: CREATE_XML_SCHEMA_COLLECTION, ALTER_XML_SCHEMA_COLLECTION, DROP_XML_SCHEMA_COLLECTION

The following list represents the set of SQL Trace events:
TRC_CLR: ASSEMBLY_LOAD
TRC_DATABASE: DATA_FILE_AUTO_GROW, DATA_FILE_AUTO_SHRINK, DATABASE_MIRRORING_STATE_CHANGE, LOG_FILE_AUTO_GROW, LOG_FILE_AUTO_SHRINK
TRC_DEPRECATION: DEPRECATION_ANNOUNCEMENT, DEPRECATION_FINAL_SUPPORT
TRC_ERRORS_AND_WARNINGS: BLOCKED_PROCESS_REPORT, ERRORLOG, EVENTLOG, EXCEPTION, EXCHANGE_SPILL_EVENT, EXECUTION_WARNINGS, HASH_WARNING, MISSING_COLUMN_STATISTICS, MISSING_JOIN_PREDICATE, SORT_WARNINGS, USER_ERROR_MESSAGE
TRC_FULL_TEXT: FT_CRAWL_ABORTED, FT_CRAWL_STARTED, FT_CRAWL_STOPPED
TRC_LOCKS: DEADLOCK_GRAPH, LOCK_DEADLOCK, LOCK_DEADLOCK_CHAIN, LOCK_ESCALATION
TRC_OBJECTS: OBJECT_ALTERED, OBJECT_CREATED, OBJECT_DELETED
TRC_OLEDB: OLEDB_CALL_EVENT, OLEDB_DATAREAD_EVENT, OLEDB_ERRORS, OLEDB_PROVIDER_INFORMATION, OLEDB_QUERYINTERFACE_EVENT
TRC_PERFORMANCE: SHOWPLAN_ALL_FOR_QUERY_COMPILE, SHOWPLAN_XML, SHOWPLAN_XML_FOR_QUERY_COMPILE, SHOWPLAN_XML_STATISTICS_PROFILE
TRC_QUERY_NOTIFICATIONS: QN_DYNAMICS, QN_PARAMETER_TABLE, QN_SUBSCRIPTION, QN_TEMPLATE
TRC_SECURITY_AUDIT: AUDIT_ADD_DB_USER_EVENT, AUDIT_ADDLOGIN_EVENT, AUDIT_ADD_LOGIN_TO_SERVER_ROLE_EVENT, AUDIT_ADD_MEMBER_TO_DB_ROLE_EVENT, AUDIT_ADD_ROLE_EVENT, AUDIT_APP_ROLE_CHANGE_PASSWORD_EVENT, AUDIT_BACKUP_RESTORE_EVENT, AUDIT_CHANGE_AUDIT_EVENT, AUDIT_CHANGE_DATABASE_OWNER, AUDIT_DATABASE_MANAGEMENT_EVENT, AUDIT_DATABASE_OBJECT_ACCESS_EVENT, AUDIT_DATABASE_OBJECT_GDR_EVENT, AUDIT_DATABASE_OBJECT_MANAGEMENT_EVENT, AUDIT_DATABASE_OBJECT_TAKE_OWNERSHIP_EVENT, AUDIT_DATABASE_OPERATION_EVENT, AUDIT_DATABASE_PRINCIPAL_IMPERSONATION_EVENT, AUDIT_DATABASE_PRINCIPAL_MANAGEMENT_EVENT, AUDIT_DATABASE_SCOPE_GDR_EVENT, AUDIT_DBCC_EVENT, AUDIT_LOGIN, AUDIT_LOGIN_CHANGE_PASSWORD_EVENT, AUDIT_LOGIN_CHANGE_PROPERTY_EVENT, AUDIT_LOGIN_FAILED, AUDIT_LOGIN_GDR_EVENT, AUDIT_LOGOUT, AUDIT_SCHEMA_OBJECT_ACCESS_EVENT, AUDIT_SCHEMA_OBJECT_GDR_EVENT, AUDIT_SCHEMA_OBJECT_MANAGEMENT_EVENT, AUDIT_SCHEMA_OBJECT_TAKE_OWNERSHIP_EVENT, AUDIT_SERVER_ALTER_TRACE_EVENT, AUDIT_SERVER_OBJECT_GDR_EVENT, AUDIT_SERVER_OBJECT_MANAGEMENT_EVENT, AUDIT_SERVER_OBJECT_TAKE_OWNERSHIP_EVENT, AUDIT_SERVER_OPERATION_EVENT, AUDIT_SERVER_PRINCIPAL_IMPERSONATION_EVENT, AUDIT_SERVER_PRINCIPAL_MANAGEMENT_EVENT, AUDIT_SERVER_SCOPE_GDR_EVENT
TRC_SERVER: MOUNT_TAPE, SERVER_MEMORY_CHANGE, TRACE_FILE_CLOSE
TRC_STORED_PROCEDURE: SP_CACHEINSERT, SP_CACHEMISS, SP_CACHEREMOVE, SP_RECOMPILE
TRC_TSQL: SQL_STMTRECOMPILE, XQUERY_STATIC_TYPE
TRC_USER_CONFIGURABLE: USERCONFIGURABLE_0, USERCONFIGURABLE_1, USERCONFIGURABLE_2, USERCONFIGURABLE_3, USERCONFIGURABLE_4, USERCONFIGURABLE_5, USERCONFIGURABLE_6, USERCONFIGURABLE_7, USERCONFIGURABLE_8, USERCONFIGURABLE_9
So let’s finish up with an example of a server-level and a database-level event notification. I’ll create a server-level notification called [CreateLoginEvent] that will send event information to the [NotifyService] service whenever the CREATE LOGIN statement executes. Additionally, I’ll create a database-level notification called [TableOrViewEvent] that will send event information whenever a table or view is modified through a DDL operation.

-- Create queue to receive messages.
CREATE QUEUE NotifyQueue
GO
-- Create service on queue that references the event notification contract.
CREATE SERVICE NotifyService ON QUEUE NotifyQueue
([http://schemas.microsoft.com/SQL/Notifications/PostEventNotification]);
GO
-- Create route on service to define the address to which
-- the Service Broker sends messages.
CREATE ROUTE NotifyRoute
WITH SERVICE_NAME = 'NotifyService',
ADDRESS = 'LOCAL';
GO
-- Create the server-level event notification.
CREATE EVENT NOTIFICATION CreateLoginEvent
ON SERVER
FOR CREATE_LOGIN
TO SERVICE 'NotifyService', 'current database'
GO
-- Create the database-level event notification.
-- Substitute the target database's service_broker_guid
-- (see the sys.databases catalog view).
CREATE EVENT NOTIFICATION TableOrViewEvent
ON DATABASE
FOR DDL_TABLE_VIEW_EVENTS
TO SERVICE 'NotifyService', 'your_broker_instance_guid'
GO
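The example only creates the notifications; to actually see the audit information, something must read the messages off the queue. The following is a minimal sketch of doing so, assuming the NotifyQueue created previously and the standard EVENT_INSTANCE XML schema that event notifications produce:

-- Pull one event message off the queue and extract a couple of fields
DECLARE @MessageBody XML ;
RECEIVE TOP (1) @MessageBody = CAST(message_body AS XML)
FROM NotifyQueue ;

SELECT @MessageBody.value('(/EVENT_INSTANCE/EventType)[1]',
           'NVARCHAR(100)') AS EventType,
       @MessageBody.value('(/EVENT_INSTANCE/PostTime)[1]',
           'NVARCHAR(24)')  AS PostTime ;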
Designing Database Security Designing your database security can easily be the most complex component of your database design because you need to consider many issues. For example, you have to balance security concerns against ease of administration against business requirements against user requests (notice I didn’t say requirements) and against everybody else’s opinion! Designing database security involves understanding the SQL Server engine’s security architecture. I have already discussed the concepts of principals and securables, but Figure 4.12 concisely illustrates the security hierarchy. No strict rules exist for how you should design your database security. Whether you should use a bottom-up or top-down approach depends on your environmental factors. But these are the key issues you will need to address:
Model the security requirements.
Define the login accounts.
Windows groups
Windows users
SQL logins
Determine the fixed server roles.
Define database users.
Determine the guest account.
Map login accounts to database users, including Windows groups, Windows users, and SQL logins.
Determine the fixed database roles, including the public role.
Define the user-defined database roles.
Determine the statement permissions.
Determine the level of database security.
Schemas
Database objects, including procedures, views, and tables (including columns)
User-defined functions
Determine object permissions.
GRANT permissions
DENY permissions
FIGURE 4.12 SQL Server security hierarchy (Windows-level, SQL Server-level, and database-level principals mapped to server, database, and schema securables)
As you can see, you have a lot to consider. The remainder of this chapter will roughly follow this framework. But before you dive into it, you’ll learn about the art of KISSing….

Implementing the KISS Principle

“Keep It Simple, Stupid.” If you’ve worked in the information technology (IT) field for any length of time, you must have heard of the KISS principle. So instead of having a real-world case scenario for this particular chapter, I thought I’d focus more on the concept of keeping it simple, which is particularly relevant for security.

Ultimately, security is there to keep the “bad guys” out. The problem is that the bad guys have proven to be clever, resourceful, and sneaky—ever finding new ways to hack into a system. You probably don’t have as much time to model, implement, and test your security model as the bad guys do to try to break in. Additionally, DBAs, and particularly developers, tend to have a habit of testing for expected behavior, not unexpected behavior. You can see the evidence of this in all the buffer overflow–based attacks over the past couple of years.

The idea is that if you keep your security model simple, it will be easier to implement, easier to understand, and, most important, easier to predict! This concept might translate into a decision to allow access to database objects only via views and procedures, with no one having direct access to tables, with principals being implemented only via Windows groups, and so on. You are trying to avoid building inherent complexity into your security model, which will help ensure that all stakeholders understand it and consequently can implement a more secure solution.

So remember, when in doubt, KISS it! It will certainly help you stop making a CLM (career-limiting move)!
Defining the Security Model As discussed earlier in the chapter, I’ll use the term principal to refer to an authenticated identity, meaning something that can be given access to objects, which are called securables, in the SQL Server 2005 database system. You can consider the principals to be the business resources that consume the technical securable resources. Both these resources are derived from an information strategy plan (ISP), which maps the business resources to the technical resources to simplify the design, implementation, and management of the relationships between the two. For example, business resources may come from the functional hierarchy of the business, and the technical resources may come from the hierarchy derived from a data analysis.
Typically when you get to the point of mapping the two together, you will usually have too many low-level objects to handle them in a manageable way. Consequently, you can exploit centuries of human brain development and use its ability to abstract data to simplify the design of the interrelationships. You can do this on the functional side by traversing up the hierarchies you created to understand the business and the technical architecture of how the business is serviced. For example, you may have a business presence in the United States, Spain, and the United Kingdom. In the United Kingdom, this presence may be located in Ireland, England, and Scotland. In Scotland, you may have offices in Aberdeen, Glasgow, and Edinburgh. In Edinburgh, you may have offices in Edinburgh Castle and in several other buildings. In the castle, you may use several of the rooms, and so on. Another hierarchy is one representing the business function. The following shows it represented in a different way, which can help you understand the relationships between the different levels of the hierarchy:

Human Resources
Production
Purchasing
Sales
...
    Order processing
        Placing an order
            Finding the Customer
            Finding the Product required
            Generating an Order
                Adding an Order Item
        Updating an Order
        ...
        Deleting an Order
...
Having gone to all this trouble of identifying the principals, it would be a shame if you could assign permissions at only some of the levels. In addition, it would be a travesty if you could not transfer the ownership of the permissions from one principal to another. Without database user groups or database object groups, you would have many people requiring permissions on many different objects, and permissions would have to be granted at the dbo level or the user level depending on whether the user belonged to the dbo group. By introducing groups of database users (principals), you simplify permission allocation substantially, replacing permissions for many individual users with far fewer permission allocations for groups of users. By introducing groups of database objects (schemas), you likewise replace permission allocations on many individual objects with far fewer permission allocations on groups of securables.
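To make that concrete, here is a minimal sketch of grouping on both sides; the Sales schema and OrderProcessing role names are illustrative:

-- Group the securables: place related objects in one schema
CREATE SCHEMA Sales ;
GO
-- Group the principals: one role for everyone doing order processing
CREATE ROLE OrderProcessing ;
GO
-- One permission allocation now covers every object in the schema
GRANT SELECT, INSERT, UPDATE ON SCHEMA :: Sales TO OrderProcessing ;
GO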
Defining Login Access Requirements Before users can access a database solution, they must be able to access the SQL Server environment. This, as expected, takes place through login accounts. SQL Server 2005 supports the same types of login accounts as previous versions:

Windows login These principals are based on the Windows operating system, which will typically be part of your Active Directory. In this case, a user who has already logged into the Windows network can transparently access the SQL Server solution because they, or a group they belong to, have been granted access.

SQL login These principals are based on an explicit login name and password combination that is kept on the SQL Server instance. Whenever a user wants to connect to the SQL Server solution, they will have to explicitly provide the correct login name and password; otherwise, they will not get in—just as in life.
A default installation of SQL Server 2005 supports only Windows authentication. You can still create SQL Server logins, but they cannot be used. You will need to reconfigure SQL Server to allow both types of connections.
Whether you rely on Windows logins or SQL logins depends on your network infrastructure and security requirements. In an enterprise environment, you’d want to leverage the existing investment and resources of your Active Directory. You would generally want to rely on using Windows groups, instead of individual Windows users, as per the usual security administration recommendations. Alternatively, you might have an Internet-based SQL Server solution; in this case, it would not be as easy to rely on the Windows operating system. Therefore, in this instance, you would have to rely predominantly on SQL logins. Either way, you can achieve this only through the correct modeling of your security requirements, as I discussed earlier when covering the purpose of your ISP. Once you have modeled your security login requirements, creating them is easy. The CREATE LOGIN T-SQL statement creates logins in SQL Server 2005. The syntax for creating logins in SQL Server is as follows:

CREATE LOGIN login_name { WITH <option_list1> | FROM <sources> }

<sources> ::=
    WINDOWS [ WITH <windows_options> [ ,... ] ]
    | CERTIFICATE certname
    | ASYMMETRIC KEY asym_key_name

<option_list1> ::=
    PASSWORD = 'password' [ HASHED ] [ MUST_CHANGE ]
    [ , <option_list2> [ ,... ] ]

<option_list2> ::=
    SID = sid
    | DEFAULT_DATABASE = database
    | DEFAULT_LANGUAGE = language
    | CHECK_EXPIRATION = { ON | OFF }
    | CHECK_POLICY = { ON | OFF }
    | CREDENTIAL = credential_name

<windows_options> ::=
    DEFAULT_DATABASE = database
    | DEFAULT_LANGUAGE = language
You should no longer use the sp_addlogin and sp_grantlogin system stored procedures that were available in earlier versions of SQL Server. These system stored procedures have been included in Microsoft SQL Server 2005 for backward compatibility only and may not be supported in future releases.
The important addition in SQL Server 2005 is that it can apply the same complexity and expiration policies as the operating system to login accounts. You do this through the CHECK_EXPIRATION and CHECK_POLICY clauses. The CHECK_EXPIRATION option has no effect on a Windows 2000 server.
SQL Server 2005 can use Windows password policy mechanisms only when running on Windows Server 2003 or newer. Watch out for that in the SQL Server 2005 exams.
I won’t spend too much time on login access, because you should concentrate on database solutions, so the following is just a quick example (note that the CHECK_EXPIRATION and CHECK_POLICY options apply to SQL logins only, not to Windows logins):

-- Create SQL login account
CREATE LOGIN Iris
WITH PASSWORD = 'daehevigt', DEFAULT_LANGUAGE = German

-- Create SQL login account with hashed password
CREATE LOGIN Olga
WITH PASSWORD = 'nacnoiram' HASHED, DEFAULT_LANGUAGE = Russian

-- Create Windows login
CREATE LOGIN [LIVESISATNAQ\PaulaVerkhivker] FROM WINDOWS
WITH DEFAULT_DATABASE = WorldSalesDB

-- Create SQL login with expiration and complexity policy checks
CREATE LOGIN AlexIsakov
WITH PASSWORD = 'vokasixela', DEFAULT_DATABASE = EngineeringDB,
     CHECK_EXPIRATION = ON, CHECK_POLICY = ON

-- Create Windows login for a Windows group
CREATE LOGIN [PADDINGTONINN\Informare] FROM WINDOWS
WITH DEFAULT_DATABASE = TrainingDB
Defining Database Access Requirements Don’t forget that ultimately security is all about the users of the database solution. Therefore, you need to prevent the users of the database solution from getting frustrated when trying to perform legitimate business processing. At the same time, you need to prevent access to data for those who should not have access to it. So, your initial data access design starts with a goal of achieving the correct levels of accessibility and usability of the data. You also need to take into account your DBAs; in other words, the data access design should be easy to understand, implement, and manage. Unnecessary complexity and administration difficulties will only encourage “mistakes” and possibly even shortcuts that bypass your security design.
It Is Worth the Effort!

Finding the right balance between restricting access and allowing users to perform their jobs easily can be difficult. Likewise, spending the time to understand and model your database security correctly can be a complex and time-consuming effort. But it is worth the effort!

I have lost count of how many companies have not bothered to find this balance or have simply given up. In these companies, everyone logs into the SQL Server solution using the sa account. Or people have had both a normal user account and an administrative account set up for them, but they always log in using their administrative account because they cannot be bothered to log out and then log in when they need to perform an administrative task.

Of course, these are all disasters waiting to happen. All it takes is one disgruntled employee or even a simple human error to delete critical data (or worse) for the benefits of implementing security correctly to be appreciated! So, take the time to do it right!
As you saw earlier for the database level, you can take advantage of the following principals:
Database users
Database roles
Application roles
In the following sections, you’ll examine in more detail what principals natively exist in SQL Server 2005 and the syntax used to create your own.
Defining Database Users As discussed, you create logins to enable access to the SQL Server system. Unless the logins have some sort of elevated privilege through fixed server role membership, they cannot access the individual user databases. You can grant access to the individual databases by creating database user accounts.
Don’t forget that each database has its own security subsystem. In modern database environments, users invariably need to access data from multiple databases. It is a suggested best practice to have some sort of consistency in the way you implement database users (and database groups) to avoid future problems such as name clashes and confusion.
Although you can create your own database user accounts, your first “port of call” should be deciding whether you can leverage any existing database user accounts that exist in every SQL Server database.
Fixed Database Users In SQL Server 2005 you need to be aware of a number of special users. These predefined users have special purposes such as allowing administrative control or guest access.

The dbo database user account The dbo database user account exists in all databases by default. As with earlier versions of SQL Server, the sa login and members of the sysadmin server role are automatically mapped to the dbo account. This account cannot be deleted—well, not officially, according to Microsoft.
You really should not be working with the dbo account like you might have in earlier versions of SQL Server. The security model has completely changed in SQL Server 2005, so do not use knowledge or experience that you might have picked up in the past.
Any database object that a system administrator creates by default will automatically belong to the dbo account. This predominantly has to do with object ownership (and backward compatibility issues), which you will look at in more detail later.
The guest database user account The guest database user account exists in all databases but is disabled by default. As with earlier versions of SQL Server, the purpose of the guest user is to allow login accounts without explicit database user accounts to access the database.
I wish Microsoft (or Sybase really) had used some other moniker for the guest account. Do not confuse the guest user account with the concept of guest access from other operating systems or security concepts. The guest account can be as powerful or as restrictive as you make it. Think of it more as a “default” account for everyone who can access your SQL Server environment who does not have an explicit database user mapping.
The guest database user account is a potentially powerful concept because it allows a DBA to efficiently control the majority of users who will be accessing a database with minimal effort. Take advantage of it! At the same time, make sure your DBAs and developers understand the implications of the guest account so they don’t inadvertently give access to data that is not intended.
User-Defined Database Users In most database solutions, you will be explicitly mapping database users to logins. You saw earlier how to create login accounts, so now you need to see how to create a database user. The syntax for creating a database user is as follows:

CREATE USER user_name
    [ { { FOR | FROM }
        { LOGIN login_name
          | CERTIFICATE cert_name
          | ASYMMETRIC KEY asym_key_name }
      | WITHOUT LOGIN ]
    [ WITH DEFAULT_SCHEMA = schema_name ]
You’ll note that when you create a database user account, you must map it back up the SQL Server hierarchy to the login account. This is because database users cannot exist by themselves. They are the bridge, if you like, between the server scope and the database scope.
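A quick sketch, reusing the Olga login created earlier; the default schema is an assumption:

USE AdventureWorks ;
GO
-- Map the SQL login created earlier to a database user
CREATE USER Olga FOR LOGIN Olga
WITH DEFAULT_SCHEMA = dbo ;
GO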
Some built-in database users, such as sys and INFORMATION_SCHEMA, aren’t mapped to logins.
Defining Database Roles Database roles are basically grouping mechanisms. In earlier versions of SQL Server, you would have called them groups. A group was a group in SQL Server 4, was a group in SQL Server 6, was a group in SQL Server 6.5, and became a role in SQL Server 7.0. I am not sure why Microsoft decided to change the official moniker. The cynical side of me thinks it was to sell it as something new, because vendors in the IT industry have a history of changing terminology for marketing/sales reasons. However, perhaps it was to avoid confusion with the Windows groups nomenclature. SQL Server 2005 supports two types of roles at the database level. Fixed database roles are provided by Microsoft and, as the name implies, cannot be changed. User-defined database roles allow you to customize your entire security subsystem to suit your requirements. The syntax for adding a principal to a database role is as follows:

sp_addrolemember [ @rolename = ] 'role',
    [ @membername = ] 'security_account'
Fixed Database Roles Fixed database roles are provided as a basic means of grouping either administrative tasks or common data access within a database context. It’s just a matter of being familiar with the fixed database roles that come with SQL Server 2005, as shown in Table 4.3. Try to take advantage of using existing fixed database roles as opposed to creating your own. Not only does it make your life easier because there is less work to do, but it also allows you to achieve consistency between different databases because they exist in every SQL Server database.

The public fixed database role The public fixed database role is a special role, which comes from SQL Server’s Sybase legacy. It exists in all databases, both system and user, and cannot be deleted. Most important, all principals—all users, groups, and roles—belong to the public role, and this relationship cannot be removed. Effectively, the public role maintains the default permissions for all users in that specific database. If you like analogies, think of it as the Everyone group in the Windows operating system security model.

TABLE 4.3 Fixed Database Roles

db_accessadmin
    Database-level permission: Granted: ALTER ANY USER, CREATE SCHEMA
    Server-level permission: Granted: VIEW ANY DATABASE

db_backupoperator
    Database-level permission: Granted: BACKUP DATABASE, BACKUP LOG, CHECKPOINT
    Server-level permission: Granted: VIEW ANY DATABASE

db_datareader
    Database-level permission: Granted: SELECT
    Server-level permission: Granted: VIEW ANY DATABASE

db_datawriter
    Database-level permission: Granted: DELETE, INSERT, UPDATE
    Server-level permission: Granted: VIEW ANY DATABASE

db_ddladmin
    Database-level permission: Granted: ALTER ANY ASSEMBLY, ALTER ANY ASYMMETRIC KEY, ALTER ANY CERTIFICATE, ALTER ANY CONTRACT, ALTER ANY DATABASE DDL TRIGGER, ALTER ANY DATABASE EVENT NOTIFICATION, ALTER ANY DATASPACE, ALTER ANY FULLTEXT CATALOG, ALTER ANY MESSAGE TYPE, ALTER ANY REMOTE SERVICE BINDING, ALTER ANY ROUTE, ALTER ANY SCHEMA, ALTER ANY SERVICE, ALTER ANY SYMMETRIC KEY, CHECKPOINT, CREATE AGGREGATE, CREATE DEFAULT, CREATE FUNCTION, CREATE PROCEDURE, CREATE QUEUE, CREATE RULE, CREATE SYNONYM, CREATE TABLE, CREATE TYPE, CREATE VIEW, CREATE XML SCHEMA COLLECTION, REFERENCES
    Server-level permission: Granted: VIEW ANY DATABASE

db_denydatareader
    Database-level permission: Denied: SELECT
    Server-level permission: Granted: VIEW ANY DATABASE

db_denydatawriter
    Database-level permission: Denied: DELETE, INSERT, UPDATE
    Server-level permission: Granted: VIEW ANY DATABASE

db_owner
    Database-level permission: Granted with GRANT option: CONTROL
    Server-level permission: Granted: VIEW ANY DATABASE

db_securityadmin
    Database-level permission: Granted: ALTER ANY APPLICATION ROLE, ALTER ANY ROLE, CREATE SCHEMA, VIEW DEFINITION
    Server-level permission: Granted: VIEW ANY DATABASE
Be careful with what permissions you assign to the public fixed database role, because everyone who has access to the database will have those permissions.
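If you want to verify exactly what public currently holds, here is a minimal sketch using the permissions catalog views (object_name comes back NULL for database-scope entries):

-- List permissions currently held by the public role
SELECT pe.permission_name,
       pe.state_desc,
       OBJECT_NAME(pe.major_id) AS object_name
FROM sys.database_permissions AS pe
JOIN sys.database_principals AS pr
    ON pr.principal_id = pe.grantee_principal_id
WHERE pr.name = 'public' ;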
User-Defined Database Roles With SQL Server 2005 you have the capability of creating your own roles. Importantly, the syntax for creating user-defined roles has changed from earlier versions of SQL Server and is now a native T-SQL statement instead of a system stored procedure. The syntax for creating a user-defined database role is as follows:

CREATE ROLE role_name [ AUTHORIZATION owner_name ]
You should no longer use the sp_addrole system stored procedure because it will be dropped in subsequent releases of SQL Server.
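A quick sketch putting CREATE ROLE together with the sp_addrolemember procedure shown earlier; the role and member names are illustrative:

-- Create a role owned by dbo and add an existing database user to it
CREATE ROLE DepartmentManagers AUTHORIZATION dbo ;
GO
EXEC sp_addrolemember 'DepartmentManagers', 'Natasha' ;
GO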
You will examine various strategies of how to implement your database security strategy later in this chapter, including user-defined database roles.
Defining Application Roles Hmm…application roles were an interesting addition to SQL Server 7.0 when it was first released. The technology has not been advanced greatly since then, and I do not know of many sites that take advantage of it. But there is a place for the concept; in certain cases, you may want to tie the security into the application that is being run as opposed to the user who is running the application. I can’t think of a great example at the moment (I’m watching the Formula One Grand Prix in Germany), but something along the lines of a “kiosk” application will suffice. The whole philosophy is to enable your users to have special permissions within the database when they use the application without having to explicitly grant them access to the database. The application role concept works like this:

1. Your user runs your database application.

2. The database application connects to SQL Server using the user’s security credentials.

3. The database application then executes a system stored procedure, passing an application role name and password, which the user does not know.

4. If the application role name and password are correct, the application role is activated. The user loses their permissions and inherits the permissions allotted to the application role.
As with earlier versions of SQL Server, application roles differ from other roles in the following ways:
Application roles have no members.
Application roles require a password to be activated.
Upon activation of the application role, the user loses all existing permissions in the database with the exception of the permissions granted to the public role.
Probably the major difference between the implementation of application roles in SQL Server 2005 and earlier versions is the ability to change the security context once an application role has been activated. In earlier versions of SQL Server, a user had to disconnect and reconnect to SQL Server to change their security context. In SQL Server 2005, however, you can utilize sp_unsetapprole to revert to the original security context. It does this through a cookie.
Application roles aren’t associated with a server-level principal and therefore cannot access server-level metadata. To disable this restriction, you need to take advantage of trace flag 4616.
The syntax for creating an application role is as follows:

CREATE APPLICATION ROLE application_role_name
WITH PASSWORD = 'password'
[ , DEFAULT_SCHEMA = schema_name ]
The syntax that an application must use to activate an application role is via the following system stored procedure:

sp_setapprole [ @rolename = ] 'role',
    [ @password = ] { encrypt N'password' } | 'password'
    [ , [ @encrypt = ] { 'none' | 'odbc' } ]
    [ , [ @fCreateCookie = ] true | false ]
    [ , [ @cookie = ] @cookie OUTPUT ]
So to tie it all together, here is an example of how an application role would be used. The following application role would be created within the database:

-- Create an application role
CREATE APPLICATION ROLE KioskApplication
WITH PASSWORD = 'FayeValentine'
GO
You would then set appropriate permissions to the various database objects that the application needs access to. The following code would be embedded in the application (note that the activation, tests, and reversion must run in the same batch so that the @cookie variable stays in scope):

-- Activate an application role
DECLARE @cookie VARBINARY(8000);
EXEC sp_setapprole 'KioskApplication', 'FayeValentine',
    @fCreateCookie = true, @cookie = @cookie OUTPUT;

-- Test security context; should be the application role
SELECT USER_NAME();

-- Restore security context
EXEC sp_unsetapprole @cookie;

-- Test security context; should be the original user
SELECT USER_NAME();
GO
Defining Application Administration Requirements Your application administration requirements will be database solution specific. In earlier versions of SQL Server, you had little choice because users typically came in either as sa, in which case they could pretty much do anything within SQL Server, or as dbo, in which case they could do whatever they wanted in the database. Yes, there were statement permissions (and you will examine them as well), but most SQL Server sites just don’t bother with statement permissions. The security model in SQL Server 2005 is more granular now, so you can take advantage of the following principals and permissions to allow administrative access to your database solution, depending on your needs.
To learn more about server-level permissions, you can query the sys.server_permissions catalog view.
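For example, here is a minimal sketch of listing who holds what at the server scope:

-- List server-scope permissions and their grantees
SELECT pr.name AS grantee,
       pe.permission_name,
       pe.state_desc
FROM sys.server_permissions AS pe
JOIN sys.server_principals AS pr
    ON pr.principal_id = pe.grantee_principal_id ;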
Granting Administrative Control Again, you can use different techniques to define your administrative requirements:
Fixed server roles
Fixed database roles
Statement-level permissions

You’ll now examine the options you have in more detail:
Fixed server roles SQL Server 2005 offers a number of roles at the server level that you can potentially use to administer your database solution. Table 4.4 shows the fixed server roles and their inherited permissions. The fixed server roles are designed more for DBAs and managing the SQL Server environment, so I wouldn’t strictly categorize them as being appropriate for application administration. However, fixed server roles such as bulkadmin, dbcreator, and diskadmin could fall within the scope of database administration. Am I stretching it too much? The syntax for adding a principal to a fixed server role is as follows:

sp_addsrvrolemember [ @loginame = ] 'login', [ @rolename = ] 'role'
So, it’s then a matter of granting the appropriate roles to the login account. In the following example, Natalie has been promoted to a junior DBA and needs to be able to perform BULK INSERT statements and manage disk files, so you would execute the following:

-- Give appropriate administrative rights
EXEC sp_addsrvrolemember 'Natalie', 'bulkadmin'
EXEC sp_addsrvrolemember 'Natalie', 'diskadmin'
GO
TABLE 4.4 Fixed Server Roles

Fixed Server Role    Server-Level Permission

bulkadmin            Granted: ADMINISTER BULK OPERATIONS
dbcreator            Granted: CREATE DATABASE
diskadmin            Granted: ALTER RESOURCES
processadmin         Granted: ALTER ANY CONNECTION, ALTER SERVER STATE
securityadmin        Granted: ALTER ANY LOGIN
serveradmin          Granted: ALTER ANY ENDPOINT, ALTER RESOURCES, ALTER SERVER STATE, ALTER SETTINGS, SHUTDOWN, VIEW SERVER STATE
setupadmin           Granted: ALTER ANY LINKED SERVER
sysadmin             Granted with the GRANT option: CONTROL SERVER
Fixed database roles You have already seen the fixed database roles and their inherent capabilities. The majority, which appear in the following list, perform administrative-type tasks within the database:
db_accessadmin
db_backupoperator
db_ddladmin
db_owner
db_securityadmin
Say an existing user, Natasha, who is in charge of her department, needs to be able to back up the AdventureWorks database; you would execute the following:

USE AdventureWorks
GO
-- Give appropriate administrative rights
EXEC sp_addrolemember 'db_backupoperator', 'Natasha'
GO
Statement-level permissions If the existing fixed database roles do not have the granularity that is required, you can resort to statement-level permissions (sometimes also referred to as database permissions). Statement-level permissions allow you to more finely control what administrative tasks and/or DDL statements are allowed to a database principal. The set of statement permissions supported by SQL Server 2005 is as follows:
BACKUP DATABASE
BACKUP LOG
CREATE DATABASE (applies only to the master system database)
CREATE DEFAULT
CREATE FUNCTION
CREATE PROCEDURE
CREATE RULE
CREATE TABLE
CREATE VIEW
Statement-level permissions tend not to be utilized much in the industry because most SQL Server environments take advantage of the existing fixed server/database roles. They existed before the fixed roles, which were introduced in SQL Server 7.0. The syntax used to grant database permissions to a database principal is as follows:

GRANT <permission> [ ,...n ]
TO <database_principal> [ ,...n ] [ WITH GRANT OPTION ]
    [ AS <database_principal> ]

<permission> ::=
    permission | ALL [ PRIVILEGES ]

<database_principal> ::=
    Database_user
    | Database_role
    | Application_role
    | Database_user_mapped_to_Windows_User
    | Database_user_mapped_to_Windows_Group
    | Database_user_mapped_to_certificate
    | Database_user_mapped_to_asymmetric_key
    | Database_user_with_no_login
Say an existing information worker, Veronica, needs to be able to create views and back up the AdventureWorks database; you would execute the following:

USE AdventureWorks
GO
-- Give appropriate administrative rights
GRANT BACKUP DATABASE TO Veronica
GRANT BACKUP LOG TO Veronica
GRANT CREATE VIEW TO Veronica
GO
Don’t forget you can create your own custom administrative role infrastructure by creating user-defined database roles and granting the requisite statement-level permission combination to them.
Preventing Administrative Control Earlier you looked at how you can take advantage of DDL triggers to audit your DDL operations. But obviously you can also use DDL triggers to prevent certain changes to your database schema. For example, you might have a requirement to implement an additional “safety mechanism” to ensure that DBAs and other legitimate developers do not inadvertently drop a table without first turning off this safety mechanism. The following T-SQL code illustrates this concept:

CREATE TRIGGER SafetyMechanism
ON DATABASE
FOR DROP_TABLE
AS
PRINT 'You cannot drop a table without turning off the [SafetyMechanism]'
ROLLBACK
GO
In this scenario, DBAs and developers would have to disable the trigger before dropping a table:

DISABLE TRIGGER SafetyMechanism ON DATABASE
I hope they also remember to enable it!
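Re-enabling it afterward is symmetric:

ENABLE TRIGGER SafetyMechanism ON DATABASE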
Defining Users and Groups Requiring Read-Only Access to Application Data In some cases, users need only to read the data inside a database solution, be it all of the tables or a subset of the tables. This is typical for information workers who need to produce reports. You can achieve this in a number of ways, depending on your requirements. Again, I highly recommend following the KISS principle. When defining your security layer, remember you can use two fundamentally different techniques to control security, either through explicit permissions or through implicit permissions. With implicit permissions, the users or the group inherit permissions by belonging to one of the predefined SQL Server fixed database roles covered earlier. This represents a simple and effective way of giving users the ability to access data within the database with minimal effort. So, if you have a database solution where all valid users need to have only read-only access to all the data, the easiest way of achieving this is through the combination of the predefined db_datareader fixed database role and the guest database user account. The following example shows how you would grant read-only access to all information workers who can connect to the SQL Server hosting the AdventureWorks database, assuming there has been no security modeling in the database whatsoever:

USE AdventureWorks ;
GO
-- Enable the guest account
GRANT CONNECT TO guest
GO
EXEC sp_addrolemember 'db_datareader', 'guest'
It’s important to emphasize that you have “not left the gates open to the barbarian hordes” because people still have to be able to log into SQL Server, but everyone who can log into the SQL Server solution will have read-only access to the AdventureWorks database. However, you have given them access to the entire database. If you do not want them to have access to a particular table, it’s simply a matter of explicitly denying them access to the appropriate table:

USE AdventureWorks ;
GO
DENY SELECT ON [Sales].[ContactCreditCard] TO guest
GO
Otherwise, don’t forget that you can still deny individual users access to the database or give them greater access depending on your requirements. You do that simply by creating a database user account for them within the AdventureWorks database, which has the effect of negating the permissions inherited by the guest database user account. So building on the previous example, let’s assume you have an existing user with a valid login, Paula Verkhivker, who should not be able to access any data within the AdventureWorks database. It would be sufficient to execute the following:

USE AdventureWorks ;
GO
-- Create the database user account
CREATE USER Polina FOR LOGIN [LIVESISATNAQ\PaulaVerkhivker]
Now, because Paula has had an explicit database user account created for her that does not have any access rights, she would not be able to access any of the data within the AdventureWorks database. However, you have not guaranteed that she will not be able to access any data in the future, depending on what else happens in the security model of the database. Consequently, you should run the following script to ensure she will never be able to read the data:

USE AdventureWorks ;
GO
EXEC sp_addrolemember 'db_denydatareader', 'Polina'
Perhaps the main “problem” with using this technique of relying on implicit permissions and then worrying about exceptions and what might happen in the future is having to track it all, let alone understand it. Consequently, for more complex security requirements, you would typically use explicit permissions.
SQL Server 2005 has a great new feature that allows you to see the effective permissions defined on a securable, which takes into account what is inherited through role or group membership. This option is available through the Effective Permissions dialog box located on the Permissions page of the securable’s properties in SQL Server Management Studio.
Using this alternative approach involves creating a new database role, populating it with the various information workers who need access to the data, and giving SELECT permissions to all the relevant tables and/or views. So, assuming the logins exist, the following is an example:

USE AdventureWorks ;
GO
-- Create new database role
CREATE ROLE [InformationWorkers]
GO
-- Add all relevant information workers to the new role
EXEC sp_addrolemember 'InformationWorkers', 'EugeneDeefholts'
EXEC sp_addrolemember 'InformationWorkers', 'JamesSquire'
EXEC sp_addrolemember 'InformationWorkers', 'KevinDunn'
EXEC sp_addrolemember 'InformationWorkers', 'NatashaFiodoroff'
EXEC sp_addrolemember 'InformationWorkers', 'TimHaslett'
-- Add any relevant groups to the new role
EXEC sp_addrolemember 'InformationWorkers', '[SYDNEY\NewWindsor]'
EXEC sp_addrolemember 'InformationWorkers', '[SYDNEY\LightBrigade]'
EXEC sp_addrolemember 'InformationWorkers', '[BELLEVUE\TapHouse]'
GO
-- Grant read permissions to all tables
GRANT SELECT ON [HumanResources].[Department] TO [InformationWorkers]
GRANT SELECT ON [HumanResources].[Employee] TO [InformationWorkers]
GRANT SELECT ON [HumanResources].[EmployeeAddress] TO [InformationWorkers]
GRANT SELECT ON [HumanResources].[EmployeeDepartmentHistory] TO [InformationWorkers]
GRANT SELECT ON [HumanResources].[JobCandidate] TO [InformationWorkers]
GRANT SELECT ON [HumanResources].[Shift] TO [InformationWorkers]
GRANT SELECT ON [Person].[Address] TO [InformationWorkers]
...
As you can see, there is more work to do in this particular simple example based on the AdventureWorks database. When modeling your security model, always think about implementing it in a way that would enable you to easily understand the implications, modify requirements as required, and audit the resultant, effective permissions as required. As I told Paula: KISS it!
Defining Users and Groups Responsible for Modifying Application Data It is more common for users of your database solution to be able to access the data in a variety of ways. However, remember that fundamentally users can access the data in four ways—by reading, adding, modifying, and deleting it. These translate to the SELECT, INSERT, UPDATE, and DELETE DML statements. Again, you have the choice of controlling security through implicit or explicit permissions. Much of what you saw in the previous section applies here, except that typically you’ll find that different users and groups have different requirements when it comes to modifying the data. So, the Accounting department will be responsible for modifying different data compared to the Sales department, and so on. It’s all about the initial modeling of your security requirements through the ISP, as discussed earlier.
For simple database solutions, you can take advantage of the db_datawriter fixed database role. If all users of the SQL Server solution need to be able to modify all tables in the database, it’s simply a matter of executing the following script:

USE AdventureWorks ;
GO
-- Enable the guest account
GRANT CONNECT TO guest
GO
EXEC sp_addrolemember 'db_datawriter', 'guest'
Don’t forget that it would also probably make sense to allow them to read the data via the db_datareader fixed database role. However, if you had specific users and groups that needed to be able to modify all data, you would not rely on the guest account and would instead resort to specific mappings of the appropriate principals to the db_datawriter fixed database role:

USE AdventureWorks ;
GO
-- Add relevant users to the db_datawriter role
EXEC sp_addrolemember 'db_datawriter', 'JamesSquire'
EXEC sp_addrolemember 'db_datawriter', 'NatashaFiodoroff'
-- Add relevant groups to the role
EXEC sp_addrolemember 'db_datawriter', '[BELLEVUE\TapHouse]'
GO
It should go without saying, but I’ll say it anyway…users and groups will be able to modify the data by belonging to the db_owner fixed database role. It should also not be necessary for me to add that this is a bad idea because you have given users the capability of doing pretty much anything they like within the database, but I will anyway.
Most enterprise environments, however, need much finer granularity when modifying data within a database solution. So although the db_datareader fixed database role might be used for convenience, the db_datawriter fixed database role is hardly ever used. Given the variety of different objects in SQL Server 2005, the often complex dependencies between these objects, the complexity of modern databases, and the security requirements of organizations, Microsoft has really concentrated on “enriching” SQL Server’s security model; this is what you will examine next.
Specifying Database Object Security Permissions As you saw in Chapter 2, SQL Server 2005 supports a variety of database objects. The object model is more complex than earlier versions of SQL Server. Consequently, the permissions and which objects they are applicable to are more complex.
Table 4.5 shows all the permissions in SQL Server 2005 and the database objects to which they apply.

TABLE 4.5 SQL Server 2005 Permissions

Permission       Applicable Database Objects

ALTER            Aggregate functions, procedures, scalar functions, Service Broker queues, tables, table-valued functions, views
CONTROL          Aggregate functions, procedures, scalar functions, Service Broker queues, synonyms, tables, table-valued functions, views
DELETE           Synonyms, tables, views
EXECUTE          Aggregate functions, procedures, scalar functions, synonyms
INSERT           Synonyms, tables, views
RECEIVE          Service Broker queues
REFERENCES       Aggregate functions, scalar functions, Service Broker queues, tables, table-valued functions, views
SELECT           Synonyms, tables, table-valued functions, views
TAKE OWNERSHIP   Aggregate functions, procedures, scalar functions, synonyms, tables, table-valued functions, views
UPDATE           Synonyms, tables, views
VIEW DEFINITION  Aggregate functions, procedures, Service Broker queues, scalar functions, synonyms, tables, table-valued functions, views
I told you there were a lot more of them!
To learn more about permissions, you can query the sys.database_permissions database catalog view.
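As a quick illustration (a sketch only; you may want additional columns or filters), the following query lists the explicit object permissions in the current database along with the affected principals:

-- List explicit object permissions and their grantees
SELECT pr.name AS grantee,
       dp.permission_name,
       dp.state_desc, -- GRANT, GRANT_WITH_GRANT_OPTION, or DENY
       OBJECT_NAME(dp.major_id) AS object_name
FROM sys.database_permissions AS dp
JOIN sys.database_principals AS pr
  ON dp.grantee_principal_id = pr.principal_id
WHERE dp.class_desc = 'OBJECT_OR_COLUMN' ;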
Once you have become familiar with the different types of object permissions, you need to examine how you can manipulate, or "work" with, these object permissions. You can manipulate object permissions in SQL Server 2005 with the DCL statements covered in the following sections. Discounting the new security architecture in SQL Server 2005, these statements have not changed substantially since SQL Server 7.0.
GRANT

The GRANT statement grants object permissions on aggregate functions, extended stored procedures, scalar functions, Service Broker queues, stored procedures, synonyms, tables, table-valued functions, and views. The GRANT statement explicitly grants object permissions for more advanced database security models where the existing fixed database roles are not sufficient. The partial syntax for the GRANT statement is as follows:

GRANT <permission> [ ,...n ]
    ON [ OBJECT :: ][ schema_name ]. object_name [ ( column [ ,...n ] ) ]
    TO <database_principal> [ ,...n ]
    [ WITH GRANT OPTION ]
    [ AS <database_principal> ]

<permission> ::=
    ALL [ PRIVILEGES ] | permission [ ( column [ ,...n ] ) ]

<database_principal> ::=
    Database_user | Database_role | Application_role
    | Database_user_mapped_to_Windows_User
    | Database_user_mapped_to_Windows_Group
    | Database_user_mapped_to_certificate
    | Database_user_mapped_to_asymmetric_key
    | Database_user_with_no_login
The GRANT statement comes with the WITH GRANT OPTION clause, which grants the principal the ability to further grant the same object permission to other principals. In other words, the grantor has authorized the grantee to grant the object permission onward.
Obviously, you should be careful using this powerful option because object permissions might end up granted to principals whom you would prefer not to have them. However, SQL Server does keep track of the grantee chain, so if you decide to revoke the permission, SQL Server will take into account those downstream grantees. You'll learn more about this shortly.
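Here's a quick sketch of how the option works; Natalie and Marc are hypothetical principals, and the table is from AdventureWorks:

-- Natalie receives SELECT and may grant it onward
GRANT SELECT ON [HumanResources].[Department] TO Natalie
    WITH GRANT OPTION ;
GO
-- Natalie can now pass the same permission to Marc
EXECUTE AS USER = 'Natalie' ;
GRANT SELECT ON [HumanResources].[Department] TO Marc ;
REVERT ;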
DENY

The DENY statement explicitly denies object permissions on aggregate functions, extended stored procedures, scalar functions, Service Broker queues, stored procedures, synonyms, tables, table-valued functions, and views.
The idea behind the DENY statement is that by explicitly denying a principal access to database objects, you don’t have to worry about what other permissions might exist directly or through principal membership because the DENY statement takes precedence over everything else. Sort of….
Make sure you remember that in SQL Server 2005, a table-level DENY does not take precedence over a column-level GRANT. So watch out for this one! Remember the KISS principle; this quirk is a good example of why it matters.
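A minimal sketch of the quirk, using the AdventureWorks [Person].[Contact] table and a hypothetical user Alex:

-- Deny Alex access to the whole table...
DENY SELECT ON [Person].[Contact] TO Alex ;
-- ...but grant him one column; the column-level GRANT wins
GRANT SELECT ON [Person].[Contact] (FirstName) TO Alex ;
GO
-- Despite the table-level DENY, Alex can still run:
-- SELECT FirstName FROM [Person].[Contact] ;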
The partial syntax for the DENY statement is as follows:

DENY <permission> [ ,...n ]
    ON [ OBJECT :: ][ schema_name ]. object_name [ ( column [ ,...n ] ) ]
    TO <database_principal> [ ,...n ]
    [ CASCADE ]
    [ AS <database_principal> ]

<permission> ::=
    ALL [ PRIVILEGES ] | permission [ ( column [ ,...n ] ) ]

<database_principal> ::=
    Database_user | Database_role | Application_role
    | Database_user_mapped_to_Windows_User
    | Database_user_mapped_to_Windows_Group
    | Database_user_mapped_to_certificate
    | Database_user_mapped_to_asymmetric_key
    | Database_user_with_no_login
When working with the DENY statement, make sure you are aware of (and understand) the CASCADE option, which tells SQL Server that the object permission being denied is also denied to any other principals to which the executing principal has granted it.
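Continuing the earlier sketch, if Natalie had passed her permission on to others, denying her with CASCADE denies her grantees too:

-- Deny Natalie the permission and cascade the deny
-- to any principals she granted it to
DENY SELECT ON [HumanResources].[Department] TO Natalie CASCADE ;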
REVOKE

The REVOKE statement removes GRANT or DENY object permissions on aggregate functions, extended stored procedures, scalar functions, Service Broker queues, stored procedures, synonyms, tables, table-valued functions, and views. The idea behind the REVOKE statement is that it returns the object permission to a neutral state, so no explicit permission is set against the object.
Do not rely on REVOKE to ensure that a principal does not have access to an object. You should use the DENY statement instead to guarantee that.
The partial syntax for the REVOKE statement is as follows:

REVOKE [ GRANT OPTION FOR ] <permission> [ ,...n ]
    ON [ OBJECT :: ][ schema_name ]. object_name [ ( column [ ,...n ] ) ]
    { FROM | TO } <database_principal> [ ,...n ]
    [ CASCADE ]
    [ AS <database_principal> ]

<permission> ::=
    ALL [ PRIVILEGES ] | permission [ ( column [ ,...n ] ) ]

<database_principal> ::=
    Database_user | Database_role | Application_role
    | Database_user_mapped_to_Windows_User
    | Database_user_mapped_to_Windows_Group
    | Database_user_mapped_to_certificate
    | Database_user_mapped_to_asymmetric_key
    | Database_user_with_no_login
In the case of the REVOKE statement, make sure you understand the clauses covered in the following sections.
GRANT OPTION

The GRANT OPTION FOR clause does not revoke the permission itself; it revokes only the right to grant the object permission to other principals.
If the principal has the specified permission without the GRANT option, the permission itself will be revoked.
CASCADE

The CASCADE clause ensures the revoked permission is also revoked from any other principals to which it has been granted by the executing principal.
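As a sketch (same hypothetical principals as before), here is the difference between revoking only the grant right and revoking the permission entirely:

-- Remove only Natalie's right to grant the permission onward
REVOKE GRANT OPTION FOR SELECT
    ON [HumanResources].[Department] FROM Natalie CASCADE ;
-- Or remove the permission itself, again cascading to her grantees
REVOKE SELECT
    ON [HumanResources].[Department] FROM Natalie CASCADE ;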
A cascaded revocation of an object permission granted with WITH GRANT OPTION will revoke both the GRANT and DENY of the permission.
Specifying Database Objects Used to Maintain Security

One of the key decisions when designing your database security will be choosing the layer at which you want to control data access. This choice determines the database objects that need to be designed and created to provide access to your data. Generally, you do not want your users to access the base tables directly. So, you need to create a layer of database objects above the base tables that your users will access instead. When designing this "data access layer" for your database solution, you can generally resort to the following objects:

Entities  You can take advantage of explicitly created views and table-valued user-defined functions to allow relational DML access. This controls security in the least restrictive fashion because your users will be able to connect to these entities using their favorite applications, such as Microsoft Excel and Microsoft Access.

Procedures  Objects such as stored procedures and user-defined functions are more restrictive than the entities described previously, but they allow you to control more tightly what happens in your database and what data users can access.

Most database solutions use a combination of both entities and procedures to create this data access layer above the base tables. But you could, as an example of a more secure database solution, allow people to insert, update, delete, and retrieve data only through stored procedures.
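As a minimal sketch of such a layer (the view and procedure names are hypothetical; the base table is from AdventureWorks), users receive permissions on the layer while the base table stays locked down:

-- A view and a procedure form the data access layer;
-- users get permissions on these, never on the base table
CREATE VIEW [viw_CurrentEmployees]
AS
    SELECT EmployeeID, Title, HireDate
    FROM   [HumanResources].[Employee]
    WHERE  CurrentFlag = 1 ;
GO
CREATE PROCEDURE [spr_GetEmployee] @EmployeeID INT
AS
    SELECT EmployeeID, Title, HireDate
    FROM   [HumanResources].[Employee]
    WHERE  EmployeeID = @EmployeeID ;
GO
GRANT SELECT  ON [viw_CurrentEmployees] TO Natalie ;
GRANT EXECUTE ON [spr_GetEmployee]      TO Natalie ;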
Several tools are available to create this data access layer of objects above the base tables. One that is proving to be popular, for obvious reasons, is CodeSmith (http://www.codesmithtools.com).
Now, you will not always want to adopt this particular strategy of creating a data access layer because it depends on the complexity of the database solution. For smaller database solutions, the investment in time and effort probably isn’t worth your while. However, this strategy has a number of benefits for more complex database solutions:
It is easier to implement and manage your database security because it involves only one layer of management.
It is easier to predict the resultant effective security because you don’t have to worry about what’s going on at the base table layer.
An additional benefit is that it allows you to modify the underlying table structures with a minimal impact on the higher-level objects. There are probably more, but my editor, Maureen, is “harassing” me to finish this chapter.
The Benefits of a Data Access Layer

In 2004–2005 I was involved in a major project for a major financial institution in Sydney; it involved refactoring a very large database (VLDB) that was being used for trading energy in the Australian market. The database contained more than 300 tables, let alone the various views and stored procedures above those tables. This particular database solution was a perfect candidate for a data access layer because such a layer effectively creates an interface, or a layer of abstraction, between the procedural objects (such as stored procedures and user-defined functions) and the data on which those objects depend. Creating a data layer through views allowed the underlying table structure to be modified (or partitioning to be implemented) while minimizing the recoding of the high-level objects. So, take the time in your initial database design to implement a data access layer, because you will save a lot more time over the life of the database solution.
Understanding Object Ownership Chaining

Having been a Microsoft Certified Trainer for more than 10 years, I remember how I used to teach about ownership chains back in the SQL Server 6 days, but for some reason by SQL Server 7 this important concept had sort of disappeared. The importance of understanding it has reemerged in SQL Server 2005. When multiple database objects depend on each other, they form a dependency chain, although it's now called an ownership chain. Figure 4.13 shows a typical example of a database with multiple objects depending on another object. (It looks like Natalie has "a bit of a mess" on her hands.) It is critical to understand how the database engine checks for permissions when a user calls a higher-level object so you can model and manage your security correctly.

Usually when a user accesses a database object, SQL Server checks whether that user has been given the appropriate permissions. This means you have to grant the appropriate object permission on every object, which can be a lot of work in complex database solutions with hundreds of database objects. To make your life easier (yes, Microsoft does think about making your life easier), permission checking works differently in a dependency chain, such as the one illustrated in Figure 4.13. When a user tries to access a database object, SQL Server checks to see whether the user has the appropriate permission granted; however, it will not check permissions on subsequent objects as long as the object owner does not change. The catch is the object owner. (Perhaps that's why we now refer to them as ownership chains.) So whenever the object owner changes between the object being accessed and the referenced object, SQL Server will again check permissions. The benefit is immediately obvious—you have to manage permissions only on the highest-level objects, and you don't have to worry about the underlying objects on which they depend.
FIGURE 4.13 Object ownership chain

[The figure shows a single SQL Server instance containing two databases. In Doonside_DB, Alex queries the [viw_ForSale] view (owned by Natalie), which references [viw_Wrecks] (owned by Natalie), which references [viw_Monaros] (owned by Natalie), which references [viw_Kingswoods] (owned by Marc), which finally references the [tbl_Holdens] table (owned by Larissa). [viw_ForSale] also references the [viw_Inheritance] view in Kingsgrove_DB (owned by Natalie), which in turn references the [tbl_Chairs] table (also owned by Natalie).]
I’ve always taught that there are also performance benefits because there is less permission checking and the corresponding database catalog is smaller. But realistically I don’t think this applies to 99.999 percent of cases.
Ownership chaining has the following restrictions:

It applies only to the DML statements: SELECT, INSERT, UPDATE, and DELETE.

All bets are off with dynamic T-SQL within objects.
In earlier versions of SQL Server, the only way to avoid SQL Server having to check permissions at every level of an object dependency chain was to have an unbroken ownership chain. SQL Server 2005 has some further techniques that you will examine later in this chapter. You can configure SQL Server 2005 to allow ownership chaining between specific databases or across all databases inside a single instance of SQL Server. To enable a specific database to be the source or target of a cross-database ownership chain, execute the following statement:

ALTER DATABASE ... SET DB_CHAINING ON
To enable ownership chaining across all databases for the SQL Server instance, execute the following statement (followed by RECONFIGURE so the setting takes effect):

sp_configure 'cross db ownership chaining', 1
RECONFIGURE
Cross-database ownership chaining is disabled by default and should not be enabled unless it is specifically required because it represents a potential security risk.
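To check which databases currently allow cross-database chaining, you can query the catalog (a sketch):

-- 1 in is_db_chaining_on means the database can participate
-- in cross-database ownership chains
SELECT name, is_db_chaining_on
FROM   sys.databases ;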
Seeing Object Ownership Chaining in Action

You saw in Figure 4.13 an example of a typical ownership chain. Let's assume that Natalie has given Alex SELECT permission on the [viw_ForSale] view. Likewise, Larissa has given him SELECT permission on her [tbl_Holdens] table. When Alex attempts to retrieve data from the [viw_ForSale] view, SQL Server will check to see whether he has SELECT permission granted to him on that object. In this case, he does, so there is no problem there. Since the [viw_Wrecks] and [viw_Monaros] views are both owned by the same owner (Natalie), SQL Server will not bother checking permissions on those objects. (It has effectively assumed that Natalie "knew what she was doing" when she granted Alex permission on the higher-level object in the dependency chain, saving her from having to grant access to every object in the chain.) However, when Alex's process tries to access the [viw_Kingswoods] view owned by Marc, SQL Server will again check to see whether Alex has been granted access to the [viw_Kingswoods] view because the object owner has changed. In this example, Marc has not given Alex explicit permissions, so access through the view will fail. Note that Alex will, however, have access to the table directly because Larissa has given him access to the underlying table! The other point to note is that the [viw_ForSale] view in the Doonside_DB database references a view in the Kingsgrove_DB database called [viw_Inheritance]. Natalie owns both views, so you have an unbroken ownership chain. In this case, however, SQL Server will skip the permission check only if cross-database chaining has been enabled. Simple!
Defining Schemas to Manage Object Ownership

Welcome to the future! You really need to get your head around the concept of schemas in the brave new world of SQL Server 2005 and beyond! Be careful of making assumptions based on earlier versions of SQL Server, as seductive as that may be. A schema in SQL Server 2005 is basically a namespace for database objects. The fully qualified name of an object is server.database.schema.object. This looks deceptively similar to what you might be familiar with from earlier versions of SQL Server. However, as I said, do not assume it is the same as the dbo object owner that you might already know.
The dbo schema exists in every database.
The whole idea of schemas is to separate users from object ownership through this layer of abstraction. The primary benefit is being able to organize your database objects into namespaces, which is becoming more of a requirement than a nicety, given the complexity of modern database solutions. However, you get a number of additional benefits:
Managing database users is easier. Because of user/schema separation, you can drop database users more easily; you don't need to change object ownership or recode all dependent objects.
You can set different default schemas for multiple users to better reflect your application requirements.
Managing permissions is easier because you can also control permissions at the schema scope (see the sketch after this list).
Permissions on schemas and schema-contained objects are more granular than in earlier versions of SQL Server.
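For example, a single schema-scope statement covers every object in the schema; this sketch uses the AdventureWorks Sales schema and a hypothetical user:

-- One statement covers all current and future objects in the schema
GRANT SELECT ON SCHEMA::Sales TO Natalie ;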
As an example, the AdventureWorks database, shown in Figure 4.14, has a number of schemas defined. When working with schemas in SQL Server 2005, you will generally have to perform the following steps:
Determine or model the schemas required to reflect your organizational and/or application requirements.
Create schemas as determined earlier.
Functionally map your schemas to your database principals.
Set the default schema for the database principals as determined earlier.
Again, you can see the importance of initially modeling your security requirements through the ISP, as discussed earlier.
FIGURE 4.14 AdventureWorks schemas

[The figure shows the list of schemas defined in the AdventureWorks database.]
Using Schemas in "Modern Databases"

Back to the energy trading system. As I indicated earlier, the database consisted of more than 300 tables, with more tables being added periodically as the market's requirements evolved after the initial design. Being a market system, existing tables could not be changed or deleted because there were complex dependencies on them and because they fundamentally contained critical historical information. Additionally, there was the usual plethora of views and stored procedures (as well as the occasional user-defined function written by Dave and Troy) sitting on top of those tables, let alone more than 30,000 Excel spreadsheets used by the various analysts and traders. In other words, this was your typical, modern database solution. The problem was that with so many database objects, the database solution was becoming increasingly difficult to use and manage. There was a heavy reliance on naming conventions for the various views, and especially tables, because of the system requirements explained earlier. A better solution, which is now available with SQL Server 2005, is to implement schemas to simplify the work of the DBAs, developers, and especially the users (through default schemas).
The syntax for creating schemas is as follows:

CREATE SCHEMA <schema_name_clause> [ <schema_element> [ , ...n ] ]

<schema_name_clause> ::=
    { schema_name
    | AUTHORIZATION owner_name
    | schema_name AUTHORIZATION owner_name }

<schema_element> ::=
    { table_definition | view_definition
    | grant_statement | revoke_statement | deny_statement }
The following example shows a schema and table being created, with the appropriate permissions being set, in a single statement:

USE AdventureWorks ;
GO
CREATE SCHEMA [Doonside] AUTHORIZATION Natalie
    CREATE TABLE [Assets]
    (
        CarWreck        VARCHAR(50),
        Model           VARCHAR(10),
        DateBought      SMALLDATETIME,
        ChassisNumber   INT,
        Price           SMALLMONEY
    )
    GRANT SELECT ON SCHEMA::Doonside TO Larissa
    GRANT SELECT ON SCHEMA::Doonside TO Marc
    DENY SELECT ON SCHEMA::Doonside TO Alex
GO
Using Default Schemas

Every object in SQL Server 2005 has a fully qualified four-part name of server.database.schema.object. Within a database, you can get away with using schema.object. However, in most cases (and you should know this because you do it every day), you do not use this two-part name either.
So, how does SQL Server 2005 resolve unqualified object names? You could, for example, have two tables called [Sales].[Customers] and [Marketing].[Customers] in the database; in that case, what does SQL Server 2005 do when you just use [Customers] in your T-SQL code? SQL Server uses the following process to resolve unqualified object names:

1. If the user has a default schema, use that schema to locate the object.

2. If the object is not found in this default schema (or no default schema was defined), use the dbo schema.

3. If that fails, return an error.
You can assign a default schema to a user through the CREATE USER or ALTER USER T-SQL statement. The syntax for the ALTER USER statement is as follows:

ALTER USER user_name WITH <set_item> [ ,...n ]

<set_item> ::=
    NAME = new_user_name
    | DEFAULT_SCHEMA = schema_name
The following example shows a database user's default schema being changed:

ALTER USER Alex WITH DEFAULT_SCHEMA = Doonside;
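Tying this back to the resolution rules discussed previously, here's a sketch that reuses the hypothetical Doonside schema and the user Larissa from the earlier example:

ALTER USER Larissa WITH DEFAULT_SCHEMA = Doonside;
GO
-- Executed by Larissa, this unqualified name now resolves to
-- [Doonside].[Assets]; if no such table existed there, SQL Server
-- would try [dbo].[Assets] next and only then raise an error
EXECUTE AS USER = 'Larissa' ;
SELECT * FROM [Assets] ;
REVERT ;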
Designing an Execution Context Strategy

The execution context is the controlled environment within which a user request is executed. It is determined by a pair of security tokens that identify the user connected to the SQL Server instance. These two tokens are as follows:

Login token  A login token applies to the SQL Server instance and contains the primary and secondary identities used to check any server-level and database-level permissions.

User token  A user token is applicable to a database only. Like login tokens, user tokens contain the primary and secondary identities; however, they are used only to check database-level permissions.

The security tokens contain the following:
A primary identifier represented by a server or database principal
One or more principals as secondary identities
Zero or more authenticators
The privileges and permissions of the primary and secondary identities
Therefore, a user logging into a SQL Server instance will have one login token and a number of user tokens, one for every database to which they have access. Simple!
To view information about the login token created by SQL Server when you log in, execute the following:

SELECT principal_id, sid, name, type, usage FROM sys.login_token

For the user token, execute the following:

SELECT principal_id, sid, name, type, usage FROM sys.user_token
Usually when a user requests to execute a module in a database, SQL Server will use the user’s token discussed earlier to determine whether they have adequate permissions to access the database objects required. However, in certain cases, as a database developer or even the DBA, you’d like to change the execution context. Reasons for doing this include the following:
Overcoming object ownership problems
Testing your security model by impersonating users
Giving elevated privileges as required in a strict, controlled fashion to normal database users
Building your own custom permission set
It’s all about flexibility and security, of course! SQL Server 2005 allows you to change the execution context for connected users in a number of ways, which is what you will examine next. Don’t forget ownership chaining and the associated issues discussed earlier. Regardless of the execution context specified, the following always apply:
When a module is executed, SQL Server initially checks to see that the caller has EXECUTE permission on the module.
Ownership chaining rules continue to apply, so permissions on underlying objects aren’t checked if the object owner is the same across the dependency chain.
When a module that runs in an execution context other than the caller is executed, referred to as impersonation, both the user’s permission to initially execute the module and the additional permissions based on the EXECUTE AS clause are checked.
You can change the execution context in two ways, explicit and implicit; you will examine them next.
Switching the Execution Context Explicitly

SQL Server 2005 offers new options via the EXECUTE statement to explicitly change the execution context. The syntax for explicitly changing the execution context is as follows:

{ EXEC | EXECUTE } AS <context_specification> [;]

<context_specification> ::=
    { LOGIN | USER } = 'name'
        [ WITH { NO REVERT | COOKIE INTO @varbinary_variable } ]
    | CALLER
When changing the execution context, the impersonation will remain in effect until one of the following events occurs:
The session is dropped.
The execution context is reverted.
The execution context is switched.
In earlier versions of SQL Server, you could use the SETUSER statement to change the execution context. Using the EXECUTE AS statement options has a number of advantages over using SETUSER:
Server or database principals other than sa or dbo can call EXECUTE AS.
The scope of impersonation is explicitly defined in the statement.
My favorite—you can build an “execution context stack” by calling the EXECUTE AS statement multiple times across multiple principals. You can use the REVERT statement to go back through the stack.
You should use the EXECUTE AS statement rather than the SETUSER statement, which is being deprecated. SETUSER is included in Microsoft SQL Server 2005 for backward compatibility only and may not be supported in future releases.
Switching the Server-Level Context Explicitly

To explicitly switch the execution context at the server level, you need to use the EXECUTE AS LOGIN statement. The login name must exist, and the statement caller must have IMPERSONATE permission on the login name. When switching context at the server level, the following considerations apply:

The SQL Server instance validates the login token for the login name. The token's scope is the entire SQL Server instance.

All server-level permissions and role memberships associated with the login name are honored.
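As a sketch (using the hypothetical logins from the exercises later in this chapter), granting the required IMPERSONATE permission and then switching context looks like this:

-- Allow the login Larissa to impersonate the login Marc
GRANT IMPERSONATE ON LOGIN::Marc TO Larissa ;
GO
-- Executed by Larissa:
EXECUTE AS LOGIN = 'Marc' ;
SELECT SUSER_NAME() ;  -- returns Marc
REVERT ;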
Switching the Database-Level Context Explicitly

To explicitly switch the execution context at the database level, you need to use the EXECUTE AS USER statement. The username must exist, and the statement caller must have IMPERSONATE permission on the username. When switching context at the database level, the following considerations apply:

The username that gets validated by the SQL Server instance is valid only in the current database.
Role membership and database-level permissions in the current database associated with the username are honored. However, associated server-level permissions and role memberships are not honored.
The scope of impersonation from within a database can be changed to another database or SQL Server instance. You must meet a number of conditions, however:
The TRUSTWORTHY database property has to be turned on for the source database.
The authenticator must be trusted in the target scope. (There are a number of options as to how the authenticator can be configured.)
For more information on how to extend the scope beyond the database, see “Extending Database Impersonation by Using EXECUTE AS” in SQL Server 2005 Books Online.
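Turning the TRUSTWORTHY property on for a source database is a one-line change; here SalesDB is a hypothetical database name:

ALTER DATABASE SalesDB SET TRUSTWORTHY ON ;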
In Exercise 4.2, you'll explicitly change the execution context.

EXERCISE 4.2

Explicitly Changing the Execution Context

You first need to create some new SQL Server logins and database user accounts for the exercise.

1. Use the Windows Start menu, and select All Programs > Microsoft SQL Server 2005 > SQL Server Management Studio.

2. Connect to your SQL Server 2005 environment.

3. Execute the following T-SQL script:

USE AdventureWorks ;
GO
-- Create two temporary principals
CREATE LOGIN Larissa WITH PASSWORD = 'daehehtnidek'
CREATE LOGIN Marc WITH PASSWORD = 'cufsinoiram'
GO
CREATE USER Larissa FOR LOGIN Larissa
CREATE USER Marc FOR LOGIN Marc
GO
4. Now you need to give the IMPERSONATE permission on Marc to Larissa so she can set the execution context to Marc. Execute the following T-SQL script:

GRANT IMPERSONATE ON USER::Marc TO Larissa
GO

The following T-SQL script demonstrates how you can change the execution context via the EXECUTE AS statement.

5. Execute each of the following T-SQL batches one at a time:

--Display current execution context.
SELECT SUSER_NAME(), USER_NAME()
GO

-- Set the execution context to Larissa.
EXECUTE AS LOGIN = 'Larissa';
--Verify the execution context is now Larissa.
SELECT SUSER_NAME(), USER_NAME()
GO

--Larissa sets the execution context to Marc.
EXECUTE AS USER = 'Marc'
--Display current execution context.
SELECT SUSER_NAME(), USER_NAME()
GO

The execution context stack now has three principals, namely, you (the original caller), Larissa, and Marc. The following T-SQL script demonstrates how you can reset the execution context to the previous context via the REVERT statement.
6. Execute the following T-SQL script:

REVERT
-- Display current execution context.
SELECT SUSER_NAME(), USER_NAME()
GO
REVERT
-- Display current execution context.
SELECT SUSER_NAME(), USER_NAME()
GO

7. Execute the following T-SQL script, which will clean up your SQL Server environment:

--Remove temporary principals.
DROP LOGIN Larissa;
DROP LOGIN Marc;
DROP USER Larissa;
DROP USER Marc;
GO
Switching the Execution Context Implicitly

With SQL Server 2005 you can now control the execution context of database objects such as stored procedures, triggers, and user-defined functions by specifying the execution context in the DDL that makes up the object's code. You saw earlier that all objects in a dependency chain had to have the same owner to avoid the security management issue of having to control security for each object whose owner had changed. Now, however, by being able to implicitly control the execution context through the EXECUTE AS clause, it is much easier to manage security through the dependency chain.
Permissions now need to be granted only on the highest-level objects in the dependency chain, without having to grant explicit permissions on the referenced objects, because you can change each object's execution context as you see fit. The original caller is no longer relevant!
At the server level, you can take advantage of implicit execution context switching for the following objects:

DDL triggers

At the database level, the following objects apply:

DML triggers

Service Broker queues

Stored procedures

User-defined functions

The syntax used in your DDL code is as follows:

{ EXEC | EXECUTE } AS { CALLER | SELF | OWNER | 'user_name' }
Table 4.6 further explains the options included in the syntax.

TABLE 4.6 EXECUTE AS Options

CALLER: Execute by using the identity of the calling user. (This is the default setting.)

OWNER: Execute by using the identity of the owner of the module. This value changes if another user takes ownership of the module.

SELF: Execute by using the identity of the user who is creating or altering the module. The identity represented by SELF does not change if ownership of the module changes.

user_name: Execute by using the identity of the specified user.
EXECUTE AS CALLER

You should use EXECUTE AS CALLER in the following scenarios:

You want the statements to execute as the calling user.

You want to base permission checks for the statements against the calling user and are relying only on ownership chaining to bypass permission checks on the underlying objects.
You want the statements to execute as the calling user.
You want to base permission checks for the statements against the calling user and are relying only on ownership chaining to bypass permission checks on the underlying objects.
Don’t forget, as discussed earlier, that ownership chaining applies only to DML statements.
Your database application does not require hiding underlying referenced objects from the user.
You can rely on ownership chaining to adequately hide the schema because you reference objects only of the same ownership.
You want to preserve SQL Server 2000 behavior.
EXECUTE AS OWNER

You should use EXECUTE AS OWNER only if you want to be able to change the owner of the module without having to modify the module itself; the execution context will automatically map to the current owner at runtime.
EXECUTE AS SELF

You should use EXECUTE AS SELF in the following scenarios:
You want a shortcut to specify yourself as the user under whose context the statements should run.
Your database application creates modules for users calling into it, and you want those modules to be created by using those same users as the execution context.
In this scenario, you do not know what the calling username is at design time.
EXECUTE AS user_name

Finally, you should use EXECUTE AS user_name in the following scenarios:
You want the statements to execute in the context of a specified user.
You cannot rely on ownership chaining to hide the underlying schema, and you want to avoid having to grant permissions on those referenced objects.
You want to create a custom permission set.
For more information about how to create custom permission sets, see “Using EXECUTE AS to Create Custom Permission Sets” in SQL Server 2005 Books Online.
The following example shows a simple stored procedure that demonstrates how you can utilize the EXECUTE AS clause:

CREATE PROCEDURE [spr_ExecuteAs_Demo]
WITH EXECUTE AS 'Olya'
AS
-- Shows execution context set to Olya
SELECT USER_NAME()
EXECUTE AS CALLER
-- Shows execution context of calling principal
SELECT USER_NAME()
REVERT
-- Shows execution context set back to Olya
SELECT USER_NAME()
GO
Considering the CLR Execution Context

With the addition of the common language runtime (CLR) in SQL Server 2005, which allows developers to create assemblies that are utilized within server-side modules, you have additional security aspects, primarily the execution context, that you need to consider. It's important to realize the potential havoc these assemblies could cause when accessing the external world. The robustness of SQL Server itself could be compromised. That is why Microsoft has not made the entire namespace of the .NET Framework available in the SQL Server CLR. But you still need to administer it carefully. You have already seen the SQL Server Surface Area Configuration tool, which allows you to switch the CLR integration feature on or off at the server side. However, this by itself gives you little protection, as you have seen in the past with the xp_cmdshell extended stored procedure, which was used to execute routines in the external environment. The main problem with it was that it offered an all-or-nothing approach. Either you allowed people to use it and trusted that they would not execute inappropriate or "dangerous" code, or you didn't allow them to use it at all. I won't even go into the issues of memory leakages and other operating system problems….

With the CLR integration, you have finer granularity over the execution context of CLR assemblies. So on top of being able to control the user's execution context, you can specify the scope of access to resources for any given assembly that has been developed in a CLR-conformant language. Table 4.7 shows the permission sets, or options, you have when creating CLR assemblies.

TABLE 4.7 CLR Assembly Permission Sets

SAFE: Only internal computation and local data access are allowed. SAFE doesn't allow access to any resources outside the SQL Server environment. (This is the default setting.)

EXTERNAL_ACCESS: Same as SAFE but additionally allows access to external operating system resources such as the network, environment variables, files, or the Registry. By default, EXTERNAL_ACCESS assemblies execute under the SQL Server service account.

UNSAFE: Allows access to any resources, even those that may compromise the robustness of SQL Server. UNSAFE allows unmanaged code to be executed within the assembly.
Remember the concept of “least privilege.” Try to give any CLR assembly the minimal permissions required. The SAFE option is the recommended permission setting for assemblies that perform computational and data-related tasks without accessing resources external to SQL Server. EXTERNAL_ACCESS is recommended for assemblies where access to external resources is required because it provides various reliability and robustness protections that are not in UNSAFE assemblies.
You should try to avoid using UNSAFE because it allows the assembly code to perform illegal operations in the SQL Server process space. This can potentially compromise the robustness and scalability of SQL Server.
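The permission set is assigned when the assembly is cataloged; in this sketch, the assembly name and file path are hypothetical:

-- Catalog an assembly with the least-privileged permission set
CREATE ASSEMBLY MyAggregates
FROM 'C:\Assemblies\MyAggregates.dll'
WITH PERMISSION_SET = SAFE ;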
Exercise 4.3 highlights the various topics discussed in this chapter.

EXERCISE 4.3

Performing a Multitude of Security Tasks

You first want to create an authenticating certificate for a principal.

1. Use the Windows Start menu, and select All Programs > Microsoft SQL Server 2005 > SQL Server Management Studio.

2. Connect to your SQL Server 2005 environment.

3. Execute the following T-SQL script:

USE master ;
CREATE CERTIFICATE AUserCertificate
ENCRYPTION BY PASSWORD = 'EncryptionPa$$w0rd'
WITH SUBJECT = 'A Certificate in master database created for A User',
EXPIRY_DATE = '06/06/2666';
GO

4. Now check to see that it was created properly! Execute the following T-SQL script:

SELECT * FROM sys.certificates WHERE name = 'AUserCertificate'

Now you can create and authenticate the principal by using the certificate you created earlier. You will also create an unauthenticated principal by way of comparison. (Note that you will have to remove the MUST_CHANGE clause if you are running the exercise on an unsupported operating system. You will get an error message indicating such.)
5. Execute the following T-SQL script:

-- Create authenticated principal
CREATE LOGIN ACertifiedUser FROM CERTIFICATE AUserCertificate
GO

-- Create an unauthenticated principal
CREATE LOGIN ANormalUser WITH PASSWORD = 'Temporary$1_User_Password'
MUST_CHANGE, -- Remove clause if unsupported O/S
CHECK_EXPIRATION = ON,
CHECK_POLICY = ON;
GO

Now you could create a schema to group resources, but in this case you'll take advantage of an existing one. So let's create two user accounts for the logins created earlier. Notice that you can specify a default schema only for database users who don't use certificates.
6. Execute the following T-SQL script:

USE AdventureWorks ;
GO

CREATE USER ACertifiedUser FOR LOGIN ACertifiedUser
--WITH DEFAULT_SCHEMA = Production;
GO

CREATE USER ANormalUser FOR LOGIN ANormalUser
WITH DEFAULT_SCHEMA = Production;
GO

GRANT SELECT ON SCHEMA::Production TO ANormalUser
GO
7. Now let's take advantage of database roles and grant access to the schema as required. Execute the following T-SQL script:

-- Create user-defined database role
CREATE ROLE Production_Owner AUTHORIZATION [dbo]
GO

-- Grant access rights to schema
GRANT ALTER, CONTROL, DELETE, EXECUTE, INSERT, REFERENCES, SELECT,
TAKE OWNERSHIP, UPDATE, VIEW DEFINITION
ON SCHEMA::Production TO Production_Owner
GO

-- Add existing user to role
EXEC sp_addrolemember N'Production_Owner', N'ACertifiedUser'
GO
8. And finally you'll see how the "certified user" cannot be impersonated, unlike the "normal user." Execute the following T-SQL script:

EXECUTE AS USER = 'ACertifiedUser'
SELECT * FROM Product
REVERT

You should get the following error message, whereas the normal user can be impersonated:

Msg 15517, Level 16, State 1, Line 1
Cannot execute as the database principal because the principal "ACertifiedUser" does not exist, this type of principal cannot be impersonated, or you do not have permission.
9. Execute the following T-SQL script:

EXECUTE AS USER = 'ANormalUser'
SELECT * FROM Product
REVERT
10. Execute the following T-SQL script, which will clean up your SQL Server environment:

USE AdventureWorks ;
GO
EXEC sp_droprolemember N'Production_Owner', N'ACertifiedUser'
DROP ROLE Production_Owner
DROP USER ACertifiedUser
DROP USER ANormalUser
GO
USE master ;
DROP LOGIN ACertifiedUser
DROP LOGIN ANormalUser
DROP CERTIFICATE AUserCertificate
GO
Summary

If you've read this chapter in its entirety and understood it, congratulations! You should now have a great grounding in designing a secure SQL Server solution. I certainly did not understand half of what I wrote! Just kidding….

The chapter started with an examination of SQL Server 2005's security architecture and the need to model your requirements before implementation. I showed the importance of auditing and the various techniques you can employ. Your options are highly dependent on what kind of events you want to audit.

You then took a comprehensive tour of the various principals that you can use to manage your administrative and data access requirements. I showed the various permissions applicable to different securables and covered the pros and cons of different techniques to manage the relationship between the principals and securables.

Finally, you looked at the issues that come about because of the complexity of "modern" database solutions, such as having to control the execution context, dealing with ownership chaining, and using schemas to simplify and manage inherent complexity.
Exam Essentials

Understand the types of principals and securables.  Understand the security model, including what authenticated identities exist and what objects can be secured.

Understand the types of permissions.  You need to be familiar with object and statement permissions and where to use the GRANT, DENY, and REVOKE statements.

Know how to audit your SQL Server solution.  Different techniques are available depending on what you want to audit. You need to know which technique best suits your requirements and the overhead it will have on the system.
Know how to work with ownership chains.  It is critical to understand how SQL Server checks for permissions when objects have different owners, and it's important to know how to use ownership chains effectively.

Understand how to control the execution context.  SQL Server 2005 has a number of new ways to control the execution context. Understand where you would use these techniques explicitly versus implicitly.
Review Questions

1. Your database design has a small number of critical tables that you want to ensure are not dropped, no matter what. All developers are members of the db_ddladmin fixed database role. What mechanism should you use?

A. Use DML triggers.
B. Use the DENY statement.
C. Use DDL triggers.
D. Use event notifications.

2. You are designing your SQL Server database solution and want to ensure your Windows operating system is secure. Which tool(s) should you use? (Choose all that apply.)

A. MBSA
B. SSAC
C. SQL Server Setup
D. SAC.EXE
E. MOM
F. WMI

3. You need to implement a CLR module in your database solution. The CLR assembly needs to perform custom aggregates and other computational operations. What permissions should you give the CLR assembly?

A. SAFE
B. EXTERNAL_ACCESS
C. UNSAFE
D. PICK_ME_AND_FAIL_THE_EXAM

4. You are implementing a database solution that involves the SalesDB and MarketingDB databases being used by two separate sets of users. Both databases have sensitive information, so you are concerned about security. Users of SalesDB need access to some of the data in MarketingDB. You have decided to implicitly switch the execution context in the stored procedures on SalesDB that will be used to retrieve data, some of which will come from MarketingDB. What statement should you run?

A. ALTER DATABASE SalesDB SET DB_CHAINING ON
B. ALTER DATABASE MarketingDB SET TRUSTWORTHY ON
C. ALTER DATABASE MarketingDB SET DB_CHAINING ON
D. ALTER DATABASE SalesDB SET TRUSTWORTHY ON
5. You need to implement a CLR module in your database solution. The CLR assembly needs to access the Windows Registry and perform other computational operations. What permissions should you give the CLR assembly?

A. SAFE
B. EXTERNAL_ACCESS
C. UNSAFE
D. PREPOSTEROUSLY_RIDICULOUS
6. You have written and tested your CLR module in a development environment; however, when you deploy it on your production server, it does not work. What command should you run to get the CLR module to work correctly?

A. ALTER DATABASE…SET DB_CHAINING ON
B. ALTER DATABASE…SET TRUSTWORTHY ON
C. EXEC sp_configure 'clr enabled', 1
D. EXEC sp_configure 'external access', ON

7. You are designing the execution context strategy for your database solution. Your application does not require hiding the underlying referenced database objects. All objects in the database are owned by dbo. What execution context will be sufficient?

A. EXECUTE AS CALLER
B. EXECUTE AS SELF
C. EXECUTE AS OWNER
D. EXECUTE AS user_name

8. You are designing a new database solution for a SQL Server instance where a database already exists. You are designing some stored procedures that need to access a table in the other database and intend to use the EXECUTE AS clause in your stored procedure because the two databases have different security principals. What command do you need to run to ensure that the stored procedure works correctly?

A. ALTER DATABASE…SET DB_CHAINING ON
B. ALTER DATABASE…SET TRUSTWORTHY ON
C. EXEC sp_configure 'clr', ON
D. EXEC sp_configure 'external access', ON

9. You are designing your SQL Server 2005 database solution and want to ensure your Windows operating system is secure. Which tools should you use? (Choose all that apply.)

A. MBSA
B. SSAC
C. SQL Server Setup
D. SAC.EXE
E. MOM
F. WMI
10. You have created a database user account based on a Windows group for your database solution. Members of this Windows group will need to modify data but not read data. You need to give the least privileges in the least steps to achieve this task. What fixed database roles do you need to use? (Choose all that apply.)

A. db_dbowner
B. db_ddladmin
C. db_datareader
D. db_datawriter
E. db_denydatareader
F. db_denydatawriter

11. As part of your database solution, you have development and production SQL Server 2005 instances. To ensure your testing is consistent, you want to ensure that the security setting for the SQL Server environment is consistent between your production and development SQL Server 2005 instances. Which tools should you use? (Choose all that apply.)

A. MBSA
B. SSAC
C. SQL Server Setup
D. SAC.EXE
E. MOM
F. WMI

12. You have created a database user account based on a Windows group for your database solution. Members of this Windows group will need to read and modify data. You need to give the least privileges in the least steps to achieve this task. What fixed database roles do you need to use? (Choose all that apply.)

A. db_dbowner
B. db_ddladmin
C. db_datareader
D. db_datawriter
E. db_denydatareader
F. db_denydatawriter

13. You are in the process of designing a complex database solution that will contain many entities. You are concerned about the complexity of the system and would like to make it easier for users to work with the database solution. What should you do?

A. Enforce a strict naming convention for tables.
B. Implement schemas.
C. Allow access only through views.
D. Ensure all objects are owned by dbo.
14. An existing database user called Veronica needs to be able to create defaults, functions, procedures, rules, tables, and views. How can you achieve this with the least administrative effort? (Choose all that apply.)

A. Give the appropriate statement permissions to Veronica.
B. Make Veronica a member of the db_ddladmin fixed database role.
C. Create a user-defined database role.
D. Make Veronica a member of the db_securityadmin fixed database role.
E. Give the appropriate statement permissions to the fixed database role.

15. You need to create a login account for an auditor from the Divided Nation during her analysis of your civil registry system. She will be here for 69 days before returning to Australia. She will return later in the year, so you do not want to delete the account. You do want to ensure that her password expires after the 69 days. The Windows 2003 policies are configured correctly. What statement do you run?

A. CREATE LOGIN 'IrisFrierichs' WITH PASSWORD = '!tsaebYXESdeKiwU', CHECK_EXPIRATION = OFF, CHECK_POLICY = OFF
B. CREATE LOGIN 'IrisFrierichs' WITH PASSWORD = '!tsaebYXESdeKiwU', CHECK_EXPIRATION = ON, CHECK_POLICY = OFF
C. CREATE LOGIN 'IrisFrierichs' WITH PASSWORD = '!tsaebYXESdeKiwU', CHECK_EXPIRATION = OFF, CHECK_POLICY = OFF
D. CREATE LOGIN 'IrisFrierichs' WITH PASSWORD = '!tsaebYXESdeKiwU ', CHECK_EXPIRATION = ON, CHECK_POLICY = ON

16. You are developing a database solution for a German advertising agency for the World Cup in 2006. The boss Steffi has explained that the database solution will have client applications running around Germany at various booths promoting Austrian lederhosen, Bavarian beer, and the red-light district of Berlin. These booths will use an embedded client application with a touch screen so users will not be capable of logging into the application. The database will be hosted at their offices in Berlin. Various companies in Germany will also be accessing their database to update their specific advertisements and gather confidential statistics. How should you implement security for the kiosk users?

A. Use fixed server roles.
B. Use application roles.
C. Use the public account.
D. Use the guest account.

17. You are designing the execution context strategy for your database solution. Various developers and information workers will have the capability of creating their own objects, so there is no guarantee of consistent object owners. You want to avoid granting permissions on all these potential referenced objects. You need to tell these "muppets" how to create modules to avoid security problems. What execution context will be sufficient?

A. EXECUTE AS CALLER
B. EXECUTE AS SELF
C. EXECUTE AS OWNER
D. EXECUTE AS user_name
18. As part of your database solution requirements, you are required to set up auditing of successful changes to the customers' credit rating. What is the best way to achieve this?

A. Configure C2-level auditing.
B. Use Profiler.
C. Implement DML triggers.
D. Implement DDL triggers.

19. An existing database user called Veronica needs to be able to only create views. How can you achieve this with the least administrative effort? (Choose all that apply.)

A. Give the appropriate statement permissions to Veronica.
B. Make Veronica a member of the db_ddladmin fixed database role.
C. Create a user-defined database role.
D. Make Veronica a member of the db_securityadmin fixed database role.
E. Give the appropriate statement permissions to the fixed database role.

20. An existing database user, Olya Kats, has moved internally within your organization from the Accounting department to the Sales department. She currently has read access to the [Salaries] table. You need to ensure that she cannot read the [Salaries] table. What statement do you execute?

A. GRANT SELECT ON [Salaries] TO Olya
B. DENY SELECT ON [Salaries] TO Olya
C. REVOKE SELECT ON [Salaries] TO Olya
D. EXEC sp_addrolemember 'db_denydatareader', Olya
Answers to Review Questions

1. C. DDL triggers will allow you to prevent everyone from using the DROP TABLE statement on the critical tables. DML triggers are designed only for data manipulation statements. Event notifications are used only to communicate that a particular event has occurred. The DENY statement will prevent all tables from being dropped, which is not what was required.

2. B. You can use the SQL Server Surface Area Configuration tool to reduce the surface area of attack for the SQL Server instance. You use the MBSA tool to secure the operating system environment. You use the SQL Server Setup program to add or remove SQL Server components. You use the SAC.EXE utility to import or export your SSAC settings. You use MOM to manage the operational environment of your enterprise infrastructure. The WMI is an operating system management interface.

3. A. The SAFE permission set is the minimal permission set that will allow calculations within the database. There is no need to give the elevated privileges from the EXTERNAL_ACCESS and UNSAFE permission sets.

4. D. For impersonation to work, one of the criteria is to set the source database to be trustworthy. Option D sets the correct property on the source database where the stored procedures will be located.

5. B. The EXTERNAL_ACCESS permission set is the minimal permission set that will allow access to the Registry. The SAFE permission set will not allow the assembly to access the Windows Registry. You should not use the UNSAFE permission set because it could compromise the SQL Server solution.

6. C. The CLR must be enabled for CLR modules to work. It is turned off on a default install of SQL Server 2005. Option C will enable the CLR.

7. A. Using the EXECUTE AS CALLER context will be sufficient, so you don't need to change context.

8. B. Because of the different principals, you need to rely on impersonation. So, you have correctly implemented the EXECUTE AS clause in the procedure, but because the objects the procedure needs to access are in another database, you also need to turn the Trustworthy database option on.

9. A. You can use the Microsoft Baseline Security Analyzer (MBSA) to secure the operating system environment. You use the SSAC tool to reduce the surface area of attack for the SQL Server instance. You use the SQL Server Setup program to add or remove SQL Server components. You use MOM to manage the operational environment of your enterprise infrastructure. The WMI is an operating system management interface.

10. D. The db_datawriter fixed database role will allow members of the Windows group to modify the data. The db_datareader, db_dbowner, and db_ddladmin roles will give them too many privileges. The db_denydatareader and db_denydatawriter roles will restrict their access. You should not use the db_denydatareader role because it might restrict valid access through other roles or groups. You were not asked to explicitly deny access.
11. D. The SAC.EXE utility is exactly designed for this sort of requirement—importing or exporting your security settings. You use the MBSA tool to secure the operating system environment. You use the SSAC tool to reduce the surface area of attack for the SQL Server instance. You use the SQL Server Setup program to add or remove SQL Server components. You use MOM to manage the operational environment of your enterprise infrastructure. The WMI is an operating system management interface.

12. C, D. The db_datareader and db_datawriter fixed database roles will allow members of the Windows group to read and modify data. The db_dbowner and db_ddladmin roles will give them too many privileges. The db_denydatareader and db_denydatawriter roles will restrict their access.

13. B. Implementing schemas will create a namespace for your users to adopt, making the database solution easier to understand.

14. B. The db_ddladmin fixed database role gives Veronica all the required permissions in one step.

15. D. You need to ensure that CHECK_EXPIRATION is set to ON. For this option to work, you must ensure that CHECK_POLICY is also set to ON.

16. B. Application roles will allow kiosk users to access the database in a highly controlled fashion without the need to log in, and so on. You cannot use the public and guest roles because of the sensitive nature of the data. The fixed server roles have too high a privilege for kiosk users.

17. D. You need to use EXECUTE AS user_name when you cannot rely on ownership chaining.

18. C. DML triggers are the best solution because they cannot be bypassed and can audit your inserts, updates, and deletes. DDL triggers work only for DDL operations. Profiler has to be running for any tracing to be done. C2-level auditing does not audit this type of information.

19. A. Granting the CREATE VIEW statement permission will allow Veronica to create views. The db_ddladmin fixed database role will give Veronica too many privileges.

20. B. The DENY statement will ensure she does not have access to the [Salaries] table. The REVOKE statement is not enough because she might have read access via some other permission or role membership. Adding her to the db_denydatareader fixed database role would deny her read access to the entire database. She already has read access, and that is what you're trying to remove. Duh!
Chapter 5
Designing Database Testing and Code Management Procedures

MICROSOFT EXAM OBJECTIVES COVERED IN THIS CHAPTER:

Design a unit test plan for a database.
Assess which components should be unit tested.
Design tests for query performance.
Design tests for data consistency.
Design tests for application security.
Design tests for system resources utilization.
Design tests to ensure code coverage.
Create a plan for deploying a database.
Select a deployment technique.
Design scripts to deploy the database as part of application setup.
Design database change scripts to apply application patches.
Design scripts to upgrade database data and objects.
Control changes to source code.
Set file permissions.
Set and retrieve version information.
Detect differences between versions.
Encrypt source code.
Mark groups of objects, assign version numbers to them, and devise a method to track changes.
One of the more pervasive terms in the software development industry today is test-driven development (TDD). Simply put, TDD is an agile software development methodology where the developer first writes a test case and then writes just enough code to satisfy that test only. Using TDD, database developers can quickly and easily develop a practical unit test framework for their database objects.

Modern database development involves much more than simply creating database objects and passing them “over the fence” to application developers. N-tier architecture design means that database developers must understand how their objects interact with each layer in the system and how changes at the database layer can affect other layers in unexpected ways. Designing appropriate unit tests is one way to ensure that other layers are not unexpectedly affected. Designing and utilizing appropriate deployment methods as well as maintaining appropriate source and version control strategies are important to modern-day database developers.

This chapter discusses appropriate unit test strategies, offering suggestions on how you can apply TDD techniques effectively. I will also cover how to design, create, and execute appropriate deployment techniques, and I will cover source and version control strategies.
Designing a Unit Test Plan for a Database

Most database developers who focus on Microsoft SQL Server development are used to the rather primitive tools that haven’t really changed much since the early days of SQL Server. With the release of Microsoft SQL Server 2005, the native tools have become only slightly more intelligent. Generally speaking, Transact-SQL (T-SQL) developers are used to writing simple “ad hoc” tests to ensure that the code they are writing is minimally functional, and then they either check the code into source control and hope for the best or possibly work with a user interface (UI) developer to perform some minimal integration testing. Unfortunately, in today’s software development world, it’s becoming harder for developers to focus just on T-SQL code, and more important, as organizations adopt more agile methods of software development, database developers face the need to develop and document unit tests that go beyond the simple ad hoc testing mechanisms that the native tools support.
Database developers can use several methods for building a unit testing framework. One of the most popular is an open source project called TSQLUnit. You can find TSQLUnit as well as several usage examples on SourceForge at http://tsqlunit.sourceforge.net.
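To give a flavor of what such a framework looks like in practice, here is a minimal hedged sketch of a TSQLUnit-style test. It assumes the framework’s documented conventions of naming test procedures with a ut_ prefix and reporting failures through its tsu_failure procedure, and it uses the dbo.AppSearch search procedure developed later in this chapter as the unit under test; consult the TSQLUnit documentation for the exact installation and test-runner details.

CREATE PROCEDURE ut_AppSearch_RejectsNullInput
AS
    DECLARE @errMsg varchar(100), @totalTime int, @rc int
    -- Call the procedure under test with a NULL search string.
    EXEC @rc = dbo.AppSearch NULL, @errMsg OUTPUT, @totalTime OUTPUT
    -- The procedure is expected to reject NULL input with a nonzero return code.
    IF @rc = 0
        EXEC tsu_failure 'AppSearch accepted a NULL search string'
GO

Once a suite of such procedures exists, the framework’s test runner executes every ut_ procedure and reports each pass or failure, which gives T-SQL developers the same red/green feedback loop that TDD frameworks provide in other languages.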
Unit tests are usually thought of as being part of an agile software development methodology but are useful even if you are not employing agile methods. Unit tests are different from system tests, or acceptance tests, in that they test a specific portion of code and don’t rely on the entire system being in place before the test. Unit tests have some specific benefits:

They identify the offending piece of code. Instead of having to track down problems through layers of the system, appropriate unit tests will identify exactly which piece of code is failing.

They force the creation of documentation. Good unit tests will exercise all the input parameters and their boundary conditions. This means all another developer has to do to understand your code is to look at the test.

They force the developer to plan up front. Combined with TDD, which states that unit tests are written before the code is written, unit tests ensure that the developer is considering the requirements of the specific portion of code they are developing.

They save time during application rewrites. When it comes time to rewrite existing code, developing good unit tests first will ensure that the new code you are writing does not suffer from any lost functionality.

When considering unit testing, one of the more difficult tasks a database developer faces is deciding where to insert the unit tests. Taken to the extreme, you should write unit tests for every stored procedure, function, trigger, and constraint in the system. This may not make sense for simple create, read, update, and delete (CRUD) procedures, however, so you should consider the appropriate balance between functional testing and sheer code volume when developing unit tests.
TDD for database developers is still in its infancy. Although other languages have embraced TDD techniques and built TDD frameworks, few database developers have embraced TDD so far.
Assessing Unit Test Components

To properly understand which items need unit tests, the developer must first break down the application into scenarios, which are more commonly known as use cases.
Microsoft Solutions Framework (MSF) uses the term scenario to refer loosely to a use case. I’ll use the MSF terms throughout this chapter to define unit test planning and development. You can find more information about MSF and agile software development at http://msdn.microsoft.com/vstudio/teamsystem/msf/msfagile.
A scenario is basically a specific action that a user will take within an application. Obviously, larger applications will have many more scenarios than smaller applications, but you should identify and document each scenario. Once you have identified the scenarios, you
should choose each one and break it into smaller tasks. Once you have identified the tasks for a given scenario, you should then identify each database object involved in that task. Once you have identified the objects, you should develop specific unit tests that ensure the specific task comes to a successful conclusion. To better explain this process, I’ll show a simple example. Specifically, assume you have an application that allows a user to input a value in a form, and it then queries a database table and returns a recordset of any values found. Using MSF and TDD techniques, you would perform the following steps:

1. Identify each usage scenario. In this case, there is only one scenario:
   a. The user enters a search term, and the application returns a recordset if the search is successful.

2. Break each scenario into tasks. In this case, you have a number of tasks:
   a. The user enters the search term in a form control, and then the user clicks the Search button.
   b. The UI retrieves the text string from the form control and passes it to the business object layer (BOL).
   c. The BOL performs rudimentary validation to ensure that the search string is valid.
   d. The BOL passes the validated search string to the data access layer (DAL).
   e. The DAL calls a database stored procedure and passes the search string as a parameter.
   f. The database stored procedure builds and executes a query using the search string.
   g. The database passes to the DAL the result of the query or an error code if an error was encountered.
   h. The DAL passes the result to the BOL.
   i. The BOL passes the result to the presentation layer and UI.

3. Identify the specific database code interactions in each task that will require unit tests. In this case, you have three specific interactions:
   Where the DAL calls the stored procedure, you should write unit tests to ensure that the procedure can handle all boundary conditions, such as a NULL or empty string, a string with special characters, or a string that might contain SQL Injection code snippets.
   Where the query is executed, you should write unit tests to ensure that error conditions are properly recognized or that queries returning no data are properly identified.
   Where the result is returned to the DAL, you should write unit tests that ensure the returned recordsets are properly formed and match the expected result format. Unit tests should also ensure that error conditions are properly reported.
In the previous example, it is important to note that it is not just database code that requires unit testing. Each layer has its own specific unit test requirements, which, if properly constructed, will ensure that the entire system is tested before it is deemed ready. Although this process may sound cumbersome, in practical use it not only saves time in the long run but also ensures that all the code developed is higher quality than the code developed using ad hoc testing. The previous example shows how you can employ rudimentary unit testing as part of TDD. It is also important to realize that each application should have a defined set of goals that developers
strive toward with respect to development quality attributes. Some common quality attributes are as follows:

Application performance. Each application should have defined performance goals that developers must meet. In the previous example, it might be required that no search should ever exceed five seconds from the time the user requests the search until the results are returned.

Application security. Each application should have a defined security profile. In the previous example, it might be required that the database identify the user who requested the search and allow only those users who had “search rights” to execute searches. It might also be required that no SQL Injection attack should ever be allowed to succeed.

System resource utilization. Each application should have a defined system resources profile that specifies how the host operating system will be utilized. In the previous example, it might be required that the query cannot use more than 10 percent of the available system memory.

Data consistency. Each application should ensure that the data store being used remains intact and that no failed transaction will cause corruption. In the previous example, no scenarios involve writing to the database.
The example used here is simplistic, but it does show how you can employ TDD even in simple cases. Keep in mind that each application will have its own set of unique quality attributes and scenarios that you must consider when developing comprehensive unit tests.
Extending basic unit tests to cover quality attributes ensures that applications not only meet the basic standard of “It works” but also meet the quality attributes defined in the requirements. Having a well-defined unit test framework will also ensure that pieces of the application “long forgotten” remain in compliance with these requirements even as other parts of the application are extended and refactored.
Developing Appropriate Performance Profiles

The term query performance means different things to different people. Using the previous example, it is a requirement that no search take longer than five seconds to return results. On the surface, this may seem like a good requirement, but you need to consider many factors. For example, the example didn’t mention hardware requirements or database size, just the simple requirement that a search can take no longer than five seconds to return query results. Developing a good performance profile helps application designers and developers by pointing out areas where you need more information. Having a good performance profile will also make creating unit tests much easier. Performance profiles have the following benefits:
Performance becomes part of the design, not an afterthought.
You quickly identify performance requirements that need more substance, which leads to less confusion during later stages of integration or acceptance testing.
You avoid any surprises when the application is deployed.
When developing performance profiles, it is important to consider the performance objectives. You can generally break performance objectives down into the following main categories:

Response time. This is the most common objective when people think of performance; however, it is important to note that it is only part of the picture. In the previous example, the five-second query response is a perfect example of a requirement that considers only query response time.

Application throughput. Throughput is generally specified in terms of the number of queries processed in a given time frame. For example, if the previous application were a multiuser application, you might have a requirement that stated the application must be able to process 10 queries per second.

Resource utilization. This is one of the most often overlooked objectives when developing performance profiles. Not only must the application designer/developer consider the resource utilization within their control, but they also must consider how their application affects the entire system in terms of central processing unit (CPU), disk, and memory utilization.

When creating performance objectives, it is important to consider factors such as workload, service-level agreements, response time, and the future growth of the application. Once you have created the appropriate performance profiles, you can develop unit tests that satisfy the performance needs of the application.
Creating effective performance profiles is an art that is beyond the scope of this book. For more information, see the white paper “How do you eat an elephant?” available from Whitespace Solutions at www.whitespacesolutions.com/whitepapers/How_do_you_eat_an_elephant.pdf.
Testing Performance Profiles

Generally speaking, you test performance profiles by using external processes, such as the Windows System Monitor or the SQL Server Profiler. These tools can capture performance information of a given system and can correlate hardware and operating system performance with SQL Server activity. One of the major benefits of the SQL Server Profiler is its capability to capture SQL workloads, as shown in Figure 5.1. These workloads represent the actual work SQL Server is performing at any given time. You can save the workloads and replay them either on the server they were captured from or on a different server. For example, you could capture a SQL Server workload from a production server and then replay it on a test server, making hardware adjustments to test how each adjustment affects the performance of the system. Using the Windows System Monitor, you can capture performance information and then import that information into the SQL Server Profiler to correlate the SQL statements with specific performance indicators, as shown in Figure 5.2.
FIGURE 5.1 Using the SQL Server Profiler to capture a SQL workload
Designing Tests for Query Performance

Once you have defined the performance profiles for the portion of code that needs unit tests, you should write unit tests that stress the limits of the performance guidelines. It is important to note that queries can behave differently based on data stored in the target tables, so good performance target tests will ensure that the data is sufficiently distributed in the test database. When designing unit tests for query performance, you should consider the following issues:
You should measure query response time from the moment the query is submitted until the query has completed execution. If you are creating a unit test for a stored procedure that performs multiple operations, consider adding timing code within the stored procedure that can be reported to the unit test framework.
You should measure query throughput as the number of concurrent queries that can be processed by the system without exceeding the response time requirements. For example, the requirement might be that you support 10 simultaneous queries, so you would write the unit test such that 10 separate instances of the query are executed (10 connections, each calling the tested procedure). While this test is executing, you should measure the query response time from each connection. If any one connection exceeds the response time threshold, all tests for throughput should fail.
FIGURE 5.2 Correlating Windows System Monitor performance with SQL Server Profiler data
You should test both query throughput and query response time with various input parameters to ensure that proper coverage is given to the test. One of the most common mistakes that developers make is to write only “positive” unit tests, which exercise just the easiest path through the scenario.
Using the previous sample application as a guideline, the following is what a stored procedure might look like to satisfy the requirements of the application:

CREATE PROCEDURE dbo.AppSearch
    @inputStr varchar(255),
    @errMsg varchar(100) OUTPUT,
    @totalTime int OUTPUT
AS
DECLARE @startTime datetime, @endTime datetime
IF @inputStr IS NULL
BEGIN
    SET @errMsg = 'NULL search strings are not allowed!'
    RETURN -1
END
IF @inputStr = ''
BEGIN
    SET @errMsg = 'Empty search strings are not allowed!'
    RETURN -1
END
IF CHARINDEX('''', @inputStr) != 0
BEGIN
    SET @errMsg = 'Invalid character (single quote) found in search string!'
    RETURN -1
END
IF @errMsg IS NULL
BEGIN
    SET @startTime = GETDATE()
    SELECT classno, description
    FROM dbo.certifications
    WHERE description LIKE '%' + @inputStr + '%'
    SET @endTime = GETDATE()
    SET @totalTime = DATEDIFF(ms, @startTime, @endTime)
END
RETURN 0
A simple unit test designed to test query performance might look like this:

SET NOCOUNT ON
DECLARE @errmsg varchar(100), @totaltime int

PRINT 'Test 1: ''Microsoft'' as the search string input'
EXEC dbo.AppSearch 'Microsoft', @errMsg OUTPUT, @totalTime OUTPUT
IF @errMsg IS NOT NULL
BEGIN
    PRINT @errMsg
    PRINT 'The test did NOT complete successfully'
    -- In this case, we did not expect an error so if
    -- we get one, the test fails
END
ELSE
BEGIN
    PRINT 'The procedure took: '
        + CAST(@totalTime AS varchar(50)) + 'ms to execute'
    IF @totalTime > 5000
        PRINT 'The query took longer than 5 seconds to execute, test FAILED!'
    ELSE
        PRINT '-- Test completed Successfully'
END
This simple unit test can identify when the query takes longer than five seconds (5,000ms) to execute and can be executed multiple times to provide a framework for throughput testing as well. Combined with more basic unit tests, the database developer can quickly determine how well their application meets the specified requirements and has a framework in place for a TDD-based extension to the application. By extending the unit tests first and then writing new code, developers can ensure that their code continues to meet application requirements.
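The test above is a “positive” test. To guard against the common mistake mentioned earlier, the suite should also include “negative” tests; the following hedged sketch, written in the same PRINT-based style, checks that dbo.AppSearch rejects input containing a single quote.

SET NOCOUNT ON
DECLARE @errmsg varchar(100), @totaltime int, @rc int

PRINT 'Test 2: a search string containing a single quote'
EXEC @rc = dbo.AppSearch 'O''Brien', @errmsg OUTPUT, @totaltime OUTPUT
IF @rc = -1 AND @errmsg IS NOT NULL
    -- The procedure rejected the invalid input, which is what we expect
    PRINT '-- Test completed Successfully'
ELSE
    PRINT 'The procedure accepted invalid input, test FAILED!'

Once you are satisfied that the unit tests cover all possible input parameters and address all application scenarios, including the failure paths, you can move on to the application deployment phase of development.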
Creating a Plan for Deploying a Database

Generally speaking, developers of compiled code (such as C#, Visual Basic .NET, and C++) tend to understand the concept of application deployment fairly well, because they must always consider how their code will be delivered and have many tools available to them (built into their development environments in most cases) to assist them with the task of deploying their application. Database developers generally do not have that luxury and must decide how their objects will be combined, created, and deployed into both new and existing systems. For every database object (including the database itself), you must create a script that will either create or update that object. This is relatively easy to accomplish for a single object by simply selecting and right-clicking the object in SQL Server Management Studio’s Object Explorer and then selecting the Script…As option, as shown in Figure 5.3. Obviously, this will not scale well for a complete application when you have several objects to deploy, and it doesn’t consider that the object might already exist in a target database that is being redeployed or upgraded; therefore, you need a better technique to ensure that the database is properly deployed.
Selecting an Appropriate Deployment Technique

With the introduction of Microsoft SQL Server 2005, database development is becoming more integrated with “mainstream” development tools such as Microsoft Visual Studio; however, the “pure” database developer still spends most of their time in SQL Server Management Studio. SQL Server Management Studio provides a “single-source” method for scripting database objects, which is available by selecting a database, right-clicking, and selecting Generate Scripts from the Tasks menu, as shown in Figure 5.4.
FIGURE 5.3 Scripting a database stored procedure from within SQL Server Management Studio
The biggest problem with scripts generated in this fashion is that they assume the target system does not already exist. You can generate scripts that include the IF EXISTS...DROP statements around each object, which means that any scripts that run on a system containing data will lose that data if you are creating tables. One method of approaching this problem is to save scripts for each of your objects individually and then have your application installation process put them all together and execute them at deployment time, executing only those that are appropriate. For example, if the deployment is a new install, run CREATE TABLE statements, but if it is an upgrade, only run scripts using ALTER TABLE statements to add columns if necessary. To facilitate this approach, the application installation routine must have the appropriate logic built in to decide which scripts to run. Another method would be to write scripts in such a manner that an upgrade is always assumed but not necessary. Using this method, the developer would break deployment scripts into groups that might be organized as follows:

Initial build scripts. These scripts contain the necessary table build scripts. These are executed only for new installs.
FIGURE 5.4 Generating scripts for database objects from within SQL Server Management Studio
Programmable object scripts. These scripts contain the necessary Data Definition Language (DDL) for all the stored procedures, user-defined functions, and views. Each object is scripted with an appropriate IF EXISTS section that will drop the object if it already exists and then create the object. (A short sketch of this pattern follows the note below.)
If you are creating scripts that upgrade objects using the IF EXISTS...DROP technique, be sure to reapply any specific user rights to the objects that were dropped. Unlike with ALTER scripts, dropping objects will also drop any Data Control Language (DCL) modifications associated with the object.
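To illustrate, here is a minimal sketch of the IF EXISTS...DROP pattern just described, reusing an abbreviated form of the dbo.AppSearch procedure from earlier in this chapter; the SearchUsers role in the final GRANT is hypothetical and simply shows the permissions being reapplied as the note advises.

IF EXISTS (SELECT * FROM sys.objects
           WHERE object_id = OBJECT_ID(N'dbo.AppSearch')
             AND type = N'P')
    DROP PROCEDURE dbo.AppSearch
GO
CREATE PROCEDURE dbo.AppSearch
    @inputStr varchar(255)
AS
    SELECT classno, description
    FROM dbo.certifications
    WHERE description LIKE '%' + @inputStr + '%'
GO
-- Dropping the procedure also dropped its DCL, so reapply the permissions.
GRANT EXECUTE ON dbo.AppSearch TO SearchUsers
GO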
Post-build scripts. These scripts contain any necessary code to “clean up” the installation or to deliver any post-build patches that might need to be deployed.

No matter which deployment method you choose, the database developer needs to ensure that the application installation process executes the scripts in the correct order and correctly identifies any problems that arise during deployment. It’s important to realize that you do not always have to rely on scripts. SQL Server 2005 comes with a number of methods involving various tools or commands that you can use to deploy a database from a development environment to a production environment. The method you choose depends highly on your particular requirements.
For example, you might have a “read-only” database that has been developed at a central office and that needs to be deployed to various regional offices. (Think census information or other static data.) In this particular example, it would not make sense to deploy the database using scripts, particularly because you have to use BCP (which will be covered, along with SQL Server Integration Services (SSIS), in Chapter 9) to transfer the data.
Another nicety that should be available in SQL Server Management Studio is the ability to reverse engineer your data as a batch of INSERT statements. This is particularly useful for deploying database solutions where the lookup/base tables’ data needs to be included. If you agree or have a number of features or improvements that you would like Microsoft to consider for a future release of SQL Server, you should visit the new Product Feedback Center at http://labs.msdn.microsoft.com/productfeedback/.
In this example, you want to include the data, but other factors might be applicable that will dictate which technique you will use, including the following:
Security considerations such as whether logins exist on the SQL Server where the database needs to be deployed
Other security considerations such as encryption of the DML
Complex dependencies between the database objects and complexity of scripting them
The speed of the technique used
The complexity of the technique used
Network bandwidth and its corresponding utilization
Let’s examine some of the more commonly used techniques to deploy databases that do not rely on scripting alone.
Using the Copy Database Wizard

The Copy Database Wizard is a straightforward way of copying or moving a database from one instance of SQL Server to another instance. The advantage of using this technique is that it is easy because it basically grabs the entire database and copies or moves it to another instance. The Copy Database Wizard can use two transfer methods:

The “detach and attach” method. This obviously means the database will be taken offline. A nice feature of this particular method of deployment is that it will grab the database “warts and all” and replicate it as is. It avoids certain types of errors that can arise if you use other techniques because objects have to be compiled, dependencies exist, and security considerations potentially exist. Because you’re fundamentally doing things at the file level, it may be quicker. You’ll learn more about this when I discuss the detach method later in this chapter.

The SMO method. Using the SMO will be slower, but the database remains online. This is fundamentally because you reverse engineer the database and run the corresponding generated code to create a replica of the database.
SQL Server Management Objects

I can read minds. I know what you’re thinking: “What on Earth is the SMO?” The SMO is the new object model in SQL Server 2005 specifically designed for programmatically administering or managing SQL Server 2005. The SMO object model extends and supersedes the Distributed Management Objects (DMO) object model that was used in earlier versions of SQL Server. The SMO supports two categories of classes:
Instance classes that represent SQL Server objects, such as servers, databases and certificates. You can see this object hierarchy by searching for the “SMO Object Model Diagram” in SQL Server 2005 Books Online.
Utility classes that represent specific tasks that can be performed on SQL Server 2005’s objects. These classes have been further divided based on their functionality:
Backup and Restore classes, which are used to back up and restore SQL Server databases
Scripter class, which is used to generate database object script files
Transfer class, which is used to transfer data and schema between databases
In Exercise 5.1, you’ll deploy a database using the Copy Database Wizard. Because you have only one SQL Server instance, you will be deploying the AdventureWorks database as a new database, AdventureWorks_Production, on the same instance.

EXERCISE 5.1
Deploying a Database Using the Copy Database Wizard You have developed a database for www.whalewatch.org and are ready to deploy it into a production environment. The reference data on the decline of whale populations due to whaling and environmental factors has been loaded into the database. All testing data has been removed. Consequently you have decided to use the Copy Database Wizard to deploy the database.
1. Using the Windows Start menu, select All Programs > Microsoft SQL Server 2005 > SQL Server Management Studio.

2. Connect to your SQL Server 2005 environment.

3. Expand the Databases folder.

4. Right-click the database you plan to deploy (in this case the AdventureWorks database), and select Tasks. Then select Copy Database to start the Copy Database Wizard.

5. At this stage you should see the Welcome to the Copy Database Wizard window; click the Next button.

6. For the source server, enter your SQL Server instance name, or use (local) if you prefer.

7. Make sure the Use Windows Authentication radio button is selected, and click the Next button.

8. Use the same settings for the destination server. The defaults should be sufficient for this exercise. Click the Next button.

9. In this case, you do not want to take the database offline, so click the Use the SQL Management Object Method radio button, and click the Next button.

10. The Select Databases window allows you to select which databases you would like to copy or move. Notice that system databases are not allowed (why show them then? Duh!) and that the wizard has also indicated whether a database with the same name exists on the destination server. Click the Copy check box for the AdventureWorks database, and click the Next button.

11. The Configure Destination Database (1 of 1) window allows you to change database settings to allow for environmental differences such as different database file locations. Note that the wizard has automatically generated a different database name to avoid a name clash. Change the destination database name to AdventureWorks_Production, and click the Next button.

12. The Configure the Package window asks for some basic information about the SSIS package that will be created in the background. The defaults are usually sufficient. Change the package name to Platypus, and click the Next button.

13. In this case you want the package to run immediately, so ensure that the Run Immediately radio button is selected, and click the Next button.

14. The Complete the Wizard window summarizes your choices. You are ready to go! Click the Finish button.

The Copy Database Wizard will go through the five operations shown here, and you should now see AdventureWorks_Production in your SQL Server Management Studio.

15. To clean up your SQL Server instance, right-click AdventureWorks_Production, and select Delete.

16. Click the OK button in the Delete Object dialog box to confirm the deletion.
Detaching and Attaching Databases

An alternative to using the Copy Database Wizard is to manually perform the same operations. The benefit of doing it this way is that you can customize the deployment of the database to suit your requirements. Additionally, you typically will be logged in as an administrator, so you will have fewer problems as far as security is concerned. When using this technique, you generally use the following methodology:

1. Detach the database from your development server.

2. Copy the database files to the deployment server.

3. Attach the database files to the deployment server.

4. Run any development scripts that are required, including potentially the following:
   Running SQL Server dependencies external to the database
   Creating the security model, including creating logins, mapping the logins to database users, and so forth
   Deleting any unnecessary data such as test data

5. Reattach the original database files on the development server.
Detaching a database removes it from a SQL Server instance. The data and transaction log files remain intact so that they can be copied as is to another SQL Server instance. The syntax for detaching a database is as follows:

sp_detach_db [ @dbname= ] 'dbname'
    [ , [ @skipchecks= ] 'skipchecks' ]
    [ , [ @KeepFulltextIndexFile= ] 'KeepFulltextIndexFile' ]
You cannot detach a database if it has database snapshots. In this case, you have to drop all the database snapshots before detaching the database.
Once you have copied the database files for the detached SQL Server database, you are ready to attach the database. The syntax for attaching a database is as follows:

CREATE DATABASE database_name
    ON <filespec> [ ,...n ]
    FOR { ATTACH [ WITH <service_broker_option> ]
        | ATTACH_REBUILD_LOG }
[;]

<filespec> ::=
{
    ( NAME = logical_file_name ,
      FILENAME = 'os_file_name'
      [ , SIZE = size [ KB | MB | GB | TB ] ]
      [ , MAXSIZE = { max_size [ KB | MB | GB | TB ] | UNLIMITED } ]
      [ , FILEGROWTH = growth_increment [ KB | MB | GB | TB | % ] ]
    ) [ ,...n ]
}

<filegroup> ::=
{
    FILEGROUP filegroup_name [ DEFAULT ] <filespec> [ ,...n ]
}

<external_access_option> ::=
{
    DB_CHAINING { ON | OFF }
    | TRUSTWORTHY { ON | OFF }
}

<service_broker_option> ::=
{
    ENABLE_BROKER
    | NEW_BROKER
    | ERROR_BROKER_CONVERSATIONS
}
In SQL Server 2005, full-text files that are part of a database are attached to the database.
You can do all this in the SQL Server Management Studio environment, of course.
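As a hedged illustration of this methodology, the following shows what the detach and attach steps might look like in T-SQL. The file path is illustrative only, and the file copy itself happens at the operating system level between the two statements.

-- On the development server:
EXEC sp_detach_db @dbname = N'AdventureWorks'

-- Copy AdventureWorks_Data.mdf (and the log file, if you want to keep it)
-- to the deployment server, then on that server:
CREATE DATABASE AdventureWorks
ON ( FILENAME = N'D:\SQLData\AdventureWorks_Data.mdf' )
FOR ATTACH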
Backing Up and Restoring Databases

You can also use SQL Server’s backup and restore capabilities to deploy a database solution. The methodology is similar to what you saw previously:

1. Back up the database on your development server.

2. Copy the backup file to the deployment server.

3. Restore the database backup to the deployment server.

4. Run any development scripts that are required, including potentially the following:
   Running SQL Server dependencies external to the database
   Creating a security model, including creating the logins, mapping the logins to database users, and so forth
   Deleting any unnecessary data such as test data
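A hedged sketch of the equivalent backup and restore steps follows. The file paths are illustrative, and the WITH MOVE clauses are needed only when the file locations differ between the two servers; the logical file names shown match the AdventureWorks sample database.

-- On the development server:
BACKUP DATABASE AdventureWorks
TO DISK = N'D:\Backups\AdventureWorks.bak'

-- Copy the .bak file to the deployment server, then on that server:
RESTORE DATABASE AdventureWorks
FROM DISK = N'D:\Backups\AdventureWorks.bak'
WITH MOVE N'AdventureWorks_Data' TO N'D:\SQLData\AdventureWorks_Data.mdf',
     MOVE N'AdventureWorks_Log' TO N'D:\SQLData\AdventureWorks_Log.ldf'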
What is the difference between the two methods? Well, as I indicated, you take the database offline using the detach method, but that probably doesn’t make much difference in a development context. However, your choice of technique can make a big difference if you think about what is going on behind the scenes.
Think about the Underlying Technology Being Used

Another contract…another Australian financial institution…great view of Sydney Harbor, though. It’s summertime…Friday afternoon…I should be at the pub enjoying a cold beer, but another large database needs to be deployed from a development to a user acceptance/testing environment. Should I use backup and restore? Or should I use detach and attach? Sometimes it seems there is little between doing things one way or the other. But I want to finish this ASAP! Think about what’s happening behind the scenes. In this case, the database consists of a number of database files that total 50GB in size, but the database has only 100MB of lookup table data. If I decide to use the attach and detach method, I will be copying 50GB across the network. If I use the backup and restore method, I will have to transfer only 100MB or so across the network after the backup completes. Needless to say, I can be at the pub in record time!
And the Rest…

Yet another method for deployment is to use built-in SQL Server technologies, such as snapshot or merge replication. Replication is used as a deployment method when periodic updates of data are sent from a central location and the target environment is connected to the source. Using the replication method for deployment, database developers can ensure that their schema and pertinent data are replicated on a regularly scheduled basis. In other words, you can use one of several techniques; simply find a method that is appropriate to your particular requirements.
Designing Database Deployment Scripts

To properly deploy a database and its objects while setting up an application, you must write a custom installer using one of the techniques described earlier. The most common method for
deploying database objects is to build much of the logic into the scripts themselves, ensuring that the installer application is as simple as possible. Using this method, the installer application needs to follow these steps:

1. Connect to the target SQL Server as an administrator.

2. Check to see whether the target database exists.

3. If the target database does not exist:
   a. Execute the CREATE DATABASE script.
   b. Execute the script that creates user-defined types if necessary.
   c. Execute the script that creates server logins if necessary.
   d. Execute the script that creates database users.
   e. Execute the script that creates database roles.
   f. Execute the script that assigns database users to roles.
   g. Execute the scripts that create tables.
   h. Execute the scripts that create constraints and indexes.
   i. Execute the scripts that create views.
   j. Execute the scripts that create stored procedures.
   k. Execute the scripts that create triggers.
   l. Execute the scripts that populate tables with data.

4. If the target database exists, do the following:
   a. Verify the database version.
   b. If the version of the database is the same as the current version of the installer, then execute the scripts that repair the database objects and data.
   c. If the version of the database is a supportable upgrade, then follow the next four steps; otherwise, exit and report that the existing database must be deleted and re-created using the earlier steps.
   d. Execute the scripts that upgrade database tables as necessary (such as adding columns).
   e. Execute the scripts that recompile existing database objects.
   f. Execute the scripts that create new database objects.
   g. Execute the scripts that update table data.
You can basically break down the previous example into the following main sections:

Database creation. Create a new database and its objects and data.

Database repair. Repair an existing database, possibly adding application patches (sort of like a service pack) while leaving existing data intact.

Database upgrade. Upgrade an existing database to the current version, leaving the existing data intact.
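To make the version logic concrete, here is a minimal hedged sketch of the existence and version check such an installer might perform. The MyAppDB database and the dbo.SchemaVersion table are hypothetical conventions, not built-in mechanisms, and the dynamic SQL is used so the batch compiles even when the target database does not yet exist.

DECLARE @version int
IF DB_ID(N'MyAppDB') IS NULL
    PRINT 'New install: run the database creation scripts.'
ELSE
BEGIN
    -- Read the highest deployed version from the (hypothetical) version table.
    EXEC sp_executesql
        N'SELECT @v = MAX(VersionNumber) FROM MyAppDB.dbo.SchemaVersion',
        N'@v int OUTPUT', @v = @version OUTPUT
    IF @version = 2
        PRINT 'Current version found: run the repair scripts only.'
    ELSE IF @version = 1
        PRINT 'Version 1 found: run the upgrade scripts.'
    ELSE
        PRINT 'Unsupported version: the database must be deleted and re-created.'
END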
The important point to remember is that database deployment is not simply a delete-and-replace operation; developers must fully consider the implications of existing databases and application upgrades. To “put it all together,” database developers need a central location to store object scripts and to track changes to those scripts.
Controlling Source Code

Source control, sometimes called version control, is another software development issue that many organizations struggle with on an ongoing basis. The basic problem is that organizations need a method to ensure consistency of development across multiple developers and projects. Source control is part of a broader set of issues known as software configuration management (SCM).
Understanding the Benefits of Source Control

Many database developers consider SCM more of an intrusion than a necessary part of software development; however, the proper integration with SCM methods is an essential part of any software project, even if only a single developer is working on it. Generally speaking, SCM provides the following benefits:
A central location to store source code (in the case of database development, a central repository for SQL scripts)
A historical record of modifications and enhancements to code
The capability to work on objects in parallel and provide a structured method for merging changes to objects
The ability to control object access
Many SCM tools are available, ranging from free, open-source systems that have been around for many years to expensive commercial systems that support many developers working on a single project from many different locations. Microsoft has two SCM systems on the market today: Microsoft Visual SourceSafe (which is targeted toward the small group of developers working on a single project) and Microsoft Visual Studio Team Foundation Server (which is targeted to a much larger audience). SQL Server 2005’s native tools can integrate with many of the systems on the market today. Working with SCM in SQL Server Management Studio is as simple as selecting File > Source Control > Open from Source Control, as shown in Figure 5.5.
The examples in this chapter show an SCM system called Perforce, which is available at http://www.perforce.com. For more information about available SCM systems and technologies, follow the links in this Wikipedia article: http://en.wikipedia.org/wiki/SCM.
FIGURE 5.5 Opening a database object from source control using SQL Server Management Studio
Implementing SCM Techniques

No matter what SCM system you use, you can apply some basic techniques to incorporate SCM into the software development process. Generally speaking, SCM systems have a file system–like interface that allows developers to choose the object they want to work with and “check out” that object. (When a developer checks out an object from source control, the system keeps track of who has “control” of that particular object and ensures that changes made are tracked appropriately.) SCM system administrators can control which developers have access to any given object, much like system administrators have the ability to control access to files stored on a computer. When using SQL Server Management Studio’s built-in source control integration, a new window, Pending Checkins, lists each object that a developer has checked out from source control, as shown in Figure 5.6. The Pending Checkins window not only keeps track of all the items that a developer has checked out from source control but also allows the developer to keep track of changes to the objects as they are made. For example, if a developer wanted to know what changes have been made to a given object since it was checked out, they can right-click the object and select Compare Versions from the drop-down menu, as shown in Figure 5.7. This opens the source control differencing utility (P4Merge), as shown in Figure 5.8. The built-in integration with source control systems works well in SQL Server Management Studio, which makes it easy for organizations to choose any back-end SCM system that meets their needs without forcing developers to learn new systems.
FIGURE 5.6 SQL Server Management Studio’s Pending Checkins window

FIGURE 5.7 Accessing the source control version compare utility
FIGURE 5.8 Comparing versions of objects through the source control version compare utility
One problem that database developers need to be aware of is that, by default, users can reverse engineer database objects simply by using the Tasks > Generate Scripts menu discussed earlier. To protect against database object reverse engineering, developers can choose to utilize the built-in encryption mechanisms of SQL Server 2005. You can encrypt any database programming object (tables cannot be encrypted) by using the T-SQL WITH ENCRYPTION clause, as shown in the following code snippet:

CREATE PROCEDURE [dbo].[CustOrderHist]
    @CustomerID nchar(5)
WITH ENCRYPTION
AS
SELECT ProductName, Total = SUM(Quantity)
FROM Products P, [Order Details] OD, Orders O, Customers C
WHERE C.CustomerID = @CustomerID
  AND C.CustomerID = O.CustomerID
  AND O.OrderID = OD.OrderID
  AND OD.ProductID = P.ProductID
GROUP BY ProductName
When a user attempts to view the source of an encrypted object (or when they attempt to generate a script for the object), the dialog box shown in Figure 5.9 appears.
FIGURE 5.9 Attempting to view the source of encrypted objects
When you use WITH ENCRYPTION to encrypt source code, it is important to note that not even the object’s owner can script the object afterward, so make sure you save a script that contains the original, unencrypted text.
Source control is an effective method for ensuring that database developers maintain control over their development projects, and you shouldn’t think of it as necessary just in large organizations.
Summary

In this chapter, you learned the essentials of unit testing database objects and how you can design unit tests to test query performance. You also learned about performance profiles and how you can use them to understand how application changes affect the overall performance of the system. In addition, you learned some techniques for deploying databases and database objects. Finally, you learned about source control techniques and how you can seamlessly integrate them into your development environment.
Exam Essentials

Know how to design a unit test plan for a database. You must be able to understand and articulate how you can create unit tests for individual database objects and apply them to form a comprehensive unit test plan that covers both functionality and performance.

Know how to create a plan for deploying a database. You must be able to understand and articulate how you can create and use scripts to deploy database objects and data. You should also be able to recognize when you can use built-in SQL Server technologies such as merge or snapshot replication for database deployment.

Know how to control changes to source code. You must be able to understand and articulate how SCM systems can make database development easier and how the built-in SQL Server management tools integrate with source control systems.
Review Questions

1. You are in charge of a database development project and will be evaluated by how well the system works once deployed. You have five database developers and three C# developers who will be working for you on this project. You have a limited amount of time before the solution is due, and you cannot afford to waste time with unnecessary work. You need to ensure that the code is well tested before it is delivered. What do you do?
A. Require all C# developers to create interface documents that are closely adhered to by all database developers.
B. Require all developers to deploy and build the solution on a regular basis and test each piece of functionality as it is finished.
C. Hire an additional team member who will be responsible for overseeing all the testing of the application and ensure that the developers work in parallel with each other.
D. Require all developers to write additional code that will test the code on which they are working. Each developer will be responsible for maintaining their own test code library.
2. You are the project manager for a large-scale database development effort, and you need to ensure that your finished product meets all the requirements for application performance. Each developer is working on a specific portion of the code, and it will be several months before a finished product is ready to test. What is the best way for you to ensure that minimal effort will be required to validate application performance characteristics before the application is delivered?
A. Develop a comprehensive application performance profile. Require each developer to develop unit tests that prove their code meets the thresholds defined in the profile.
B. Ensure each developer writes unit tests that measure query performance.
C. Prepare a plan to test the application in an environment that closely simulates the deployment environment. Ensure that each application function meets or exceeds application performance requirements.
D. All of the above.
3. You are designing the strategy you will use to determine when you need additional database servers for production systems. You must determine when performance bottlenecks exist and what factors have caused those bottlenecks. What is the best approach to take?
A. Periodically run SQL Server Performance Monitor, and capture workload information. Examine these results to determine which queries are long running.
B. Periodically run Windows System Monitor, and capture workload information. Examine these results to determine whether hardware is causing the performance problems.
C. Periodically run the SQL Server Profiler and Windows System Monitor applications to capture workload files. Examine these results in the SQL Server Profiler to determine where the performance problems exist.
D. Periodically run SQL Server Performance Monitor and Windows System Monitor to capture workload files. Examine these results in SQL Server Performance Monitor to determine where the performance problems exist.
4. You are the information technology (IT) manager responsible for your company’s Internet-based application. Your company provides a quality of service (QOS) guarantee to your customers regarding Internet application performance. To ensure your company meets its QOS commitments, executive management has provided your team with an additional server to be used to test new application upgrades prior to deploying them to the production server. This additional server is configured identically to the production server. How can you best utilize this new server to ensure you meet your QOS requirements?
A. On the test server, deploy your application, and run both the SQL Server Profiler and Windows System Monitor applications to ensure that the QOS requirements are met. Do this before deploying any application change to the production server.
B. On the production server, deploy your application, and run both the SQL Server Profiler and Windows System Monitor applications to ensure that the QOS requirements are met.
C. On the production server, run the SQL Server Profiler and Windows System Monitor applications to capture the appropriate log and workload files. Using the SQL Server Profiler, replay the workload on the test server while monitoring system performance.
D. On the test server, run the SQL Server Profiler and Windows System Monitor applications to capture the appropriate log and workload files. Using the SQL Server Profiler, replay the workload on the production server while monitoring system performance.
5. You are the IT manager responsible for all applications running in your environment. You have test, development, and production servers. You have enabled source control in your development environment to track application changes. Recently, users have noticed slight problems with one of the applications that have been developed in-house. Testers have not seen any issues in the test environment but have been able to assist users in tracking down which database stored procedure is causing the issue in production. How can you determine what happened?
A. Using SQL Server Management Studio, script the procedure on the production server, and apply the script to the development server. Use SQL Server Management Studio on the development server to compare the changes to the version stored in source control to determine what changed and what needs to be done to solve the problem.
B. Using SQL Server Management Studio, script the procedure on the test server, and apply the script to the production server since there are no issues in the test environment.
C. Using SQL Server Management Studio, use the Pending Checkins window to compare the version of the procedure in source control to the version in production to determine what changed.
D. Using SQL Server Management Studio, script the procedure on the production server, and apply the script to the test server. Have testers test the application to determine the problem.
6. You are the project manager for a new web services application that your company is marketing. This product will be sold to various companies throughout the world, and your database developers have used some clever techniques to get the application to perform well. How can you ensure your application source code does not end up being reverse engineered?
A. When the application is packaged, do not include any SQL scripts, and deploy only compiled versions.
B. Employ a software package such as the Dotfuscator Community Edition, which is included with Microsoft Visual Studio.
C. Ensure that all your SQL Server objects are encrypted using RSA standard encryption algorithms.
D. Ensure that all your SQL Server objects are encrypted using the T-SQL WITH ENCRYPTION clause.
7. You are the development manager for a web hosting company. One of your clients allows their customers to publish information to the production database that is hosted by you. They want to develop a secure method to allow their customers to periodically receive new data and structure updates without having to reinstall the database on their local systems. What is the best method to allow for this type of interaction?
A. Create a SQL Server Integration Services package that will allow their customers to connect and transfer new database objects and data.
B. Enable SQL Server replication on both the database server hosted by you and on each customer’s database server.
C. Develop database scripts that your clients’ customers can execute on a regular basis to keep their systems up-to-date.
D. Package and send out a CD that contains all the database scripts and data that your clients’ customers will need.
8. You are in the process of developing a new version of an existing application that is database intensive. It has been decided that the database objects must be thoroughly tested before the new application is deployed. What is the best method of ensuring that the new database code is ready for deployment?
A. Before developing any new stored procedures, make sure there is a well-understood interface document that details exactly what the procedure should do.
B. Create a comprehensive test plan that requires testers to exercise each function in the UI before the application is declared ready.
C. Create a unit test plan that ensures each piece of database functionality is fully stressed and executed during integration testing.
D. Write stored procedure–based tests that call each database procedure, and integrate them into the development process.
9. You are a database developer responsible for ensuring users cannot view your T-SQL code. What T-SQL clause can you use to meet these requirements?
A. WITH SCHEMABINDING
B. WITH OPTION MAXDOP
C. WITH ENCRYPTION
D. WITH NOSOURCE
10. You are a database developer responsible for ensuring your application changes are properly deployed to an existing application environment. Your changes include adding columns to existing tables and updating relationships. What deployment method will ensure success with minimal interruption of application functionality?
A. Use SQL Server Integration Services to copy all existing data into a temporary storage location. Delete and re-create the database, deploy all objects, and copy the data to its original location.
B. Write a custom installer that ensures all deployment scripts that modify tables execute first by using the ALTER TABLE statement, and then execute data manipulation scripts to modify existing data in place. Deploy any new objects by re-creating them if they already exist. Execute post-build scripts to ensure all data is intact and ready to use.
C. Write a custom installer that modifies only existing objects. Provide the users with scripts they manually execute to update data.
D. Use the built-in capabilities of the Microsoft Installer included with Visual Studio 2005 to ensure database objects are properly deployed.
11. You are an engineering executive responsible for a large development team that produces many products simultaneously. Each of your project teams relies on other teams for pieces of their product. How can you ensure each developer modifies only the code for which they are responsible?
A. Employ a proper SCM strategy that includes authentication and authorization as well as well-defined code branch strategies.
B. Ensure that developers on each project have a single SQL Server where all code is applied and tested. Ensure only developers on that project have access to the server.
C. Ensure that each developer has a local copy of Visual SourceSafe to store their application code. Replicate each local database to a central repository that is controlled by the release manager.
D. Ensure each developer uses Visual Studio to develop all the aspects of the application. Install Microsoft Visual Studio Team Foundation Server, and ensure each developer connects to that server to store their source code.
12. You are in the process of migrating an application that has been developed in-house from SQL Server 2000 to SQL Server 2005. There are no plans at this time to update the application. How can you best ensure the application will work properly on SQL Server 2005 before deploying it in production?
A. Back up the database on SQL Server 2000 and restore it to SQL Server 2005. Have a team of users test all the functionality prior to deploying it into production.
B. Ensure all unit test procedures execute correctly on SQL Server 2005 once you have migrated the application.
C. Use the SQL Server 2005 Migration Wizard to detect any problems that might occur.
D. Use SQL Server Notification Services to notify you of possible incompatibilities in your code.
13. When developing database unit test plans, what objects should you test?
A. Stored procedures
B. Functions
C. Views
D. Tables
E. User-defined data types
F. CLR procedures
14. Which built-in SQL Server technology is best suited for monitoring the overall system performance?
A. SQL Server Database Tuning Wizard
B. SQL Server Performance Monitor
C. SQL Server Profiler
D. SQL Server Notification Services
15. What factors are important to consider when building a comprehensive performance profile?
A. Application response time
B. Application query throughput
C. System resource utilization
D. All of the above
16. What is the largest drawback with procedures that include WITH ENCRYPTION in their creation scripts?
A. Performance suffers.
B. The objects can no longer be updated.
C. The objects cannot be edited by native tools.
D. There are no drawbacks to using WITH ENCRYPTION.
17. When upgrading existing objects, what must a database developer remember to do during deployment?
A. Reissue any GRANT statements to ensure that the appropriate security statements have been applied to the objects.
B. Ensure that the existing objects are not modified and only new objects are added.
C. Ensure that the existing data is unmodified.
D. Ensure that all the existing objects are renamed and versioned copies are stored in the database.
18. Which of the following statements demonstrate the proper use of the T-SQL WITH ENCRYPTION clause?
A. CREATE PROCEDURE WITH ENCRYPTION [procedure1] AS {…}
B. CREATE PROCEDURE [procedure1] AS {…} WITH ENCRYPTION
C. CREATE PROCEDURE [procedure1] WITH ENCRYPTION AS {…}
D. WITH ENCRYPTION CREATE PROCEDURE [procedure1] AS {…}
19. When using agile database development techniques, what are some considerations for “selling” TDD to management?
A. Agile methods are faster than regular development methods.
B. Agile methods force the creation of requirement documents so developers understand what they are building.
C. Agile methods force developers to create good documentation.
D. Agile methods ensure higher-quality code.
20. What is the fastest method for generating database creation scripts for deploying to a test server?
A. Use SQL Server Integration Services.
B. Use SQL Server Management Studio.
C. Use SQL Server Notification Services.
D. Use SQL Server replication.
Answers to Review Questions
1. D. Even though it appears to be more work, having each developer implement unit tests for each piece of code they write will ensure that the finished product is of much higher quality than can be created using any of the other methods in the answers.
2. D. To ensure that an application meets all the quality attributes required, it is important to develop a comprehensive performance profile and ensure that developers write unit tests to prove their code meets the profile. This is not sufficient, however, because the application also must be tested in as close to a real-world environment as possible. Tools such as the SQL Server Profiler and Windows System Monitor are excellent resources for this sort of testing.
3. C. The SQL Server Profiler that ships with SQL Server 2005 is a comprehensive analysis tool that has the capability of correlating Windows System Monitor log files to SQL workloads; this gives you a good overview of exactly what is happening on the system at any given time.
4. C. Using SQL Server Profiler to capture a workload file from the production server allows you to replay that file on the test server while monitoring system performance. Monitoring performance alone on the test server is not sufficient, because the server is not under the load of normal users. Monitoring performance on the production server will not catch any issues with application performance prior to deployment and may affect QOS.
5. A. To solve the problem, you need to first determine what has changed from the development system to the production system. Since you already know what procedure is causing the problem, it will be easy to move the object to the development server and compare the version against the version stored in source control. This way you can determine exactly what happened in the deployment process and ensure that the problem is solved for now and for future deployments.
6. D. Since T-SQL code is not compiled, code obfuscator programs such as Dotfuscator will not work. The only method that works is to create database objects with the WITH ENCRYPTION clause.
7. B. In cases where many locations need an exact copy of both the data and structure within a database, SQL Server replication (most likely merge replication in this case, since the connections are only periodic) is an efficient means for moving both data and objects.
8. D. Using TDD, a developer of SQL Server objects can first write tests that meet requirements for the object they are working on and then write just enough code to satisfy the tests. This can ensure higher-quality code in the end.
9. C. The T-SQL WITH ENCRYPTION clause ensures deployed objects cannot be reverse engineered, even by administrators on the SQL Server.
10. B. There is no easy answer to this question; the only way to ensure success is to write a custom installer following one of the earlier examples in Chapter 5. The built-in Microsoft Installer is great for deploying nondatabase objects, but any database deployments must be handcrafted.
11. A. Proper SCM techniques are the best way to ensure that projects large and small are properly coordinated and managed.
12. B. The best method available for making sure an application functions after any change is to ensure that all unit test procedures execute correctly. Properly designed unit tests will ensure even the parts of the application that are not used regularly will get tested.
13. A, B, C, D, E, F. Complete unit tests include testing every object in the database.
14. C. You can use the SQL Server Profiler to capture activity against a SQL Server instance and in conjunction with Windows System Monitor to correlate system-level performance counters with SQL Server activity.
15. D. When developing comprehensive performance profiles, it is important to cover all the aspects of the application, including how the overall system is affected (the CPU, memory, network, and so on).
16. C. When using the WITH ENCRYPTION clause, objects that are created have only encrypted information stored in syscomments; therefore, they can no longer be edited by native tools. They can still be updated if the original unencrypted source still exists, and the performance hit is minimal.
17. A. When using upgrade scripts, one of the standard mechanisms is to ensure that existing objects are dropped and then re-created. This does not store any DCL applied to that object, so GRANT statements must be executed as part of a post-install process.
18. C. The proper syntax of the CREATE PROCEDURE statement includes the WITH ENCRYPTION clause after the procedure name and before the body of the procedure.
19. C. Although agile methods are sometimes controversial and all of these options can apply, the most concrete benefit offered by TDD is the automatic creation of documentation. When using TDD, database developers spend a fair amount of time writing unit tests that exercise all input parameters, which allow other developers to simply examine the test cases to understand what the procedure is doing.
20. B. The quickest method for generating deployment scripts for a test environment is to use the Generate Scripts function in SQL Server Management Studio. You could use both replication and SQL Server Integration Services, but they require more time to develop.
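As a quick illustration of the syntax tested in question 18, here is a minimal sketch of an encrypted procedure. The procedure name and body are hypothetical; the point being illustrated is the placement of the WITH ENCRYPTION clause—after the procedure name and before the AS keyword:

CREATE PROCEDURE dbo.GetSecretData
WITH ENCRYPTION  -- the clause goes after the name, before AS
AS
SELECT 1 AS SecretValue;
GO

-- The source text is now stored encrypted, so native tools cannot display it:
EXEC sp_helptext 'dbo.GetSecretData';
-- Returns: The text for object 'dbo.GetSecretData' is encrypted.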
Chapter 6
Designing a Web Services Solution
MICROSOFT EXAM OBJECTIVES COVERED IN THIS CHAPTER:
Select and design SQL Server services to support business needs.
Select the appropriate services to use to support business needs.
Design a SQL web services solution.
Design a Microsoft Distributed Transaction Coordinator (MS DTC) solution for distributed transactions.
Design a SQL Server core service solution.
Design a SQL Server Agent solution.
Design data distribution.
Specify a web services solution for distributing data.
Although SQL Server 2005 introduces a rich set of new-and-improved features, few of them do as much to extend the reach of SQL Server applications as its new support for XML web services. In this chapter, you’ll look at distributed application development and how this new feature in SQL Server 2005 adds another option to your database application toolkit.
A Brief History of Distributed Application Development
Everyone takes certain things for granted these days, including high gasoline prices, flavorless fast food, global warming and the destruction of the environment for profit, the greed of globalization and capitalism, and the ability to quickly and easily build software that works across widely distributed systems. But it wasn’t always this way. Little more than 10 years ago, the ability to build software that communicated across networks was limited to a select few developers with specialized skills. Networked applications were possible, but building them involved working directly with sockets, named pipes, and other low-level networking concepts, and it wasn’t for the faint of heart.
In the mid-to-late 1990s, technologies began appearing that made building distributed software easier: Microsoft’s Distributed Component Object Model (DCOM), the Object Management Group’s Common Object Request Broker Architecture (CORBA), and Java’s Remote Method Invocation (RMI) all gave developers the ability to build distributed applications—without worrying about the underlying network interface—using object-oriented programming techniques. Although this was a major step forward, these programming frameworks didn’t solve all the problems that programmers faced. DCOM, CORBA, and RMI were all built on proprietary binary network protocols; although this was a commonsense approach in the days when a 10BaseT Ethernet network was considered speedy, it was a major barrier to interoperability. Software built using DCOM could not be made to talk to programs built using CORBA, for example, without writing a lot of complex interop code or purchasing expensive application bridges. In addition, this generation of distributed application technologies was built primarily for local area networks (LANs); configuring a DCOM application wasn’t exactly simple to begin with, but when you tried to move from the LAN to a wide area network (WAN) environment, the configuration became more and more complex. This also created serious security concerns, so even if you could get your application running on a WAN, keeping it (and your network) secure was an ongoing challenge.
What Is the Microsoft Distributed Transaction Coordinator?
In the days of the Windows NT 4.0 Option Pack, I remember the excitement of students when I was demonstrating Microsoft’s Distributed Transaction Coordinator (DTC) and distributed transactions via the BEGIN DISTRIBUTED TRAN statement on SQL Server 7.0.
So, what is the DTC? Well, basically the DTC does what its name suggests. It’s an operating system component that exposes objects through the Component Object Model (COM), which allows clients to participate in coordinated transactions across multiple servers. In other words, the DTC provides the plumbing for implementing distributed transactions. A number of Microsoft technologies utilize the DTC. One example of where it has been used is with transactional replication with immediately updating subscribers.
Importantly, the DTC uses a “connected model,” which basically means all nodes must be online for a distributed transaction to complete. This is not appropriate for the brave new “disconnected world,” where we have seen the emergence of service-oriented architectures (SOAs). Consequently, we have seen the development of the Service Broker, web services, and of course this chapter!
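To give a feel for what the DTC coordinates, here is a minimal sketch of a distributed transaction in T-SQL. The linked server name (RemoteServer) and the table names are hypothetical assumptions for illustration; the point is that the local and remote updates commit or roll back as a single unit, with the DTC managing the two-phase commit:

-- Assumes a linked server named RemoteServer has been configured
-- and that the MS DTC service is running on both machines.
BEGIN DISTRIBUTED TRANSACTION;

UPDATE dbo.Accounts
SET Balance = Balance - 100
WHERE AccountID = 42;

UPDATE RemoteServer.Finance.dbo.Accounts
SET Balance = Balance + 100
WHERE AccountID = 42;

COMMIT TRANSACTION;  -- the DTC coordinates the commit across both servers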
What programmers lacked was a way to overcome these problems so they could start building software that worked on a truly global scale without requiring a team of PhDs to get the work done.
Introducing XML Web Services
Enter the era of XML web services. In the late 1990s and early 2000s, a group of software vendors including Microsoft, IBM, and Sun Microsystems collaborated to define a set of new technologies that overcame the limitations of existing distributed application development technologies and enabled software developers to quickly and easily build secure, highly interoperable, and widely distributed systems. A core component of this new platform was the XML web service: a reusable software component that could be communicated with over networks—including the Internet—using accepted standards and protocols. Because XML web services were built upon existing industry standards, they made cross-platform applications not only a reality but part of everyday life. Using Visual Basic to build a Windows application that used an XML web service written in Perl and running on a Solaris server was as easy as if the XML web service had been written in Visual Basic and running on the same computer as the Windows client. In fact, the operating system and underlying language of the XML web service were something the client developer did not need to know.
In fact, one of the primary goals of the Simple Object Access Protocol (SOAP), which I’ll talk about in a second or two, was simplicity—the group of software vendors didn’t want to invent anything they didn’t really need to invent. Because of this, the original SOAP specification was incredibly short—about 30 pages. If you’ve ever read the specifications for DCOM or CORBA (or SQL, for that matter), you know what a feat this is. You can read the SOAP specification at http://www.w3.org/TR/soap/.
These are some of the standards on which the XML web service specification was built:
Transmission Control Protocol/Internet Protocol (TCP/IP) Because XML web services were designed from the beginning to be the building blocks of Internet-scale applications, it made sense to build them on top of the networking protocol stack that the Internet uses.
Extensible Markup Language (XML) XML had been rapidly gaining acceptance as a way to share complex information between applications for the years leading up to the introduction of XML web services. XML provides a simple yet flexible way to communicate; XML documents are both computer- and human-readable, and with related standards such as XML Schema Document (XSD), Extensible Stylesheet Language (XSL)/XSL Transformations (XSLT), and XPath, it serves as “the lingua franca of the Internet.” Every modern operating system and development platform (and quite a few not-so-modern ones) has XML parsers and related tools available.
Hypertext Transfer Protocol (HTTP) HTTP had been widely used by web browsers for years before XML web services existed, and as such HTTP is supported on every platform that has the ability to access the Internet. In addition to its broad support, HTTP also has some additional characteristics that make it well suited to XML web services, such as a stateless request/response pattern.
One significant drawback to sharing information using XML is its size: because XML documents contain both the data and markup tags that describe the data, they can quickly grow quite large. It is not uncommon to see XML documents that are five times (or sometimes more) as large as the data they contain. This is often acceptable because network bandwidth is much more freely available than it was in the days of DCOM and CORBA (one of the main reasons these older standards used binary protocols was because of the reduced message size this approach offered), but it can still be problematic when transmitting large volumes of data across slower networks.
In addition, some new standards were defined to enable the new features and functionality of XML web services:
SOAP The Simple Object Access Protocol (SOAP) is at the heart of XML web services; it defines the message format used to communicate with XML web services. Just as DCOM and
CORBA defined the binary format to package calls to the older generation of remote application components, SOAP defines an XML format for calls to the modern generation of remote components: XML web services. SOAP is built upon XML, HTTP, and other well-established standards, so it was easy for operating system and platform vendors to implement the standard using their tools.
Web Services Description Language (WSDL) WSDL is an XML dialect that is used (as its name implies) to describe XML web services. WSDL documents give client software and development tools information about the external interfaces supported by an XML web service without sharing any information about the service’s internal implementation. In this way, WSDL is like the Interface Definition Language (IDL) used by COM and DCOM, but WSDL is a much more complete (and complex) language than IDL.
Reading the WSDL Specification
The WSDL specification is a little heftier than the SOAP specification; it weighs in at about 50 pages. Still, much of this consists of example XML documents, not the technical language of the specification. The abstract at the beginning of the specification starts with this description:
WSDL is an XML format for describing network services as a set of endpoints operating on messages containing either document-oriented or procedure-oriented information. The operations and messages are described abstractly, and then bound to a concrete network protocol and message format to define an endpoint. Related concrete endpoints are combined into abstract endpoints (services).
I love this description because it captures the essence of WSDL so succinctly—WSDL is all about describing web services in an abstract manner, and it’s up to the web service (or services) that implements the WSDL interface to provide the concrete implementation. You can read the WSDL specification at http://www.w3.org/TR/wsdl/.
Of course, even though I’ve covered a lot of ground about how XML web services came about, I haven’t really talked at all about what they do, much less how they relate to SQL Server 2005. Please bear with me for just a few seconds more; we’re almost there. In essence, XML web services are building blocks with which application developers can build new and exciting software. Just as earlier component technologies such as DCOM made it possible for developers to write client code without worrying about where the server component was located, as long as it was located on the same network, XML web services make it easy for developers anywhere in the world to access components anywhere else in the world, regardless of how they were created. You’ll find public XML web services for accessing weather and traffic information, and major Internet players such as Amazon, Google, and eBay have created XML web services so developers can more easily access their data. Many software tools today also expose their functionality through an XML web services interface— Microsoft Visual SourceSafe 2005 and Microsoft SQL Server Reporting Services both provide
XML web services so developers can easily program against them. And, of course, XML web services are everywhere today in custom software being built to solve business problems—if you’ve designed and built a custom business application in the past five years, odds are it uses XML web services, includes XML web services, or does both.
The Loosely Coupled World: A New Paradigm for Architecting Software
At this point you may be saying something like “Hey—that sounds a lot like that SOA stuff I’ve been hearing a lot about lately” (or that you will be hearing about after reading Chapter 7). Although XML web services and SOA are not truly linked—you can have either one without the other—they do tend to go hand in hand. One of the core tenets of SOA is building applications from “loosely coupled services” that each work independently but that are brought together to form a greater whole. You can implement the services in an SOA using many different technologies (SQL Server Service Broker is an excellent choice), but XML web services are a natural fit because of their cross-platform nature and their ability to be called asynchronously, which is vital for high performance in an SOA application.
One thing that XML web services need is a web server. Often the choice of web server is closely related to the choice of development tools used to build XML web services. If you’re using Microsoft’s Internet Information Services (IIS) as your web server, you will probably develop your XML web services using Microsoft’s ASP.NET technologies. (These XML web services are often called ASMX web services, or simply ASMX after the file extension used to create them.)
For more information on Microsoft’s IIS capabilities, visit http://www.microsoft.com/WindowsServer2003/iis/default.mspx.
If you choose Apache as your web server, you’ll probably develop your XML web services using Java or Perl. Of course, many different web servers and many different XML web services programming toolkits are available, but you get the idea.
What exactly is a web server, anyway? Although many people think of a computer as being a web server, the web server itself is actually a software component. This web server is often IIS or Apache, but it can be just about any piece of software that can listen on TCP port 80 (or port 443 for Secure Sockets Layer [SSL] connections), accept incoming SOAP request messages, and return SOAP response messages. For example, you’ll find an excellent article on how to develop ASP.NET ASMX web services without IIS on the MSDN website at http://msdn.microsoft.com/msdnmag/issues/04/12/ServiceStation/.
Unlike earlier distributed application frameworks, the choice of development tools used to build client software for XML web services is not tied to the server implementation. Each XML web service exposes a WSDL document that describes the service. Most development tools will automatically read this WSDL and create proxy classes that take care of all the “plumbing” necessary to send SOAP messages back and forth with the XML web service. The developer gets all the benefits of a distributed application built on industry standards while needing to do little or no work to get them. What could be better? Of course, you’ll see what’s even better in the next section.
Using Native XML Web Services in SQL Server
Until SQL Server 2005, if you needed to expose data in a SQL Server database through an XML web service, you needed a separate web server such as IIS to host the XML web services. Client software connects to the XML web service hosted by the web server using HTTP or HTTPS; the XML web service code connects to the SQL Server using Tabular Data Stream (TDS). You can see a common configuration for exposing SQL Server data using external XML web services in Figure 6.1.
Starting with SQL Server 2005, client software can use another protocol to access SQL Server’s relational data: HTTP. No longer is an external web server needed, because SQL Server 2005 can act as its own web server, listening for HTTP requests directly from clients without needing TDS anywhere in the loop. This feature is known as native XML web services, and it allows SQL Server 2005 to expose stored procedures, extended stored procedures, scalar user-defined functions, and Transact-SQL (T-SQL) batches (although you’ll see later that this last option isn’t always the best idea) as XML web services using SOAP, WSDL, and HTTP/HTTPS. With native XML web services, the diagram from Figure 6.1 would look like Figure 6.2.
FIGURE 6.1 SQL Server and external web services (diagram: web and Windows clients connect through a firewall to a web server over HTTP/HTTPS; the web server connects to SQL Server over TDS)
SQL Server’s Real “Language”: Tabular Data Stream
TDS? Where did that come from? SQL Server uses a proprietary and largely undocumented application-level protocol called Tabular Data Stream for client-server communications. TDS streams are self-describing in that they contain information about the shape of the data as well as the data itself, and they are optimized for relational database traffic. TDS is one of those “under-the-covers” details of SQL Server that most database professionals don’t need to know. All SQL Server connectivity software—ODBC and JDBC drivers, OLE DB and .NET providers—uses TDS internally; these components simply provide a wrapper around TDS to make it easier to write software that talks to SQL Server. Although several open-source implementations of TDS exist (FreeTDS and jTDS) and although Sybase implements its own version of TDS for its database servers, when discussing Microsoft SQL Server, it is usually safe to think of TDS as a Windows-only protocol.
FIGURE 6.2 SQL Server and native XML web services (diagram: web and Windows clients connect through a firewall directly to SQL Server over HTTP/HTTPS; no web server or TDS in the path)
In this scenario, you don’t need a separate web server computer or need to have IIS installed on the SQL Server computer. Not only can this reduce management complexity and licensing costs, but it can also improve security by reducing the attack surface of the web server. IIS implements a lot of functionality that SQL Server’s HTTP server does not, so a hacker can exploit fewer avenues of attack. Also—as you’ll see later when you look at how SQL Server secures native XML web services—SQL Server does not allow any anonymous connections, so without proper credentials an attacker can’t even make an HTTP connection to start an attack.
What’s Going on Under the Hood?
At this point, you are probably asking either “Wow, how did they do that?” or “They didn’t really build another web server, did they? Wasn’t IIS enough?” You’ll be glad to learn that Microsoft went about things the right way when it made SQL Server 2005 a web server as well as a database server. Do you remember when Windows Server 2003 was still in beta and Microsoft was first publishing information about changes made since Windows 2000? One of
those changes was the redesign of the HTTP processing stack; with Windows Server 2003, Microsoft moved HTTP processing from user mode (as part of IIS) to kernel mode (as part of the operating system). HTTP.SYS, the new kernel-mode HTTP driver, now provides this functionality. What does this mean for implementing SQL Server 2005 native web services? The most obvious result is that although SQL Server 2005 can install and run on Windows 2000, you cannot implement native web services because the required operating system feature does not exist—no HTTP.SYS means no SQL Server 2005 native web services. Does this mean you need Windows Server 2003 in order to develop native XML web services as well? Fortunately, this is not the case. Microsoft has implemented HTTP.SYS in Windows XP Service Pack 2 as well as in Windows Server 2003, so you can continue doing your development work on your Windows XP development workstation and deploy to Windows Server 2003 when development and testing are complete.
Building an XML Web Service Using SQL Server
I’ve spent enough time talking about what web services are and how SQL Server 2005 lets you build them. Now I’ll actually show how to build them so you can see how to take advantage of this exciting new functionality. The first thing you need is a SQL Server object to expose. In Exercise 6.1, you’ll create a simple stored procedure using the AdventureWorks sample database that ships with SQL Server 2005.
AdventureWorks replaces the pubs and Northwind sample databases that have been shipping with SQL Server for many years. AdventureWorks is much more complicated than the older sample databases because it demonstrates the more complex feature set of SQL Server 2005 and because it models more meaningful data. If you do not have the AdventureWorks database on your SQL Server instance, read the topic “Running Setup to Install AdventureWorks Sample Databases and Samples” in SQL Server 2005 Books Online for detailed setup instructions.
The web service you’ll create will list all employees who currently work for a specific department at the fictitious AdventureWorks company. To implement this logic, you’ll create a stored procedure that accepts a single input parameter containing the department name and returns a single result set containing the last and first names of all employees currently working in the specified department. The AdventureWorks database is quite a bit better normalized than the older sample databases, so this may be more complex than you’d expect, but it’s still a pretty trivial stored procedure.
EXERCISE 6.1
Creating a Stored Procedure
1. Open SQL Server Management Studio, and connect to your local AdventureWorks database.
2. Create a new query in SQL Server Management Studio.
3. In the query editor window, enter this SQL script:

USE AdventureWorks;
GO

-- Get all employees currently in the department
CREATE PROCEDURE dbo.ListEmployeesByDepartment
    @DepartmentName NVARCHAR(50)
AS
SELECT c.LastName, c.FirstName
FROM Person.Contact c
INNER JOIN HumanResources.Employee e
    ON c.ContactID = e.ContactID
INNER JOIN HumanResources.EmployeeDepartmentHistory h
    ON e.EmployeeID = h.EmployeeID
INNER JOIN HumanResources.Department d
    ON h.DepartmentID = d.DepartmentID
WHERE d.Name = @DepartmentName
    AND h.EndDate IS NULL
ORDER BY 1;
GO

4. Execute the script.
5. Once you’ve created the stored procedure, you can test it using this code:

EXEC dbo.ListEmployeesByDepartment 'Sales'
EXEC dbo.ListEmployeesByDepartment 'Human Resources'
EXEC dbo.ListEmployeesByDepartment 'Tool Design'

6. Leave SQL Server Management Studio open; you’ll use it again in another exercise.
The next step you need to perform is to create a SOAP endpoint to expose this stored procedure to clients as a web service. A lot of SQL Server professionals tend to rely pretty heavily on the excellent graphical tools that ship with SQL Server. With the SQL Server 2005 release, the tools have gotten even better, although it does take a little while to get used to the differences between Enterprise Manager and Query Analyzer and the new SQL Server Management Studio. Because of this tendency, you may be tempted to look for a graphical user interface (GUI) tool to create a new SOAP endpoint. However, as you can see in Figure 6.3, there is no such graphical tool. When you right-click the SOAP folder under Endpoints, the expected Create or New option isn’t there—the only option is Refresh. The reason for this is that SQL Server Management Studio doesn’t have a GUI for managing endpoints. You create, alter, and delete endpoints using the new CREATE ENDPOINT, ALTER ENDPOINT, and DROP ENDPOINT T-SQL statements. Some people may cringe at the thought of writing a lot of T-SQL code, but this is a best practice even when a GUI tool exists; you can place script files under source control with other application code, and it is always much easier to reproduce steps taken in a script than steps taken in a GUI tool.
FIGURE 6.3 Endpoints in SQL Server Management Studio
Understanding Different Types of Endpoints
OK, you ask, I’m talking about SOAP endpoints, but there are other endpoints shown in Figure 6.3—what about the database-mirroring, Service Broker, and T-SQL endpoints? Well, the new database mirroring and SQL Server Service Broker use endpoints as well, although they use their own binary protocols, not HTTP or SOAP. In addition, SQL Server 2005 exposes its new dedicated administrator connection (DAC) and its network libraries as endpoints, although you’re more likely to administer the latter using the SQL Server Configuration Manager tool. The CREATE ENDPOINT, ALTER ENDPOINT, and DROP ENDPOINT statements vary when working with these different types of endpoints—see SQL Server Books Online for the details.
The next step you’ll take before you create your SOAP endpoint is to stop IIS if it is running. There are actually other reasons for this (and I’ll get into them later in the chapter), but for now this will let you ensure that it is SQL Server, and not IIS, that is providing the web service. You can stop IIS in many ways, but Exercise 6.2 shows my personal favorite.
EXERCISE 6.2
Stopping IIS
1. At a command prompt or from the Windows Run dialog box, execute the following command:

net stop iisadmin /y

2. Once you’re certain that IIS is not running, you can proceed with creating your SOAP endpoint. Open a new query in SQL Server Management Studio, and in the new query editor window, enter this SQL script:

CREATE ENDPOINT EmployeeWebService
STATE = STARTED
AS HTTP (
    PATH = '/AWemployees',
    AUTHENTICATION = (INTEGRATED),
    PORTS = (CLEAR),
    SITE = '*'
)
FOR SOAP (
    WEBMETHOD 'ListEmployees' (
        NAME = 'AdventureWorks.dbo.ListEmployeesByDepartment',
        SCHEMA = STANDARD
    ),
    WSDL = DEFAULT,
    SCHEMA = STANDARD,
    DATABASE = 'AdventureWorks',
    NAMESPACE = 'http://adventureworks.com/'
);
GO

3. Execute the script.
Did you catch a subtle difference between the CREATE PROCEDURE script and the CREATE ENDPOINT script? You executed the CREATE PROCEDURE script against the AdventureWorks database, because stored procedures are database objects. Regardless of the database context in which the CREATE ENDPOINT is executed, all endpoints are server-level objects and do not exist in any specific database. This is an important concept to understand. I’ll touch on it several more times throughout this chapter, but for now just keep in mind that endpoints are not “owned” by a specific database like tables, views, and stored procedures are.
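You can see this for yourself by querying the server-level catalog views. Here is a minimal sketch (the column lists are abbreviated); because endpoints are server-scoped, these queries return the same rows no matter which database you run them from:

-- All endpoints on the instance, regardless of the current database context
SELECT name, protocol_desc, type_desc, state_desc
FROM sys.endpoints;

-- HTTP-specific settings for endpoints such as EmployeeWebService
SELECT name, site, url_path, clear_port
FROM sys.http_endpoints;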
Now you have a SOAP endpoint, but what does that really accomplish? You’ll look at the CREATE ENDPOINT syntax in greater detail later, but these are the basics:
SQL Server is now listening for incoming HTTP requests on TCP port 80, the “clear” HTTP port.
SQL Server is using Windows Authentication.
SQL Server is listening on all available host names for the computer that are not otherwise reserved by another HTTP listener.
The web method name (the name of the function you’re exposing via HTTP) is ListEmployees.
You’re letting SQL Server generate the WSDL document that tells clients what to expect from the web method.
The web method is exposing the stored procedure you just created in the AdventureWorks database.
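If you want to confirm the web method mapping listed above, SOAP web methods are exposed through their own catalog view as well; this is a minimal sketch:

-- Web methods defined on SOAP endpoints, joined back to their endpoints
SELECT e.name AS endpoint_name, w.method_alias, w.object_name
FROM sys.endpoint_webmethods AS w
INNER JOIN sys.endpoints AS e
    ON e.endpoint_id = w.endpoint_id;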
Using the Service
When you created the stored procedure in Exercise 6.1, you tested it simply by executing it in SQL Server Management Studio. With a web service, things are a little more complex. Although you could use VB .NET or C# to create a common language runtime (CLR) stored procedure that calls the web service, you won’t do that here. Why not? For three primary reasons:
This is a worst practice, and you don’t want to start any bad habits now. T-SQL remains the preferred (and best) way to access SQL Server data; SQL Server’s CLR integration features are there to replace extended stored procedures, not T-SQL.
It’s not what people would do in the real world, and you want to focus on real-world applications.
It would be a lot more work than simply building a client application in Visual Studio, which is what you’re actually going to do in this chapter.
Instead, you’ll look at the WSDL document that SQL Server has created for you and then build a simple Windows Forms client application using Visual Studio 2005.
Looking at the WSDL
Before you jump into Visual Studio, look at the WSDL that SQL Server has created. Make sure IIS is not running on your computer (net stop iisadmin /y), or the following exercise will not work: IIS will be vying for the same port, and on Windows XP the URL that follows will fail. I’ll cover this in more detail later. Using Internet Explorer (or the web browser of your choice), enter the following URL:

http://localhost/AWemployees?wsdl
Note that the form of this URL is the protocol you specified (HTTP), the name of your server (localhost), and the path of the endpoint, followed by the wsdl querystring parameter. Basically, you’re asking SQL Server to show you the WSDL for your new web service. When the page loads, you should see something that looks like Figure 6.4. I’ve collapsed the wsdl:types node to hide a lot of the boilerplate content common to all SQL Server native web services and make the figure more readable.
Think About What’s Going On “Underneath the Hood”!
Shortly before SQL Server 2005 was released to manufacturing, when I was helping mentor a web-based solution, one of the developers asked me why SQL Server CLR stored procedures were so slow. Since Microsoft had gone to great lengths to ensure that its CLR integration performed to the highest standards—and since my personal testing had not shown any serious performance issues, even when using the beta versions of SQL Server 2005—I was both confused and intrigued. I asked him to show me his code to see whether I could see what might be wrong.
When he shared his source files with me, the problem was obvious. He was implementing a stored procedure written in C# that called a public XML web service and then returned the output from the web service to the client—after joining it to local SQL Server database data—much as using a SELECT statement in T-SQL would do. Each call to the stored procedure could take up to five seconds to run, even though only a few dozen records were being returned. When compared to a T-SQL stored procedure that can return thousands of records in less than a second, the performance was indeed poor.
Of course, SQL Server is optimized for high-performance access to the relational data that it manages. Because physical disk input/output (I/O) is an expensive operation, SQL Server goes to great lengths to ensure that the data it needs is stored in random access memory (RAM) and supports a range of performance-boosting options such as indexes and statistics to speed things up. So, of course, when the data returned by a stored procedure is on the Internet—hundreds or thousands of times slower than accessing data from a local hard disk—the procedure’s performance was unacceptable.
How did I solve the problem? Instead of accessing the web service each time the stored procedure was called, I instead called the web service on a schedule and put the returned data in a database table. Not only did this dramatically reduce the number of times the web service was accessed, it also meant that when the stored procedure was called, it was able to access local database data. The end results were that response times dropped to nearly zero and throughput increased by orders of magnitude.
So when architecting a database solution, it is imperative to understand the technology and what is going on behind the scenes, not just the syntax of your programming language!
What’s included in a WSDL document? A WSDL document contains information about the web service, the web methods it exposes, the parameters, and their data types. As I discussed earlier, WSDL is all about describing the abstract interface of a web service, without any information about the service’s implementation. This means with this WSDL document, a client application can connect and work with your web service without knowing—or needing to know—that the underlying code is written in T-SQL, not C# or Java.
FIGURE 6.4 WSDL generated by SQL Server
Building a Client Application Using Visual Studio 2005
Fortunately, it’s rare that you’ll ever need to deal with the WSDL yourself. Even though SOAP and WSDL are at the core of web services—regardless of the technology used to create them—most modern development tools hide the details so developers don’t need to concern themselves with their intricacies. I’ll show how to use Visual Studio 2005 to do all the hard work here. So let’s start building a client application in Visual Studio 2005 by creating a Windows Forms web service client application (see Exercise 6.3).
Using Microsoft Visual Studio 2005 Express Edition
Don’t worry if you don’t have a copy of Visual Studio 2005. If you have another development tool with which you’re already comfortable, you probably already know how to access web services using that tool and can adapt this exercise to your tool’s capabilities. If not, you can download one of the Express Editions of Visual Studio 2005 for free from the Microsoft website. (As of this book’s publication, these tools are free. Microsoft is making them free for one year following their release on November 7, 2005, and they’ll probably be reasonably priced after that.) The Express Editions are simplified versions of Visual Basic .NET, Visual C# .NET, and Visual Web Developer. They’re targeted at student and hobbyist developers, and although they lack many of the advanced enterprise features that professional developers need, they should be more than adequate for what you’re doing here. Of course, there’s also a free Express Edition of SQL Server 2005 as well, and Microsoft says that this one will be free forever. You can find more information about all the Express Edition products here: http://msdn.microsoft.com/vstudio/express/.
EXERCISE 6.3
Creating a Windows Forms Web Service Client Application
1. To begin, launch Visual Studio 2005 from the Start menu.
2. Then select File > New > Project.
3. In the New Project dialog box that appears, select Visual C# and then Windows from the Project Types tree on the left, and then select Windows Application from the Templates list on the right. Then enter SQLWebServiceClient as the project name.
4. Click OK to create the project. Visual Studio will now create a new Windows Forms application project from its Windows Application template. This template will create the project structure shown here in Visual Studio’s Solution Explorer window.
5. To start with, you have places to put your application settings, references to commonly used functionality within the .NET Framework, a Program class that serves as the starting point for the application, and a default form that will display when the application is launched. You’ll be working primarily in the form in this chapter, but first you need to add a web reference to the project so your Windows application can talk to the web service you’ve just created in SQL Server.
6. To add a web reference, simply right-click the References folder in the Solution Explorer window, and choose Add Web Reference from the pop-up context menu.
7. When you do this, the Add Web Reference dialog box will appear. In the URL text box, type the URL of the web service’s WSDL (this is the same URL you used previously), and click the Go button. Visual Studio will then read in the WSDL and display information about the web service and the web methods it exposes.
8. Even though it looks different, the Add Web Reference dialog box is simply another view of the same WSDL you saw earlier in the exercise. Aren’t you glad you don’t need to read through the raw WSDL yourself?
9. Enter the name SQLWebService in the Web Reference Name text box, and click the Add Reference button to add the web reference to the project.
10. When this is done—it may take a few seconds for Visual Studio to do its work—your Solution Explorer window will appear, as shown here. I clicked the Show All Files button to show the details of what’s included in a Visual Studio web reference; if you can’t see the contents of the SQLWebService Web References folder, click this button to toggle the details.
This is what you get when you add the web reference to your project in Visual Studio:
A WSDL file that describes the web service. This is not the exact WSDL document that SQL Server generated, but it’s functionally identical.
A Reference.map file that Visual Studio uses to hook up the proxy class (see next bullet) to the previous WSDL.
A Reference.cs file that defines a .NET proxy class with which code in the Visual Studio project interacts to call the web service. When you create an instance of the web service in your C# code, you’re actually creating an instance of this proxy class. This class takes care of all of the “plumbing” involved in sending SOAP messages across the network to the web service. If you’re interested, you should look at the code inside Reference.cs. Don’t modify it—not because you might break something (any changes you make will be overwritten the next time you update the web reference)—but looking at what’s going on behind the scenes will give you a much better view of what’s really involved with creating a web service client. It’s one of those things you usually don’t need to know, but the more you know, the better prepared you’ll be when you need to troubleshoot problems.
11. Now that you have added your web reference, it’s time to set up your form to call the web service. First you’ll build the user interface. Double-click Form1.cs in Solution Explorer to display the form designer.
12. Next, drag a TextBox control, a Button control, and a DataGridView control onto the form’s design surface and arrange them so they look like the form shown here.
13. If you want, you can set control properties like the Button control’s Text property to “Get Employees” or the Dock and Anchor properties to make the form prettier when you resize it, but since this exercise is focusing on SQL Server native XML web services and not Windows Forms UI development, I won’t get into any of those details here.
14. Next, write a little code to put all the pieces together. Double-click the button in the form designer. This will open the code file for the form and create a button1_Click event procedure automatically.
15. Inside this event procedure you need to write code to pass the text from the form’s TextBox control to the web service and display the results in the form’s DataGridView control. Your code should look something like this:

private void button1_Click(object sender, EventArgs e)
{
    SQLWebService.EmployeeWebService proxy =
        new SQLWebService.EmployeeWebService();
    proxy.UseDefaultCredentials = true;
    object[] foo = proxy.ListEmployees(textBox1.Text);

    foreach (object o in foo)
    {
        MessageBox.Show(o.ToString());
    }

    DataSet employees = foo[0] as DataSet;

    dataGridView1.DataSource = employees.Tables[0];
}

I should point out a few things about this code:
The type SQLWebService.EmployeeWebService, used to declare the proxy variable, consists of two parts: SQLWebService is the namespace name, defined when you added the web reference to your project; EmployeeWebService is the class name, defined when you created your endpoint using the CREATE ENDPOINT statement. Both the namespace and the class are defined in the Reference.cs file I discussed earlier.
You need to specify security credentials in order to call this web service—remember how you told SQL Server to use Windows Authentication? You could have done this in a variety of ways, but you took the easiest route: by setting UseDefaultCredentials, you’re saying, “Use the identity of the currently logged-on user when you call the web service.”
The call to ListEmployees performs quite a few steps at once. You’re calling the ListEmployees web method, passing in the value from the TextBox control. If you examine the ListEmployees method in Visual Studio, you’ll see that it returns an array of objects. This is because a stored procedure can return any number of things, from output parameters and return values to result sets (from a SELECT statement) and XML streams (from a SELECT...FOR XML statement)—all in response to one call. Your stored procedure returns only a single result set, so you’re taking the first object in the array and turning it into a .NET DataSet object.
Finally, you set the DataSource property of your DataGridView control to the first DataTable in the DataSet. This will bind the DataGridView control to the data returned by the web service and display it to the user.
For the sake of completeness, I should point out that the object array returned by your web service actually contains three objects, not only the one you’re dealing with in your Windows client application. The first object is a System.Data.DataSet that contains the actual data. The second is a SQLWebServiceClient.SQLWebService.SqlRowCount object that contains the number of rows returned in the DataSet. The third and final object is the numeric value returned by the underlying stored procedure. In this case, it is zero, for success.
16. Now all the pieces are in place, and it’s time to see all of them work together. Let’s run the Windows client application in Visual Studio. Press F5, or click the Play button on the Standard toolbar to launch the application. When it starts, type Sales in the text box, and click the button. The results should look something like the form shown here.
17. When you’re done reviewing the data (and perhaps trying the application with different department names), you can close the application, save your changes in Visual Studio, and close Visual Studio as well.
Reviewing What You’ve Learned
So, what exactly have you accomplished here? Although the form you created in Exercise 6.3 isn’t all that pretty (OK, mine isn’t pretty—you may have taken the time to make yours a little more attractive), the work going on behind the scenes is exciting. Here are the major pieces:
You created a T-SQL stored procedure. This is the same sort of procedure you’d create if you were going to call it using ADO.NET or ODBC.
You created a SOAP endpoint in SQL Server. This new feature configures SQL Server to act as a web server, listening on TCP port 80 for incoming HTTP requests, and defines a SQL Server native XML web service that exposes the stored procedure through a web method.
You used Visual Studio 2005 to create a Windows client application that calls the SQL Server native XML web service and displays the data to the user.
And you’ve done it all without using IIS (or any web server other than SQL Server, for that matter) or TDS anywhere in your new distributed application.
The ability to expose SQL Server data without requiring TDS, IIS, or another web server opens up new possibilities for application architects and developers to design, develop, and deploy distributed enterprise software in ways that were never possible before SQL Server 2005 and native XML web services.
Reducing the Software Licensing Cost
Is it just me, or does it seem that the cost of software and licenses just keeps increasing? I’m particularly keen on the cost of SQL Server Express Edition, especially because the money you save there can go elsewhere. Why purchase the Enterprise Edition when the Express Edition (or the Express Edition with Advanced Services) will suffice? So, don’t forget to check out http://msdn.microsoft.com/vstudio/express/sql/.
Otherwise, getting back to the ranch, SQL Server native XML web services are a complementary feature to the new SQL Server Service Broker. I am currently in a proof-of-concept phase of evaluating a design for a consulting client using these two features and SQL Server Express Edition. The client needs the reliable messaging capabilities of Service Broker but needs a way to accept messages from disparate client systems while minimizing the costs of software licensing and maintenance. With native XML web services, I’m able to expose secure web methods that wrap stored procedures that interact with Service Broker, providing the functionality needed without requiring any additional software such as IIS. This reduces the cost and complexity of maintaining the systems, and SQL Server Express Edition eliminates the cost of a traditional SQL Server license. Since this system is likely to be deployed to thousands of web-facing computers, the licensing cost is a major factor, and the combination of native XML web services, Service Broker, and SQL Server Express Edition looks ideal to solve the problem at hand.
The proof of concept is progressing well, thanks for asking. By the time you read this, I’ll probably be rolling out the first phase of the project into production. Or I’ll have completed the project.
Creating and Configuring Native XML Web Services

Now that you’ve created an end-to-end solution based on SQL Server 2005 native XML web services, you’ll step back to see more of the details involved with SQL Server’s HTTP support. You’ll look at the different T-SQL statements used to create, alter, and delete endpoints; the metadata that’s created when you use these statements; and the real reason why you stopped IIS before creating your endpoint earlier in the chapter.
Using the CREATE ENDPOINT Statement

As you’ve already seen, you can use the CREATE ENDPOINT statement to create not only HTTP/SOAP endpoints but also endpoints for database mirroring, Service Broker, and SQL Server’s network communication. I’ll focus here strictly on SOAP endpoints; you can look at the CREATE ENDPOINT topic in SQL Server Books Online for information about the other uses of this statement. This is the partial syntax for creating a SOAP endpoint:

CREATE ENDPOINT endPointName [ AUTHORIZATION login ]
STATE = { STARTED | STOPPED | DISABLED }
AS HTTP (
    PATH = 'url'
    , AUTHENTICATION = ( { BASIC | DIGEST | INTEGRATED | NTLM | KERBEROS } [ ,...n ] )
    , PORTS = ( { CLEAR | SSL } [ ,...n ] )
    [ SITE = { '*' | '+' | 'webSite' } ]
    [ , CLEAR_PORT = clearPort ]
    [ , SSL_PORT = SSLPort ]
    [ , AUTH_REALM = { 'realm' | NONE } ]
    [ , DEFAULT_LOGON_DOMAIN = { 'domain' | NONE } ]
    [ , COMPRESSION = { ENABLED | DISABLED } ]
)
FOR SOAP (
    [ { WEBMETHOD [ 'namespace' . ] 'method_alias' (
          NAME = 'database.owner.name'
          [ , SCHEMA = { NONE | STANDARD | DEFAULT } ]
          [ , FORMAT = { ALL_RESULTS | ROWSETS_ONLY | NONE } ]
      ) } [ ,...n ] ]
    [ BATCHES = { ENABLED | DISABLED } ]
    [ , WSDL = { NONE | DEFAULT | 'sp_name' } ]
    [ , SESSIONS = { ENABLED | DISABLED } ]
    [ , LOGIN_TYPE = { MIXED | WINDOWS } ]
    [ , SESSION_TIMEOUT = timeoutInterval | NEVER ]
    [ , DATABASE = { 'database_name' | DEFAULT } ]
    [ , NAMESPACE = { 'namespace' | DEFAULT } ]
    [ , SCHEMA = { NONE | STANDARD } ]
    [ , CHARACTER_SET = { SQL | XML } ]
    [ , HEADER_LIMIT = int ]
)
As you can see, there are quite a few options I didn’t cover in the earlier exercises. You’ll probably notice as well that the CREATE ENDPOINT statement has two primary sections. The first section begins with the AS keyword and ends before the FOR clause; this section defines the transport protocol options. The second section begins with the FOR clause and includes the rest of the statement; this section defines the payload that is supported by the endpoint being created.
As you’re looking at the previous syntax and reading the following list of arguments, think about the options you specify when creating a virtual server or virtual directory in the IIS Microsoft Management Console (MMC) snap-in. If you’ve done much work with IIS, many of these options should look pretty familiar, because you’re basically performing a similar task with a different tool.
Here are some of the most useful and important arguments for creating SOAP endpoints:

STATE = { STARTED | STOPPED | DISABLED } This argument controls the state of the endpoint when it is created. STARTED means the endpoint is started and listening for connections. DISABLED means the server does not listen to the endpoint or respond to any requests for the endpoint. STOPPED means the server listens to requests but returns errors to clients. This is the default value.

PORTS = ( { CLEAR | SSL } [ ,...n ] ) This argument specifies one or more port types on which the endpoint will listen. If only CLEAR is specified, incoming requests must use HTTP; if only SSL is specified, incoming requests must use HTTPS. Both can be configured at the same time. As you can see in the previous syntax, you can also specify nondefault port numbers by using the CLEAR_PORT and SSL_PORT arguments.

SITE = { '*' | '+' | 'hostName' } This argument specifies the name of the host computer. Using an asterisk (the default) says that the endpoint listens on all host names for the computer that are not otherwise explicitly reserved. Using a plus sign says that the endpoint listens on all host names for the computer even if they are otherwise explicitly reserved. If you specify a specific host name, the endpoint will listen only for requests that specify that name in their HTTP Host header.
COMPRESSION = { ENABLED | DISABLED } This argument, if set to ENABLED (it is DISABLED by default), tells SQL Server to compress the response sent for a request with an HTTP header specifying GZIP as a valid “accept-encoding” value. This can significantly reduce the size of the response and improve network performance (remember—SOAP is XML, and XML compresses well) but also places additional processor load on the server.

WEBMETHOD [ 'namespace' .] 'method_alias' This argument specifies a web method to be exposed by a SOAP endpoint. You can specify multiple WEBMETHOD arguments for a single CREATE ENDPOINT statement. If you do not specify a namespace for the web method, the namespace of the endpoint will be used.

NAME = 'database.owner.name' This argument specifies the name of a stored procedure or user-defined function that provides the implementation for a web method. Because the endpoint is a server-level object, you must always specify a full three-part object name.

BATCHES = { ENABLED | DISABLED } This argument specifies whether ad hoc SQL queries are allowed by the endpoint. The default is DISABLED.
I strongly recommend never enabling the BATCHES feature unless you can guarantee that your SQL Server is on a completely trusted and secure network. Although this feature has valid uses, it introduces an avenue for anyone with HTTP access to the server (and valid authentication credentials, as I’ll discuss later in the chapter) to execute arbitrary SQL queries against the server. It’s not inherently evil, but you need to understand what you’re getting if you enable it.
LOGIN_TYPE = { MIXED | WINDOWS } This argument specifies the SQL Server authentication mode that the endpoint supports. The default is WINDOWS. You cannot use this argument to specify a less secure authentication mode than the one that was specified when SQL Server was installed. To put that another way, if you installed SQL Server in Windows Authentication mode, you cannot specify MIXED here. If you installed SQL Server in Mixed mode, however, you can restrict the endpoint to work only with Windows logins, for enhanced security. To further secure web services, if MIXED is specified, the endpoint must be configured to use SSL.

WSDL = { NONE | DEFAULT | 'sp_name' } This parameter tells SQL Server how to generate the WSDL document for the endpoint. If NONE is specified, no WSDL response is generated or returned for WSDL queries like the one you’ve used several times in this chapter. You’ve already seen the DEFAULT behavior—SQL Server will generate and return a WSDL document built from the metadata about the endpoint and its web methods. It doesn’t happen often (SQL Server Books Online uses the phrase “in exceptional cases,” which sums it up pretty well), but you may run into circumstances where the autogenerated WSDL does not meet your requirements. In these cases, you can also specify the name of a stored procedure that will return a custom WSDL document. You’ll need to implement the stored procedure yourself, and although the new SELECT...FOR XML PATH syntax in SQL Server 2005 will make it easier, it’s not something you’re going to want to do more than once.
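To see how several of these arguments fit together, here is what a complete statement might look like. This is a hypothetical sketch: the endpoint, path, and stored procedure names (ProductWebService, /AWproducts, uspGetProducts) are illustrative and are not objects created elsewhere in this chapter.

-- Hypothetical endpoint exposing one stored procedure as a web method.
CREATE ENDPOINT ProductWebService
    STATE = STARTED
AS HTTP (
    PATH = '/AWproducts',
    AUTHENTICATION = (INTEGRATED),
    PORTS = (CLEAR),
    SITE = '*'
)
FOR SOAP (
    WEBMETHOD 'GetProducts' (
        NAME = 'AdventureWorks.dbo.uspGetProducts',  -- three-part name required
        SCHEMA = STANDARD
    ),
    WSDL = DEFAULT,          -- let SQL Server autogenerate the WSDL
    BATCHES = DISABLED,      -- no ad hoc SQL over HTTP
    DATABASE = 'AdventureWorks',
    NAMESPACE = 'http://AdventureWorks/Products'
)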
As you can see, SQL Server provides a rich set of options for configuring native XML web services to work the way you need them to work. For a complete description of the CREATE ENDPOINT statement, refer to SQL Server Books Online.
Using the ALTER ENDPOINT Statement

As is typical of ALTER statements in T-SQL, the ALTER ENDPOINT statement gives you the same basic capabilities as the corresponding CREATE statement. You can do a few tasks with ALTER ENDPOINT that you cannot do with CREATE ENDPOINT, so I’ve focused on them here instead of rehashing the tasks covered previously. For SOAP endpoints, ALTER ENDPOINT supports three clauses that CREATE ENDPOINT does not:

ADD WEBMETHOD This clause adds a new web method to an existing endpoint. It supports the same arguments you saw for web methods previously.

ALTER WEBMETHOD This clause alters the definition of an existing web method on an endpoint. It too supports the same arguments you saw for web methods previously.

DROP WEBMETHOD This clause drops an existing web method from the endpoint.

You can specify each of these clauses multiple times in a single ALTER ENDPOINT statement. When you use ALTER ENDPOINT, you need to specify only those parameters you want to change. Any other properties of an existing endpoint remain the same unless you explicitly change them.
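For example, a statement along these lines, reusing the hypothetical ProductWebService endpoint sketched earlier, would add one web method and drop another in a single step, leaving every other endpoint property untouched:

-- Hypothetical: only the web methods named here change; all other
-- settings on the endpoint are preserved.
ALTER ENDPOINT ProductWebService
FOR SOAP (
    ADD WEBMETHOD 'GetProductHistory' (
        NAME = 'AdventureWorks.dbo.uspGetProductHistory'
    ),
    DROP WEBMETHOD 'GetProducts'
)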
Using the DROP ENDPOINT Statement

After the complexity of the CREATE ENDPOINT and ALTER ENDPOINT statements, the DROP ENDPOINT statement is something of an anticlimax. Of course, this is typical of T-SQL DROP statements. Here’s the complete syntax:

DROP ENDPOINT endPointName
When you execute the DROP ENDPOINT statement, SQL Server removes the endpoint. What more can you say?
Querying Endpoint Metadata

Experienced SQL Server professionals are familiar with using system tables, system stored procedures, and the information_schema views to query the metadata that SQL Server uses for its own purposes. The same techniques apply to SQL Server 2005 and native XML web services. SQL Server provides four primary catalog views related to native XML web services. These catalog views allow you to retrieve detailed information about endpoints that you have already created and altered but have not yet dropped.

sys.endpoints This catalog view contains one row for each endpoint that is created for the SQL Server instance.

sys.http_endpoints This catalog view contains one row for each endpoint created for the SQL Server instance that uses HTTP.
sys.soap_endpoints This catalog view contains one row for each endpoint for the SQL Server instance configured for a SOAP-type payload. For each row in this catalog view, there is a corresponding row in the sys.http_endpoints catalog view.

sys.endpoint_webmethods This catalog view contains one row for each web method defined on a SOAP-enabled HTTP endpoint.

These catalog views get more and more specific as the list progresses. The first, sys.endpoints, contains information about all endpoints, regardless of their type. This includes Service Broker endpoints, database-mirroring endpoints, and so on. Continuing down the list, sys.http_endpoints contains information about all HTTP endpoints, regardless of whether they are configured for SOAP, and sys.soap_endpoints contains information about just SOAP endpoints. This means to get comprehensive information about a SOAP endpoint, you may need to query multiple views. Both sys.http_endpoints and sys.soap_endpoints include the same columns that sys.endpoints contains, but sys.soap_endpoints does not include any columns from sys.http_endpoints. And, of course, sys.soap_endpoints and sys.endpoint_webmethods have a one-to-many relationship, because each web service can include multiple web methods.
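As a sketch of what such a multi-view query might look like (the column names here are taken from the Books Online descriptions of these views, so verify them against your instance), you could join the SOAP-specific view back to the HTTP view on endpoint_id to see each SOAP endpoint and its web methods in one result set:

-- One row per web method, with the transport details alongside.
SELECT h.name,
       h.site,
       h.url_path,
       h.state_desc,
       w.method_alias,
       w.object_name
FROM sys.soap_endpoints AS s
JOIN sys.http_endpoints AS h ON h.endpoint_id = s.endpoint_id
JOIN sys.endpoint_webmethods AS w ON w.endpoint_id = s.endpoint_id;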
Does the separation of information between the different endpoint catalog views look familiar? That’s right—the two sections of the CREATE ENDPOINT statement you saw earlier map closely to the information in these views. For more information about each of these catalog views, see the “Endpoints Catalog Views” topic in SQL Server Books Online.
Reserving a Namespace: Running Side-by-Side with IIS

It’s time for another exercise. At this point I’m assuming that SQL Server is still running on your computer and that IIS is still stopped. If this is not true, stop IIS (net stop iisadmin /y) and then stop and restart SQL Server. If you have SQL Server 2005 installed as the default instance of SQL Server on your computer, you can run these two commands to restart the service:

net stop mssqlserver /y
net start mssqlserver /y
If you have installed SQL Server 2005 as a named instance, you’ll have to use the service name that the SQL Server installer generates for you, like so:

net stop mssql$instancename /y
net start mssql$instancename /y
Of course, you can also use the Services MMC snap-in to restart IIS or any SQL Server instance, but performing tasks from the command line is so much quicker most of the time. Now that you’re ready, you can use Exercise 6.4 to see more about how SQL Server and IIS play together.
EXERCISE 6.4
Testing SQL Server and IIS

1. Restart IIS by running the iisreset utility from a command prompt. At this point both SQL Server and IIS are running, and both of them are web servers.

2. Browse to the SQL Server native web service you created earlier at http://localhost/AWemployees?wsdl. You should see the same WSDL document as before.

3. Stop SQL Server.

4. Restart IIS.

5. Start SQL Server. At this point—once again—both SQL Server and IIS are running, and both of them are web servers.

6. Browse to the SQL Server native web service you created earlier at http://localhost/AWemployees?wsdl. You should see a 404 error: “This page cannot be found.”
Are you going crazy? Albert Einstein once defined insanity as “doing the same thing over and over again and expecting different results.” (But don’t forget that he didn’t like Schrödinger’s cat either!) Working under the assumption that you’re not losing your mind, there must be something critically different between the steps you took to make things work and the steps you took when things did not work. Do you see what the difference is? That’s right! It’s the order in which you started the services.

Why should this matter? The answer is all about namespace reservation. Remember how both SQL Server and IIS use HTTP.SYS, the new kernel-mode HTTP driver? When a client (and to HTTP.SYS, both SQL Server and IIS are clients) wants to listen for incoming HTTP requests, it will reserve a specific namespace, such as /AWemployees, with the HTTP driver. SQL Server places this namespace reservation when you execute the CREATE ENDPOINT statement. This instructs HTTP.SYS to route any request to that specific application; creating a namespace reservation is basically committing to handle incoming requests for addresses within that namespace. If two different applications both attempt to reserve the same namespace, it’s first come, first served.

But wait, you may say, I took these steps, and everything worked just fine—what are you talking about? Well, I’ve been working under the assumption that you’re running Windows XP Service Pack 2 and not Windows 2003 Server; few people run server operating systems on their personal machines, and few people perform study exercises on their servers, so I thought it was a safe assumption. Still, if you ran these steps on a Windows 2003 Server computer, you may not have seen any error. This is because IIS 6.0 (the version included with Windows 2003 Server) uses HTTP.SYS, but IIS 5.1 (included with Windows XP) does not. Without HTTP.SYS, only one application at a time can listen on a specific IP address and port. This means when IIS starts on Windows XP, it needs to monopolize port 80 (assuming you’re using the default port configuration) and cannot share the port with any other listener, even if there is no true namespace conflict caused by multiple applications trying to listen for the same “virtual directory” address. To run IIS and SQL Server 2005 native XML web services at the same time on Windows XP, you need to configure one of the applications to listen on a different port.
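For instance, you could have SQL Server listen on a nondefault port by using the CLEAR_PORT argument. The following is a hypothetical sketch (the endpoint name and web method are illustrative); after creating it, clients would request http://localhost:8080/AWemployees?wsdl instead:

-- Listen on TCP port 8080 so IIS 5.1 can keep port 80 to itself on Windows XP.
CREATE ENDPOINT EmployeesOn8080
    STATE = STARTED
AS HTTP (
    PATH = '/AWemployees',
    AUTHENTICATION = (INTEGRATED),
    PORTS = (CLEAR),
    CLEAR_PORT = 8080,
    SITE = '*'
)
FOR SOAP (
    WEBMETHOD 'ListEmployees' (
        NAME = 'AdventureWorks.dbo.ListEmployees'
    ),
    WSDL = DEFAULT,
    DATABASE = 'AdventureWorks'
)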
With Windows 2003 Server and IIS 6.0, however, IIS is much better behaved. As long as no two listeners attempt to reserve the same namespace, any number of applications can listen on port 80. This means that as long as no virtual directory you create in IIS uses the same path as an HTTP endpoint you create in SQL Server, you shouldn’t have a problem. If you do use the same path in both places, however, you’ll have a namespace conflict. In this case, the end result is the same: HTTP.SYS works on a first-come, first-served basis.

Fortunately, you have more control over namespace reservations than what you’ve seen so far. Can you imagine having IIS and SQL Server running on the same production server and not knowing which server will start first and get access to the namespace? Neither can I. All the namespace reservations that you’ve performed or I’ve discussed so far have been implicit reservations. SQL Server will keep the namespace reserved as long as it’s running, but when SQL Server is stopped, any other application can grab the namespace. SQL Server also gives you the ability to create explicit namespace reservations so that no other application can reserve them, even if SQL Server is not running. To create an explicit namespace reservation, use the sp_reserve_http_namespace system stored procedure, like so:

EXEC sp_reserve_http_namespace N'http://servername:80/AWemployees'
To remove an explicit namespace reservation that is no longer needed, use the sp_delete_http_namespace_reservation system stored procedure, like so:

EXEC sp_delete_http_namespace_reservation N'http://servername:80/AWemployees'
In addition to managing namespace reservations using these system stored procedures, you can also use the Httpcfg.exe utility that’s included with the Windows 2003 Server Support Tools.

It’s also worth noting that there’s another important reason to explicitly reserve namespaces with HTTP.SYS: security. To reserve a namespace with HTTP.SYS—either implicitly or explicitly—you must be a member of the local Administrators group on the computer where SQL Server is running. (If you’re connecting to SQL Server using Windows Authentication, SQL Server will impersonate your Windows account when connecting to HTTP.SYS to make the reservation. If you’re connecting using SQL Server authentication, SQL Server connects to HTTP.SYS using the service account configured at installation.) This means that unless a namespace is already reserved by SQL Server, no one can run the CREATE ENDPOINT statement and specify the path of that namespace unless they are an administrator on the SQL Server computer, regardless of the SQL Server–specific permissions they have been granted.

Since this doesn’t follow the principle of least privilege, it’s a better practice to have administrators explicitly reserve the endpoints with HTTP.SYS and then have the database administrator (DBA) create the endpoints in SQL Server. The end product may be the same, but the process will be much more secure. And that brings me nicely to the next topic….
Implementing Security with Native XML Web Services

No discussion of web services—especially web services that expose your valuable business data—would be complete without covering security. Two of the core concepts of application security are authentication and authorization, so make sure you understand these concepts.
Although the “Building Secure ASP.NET Applications: Authentication, Authorization, and Secure Communication” guide is written specifically to help developers build secure ASP.NET applications, many of the same concepts apply to SQL Server native XML web services as well. You can find it online at the Microsoft Developer Network website at http://msdn.microsoft.com/library/ default.asp?url=/library/en-us/dnnetsec/html/secnetlpMSDN.asp.
If you’ve been working with SQL Server for a while, these terms should sound familiar to you, even if you’ve never seen them explicitly defined before. Just as SQL Server native XML web services combine features of a database server and a web server, SQL Server’s security model has expanded to cover both worlds as well. In the following sections, you’ll see how SQL Server 2005 handles authentication and authorization for native XML web services.
Implementing Authorization

In this section, you’ll look at who can actually run the CREATE ENDPOINT, ALTER ENDPOINT, and DROP ENDPOINT statements you saw earlier in the chapter. Unlike previous versions of SQL Server, where server-level permissions were handled largely by the built-in fixed server roles, SQL Server 2005 has specific and granular permissions to define who can perform specific tasks. The built-in roles still exist, but now they’re containers for permissions you can grant individually instead of being the “magical” things they were in SQL Server 2000.

So, how do you grant permissions to manage SQL Server endpoints? You do this with the same T-SQL statement you use to grant permissions for any other type of object. Remember, you looked at security in more detail in Chapter 4, including how the GRANT statement works. (If you’re reading these chapters out of sequence, you might want to quickly peek at Chapter 4 to see how object and statement permissions work.) Just as you would grant permission to a database user to create tables by using this code:

USE AdventureWorks;
GO
GRANT CREATE TABLE TO DatabaseUserName;
GO
you would also grant permission to a SQL Server login to create endpoints by using this code:

USE master;
GO
GRANT CREATE ENDPOINT TO LoginName;
GO
Even though the syntax is similar, it has a few important differences:

Database context When you grant database-level permissions such as CREATE TABLE, you need to be in the context of that database. When you grant server-level permissions such as CREATE ENDPOINT, you must be in the context of the master database.

Authorized principal When you grant database-level permissions such as CREATE TABLE, you grant those permissions to a database-level security principal such as a database user or database role. When you grant server-level permissions such as CREATE ENDPOINT, you grant those permissions to a server-level security principal such as a login.

The syntax for granting CREATE ENDPOINT permissions is pretty straightforward. Granting permissions on a specific endpoint—while still simple and consistent to a degree—can be a little more complex. For example, this is the syntax you’d use to grant a login permission to connect to the endpoint you created earlier:

USE master;
GO
GRANT CONNECT ON ENDPOINT::EmployeeWebService TO LoginName;
GO
As with granting the ability to create endpoints, granting permissions on an endpoint is like granting permissions on a table but with a few notable exceptions:

Database context As you’ve seen, endpoints are server-level objects, so you need to be in the master database to assign permissions on them.

Authorized principal It’s the same here too—you’re granting permissions to logins and not database users because endpoints are server-level objects and not database-level objects.
Double colon qualifier Even though Microsoft introduced the double colon with SQL Server 2000 to distinguish built-in system functions from user-defined functions with the same name, it’s not something you see every day. Whenever you’re granting permissions on an endpoint, you need to qualify the endpoint to ensure that SQL Server knows exactly what you mean.

The following is the partial syntax for granting permissions on an endpoint:

GRANT permission [ ,...n ]
    ON ENDPOINT::endpoint_name
    TO <server_principal> [ ,...n ]
    [ WITH GRANT OPTION ]
    [ AS SQL_Server_login ]
Implementing Authentication

So far you’ve seen how to authorize access for web services in SQL Server—how to create and manage them and how to connect to them. The remaining piece of the security puzzle you have yet to put in place is authentication—how SQL Server knows who you are when you try to connect so it can decide whether you’re authorized to do so. When you’re connecting to SQL Server using TDS (remember the Tabular Data Stream protocol?), authentication is straightforward—TDS has authentication built in, and any traditional SQL Server client can take advantage of the protocol’s authentication support. When you’re connecting to SQL Server using HTTP, the process gets a little more complicated. HTTP also has authentication support built in, but unlike TDS, HTTP was not designed specifically to provide secure access to database data, so there’s a bit more work involved. As you saw when you looked at the CREATE ENDPOINT syntax, endpoint authentication has five built-in options:

Basic authentication Basic authentication is part of the HTTP 1.1 standard and must be implemented by any software that supports HTTP 1.1. This is probably the only reason SQL Server supports it, because basic authentication is limited and not particularly secure. When using basic authentication, an HTTP endpoint must use SSL; because basic authentication sends the username and password in base64 encoding (which is practically clear text and can be easily decoded by any motivated party), SQL Server does not allow it to be used in conjunction with the PORTS = (CLEAR) setting. In addition, basic authentication can be used only when the HTTP credentials map to a Windows login based on a local user account. Basic authentication cannot be used with SQL Server logins or Windows logins based on domain user accounts.

Digest authentication Digest authentication is also part of the HTTP 1.1 standard, and although it’s more secure than basic authentication (it uses an MD5 hash of the password), it’s still not ideal for SQL Server native web services. Digest authentication can be used only when the HTTP credentials map to a Windows login based on a domain user account. Digest authentication cannot be used with SQL Server logins or Windows logins based on local computer user accounts.

NTLM authentication NTLM authentication (NTLM stands for NT LAN Manager, which should give you some feel for its history) is the authentication mechanism supported by Windows 95, Windows 98, and Windows NT 4.0. It provides a stronger authentication mechanism than either basic or digest authentication.

Kerberos authentication Kerberos authentication is based on the Internet-standard Kerberos authentication protocol. Kerberos is supported by Windows 2000 and newer and provides the strongest available authentication option for SQL Server native XML web services.

Integrated authentication The integrated authentication option supports both NTLM and Kerberos authentication. An endpoint configured to use integrated authentication will respond to clients who attempt to connect using either NTLM or Kerberos. It’s important to note that no “fallback” process is involved in integrated authentication. If a client attempts to connect using one authentication mechanism and authentication fails, the server will not fall back to the other authentication mechanism. Instead, the connection will be terminated, and the client will be unable to connect.
Implementing Security Best Practices for Native XML Web Services

You’ve seen some of the main security options; now you’ll learn how to apply them to your web services. When you’re building and deploying native XML web services, you should do the following (a sketch combining several of these recommendations follows the list):
Use Kerberos authentication whenever possible. This provides the best possible authentication security.
Use SSL when exchanging sensitive data. No matter how secure your authentication mechanism is, if you’re transmitting your information in clear text, it will be vulnerable if intercepted.
Only grant connect permissions on endpoint connections to specific users or groups. This is one way you can apply the principle of least privilege to native XML web services.
Use a firewall to secure SQL Server. If you’re exposing only web services, only have port 80 or port 443 open on the firewall.
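As a sketch of how several of these recommendations combine in practice (the endpoint, path, and login names here are hypothetical), you might create an endpoint that accepts only Kerberos authentication over SSL and then grant CONNECT to a single Windows group:

-- Hypothetical hardened endpoint: Kerberos only, HTTPS only.
CREATE ENDPOINT SecureEmployeeService
    STATE = STARTED
AS HTTP (
    PATH = '/secure/AWemployees',
    AUTHENTICATION = (KERBEROS),  -- strongest authentication option
    PORTS = (SSL)                 -- sensitive data never travels in clear text
)
FOR SOAP (
    WEBMETHOD 'ListEmployees' (
        NAME = 'AdventureWorks.dbo.ListEmployees'
    ),
    DATABASE = 'AdventureWorks'
);
GO
-- Grant CONNECT narrowly, not to public; assumes the Windows group
-- already exists as a SQL Server login.
GRANT CONNECT ON ENDPOINT::SecureEmployeeService TO [DOMAIN\WebServiceClients];
GO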
For additional security resources for developers, visit the Microsoft Security Developer Center at http://msdn.microsoft.com/security/. For additional security resources for information technology (IT) pros, visit the TechNet Security Center at http://www.microsoft.com/technet/security/.
Using Alternative Technologies for SQL Server and Web Services

Of course, developers had been building XML- and web service–based applications using SQL Server for years before Microsoft added native XML web services to SQL Server 2005. In the following sections, you’ll quickly compare SQL Server 2005 native XML web services to two alternatives—SQL Server’s previous offering, SQLXML, and ASP.NET ASMX web services.
Native XML Web Services vs. SQLXML

According to the SQLXML home page on the Microsoft Developer Network website, SQLXML is a set of technologies that “enables XML support for SQL Server 2000, bridging the gap between XML and relational data. You can create an XML view of your existing relational data and work with it as if it were an XML file.” SQLXML was added to SQL Server 2000 shortly after its release and has been updated several times to offer feature and performance enhancements as XML technologies have matured. SOAP support was added to SQLXML in SQLXML 3.0, but most of SQLXML’s features are built on proprietary XML implementations created before the SOAP and WSDL specifications were completed.
SQLXML is implemented as an add-on for IIS and requires IIS to run. It has a custom MMC snap-in administration tool that enables administrators to create virtual directories that map to SQL Server objects, and it has a COM-based object model for doing the same, but it is generally a much more complex technology to manage than native XML web services in SQL Server 2005. In addition, native XML web services perform much better than SQLXML because of their simplified application stack and their reliance on the kernel-mode HTTP driver, HTTP.SYS. As a general rule, although SQLXML is still a valid and supported technology, it should not be considered a viable option for new development.
Native XML Web Services vs. ASMX Web Services

Although SQLXML may not be a viable technology on which to build XML web services today, “ASMX” web services built using ASP.NET definitely are. ASP.NET has provided a rich, compiled, high-performance, object-oriented programming model for building web services for nearly five years, and with the release of ASP.NET 2.0, the platform has only gotten better. With native support for SQL Server provided by ADO.NET and the System.Data.SqlClient namespace, ASMX web services are an excellent choice for building web services that access data in SQL Server 2005 databases. When you’re choosing between ASMX web services and SQL Server 2005 native XML web services, you should consider many factors. You should probably choose ASMX web services when the following is true:
You already have ASP.NET applications and ASMX web services in your environment.
You need rich data caching to improve data access performance.
You need complex, computation-intensive application logic in your web services.
You need to be able to scale your web server in a server farm deployment.

You should probably choose native XML web services when the following is true:
Your web services are data bound. A canonical example of this scenario is a function that produces small amounts of data but processes large volumes of data to produce its results.
You already have SQL Server in your environment but do not have IIS.
You need only XML web services, not any of the other capabilities of a full-featured web server such as IIS.
Your corporate IT policy does not permit IIS in the environment where you need to implement web services.
In other circumstances, the decision may not be quite as clear-cut. Things you’ll need to take into account will include the technology plan of your company, the skills and experience of your development and administration teams, and other factors that are difficult to generalize. It’s also important to note the emphasis placed on the word probably before both of these lists. For many of these bullet items I could come up with cases where the contraindicated technology could be used. For example, with SQL Server 2005’s CLR integration, you could write
a stored procedure using C# or another .NET managed language and achieve the same superior performance for complex, computation-intensive web services as you would get using .NET code in ASMX web services. Also, it is possible to use SQL Server Service Broker and event notifications to distribute data to different SQL Server computers (perhaps running SQL Server Express Edition for cost effectiveness) for a scale-out scenario.
BI in NYC

Over the past few years I’ve helped design, build, and enhance a business intelligence (BI) platform for a consulting client in New York. The existing platform is based on .NET 1.1 and SQL Server 2000, with a business tier implemented as a set of ASMX web services, data access implemented in SQL Server stored procedures, and several user interfaces, including web, Windows, and Pocket PC client applications.

When I started evaluating the benefits of upgrading to .NET 2.0 and SQL Server 2005, I considered removing the ASMX middle tier and replacing it with a set of native XML web services in the database. The existing web services were relatively thin, with little logic, and mainly wrapped the underlying SQL Server stored procedures, so porting them to SQL Server should not have been too complex.

I decided to stay with ASMX web services, however, for two primary reasons. First, the existing web services use ASP.NET caching extensively to reduce the load on the database servers, and putting everything in the database would eliminate this benefit. Second, I could more easily scale out the web services tier when it is hosted in IIS using application load balancing. Scaling out the data tier would be more complex and involved.
Most importantly, understand both the specific needs of the application and the capabilities of the technologies you’re considering—if you understand one but not the other, it’s difficult to make the best decisions. This brings me to the last topic of the chapter: best practices.
Implementing Best Practices for Native XML Web Services

No discussion of a new technology would be complete without a discussion of best practices. Understanding when to use and when to avoid SQL Server native XML web services will help ensure that regardless of the technologies you choose, your project will be a success.
First, where are SQL Server native XML web services a good fit? Here are some of the scenarios for which SQL Server native XML web services are ideally suited:

Data-centric applications If your application doesn’t revolve around the data in a SQL Server database, building web services into the database probably doesn’t make a lot of sense. If you already have a library of stored procedures in your database, exposing them through web services can be an easy way to extend the reach of your data.

XML-enabled applications If you already have application components that produce or consume XML data, integrating SQL Server native XML web services into the mix will be easier, and they will fit more logically into the overall application design.

Applications with distributed and/or heterogeneous clients SQL Server native XML web services are a perfect way to expose SQL Server data to distributed clients (TDS does not often work well through firewalls) or to clients that run on non-Windows platforms. If you have a homogeneous client base where all client applications are running Windows on a local area network, implementing web services (regardless of the tools you select to build the web services) probably isn’t the best choice. You’ll be paying all the costs of communicating with XML—including a much larger network payload and the processor overhead of serializing and deserializing the XML—while reaping none of the rewards.

Next, where should you not use SQL Server native XML web services? Some of these can be inferred from the previous list, but some should be stated explicitly:

Applications where performance is key Remember, XML web services are all about interoperability and accessibility, not about raw performance. If response times are mission critical, binary network protocols are your friends.

BLOB-heavy applications If your application inserts or returns TEXT or IMAGE data, the overhead of encoding these values for XML can be significant. Of course, if you’re just manipulating Binary Large Object (BLOB) data within the database, the choice of data access and network technologies isn’t important; however, if you’re passing BLOB data between tiers, you should avoid web services.

Of course, the real world isn’t always a best-practice place. For example, sometimes you might deal with large binary messages, but you need to send them between heterogeneous systems. XML web services may be the only technology that solves the cross-platform communication issue, but performance suffers as a result. To compensate, you may need to add network or hardware resources to achieve acceptable performance. Just remember: best practices are tools to help you select the correct tools for the job, but your final decision should be based on the results of your own testing for your specific environment and requirements.
Summary

You learned about the history of how distributed application technologies have evolved to bring XML web services into the mainstream. You then saw how SQL Server 2005 implements its native XML web services using HTTP.SYS on Windows 2003 Server and Windows XP Service Pack 2. You then applied this information to build both a native XML web service using T-SQL and a Windows Forms client application using Visual Studio 2005 and C#. Next you drilled down into the details of the CREATE ENDPOINT, ALTER ENDPOINT, and DROP ENDPOINT T-SQL statements and the new SQL Server metadata that these statements manipulate. I finished the coverage of managing native XML web services with a look at how SQL Server 2005 interacts with HTTP.SYS to reserve namespaces with other web servers such as IIS. You then examined the security options and capabilities of native XML web services, including authentication, authorization, and security best practices for implementing secure XML web services with SQL Server. Finally, you learned about different technologies for implementing XML web services to provide access to SQL Server data and best practices for selecting and using (or avoiding) web services in different scenarios.
Exam Essentials

Understand TDS and HTTP. It is essential to understand the design problems to which each protocol is well suited and how to select the right one to solve the problems in a given business scenario.

Understand SOAP and WSDL. These two XML dialects form the foundation of XML web services, and you must know the role that each one plays in a web services solution.

Understand the role of HTTP.SYS and the operating systems on which it is available. This kernel-mode driver is a key enabler of SQL Server 2005 native web services, and you need to know when it is available and what it does.

Understand implicit and explicit namespace reservation and when each one takes place. SQL Server gives you different options for interacting with HTTP.SYS—understand what each one does and how to do each one.

Understand authentication and authorization. Security is a vital aspect of web service applications; be certain you know the security options that native XML web services support, how they relate, and when to use each one.

Understand when you should and shouldn’t use XML web services. This is probably the most important thing to know for the exam. XML web services are an appropriate technology to solve certain problems, but for other problems they are not the correct choice.
Review Questions

1. You are developing a web service that rolls up data from other web services online. This web service will be called regularly by a variety of applications, must implement its own security model, and must have flexible caching support. Which web service technology should you choose?

A. SQL Server native web services and T-SQL stored procedures
B. SQL Server native web services and SQL CLR stored procedures
C. ASP.NET ASMX web services
D. SQLXML
2. You need to add a new web method to an existing SQL Server native XML web service. You need to perform this task using the least administrative effort and ensure that existing permissions assigned on the web service are maintained. What T-SQL command should you use?

A. ALTER ENDPOINT...ALTER WEBMETHOD
B. ALTER ENDPOINT...ADD WEBMETHOD
C. CREATE ENDPOINT
D. DROP ENDPOINT, followed by CREATE ENDPOINT
3. You need to expose data stored in a SQL Server database to a variety of client applications, including Java, Macintosh, Linux, and mainframe clients. What technology or technologies should you select?

A. ASP.NET ASMX web services and ADO.NET
B. .NET Remoting and ADO.NET
C. SQL Server native XML web services
D. SQLXML
4. What operating system component is required so that SQL Server can act as a web server and provide native XML web services?

A. HTTP.SYS
B. INETINFO.EXE
C. UMS.DLL
D. DEVENV.EXE
5. On what operating systems can SQL Server 2005 act as a web server and provide native XML web services?

A. Windows XP Service Pack 1 or higher
B. Windows XP Service Pack 2 or higher
C. Windows 2003 Server
D. Windows 2000 Service Pack 4 or higher
6. You are using SQL Server 2005 to build an application. You need to support a large number of Windows clients making high volumes of requests for data. The data includes large images stored in the database. You need to ensure that your application has the best performance. What should you do?

A. Use native XML web services and T-SQL stored procedures.
B. Use native XML web services and SQL CLR stored procedures.
C. Use ADO.NET and T-SQL stored procedures.
D. Use SQLXML.
7. What standard is used to inform client applications of the functionality provided by an XML web service?

A. SOAP
B. HTTP
C. UDDI
D. WSDL
8. You are developing an application that uses SQL Server native XML web services. During development you discover that sometimes your web services work; at other times, you receive 404 “File Not Found” errors when testing. Your development workstation is running Windows XP Professional with IIS installed. What is the most likely cause of the errors?

A. SQL Server 2005 and IIS cannot be installed on the same machine.
B. Your version of Windows does not support HTTP.SYS.
C. IIS must be stopped so that SQL Server can listen on the necessary port for client requests.
D. You have not granted the correct permissions on the endpoint.
9. You are developing an application using SQL Server 2005. You have decided that web services are not appropriate for your application. What protocol must all client applications support in order to connect to SQL Server without using native XML web services?

A. TDS
B. TCP/IP
C. HTTP
D. SOAP
10. What is the relationship between endpoints and web methods?

A. An endpoint always contains exactly one web method.
B. An endpoint contains one or more web methods.
C. A web method always contains exactly one endpoint.
D. A web method contains one or more endpoints.

11. What is the default state of a new SQL Server endpoint?

A. Started
B. Stopped
C. Disabled
D. Running
12. You need to write a query that returns information only about each XML web service on your SQL Server 2005 instance. What catalog view should you query?

A. sys.endpoints
B. sys.http_endpoints
C. sys.soap_endpoints
D. sys.endpoint_webmethods

13. In what database are endpoints defined?

A. master
B. model
C. msdb
D. distribution
E. Each user database

14. What security concept is used to determine the identity of users connecting to a SQL Server native XML web service?

A. Authentication
B. Authorization
C. Permissions
D. Ownership chains

15. What are some of the advantages of web services over other distributed application protocols such as DCOM, CORBA, and RMI?

A. Support for open standards
B. Performance
C. Cross-platform compatibility
D. Network topology independence
E. Language independence

16. What standards body has published and recommended the specifications for SOAP and WSDL?

A. IEEE
B. ANSI
C. ISO
D. W3C

17. What are some of the existing standards on which web service standards are based?

A. XML
B. XSD
C. TCP/IP
D. CLS
E. XSLT
18. What types of endpoints does SQL Server 2005 support?

A. T-SQL
B. Database replication
C. SOAP
D. Service Broker

19. Which of the following are valid values for the NAME parameter when specifying a WEBMETHOD in a CREATE ENDPOINT or ALTER ENDPOINT statement?

A. Servername.AdventureWorks.dbo.ListEmployees
B. AdventureWorks.dbo.ListEmployees
C. dbo.ListEmployees
D. ListEmployees

20. With what development tools can you build client applications to access SQL Server native XML web services?

A. Visual Studio 2005
B. Visual Studio .NET
C. Visual Basic 6
D. Borland Delphi
E. Eclipse
Answers to Review Questions

1. C. This scenario is a poor match for SQL Server native XML web services, because it is not a data-centric application. ASP.NET ASMX web services, however, are ideal for this type of web service. ASMX web services also support flexible authentication and authorization models and have a robust caching system built in.
2. B. The correct way to add a new web method to a native XML web service is to use the ALTER ENDPOINT statement with the ADD WEBMETHOD clause. Using the ALTER WEBMETHOD clause of the ALTER ENDPOINT statement will not create a new web method; it will change the settings of an existing web method instead. The CREATE ENDPOINT statement will create a new XML web service, not a new web method. And although dropping and re-creating an endpoint (and specifying the new web method when re-creating it) will allow you to add the new method, it will take additional steps, and any permissions assigned on the endpoint will be lost when it is dropped.
3. C. This is an ideal scenario for using SQL Server native XML web services, because the services are data-centric and need to be exposed to a variety of clients. ASMX web services could work but would involve additional work and a separate web server. .NET Remoting is an alternative technology for building distributed applications using the .NET Framework, but it does not provide any cross-platform support. Finally, although SQLXML is based on XML, it is not built on any cross-platform standards and will not provide the required client access.
4. A. HTTP.SYS is the kernel-mode driver that provides the necessary functionality for SQL Server to act as a web server. INETINFO.EXE is a process used by IIS. UMS.DLL is the SQL Server user-mode thread scheduler (and it’s no longer included with SQL Server 2005!), and DEVENV.EXE is the process for Visual Studio.
5. B, C. HTTP.SYS is included only with Windows 2003 Server and Windows XP Service Pack 2 or higher. Although SQL Server 2005 can run on other operating systems, it cannot act as a web server.
6. C. Using any XML-based protocol for transmitting binary data such as images introduces significant overhead into an application and should be avoided when possible.
7. D. WSDL is used to describe web services and inform clients of their capabilities. SOAP is used for communication between web services and their clients. The Universal Description, Discovery, and Integration (UDDI) protocol is used to locate web services. You can use HTTP to transfer SOAP packets.
8. C. This type of behavior is most often seen when IIS and SQL Server are installed side-by-side on Windows XP. Whichever service starts first will prevent the other from listening to HTTP requests on port 80. By stopping IIS, you can ensure that SQL Server will be able to respond to client requests. Despite this, it is perfectly valid—and common—to have IIS and SQL Server installed on the same computer. Because SQL Server is able to respond to HTTP clients on some occasions, HTTP.SYS must be available. Inadequate permissions would cause a security-specific error to be raised; they would not return a 404 error.
9. A. To communicate with SQL Server without using XML web services, client applications must support the TDS protocol. This is usually provided by a data access library such as ODBC, OLE DB, or ADO.NET. Although TDS can operate on a TCP/IP network, SQL Server can run TDS over other network protocols such as NetBEUI if necessary. HTTP and SOAP are both used for XML web services.
10. B. Web methods are contained within endpoints, and each endpoint can expose any number of web methods.

11. B. Unless the CREATE ENDPOINT statement specifies a different value, new endpoints are always created in the Stopped state for security reasons.

12. C. The sys.soap_endpoints catalog view contains one record for each XML web service. The sys.endpoints and sys.http_endpoints catalog views contain information about other endpoints in addition to SOAP endpoints. The sys.endpoint_webmethods catalog view contains information about web methods, not web services.

13. A. All endpoints are instance-level objects and are defined in the master database.

14. A. Authentication is the process of determining a user’s identity. Authorization is the process of determining whether an authenticated user can perform a specific task. Permissions are tools used to define authorization rules. Ownership chains are logical relationships between database objects that access each other and determine—in part—when permissions are checked.

15. A, C, E. Standardization and compatibility are two of the core benefits of using web services, as is the ability to use the language of your choice to implement an XML web service. Generally, web services do not offer the same raw performance as binary remoting protocols. Although web services can run on different network topologies such as Ethernet and Token Ring, so can other protocols.

16. D. The World Wide Web Consortium (W3C) is responsible for SOAP and WSDL as well as other web service standards.

17. A, B, C. XML web services build upon XML, XSD, and TCP/IP standards. Although you can develop XML web services using languages implemented using the Common Language Specification, XML web services have no language restrictions. Although you can use XSLT in your XML web service implementation, there is no dependency of XML web services on XSLT either.

18. A, C, D. SQL Server 2005 supports SOAP, T-SQL, database-mirroring, and Service Broker endpoints; only SOAP endpoints are used for XML web services. SQL Server database replication does not have its own endpoints.

19. B. When specifying the name of a database object to be exposed through a web method, you must use a three-part object name, specifying the database, the schema, and the object name.

20. A, B, C, D, and E. All modern development tools (and Visual Basic 6 as well, using the Visual Basic 6 SOAP Toolkit) support accessing web services, regardless of the tools or technologies used to build the web services.
Chapter 7

Designing Messaging Services for a Database Solution

MICROSOFT EXAM OBJECTIVES COVERED IN THIS CHAPTER:

Select and design SQL Server services to support business needs.
Design a Notification Services solution to notify users.
Design a Service Broker solution for asynchronous database applications.
Design a SQL Database Mail solution.
Design data distribution.
Design a SQL Database Mail solution for distributing data.
Specify a Notification Services solution for distributing data.
Develop applications for Notification Services.
Create Notification Services configuration and application files.
Configure Notification Services instances.
Define Notification Services events and event providers.
Configure the Notification Services generator.
Configure the Notification Services distributor.
Test the Notification Services application.
Create subscriptions.
Optimize Notification Services.
Design objects that perform actions.
Design Service Broker applications.
With the introduction of SQL Server 2005, Microsoft has transformed SQL Server from a simple database server into a fully featured and functional application server. SQL Server 2005 contains many features that allow application developers to develop robust, enterprise-class applications without writing a lot of infrastructure code. This is good news for developers but can be a headache for database administrators (DBAs). The application server features included with SQL Server 2005 that are covered in this chapter include the following:

SQL Server Service Broker Service Broker is a messaging platform that allows developers to build scalable distributed applications by using asynchronous message queues.

SQL Server Notification Services (SSNS) SSNS was included as an extra download with SQL Server 2000 but has been greatly enhanced in SQL Server 2005. SSNS provides a framework for notifying someone or something when a specific event happens within the data that SSNS is monitoring.

SQL Server Database Mail Database Mail is the SQL Server 2005 replacement for SQL Mail. Database Mail is an application that allows developers to send email from within the SQL Server database engine. Unlike its predecessor, Database Mail does not require any interaction with the Messaging Application Programming Interface (MAPI) and uses only standard Internet-based Simple Mail Transfer Protocol (SMTP) messaging.
During the SQL Server 2005 beta phase, when SQL Server was known as Yukon, Database Mail was referred to as iMail. You will still find some literature referring to it using the old moniker.
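Once a Database Mail profile and SMTP account have been configured, sending a message is a single stored procedure call, as in this minimal sketch; the profile name and addresses are assumptions for illustration:

-- Assumes a Database Mail profile named 'DefaultProfile' already exists.
EXEC msdb.dbo.sp_send_dbmail
    @profile_name = 'DefaultProfile',
    @recipients   = 'dba@example.com',
    @subject      = 'Nightly load complete',
    @body         = 'The nightly data load finished without errors.';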
When Microsoft decided to include these features with SQL Server 2005, it knew it was moving in a new direction. Typically, SQL Servers are thought of as simple database servers and are managed by DBAs who tend to “own” the database. An argument can be made that these features do not belong on the same server that houses SQL Server, but considering the industry move toward more “service orientation,” these services do make sense from an overall enterprise application perspective. Understanding service-oriented architecture is the first step in being able to effectively write and deploy applications using the new SQL Server 2005 application architecture.
Understanding Service-Oriented Architecture

Service-oriented architecture (SOA) is a relatively new term in the software development industry, but frankly it is not a new technology. Simply put, SOA refers to a collection of services that are well-defined and self-contained. These services communicate with one another to perform some function.

In the early days of software development, issues started to arise that led to a lot of research about how to more effectively build applications. As developers realized their applications were becoming more and more complex, they tried to simplify their code by using more modular design techniques. This allowed developers to maintain commonly reused code libraries (sometimes called code macros) and enable a type of code reuse through these libraries. Unfortunately, this led to a lot of cutting and pasting of code and ultimately ended up in a maintenance nightmare for fairly large applications when bugs were discovered and applications had to be patched in many places. These realizations led to more “component-based” application development techniques.

SOA has its roots in some early technologies developed by Microsoft and others. As systems became more and more connected, it was obvious to developers they needed some way to distribute applications across multiple machines. Microsoft took the approach of the Distributed Component Object Model (DCOM). The “open source” industry took the approach of the Common Object Request Broker Architecture (CORBA). Both of these technologies allow developers to write applications that work together across common networking protocols and boundaries.

The shift to more component-based software development did indeed help solve the problem of code reuse and solved some of the maintenance issues; however, it did nothing to address the fact that not everyone used the same application development platform or even used compatible operating systems. This quickly led to the integration question: how can application developers build applications that other applications can use, independent of the underlying architecture of the machine on which the applications are running?

SOA is an attempt to address the problem of component compatibility across application and network boundaries. Using industry-standard communication protocols and web services infrastructure, SOA-based applications can communicate across protocol, network, application, and language boundaries.
Understanding the Components of SOAs

To fully understand how SOA fits into modern application development practices, you must first understand the various components of SOAs:

Service Simply defined, a service is an exposed piece of functionality that provides a platform-independent interface contract, can be dynamically invoked as necessary, and maintains its own state information.
Message Service providers and service consumers communicate with one another through messages. The message is defined through the service’s interface contract and is generally implemented through Extensible Markup Language (XML).

Discovery True SOA requires that services can be dynamically discovered as they become available. Early forms of SOA discovery included technology such as Universal Description, Discovery, and Integration (UDDI). Many disputes have surrounded UDDI technologies, so this area of SOA is still very much in flux.

When you look at the definition of SOA and examine the requirements for something to be an SOA, you will quickly note that the application development features included with SQL Server 2005 fit into the SOA story quite well.
To learn more about SOA in the Microsoft world, refer to some of the patterns and practices on the MSDN website at http://msdn.microsoft.com/practices.
Understanding SOA in the Microsoft World

With the release of SQL Server 2005 and Visual Studio 2005, Microsoft has fully embraced the SOA application development world. Many built-in features of both products help lead developers into SOA-based development. For example, Visual Studio 2005 introduced the Application Designer, which models the components of a service-oriented solution and how the components interact with one another. Visual Studio 2005 also introduced the Logical Datacenter Designer, which allows developers to model the machines where services will be deployed and their security zones. Both of these tools use the new System Definition Model (SDM) that Microsoft is pushing as part of the Dynamic Systems Initiative, which provides a common XML schema for describing software components, computer hardware, application interaction, and networking components. These tools allow systems architects, application architects, and software developers to work together to effectively deploy new SOA-based applications.

SQL Server 2005 includes components that allow SOA-based applications to interact directly with the database engine. Native XML storage, XQuery, result sets in XML, and the ability to expose HTTP endpoints directly at the database level are some of the components integrated directly into the engine. Couple these with external components of SQL Server 2005 such as Service Broker and Notification Services, and you have an extremely compelling SOA story to tell, with SQL Server at its core.
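To make the HTTP endpoint capability concrete, here is a hedged sketch of exposing a stored procedure as a SOAP web method. The endpoint, path, site, namespace, and procedure names (including usp_GetProducts) are hypothetical; only the CREATE ENDPOINT statement itself is standard SQL Server 2005 syntax:

CREATE ENDPOINT GetProductsEndpoint
    STATE = STARTED
AS HTTP (
    PATH = '/sql/products',
    AUTHENTICATION = (INTEGRATED),
    PORTS = (CLEAR),
    SITE = 'MYSERVER'
)
FOR SOAP (
    WEBMETHOD 'GetProducts'
        (NAME = 'AdventureWorks.dbo.usp_GetProducts'),
    WSDL = DEFAULT,
    DATABASE = 'AdventureWorks',
    NAMESPACE = 'http://AdventureWorks/Products'
);

With an endpoint like this started, any SOAP-capable client can call GetProducts and retrieve its result set as XML, which is exactly the kind of platform-independent interface contract SOA calls for.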
Designing Service Broker Solutions for Asynchronous Applications

As mentioned, Service Broker is integrated into SQL Server 2005 and allows developers to create secure, scalable, and reliable applications based on a message-queuing infrastructure.
Service Broker is built upon the concept of message-based communication organized into conversations. A conversation is a collection of related messages organized in a specific order. Within each conversation, Service Broker guarantees that an application receives each message exactly once and in the order in which it was meant to be received.

Service Broker is similar in function to the Russian Postal Service. To hold a conversation with a distant person, you can communicate through a series of letters. Each letter represents a distinct message, and the entire group of letters represents the conversation. When sending letters via the post office, you do not “sit and wait” for the letter to be received and replied to; you go about other business until the response arrives. This is the asynchronous nature of message-based communication. Extending the analogy, the address on each letter represents the service the message is delivered to, and the mailbox represents the queue for that service. Applications that use the Service Broker infrastructure are truly asynchronous; they do not need to know when the receiver reads their message, and they don’t “care” about the underlying process of moving the message from service to service.

The Service Broker architecture consists of the following components:

Conversation components Conversation groups, conversations, and messages form the runtime structure of Service Broker. Applications exchange messages as part of a conversation, and each conversation is between exactly two participants.

Service definition components The service definition components are design-time components that define the basic conversation structure the application uses: the message types, the conversation flow, and the database storage for the application.

Networking and security components These components define the infrastructure needed to handle conversations outside the SQL Server instance, such as communicating with a Service Broker instance on a customer’s SQL Server.

These components are arranged to form the basic Service Broker architecture shown in Figure 7.1. Understanding each of these components is the key to designing and building Service Broker–based applications.
Understanding the Service Broker Conversation Architecture

All Service Broker applications communicate through conversations, which are reliable, asynchronous, and possibly long running. Service Broker uses the following objects to define a conversation:

Messages Messages are the actual data exchanged between services. Each message belongs to exactly one conversation and has a specific message type.

Dialogs Dialogs are bidirectional conversations between two Service Broker services. Dialogs provide the framework for exactly once, in order (EOIO) message delivery. Each dialog belongs to exactly one conversation group and has a specific contract.
FIGURE 7.1 The Service Broker architecture (diagram: each of two servers hosts an application user, message types, a service contract, a service queue, and a service program; a message travels over the conversation between the service on Server 1 and the service on Server 2)
Conversation groups Conversation groups identify related conversations that work together to complete a given task. Conversation groups facilitate message locking and concurrency and also assist with state management.

These components work together to provide a robust, scalable messaging architecture that developers can use as the basis for building asynchronous, SOA-enabled Service Broker applications.
Defining Service Broker Messages

Probably the simplest pieces in the overall Service Broker puzzle, messages are simply the data exchanged between services. Messages have a specific message type, which is defined by a message type object. In simple terms, a message type object is an XML schema that represents a Service Broker message.
Don’t be misled by this definition of message type objects. Not all Service Broker messages will be validated, and some messages are devoid of content altogether.
The types of messages that a given application can support are defined by service contracts. A service contract is an agreement between two services about which messages each service sends to accomplish a given task. Contract definitions are stored in the database and must be created in each database that participates in a conversation. The contract specifies which message types the service can use and which participant in the conversation can use each message type. Some message types can be used by either participant, and some can be used by only one participant. To be considered valid, a contract must specify at least one message type that is sent by the initiator or by either participant.

In addition to user-defined message types, Service Broker uses three message types to indicate Service Broker status:

Dialog timer messages If an application sets a time constraint on a given message exchange, Service Broker will time the dialog and, if the limit is exceeded, will send a message of type dialog timer. This is an empty message that informs the dialog initiator that the timeout has been exceeded. The dialog timer message type has a schema of http://schemas.microsoft.com/SQL/ServiceBroker/DialogTimer.

Error messages When something goes wrong on a remote service, it ends the dialog with an error message. Service Broker automatically creates the error message using the schema http://schemas.microsoft.com/SQL/ServiceBroker/Error; the message is a well-formed XML document containing information about the error that occurred.

End dialog messages When a remote service ends a dialog without specifying an error, the local broker sends an end dialog message using the schema http://schemas.microsoft.com/SQL/ServiceBroker/EndDialog. End dialog messages are empty messages that inform the remote service that the dialog was successfully ended.

You can create message types with the Transact-SQL (T-SQL) command CREATE MESSAGE TYPE, as shown here:

CREATE MESSAGE TYPE message_type_name
    [ AUTHORIZATION owner_name ]
    [ VALIDATION = { NONE
                   | EMPTY
                   | WELL_FORMED_XML
                   | VALID_XML WITH SCHEMA COLLECTION schema_collection_name } ]
[ ; ]
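As a concrete (and hypothetical) illustration of the syntax, the following batch creates one message type validated against an XML schema collection and one with no validation at all; the names OrderSchema, OrderMessage, and OrderAck are mine, not from a shipped sample:

CREATE XML SCHEMA COLLECTION OrderSchema AS
N'<xsd:schema xmlns:xsd="http://www.w3.org/2001/XMLSchema">
    <xsd:element name="Order" type="xsd:string" />
</xsd:schema>' ;
GO
-- Message bodies must validate against OrderSchema.
CREATE MESSAGE TYPE OrderMessage
    VALIDATION = VALID_XML WITH SCHEMA COLLECTION OrderSchema ;
-- Acknowledgment messages carry no validated payload.
CREATE MESSAGE TYPE OrderAck
    VALIDATION = NONE ;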
Once you have created a message type, you can create contracts with the T-SQL command CREATE CONTRACT, as shown here:

CREATE CONTRACT contract_name
    [ AUTHORIZATION owner_name ]
    ( { { message_type_name | [ DEFAULT ] }
        SENT BY { INITIATOR | TARGET | ANY } } [ ,...n ] )
[ ; ]
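Continuing the hypothetical order example from earlier, the following contract states that only the initiator may send OrderMessage, while either participant may send OrderAck:

CREATE CONTRACT OrderContract
    ( OrderMessage SENT BY INITIATOR,
      OrderAck SENT BY ANY ) ;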
Once you’ve defined the message type and contract, you can define the service dialog.
Specifying Service Broker Dialogs

All messages sent by Service Broker are part of a conversation, and a dialog is a conversation between two services. In essence, a dialog is a reliable, persistent, bidirectional stream of messages. Dialog conversations have exactly two participants: the initiator begins the conversation, and the target accepts the conversation. Applications exchange messages as part of a dialog, as shown in Figure 7.2.

Dialogs in Service Broker use automatic message receipt acknowledgments to ensure reliable delivery. Each time a message is sent, it is stored in the transmission queue until the receiver acknowledges it. These acknowledgments are handled internally and are not accessible to the application. Service Broker does not consider an unreachable remote service to be an error; the message simply remains in the transmission queue until either the remote service acknowledges receipt of the message or the dialog lifetime expires.
FIGURE 7.2 Service Broker dialogs (diagram: two Service Broker databases, each containing a service and its queue; the application on each side sends messages into the dialog and receives messages from its own queue)
You specify the dialog lifetime when creating a dialog, and both participants in the dialog monitor the lifetime to ensure that the dialog does not exceed this value. If the dialog lifetime expires, both participants receive a time-out error message, and any further exchange of messages is prohibited until a new dialog is established. Another important aspect of Service Broker dialogs is that they are never ended automatically; each participant in a dialog is responsible for ending the conversation by issuing the END CONVERSATION command.

You can create dialogs using the T-SQL BEGIN DIALOG CONVERSATION statement, as shown here:

BEGIN DIALOG [ CONVERSATION ] @dialog_handle
    FROM SERVICE initiator_service_name
    TO SERVICE 'target_service_name'
        [ , { 'service_broker_guid' | 'CURRENT DATABASE' } ]
    [ ON CONTRACT contract_name ]
    [ WITH
        [ { RELATED_CONVERSATION = related_conversation_handle
          | RELATED_CONVERSATION_GROUP = related_conversation_group_id } ]
        [ [ , ] LIFETIME = dialog_lifetime ]
        [ [ , ] ENCRYPTION = { ON | OFF } ] ]
[ ; ]
Once the dialog conversation has been initiated, messages can be sent using the Transact-SQL SEND statement, as shown here:

SEND
    ON CONVERSATION conversation_handle
    [ MESSAGE TYPE message_type_name ]
    [ ( message_body_expression ) ]
[ ; ]
Of course, the message must also be received, so the message recipient must issue the T-SQL RECEIVE statement, as shown here:

[ WAITFOR ( ]
    RECEIVE [ TOP ( n ) ]
        <column_specifier> [ ,...n ]
        FROM <queue>
        [ INTO table_variable ]
        [ WHERE { conversation_handle = conversation_handle
                | conversation_group_id = conversation_group_id } ]
[ ) ] [ , TIMEOUT timeout ]
[ ; ]

<column_specifier> ::=
{ *
  | { column_name | [ ] expression } [ [ AS ] column_alias ]
  | column_alias = expression } [ ,...n ]

<queue> ::=
{ [ database_name . [ schema_name ] . | schema_name . ] queue_name }
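For example, the following hedged sketch (the queue and variable names are hypothetical) receives at most one message from a queue into local variables, waiting up to five seconds for one to arrive:

DECLARE @handle UNIQUEIDENTIFIER,
        @type NVARCHAR(256),
        @body XML ;
WAITFOR (
    RECEIVE TOP (1)
        @handle = conversation_handle,
        @type = message_type_name,
        @body = CAST(message_body AS XML)
    FROM dbo.OrderQueue
), TIMEOUT 5000 ;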
The RECEIVE statement simply reads messages from a queue and returns a result set that the application must understand. Once you have set up the dialog infrastructure, you must configure the conversation group information, because each dialog belongs to a specific conversation group.
Specifying Service Broker Conversation Groups

A conversation group identifies a group of related conversations and allows applications to easily coordinate conversations around a given task. Participants in a conversation do not share conversation groups. Conversation groups primarily ensure that the EOIO aspect of any given conversation is adhered to by locking the conversation group for the duration of the conversation.

Generally speaking, Service Broker manages conversation groups without applications needing to explicitly attach to them. However, sometimes an application may want to group a series of messages into the same group. For this purpose, you use the T-SQL GET CONVERSATION GROUP statement, as shown here:

[ WAITFOR ( ]
    GET CONVERSATION GROUP @conversation_group_id
    FROM <queue>
[ ) ] [ , TIMEOUT timeout ]
[ ; ]

<queue> ::=
{ [ database_name . [ schema_name ] . | schema_name . ] queue_name }
Sometimes in the course of managing conversations, it becomes necessary to move a conversation from one conversation group to another. To accomplish this, you use the T-SQL MOVE CONVERSATION statement, as shown here:

MOVE CONVERSATION conversation_handle
    TO conversation_group_id
[ ; ]
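As a hedged sketch of how these two statements combine (the queue name and handles are hypothetical), an application might lock the next conversation group on its queue and then consolidate a related dialog into it:

DECLARE @group UNIQUEIDENTIFIER,
        @related_dialog UNIQUEIDENTIFIER ;
-- Lock the next available conversation group on the queue.
WAITFOR (
    GET CONVERSATION GROUP @group FROM dbo.OrderQueue
), TIMEOUT 1000 ;
-- Assuming @related_dialog already holds a dialog handle,
-- move that conversation into the locked group.
MOVE CONVERSATION @related_dialog TO @group ;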
Once you understand the overall Service Broker messaging architecture, you can create and deploy a Service Broker application.
Putting It All Together: A Sample Service Broker Application

You might be thinking that creating a Service Broker application is not exactly an easy task, and you’d be correct. The Service Broker infrastructure is not the easiest concept to understand.
The sample application shown in this section is similar to the “Hello World” sample that is available as part of the SQL Server database engine samples included with SQL Server.
For this reason, I’ll show how to build a simple Service Broker application that will help bring some of the concepts together. The application is a simple “Hello World” application, created using the Service Broker infrastructure. This exercise assumes you have SQL Server 2005 installed along with the AdventureWorks sample database.
Service Broker is not enabled by default and must be enabled by using the ALTER DATABASE statement.
Load SQL Server Management Studio, and open a query window, selecting the AdventureWorks database. Execute the following code to ensure that Service Broker is enabled for use in AdventureWorks:

ALTER DATABASE AdventureWorks SET ENABLE_BROKER ;
The ALTER DATABASE command requires exclusive access to the AdventureWorks database. Make sure no other users are connected to the database, and make sure all other open windows in SQL Server Management Studio do not have AdventureWorks selected.
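If you want to confirm the setting took effect, the sys.databases catalog view exposes the broker status; this quick check is my addition, not part of the original exercise:

SELECT name, is_broker_enabled
FROM sys.databases
WHERE name = N'AdventureWorks' ;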
Now select the AdventureWorks database, and execute the following command to create the HelloWorldMessage message type:

CREATE MESSAGE TYPE HelloWorldMessage
    VALIDATION = WELL_FORMED_XML;

Next, issue the following command to create the HelloWorldContract message contract:

CREATE CONTRACT HelloWorldContract
    (HelloWorldMessage SENT BY INITIATOR);

Then, issue the following command to create the queue for the receiver service:

CREATE QUEUE [dbo].[TargetQueue];

Next, issue the following command to create the queue for the initiator service:

CREATE QUEUE [dbo].[InitiatorQueue];

Next, issue the following command to create the InitiatorService:

CREATE SERVICE InitiatorService
    ON QUEUE [dbo].[InitiatorQueue];

Finally, issue the following command to create the receiver service, specifying the contract created earlier:

CREATE SERVICE TargetService
    ON QUEUE [dbo].[TargetQueue] (HelloWorldContract);
Once you have completed these steps, you are ready to test the application. To test the application, you first need to create and send the “Hello World” message, ensuring that the message conforms to the contract created previously. Issue the following commands to set up a transaction, build a message, set up a conversation, begin a dialog, and send a message. You need to issue these commands together to ensure the variables are used properly:

BEGIN TRANSACTION ;
GO
-- Create the message. The message type requires well-formed XML,
-- so wrap the text in an element (the element name is illustrative;
-- any well-formed XML satisfies the message type).
DECLARE @message XML ;
SET @message = N'<message>Hello, World!</message>' ;

-- Declare a variable to hold the conversation handle.
DECLARE @conversationHandle UNIQUEIDENTIFIER ;

-- Begin the dialog.
BEGIN DIALOG CONVERSATION @conversationHandle
    FROM SERVICE InitiatorService
    TO SERVICE 'TargetService'
    ON CONTRACT HelloWorldContract
    WITH ENCRYPTION = OFF;

-- Send the message on the dialog.
SEND ON CONVERSATION @conversationHandle
    MESSAGE TYPE HelloWorldMessage (@message) ;

-- End the conversation.
END CONVERSATION @conversationHandle ;
GO

-- Commit the transaction. Service Broker sends the message to the
-- destination service only when the transaction commits.
COMMIT TRANSACTION ;
GO
To test that the message was created properly, you can execute the following command to check the status of the message:

SELECT * FROM [dbo].[TargetQueue] ;
You should see a couple of entries in the result set that represent the message dialog you just initiated. Now execute the following code batch to process the message you sent previously:

-- Process all conversation groups.
WHILE (1 = 1)
BEGIN
    DECLARE @conversation_handle UNIQUEIDENTIFIER,
            @conversation_group_id UNIQUEIDENTIFIER,
            @message_body XML,
            @message_type_name NVARCHAR(128);

    BEGIN TRANSACTION ;

    -- Get the next conversation group.
    WAITFOR (
        GET CONVERSATION GROUP @conversation_group_id
        FROM [dbo].[TargetQueue]
    ), TIMEOUT 500 ;

    -- If there are no more conversation groups, roll back the
    -- transaction and break out of the outermost WHILE loop.
    IF @conversation_group_id IS NULL
    BEGIN
        ROLLBACK TRANSACTION ;
        BREAK ;
    END ;

    -- Process all messages in the conversation group. Notice that
    -- all processing occurs in the same transaction.
    WHILE 1 = 1
    BEGIN
        -- Receive the next message for the conversation group.
        -- Notice that the RECEIVE statement includes a WHERE clause
        -- to ensure that the messages received belong to the same
        -- conversation group.
        RECEIVE TOP(1)
            @conversation_handle = conversation_handle,
            @message_type_name = message_type_name,
            @message_body =
                CASE WHEN validation = 'X'
                     THEN CAST(message_body AS XML)
                     ELSE CAST(N'<none/>' AS XML)
                END
        FROM [dbo].[TargetQueue]
        WHERE conversation_group_id = @conversation_group_id ;

        -- If there are no more messages, or an error occurred,
        -- stop processing this conversation group.
        IF @@ROWCOUNT = 0 OR @@ERROR <> 0
            BREAK;

        -- Show the information received.
        SELECT 'Conversation Group Id' = @conversation_group_id,
               'Conversation Handle' = @conversation_handle,
               'Message Type Name' = @message_type_name,
               'Message Body' = @message_body ;

        -- If the message_type_name indicates that the message is an
        -- error or an end dialog message, end the conversation.
        IF @message_type_name =
             'http://schemas.microsoft.com/SQL/ServiceBroker/EndDialog'
           OR @message_type_name =
             'http://schemas.microsoft.com/SQL/ServiceBroker/Error'
        BEGIN
            END CONVERSATION @conversation_handle ;
        END ;
    END ; -- Process all messages in conversation group.

    -- Commit the RECEIVE statements and the END CONVERSATION statement.
    COMMIT TRANSACTION ;
END ; -- Process all conversation groups.
You should see two result sets: one containing the message you sent and the other showing an EndDialog message type. In this section, you built a simple Service Broker application that exchanges messages of a specific type. Although simple, it shows the steps necessary to build fully featured Service Broker applications.
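When you experiment further and a message never shows up in the target queue, a useful (optional) diagnostic of my own is to look at the transmission queue, where Service Broker holds messages it has not yet delivered along with the reason:

SELECT to_service_name,
       enqueue_time,
       transmission_status
FROM sys.transmission_queue ;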
As demonstrated, Service Broker application development isn’t exactly the easiest thing in the world. Many third-party tools and applications are available to simplify creating and managing Service Broker applications. One of the best tools available is the Service Listing Manager, which you can find at http://blogs.msdn.com/remusrusanu/archive/2006/04/07/571066.aspx.
Service Broker is only one piece of the Microsoft SOA-enabled architecture of SQL Server 2005.
Developing Applications for SQL Server Notification Services

As its name implies, SQL Server Notification Services is a platform for developing applications that send notifications to a variety of devices via email, mobile phone text messaging, network alerts, and so on. Notification Services was included as a free download with SQL Server 2000, but it saw a somewhat tepid response among SQL Server developers. Microsoft has enhanced Notification Services in many ways for SQL Server 2005:

Integration with SQL Server Management Studio Instead of using an unwieldy command-line interface like its predecessor, Notification Services is now completely integrated into SQL Server Management Studio’s Object Explorer.
Subscriber-defined conditions Notification Services now supports condition actions, which allow subscribers to define their own queries that result in notifications.

Database independence Notification Services now supports existing databases and application data.

Integrated .NET management APIs Notification Services is fully integrated with .NET 2.0 and includes new management APIs in the Microsoft.SqlServer.Management.Nmo namespace.

Hostable execution engine A Windows service is no longer required to host the Notification Services engine.

These enhancements help make Notification Services a viable alerting and messaging platform for applications hosted on SQL Server 2005. Again, to build applications for Notification Services, you must first understand the underlying application architecture.
Understanding the Notification Services Application Architecture

Notification Services uses a simple application architecture that is easy to understand. Applications built on the Notification Services infrastructure have the following basic components:

Subscriber A subscriber is a person or application that subscribes to and receives notifications generated by Notification Services.

Subscription A subscription is a request for specific information that is delivered to a specific location. (For example, a subscriber might create a subscription that states, “Send a text message to my phone when the scores for the Rockies game are posted.”)

Event An event is a piece of information received by Notification Services that someone might be interested in, such as when a stock price changes or when a particular score is recorded.

Notification Notifications are the delivery of subscribed information to the location specified by the subscriber.

Figure 7.3 shows the overall high-level architectural view of Notification Services. A good example of an application built on Notification Services is a stock-tracking application. A subscriber might want to know when a specific stock reaches a specific price. A subscription could be created that reads, “Send a message to my mobile when the price of MSFT rises to more than $500.”
For all of you in the United States, a mobile is a cell phone!
Many applications that would normally require a fair amount of programming to implement can benefit from Notification Services. Use case examples of Notification Services include the following:

End-user applications You could use Notification Services to send email messages and text messages to end users of websites or other consumer-based applications. For example, an online store might allow customers to subscribe to the shipping status of their purchased items and be alerted when an item ships from the store.

Line-of-business applications You could use Notification Services to alert business managers of critical events. For example, a sales manager might subscribe to events within a customer relationship management (CRM) system to be notified when something critical occurs. (The beauty of using an open-ended system such as Notification Services for this task is that the sales manager can choose what they think is a critical event.)

Business intelligence applications You could use Notification Services to track critical data and send alerts when specific events occur. For example, a sales management system might have the capability to generate a notification when a specific key performance indicator (KPI) is reached.

These are just a few of the uses for Notification Services; many real-world applications could use it. The first step in building a Notification Services application is to create an application definition file.

FIGURE 7.3 Notification Services high-level architecture (diagram: subscriptions and events flow into SQL Server Notification Services, which stores its data in the Notification Services database and generates notifications)
Building Notification Services Applications

To build a Notification Services application, you must first define the application properties and configure an instance of Notification Services to host the application. Generally speaking, you can accomplish these tasks by using XML configuration files; however, you can also configure Notification Services by using the Notification Services Management Objects (NMO) APIs. To define the application properties for a Notification Services application, you must provide the following information:

Application database information Where will you store Notification Services information about your application?

Event class information What will your events look like, will you keep historical event information, and how will events be indexed?

Subscription class information What will your subscriptions look like, and how will they be indexed?

Notification class properties How will your notifications be delivered, formatted, and stored?

Event provider properties Where will your events come from?

Event generator properties How will your events be processed?

Event distributor properties How will your events be delivered?

Application execution settings What environment will the application run in, and how will your application coexist in that environment?

At first glance, this seems like a lot of information to configure in order to use Notification Services. Although this is true, the good news is that building effective Notification Services applications really isn’t as hard as it may seem. All Notification Services application configuration information is stored in a configuration file called an application definition file (ADF).
You can create an ADF manually or programmatically by using NMO. For the purposes of this chapter, I will assume you are creating ADFs manually.
You can find a complete ADF template in SQL Server Books Online.
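Because the full template runs to several pages, the following stripped-down skeleton is a sketch of my own showing only the overall shape of an ADF; the element names follow the Books Online ADF schema, and the comments stand in for the detailed settings covered in the rest of this section:

<?xml version="1.0" encoding="utf-8"?>
<Application xmlns="http://www.microsoft.com/MicrosoftNotificationServices/ApplicationDefinitionFileSchema">
  <Database>
    <!-- application database settings -->
  </Database>
  <EventClasses>
    <!-- one EventClass per event type -->
  </EventClasses>
  <SubscriptionClasses>
    <!-- one SubscriptionClass per subscription type -->
  </SubscriptionClasses>
  <NotificationClasses>
    <!-- formatting and delivery definitions -->
  </NotificationClasses>
  <Providers>
    <!-- hosted event providers -->
  </Providers>
  <Generator>
    <!-- rule-processing service -->
  </Generator>
  <Distributors>
    <!-- notification delivery services -->
  </Distributors>
  <ApplicationExecutionSettings>
    <!-- quantums, throttles, vacuuming -->
  </ApplicationExecutionSettings>
</Application>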
Configuring Notification Services Application Database Information

The Notification Services application database is an important piece of the overall Notification Services architecture. This database stores event, subscription, and notification data, along with the configuration data for the application itself. Notification Services allows you to use a single database to store data for multiple applications; however, for simplicity’s sake, it is better to keep Notification Services applications separate.
If you decide to use a single database to store data for multiple Notification Services applications, make sure you use a separate schema for each application to keep the data separate and make it easier to administer.
Technically speaking, the database configuration information is not required for Notification Services. If you do not provide database information in the ADF, Notification Services will create a default database named after the instance and application (the instance name followed by the application name) and will use the dbo schema for all objects. Obviously, it is not a good idea to leave the default configuration in place, so you will want to manually configure all database information in the ADF. Database-specific information in the ADF looks like this:

<Database>
  <DatabaseName>NotificationServices</DatabaseName>
  <SchemaName>dbo</SchemaName>
  <NamedFileGroup>
    <FileGroupName>Filegroup1</FileGroupName>
    <FileSpec>
      <LogicalName>NotificationSvs_Data</LogicalName>
      <FileName>E:\SQL\NotificationSvcs.MDF</FileName>
      <Size>100</Size>
      <GrowthIncrement>10%</GrowthIncrement>
    </FileSpec>
  </NamedFileGroup>
  <LogFile>
    <FileSpec>
      <LogicalName>NotificationSvcs_Log</LogicalName>
      <FileName>F:\SQL\NotificationSvcs.LDF</FileName>
      <Size>25</Size>
      <GrowthIncrement>10%</GrowthIncrement>
    </FileSpec>
  </LogFile>
  <DefaultFileGroup>PRIMARY</DefaultFileGroup>
</Database>
The properties in the ADF are the same properties you would use to create any user database. It is important to treat Notification Services databases just like any other user database on the system and follow all the normal optimization techniques to keep the database running at peak efficiency.
Notification Services uses the tempdb system database extensively, so make sure you follow all the “normal” tempdb optimizations when using Notification Services. See Microsoft Knowledge Base article 328551 for more information about optimizing tempdb.
Configuring Event Class Information

Once you have configured the Notification Services application database, the next step is to configure the event class information. Event classes describe the basic structure of an event, such as its name, fields, filegroup storage, and indexing strategy. (Think of an event class as a table definition.) Event classes also contain information about how events will be chronicled (historical event storage) and how data will be “aged” out of the event tables. To ensure efficiency when dealing with Notification Services events, it is important to keep event classes as simple as possible. If you are building an application that keeps track of multiple items, create an event class for each item.

Defining an event class is similar to designing a table to store data. The following information is required to create an event class:

Event class name The name describes the event class and must be unique within a Notification Services instance. Each Notification Services application can have up to 64 event classes.

Event fields These are the columns that will make up the event class table. Each must include a name and data type. At least one user-defined event field is required, and Notification Services adds EventID and EventBatchID fields to each event class.

Filegroup This is the SQL Server filegroup on which to store this particular event class.

When designing event classes, you should keep them as simple as possible; the “narrower” the tables that hold event information, the more efficient event processing will be. A basic event class definition in the ADF looks like this:

<EventClasses>
  <EventClass>
    <EventClassName>MyEvent</EventClassName>
    <Schema>
      <Field>
        <FieldName>Column1</FieldName>
        <FieldType>INT</FieldType>
        <FieldTypeMods>NOT NULL</FieldTypeMods>
      </Field>
      <Field>
        <FieldName>Column2</FieldName>
        <FieldType>VARCHAR(10)</FieldType>
        <FieldTypeMods>NOT NULL</FieldTypeMods>
      </Field>
      <Field>
        <FieldName>Column3</FieldName>
        <FieldType>money</FieldType>
        <FieldTypeMods>NULL</FieldTypeMods>
      </Field>
    </Schema>
    <FileGroup>Filegroup1</FileGroup>
  </EventClass>
</EventClasses>
Once you have defined the event classes for your application, you must define the subscription classes.
Configuring Subscription Class Information

Subscribers create subscriptions to define exactly what information they want to receive from a Notification Services application. Subscription classes define exactly how subscribers interact with Notification Services applications. To create a subscription class, the following information is required:

Subscription class name This name defines the subscription class and must be unique within the Notification Services instance.

Subscription class schema The subscription class schema defines the fields of an event class to which the subscriber has access. (Do not confuse the term schema here with a SQL Server 2005 schema such as dbo.)

Subscription rule Subscription rules define how notifications are generated and other data management tasks.

A basic ADF containing subscription class information looks like this:

<SubscriptionClasses>
  <SubscriptionClass>
    <SubscriptionClassName>Sub1</SubscriptionClassName>
    <Schema>
      <Field>
        <FieldName>Column1</FieldName>
        <FieldType>INT</FieldType>
        <FieldTypeMods>NOT NULL</FieldTypeMods>
      </Field>
      <Field>
        <FieldName>Column2</FieldName>
        <FieldType>money</FieldType>
        <FieldTypeMods>NULL</FieldTypeMods>
      </Field>
    </Schema>
    <EventRules>
      <EventRule>
        <RuleName>Rule1</RuleName>
        <Action>{Some SQL Statement}</Action>
      </EventRule>
    </EventRules>
  </SubscriptionClass>
</SubscriptionClasses>
Subscription classes can get complicated to design and implement. For a good tutorial on Notification Services subscription classes, see http://msdn2.microsoft.com/en-us/library/ms166580.aspx.
Once you have defined and implemented the subscription classes, you must configure the notification properties.
Configuring Notification Classes

Notification classes define the notifications that are generated by Notification Services applications. If your application will support multiple types of notifications, you will create multiple notification classes. Much like the event and subscription classes you defined earlier, you define notification classes either through an ADF file or through the NMO APIs. The following items are required when creating notification classes:

Notification class name Each notification class requires a name that is unique within the instance of Notification Services.

Notification class schema The notification class schema defines the SQL Server table where notification information will be saved.

Content formatter The content formatter takes the raw notification data stored in the schema and formats it appropriately for presentation and delivery. The default content formatter is the XSLT content formatter, which is sufficient for most formatting requirements.

Delivery protocol Delivery protocols define how the notification is delivered to the subscriber. By default, two delivery protocols are available, namely, File and SMTP. You can also create custom delivery protocols to handle special delivery requests (such as with an SMTP trap).

A basic notification class configuration using an ADF file looks like this:

<NotificationClasses>
  <NotificationClass>
    <NotificationClassName>Notify</NotificationClassName>
    <Schema>
      <Fields>
        <Field>
          <FieldName>Field1</FieldName>
          <FieldType>money</FieldType>
        </Field>
      </Fields>
      <ComputedFields>
        <ComputedField>
          <FieldName>ConvertedName</FieldName>
          <SqlExpression>CONVERT(VARCHAR(10),Field1,1)</SqlExpression>
        </ComputedField>
      </ComputedFields>
    </Schema>
    <ContentFormatter>
      <ClassName>xsltFormatter</ClassName>
      <Arguments>
        <Argument>
          <Name>xsltBaseDirectoryPath</Name>
          <Value>C:\scratch</Value>
        </Argument>
        <Argument>
          <Name>xsltFileName</Name>
          <Value>MyTransform.xsl</Value>
        </Argument>
        <Argument>
          <Name>DisableEscaping</Name>
          <Value>true</Value>
        </Argument>
      </Arguments>
    </ContentFormatter>
    <Protocols>
      <Protocol>
        <ProtocolName>SMTP</ProtocolName>
        <Fields>
          <Field>
            <FieldName>Subject</FieldName>
            <SqlExpression>%SubjectLine%</SqlExpression>
          </Field>
          <Field>
            <FieldName>From</FieldName>
            <SqlExpression>%FromAddress%</SqlExpression>
          </Field>
          <Field>
            <FieldName>To</FieldName>
            <SqlExpression>DeviceAddress</SqlExpression>
          </Field>
          <Field>
            <FieldName>BodyFormat</FieldName>
            <SqlExpression>"html"</SqlExpression>
          </Field>
        </Fields>
      </Protocol>
    </Protocols>
  </NotificationClass>
</NotificationClasses>
Many configuration options are available for notification classes; however, the basic XSLT content formatter, along with SMTP, makes for a simple yet effective configuration for notification classes.
To learn more about creating and maintaining notification classes, see the Notification Services tutorial available online at http://msdn2.microsoft.com/en-us/library/ms170473.aspx.
Once you have created the notification classes, you can move on to event provider configuration.
Configuring Event Provider Information

Event providers collect data and submit it to Notification Services. Think of event providers as the “spies” that watch external resources, including SQL Server data, waiting for “something to happen.” Each event is collected and stored as a single row of data in the event table that is defined by the event class. Notification Services comes with three event providers:

File system watcher The file system watcher monitors a specific folder and is triggered when an XML file “lands” in that folder. The file system watcher reads the contents of the file into memory and then uses an EventLoader to write the event to the event table in Notification Services. Configuring a file system watcher in an ADF file looks like this:

<HostedProvider>
  <ProviderName>MyWatcher</ProviderName>
  <ClassName>FileSystemWatcherProvider</ClassName>
  <SystemName>MYSERVER</SystemName>
  <Arguments>
    <Argument>
      <Name>WatchDirectory</Name>
      <Value>C:\Scratch\NewEvents</Value>
    </Argument>
    <Argument>
      <Name>EventClassName</Name>
      <Value>MyEvents</Value>
    </Argument>
    <Argument>
      <Name>SchemaFile</Name>
      <Value>C:\Schemas\MyEventSchema.xsd</Value>
    </Argument>
    <Argument>
      <Name>RetryAttempts</Name>
      <Value>15</Value>
    </Argument>
    <Argument>
      <Name>RetryQueueOccupancy</Name>
      <Value>100</Value>
    </Argument>
    <Argument>
      <Name>RetryPeriod</Name>
      <Value>50000</Value>
    </Argument>
    <Argument>
      <Name>RetryWorkload</Name>
      <Value>100</Value>
    </Argument>
  </Arguments>
</HostedProvider>
SQL Server The SQL Server event provider uses a T-SQL query to read data from a database and then uses the Notification Services stored procedures to write the event to the event table. Configuring a SQL Server event provider in an ADF file looks like this:

<HostedProvider>
  <ProviderName>MySQLProvider</ProviderName>
  <ClassName>SQLProvider</ClassName>
  <SystemName>MYSERVER</SystemName>
  <Schedule>
    <Interval>P0DT00H00M60S</Interval>
  </Schedule>
  <ProviderTimeout>PT4M</ProviderTimeout>
  <Arguments>
    <Argument>
      <Name>EventsQuery</Name>
      <Value>SELECT Column1, Column2 FROM Table1</Value>
    </Argument>
    <Argument>
      <Name>EventClassName</Name>
      <Value>MyEvents</Value>
    </Argument>
  </Arguments>
</HostedProvider>
You might note the strange format for the time fields in the previous listing. ADF files use primitive XML data types to represent most configuration elements. For a guide to primitive XML types, see http://msdn.microsoft.com/library/default.asp?url=/library/en-us/xmlsdk/html/88c560ac-ca4d-4bb2-a2ba-18ee6dde58b6.asp.
Analysis Services The Analysis Services event provider uses a Multidimensional Expressions (MDX) query to read data from an Analysis Services cube and then uses the Notification Services stored procedures to write the event to the event table. Configuring an Analysis Services event provider in an ADF file looks like this:

<HostedProvider>
  <ProviderName>StaticASEventProvider</ProviderName>
  <ClassName>AnalysisServicesProvider</ClassName>
  <SystemName>MYSERVER</SystemName>
  <Schedule>
    <Interval>P0DT00H00M02S</Interval>
  </Schedule>
  <ProviderTimeout>PT7M</ProviderTimeout>
  <Arguments>
    <Argument>
      <Name>EventClassName</Name>
      <Value>KPIEvents</Value>
    </Argument>
    <Argument>
      <Name>AnalysisServicesInstance</Name>
      <Value>MYANALYSISSERVER</Value>
    </Argument>
    <Argument>
      <Name>AnalysisServicesDatabase</Name>
      <Value>MYDATABASE</Value>
    </Argument>
    <Argument>
      <Name>MDXQuery</Name>
      <Value>
        SELECT
          { [Measures].[Total Product Cost],
            KPIValue([Gross Profit Margin]),
            KPIGoal([Gross Profit Margin]),
            KPIStatus([Gross Profit Margin]),
            KPITrend([Gross Profit Margin]) } ON COLUMNS,
          { [Employee].[Employees].[Joe Employee],
            [Employee].[Employees].[Jane Employee] } ON ROWS
        FROM [MYDATABASE]
        WHERE [Date].[Calendar Time].[2002]
      </Value>
    </Argument>
  </Arguments>
</HostedProvider>
The previous example shows a static query. The Analysis Services event provider is also capable of using an XSLT transform to build a dynamic query, which can be fairly complicated to set up. For more information, see the Books Online topic “Defining the Analysis Services Event Provider.”
As with almost every aspect of Notification Services, the event provider class is extensible. If you can write code to monitor an event, Notification Services can react to that event through the use of custom event providers. Once you have defined the event providers, you must define the event generator properties.
Defining Event Generator Properties

Every Notification Services application has a single generator, which is a service that runs on a machine you specify. The service is named NS$ followed by the instance name. When defining a Notification Services event generator, the following properties are required:

Event generator system name This is the name of the machine that will host the NS$ service. If you want to host the event generator on a failover cluster, you must define the system name as the virtual name of the cluster, not an individual node name.

Thread pool size The thread pool size specifies the number of threads available for processing events. The maximum number of threads that can be allocated is 25. Technically, the thread pool size is not a required element in the configuration, because it is possible for Notification Services to dynamically allocate threads.

Configuring a Notification Services event generator with an ADF file looks like this:

<Generator>
  <SystemName>SERVER2</SystemName>
  <ThreadPoolSize>1</ThreadPoolSize>
</Generator>
SQL Server Standard Edition allows a maximum of one thread to process events. If you are running on SQL Server Standard Edition and allocate more than one thread to the thread pool, SQL Server will ignore the setting.
Once you have configured the event generator, you must configure the event distributor.
Defining Event Distributor Properties

The Notification Services event distributor is the main working component of a Notification Services application. The event distributor spends its time looking for notifications to process. When it finds a notification to process, it calls the content formatter to format the notification and ensures the appropriate delivery protocol is used to deliver it. Each Notification Services application requires at least one event distributor; however, you can use multiple distributors if the workload is too much for a single machine to handle.
SQL Server Standard Edition supports the use of only a single event distributor and a maximum of three threads.
The event distributor spends most of its time “asleep,” waking up at a specified interval (known as a quantum) to process notifications. The event distributor also uses a thread pool to determine the maximum number of simultaneous threads that will be used for processing. Configuring an event distributor by using an ADF file looks like this:

<Distributor>
  <SystemName>MYSERVER</SystemName>
  <ThreadPoolSize>10</ThreadPoolSize>
  <QuantumDuration>P0DT00H01M30S</QuantumDuration>
</Distributor>
If you do not specify a value for the ThreadPoolSize element, Notification Services will use a built-in algorithm to determine the best size based on the current workload and the overall available system resources. If you do not specify a QuantumDuration, Notification Services will “wake up” and check for events to process every 60 seconds.
Once you have configured the event distributor, the final piece of the configuration is the application execution settings.
Configuring Application Execution Settings

Notification Services has many user-configurable settings that affect the efficiency of Notification Services applications. Application execution settings are not required, because Notification Services will pick default values for most configuration items; however, trusting the default values is not recommended. The following are the application execution settings you can manage:

Event Generator Quantum The Event Generator Quantum setting specifies how often the event generator runs.

Quantum Limits The Quantum Limits settings determine how far “behind” the generator is allowed to lag. If this value is exceeded, the generator will skip events.

Event Processing Order The Event Processing Order setting determines how events will be processed, either serially or in parallel.

Performance Query Interval The Performance Query Interval setting determines how often the Notification Services built-in performance monitor counters are updated.

Event Throttle The Event Throttle setting determines how many events the distributor will attempt to process within a given quantum period.

Distributor Logging The Distributor Logging settings determine what type of event audit logging Notification Services will write to the Notification Services database.

Data Removal The Data Removal (or vacuuming) settings determine how long events will be stored in the Notification Services database.

Configuring application execution settings by using an ADF file looks like this:

<ApplicationExecutionSettings>
  <QuantumDuration>P0DT00H15M00S</QuantumDuration>
  <ChronicleQuantumLimit>25</ChronicleQuantumLimit>
  <SubscriptionQuantumLimit>10</SubscriptionQuantumLimit>
  <ProcessEventsInOrder>false</ProcessEventsInOrder>
  <PerformanceQueryInterval>P0DT01H00M00S</PerformanceQueryInterval>
  <EventThrottle>5000</EventThrottle>
  <SubscriptionThrottle>500</SubscriptionThrottle>
  <NotificationThrottle>500</NotificationThrottle>
  <DistributorLogging>
    <LogBeforeDeliveryInfo>false</LogBeforeDeliveryInfo>
    <LogStatusInfo>true</LogStatusInfo>
    <LogNotificationText>true</LogNotificationText>
  </DistributorLogging>
  <Vacuum>
    <RetentionAge>P3DT00H00M00S</RetentionAge>
    <VacuumSchedule>
      <Schedule>
        <StartTime>23:00:00</StartTime>
        <Duration>P0DT02H00M00S</Duration>
      </Schedule>
    </VacuumSchedule>
  </Vacuum>
</ApplicationExecutionSettings>
Once you have configured the application execution settings in the ADF, the file is now complete and ready to be used. To complete the Notification Services application, you must first configure the Notification Services instance that will host your application.
Configuring a Notification Services Instance

A Notification Services instance hosts one or more Notification Services applications. Each instance of Notification Services requires an instance configuration file (ICF). The ICF is similar in structure to the ADF; however, it specifically covers the configuration of the instance of Notification Services. Each ICF requires the following configuration items:

Instance name The instance name specifies the name of the service that will host Notification Services. The name is an important aspect of the configuration and is subject to stringent naming requirements: it is not case sensitive, cannot contain any special characters or quotation marks, and is limited to 64 characters.

SQL Server name The SQL Server name specifies the instance name of the SQL Server where the Notification Services instance will be installed.

Applications The ICF must specify at least one Notification Services application. The full path to the ADF is specified in the ICF for each application hosted by the instance of Notification Services.

Other elements in the ICF specify optional configuration information. You can find a complete ICF template in Books Online.
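As a hedged sketch of my own (element names follow the Books Online ICF schema; all values are placeholders), a minimal ICF looks something like this:

<?xml version="1.0" encoding="utf-8"?>
<NotificationServicesInstance
    xmlns="http://www.microsoft.com/MicrosoftNotificationServices/ConfigurationFileSchema">
  <InstanceName>MyNSInstance</InstanceName>
  <SqlServerSystem>MYSERVER</SqlServerSystem>
  <Applications>
    <Application>
      <ApplicationName>MyApp</ApplicationName>
      <BaseDirectoryPath>C:\NS\MyApp</BaseDirectoryPath>
      <ApplicationDefinitionFilePath>MyAppADF.xml</ApplicationDefinitionFilePath>
    </Application>
  </Applications>
  <DeliveryChannels>
    <DeliveryChannel>
      <DeliveryChannelName>EmailChannel</DeliveryChannelName>
      <ProtocolName>SMTP</ProtocolName>
    </DeliveryChannel>
  </DeliveryChannels>
</NotificationServicesInstance>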
Configuring the ICF is similar to configuring the ADF, so I will not spend a lot of time on it here. For more information about configuring the ICF, see the topic “Instance Configuration File” in Books Online.
Once you have created the ICF, the next step is deploying the Notification Services instance.
Deploying Notification Services Instances

The process of deploying a Notification Services instance is fairly straightforward and involves the following steps:

1. Create the instance and associated objects by compiling the ICF.
2. Register the instance.
3. Install the Notification Services engine.
4. Deploy any custom event providers or subscription management interfaces.
You can deploy Notification Services in many configurations, ranging from a single server installation to a fully fault-tolerant clustered solution. The deployment strategy is based entirely on the needs of the Notification Services application and must be considered as part of the overall application architecture.
Many people make the mistake of thinking that Notification Services is built into SQL Server and therefore Notification Services applications do not have the same scaling and deployment architecture requirements as “normal” custom applications do. You should treat a Notification Services application just like any other custom-developed application.
The first step in deploying a Notification Services instance is to compile the ICF and create the instance. You do this through SQL Server Management Studio by right-clicking the Notification Services icon in Object Explorer and selecting New Notification Services Instance, as shown in Figure 7.4. This opens the New Notification Services Instance dialog box. Click the Browse button, browse to the ICF you want to associate with this instance of Notification Services, and click Open. Fill in any parameters you are prompted for (you will be prompted for parameters only if you specified the use of parameters in your ICF), and check the Enable Instance After It Is Created box if you’d like the instance to be enabled, as shown in Figure 7.5. Once you have specified the ICF and filled in any parameters, click OK to compile the ICF. If everything goes OK, this results in the dialog box shown in Figure 7.6.

FIGURE 7.4 Creating a new Notification Services instance (screenshot)
FIGURE 7.5 Preparing to compile the ICF (screenshot)

FIGURE 7.6 Compiling the ICF (screenshot)
Once you have compiled the ICF and created the instance, you must register the instance. To register the instance, right-click the instance name in Object Explorer, select Tasks, and then select Register, as shown in Figure 7.7.
This opens the Register Instance dialog box. Fill in the Account and Password boxes for the service account you want to use, select the type of authentication you want to use for the database, as shown in Figure 7.8, and then click OK. Assuming that everything went OK, you will see a status report, as shown in Figure 7.9.

FIGURE 7.7 Preparing to register the Notification Services instance (screenshot)

FIGURE 7.8 Registering the Notification Services instance (screenshot)
FIGURE 7.9 Status report showing registration success (screenshot)
Once you have registered the instance, you can start it by right-clicking the instance name and selecting Start. This starts the service and prepares Notification Services for operation.
Prior to starting the instance, be sure to add the Notification Services account (the account you selected earlier to run the service) as a login in SQL Server and as a user in all Notification Services databases. The service account must belong to the NSRunService database user role.
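A hedged T-SQL sketch of that preparation follows; the domain account and database names are hypothetical, and only the NSRunService role name comes from the note above:

CREATE LOGIN [CONTOSO\NSService] FROM WINDOWS ;
GO
USE MyNSInstanceMyApp ;   -- the Notification Services application database
GO
CREATE USER [CONTOSO\NSService] FOR LOGIN [CONTOSO\NSService] ;
-- Grant the service account the role Notification Services requires.
EXEC sp_addrolemember N'NSRunService', N'CONTOSO\NSService' ;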
As demonstrated here, SQL Server 2005 Notification Services uses verbose configuration files, but the overall features of Notification Services make it a viable platform for developing SOA-based services that supply the notification and alerting needs of enterprise applications.
Looking at Notification Services Sample Applications

Notification Services can be a powerful tool in the arsenal of a SQL Server application developer. As demonstrated earlier, Notification Services can be a potent back-end messaging system for use within applications that are hosted by SQL Server. To fully understand the power of Notification Services, developers should familiarize themselves with the readily available sample applications. The following sample applications demonstrate various aspects of Notification Services programming:
www.sqljunkies.com/Article/38D95C18-D0AB-4B00-9CF5-80940309E68C.scuk: This is a good but basic example of how you can use Notification Services in a calendar scheduling application.
http://msdn.microsoft.com/library/default.asp?url=/library/en-us/sqlntsv/htm/nsp_programmingsamples_8nqt.asp: This example shows Notification Services being used in a stock-tracking application.
http://msdn.microsoft.com/library/default.asp?url=/library/en-us/sqlntsv/htm/nsp_programmingsamples_22d0.asp: This example shows how you can use Notification Services in a travel-related weather application.
These sample applications show many aspects of programming with Notification Services and can help you see how you can integrate Notification Services into your own applications.
Before attempting the exam, I recommend you at least browse through one of the previous applications to note the structures used in a Notification Services application.
Notification Services is a robust and fully featured application development platform. Sometimes, though, application developers need a much less sophisticated approach for messaging. Microsoft addresses the needs of these developers with the final technology covered in this chapter, SQL Server Database Mail.
Using SQL Server Database Mail

Database developers often need to send a simple message from the database server to a user or administrator. Generally speaking, when developers have faced this requirement in the past, they wrote messaging into front-end applications or, if they were really brave, wrote code that used SQL Mail, which came with previous versions of SQL Server.
SQL Mail first came with SQL Server version 6.0 (circa 1995) and remained largely unchanged until now. SQL Server 2005 includes a completely redesigned and rewritten Database Mail engine. It's about bloody time, if you ask me; I've been requesting a pure SMTP-based solution for the better half of the past decade. Now if only Microsoft could deliver a DATE data type as well!
SQL Mail in earlier versions of SQL Server had a number of problems: its dependence on the Extended MAPI interface, its dependence on Windows profiles, its difficulty of configuration for DBAs, memory leakage, bugs, and a great ability to hang. Although in theory it could work with Lotus Notes, Novell GroupWise, and any POP3/SMTP-based email server (to mention the common implementations), it worked best with Microsoft Exchange. Consequently, it was important to ensure you had the latest versions of the mail client/MAPI interface and even to periodically restart the SQL Server services to ensure that SQL Mail was working as expected.

SQL Server Database Mail is a solution for sending email (SMTP-based) messages from within database applications. The messages can contain simple text, Hypertext Markup Language (HTML)–formatted text, query results, and files from anywhere on the network to which SQL Server has access. Database Mail is capable of scaling from small, simple applications to enterprise-class clustered applications and has been engineered to support the following:

Reliability Database Mail uses industry-standard SMTP messaging, which means there is no requirement for additional mail software on the database server. Database Mail executes in its own process, so there is no chance that a failure in the mail process will affect database operations.
No News Is Not Necessarily Good News!

SQL Server's ability to send emails is a great feature of the product. But remember, don't trust technology! Understand its limitations. When architecting a database solution, you need to consider unexpected events, not just expected ones.

I have been consulting for the better part of this decade at a major financial institution in Sydney that has hundreds of SQL Server instances. (Hello, Mandy, Ray, Joe, and Lucy!) Their SQL Server solutions use SQL Mail heavily and to great effect. For me they are the benchmark in how to manage and maintain a SQL Server environment in an enterprise. As you can see, I'm hoping to still get consulting work there.

In this particular case, a batch job needed to run daily after-hours. So, a developer, probably a contractor, correctly decided to leverage SQL Mail by coding a solution that would send an email if this batch job failed. The problem, of course, was that the developer did not anticipate the possibility that SQL Mail itself would hang. And of course it did. This meant personnel assumed that the batch job was running successfully because they had not received any emails. Wrong!

This was picked up in the user acceptance testing (UAT) phase, so it was no big deal. But it does highlight a number of points. You cannot guarantee that technology will work as expected. You need to think about more than just the expected events, inputs, and behavior in your database solution. And of course the UAT phase is important and should involve experienced testers. The solution, of course, was simple: also send a corresponding email to indicate that the batch job had run successfully.
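Database Mail makes the same "no news" trap just as easy to fall into, and the fix just as easy to script. The following is a minimal sketch of that heartbeat pattern, assuming a hypothetical batch procedure, a placeholder recipient address, and the MyProfile mail profile configured later in this chapter (sp_send_dbmail itself is covered shortly):

-- Hypothetical nightly batch wrapper: mail on success AND on failure,
-- so that silence is never mistaken for good news
DECLARE @err nvarchar(4000) ;
BEGIN TRY
    EXEC dbo.usp_RunNightlyBatch ;  -- hypothetical batch procedure
    EXEC msdb.dbo.sp_send_dbmail
        @profile_name = 'MyProfile',
        @recipients   = 'ops@example.com',  -- placeholder address
        @subject      = 'Nightly batch succeeded' ;
END TRY
BEGIN CATCH
    SET @err = ERROR_MESSAGE() ;
    EXEC msdb.dbo.sp_send_dbmail
        @profile_name = 'MyProfile',
        @recipients   = 'ops@example.com',
        @subject      = 'Nightly batch FAILED',
        @body         = @err ;
END CATCH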
Scalability Database Mail configuration allows multiple SMTP servers to be specified and will automatically use the additional servers if there is a problem with the primary server. Behind the scenes, Database Mail uses the SQL Server Service Broker infrastructure to provide background, asynchronous delivery of messages.

Security Database Mail is turned off by default and must be enabled by an administrator before any messaging will work. Database users do not have permission to send email unless they are members of the DatabaseMailUserRole role in the msdb database. Security can also be applied at the mail profile level, meaning administrators can control which users have access to the profiles stored on the server.
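To put that security model in concrete terms, granting an ordinary database user the right to send mail is a matter of role membership in msdb plus, for a private profile, an explicit profile association. Here is a minimal sketch, assuming a hypothetical login named WebAppUser and the MyProfile profile created later in this chapter:

USE msdb ;
GO
-- Map the login into msdb and let it use Database Mail
CREATE USER WebAppUser FOR LOGIN WebAppUser ;
EXEC sp_addrolemember 'DatabaseMailUserRole', 'WebAppUser' ;
-- For a private profile, grant the user access to it explicitly
EXEC sysmail_add_principalprofile_sp
    @profile_name   = 'MyProfile',
    @principal_name = 'WebAppUser',
    @is_default     = 1 ;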
Rumor has it that Bill Gates's reaction to the architecture of SQL Server's Database Mail was something along the lines of "you realize you've built the world's best email spamming engine." Or something like that.
Database Mail is simple to use; however, there is a fair amount of architecture behind the process.
Understanding the Database Mail Architecture

Unlike its predecessor, Database Mail does not use the Extended MAPI interface through a set of extended stored procedures to function. Database Mail consists of the following components:
Configuration and security components
Messaging components
Mail executable
Logging and auditing components
These components work together to form a simple yet effective architecture for messaging within SQL Server applications. The main database for all Database Mail configuration settings is the msdb system database. Database Mail configuration and security information is stored in the msdb database along with the system procedures that send email. Figure 7.10 shows the overall architecture of Database Mail.

FIGURE 7.10 Database Mail architecture (components shown: sp_send_dbmail, the Database Mail queue and status queue in msdb, the configuration and security objects, the messaging objects, the DatabaseMail90.exe process, the SMTP mail server, and the email recipients)
Database Mail relies on the message-queuing architecture of SQL Server Service Broker. The system procedure sp_send_dbmail is used to send email messages. When this procedure is executed, it places the outgoing message in the Database Mail queue in the msdb database. This activates the external mail process, DatabaseMail90.exe, which sends the message through the defined SMTP server. Once the message has been successfully delivered to the SMTP server, DatabaseMail90.exe places a success message in the status queue in msdb. It is not necessary to understand the inner workings of Database Mail to build an application using it, but it helps to understand how the process works in the event you need to troubleshoot mail issues.
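When you do need to troubleshoot, msdb exposes the mail queue's history through views you can query directly; sysmail_allitems and sysmail_event_log are standard msdb objects. A quick sketch of the first queries worth running:

USE msdb ;
GO
-- Every message Database Mail has handled, with its delivery status
SELECT mailitem_id, recipients, subject, sent_status, sent_date
FROM dbo.sysmail_allitems ;

-- Errors and warnings logged by the Database Mail process
SELECT log_date, event_type, description
FROM dbo.sysmail_event_log
ORDER BY log_date DESC ;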
Before an application can use Database Mail, you must first configure the mail service, as shown in Exercise 7.1.

EXERCISE 7.1 Configuring SQL Server Database Mail

The heart of Database Mail configuration is the mail profile. The mail profile is simply a collection of accounts to use when sending messages. You can have multiple accounts and multiple profiles on a given SQL Server instance and can delegate which database users have access to them. Profiles can be public or private. Public profiles are available to any user who is a member of the DatabaseMailUserRole role in msdb. Private profiles are available only to administrators and to the users named in the profile. The relationship between users, profiles, and accounts is shown here.
(Diagram: several database users are granted access to one or more profiles, and each profile contains one or more accounts.)
1. To enable Database Mail, open the SQL Server Surface Area Configuration tool (Start > Programs > SQL Server 2005 > Configuration Tools). This opens the SQL Server 2005 Surface Area Configuration dialog box.

2. Click Surface Area Configuration for Features, and select Database Mail from the list of features. Select the Enable Database Mail check box, and select Apply, as shown here.
3. Once Database Mail is enabled, you can configure the profiles and accounts that will use it. To configure Database Mail, open SQL Server Management Studio, and from Object Explorer, expand the Management node, right-click Database Mail, and select Configure Database Mail to open the Database Mail Configuration Wizard.

4. Click Next, and select Set Up Database Mail…, as shown here.
5. Click Next, type a name for your database profile, type a description if you want one, and click Add to open the New Database Mail Account dialog box, as shown here. (If you have previously set up a mail account, you will need to either select that account or select New Account from the previous screen before this one appears.)

6. Fill in the account name and description, the SMTP information, and the security information, and then click OK. Then select Next to open the Manage Profile Security configuration page.
7. Select the profile you just created to make it a public profile, and click Next to open the Configure System Parameters page.

8. Change the system parameters to values you prefer. (If you want to enable message retry, increase the account retry attempts from the default value of 1.) Click Next to open the summary screen.
9. Once you have read the summary screen and are sure that the options are correct, click Finish to implement your changes. This will open a report dialog box.
Once the database mail configuration is complete, you can build applications that use Database Mail stored procedures to send email messages directly from SQL Server without having to build custom mail handling routines into your applications.
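Incidentally, if you would rather script the enabling step than click through the Surface Area Configuration tool (on a build server, say), the same switch is exposed through sp_configure. A minimal sketch:

USE master ;
GO
-- 'Database Mail XPs' is an advanced option, so expose it first
EXEC sp_configure 'show advanced options', 1 ;
RECONFIGURE ;
-- Enable the Database Mail extended stored procedures
EXEC sp_configure 'Database Mail XPs', 1 ;
RECONFIGURE ;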
Managing SQL Server Database Mail

As you have seen, SQL Server 2005's email integration is no longer a kludge that relies on external COM components. It leverages the underlying Service Broker architecture to deliver an industrial-strength, enterprise-ready email solution. Consequently, you can use a number of system stored procedures to configure, manage, and monitor the Database Mail subsystem. Table 7.1 briefly describes these stored procedures.
TABLE 7.1 Database Mail Stored Procedures

Stored Procedure                        Description
sysmail_add_account_sp                  Creates a new Database Mail account holding information about an SMTP account.
sysmail_add_principalprofile_sp         Grants permission for a database user or role to use a Database Mail profile.
sysmail_add_profile_sp                  Creates a new Database Mail profile.
sysmail_add_profileaccount_sp           Adds a Database Mail account to a Database Mail profile.
sysmail_configure_sp                    Changes configuration settings for Database Mail.
sysmail_delete_account_sp               Deletes a Database Mail SMTP account.
sysmail_delete_log_sp                   Deletes events from the Database Mail log.
sysmail_delete_mailitems_sp             Permanently deletes email messages from the Database Mail internal tables.
sysmail_delete_principalprofile_sp      Removes permission for a database user or role to use a public or private Database Mail profile.
sysmail_delete_profile_sp               Deletes a mail profile used by Database Mail.
sysmail_delete_profileaccount_sp        Removes an account from a Database Mail profile.
sysmail_help_account_sp                 Lists information about Database Mail accounts. Does not return password information.
sysmail_help_configure_sp               Displays configuration settings for Database Mail.
sysmail_help_principalprofile_sp        Lists information about associations between Database Mail profiles and database principals.
sysmail_help_profile_sp                 Lists information about one or more mail profiles.
sysmail_help_profileaccount_sp          Lists the accounts associated with one or more Database Mail profiles.
sysmail_help_queue_sp                   Lists information about the state of the mail or status queues.
sysmail_help_status_sp                  Displays the status of Database Mail queues.
sysmail_start_sp                        Starts Database Mail by starting the Service Broker objects that the external program uses.
sysmail_stop_sp                         Stops Database Mail by stopping the Service Broker objects that the external program uses.
sysmail_update_account_sp               Changes the information in an existing Database Mail account.
sysmail_update_principalprofile_sp      Updates the information for an association between a principal and a profile.
sysmail_update_profile_sp               Changes the description or name of a Database Mail profile.
sysmail_update_profileaccount_sp        Updates the sequence number of an account within a Database Mail profile.
Execute sysmail_add_profileaccount_sp after both the Database Mail account and the profile have been created, because they must already exist.
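In fact, you can script the entire Exercise 7.1 configuration rather than clicking through the wizard. The following is a minimal sketch; the account details are placeholders you would replace with your own SMTP server and addresses:

USE msdb ;
GO
-- 1. Create the SMTP account
EXEC sysmail_add_account_sp
    @account_name    = 'MyAccount',
    @description     = 'Primary SMTP account',
    @email_address   = 'sqlserver@example.com',  -- placeholder
    @display_name    = 'SQL Server Mailer',
    @mailserver_name = 'smtp.example.com' ;      -- placeholder

-- 2. Create the profile
EXEC sysmail_add_profile_sp
    @profile_name = 'MyProfile',
    @description  = 'Default mail profile' ;

-- 3. Only now can the two be associated
EXEC sysmail_add_profileaccount_sp
    @profile_name    = 'MyProfile',
    @account_name    = 'MyAccount',
    @sequence_number = 1 ;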
So, if you wanted to reconfigure your Database Mail subsystem to retry three times in the case of a timeout/failure and to allow 10MB email attachments, you would execute the following script:

USE msdb ;
GO
-- Reconfigure Database Mail to retry 3 times
EXEC sysmail_configure_sp 'AccountRetryAttempts', '3'
-- Reconfigure Database Mail to allow 10MB attachments
EXEC sysmail_configure_sp 'MaxFileSize', '10240000'
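To confirm the new values took effect, you can follow up with the corresponding help procedure, which lists the current Database Mail configuration settings:

EXEC msdb.dbo.sysmail_help_configure_sp ;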
Using SQL Server Database Mail

The main interface between user applications and Database Mail is the system stored procedure dbo.sp_send_dbmail, which is located in the msdb database. The syntax for the sp_send_dbmail stored procedure is as follows:

sp_send_dbmail [ [ @profile_name = ] 'profile_name' ]
    [ , [ @recipients = ] 'recipients [ ; ...n ]' ]
    [ , [ @copy_recipients = ] 'copy_recipient [ ; ...n ]' ]
    [ , [ @blind_copy_recipients = ] 'blind_copy_recipient [ ; ...n ]' ]
    [ , [ @subject = ] 'subject' ]
    [ , [ @body = ] 'body' ]
    [ , [ @body_format = ] 'body_format' ]
    [ , [ @importance = ] 'importance' ]
    [ , [ @sensitivity = ] 'sensitivity' ]
    [ , [ @file_attachments = ] 'attachment [ ; ...n ]' ]
    [ , [ @query = ] 'query' ]
    [ , [ @execute_query_database = ] 'execute_query_database' ]
    [ , [ @attach_query_result_as_file = ] attach_query_result_as_file ]
    [ , [ @query_attachment_filename = ] query_attachment_filename ]
    [ , [ @query_result_header = ] query_result_header ]
    [ , [ @query_result_width = ] query_result_width ]
    [ , [ @query_result_separator = ] 'query_result_separator' ]
    [ , [ @exclude_query_output = ] exclude_query_output ]
    [ , [ @append_query_error = ] append_query_error ]
    [ , [ @query_no_truncate = ] query_no_truncate ]
    [ , [ @mailitem_id = ] mailitem_id ] [ OUTPUT ]
To send a simple email message using the profile created previously, you would execute the following script:

EXEC msdb..sp_send_dbmail
    @profile_name = 'MyProfile',
    @recipients = '[email protected]',
    @subject = 'Training/Consulting Requirement',
    @body = 'Dear Victor, We need some SQL Server training and consulting. Are you available?',
    @importance = 'High',
    @body_format = 'Text'
Database Mail is an easy-to-use tool included in SQL Server that lets you embed email messaging directly in your database applications.
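Beyond simple text messages, sp_send_dbmail can run a query and deliver the results, which covers the classic "mail me the report" request without any front-end code at all. A sketch, again assuming the MyProfile profile and a placeholder recipient:

EXEC msdb.dbo.sp_send_dbmail
    @profile_name = 'MyProfile',
    @recipients   = 'sales@example.com',  -- placeholder
    @subject      = 'Top 10 orders by value',
    @query        = 'SELECT TOP 10 SalesOrderID, TotalDue
                     FROM AdventureWorks.Sales.SalesOrderHeader
                     ORDER BY TotalDue DESC',
    @attach_query_result_as_file = 1 ;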
Database architects and developers always seem to want to do things in a more complicated fashion than required. A common example is when users need to be alerted via email when some event happens in the database. Invariably they start talking about generating a custom error message via the RAISERROR T-SQL statement in a stored procedure or trigger, which writes to the Windows Event log, which in turn generates a SQL alert that sends an email. Phew…They forget you can directly send an email via sp_send_dbmail in the same stored procedure or trigger, saving time and processor resources!
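For instance, the direct approach might look something like the following sketch: a DML trigger that mails an alert via sp_send_dbmail, with no detour through the event log or SQL Server Agent. The dbo.Orders table and the recipient address here are hypothetical:

CREATE TRIGGER trg_LargeOrderAlert
ON dbo.Orders  -- hypothetical table
AFTER INSERT
AS
BEGIN
    -- Mail the manager directly whenever a large order arrives;
    -- delivery is asynchronous, so the INSERT is not held up
    IF EXISTS (SELECT 1 FROM inserted WHERE TotalDue > 100000)
        EXEC msdb.dbo.sp_send_dbmail
            @profile_name = 'MyProfile',
            @recipients   = 'sales-manager@example.com',  -- placeholder
            @subject      = 'Large order alert',
            @body         = 'An order over $100,000 was just entered.' ;
END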
Migrating from SQL Mail to Database Mail

Migrating from SQL Mail to Database Mail is really quite simple. SQL Mail is included in SQL Server 2005 only for backward compatibility, and it will be removed in a future version of SQL Server.
SQL Mail is not supported on 64-bit versions of SQL Server 2005. SQL Mail stored procedures cannot be installed on the 64-bit versions of SQL Server 2005.
Additionally, there are behavioral differences between SQL Mail in SQL Server 2005 and earlier versions. SQL Server–authenticated users can send email attachments using SQL Mail only if they are members of the sysadmin fixed server role. So, obviously, you should avoid using SQL Mail for any new database solutions. However, you might have application components that currently use this feature. In this case, you should plan to modify these components to use Database Mail instead. This means recoding any application components or T-SQL code that uses the various stored procedures and extended stored procedures from earlier versions of SQL Server:
sp_processmail
xp_deletemail
xp_findnextmsg
xp_readmail
xp_sendmail
xp_startmail
xp_stopmail
The xp_sendmail extended stored procedure was the primary mail procedure used in earlier versions of SQL Server. You should replace calls to xp_sendmail with calls to sp_send_dbmail. Table 7.2 shows the conversion of arguments between the two procedures.
TABLE 7.2 SQL Mail to Database Mail Parameter Conversion

xp_sendmail Argument        sp_send_dbmail Argument
@attach_results             @attach_query_result_as_file
@attachments                @file_attachments
@blind_copy_recipients      @blind_copy_recipients
@copy_recipients            @copy_recipients
@dbuse                      @execute_query_database
@echo_error                 (no equivalent)
@message                    @body
@no_header                  @query_result_header
@no_output                  @exclude_query_output
@query                      @query
@recipients                 @recipients
@separator                  @query_result_separator
@set_user                   (no equivalent)
@subject                    @subject
@type                       (no equivalent)
@width                      @query_result_width
(no equivalent)             @profile_name
(no equivalent)             @body_format
(no equivalent)             @importance
(no equivalent)             @sensitivity
(no equivalent)             @query_attachment_filename

As you can see, some sp_send_dbmail parameters have no counterpart in SQL Mail, and a few xp_sendmail arguments (@echo_error, @set_user, and @type) have no direct Database Mail equivalent.
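To make the conversion concrete, here is what a typical migration looks like side by side. This is only a sketch with placeholder addresses, but the old call is the pattern you would be hunting for in legacy code:

-- Old SQL Mail call (deprecated)
EXEC master.dbo.xp_sendmail
    @recipients = 'dba@example.com',  -- placeholder
    @subject    = 'Order count',
    @query      = 'SELECT COUNT(*) FROM AdventureWorks.Sales.SalesOrderHeader' ;

-- Database Mail equivalent
EXEC msdb.dbo.sp_send_dbmail
    @profile_name = 'MyProfile',
    @recipients   = 'dba@example.com',
    @subject      = 'Order count',
    @query        = 'SELECT COUNT(*) FROM AdventureWorks.Sales.SalesOrderHeader' ;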
Using SQL Server Agent Mail

The SQL Server Agent service is a Windows service that is typically used to automate administrative and other tasks by scheduling jobs to run at particular times, monitoring SQL Server, and processing SQL Server alerts. It includes the ability to send emails. Typically, SQL Server Agent is configured to send emails to predefined operators when some scheduled task, such as a backup, replication job, or data transfer task, succeeds or fails. You can also use it to generate an alert when a particular database event or operating system condition occurs. SQL Server Agent Mail supports both Database Mail and SQL Mail. Figure 7.11 shows where you can set up email support for SQL Server Agent via SQL Server Management Studio. I covered SQL Server Agent in more detail when we looked at "Designing Objects That Perform Actions" at the end of Chapter 2.

FIGURE 7.11 Configuring email support for SQL Server Agent
Summary

In this chapter, you learned about the new service-oriented architecture paradigm and how SQL Server 2005 has changed from a simple database server to a fully SOA-aware application server. I covered the architecture of SQL Server Service Broker, and you learned how you can leverage its components to build scalable, asynchronous, message-based applications. You then learned about the SQL Server Notification Services architecture and how you can use its infrastructure to develop applications that generate and send notifications to subscribers. The chapter finally covered the SQL Server Database Mail component of SQL Server 2005, its architecture, how to configure it, and how to send emails. You also learned that SQL Mail is being deprecated and should not be used.
Exam Essentials

Know how to select and design SQL Server services to support business needs. This is a subjective section, but it focuses mainly on service-oriented architectures and how SQL Server 2005 enables the creation of SOA-based applications. Service Broker is mentioned several times on the exam, so understand how Service Broker is configured and used.

Understand how to develop applications for Notification Services. This section of the exam focuses heavily on Notification Services and the various configuration requirements. Remember the syntax and elements of both the ICF and ADF files, as well as the limitations of SQL Server Standard Edition when it comes to Notification Services.

Know how to design data distribution. This section of the exam focuses on using Database Mail and Service Broker and how you can use each technology within service-oriented architectures.
Review Questions

1. You are an enterprise application architect and have been asked to explain to your employer why you would benefit from service-oriented architectures. You have many applications that have been built using DCOM and CORBA technologies. What advantages will SOA bring to your organization?
A. Service-oriented architecture applications are generally smaller than their component counterparts and are therefore easier to maintain.
B. Service-oriented architectures are generally implemented with self-contained services; therefore, you'll have fewer issues with client applications needing to cross application and network boundaries.
C. Service-oriented architecture is fully integrated into SQL Server 2005 and Visual Studio 2005; therefore, you don't need any third-party development tools.
D. All of the above.

2. How many participants can there be in a Service Broker conversation?
A. One
B. Two
C. Three
D. Unlimited

3. When a Service Broker application is configured, which configuration element specifies the types of messages that can be exchanged?
A. Service contract
B. Message type
C. Conversation group
D. Service dialog

4. You are an application developer who needs to build a framework for asynchronous message passing. You read in SQL Server Books Online that Service Broker is designed for asynchronous operations, but you are concerned about transactional boundaries. Which Service Broker feature allows you to define transactional boundaries for message-based applications?
A. Conversation groups
B. Message dialogs
C. Thread pools
D. Service Broker instances
5. You are an application developer building a Service Broker application. You are concerned that a particular exchange may be long running, but you want to ensure that the message times out after a reasonable time. What can you do to ensure that your message does time out?
A. Implement a conversation timeout dialog.
B. Utilize a message timer component, and roll back the transaction if the maximum time is exceeded.
C. Specify the dialog lifetime when initiating the conversation.
D. Service Broker is asynchronous, and therefore by definition messages do not time out.
6. You are a developer working on a Service Broker application and want to move your code to a staging server to begin some testing. Once you have moved your code, you are unable to get anything to work properly. The DBA of the staging server explains that Service Broker is currently "off." How do you turn on Service Broker so you can test your application?
A. Use the SQL Server Surface Area Configuration tool to enable Service Broker applications.
B. Run the SQL Server installation to enable Service Broker.
C. Use SQL Server Management Studio to enable Service Broker.
D. Execute the command ALTER DATABASE…ENABLE BROKER.
7. The administrator of a SQL Server where you have recently deployed a new Service Broker application is complaining that the Service Broker queues are getting large and conversations appear to be never ending. What have you most likely forgotten to do in your application?
A. Set a conversation timeout.
B. Enable a dialog teardown request.
C. Execute the END CONVERSATION command.
D. Execute the COMMIT TRANSACTION command.
8. You are the developer within your team who is responsible for alerts that will be generated when certain conditions are met. You need to build a system that is capable of handling asynchronous messaging to make people aware of the alerts. Which SQL Server 2005 technology is best suited for this type of application?
A. Notification Services
B. Service Broker
C. Database Mail
D. DDL triggers
9. You are the developer of an application that uses Notification Services. When you built the application, you used the minimal ADF configuration template as a guide. The DBA of the SQL Server running your application would like to rename the database to fit corporate standards. What must you do?
A. Simply rename the database in SQL Server Management Studio.
B. Detach the Notification Services database, and reattach it using the new name.
C. Modify the ADF, and recompile the instance.
D. You cannot rename the database once it is created.
10. The DBA responsible for the SQL Server where you have recently deployed your Notification Services application is complaining about poor performance from other applications running on the server. What is the best possible solution to this problem?
A. Move other applications off the server, and dedicate the entire server to Notification Services.
B. Move the Notification Services components to a server of their own.
C. Add random access memory (RAM) to the server.
D. Check to ensure tempdb has been properly optimized for use with Notification Services.

11. You are defining the event classes for your Notification Services application. What is the most important consideration when defining event classes?
A. Keep the event class tables "narrow."
B. Ensure event classes have descriptive definitions.
C. Ensure each event class has its own filegroup.
D. Ensure each event class has a unique name.

12. You are developing a Notification Services application where one of the requirements is that notifications be delivered in HTML. Which content format should you use in the Notification Class configuration?
A. HTML
B. Text
C. XSLT
D. XML

13. You are building a Notification Services application to monitor your company's key performance indicators that have been defined within a data-warehousing application. Which event provider can you use to process MDX queries?
A. The hosted event provider
B. The T-SQL event provider
C. The Analysis Services event provider
D. The FileSystem event provider

14. The DBA responsible for the system where you have deployed your Notification Services application has observed that the event generator seems to be generating events in a serial fashion, even though you have configured the thread pool to generate a maximum of 10 events in parallel. What is the possible problem?
A. The event generator configuration is incorrect.
B. The wrong edition of SQL Server is installed.
C. Something happened when the ADF was compiled.
D. The machine does not have enough memory.
15. You are designing a Notification Services application where one of the requirements is that events are processed every five seconds. You configured the application using the minimal ADF template and notice that events are being processed every 60 seconds. Where in the ADF can you change this duration?
A. The EventDistributor Thread Pool size element
B. The EventDistributor Quantum Duration element
C. The EventNotifier Timestamp element
D. The EventProvider Schedule element

16. You are an application developer responsible for maintaining your organization's Notification Services applications. Your corporate policy has recently changed to require that all custom-built applications conform to a specific versioning strategy. Where can you configure version information in your Notification Services applications?
A. Configure version information using NMO and the .NET global assembly cache (GAC).
B. Configure version information using the Version element in the ICF.
C. Configure version information manually using the version.xml file.
D. Create a .NET resource file that contains version information.

17. When the DBA on the server you have just moved your Notification Services application to tries to start the instance, the service fails with a login failure. What must you do to allow your application to run on the new server?
A. Ensure the account is granted the RunService operating system right.
B. Ensure the account is a member of the NSRunService database user role.
C. Ensure the account is a member of the Server Administrators server role.
D. Ensure the account has FULL CONTROL permissions to the Registry on the SQL Server.

18. You are an application developer responsible for creating SQL Server 2005 applications that send email notifications on a regular basis. Which technology is best suited for sending rich, industry-standard email in SQL Server 2005?
A. SQL Mail
B. Notification Services
C. Service Broker
D. Database Mail

19. You are developing a SQL Database Mail application and want to limit the users who are allowed to send messages through the application. What can you do to ensure not all users can send email?
A. Ensure that only users who are allowed to send mail have access to your application.
B. Ensure that only users who are allowed to send mail are placed in the DatabaseMailUserRole database role in the msdb database.
C. Ensure that only users who are allowed to send mail have EXECUTE access to the Database Mail extended stored procedures.
D. You cannot limit access to Database Mail once it is configured.
20. You are creating an application that will use Database Mail to send email messages. Different portions of the application will send messages to multiple recipients with different FROM addresses depending on the portion of the application from which the message was generated. How will you configure this?
A. Use multiple email accounts in a single profile.
B. Use multiple email profiles.
C. No configuration is necessary; the application controls this.
D. Use multiple SMTP servers.
Answers to Review Questions

1. B. The main driving force behind SOA is the reduction of coupling between application components. Both DCOM and CORBA suffer from the need to maintain network information and credentials between components.

2. B. There are always exactly two participants in any Service Broker conversation.

3. A. Service Broker service contracts specify the types of messages that can be passed between Service Broker services.

4. A. Service Broker conversation groups are designed to group messages into a single transactional unit.

5. C. The dialog lifetime specifies how long a specific dialog can last. If that time is exceeded, a dialog timeout message is sent to both parties of the dialog automatically.

6. D. For Service Broker to function, the database you are using must be enabled for Service Broker.

7. C. Dialogs in Service Broker do not end automatically and must be completed with the END CONVERSATION command.

8. A. Notification Services is designed to react to events that occur and send notifications to subscribers when those events occur. Notification Services is extensible and scalable.

9. C. You define the database instance configuration for Notification Services applications through the ADF. If you need to make a change, you must modify the ADF and recompile the instance.
10. D. Notification Services uses tempdb extensively, so any server running a Notification Services application must have an optimized tempdb database.

11. A. When defining event classes, it is imperative that the table definition be as small or narrow as possible. The narrower the table, the more efficiently the event can be processed.

12. C. The XSLT content formatter is designed to transform the notification from the base information stored in the schema into a format viable for presentation.

13. C. The Analysis Services event provider is capable of using both static and dynamic MDX queries in order to query Analysis Services cubes.

14. B. SQL Server Standard Edition will support only a maximum of one thread per event generator. This means all events will be generated in a serial fashion, and any thread pool configuration information will be ignored.

15. B. Events are processed on a schedule known as the quantum duration, which is a component of the event distributor.

16. B. All Notification Services version information is stored in the Version element of the ICF.
17. B. You must manually add the Windows account as a login in SQL Server and as a member of the NSRunService database role for Notification Services to function properly.

18. D. SQL Server Database Mail has been completely rewritten in SQL Server 2005 to be a robust, scalable platform to send email messages based on the industry-standard SMTP.

19. B. Database Mail is secure and ensures only those users with permission can actually send messages.

20. B. The email profile specifies the FROM address attached to the message.
Chapter 8

Designing a Reporting Services Solution

MICROSOFT EXAM OBJECTIVES COVERED IN THIS CHAPTER:

Design an application solution that supports reporting.
Design a snapshot strategy.
Design the schema.
Design the data transformation.
Design the indexes for reporting.
Choose programmatic interfaces.
Evaluate use of reporting services.
Decide which data access method to use.
Design data distribution.
Specify a Reporting Services solution for distributing data.
Develop applications that use Reporting Services.
Specify subscription models, testing reports, error handling, and server impact.
Design reports.
Specify data source configuration.
Optimize reports.
Select and design SQL Server services to support business needs.
Design a Reporting Services solution.
When Microsoft introduced SQL Server Reporting Services (SSRS) for SQL Server 2000, many considered it an amazing application add-on. Finally, a product allowed application developers to develop rich reports and integrate them without purchasing expensive add-on licenses or paying huge royalties for commercial applications. Soon after the release of SSRS, however, it became apparent that it truly was a "version 1" product with many obstacles to overcome. The release of SQL Server 2005 saw SSRS included as a component that has overcome most of those early obstacles, and Microsoft has added much-awaited functionality that makes SSRS a compelling solution in the database-reporting application world.

SSRS targets an entirely new business audience: users who want to interact with their reports in an ad hoc fashion as well as create and distribute new and exciting report formats and designs. SSRS is the first integrated reporting solution that targets both the "hard-core" developer and the casual user who wants to develop reports and possibly distribute them. SSRS focuses on four main product themes:

Improved core features SSRS for SQL Server 2005 ships with many improvements over the 2000 version, including more options for designing reports, processing reports, and interacting with report consumers; it also offers better performance and scalability.

Better integration SSRS supports much better integration with SQL Server components, such as Integration Services, Analysis Services, and SQL Server Management Studio. SSRS also supports enhanced integration with Microsoft SharePoint technologies, allowing for a much richer executive dashboard experience.

Better developer experience SSRS has much tighter integration with Microsoft Visual Studio 2005, including several new, freely distributable ASP.NET and WinForms controls, making it much easier to distribute SSRS reports within custom applications.

Better user experience SSRS includes a new application called Report Builder, which is an ad hoc report-building tool that allows users to develop their own SSRS reports "on the fly" without a lot of knowledge of the underlying data on which the report is built.

All these new features combine with already existing features to make SSRS a compelling report generation tool that any developer responsible for data-driven applications should make an effort to understand. This chapter will use step-by-step examples to show how to create reports using SSRS; you have three options for developing reports, as covered in the following section.
Understanding Your Options for Developing Reports

You can build reports using SSRS in three ways:

The Visual Studio Report Wizard This is a simple wizard-driven interface built in to Microsoft Visual Studio 2005 that allows you to easily generate reports.

The Visual Studio Report Designer This is the primary developer-driven report generation tool. You can access this via the full-blown Microsoft Visual Studio system or the basic Visual Studio system that is installed with SQL Server 2005.

Report Builder Report Builder is a Microsoft ClickOnce WinForms application that is installed from the report server and has the look and feel of Microsoft Office applications. Report Builder is limited to a specific set of templates.

No matter which method you choose to create reports, the result is an .rdl file that represents the report.
Report Definition Language (RDL) is an Extensible Markup Language (XML) schema that defines all the elements and objects that can be part of an SSRS report. For more information about RDL, see www.microsoft.com/sql/technologies/reporting/rdlspec.mspx.
Understanding the Report Model

One of the more important new features of SSRS in SQL Server 2005 is the ability of users to create ad hoc reports using the Report Builder application introduced previously. It is important to understand that Report Builder uses a report model as the basis of a report. A report model is basically a layer of abstraction that sits on top of your data: a semantic model separated from the physical model. It is essentially a metadata description of the data structures and relationships that you allow users to work with when creating these reports. Figure 8.1 shows the objects that make up this semantic model. The report model basically allows developers to control what data users will be able to report against, and it hides the potential complexity of the database solution. You don't have to waste your precious time developing endless reports because the users can do it themselves now!
FIGURE 8.1 Report model semantic objects (the semantic model contains entities, entity folders, fields, field folders, attributes, expressions, roles, and paths; the legend distinguishes containment, reference, and inheritance relationships)
Building a Report Model

You can create report models by creating Report Model projects through the SQL Server Business Intelligence Development Studio (Visual Studio really). To create a report model, simply create a new project in Visual Studio, select the Report Model Project template, provide the relevant name, and click OK, as shown in Figure 8.2. You first need to define the data source against which the model will be built. This is a simple process that is taken care of for you through the Data Source Wizard. For this example, you will use the AdventureWorks database located on a SQL Server instance. To start the Data Source Wizard,
right-click the Data Sources folder located in Solution Explorer, and choose Add New Data Source. This opens the Data Source Wizard's welcome screen, as shown in Figure 8.3. Click the Next button to get to the window shown in Figure 8.4, which is where you will create the various connections to the data sources that will make up your report model. In this case, you want to create a simple report model against the AdventureWorks database, so click the New button to get to the Connection Manager dialog box. Type (local) for your SQL Server server name, and select the AdventureWorks database from the drop-down list, as shown in Figure 8.5.

FIGURE 8.2 Choosing the Report Model Project template

FIGURE 8.3 Using the Data Source Wizard
FIGURE 8.4 Creating data sources

FIGURE 8.5 Connection Manager dialog box
Click the OK button. You should now be back to the Data Source Wizard, but this time it displays a configured data connection, as shown in Figure 8.6. Notice you are using the native SQL client to connect to the SQL Server instance as opposed to OLE DB or ODBC. Click the Finish button, and give the data source a name by typing Adventure Works. You have just created a data source connection!
Next, you need to create a data source view. Data source views basically describe the database(s) to which the data sources defined earlier are pointing. They fundamentally contain the metadata of the data sources, such as the various entities that exist and the relationships between them. To create a data source view, right-click the Data Source Views folder in Solution Explorer, and choose the Add New Data Source View option. This opens the Data Source View Wizard, as shown in Figure 8.7.

FIGURE 8.6 A configured data connection

FIGURE 8.7 Using the Data Source View Wizard
Click the Next button to get to the next window, as shown in Figure 8.8, which allows you to select the defined data sources you will use in this data source view. In this case, you have only one data source defined, so click the Next button. You should now see the window shown in Figure 8.9, which allows you to choose the entities (tables or views) that will be available in this particular data source view. In this example, you are creating a report model for the sales department. So, highlight all the views that belong to the Sales schema, and add them to the list of included objects.

FIGURE 8.8 Selecting a defined data source

FIGURE 8.9 Including objects
Click the Next button. Give the data source view a name of Adventure Works – Sales, as shown in Figure 8.10. Then click the Finish button. Next, you need to define the report model. So, right-click the Report Models folder in Solution Explorer, and choose the Add New Report Model option. This kicks off yet another wizard, this time the Report Model Wizard shown in Figure 8.11. Click the Next button to get to the window shown in Figure 8.12, which allows you to choose the data source view you will be using as a basis for the report model. In this case, it is the Adventure Works – Sales data source view created previously.

FIGURE 8.10 Naming the data source view

FIGURE 8.11 Using the Report Model Wizard
The next window, shown in Figure 8.13, allows you to control what metadata will be generated through the Report Model Wizard. You should leave the defaults.
For more information about what the various rules do within the Report Model Wizard, read the “Select Report Model Generation Rules (Report Model)” topic in SQL Server 2005 Books Online.
FIGURE 8.12 Choosing a data source view

FIGURE 8.13 Report Model Wizard's Select Report Model Metadata Generation Rules window
Leave the defaults, and click the Next button. The next window, as shown in Figure 8.14, basically controls how the Report Model Wizard will collect statistics about your data source so it can set various properties for the report model. Since this is a virgin report model, leave the default of updating the statistics. Click the Next button. Name the report model Adventure Works – Sales, as shown in Figure 8.15, and click the Run button.

FIGURE 8.14 Report Model Wizard's Collect Model Statistics window

FIGURE 8.15 Naming the report model
You will need to wait while the Report Model Wizard does its "magic." Eventually, after a number of passes, it should complete, and you should see a screen similar to that shown in Figure 8.16. Click the Finish button to complete the process. Finally! These wizards are hard work, ain't they? And I thought they were supposed to make our lives easier! What you should now have is a report model defined as shown in Figure 8.17.

FIGURE 8.16 Report Model Wizard's Completing the Wizard window

FIGURE 8.17 The report model you created
Once you have defined the report model, you can customize and tweak it depending on your requirements. You could, for example, rename attributes or even delete them.
Deploying a Report Model

Once you are happy with your report model, it is time to publish it to a report server. You can publish it in a few ways. Figure 8.18 shows one technique: select the Build menu, and then choose the Deploy AdventureWorks Report Model option.
Don’t forget to republish the report model if the data source view changes.
Make sure you check for any warnings or errors in the Output pane, as shown in Figure 8.19.
Using a Report Model

I'll tie up the report model discussion by showing you how users will work with it. Remember, you are concentrating on empowering information workers to create their own ad hoc reports. I will be covering how to design reports in more detail shortly. What I'll cover now is how your information workers can connect to your reporting server and work with the report model.

FIGURE 8.18 Publishing the report model
FIGURE 8.19 Checking for deployment errors and warnings

Figure 8.20 shows a virgin report model website. Information workers who want to create an ad hoc report can click the Report Builder button.

FIGURE 8.20 Report server website
The Report Builder button launches the Report Builder application discussed earlier, enabling the information worker to create ad hoc reports against the defined report model. Figure 8.21 shows an information worker selecting the Adventure Works – Sales report model created previously. Once the user has chosen the Adventure Works – Sales report model, they will be able to create their own reports, as shown in Figure 8.22. They will be able to see only the entities and attributes you have exposed to them through the report model.
FIGURE 8.21 Selecting Report Model in Report Builder

FIGURE 8.22 Building an ad hoc report within Report Builder

Using the SSRS Report Wizard to Design Reports

The Report Wizard is a component of the SSRS Report Designer in Visual Studio. Visual Studio contains a project template called the Report Server Project Wizard, as shown in Figure 8.23, that launches the Report Wizard.

FIGURE 8.23 Report Server Project Wizard template
The Report Wizard is a simple interface that allows report designers to create either tabular reports (reports that show data in a table format with a fixed number of columns) or matrix reports (reports that show data in a cross-tab format with dynamic columns). To use the Report Wizard, simply create a new project in Visual Studio, select the Report Server Project Wizard template, give the project a name, and click OK. This will open the Report Wizard welcome page, as shown in Figure 8.24. Once the welcome page has appeared, click the Next button, which will open the Select the Data Source window, as shown in Figure 8.25. The Select the Data Source window allows you to either create a new data source or specify an existing data source to use to connect to a data source for your new report.

FIGURE 8.24 Report Wizard welcome page
FIGURE 8.25 The Select the Data Source window allows you to select an existing data source or create a new one for your report.
To create a new data source, click Edit to open the Connection Properties dialog box. Select a database server and the database you want to use as the source for your report, and click OK, as shown in Figure 8.26. This will return you to the Select the Data Source window. From the Select the Data Source window, click Next to open the Design the Query window. This window either allows you to execute the Query Builder tool, which is a graphical tool designed to simplify query creation, or allows you to enter the SQL to be used in the report. Enter the query SELECT * FROM Production.vProductAndDescription as shown in Figure 8.27, and then click Next to open the Select the Report Type window.

FIGURE 8.26 The Connection Properties dialog box allows you to specify the connection information for your report.

FIGURE 8.27 The Design the Query window allows you to specify the query for your report.
The Select the Report Type window allows you to select whether your report will be a simple tabular report or a more complex cross-tab report, as shown in Figure 8.28. In this case, create a tabular report. Select the Tabular radio button, and click Next to open the report table designer, as shown in Figure 8.29. The table designer allows you to select how the table will be constructed, such as which columns will appear on page headers, how the report will be grouped, and what details to show on the report.

FIGURE 8.28 The Select the Report Type window allows you to select what type of report you will create.

FIGURE 8.29 The table designer allows you to specify the design of your tabular report.
Once you've specified the table design, click Next to open the table layout designer, as shown in Figure 8.30. This window allows you to specify the "look and feel" of the report, whether the data will be "blocked" (boxes painted around the cells), and whether to include subtotals or drill-down capabilities. Once you've specified the layout, click Next to open the table style designer, as shown in Figure 8.31. This window allows you to specify a template to use when rendering the report. Microsoft includes some basic templates that are similar to Microsoft Office design layouts.

FIGURE 8.30 The table layout designer allows you to specify how the report table will look.

FIGURE 8.31 The table style designer allows you to specify a template to use when rendering the report.
Once you have applied a template to the report, click Next to open the final Report Wizard window. This window allows you to see exactly what the report contains and allows you to preview the report, as shown in Figure 8.32. Once you've saved the report, it opens in Visual Studio. If you selected the Preview Report check box on the previous page, the report will be rendered and shown in preview mode, as shown in Figure 8.33.

FIGURE 8.32 The Report Wizard's final screen allows you to view the specific report settings and preview the report.
The Report Wizard is a simple, easy-to-use way to develop simple tabular or cross-tab reports. Once the RDL file is created, you can edit it just like any other Reporting Services report in Visual Studio.
Using Microsoft Visual Studio to Develop Reporting Services Reports

If you want more control over how your report is created or if you want to develop complex reports, you must use the report designer within Visual Studio. The report designer is a freeform tool that starts with a blank sheet and allows you to add the controls and elements that you want to include in your report.
FIGURE 8.33 Once the report is saved, it opens in Visual Studio in preview mode.
Developing complex reports is well beyond the scope of this book; however, many great resources are available if you want to learn more about designing complex SQL Server Reporting Services reports. See the Microsoft Reporting Services resources page at www.microsoft.com/sql/technologies/reporting/default.mspx.
Creating a report from "scratch" is not much more difficult than using the wizard, because the steps to create a report are the same no matter which method you choose. The basic steps for creating a report are as follows:

1. Create the basic RDL file.

2. Add connection information.

3. Add query information.

4. Add report elements to the RDL file.
To create a new report from scratch and add it to the existing project that was created in the previous steps, first right-click the Reports folder in your Visual Studio project. Then select
Add New Item, and then select Report from the Templates list in the Add New Item dialog box, as shown in Figure 8.34. Once you've added the report to Visual Studio, a blank report will open showing that there are no configured data sources, as shown in Figure 8.35.

FIGURE 8.34 Adding a new report from scratch
FIGURE 8.35 Once you’ve added the new report to Visual Studio, the Data Connections window will appear.
To add a new connection, select New Dataset from the Dataset drop-down menu at the top of the window. This will open the Data Source dialog box, as shown in Figure 8.36. (This is the same window that appears when you use the wizard to add a new report.) To add a new data source, click Edit in the Data Source dialog box, which opens the Connection Properties dialog box, as shown in Figure 8.37. Select the server and database you want to connect to, and click OK, which will return you to the Data Source dialog box.

FIGURE 8.36 Selecting a data source for the new report

FIGURE 8.37 Configuring a data source for the new report
SSRS for SQL Server 2005 supports multiple data sources for reports. These data sources are not limited to relational data. These are the data sources supported by SSRS:
SQL Server 2005
SQL Server 2000
SQL Server 7
SQL 2005 Analysis Services
SQL 2000 Analysis Services
Oracle
OLEDB
ODBC
XML documents
Developers can also develop custom data sources, allowing for connections to data sources not specified previously. These are implemented as custom data-processing extensions. Each data-processing extension includes a standard set of interfaces, and by mixing custom data-processing extensions with built-in data-processing extensions, developers can enable seamless integration with enterprise-wide heterogeneous data sources.
When choosing a programmatic interface to use to connect to your data source, you should always try to use the native one if it exists. Otherwise, try using a more modern one (such as OLEDB) over an older technology (such as ODBC).
For the report to work properly, you must specify the credentials to use for the data source. Select the Credentials tab in the Data Source dialog box, and select the appropriate credentials to use for the report, as shown in Figure 8.38.

FIGURE 8.38 You must select appropriate credentials to ensure the report can access data.
You can choose from several options on the Credentials tab; these options determine which credentials are used when the report is rendered by SSRS:

Use Windows Authentication (Integrated Security) Use the credentials of the logged-on user to connect to the data source.

Use a Specific User Name and Password Use SQL Server authentication with the username and password supplied. (This is an insecure method, because the credentials are stored in the connection string.)

Prompt for Credentials This option will prompt the user for credentials when the report is executed.

No Credentials This option will not use any credentials to connect to the data source.

Once you've supplied the credentials, select OK to save the data source, which will open the report query window. The report query window allows you to specify the query that is executed to provide the DataSet used in the report. Type the following query as the query for your report:

SELECT S.OrderDate, S.SalesOrderNumber, S.TotalDue,
       C.FirstName, C.LastName
FROM HumanResources.Employee E
INNER JOIN Person.Contact C ON E.ContactID = C.ContactID
INNER JOIN Sales.SalesOrderHeader S ON E.EmployeeID = S.SalesPersonID
Once you’ve entered the query into the query window, select the exclamation point to execute the query and populate the DataSet, as shown in Figure 8.39.
FIGURE 8.39 Creating the query and populating the DataSet to be used by the report
This is a simple example of a query that you can use to create a report. You can also use the Query Builder to construct the query for you at this stage.
Once you’ve populated the DataSet, you need to create a report layout. Select the Layout tab at the top of the data window, which opens a blank report layout designer. Open the Toolbox to display a list of controls that you can use in the report, as shown in Figure 8.40. To create a simple table layout, drag a Table control from the Toolbox onto the report, which will create a simple table consisting of three rows and three columns, as shown in Figure 8.41. (Note that the Toolbox is set to autohide so the DataSet columns are visible during this process.) As shown in Figure 8.41, the table control consists of three main elements: the header, detail, and footer information. The header will display column headings, the detail will display the actual data, and the footer will display page and column footer information. Copying the appropriate data elements (in this case, OrderDate, SalesOrderNumber, and TotalDue) from the DataSet onto the table columns will result in the simple report shown in Figure 8.42.
FIGURE 8.40 Creating the report layout using the report designer and the Visual Studio Toolbox
FIGURE 8.41 Creating a simple table layout report
FIGURE 8.42 Applying the appropriate data elements and previewing the report
The report shown in Figure 8.42 is a basic report, yet it was easy to create, and because of the SSRS architecture, it is easy to incorporate into applications you develop. You can use some simple techniques to make the report more pleasing to the eye, and Visual Studio’s report designer provides the controls necessary to extend the functionality of the report as necessary.
Understanding Report Layout and Rendering Options
As demonstrated earlier, you can create an SSRS report through a simple wizard interface or from scratch using the Visual Studio Report Designer for Reporting Services. No matter which method you choose to create reports, the end result is stored in an RDL file that you can package and ship with your applications as necessary. Every RDL file contains the report definition, which is basically broken down into three main areas:
The report header The header contains information that repeats for every page in the report and is generally placed at the top of the page.
The report body The body contains the actual information in the report.
The report footer Like the header, the footer contains information that repeats for every page. The footer usually appears at the bottom of each page.
Technically you cannot add data-bound controls to report headers or footers; however, it is possible to write expressions that indirectly reference data-bound controls.
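For example, one way to echo a body value in the page footer is to reference the rendered text box through the ReportItems collection instead of the Fields collection. A minimal sketch, assuming the report body contains a text box named txtManager (the name is hypothetical):

=ReportItems!txtManager.Value

Because this expression references a report item rather than a field, SSRS permits it outside the report body.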
Working with Report Data Regions
Every SSRS report must contain at least one DataSet that is used to obtain data for the report. Each DataSet has at least one field that references a data column, alias, or calculated expression. Each field has a name associated with it, which is the name that the report designer uses to reference that field. For example, if a DataSet has a column named LastName, SSRS will reference this column as Fields!LastName. Each report has one or more data regions, which are report items that display repeating rows of data from underlying DataSets. SSRS has four basic data region layouts:
Table layout In table layouts, data is arranged in columns and rows. Tables have a fixed number of columns, and rows are determined by the underlying DataSet. Tables are flexible and can contain as many columns as you want. Each individual cell within a table can contain formatting information, and much like a Microsoft Excel spreadsheet, you can merge cells with other cells to provide for complex layout scenarios.
Matrix layout In matrix layouts, data is also arranged in columns and rows; however, columns in a matrix layout are dynamic. Matrix layouts are often called cross-tab reports or pivot-table reports. Matrix reports are best used to display decision support or online analytical processing (OLAP) data.
List layout A list data region repeats with each group or row in the DataSet. Lists are generally embedded within either table layouts or matrix layouts to group data elements.
Chart layout A chart is simply a graphical representation of data in the DataSet. Chart data areas are similar to matrix groups.
The Chart control included with SSRS is an external component from Dundas Software and includes basic charting elements. For more information, see www.dundas.com.
An SSRS report can have multiple data regions, and you can nest data regions. A report can contain as many data regions as you’d like; however, having multiple data regions can impact the performance of the report.
Grouping Report Data
You can use fields and expressions to group the data within each data region. You can use groups to provide logical sections of data within the report. When grouping data within a report, you can add expressions to both the header and footer of the group, such as subtotal information. This is useful when creating sales reports or other reports that contain subtotals. When using logical grouping operations within a report, the underlying query in the DataSet does not need to contain the logic necessary to support the group. (For example, you don’t need to use the Transact-SQL [T-SQL] GROUP BY clause.) This can help improve the performance of complex reports. You can also use SSRS grouping to solve problems that would be complex in T-SQL. For example, representing a recursive hierarchy such as an employee-manager relationship in T-SQL can be difficult. Using SSRS grouping, the problem is easy to solve, as Exercise 8.1 shows.
EXERCISE 8.1
Grouping Data in SSRS
1. Add a new report to the solution created previously (Figures 8.12 through 8.15 demonstrate these steps) by selecting the Reports folder, right-clicking and selecting Add, selecting New Item, and selecting the Report template. This will open the blank data region.
2. From the DataSet drop-down list, choose New DataSet; this will open the Data Source dialog box.
3. Click the Edit button, and select your server and the AdventureWorks database.
4. Click OK to close the Connection Properties dialog box, and click OK again to close the Data Source dialog box. This will open the DataSet query window.
5. Type the following query into the DataSet query window, and click the exclamation point to execute:

SELECT p.FirstName, p.LastName, e.EmployeeID, e.ManagerID
FROM Person.Contact p
JOIN HumanResources.Employee e ON p.ContactID = e.ContactID
ORDER BY p.LastName, p.FirstName

6. This will fill in the DataSet results window, as shown here.
7. Click the Layout button, and add a table to the report by dragging a Table control from the Toolbox onto the panel, as shown here.
8. In the first detail cell of the table, type the following expression:

=Fields!FirstName.Value & " " & Fields!LastName.Value

9. Right-click the upper-left corner of the table, and select Properties, which will open the Table Properties dialog box. Click the Groups tab, as shown here.
10. Click the Details Grouping button to open the Grouping and Sorting Properties dialog box, and in the General tab’s Expression box, type the following expression: =Fields!EmployeeID.Value. In the Parent Group box, type the following expression: =Fields!ManagerID.Value. The results should be as shown here.
11. Click OK to close the Grouping and Sorting Properties dialog box, and click OK to close the Table Properties dialog box. If you preview the report at this time, you cannot really tell that the report has been grouped because all the data is left-aligned. To represent the hierarchy, you need to pad the text box using the Level function.
12. Select the data column in the report, and in the Properties dialog box, expand the Padding property. Enter the =Convert.ToString(2 + (Level()*10)) & "pt" expression for the Left padding property, as shown here. This will ensure that the report will be properly rendered to represent the hierarchy.
When you preview the report, it should appear as shown here.
The use of the Level function and other advanced reporting features is beyond the scope of this book. For more information, see the Books Online topic “Grouping Data in a Report.”
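For comparison, here is a hedged sketch of what flattening the same employee-manager hierarchy would look like in pure T-SQL, using a recursive common table expression (new in SQL Server 2005). The table and column names match the AdventureWorks query from the exercise; the anchor condition assumes the top of the tree has a NULL ManagerID:

WITH EmployeeTree (EmployeeID, ManagerID, FirstName, LastName, HierarchyLevel) AS
(
    -- Anchor member: employees with no manager (the top of the tree)
    SELECT e.EmployeeID, e.ManagerID, p.FirstName, p.LastName, 0
    FROM HumanResources.Employee e
    JOIN Person.Contact p ON p.ContactID = e.ContactID
    WHERE e.ManagerID IS NULL
    UNION ALL
    -- Recursive member: employees reporting to someone already in the tree
    SELECT e.EmployeeID, e.ManagerID, p.FirstName, p.LastName, t.HierarchyLevel + 1
    FROM HumanResources.Employee e
    JOIN Person.Contact p ON p.ContactID = e.ContactID
    JOIN EmployeeTree t ON e.ManagerID = t.EmployeeID
)
SELECT EmployeeID, ManagerID, FirstName, LastName, HierarchyLevel
FROM EmployeeTree;

The SSRS grouping approach in the exercise produces the same hierarchy from a flat two-column query, which is why it is usually the easier option.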
Understanding How Reports Are Rendered at Runtime
Visual Studio contains graphical controls that you can place as items on the report. Controls in the header and footer can contain images, text boxes, and graphical lines. You can place data region controls anywhere in the report body, and they can include data tables, matrix (or cross-tab) tables, lists, charts, and subreports. By combining these elements, you can create highly customized and extensible reports. The process of combining data with report definitions, creating a report, and displaying it at runtime is known as rendering. SSRS uses rendering extensions to display reports in the desired format (such as a .pdf file). SSRS includes four basic RDL rendering extensions:
Data-rendering extensions These extensions support basic data-centric output such as XML and comma-separated values (CSV).
Interactive layout–rendering extensions These extensions support Hypertext Markup Language (HTML) output and can include basic user interaction such as clicking and expanding grouping.
Noninteractive, logical page–rendering extensions These extensions support output to Office applications (Excel) and Mobile HTML (MHTML) formats and rely on the destination format to perform pagination.
Noninteractive, physical page–rendering extensions These extensions support output to PDF and image formats and require SSRS to perform pagination.
Pagination for a report is controlled by the page size of the report and any page breaks placed on report items. It is important to realize that pagination will vary based on the rendering extension used to view the report.
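You can also pick the rendering extension at request time. As a hedged example of SSRS 2005 URL access (the server name and report path are hypothetical), appending the rs:Format parameter to a report URL forces a specific rendering extension:

http://myserver/ReportServer?/SalesReports/OrderSummary&rs:Command=Render&rs:Format=PDF

Substituting EXCEL, CSV, XML, or IMAGE for PDF invokes the corresponding extension, with the pagination behavior described previously.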
Controlling User Interaction within Reports
In report display environments that support user interactivity, reports can support features that allow user interaction. Reports can include the following user interaction elements:
Parameters You can create reports that prompt the user for input parameters prior to rendering the report. You can use these parameters as input to queries, filters, or expressions within the report. Parameters can be simple static lists, open-ended text boxes, or complex data-bound dynamic controls.
Filters You can apply filters to the data after it has been retrieved from the data source. Filters are most commonly used with snapshot reports but can be used with any data source.
Links You can use three types of links within an SSRS report: drill-through links, which allow the user to select another report on the SSRS server and pass parameters as necessary; URL links, which allow users to select and display non-report-related data through external web links; and bookmark links, which allow the user to quickly jump to a bookmark or anchor within the current report.
Hidden items Much like data-bound controls on a form, controls on a report can be hidden. You can control display properties based on dynamic data (such as data that appears only when certain conditions are met in other data-bound controls on the report) or user interaction (such as a summary report that allows the user to click to show the detail information).
Document maps For larger reports, document maps are useful for providing a table of contents, allowing the user of the report to jump directly to the information in which they are interested.
User interactivity is an important part of designing effective reports for use with SQL Server Reporting Services.
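In the simplest case, adding a parameter requires nothing more than referencing it in the DataSet query. Here is a minimal sketch against the AdventureWorks tables used earlier (the parameter name is an assumption); for SQL Server data sources, SSRS detects @SalesPersonID in the query text and automatically creates a matching report parameter that prompts the user at runtime:

SELECT S.OrderDate, S.SalesOrderNumber, S.TotalDue
FROM Sales.SalesOrderHeader S
WHERE S.SalesPersonID = @SalesPersonID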
Working with Graphical Report Elements
SSRS reports can contain graphical elements, such as images, borders, rectangles, and lines. SSRS reports can also contain chart elements. Rectangles and lines are generally used for visual effects, such as area and element separators. You can also use rectangles as containers for report controls, effectively grouping items together. You can add report borders to the header, body, and footer of a report, or you can combine them to create a seamless report border. For example, to draw a line around an entire report, create separate borders around the left, top, and right sides of the header; the left and right sides of the report body; and the left, bottom, and right sides of the footer. The result will be a seamless box around the entire report.
When using rectangles to group report controls, make sure you understand how the controls will interact with one another. For example, if you have a large DataSet bound to the top control in a rectangle, remember that the control will expand to accommodate the data, which will force other controls in the rectangle to move accordingly.
Working with Report Style Elements
Each control on a report has an associated style element that controls the appearance of the control when rendered. You set the appearance of a Report control at design time. Table 8.1 details the style elements that are available for most SSRS Report controls.
TABLE 8.1 SSRS Report Control Style Elements
Element Description
BackgroundColor The color of the background of the item.
BackgroundGradientType The direction in which the background gradient is displayed.
BackgroundGradientEndColor The end color of the background gradient. If omitted, the item has no background gradient.
BackgroundImage An image to display as the background of the item.
BorderColor The color of the border of the item.
BorderStyle The style of the border of the item, such as dotted, dashed, or solid.
BorderWidth The width of the border of the item.
Calendar The calendar to use to format dates.
Color The color of the text in the item.
Direction The direction of the text.
FontFamily The name of the font to use for the text in the item.
FontSize The size of the font in points.
FontStyle The style of the font for the text in the item, such as italic.
FontWeight The thickness of the font for the text in the item.
Format The Microsoft .NET Framework formatting string to apply to the item, such as C for currency.
Language The primary language of the text.
LineHeight The height of a line of text. If not specified, the height is based on the font size.
NumeralLanguage The digit format to use, based on the language property.
NumeralVariant The variant of digit format to use.
PaddingBottom The amount of space to insert between the bottom edge of the item and the text or image in the item.
PaddingLeft The amount of space to insert between the left edge of the item and the text or image in the item.
PaddingRight The amount of space to insert between the right edge of the item and the text or image in the item.
PaddingTop The amount of space to insert between the top edge of the item and the text or image in the item.
TextAlign The horizontal alignment of the text in the item, such as left, right, or center.
TextDecoration A special effect to apply to the font for the text in the item, such as underline.
UnicodeBiDi The level of bidirectional embedding.
VerticalAlign The vertical alignment of the text in the item, such as top, middle, or bottom.
WritingMode The direction of the text.
Along with the style elements that are applied to the report controls, developers can apply conditional formatting to elements within the report itself. For example, if you want all negative numbers to display as red within a TextBox control, you can apply the following expression to the Color property of the control: =iif(Fields!Profit.Value < 0, "Red", "Black")
Or if you want to emulate an old-school “greenbar” report, you can apply the following expression to the Color property of the control (the result will be every other line rendered with a green background): =iif(RowNumber(Nothing) Mod 2, "PaleGreen", "White")
Expressions in SSRS are beyond the scope of this book, but for more information, see the “Using Expressions in Reporting Services” Books Online topic.
Working with Nongraphical Report Elements
Although graphical elements can add style and visual appeal to a report, the real “meat” of the report comes from the nongraphical controls that display data elements. Two nongraphical report elements exist: TextBox controls and subreports. You can place TextBox controls anywhere in the report body, but they usually appear inside cells in a table or matrix report and contain instructions that bind data to that control. By default, TextBox controls are a fixed size, so data that would exceed the size of the control will not display properly. Two properties of the control, CanGrow and CanShrink, control dynamic sizing and allow the control to grow and shrink vertically as data changes. TextBox controls can contain simple expressions to display static information, such as ="This is a static label", or more complex expressions that perform operations on the data, such as =SUM(Fields!TotalUnits.Value). A subreport is a report that references another report stored on the report server. These reports can be stand-alone reports or reports that are best viewed along with other data. You can think of a subreport as a report embedded within a report. For this reason, they can be tricky to work with and can cause performance problems because the entire report must be rendered separately from the report that contains it.
Subreports are useful for quickly referencing data contained in other reports but do suffer from performance issues. If you are developing reports that need to reference other data, consider using additional data regions instead.
As the topics in this section have shown, building reports can be simple or, when combined with some of the advanced functions available, quite involved. Sometimes application designers want to make data available to users for ad hoc reporting. Deploying Visual Studio to every user is not a viable solution when users need to build their own reports. To solve this issue, Microsoft has included a new tool with SSRS called Report Builder. This application is a ClickOnce application that you can install from the Reporting Services website.
Creating SSRS Reports Using the Microsoft Report Builder Utility
Microsoft Report Builder allows users to create their own reports based on a user-friendly reporting model created in the model designer. This allows users to create reports without fully understanding the underlying data model. Report Builder is fully integrated with SSRS and includes all reporting capabilities. To install and use Report Builder, navigate to the Reports virtual directory on your reporting server, and click the Report Builder icon. This will launch the ClickOnce installer, as shown in Figure 8.43. Once you’ve downloaded and installed the Report Builder application, the application will launch, as shown in Figure 8.44.
FIGURE 8.43 The Report Builder application uses ClickOnce technologies to install.
Report Builder requires that you create a report model before you use the program. Usually the administrator responsible for maintaining SSRS does this. If a model is not present, you can generate one through the Report Manager web page by creating a new data source and selecting the Generate Model command. (I’ll cover the Report Manager later in this chapter.)
Select the type of report you want to build (Table, Matrix, or Chart), and click OK to open the main Report Builder window, as shown in Figure 8.45.
FIGURE 8.44 Microsoft Report Builder
FIGURE 8.45 The Report Builder main window
If you use Microsoft Access, the look and feel of Report Builder should be familiar. The left side of the application shows the data entities and fields contained within the data source you selected on the main page, and the right side of the application contains a layout grid that allows you to place the data elements you want in your report.
To create a simple report that details product information, select the Product entity from the Entities window, and drag the Product ID, Product Number, Name, Color, and Size fields from the Fields window to the “Drag and drop column fields” area of the report body. Expand Total List Price, and drag the # List Price field to the report body as well. The result should look like Figure 8.46. Click the Run Report button at the top of the application to view the report. Note that there is no title for the report and the formatting for the List Price field is not correct. Click the Design Report button to return to the report designer. Right-click the List Price value, and select Format. Choose an appropriate format in the Format dialog box, and click OK, as shown in Figure 8.47. Add an appropriate title to the report by clicking in the title box and adding the title text. Run the report again, and note that the result is a nicely formatted report with interactive sort capability. You can save this report to the report server, print it, or export it to another format such as PDF or Excel.
Microsoft Report Builder is a nice tool that allows users to quickly build ad hoc reports that can be used in conjunction with reports that are deployed as part of an application or website.
FIGURE 8.46 Using Microsoft Report Builder to build a simple report
FIGURE 8.47 Using the Format dialog box to format numeric values to display as currency
Optimizing Report Execution
Although the primary purpose of SSRS is to provide reporting functionality, you can also think of it as a means of improving the performance of your database solution. Because of the ability to store reports temporarily, SSRS does not have to generate the underlying DataSet and render the report every time a user requests a report. SSRS offers you the capability of either caching reports or generating periodic snapshots. You can control both cached reports and report snapshots through report-specific schedules. Figure 8.48 shows the scheduling options available in SSRS. Executing reports can be particularly central processing unit (CPU) and input/output (I/O) intensive. Consequently, it is important for you to be able to stop “runaway” reports that hog all your server resources. SSRS lets you do this by limiting report execution through a timeout parameter, as shown in Figure 8.49.
You can also optimize report execution by taking advantage of the features of the database engine, such as an appropriate indexing strategy, indexed views, and partitioning. I covered these optimization options in Chapter 2.
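As a hedged illustration of that point, the following index is one plausible candidate for speeding up the salesperson order query used earlier in this chapter (the index name is an assumption, and whether it actually helps depends on your workload):

CREATE NONCLUSTERED INDEX IX_SalesOrderHeader_SalesPersonID
ON Sales.SalesOrderHeader (SalesPersonID, OrderDate);

The point is that report tuning starts at the database engine: a report can be rendered only as fast as its underlying DataSet queries execute.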
FIGURE 8.48 Reporting schedule
FIGURE 8.49 Report execution timeout
Caching Reports
SSRS can cache copies of processed reports, which can reduce the amount of CPU and I/O that needs to be performed because the report doesn’t need to be generated from scratch. This is particularly useful for large, complex, or frequently accessed reports. Caching is a purely performance-enhancing technique. Don’t forget that the cache is volatile, so the contents will change as reports are added, replaced, or deleted.
If you require a more predictable caching strategy, you should investigate report snapshots.
However, not all reports can be cached: if a report has user-specific data or relies on Windows Authentication or prompted user credentials, it cannot be cached. Figure 8.50 shows the various options available for caching a temporary copy of a report.
To further improve server performance, you can preload the cache. This avoids overloading the server, as in the case of the first hour of the business day when all your users might start running reports off your database solution. Preloading the cache involves creating a data-driven subscription that uses the null delivery provider.
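A hedged sketch of the kind of query that could drive such a cache-warming data-driven subscription, assuming the report takes a sales territory parameter (the parameter mapping itself is configured in the subscription):

SELECT DISTINCT TerritoryID
FROM Sales.SalesOrderHeader;

Each row returned triggers one execution through the null delivery provider, leaving one cached copy of the report per parameter value.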
FIGURE 8.50 Configuring report caching
Creating Report Snapshots
SSRS also allows you to create snapshots of your reports. Report snapshots are different from cached reports in that you explicitly need to configure when they will be generated. By comparison, cached reports are generated when the reports are first run and subsequently when they expire. You can have multiple cached reports, but you can have only one report snapshot. Report snapshots work particularly well when you have a database solution that has data periodically loaded into it at a known schedule. In this case, you can schedule the report snapshots for after the data-loading processes have finished (or alternatively for after-hours). Think civil registry or census databases! Figure 8.51 shows the various options available for creating report snapshots.
When configuring a report snapshot, you generally select Create a Report Snapshot When You Click the Apply Button on This Page; otherwise, requests for this report will fail until the snapshot has been generated according to the report execution schedule.
Another great feature of SSRS is the ability to store a history of your report snapshots. You could have many reasons for wanting to do this, and it is quite easy to schedule these as required. Figure 8.52 shows the various options available for creating report histories. I particularly like the ability to create a history of ad hoc snapshots via the Allow History to Be Created Manually option. This allows your users to “record” a report for posterity whenever a significant event occurs or an interesting report is generated.
FIGURE 8.51 Configuring report snapshots
Make sure you configure the number of snapshots you would like SSRS to keep; otherwise, you might inadvertently be deleting snapshots.
Using the reportserver and reportservertempdb System Databases
When developing an SSRS solution, it is critical to capacity plan your SSRS environment and in particular the reportserver and reportservertempdb system databases. The reportserver system database stores configuration/security data, subscription/schedule definitions, and (most important) report snapshots, and the reportservertempdb system database stores session/execution data, cached reports, and worktables. Unfortunately, the defaults for these two system databases in a standard SQL Server 2005 installation are not sufficient for most enterprise environments. In fact, several clients of mine that were heavily utilizing SSRS had performance problems related to these two system databases. So for optimal performance, as with tempdb, you should put these databases on a separate drive. Don’t forget to capacity plan and size them correctly, include them in your disaster recovery plan (DRP), and perform all the typical admin tasks you would for your user databases, especially if you are compiling a history of report snapshots.
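As a hedged starting point for that capacity planning (the sizes below are placeholders, and the logical file name assumes a default installation), you can check current usage and presize the files so they do not grow in small increments under load:

-- Check how much space the report server catalog is consuming
USE ReportServer;
EXEC sp_spaceused;

-- Presize the data file and set a sensible growth increment
ALTER DATABASE ReportServer
MODIFY FILE (NAME = ReportServer, SIZE = 512MB, FILEGROWTH = 128MB);

The same approach applies to reportservertempdb, which can grow quickly when many cached reports are in flight.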
FIGURE 8.52 Configuring report histories
Using Report Manager
So far I have concentrated predominantly on how to create reports. I can read your mind! “How do I run reports? How do I configure the report server?” Well, with no further ado, let me present Report Manager! Report Manager is a web-based tool used to both access and administer your reporting solution. Figure 8.53 shows the Report Manager environment. You can use Report Manager to perform the following tasks:
Search and execute static and parameterized reports.
Execute ad hoc reports based on a report model through the Report Builder application.
Subscribe to reports.
Create report models.
Create subscriptions that send reports to multiple recipients.
Create and maintain a folder hierarchy for the easier navigation of reports.
Configure security.
Configure report optimization.
Configure site properties.
Running a report is a simple process for users. All they need to do is click the report in which they are interested. Figure 8.54 shows a report being executed.
FIGURE 8.53 Report Manager
FIGURE 8.54 Executing a report
Wow!
You can also execute ad hoc reports as I showed earlier in the chapter when I covered the report model.
Delivering Reports
For me, the most exciting part about SSRS is the ability to push reports to users at a scheduled frequency. You can do this after-hours when your servers are not busy. Imagine how happy management would be to see an Excel spreadsheet emailed at the end of each month with the company’s profitability, for example. The possibilities are endless. You can deliver reports in SSRS through a process called subscriptions, which I will cover shortly. You can deliver reports via several different technologies:
Via a file (using one of the rendering extensions I discussed earlier) to a file share
Via an email attachment (again using one of the same rendering extensions)
Configuring Email Support
Before being able to send reports via email, you need to configure the report server with the requisite email and Simple Mail Transfer Protocol (SMTP) configuration information. Figure 8.55 shows the Reporting Services Configuration Manager and how you would configure the email settings.
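Behind the scenes, these settings are persisted in the RSReportServer.config file on the report server. A hedged sketch of the relevant fragment (the server name and sender address are placeholders):

<RSEmailDPConfiguration>
    <SMTPServer>mail.mycompany.com</SMTPServer>
    <SendUsing>2</SendUsing>
    <From>reports@mycompany.com</From>
</RSEmailDPConfiguration>

A SendUsing value of 2 directs delivery through a remote SMTP server rather than a local pickup directory.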
FIGURE 8.55 Reporting Services Configuration Manager
Creating Subscriptions
The process of creating subscriptions is relatively painless. Two kinds of subscriptions are available in SSRS:
Standard A standard subscription generates one instance of a report. The report layout and data do not change. These are “static” reports, if you like.
Data-driven A data-driven subscription by comparison is more “dynamic” in that it lets you use a data set generated by a query at report runtime as the basis of any of the following:
The list of recipients who will receive the report
Recipient-specific subscription preferences, such as which report-rendering extension to use, whether the report is attached or linked, and so on
The parameters expected by the report
To subscribe to a report, you simply need to edit the report’s properties and then select the Subscriptions tab. You will then be able to create either a new standard subscription or a new data-driven subscription, as shown in Figure 8.56. Figure 8.57 shows the configuration options available when setting up an email subscription. The scheduling options will be identical to what is available when scheduling report execution snapshots and expiring the cache because they all use the same scheduling engine.
FIGURE 8.56 Creating a new subscription
FIGURE 8.57 Email subscription
Figure 8.58 shows the subscriptions that exist for a particular report.
FIGURE 8.58 Subscriptions
Not too bad, huh?
Summary
This chapter discussed some of the new features included with Microsoft SQL Server Reporting Services 2005 and detailed how to create reports using the Report Wizard, the Visual Studio 2005 report designer, and Microsoft Report Builder. This chapter also discussed the various layout and formatting options for reports and the various rendering options available to report designers. It then discussed how to utilize report style elements, such as grouping and conditional formatting. Finally, this chapter covered how to optimize reports and automatically deliver reports to users through highly customizable subscriptions.
Exam Essentials
Know how to design reports using Microsoft SQL Reporting Services. Be able to identify appropriate SSRS report design techniques and technologies. Understand when to use table layouts versus matrix layouts, and understand how report grouping and conditional formatting options work.
Be able to specify data source configurations. Be able to identify the appropriate data source configuration and when to use Integrated authentication versus SQL Server authentication. It is also important to understand how data-processing extensions can extend the reach of SSRS.
Know how to optimize reports. Understand how the various report options can work together for an optimal reporting experience, and remember that subreports generally perform worse than reports that contain additional data regions. Also, the queries that make up a DataSet should be as optimal as possible.
Know how to optimize report execution. SSRS offers both snapshots and caching, so make sure you understand where to use each method. You can also prevent long-running reports from impacting the performance of your SQL Server solution.
Understand report delivery. Understand the different options you have to automatically deliver reports through SSRS subscriptions.
Review Questions
1. You have been asked to give users the ability to write their own ad hoc reports. What is the best way to accomplish this?
A. Develop a web page that contains an easy-to-use Reporting Services control. Deploy the web page, and instruct users on its use.
B. Develop and deploy a report model using the Report Manager website. Point the users to Report Manager, have them download the Report Builder application, and have them use the report model you just developed.
C. Deploy Microsoft Visual Studio Express Edition to every user who wants to develop their own reports. Have them use the Report Wizard to design reports.
D. Deploy Microsoft SQL Server Management Studio to every user who wants to develop their own reports. Instruct the users on how to use Management Studio’s Report Wizard to create ad hoc reports.
2. You are building a report using Microsoft Visual Studio 2005 and want to provide users of the report with the ability to choose what data is displayed in a given data region on the report. What mechanism do you use?
A. Drill-through options
B. Subreports
C. Parameters
D. Custom expressions
3. You need to develop a report that appears to be printed on your company letterhead, even when rendered onscreen. Which built-in report control can you use to accomplish this?
A. The TextBox control
B. The Rectangle control
C. The Image control
D. The Logo control
4. You need to develop a report based on hierarchical data stored in a single table that contains a column named Source, a column named Property, and a column named Value. The data is structured such that Source describes several Property and Value pairs. The people who access the report want to display a single row for each Source value; the row will list each property for that source as a column and each value for that property in the data row. What type of report layout will be most appropriate?
A. Table layout
B. Matrix layout
C. Chart layout
D. Data regions containing lists
5. You need to develop a report based on a single table in a database that contains a self-referencing foreign key relationship that details an employee and manager relationship. Your report must show each manager on the left side of the page, and each employee of that manager should appear directly under that manager, offset by three spaces. What is the best method for accomplishing this task?
A. Create a DataSet for each manager who has employees, and create a DataSet for all employees. Use a List control along with a TextBox control to display each manager on the left side of the report. Use another List control placed on the report three spaces to the right to specify all the employees, and tie them together with parameters so each employee is listed with the appropriate manager.
B. Create a DataSet that uses T-SQL to represent the hierarchy, and use a table layout with two columns. Place managers in one column and employees in the second column. Ensure that the second column is three spaces offset from the first column.
C. Create a DataSet that lists each EmployeeID and their ManagerID. Use custom grouping to group each employee by their manager. Use a TextBox control to represent the employee, and use the Level() function in the padding for the TextBox control.
D. Create a DataSet that lists each EmployeeID and their ManagerID. Use a matrix layout, and place each data element in the appropriate cell.
6. You need to develop a report that draws a box around the entire page whether it is printed or displayed on the screen. How can you develop this report?
A. Use graphical controls that draw the box, and add data regions to display report data.
B. Develop a custom control that contains text box elements for the data and Rectangle controls for the lines.
C. Use only a single data region in your report, and do not include headers or footers. Ensure that the Border property is set to draw lines on all sides of the data region.
D. Implement borders around all data elements in the report.
7. You need to develop a report that emulates a paper report that has been printed on “greenbar” paper. The report needs to ensure that every other line is painted with a green background. What technique can you use to accomplish this task?
A. Develop the report using a custom chart to display the text.
B. Use conditional formatting on a TextBox control.
C. Use a matrix report.
D. Use a custom control that allows you to control each line’s color.
8. Your users are complaining about the performance of a particular report you have developed. The report in question has multiple graphical elements and several data regions, a main summary at the top, and several sections each defined by a subreport. You run the queries that build the report and notice they return data quickly; however, when you render the report, you notice that it takes a considerable amount of time. What can you do to improve the performance of the report?
A. Replace the subreports with their own data region, and create additional DataSets to represent the data in that region.
B. Remove any graphical elements from the report.
C. Limit the number of users who can run the report simultaneously through Report Manager.
D. Link all subreports to a single DataSet element to take advantage of SQL Server data caching.
9. You are developing a report that will be used as a scheduled report that contains snapshot data. Each user who receives this report will view it differently. How can you ensure the correct data is present for every user who wants to utilize this report?
A. Create parameters that will prompt the user for the data they want to see.
B. Use a document map to ensure that each user gets only the data they require.
C. Use hidden items on the report and conditional formatting based on the user viewing the report to show only the data in which they are interested.
D. Add filters to the report that allow the user to select the specific data in which they are interested.
10. You need to develop a report that renders all negative numbers in the profit column as red. How can you accomplish this without writing custom controls?
A. Use the standard formatting features of SSRS to format all negative numbers as red.
B. Use the following expression in the TextBox control’s BackColor property: =iif(Fields!Profit.Value < 0, "Red", "Black").
C. Use the following expression in the TextBox control’s Font property: =iif(Fields!Profit.Value < 0, "Red", "Black").
D. Use the following expression in the TextBox control’s Color property: =iif(Fields!Profit.Value < 0, "Red", "Black").
11. You are developing a report that will contain a large amount of data. This report will be rendered using HTML and posted to an intranet. What element can you add to the report to allow users to quickly find the information they are looking for without reading the entire report?
A. Use a table of contents, and include links that will reference items on the report.
B. Use a custom control that will automatically include link elements.
C. Use a document map element.
D. Use a URL link element.
12. You are developing a report that is supposed to contain the value from a field in the footer of each page. You design the report layout, add a TextBox control to the page footer, and put the following expression in the text box: =Fields!ManagerID.Value. When you try to render the report, you receive an error. Why is this causing an error?
A. You cannot directly reference data values in a page footer.
B. The expression is not correctly formed; you need to add parentheses after the equals sign before Fields and after the e in Value so the expression reads as follows: =(Fields!ManagerID.Value).
C. You cannot reference data fields in a TextBox control.
D. You must reboot Reporting Services because something has gone wrong.
13. You are developing a report that will include references to several other reports that are stored on the server. You do not want these additional reports to render until the user selects them. What element can you include to allow for this?
A. Subreports that reference the additional reports on the server
B. Drill-through links that reference the additional reports on the server
C. A document map that references the additional reports on the server
D. An anchor link that references the additional reports on the server
14. You are developing a report that will have several graphical elements to it. You decide to use a rectangle and place several data elements within the rectangle. When the report is rendered, you notice that some of the elements within the rectangle do not display at all. What is the most likely cause of this?
A. Rectangle controls cannot be used as containers for data elements; they can be used only for border elements.
B. You need to set the CanGrow attribute of the individual controls within the rectangle to true to ensure that each control takes up the appropriate amount of space.
C. You need to rearrange the controls within the rectangle to ensure that the controls do not “step” on each other.
D. You need to consider a third-party control to ensure you get the desired output.
15. You are developing a report that will be rendered as an Excel spreadsheet. You want to be able to support user interactivity such as filtering and grouping within the report. What must you do to allow this interactivity?
A. You cannot add interactivity to reports that are rendered as Excel.
B. You must add the appropriate grouping and filtering controls to the report and save the definition on the server.
C. You must instruct your users to download the SSRS plug-in for Excel.
D. You must add any Excel functionality in a subreport and use a drill-through link.
16. You have designed and implemented a database solution that has an SSRS component. Although SSRS has a number of canned reports, it relies heavily on ad hoc parameters where users can input anything they like. Performance is critical in this database solution because it relies on a real-time feed. The generation of reports is not critical. During the testing phase, users have noticed that performance intermittently degrades quite substantially. You suspect this is because users are running ad hoc reports with large, complex data sets. What should you do to improve performance of the database solution?
A. Configure report caching.
B. Configure report snapshots.
C. Configure report execution timeout.
D. Configure report history.
17. You need to develop an SSRS report that will display data that is stored in IBM DB2 running on a mainframe computer. How can you best access this data?
A. Create a web service connection to the data on the mainframe, and use the web service data-processing extension to connect to the source.
B. Use the IBM data-processing extension to connect to the data on the mainframe.
C. Develop a custom data-processing extension that uses native connection mechanisms to connect to the data on the mainframe.
D. Develop a custom-rendering extension that uses native connection mechanisms to connect to the data on the mainframe.
18. Your users currently use Microsoft Excel to develop pivot-table and pivot-chart reports. Each user has a custom macro installed that retrieves the data from Microsoft SQL Server 2000 Analysis Services. Your management has asked you to develop a report to replace this process. What type of report layout is best suited to this?
A. You cannot create a report to satisfy this requirement until you upgrade to SQL Server 2005 Analysis Services.
B. Use a table layout report, and render the report as Excel.
C. Use a matrix layout report, and render the report normally.
D. Use a custom layout report, and export the DataSet directly to Excel.
19. You are working with a report that performs slowly. When examining the DataSet that makes up the report, you notice that the query runs extremely fast, but the report takes a long time to render. The report contains both a chart and a data table. What can you investigate to determine why the report performs poorly?
A. Examine how the data fields are used in the report, and make sure each data element is in the appropriate place.
B. You should not combine chart and table elements in a single report. Consider using a subreport to solve this problem.
C. Create an additional DataSet for the chart element; do not reuse the existing DataSet.
D. Use SQL Profiler to determine why Reporting Services is running slow.
20. You are developing a report that prints your company’s organizational chart in text format. You develop the report using the technique for recursive hierarchies, but your manager would like you to further highlight members of the executive management team in red. How can you easily accomplish this task?
A. Add information to the query that populates the DataSet. When an employee is a member of the executive management team, return a Boolean true. Use conditional formatting in your report to reference the Boolean value.
B. Add conditional formatting to the Color property of your report using the Level() function.
C. Create an additional DataSet that contains all the members of the executive management team, and refer to that DataSet using a subreport.
D. Create an additional DataSet that contains all the members of the executive management team, and refer to that DataSet using a data region.
Answers to Review Questions
1. B. SQL Server Reporting Services 2005 includes a new tool called Report Builder, which is a ClickOnce application that is launched from the Report Manager website. This tool has a familiar look and feel for users of Microsoft Office and can build rich reports quickly and easily.
2. C. You can create SSRS reports that prompt users for input prior to rendering the report, and you use this input as parameters to the query that populates a given data region.
3. C. You can embed the Image control in the header or footer of the report to include a corporate logo. The header and footer text can emulate corporate letterhead so the report appears to be on letterhead both onscreen and when printed.
4. B. The problem represents a common cross-tab query, where hierarchical data is “flattened” into columns and rows. This is exactly what a matrix layout is designed for.
5. C. One of the most powerful functions of SSRS rendering is the capability to use custom grouping for expressions within each of the control properties. In this case, you would group by EmployeeID and set the ParentGroup to ManagerID and then use a custom expression in the left padding property of the TextBox control to offset each employee by their level in the hierarchy.
6. D. You can specify borders on all elements of the report just like cells in Excel. To accomplish this task, you would create borders on three sides of the header, two sides of the body, and three sides of the footer.
7. B. You can apply conditional formatting to most control properties, and in this case, you can use the Mod operator like this: =iif(RowNumber(Nothing) Mod 2, "PaleGreen", "White").
8. A. Subreports can be inefficient because they must always reference other reports stored on the report server. Each time a report that contains subreports is rendered, each report query is run in its entirety.
9. D. You can use filters to allow the user to filter the data in a report once it has been rendered. In this case, all the data is initially shown when a user opens the report, but they can then filter the data based on their particular interests at runtime.
10. D. The TextBox Color property controls the color of the font displayed in the box. By applying the conditional formatting to the Color property, you can control when the color changes.
11. C. Document maps are useful for large reports because they automatically generate a table of contents that includes anchor links to other areas within the report.
12. A. Although you can indirectly reference field values in headers and footers, you cannot directly reference them in expressions that are included outside the report body. The error that occurs states clearly that fields cannot be referenced in headers and footers.
13. B. You can use drill-through links to reference other reports stored on the server, and they are better in this case than subreports because a subreport is always executed, whereas a drill-through link executes the report only when clicked.
14. C. When using Rectangle controls as a grouping mechanism for other controls in a report, you need to be aware of how each data element interacts with the other controls in the rectangle. Setting the CanGrow element to false would work as well, but then the control may not display all the data.
15. A. Excel rendering cannot include any user interactivity items; however, they aren’t necessary by definition, because Excel provides those features natively.
16. C. Configuring the report execution timeout will prevent “runaway” reports from degrading the performance of your database solution. Report snapshots help more for “canned” reports. Report caching helps for reports with the same parameters. Report history is not designed to improve performance.
17. C. Developers can build custom data-processing extensions that allow SSRS to connect to and consume any data source.
18. C. A matrix layout report is identical to a pivot table and allows users to export the data directly to Excel. Once there, the users can automatically build pivot-table charts.
19. A. One of the problems with creating charts in SSRS is that it can be easy to place incorrect or illogical data in the chart. You must understand exactly how the chart will be created and rendered in order to allow the chart to function properly.
20. B. The Level() function will work nicely here. Using the example that was provided earlier in the text, you can see that any employee with a level of less than 2 could be considered an executive manager. Apply the expression =IIF(Level() < 2,"Red","Black") to the Color property of the cell containing the employee name. Any employee whose level in the hierarchy is less than 2 will render as red text.
Chapter 9
Designing Data Integration Solutions
MICROSOFT EXAM OBJECTIVES COVERED IN THIS CHAPTER:
Select and design SQL Server services to support business needs.
Design an Integration Services solution
Design a SQL Server Agent solution.
Develop packages for Integration Services.
Select an appropriate Integration Services technology or strategy.
Create Integration Services packages.
Test Integration Services packages.
Select and design SQL Server services to support business needs.
Design a SQL Server Agent solution.
Design data distribution.
Design SQL Server Agent alerts.
Design interoperability with external systems.
Design the data transformation.
When it comes to data manipulation and movement, SQL Server 2005 has a humongous array of tools and commands. As always, it’s a matter of finding and using the appropriate technology correctly. I tend to choose the commands that are closer to the data as opposed to the sexy graphical user interface (GUI) tools, so call me old-fashioned or a prude (not!), but remember to consider what’s actually happening “behind the scenes” of the technology you have chosen. This chapter will predominantly focus on Microsoft’s new extracting, transforming, and loading (ETL) tool, SQL Server Integration Services (SSIS), but I will also cover other technologies at your disposal when it comes to getting data from here to there, or there to here, depending on your requirements. You might even discover some old friends, like BCP!
Using SQL Server Integration Services
Although SQL Server has had many tried-and-true tools for importing and exporting data since its earliest years, each has had its own limitations that prevent it from being truly “enterprise ready.” BCP, BULK INSERT, and related utilities work well for structured tabular data but work poorly with anything else. Data Transformation Services (DTS), which was introduced in SQL Server 7.0, took some steps to move beyond the limitations of these tools, but it was quite limited in terms of scalability, manageability, and developer productivity. DTS worked well for small, simple tasks, but using it for a true enterprise-level project took determination, perseverance, patience, and a bit of luck. With SQL Server 2005, Microsoft has introduced a replacement for DTS: SSIS. SSIS is a completely redesigned and new set of tools for handling ETL that ships as an optional service with SQL Server 2005. SSIS is built from the ground up on .NET 2.0—you can write custom scripts using Visual Basic 2005, and you use SQL Server’s Business Intelligence Development Studio (a development environment that uses the Visual Studio 2005 shell) to design, build, test, debug, and deploy your SSIS packages. You can develop custom SSIS components using your favorite .NET language. The deployed packages are .NET assemblies, with all the capabilities and performance benefits that this implies.
SSIS was called DTS until late in the SQL Server 2005 development cycle, and quite a few artifacts in the product still use this name. For example, SSIS packages have .dtsx file extensions. Despite this, SSIS is a completely new product, not simply the next version of DTS.
As suggested earlier, one detail that SSIS has in common with DTS is the concept of packages. According to SQL Server Books Online, a package is an organized collection of connections, control flow elements, data flow elements, event handlers, variables, and configurations that you either assemble using the graphical design tools that SSIS provides or build programmatically; the package is the unit of work that is retrieved, executed, and saved. Figure 9.1 shows an SSIS package. In short, packages are what you build, store, and run when you're using SSIS, so let's not waste any time. (After all, it's the last theory chapter in the book!) In Exercise 9.1, you'll create a simple SSIS package to provide a context for the theory.

FIGURE 9.1: SSIS package (a package whose control flow contains a data flow task made up of a data flow source, a transformation, and a destination)
EXERCISE 9.1
Creating a Simple SSIS Package

1. Begin by opening SQL Server Business Intelligence Development Studio from the Start menu. This opens Visual Studio 2005, which might throw you the first time, but remember that SQL Server 2005 uses Visual Studio 2005 as its primary development tool. You don't get C# or Visual Basic when you install SQL Server 2005; instead, you get a set of business intelligence (BI) projects.

2. Select File > New Project.
3. In the New Project dialog box that appears, select the Business Intelligence Projects node from the Project Types list, and then select Integration Services Project from the list of project templates on the right. Type AustraliaToWinTheWorldCup for the project name. You should have a dialog box similar to the one shown here.

Note: If your dialog box doesn't have all the project type options that appear in this dialog box, don't worry. I wrote this chapter working on my main development laptop, which has Visual Studio Team System installed. When you install both SQL Server 2005 and Visual Studio 2005 on the same computer, all the project types are available regardless of which Start menu shortcut (if any—I usually just type devenv at the command prompt) you use to launch the program.
4. Click the OK button. This displays similar default project contents to those shown here. If yours doesn't look the same, don't worry. Everyone configures their tools to look and work the way they like, and Visual Studio 2005 allows many different customizations. Now let's set up the environment for working with SSIS packages.

5. You first want to expose the SSIS Toolbox. Notice there is a Toolbox tab on the left side. Click the Toolbox tab to expose the SSIS Toolbox. Your window should look like this.
6. What you want to do is pin the SSIS Toolbox permanently in your development environment. So, click the pin icon in the top-right corner of the Toolbox. You should now have the SSIS Toolbox docked to the left of your design environment.

7. From the Toolbox, scroll down until you get to Maintenance Plan Tasks; drag the Check Database Integrity Task item, and drop it onto the Control Flow tab's designer surface. Unless you took some time to play with the different elements in the project between the previous step and this one, both the Toolbox and the Control Flow tab's designer should be visible. If you did change things, this is a good exercise to figure out how to get them back. Once you've done this, your project should look like the window shown here.
8. Note the red X icon inside the Check Database Integrity Task box. If you hover your mouse over the box, you'll see that this is because no connection manager has been specified; basically, the task hasn't been told with what database it should work.

9. Right-click the Check Database Integrity Task box, and select Edit from the context menu that appears. You will see the Check Database Integrity Task dialog box, as shown here.
10. As you can see, you can configure many options, although you’ll change only a few in this exercise. First you have to create a new connection, so click the New button.
11. Type AdventureWorks for the connection name and (local) for the server name.
12. You have created your first connection to the SQL Server. Now you need to configure the remainder of the Check Database Integrity Task dialog box. Select the AdventureWorks database from the Databases drop-down list, and then click the OK button.
13. In this case, you want to speed up the database check, so uncheck the Include Indexes box.

14. Click the View T-SQL button to see the underlying Transact-SQL (T-SQL) statement that will get executed.
15. It’s time to rock ‘n’ roll! Close the current window, and click the OK button to close the Check Database Integrity Task dialog box. You should be back in the Control Flow tab’s designer panel.
16. Notice that the red X icon has disappeared from the Check Database Integrity Task box. This task now has all the information it needs to do its work. Your project should now look like this:
The Check Database Integrity Task item was a simple task with few options. However, as you can see, the SSIS designer has a plethora of tasks with a universe of options! One of the challenges you’re likely to face when starting to work with SSIS is coping with the sheer number of tasks and their related options. When you’re working with a new component for the first time, take a few minutes to see what’s available. Even if you don’t use most of them most of the time, it never hurts to know what’s there.
17. Now you’re ready to run. Press the F5 key, or click the green arrow button on the toolbar (or use your favorite menu or keyboard shortcut—there must be a million ways to do this in Visual Studio, and everyone has his or her favorite way) to start debugging, just like you would with any other Visual Studio project.
18. Watch your Check Database Integrity Task box turn yellow to show that it is executing and then green to show that it is finished without error. Yes, that's right; if there were an error, like if you had improperly configured the connection or typed in your SQL incorrectly, it would have turned red.
19. When you run an SSIS package from inside Visual Studio, it will always pause in break mode when it has completed running—successfully or unsuccessfully—so you can review its progress. Let’s do that before you finish with this project. Click the Progress tab (that wasn’t there before, was it?) in the package designer. You’ll see a step-by-step breakdown of what happened when you ran the package.
20. Click the Click Here to Switch to Design Mode… link, or select Debug > Stop Debugging.
And that's that! The world's simplest SSIS package. (No, please don't take that as a challenge.) All it does is execute a database integrity check, and a simple one at that, but it does illustrate the basics of SSIS projects and packages. Don't close the SSIS designer because you will return to this package in Exercise 9.2. (However, you can now save the project and exit the Visual Studio environment if you want, as long as you know how to get the project back open. That should be easy!)
Now that you’ve seen how simple it is to create a new SSIS package, you’ll take a closer look at the components that make up a package before doing something a bit more complex.
I loathe using all these GUI tools and tasks. Remember, if an SSIS task exists, it does not necessarily represent the best way of achieving that task. The Check Database Integrity Task dialog box you just saw in action is a good example. It does not have the full functionality of the DBCC CHECKDB command, which includes other options such as PHYSICAL_ONLY.
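To make the point concrete, here is a minimal T-SQL sketch (using the database from the exercise) comparing roughly what the task generates with what DBCC CHECKDB can additionally do:

-- Roughly what the task issues when Include Indexes is unchecked:
DBCC CHECKDB (N'AdventureWorks', NOINDEX);
GO
-- A lighter-weight physical-structure check that the dialog box
-- does not expose:
DBCC CHECKDB (N'AdventureWorks') WITH PHYSICAL_ONLY;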
Understanding the Package Structure

When you created your first package, you didn't really drill down at all; you simply looked at the tools. Now it's time to take a deeper look at what goes into a real package. If you look at the package you just created in Exercise 9.1, you'll see that it has four tabs (Control Flow, Data Flow, Event Handlers, and Package Explorer) along the top of the SSIS package designer and a Connection Managers tab at the bottom of the package designer. The Package Explorer tab provides a useful tree view of the package contents, but it's not really a design tool. You'll spend most of your time in SSIS working with the tools on the other tabs. Any real-world SSIS package you create will have at least three of the main components mentioned in the earlier definition from Books Online: control flow elements, data flow elements, and connections. So, you'll look at these three first and in the greatest depth.
Introducing Connection Managers

Connection managers in SSIS manage the information needed to connect to data sources such as database engines or files. SSIS supports a large range of connection managers. First, the more "traditional" connection managers allow you to connect to the types of data sources that seem to be used in every SQL Server project. Table 9.1 shows these connection managers.

TABLE 9.1: Traditional Connection Managers

ADO: This connection manager allows you to connect to relational data sources using "classic" ADO.

ADO.NET: This connection manager allows you to connect to relational data sources using ADO.NET.

Excel: This connection manager allows you to connect to a Microsoft Excel workbook.

Flat File and Multiple Flat File: These connection managers allow you to connect to delimited or fixed-width flat text files. As you'd expect, one works with individual files, and the other works with sets of files that you can define either with a pipe-delimited list of file names or with a wildcard path such as C:\Informare\*.txt.

ODBC: This connection manager allows you to connect to any data source for which you have an ODBC driver.

OLE DB: This connection manager allows you to connect to any data source for which you have an OLE DB provider.
Second, connection managers exist for less common—but still mainstream—data sources that won’t be used on every project but that you’re still likely to use, such as the Multiple Files connection manager for connecting to multiple files and folders. Table 9.2 shows these connection managers.
TABLE 9.2: More Connection Managers

Analysis Services: This connection manager allows you to connect to an instance of Analysis Services or an Analysis Services project.

File and Multiple File: Despite its innocuous name, this is one of the most powerful connection managers that ships "in the box" with SSIS. The File connection manager allows you to connect to any type of file, which makes it useful for processing proprietary file formats.

FTP: This connection manager allows you to connect to a File Transfer Protocol (FTP) server.

HTTP: This connection manager allows you to connect to a web page or web service hosted via Hypertext Transfer Protocol (HTTP) on a web server.

MSMQ: This connection manager allows you to connect to a Microsoft Message Queuing (MSMQ) queue.

SMTP: This connection manager allows you to connect to a Simple Mail Transfer Protocol (SMTP) email server.

SQL Server Mobile: This connection manager allows you to connect to a SQL Server Mobile database for information from Windows CE or other mobile devices.
Finally, Microsoft has also included a set of connection managers for more esoteric data sources, such as the Microsoft .NET data provider for mySAP Business Suite, that you won't use too often but that you'll really appreciate when you need them. Table 9.3 shows these connection managers.

TABLE 9.3: Esoteric Connection Managers

Microsoft .NET data provider for mySAP Business Suite: This connection manager allows you to connect to SAP data. (This one is not included when you install SQL Server 2005 but is available as a download from MSDN at http://msdn.microsoft.com/downloads/.)

SMO: This connection manager allows you to connect to SQL Server using the new SQL Server Management Objects (SMO) .NET libraries. Some of the existing control flow tasks, such as the Transfer Logins Task, are built using the SMO connection manager.

WMI: This connection manager allows you to connect to Windows Management Instrumentation (WMI) servers.
Windows Management Instrumentation Revisited

"What's WMI?" I hear you ask. Again. Obviously you haven't worked with Systems Management Server (SMS) since it came out as version 2.0! Or read Chapter 2. The WMI connection manager is one of those tools that not everyone will find a use for but that will prove invaluable if you do need it. More and more enterprise application infrastructures are using WMI to collect information about numerous aspects of the network, operating system, hardware, and software. Having an easy way to gather WMI data and store it in a SQL Server database for analysis is a powerful tool. See Chapter 2 for coverage of WMI triggers. For even more information about WMI resources, refer to www.microsoft.com/whdc/system/pnppwr/wmi/default.mspx.
Introducing Control Flow Components

You need to control two main features when building an SSIS package. The first is the logic of the package itself—what happens in what order under what circumstances. If you were building an application using a traditional programming language such as Visual Basic, you'd define this logic in code, but in SSIS you define it graphically using the Control Flow tab's designer within your package. A control flow potentially consists of three components:

Containers: Containers provide a means of encapsulating objects in SSIS, mainly for repeating workflow.

Tasks: Tasks do the core of the work in SSIS packages.

Precedence constraints: Precedence constraints control the order in which the control flows within SSIS packages.

Figure 9.2 shows a typical control flow that has one container and multiple tasks.
FIGURE 9.2: Control flow components (a package in which tasks, linked by precedence constraints, run both at the package level and grouped inside a container)
Let’s examine in more detail the various control flow components.
Control Flow Containers

SSIS basically provides four types of containers for building packages, as shown in Table 9.4.

TABLE 9.4: Containers for Building Packages

Foreach Loop: This container runs a control flow repeatedly by using one of a number of different types of enumerators.

For Loop: This container runs a control flow repeatedly until an expression evaluates to a false condition.

Sequence: This container groups tasks and containers into control flows that are a subset of the package control flow.

Task Host: This container provides services to a single task.
Control Flow Tasks

Control flow tasks define units of work within the package control flow. Let's briefly look at all the control flow tasks available in SSIS.
DATA FLOW TASKS

Data flow tasks encapsulate the data flow engine and fundamentally provide the extracting, transforming, and loading of data. Figure 9.3 shows a simple data flow task that has one data flow.

FIGURE 9.3: Data flow task (a source feeding a chain of transformations that ends in a destination)
DATA PREPARATION TASKS

The data preparation tasks perform file and directory manipulations, download files, execute web methods, and work with Extensible Markup Language (XML) data, as shown in Table 9.5.

TABLE 9.5: Data Preparation Tasks

File System Task: This task performs file- and directory-level operations.

FTP Task: This task performs FTP operations.

Web Service Task: This task executes a web service method.

XML Task: This task manipulates XML data.
WORKFLOW TASKS

The workflow tasks manipulate the WMI, send messages using email or MSMQ, run external processes, and spawn other packages, as shown in Table 9.6.

TABLE 9.6: Workflow Tasks

Execute Package Task: This task executes other SSIS packages. It provides a powerful ability to design a modular SSIS solution.

Execute DTS 2000 Package Task: This task executes SQL Server 2000 DTS packages.

Execute Process Task: This task executes external processes and batch files.

Message Queue Task: This task sends and receives messages using MSMQ.

Send Mail Task: This task sends emails using an SMTP server.

WMI Data Reader Task: This task queries the WMI using WMI Query Language (WQL).

WMI Event Watcher Task: This task watches for WMI events using WQL.
SQL SERVER TASKS

The SQL Server tasks perform a variety of SQL Server 2005–related tasks, as shown in Table 9.7.

TABLE 9.7: SQL Server Tasks

Bulk Insert Task: This task performs a BULK INSERT operation.

Execute SQL Task: This task executes a T-SQL command.

Transfer Database Task: This useful task transfers a database between SQL Server instances.

Transfer Error Messages Task: This task transfers custom SQL Server error messages from the master system database.

Transfer Jobs Task: This task transfers SQL Server Agent jobs.

Transfer Logins Task: This task transfers SQL Server logins.

Transfer Master Stored Procedures Task: This task transfers user-defined stored procedures from the master system database.

Transfer SQL Server Objects Task: This powerful task transfers database objects between SQL Server instances.
SCRIPTING TASKS

The scripting tasks allow you to extend the functionality of packages offered by SSIS through scripting, as shown in Table 9.8.

TABLE 9.8: Scripting Tasks

ActiveX Script Task: Pretend this one does not exist! This task primarily exists for backward compatibility with SQL Server 2000 DTS packages.

Script Task: This is the one to use! This task allows you to write custom Microsoft Visual Basic .NET code to perform functions beyond what SSIS offers.
ANALYSIS SERVICES TASKS

The Analysis Services tasks process SQL Server Analysis Services objects, as shown in Table 9.9.

TABLE 9.9: Analysis Services Tasks

Analysis Services Processing Task: This task processes SQL Server Analysis Services (SSAS) cubes, dimensions, and mining models.

Analysis Services Execute DDL Task: This task runs DDL statements against SSAS cubes, dimensions, and mining models.

Data Mining Query Task: This task runs prediction queries against SSAS data mining models.
MAINTENANCE TASKS

The maintenance tasks perform various administrative database tasks such as backing up the database, shrinking the database, rebuilding indexes, reorganizing indexes, running SQL Server Agent jobs, and updating statistics, as shown in Table 9.10.

TABLE 9.10: Maintenance Tasks

Back Up Database Task: This task backs up a database.

Check Database Integrity Task: This task checks database integrity by running the appropriate DBCC commands.

Execute SQL Server Agent Job Task: This task executes a SQL Server Agent job.

Execute T-SQL Statement Task: This task executes a T-SQL statement.

History Cleanup Task: This task cleans up historical information from the msdb system database.

Notify Operator Task: This task sends notification messages to SQL Server Agent operators.

Rebuild Index Task: This task rebuilds an index.

Reorganize Index Task: This task reorganizes an index.

Shrink Database Task: This task shrinks a database.

Update Statistics Task: This task updates statistics in a database.
Control Flow Precedence Constraints

Precedence constraints basically link the various components in the package, control the order in which these components run, and control whether they run at all. Although precedence constraints can be based on an evaluation operation (or controlled programmatically), most commonly they are based on one of the following execution results:
Completion
Failure
Success
Designing Package Control Flow

You can create control flow within a package via the Control Flow tab of the SSIS designer. Figure 9.4 shows the Control Flow tab in SSIS.
Introducing Data Flow Components

The second main feature you need to control when building an SSIS package is the path of the data: its source, its destination, and how it is transformed between the two. You control this logic through the data flow of the package. It's not technically necessary to have data flow in every package (remember Exercise 9.1?), but it's a safe bet that 99 percent of your SSIS packages will involve data flow. If not, SSIS may not be the best choice of tools for solving the problem at hand. Figure 9.5 shows a typical data flow that has a source, a transformation, and a destination.
FIGURE 9.4: Control Flow tab

FIGURE 9.5: Data flow components (a source's output columns feed a transformation's input columns, whose output columns in turn feed the destinations; each component also exposes an error output)
So let’s look at these three data flow components in more detail.
Data Flow Sources

In SSIS, a source is a data flow component that allows data from external data sources to be made available to other components within the data flow. Table 9.11 shows the sources available with SSIS.

TABLE 9.11: Data Flow Sources

DataReader: Uses a .NET Framework data provider as a data source.

Excel: Uses an Excel file as a data source.

Flat file: Uses a flat file as a data source.

OLE DB: Uses an OLE DB provider as a data source.

Raw file: Extracts raw data from a file.

Script component: Uses a script to extract, transform, or load data.

XML: Uses an XML file as a data source.
Data Flow Transformations

SSIS transformations are components within a data flow that perform a myriad of "data manipulation" tasks. It's useful to know the various transformations that are available with SSIS in SQL Server 2005.

BUSINESS INTELLIGENCE TRANSFORMATIONS

The business intelligence (BI) transformations provide powerful functionality that has not been available in SQL Server to date, allowing you to perform standard BI operations such as cleansing data, mining text, and executing data-mining prediction queries, as shown in Table 9.12. About bloody time!

TABLE 9.12: Business Intelligence Transformations

Fuzzy Grouping: This powerful transformation uses "fuzzy logic" to identify duplicate data, thus standardizing data.

Fuzzy Lookup: This again is a powerful transformation that uses "fuzzy logic" to match data with a reference table, thus cleansing data.

Term Extraction: This transformation extracts terms from text.

Term Lookup: This transformation looks up terms in a reference table and counts the terms extracted from the text.

Data Mining Query: This transformation runs data mining prediction queries.
ROW TRANSFORMATIONS

The row transformations update column values and create new columns on a per-row basis, as shown in Table 9.13.

TABLE 9.13: Row Transformations

Character Map: This transformation applies string functions such as UPPER and LOWER to character data.

Copy Column: This transformation generates a new column in the output based on an input column.

Data Conversion: This transformation converts the data type between the input and output columns.

Derived Column: This transformation creates new output columns based on expressions applied to input columns, such as SUBSTRING(FirstName,1,1).

Script Component: This transformation uses a script to extend the functionality of the ETL process.

OLE DB Command: This transformation runs a SQL statement against each row in a data flow.
ROWSET TRANSFORMATIONS

The rowset transformations create new data sets, including aggregate data sets, pivoted data sets, sample data sets, sorted data sets, and unpivoted data sets, as shown in Table 9.14.

TABLE 9.14: Rowset Transformations

Aggregate: This transformation performs a GROUP BY operation or aggregate operations such as AVERAGE, COUNT, MAX, MIN, and SUM on the data.

Sort: This transformation sorts the data.

Percentage Sampling: This powerful transformation creates a sample data set. It uses a percentage to generate the random data set.

Row Sampling: This transformation also creates a sample data set. But in this case it's based on the exact number of rows you want to return.

Pivot: This transformation pivots the input data.

Unpivot: This transformation unpivots the input data.
SPLIT AND JOIN TRANSFORMATIONS

The split and join transformations distribute a data set to different data sets, create copies of the transformation data set, join multiple data sets into one data set, and perform lookup operations, as shown in Table 9.15.

TABLE 9.15: Split and Join Transformations

Conditional Split: This transformation routes inputs to different outputs depending on the expression provided.

Multicast: This transformation distributes the same inputs to multiple outputs.

Union All: This transformation merges multiple inputs into one output.

Merge: This transformation merges two sorted inputs into one output.

Merge Join: This transformation joins two sorted inputs using a FULL, INNER, or LEFT join.

Lookup: This transformation performs lookups by joining input column values to a reference table using an exact match.
OTHER TRANSFORMATIONS

SSIS includes a number of transformations that can be used to provide auditing, count rows, work with slowly changing dimensions, and perform other operations, as shown in Table 9.16.

TABLE 9.16: Other Transformations

Export Column: This transformation inserts data from a data flow into a file.

Import Column: This transformation reads data from a file into a data flow.

Audit: This transformation provides auditing capability within a data flow.

Row Count: This transformation counts rows as they pass through a data flow and stores the count in a variable.

Slowly Changing Dimension: This transformation maintains slowly changing dimension data in a data warehouse's dimension tables.
Data Flow Destinations

In SSIS, a destination is a data flow component that outputs the data from a data flow to some data store. Destinations have one input and one error output. Table 9.17 shows the destinations available with SSIS.

TABLE 9.17: Data Flow Destinations

Data mining model training: Trains a data mining model.

DataReader: Uses the ADO.NET DataReader interface as a data destination.

Dimension processing: Loads and processes a dimension in SSAS.

Excel: Uses Excel as a data destination.

Flat file: Uses a flat file as a data destination.

OLE DB: Uses an OLE DB provider as a data destination. This is a powerful feature because it pretty much allows you to output data to most modern database engines.

Partition processing: Loads and processes a partition in SSAS.

Raw file: Writes raw data to a file.

Recordset: Uses an ADO recordset as a data destination.

Script component: Uses scripting to extract, transform, or load data.

SQL Server Mobile: Uses a SQL Server Mobile database as a data destination.

SQL Server: Uses a SQL Server 2005 table or view as a data destination.
Designing Package Data Flow

You can create data flow within a package via the Data Flow tab of the SSIS designer. Figure 9.6 shows the Data Flow tab in SSIS.

FIGURE 9.6: Data Flow tab
Introducing Event Handlers

An SSIS event handler is a workflow that executes in response to a particular event being generated by a package, task, or container. SSIS provides a rich set of events you can use:
OnError
OnExecStatusChanged
OnInformation
OnPostExecute
OnPostValidate
OnPreExecute
OnPreValidate
OnProgress
OnQueryCancel
OnTaskFailed
OnVariableValueChanged
OnWarning

Figure 9.7 shows the Event Handlers tab in SSIS.
Introducing Log Providers

SSIS includes the ability to log all sorts of information about packages to help facilitate auditing and troubleshooting. SSIS provides the following log providers (the SQL Server provider is sketched right after this list):
Text file
SQL Server Profiler
SQL Server
Windows Event log
XML file
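As a quick illustration of the SQL Server log provider, here is a minimal query sketch; it assumes the provider's connection manager points at msdb and that the provider has already created its default dbo.sysdtslog90 table there:

-- Review logged errors and warnings for troubleshooting
SELECT source, event, starttime, message
FROM msdb.dbo.sysdtslog90
WHERE event IN ('OnError', 'OnWarning')
ORDER BY starttime DESC;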
Introducing Variables

SSIS also supports variables that allow you to communicate among the components of a package and between packages. SSIS supports the following types of variables:

System variables: System variables hold information about the running package and its objects, such as PackageId, PackageName, and StartTime.

User-defined variables: User-defined variables are variables defined by the user.
Figure 9.8 shows the Variables window in SSIS designer.
FIGURE 9.7: Event Handlers tab

FIGURE 9.8: Variables window
Understanding the SSIS Architecture

The SSIS architecture consists of the following components:

A data flow task that comprises the data flow engine and data flow components

Integration Services object model (ISOM)
SSIS runtime and related executables
SSIS service
Figure 9.9 shows the relationship between the various components that make up the SSIS architecture.

FIGURE 9.9: SSIS architecture (the SSIS designer, SSIS wizards, command-line utilities, and custom applications sit on top of the native and managed object model, which drives the Integration Services runtime and service; packages, stored as .dtsx files or in the msdb database, contain tasks, containers, log providers, enumerators, connection managers, and event handlers, while the data flow task has its own object model with sources, transformations, destinations, and custom data flow components)
Introducing the Integration Services Object Model

SSIS fully supports the Microsoft .NET Framework, which allows developers to program SSIS using their choice of .NET-compliant languages, as well as supporting native code. The runtime engine and the data flow engine are written in native code but are available through a fully managed object model. The ISOM allows developers to program SSIS in two ways:
Programmatically create, configure, and run SSIS packages.
Programmatically extend SSIS packages by writing custom components that provide custom functionality.
Extending SSIS with Scripting

Although SSIS comes with a large variety of tasks and components for building complex packages, it was important for Microsoft to allow database administrators (DBAs) and developers to easily create custom scripts so as to extend the functionality of SSIS.
SQL Server 2000 had an ActiveX Script Task item. SSIS has included this task for backward compatibility only, for users migrating DTS packages to SSIS. You should not write new scripts using the ActiveX Script Task item.
You can script in SSIS using the Visual Studio for Applications (VSA) environment. SSIS allows you to write scripts using the Visual Basic .NET language only. Other .NET languages such as C# are not available.
Extending SSIS with Custom Tasks and Components

If the existing set of control flow and data flow objects included with SSIS is not sufficient for your purposes, you can develop your own custom objects, including the following:

Custom connection managers: Custom connection managers that connect to external data sources not supported by SSIS

Custom data flow components: Custom data flow components that can be configured as sources, transformations, and destinations

Custom enumerators: Custom enumerators that go beyond the supported set of iteration options

Custom log providers: Custom log providers that can log events not currently supported

Custom tasks: Custom tasks that perform units of work not covered by the built-in tasks
Testing and Debugging SSIS Packages

The Business Intelligence Development Studio includes a number of tools for debugging the control flow and data flow of packages. The following allow you to debug the control flow:

Breakpoints: The SSIS designer allows you to set breakpoints at the package or component level.

Debug windows: The Business Intelligence Development Studio supports debug windows.

Progress reporting: The SSIS designer reports the progress of a control flow through color coding that indicates its state.

Figure 9.10 shows this functionality and the various windows of the debugging environment. Additionally, SSIS and the SSIS designer include a number of features that allow you to troubleshoot the data flow:

Data viewers: Data viewers display data between two components in a data flow.

Row counts: The SSIS designer reports the number of rows passing through a data flow.

Progress reporting: Additionally, the SSIS designer reports the progress of an executing package by displaying each component in a color indicating its status.

Finally, you can also debug script tasks in the VSA environment by setting breakpoints. So, let's wrap up the SSIS topic with the slightly more complex Exercise 9.2.

FIGURE 9.10: SSIS designer debugging environment
EXERCISE 9.2
Creating a Not-So-Simple SSIS Package

Management has asked you to build a spam engine. You have decided to use SSIS for this task. You have access to customer, vendor, and job candidate email addresses maintained in three different "systems." You need to combine the data from these three data sets and then process it before you can use it as a basis for spamming. The process must run after a daily database check. You have already written the database check SSIS package and plan to modify it. Let's add some further tasks, dependencies, and a control flow to the SSIS package created in Exercise 9.1.
1. You'll first want to create a table in the tempdb database to act as a destination. So, drag the Execute SQL Task item to the Control Flow designer pane.

2. Right-click, and choose Edit to expose the properties of the Execute SQL Task item.
3. As you can see, you can configure many options, although you'll change only a few in this exercise. First click the Connection property, and then select from the drop-down list for that property. The following dialog box should be visible.

4. As you can see, you have the AdventureWorks connection defined from the earlier Check Database Integrity Task item. In this case, however, you want to form a new connection to the tempdb database. So, click the New button, and fill in the properties as shown here:
5. Click the OK button to close the connection manager.

6. Click the OK button to close the Configure OLE DB Connection Manager window.

7. Click the ellipsis button, and type the T-SQL statement shown here.
8. Click the OK button. If you want to check the T-SQL statement, click the Parse Query button. Check to make sure all the bits and bobs are correct as shown here, and click the OK button.

9. Time for a data flow! Click the Data Flow tab in the SSIS designer. You should notice that the Control Flow Toolbox has been replaced by the Data Flow Toolbox.
10. Guess what you should do? That’s right. Click that long blue sentence in the middle of the Data Flow tab.
11. Add your first data source, the vendors. Drag an OLE DB source onto the Data Flow tab.

12. Click the OLE DB Source box, and rename it to Vendors.

13. Now you need to edit the properties of the renamed Vendors OLE DB source by right-clicking it and choosing the Edit option.
14. You will notice, if you look at the available list of OLE DB connections in the OLE DB connection manager drop-down list, that only (local).tempdb exists. So, click the New button, and select the (local).AdventureWorks1 data connection before clicking the OK button.
15. Choose the [Purchasing].[vVendor] view in the last drop-down list so your OLE DB Source Editor dialog box looks like this.
16. Repeat steps 11–15, but this time for the [HumanResources].[vJobCandidate] view, changing the OLE DB source name to Job Candidates.
17. This time, though, let's tweak some of the options. You really are just after the names and email addresses to build your spam list, so let's get rid of superfluous columns by removing them from the column list after clicking the Columns page.
18. Click OK to close the Job Candidates OLE DB source. You should see the screen shown here.
19. Let’s add one more data flow source by dragging across the OLE DB source and renaming it to Customers before editing its properties.
20. This time you will use a query instead of a table or view, so change the Data Access Mode drop-down list to SQL Command.
21. Type the query as shown here, and click OK to close the Customers OLE DB source. You should see the screen shown here.
22. Click the OK button. You should see the screen shown here.
23. Now flag where the email addresses came from. Drag across the Derived Column data flow transformation, and drop it next to the Vendors OLE DB source.
24. Click the Vendors OLE DB source. A green arrow and a red arrow should appear.

25. Drag the green arrow onto the Derived Column data flow transformation, as shown here.
26. Edit the Derived Column transformation as shown here.

27. Rename the Derived Column transformation to Flag - V.

28. Repeat steps 23–25 but this time for Job Candidates, as shown here.
29. Rename the second Derived Column transformation to Flag - J.

30. Now you need to combine the three data sets. Drag across the Union All data transformation.

31. Link the Customers OLE DB source, Flag - V, and Flag - J data transformations together, as shown here (I've reordered objects for clarity).
32. You now need to union the data together. Edit the Union All data transformation by double-clicking it.
33. As you can see, it has tried to do its best, but you need to fix it up a bit. Change the various options as shown here.
34. You are ready to pipe all that to a SQL Server table! Drag the SQL Server Destination to the Data Flow tab.
35. Link the Union All data flow transformation to the SQL Server Destination as shown here.
36. Double-click the SQL Server Destination to open the SQL Destination Editor.

37. Change the OLE DB connection manager to (local).tempdb.
38. You will notice that there is no table in the Use a Table or View drop-down list. Remember, you plan to create it via the Execute SQL Task item before this data flow executes, so click the New button and type the code shown here.
39. Click the OK button. You should now see the screen shown here.
40. You now need to map the columns. Click the Mappings page.
41. As you can see, it’s not quite correct, so fix it as shown here.
42. Click the OK button.

43. You've finished with the data flow. Confirm that your SSIS designer looks similar to what's shown here, and then return to the Control Flow tab.

44. Rename the Data Flow Task item to Generate SPAM List.

45. Rename the Execute SQL Task item to Create SPAM Table.

46. Link the Check Database Integrity Task and Create SPAM Table control flow items to the Generate SPAM List data flow task.
47. You need to further process your spam list, so drag an Execute SQL Task item to the Control Flow tab.

48. Rename it to Process SPAM List.

49. Link the Generate SPAM List data flow task to the Process SPAM List item.

50. Open the Execute SQL Task Editor for the Process SPAM List item.

51. Change the Connection property to (local).tempdb.

52. Click the ellipsis button of the SQLStatement property, and enter the T-SQL query shown here.
53. Hee! Hee! Hee! How did it go with the second DELETE statement? Don’t worry about it; just type the first DELETE statement because it will be sufficient.
54. Management wants to be emailed if the spam list is not generated, so let’s drag the Send Mail Task item to the Control Flow tab.
55. Double-click the Send Mail Task item.

56. Click the Mail page.
57. Configure the SMTP Server with dummy information. (Note: it does not have to be accurate for this proof of concept to work.)
58. Fill in the rest of the details as shown here, and click the OK button. (Again, the properties do not have to be real.)
59. Now you need to send the email if the spam list does not get generated. So, link the Process SPAM List task to the Send Mail Task item.
60. Since you want an email to be sent only if the preceding task fails, right-click the arrow, and select Failure.
61. It’s about time to execute this SSIS package. Press F5! The SSIS package should start executing.
Hot strawberries! It should all have worked fine. As you saw, the Send Mail Task item did not fire because there was no error! I’ll let you work out how you can get SSIS to spam everyone in that spam list!
You’ll find some excellent SSIS tutorials in SQL Server 2005 Books Online in the “Integration Services Tutorials” topic. I would extremely, megahighly recommend you go through them before taking the SQL Server 2005 exams to become familiar with SSIS.
Using Alternate Data Integration Technologies

Although I have concentrated on SSIS so far in this chapter, it is not necessarily the best technique to use if you need to extract, transform, and load data. SQL Server 2005 offers a large number of options when it comes to scheduling tasks or getting data from here to there or from there to here.
Don't forget your T-SQL Data Manipulation Language (DML) commands! In particular, you can nest SELECT statements within INSERT statements. For example, you might want to populate a lookup table in a new database solution from an existing database solution, in which case you could execute the following:

INSERT [NewDatabaseSolution].[Customers].[Country]
SELECT * FROM [ExistingDatabaseSolution].[Customers].[Country]

In other words, you don't need to go through the complexity of designing an SSIS package. Now the previous example would work only within a SQL Server instance, but you can also manipulate data using linked servers or ad hoc distributed queries, as you will discover shortly, which gives you the flexibility of running statements such as the following:

INSERT [ProductionServer].[NewDB].[Customers].[Country]
SELECT * FROM [DevelopmentServer].[ExistingDB].[Customers].[Country]
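If you have not defined a linked server, an ad hoc distributed query can achieve much the same thing. The following is a sketch only; the server and object names are illustrative, and the 'Ad Hoc Distributed Queries' option must be enabled via sp_configure in SQL Server 2005 for this to run:

-- Pull the remote table through OPENROWSET instead of a linked server
INSERT [NewDB].[Customers].[Country]
SELECT *
FROM OPENROWSET('SQLNCLI',
    'Server=DevelopmentServer;Trusted_Connection=yes;',
    'SELECT * FROM ExistingDB.Customers.Country');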
So in the remainder of the chapter, you will examine other techniques at your disposal to integrate and manipulate data:
The BCP utility
BULK INSERT statement
Linked servers
Replication
Using the SQL Server Agent

The SQL Server Agent of SQL Server 2005 is a separate service that is responsible for automating administrative tasks, such as the following:
Running scheduled tasks, such as database checks (DBCC), database backups, and data processing/loading commands
Processing and generating alerts based on performance objects and counters, such as generating an email and taking corrective action when a database’s transaction log is 90 percent full
But don't forget you can use the SQL Server Agent to schedule pretty much anything, including many of the utilities and commands you will be looking at in the remainder of the chapter, not to mention those covered in the earlier chapters of this book. In fact, technologies such as replication and SSIS rely heavily on the SQL Server Agent. Figure 9.11 demonstrates the powerful execution and scheduling capabilities of the SQL Server Agent, and a job-creation sketch follows the figure. I covered the SQL Server Agent in more detail when we looked at "Designing Objects That Perform Actions" at the end of Chapter 2.
FIGURE 9.11: SQL Server Agent scheduled tasks
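Behind the GUI, Agent jobs are simply rows in msdb created through system stored procedures. Here is a minimal sketch of scheduling a nightly task; the job name, command, and schedule values are illustrative, not anything this chapter prescribes:

USE msdb ;
GO
-- Create the job, its single T-SQL step, a daily schedule, and
-- attach it to the local server
EXEC sp_add_job @job_name = N'Nightly AdventureWorks Check' ;
EXEC sp_add_jobstep @job_name = N'Nightly AdventureWorks Check',
    @step_name = N'Run DBCC CHECKDB',
    @subsystem = N'TSQL',
    @command   = N'DBCC CHECKDB (N''AdventureWorks'');' ;
EXEC sp_add_jobschedule @job_name = N'Nightly AdventureWorks Check',
    @name = N'Nightly at 11 PM',
    @freq_type = 4,           -- daily
    @freq_interval = 1,       -- every day
    @active_start_time = 230000 ;
EXEC sp_add_jobserver @job_name = N'Nightly AdventureWorks Check',
    @server_name = N'(local)' ;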
Using the BCP Utility

The venerable BCP utility has always existed with SQL Server and basically allows you to both import and export data against a SQL Server table or view. It is quite efficient and flexible, in that it even allows you to export the results of a query. Another nicety is that it supports a format file that you can use as a means of repeatedly importing data with complex BCP options.
If you are new to SQL Server, don’t forget that the BCP utility is a command-line utility, so you have to run it from a command prompt! Yes, you can run it through a call to the xp_cmdshell extended stored procedure, but in SQL Server 2005 that functionality is turned off in a default installation. In this case, you really should be using the BULK INSERT statement to load data instead.
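For completeness, a sysadmin can re-enable xp_cmdshell with sp_configure, though you should think twice before doing so; this is a sketch of the standard approach:

-- Re-enable xp_cmdshell (disabled by default in SQL Server 2005)
EXEC sp_configure 'show advanced options', 1 ;
RECONFIGURE ;
EXEC sp_configure 'xp_cmdshell', 1 ;
RECONFIGURE ;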
The full syntax for the BCP utility is as follows:

BCP {[[database_name.][owner].]{table_name | view_name} | "query"}
    {in | out | queryout | format} data_file
    [-mmax_errors] [-fformat_file] [-x] [-eerr_file]
    [-Ffirst_row] [-Llast_row] [-bbatch_size]
    [-n] [-c] [-w] [-N] [-V (60 | 65 | 70 | 80)] [-6]
    [-q] [-C { ACP | OEM | RAW | code_page } ]
    [-tfield_term] [-rrow_term]
    [-iinput_file] [-ooutput_file] [-apacket_size]
    [-Sserver_name[\instance_name]] [-Ulogin_id] [-Ppassword]
    [-T] [-v] [-R] [-k] [-E] [-h"hint [,...n]"]
The switches are case sensitive.
I certainly won't cover all the BCP switches, but Table 9.18 shows the more commonly used ones.

TABLE 9.18: BCP Switches

-b batch_size: Controls the number of rows that will be imported per batch before being committed.

-c: Performs a BCP operation using character data.

CHECK_CONSTRAINTS: Specifies that check constraints must be checked. (This is a hint supplied via the -h switch.)

-E: Specifies that the identity values in the import file should be used as opposed to being regenerated by SQL Server 2005.

-e err_file: Specifies the name of the error file.

-F first_row: Specifies the first row to import or export. Useful for importing text files where the first few rows contain header information.

-f format_file: Specifies the name of the format file. (The default is a text file.)

-i input_file: Specifies the input file.

-k: Specifies that empty columns should have NULLs inserted as opposed to default values.

-L last_row: Specifies the number of the last row to export or import.

-m max_errors: Specifies the maximum number of errors, excluding server-side errors, that can occur before the BCP operation is canceled. (The default is 10.)

-n: Performs a BCP operation using native data types.

-N: Performs a BCP operation using Unicode characters for character data types and native data types for noncharacter data types.

-o output_file: Specifies the output file.

ORDER(column [ASC | DESC] [,...n]): Specifies the sort order of the data in the data file. (This is a hint supplied via the -h switch; it can be used to improve performance if the data being loaded is sorted according to the table's clustered index.)

-P password: Specifies the password. (The default password is NULL.)

-r row_term: Specifies the row terminator. (The default is \n, the newline character.)

ROWS_PER_BATCH = bb: Specifies the number of rows per batch. (This is a hint supplied via the -h switch.)

-S server_name[\instance_name]: Specifies the SQL Server instance.

-T: Specifies that a trusted connection should be used. (If -T is not specified, -U and -P must be specified.)

-t field_term: Specifies the field terminator. (The default is \t, the tab character.)
BCP is really easy to use if you want to import or export a table through a file. Just fire up a command prompt, and away you go! The following example shows a BCP export of a query to a Unicode text file:

BCP "SELECT Title, FirstName + ' ' + UPPER(LastName) AS Contact, EmailAddress FROM AdventureWorks.Person.Contact" queryout "Customers.txt" -T -w
The next example shows how to generate an XML format file:

BCP AdventureWorks.Sales.CurrencyRate format nul -T -c -x -f C:\CurrencyRate.xml
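Once generated, that XML format file is not just for BCP. As a sketch (the data file path here is illustrative), the BULK INSERT statement covered later in this chapter can consume the same format file:

-- Reuse the BCP-generated XML format file for a T-SQL bulk load
BULK INSERT AdventureWorks.Sales.CurrencyRate
FROM 'C:\CurrencyRate.txt'
WITH ( FORMATFILE = 'C:\CurrencyRate.xml' ) ;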
Make sure you are working with the expected version of BCP! If you have multiple instances of SQL Server installed, you might be using an earlier version of BCP, depending on the PATH settings, and so on. You can check the version you are executing by using the -v switch.
In Exercise 9.3, you'll use the BCP utility.

EXERCISE 9.3
Using the BCP Utility

You want to export just the product ID, the product name, and an updated list price from the Product table to a text file so you can send it to a number of colleagues via email for them to load into their database solution. You have been asked to use the semicolon as the field delimiter.
1. Open a command prompt in your Windows environment.

2. Type the following code, and press Return in the command prompt:

BCP "SELECT ProductId, Name, ListPrice * 1.1 FROM AdventureWorks.Production.Product" queryout C:\LatestProductPricing.txt -c -T -t;

You should see something like this being returned in your command prompt:

Starting copy...
504 rows copied.
Network packet size (bytes): 4096
Clock Time (ms.) total     66

3. Before emailing the text file, you need to ensure it is correct; type the following code, and press Return in the command prompt:

notepad C:\LatestProductPricing.txt

4. Close the Notepad program.
Don’t delete the text file because you will be using it in the next exercise!
Using the BULK INSERT Statement

SQL Server 7.0 introduced the BULK INSERT statement, probably because of a high demand for some T-SQL statement that would load external text files into a table or view. In earlier versions of SQL Server, the only way to achieve this was running the BCP utility within an xp_cmdshell call, which was a cumbersome technique. The BULK INSERT statement has always been a favorite of mine because it is lean and efficient and can be executed within a user-defined transaction!
The security requirements for the BULK INSERT statement have changed in SQL Server 2005.
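In particular, besides INSERT permission on the target table, SQL Server 2005 requires the server-level ADMINISTER BULK OPERATIONS permission (implied by membership in the bulkadmin fixed server role). A sketch, with an illustrative login name:

USE master ;
GO
-- Grant the server-level bulk load permission to a login
GRANT ADMINISTER BULK OPERATIONS TO [INFORMARE\Paulina] ;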
The full syntax of the BULK INSERT statement is as follows. The options are much easier to understand because it is a virgin. In other words, it does not have the history or baggage that poor old BCP has.

BULK INSERT
   [ database_name . [ schema_name ] . | schema_name . ]
   [ table_name | view_name ]
   FROM 'data_file'
   [ WITH
  (
   [ [ , ] BATCHSIZE = batch_size ]
   [ [ , ] CHECK_CONSTRAINTS ]
   [ [ , ] CODEPAGE = { 'ACP' | 'OEM' | 'RAW' | 'code_page' } ]
   [ [ , ] DATAFILETYPE = { 'char' | 'native' | 'widechar' | 'widenative' } ]
   [ [ , ] FIELDTERMINATOR = 'field_terminator' ]
   [ [ , ] FIRSTROW = first_row ]
   [ [ , ] FIRE_TRIGGERS ]
   [ [ , ] FORMATFILE = 'format_file_path' ]
   [ [ , ] KEEPIDENTITY ]
   [ [ , ] KEEPNULLS ]
   [ [ , ] KILOBYTES_PER_BATCH = kilobytes_per_batch ]
   [ [ , ] LASTROW = last_row ]
   [ [ , ] MAXERRORS = max_errors ]
   [ [ , ] ORDER ( { column [ ASC | DESC ] } [ ,...n ] ) ]
   [ [ , ] ROWS_PER_BATCH = rows_per_batch ]
   [ [ , ] ROWTERMINATOR = 'row_terminator' ]
   [ [ , ] TABLOCK ]
   [ [ , ] ERRORFILE = 'file_name' ]
  )]
The following example shows a bulk insert of a Unicode text file with a tab as the field terminator and a newline as a row terminator. Since the first row of the text file contains a header, you tell SQL Server 2005 to ignore it. Finally, since you are loading more than 100 million records, you request a table lock, which can significantly improve performance.

BULK INSERT AdventureWorks.Sales.SalesOrderDetail
FROM 'C:\Informare\NigerianCensus1991_Adults.txt'
WITH (
    DATAFILETYPE = 'widechar',
    FIELDTERMINATOR = '\t',
    FIRSTROW = 2,
    ROWTERMINATOR = '\r\n',
    TABLOCK
)
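And to illustrate the earlier point about user-defined transactions, here is a minimal sketch (the file name is illustrative) that rolls the whole load back if anything goes wrong:

BEGIN TRY
    BEGIN TRANSACTION ;
    BULK INSERT AdventureWorks.Sales.SalesOrderDetail
    FROM 'C:\Informare\Orders.txt'
    WITH ( FIELDTERMINATOR = '\t', ROWTERMINATOR = '\n' ) ;
    COMMIT TRANSACTION ;
END TRY
BEGIN CATCH
    -- Undo the partial load on any error
    IF @@TRANCOUNT > 0
        ROLLBACK TRANSACTION ;
END CATCH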
Exercise 9.4 shows how to use the BULK INSERT statement to load the text file you created earlier in Exercise 9.3.

EXERCISE 9.4
Using the BULK INSERT Statement

You have since emailed the LatestProductPricing.txt file to a junior DBA named Paulina, who now needs to load it into a new table. She will need to create a new table before loading the data into it.
1.
Open SQL Server Management Studio, and connect using Windows Authentication.
2. Click the New Query toolbar button to open a new query window.
3. Type the following T-SQL code, and execute it:
USE tempdb ;
GO
-- Create new table for BULK INSERT
CREATE TABLE [LatestProductPricing] (
    ProductNumber INT,
    ProductName   NVARCHAR(50),
    ProductPrice  MONEY
)
4. Once the table has been created, Paulina can load the data into it using the BULK INSERT statement. Type the following T-SQL code, and execute it:
BULK INSERT [LatestProductPricing]
FROM 'C:\LatestProductPricing.txt'
WITH (FIELDTERMINATOR = ';')
5. Let's make sure you have loaded the data into the table! Type the following T-SQL code, and execute it:
SELECT * FROM LatestProductPricing
I know you want to, but still don’t delete the text file because you will be using it in the next exercise.
Using Linked Servers
The ability to create a linked server is one of my favorite features because it allows you to create permanent links within a SQL Server solution to heterogeneous data sources such as text files, Microsoft Access databases, Microsoft Excel spreadsheets, Active Directory, third-party RDBMSs—pretty much anything as long as a driver/provider exists! You can then create views against the linked server's data within a database so that users are accessing the data through the familiar relational methods without knowing that SQL Server is retrieving the data from a heterogeneous data source! Figure 9.12 shows the architecture of how linked servers work. Yes, there are performance implications because you are going across the network, but in terms of flexibility the options are endless! Exciting stuff!
Creating Linked Servers
Generally speaking, you must perform two steps to set up a linked server environment:
1. Create a link between the SQL Server instance and the external heterogeneous data source.
2. Configure the security context that SQL Server will use on behalf of database users.
You’ll now look at the two system stored procedures you can use to configure linked servers.
FIGURE 9.12   Linked server architecture (a client connects to SQL Server, which reaches heterogeneous data sources such as SQL Server databases, Oracle databases, Jet database files, Excel spreadsheets, FoxPro and dBase files, Active Directory, Index Server, and ODBC sources such as Sybase, Informix, and DB2 through the OLE DB interface and the corresponding OLE DB providers)
sp_addlinkedserver
The sp_addlinkedserver system stored procedure allows you to create a permanent link between a SQL Server instance and a heterogeneous data source. The full syntax of the sp_addlinkedserver system stored procedure is as follows:
sp_addlinkedserver [ @server= ] 'server'
    [ , [ @srvproduct= ] 'product_name' ]
    [ , [ @provider= ] 'provider_name' ]
    [ , [ @datasrc= ] 'data_source' ]
    [ , [ @location= ] 'location' ]
    [ , [ @provstr= ] 'provider_string' ]
    [ , [ @catalog= ] 'catalog' ]
The following example shows a linked server being created on a development SQL Server to a production SQL Server using the SQL Native Client OLE DB provider (SQLNCLI) so that you can copy data from production to development for testing as required:
EXEC sp_addlinkedserver
    @server = 'Production',
    @srvproduct = '',
    @provider = 'SQLNCLI',
    @datasrc = 'Informare\UnitedNations'
GO
The following simple example shows a linked server being created against a Microsoft Access database:
EXEC sp_addlinkedserver
    @server = 'PlatypusData',
    @provider = 'Microsoft.Jet.OLEDB.4.0',
    @srvproduct = 'OLE DB Provider for Jet',
    @datasrc = 'D:\Informare\PlatypusResearch.mdb'
GO
And finally, this is another simple example of creating a linked server against an Excel spreadsheet:
EXEC sp_addlinkedserver
    'GreyNurseSharkData',
    'Jet 4.0',
    'Microsoft.Jet.OLEDB.4.0',
    'D:\Informare\GreyNurseSharkTags.xls',
    NULL,
    'Excel 5.0'
GO
sp_addlinkedsrvlogin
The sp_addlinkedsrvlogin system stored procedure controls the security context that is used when a particular database user tries to access data across a linked server. In other words, SQL Server 2005 connects to the heterogeneous data source in a manner controlled by the DBA. The DBA has the following options:
Impersonation   In this instance, SQL Server will impersonate the login used by the person connected to the SQL Server instance. This works great for enterprise environments where users and/or user groups have access to multiple SQL Server instances. It will also work with SQL Server logins as long as the usernames and passwords are identical on both instances.
Mapping   In this instance, SQL Server will log in using explicit login settings, which are different from the logins used by people connected to the SQL Server instance. This is more appropriate when the logins on the local instance do not exist on the remote data source.
Some heterogeneous data sources have no security.
The full syntax of the sp_addlinkedsrvlogin system stored procedure is as follows:
sp_addlinkedsrvlogin [ @rmtsrvname = ] 'rmtsrvname'
    [ , [ @useself = ] 'TRUE' | 'FALSE' | 'NULL' ]
    [ , [ @locallogin = ] 'locallogin' ]
    [ , [ @rmtuser = ] 'rmtuser' ]
    [ , [ @rmtpassword = ] 'rmtpassword' ]
The following example configures all logins so that they can access the linked server created earlier using their own credentials (with only the server name supplied, @useself defaults to TRUE, so this is impersonation):
EXEC sp_addlinkedsrvlogin 'Production'
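The alternative is an explicit mapping. The following sketch maps all local logins to the Jet database's default Admin account for the PlatypusData linked server created earlier; the remote username and blank password are assumptions typical of an unsecured Access database:
EXEC sp_addlinkedsrvlogin
    @rmtsrvname = 'PlatypusData',
    @useself = 'FALSE',
    @locallogin = NULL,          -- NULL applies the mapping to all local logins
    @rmtuser = 'Admin',          -- assumed remote user
    @rmtpassword = ''            -- assumed blank password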
Querying Linked Servers
Once a linked server has been configured, you can run distributed, heterogeneous queries against it using your standard DML statements. It really is that simple.
For linked servers that are SQL Server instances, you can also execute remote stored procedures. But you need to configure the remote SQL Server to allow execution of its stored procedures remotely.
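For a SQL Server linked server, you can enable remote procedure calls with sp_serveroption and then call the procedure using four-part naming. A minimal sketch against the Production linked server defined earlier (the stored procedure shown is one that ships with the AdventureWorks sample database):
-- Allow outgoing remote procedure calls against the linked server
EXEC sp_serveroption 'Production', 'rpc out', 'true'
GO
-- Execute a remote stored procedure via four-part naming
EXEC Production.AdventureWorks.dbo.uspGetEmployeeManagers @EmployeeID = 1
GO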
When querying linked servers, it is important to remember to fully qualify the linked server’s table or view using the following notation: server.database.schema.object
The following example shows a query against one of the earlier created linked servers:
SELECT *
FROM PlatypusData...ProcreationPattern
WHERE Regularity = 'Daily'
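Against a SQL Server linked server, you typically supply all four parts of the name. For example, here is a sketch that queries the Production linked server created earlier (the table choice is illustrative):
SELECT TOP 10 ContactID, FirstName, LastName
FROM Production.AdventureWorks.Person.Contact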
The advantage of pass-through queries is that they use the linked servers’ resources because they are passed through uninterrupted to the linked server.
Using the OPENQUERY Function
An alternative is to use the OPENQUERY function, which is designed to execute a pass-through query against the linked server. The syntax for the OPENQUERY function is simple:
OPENQUERY ( linked_server , 'query' )
The following example shows how to use the OPENQUERY function based on the previous linked server:
SELECT *
FROM OPENQUERY
    ( PlatypusData,
      'SELECT Regularity, COUNT(*) AS Lucky
       FROM ProcreationPattern
       GROUP BY Regularity' )
Using Ad Hoc Distributed Queries
Linked servers are not for everyone. In certain cases, you'll want to access remote heterogeneous data in an ad hoc manner, such as when you need to access the remote data source infrequently or for security reasons. SQL Server 2005 offers two main functions for ad hoc distributed queries. By default, an installation of SQL Server 2005 does not allow ad hoc remote connections, so you need to explicitly enable that functionality in the Surface Area Configuration tool, as discussed in Chapter 4 and again shown in Figure 9.13.
FIGURE 9.13   Allowing ad hoc remote connections through the SSAC tool
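You can also enable the same functionality from T-SQL by using sp_configure, which is exactly what the error message shown later in Exercise 9.5 recommends:
sp_configure 'show advanced options', 1 ;
RECONFIGURE ;
GO
sp_configure 'Ad Hoc Distributed Queries', 1 ;
RECONFIGURE ;
GO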
Using the OPENDATASOURCE Function
The OPENDATASOURCE function allows you to connect to a heterogeneous data source in an ad hoc manner without using a linked server name. You can basically use the OPENDATASOURCE function wherever you would typically use the linked server name. The syntax of the OPENDATASOURCE function is simple:
OPENDATASOURCE ( provider_name, init_string )
The following example shows the OPENDATASOURCE function being used to access the [Contact] table on a remote SQL Server instance called Informare:
SELECT *
FROM OPENDATASOURCE(
    'SQLNCLI',
    'Data Source=Informare; Integrated Security=SSPI'
    ).AdventureWorks.Person.Contact
Using the OPENROWSET Function
Yet another function (I promise, this is the last one) you can use is the OPENROWSET function. In this case, you are embedding your query (or object) within the function's parameter. You can reference the OPENROWSET function as the destination table of an INSERT, UPDATE, or DELETE statement. The full syntax for the OPENROWSET function is as follows:
OPENROWSET
( { 'provider_name' , { 'datasource' ; 'user_id' ; 'password'
        | 'provider_string' }
      , { [ catalog. ] [ schema. ] object | 'query' }
  | BULK 'data_file' ,
      { FORMATFILE = 'format_file_path' [ <bulk_options> ]
      | SINGLE_BLOB | SINGLE_CLOB | SINGLE_NCLOB }
} )

<bulk_options> ::=
    [ , CODEPAGE = { 'ACP' | 'OEM' | 'RAW' | 'code_page' } ]
    [ , ERRORFILE = 'file_name' ]
    [ , FIRSTROW = first_row ]
    [ , LASTROW = last_row ]
    [ , MAXERRORS = maximum_errors ]
    [ , ROWS_PER_BATCH = rows_per_batch ]
The following example shows the OPENROWSET function being used to connect to a remote server called Informare:
SELECT t.*
FROM OPENROWSET
    ( 'SQLNCLI',
      'Server=Informare; Trusted_Connection=yes;',
      'SELECT [Group], COUNT(*) AS TerritoryCount
       FROM Sales.SalesTerritory
       GROUP BY [Group]' ) AS t
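The BULK option of OPENROWSET can also read a file directly, without a linked server or a remote connection. For instance, the following sketch pulls the entire text file created earlier in this chapter into a single-row, single-column rowset (the column is always named BulkColumn):
SELECT BulkColumn
FROM OPENROWSET( BULK 'C:\LatestProductPricing.txt', SINGLE_CLOB ) AS doc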
For a nice discussion on whether you should use BULK INSERT, INSERT...SELECT, SSIS, BCP, and so forth, read the “Overview of Bulk Import and Bulk Export” topic in SQL Server 2005 Books Online.
Exercise 9.5 will show how you can use the OPENROWSET function. It's not the best example because you are working with the text file created earlier.
EXERCISE 9.5
Using the OPENROWSET Statement
You have also emailed the LatestProductPricing.txt file to a senior DBA named Olya, who also needs to load it into a new table. But she has decided to create a new table on the fly as she loads the data into it.
1. Open SQL Server Management Studio, and connect using Windows Authentication.
2. Click the New Query toolbar button to open a new query window.
3. Type the following T-SQL code, and execute it:
USE tempdb ;
GO
SELECT t.*
INTO [TempLatestProductPricing]
FROM OPENROWSET(
    'MSDASQL',
    'Driver={Microsoft Text Driver (*.txt; *.csv)}; DefaultDir=C:\;',
    'SELECT * FROM LatestProductPricing.txt'
) AS t
If you get the following error message, you will have to enable ad hoc remote connections through the Surface Area Configuration tool, as shown earlier in Figure 9.13:
Msg 15281, Level 16, State 1, Line 1
SQL Server blocked access to STATEMENT 'OpenRowset/OpenDatasource' of component 'Ad Hoc Distributed Queries' because this component is turned off as part of the security configuration for this server. A system administrator can enable the use of 'Ad Hoc Distributed Queries' by using sp_configure. For more information about enabling 'Ad Hoc Distributed Queries', see "Surface Area Configuration" in SQL Server Books Online.
4. So let's see what Olya has done. Type the following T-SQL code, and execute it:
SELECT * FROM TempLatestProductPricing
As you can see, the Microsoft text driver has assumed that the first row of the text file contained the column names of the entity! Actually, the end result is pretty much a dog's breakfast! Welcome to working with primitive text files.
OK, OK…you can delete the text file now!
Using Replication
From a database architect or database design point of view, you should also look briefly at replication because it offers another means of automatically "moving" data between databases depending on your environment. Replication has been around since SQL Server 6.0 and has consistently improved and offered more options through the versions. Figure 9.14 shows the various components of the replication architecture.
FIGURE 9.14   Replication architecture (custom applications interact with the publisher and subscriber; articles are grouped into publications, and replication agents move the published data from the publisher through the distributor to the subscriber)
Understanding the Replication Components
It's important to understand these various components and the roles they play in replication, so let's go through them:
Article   An article is basically the smallest unit of replication. It can be a table, view, or stored procedure. It represents the data or a subset of the data from a SQL Server entity.
Distributor   The distributor is a SQL Server instance that stores replication's metadata in a system database typically called distribution. A distributor can be a remote SQL Server instance or reside on the publisher.
Publication   A publication is a collection of one or more articles from a database that form the basis of a unit of replication.
Publisher   A publisher is a SQL Server instance that makes its database(s) available for replication.
Subscriber   A subscriber is a SQL Server instance that receives replicated data.
Typically the subscriber is considered “read-only” in transactional replication because you have a strict hierarchy. If you modify data at the subscriber, you run the risk of that data being overwritten by a subsequent modification at the publisher.
Subscription A subscription is a request for replicated data. A subscription can use either a push model or a pull model.
Replication can actually work with other heterogeneous relational database engines such as Access (seriously) and Oracle. In fact, it should be able to work using the OLE DB interface with any non–SQL Server subscriber.
Understanding the Replication Agents
Several replication agents are involved in a replication topology depending on the type of replication configured, so let's look at the roles they play:
Distribution Agent (distrib.exe)   Used in transactional and snapshot replication. It monitors the distribution database and replicates transactions to the various subscribers.
Log Reader Agent (logread.exe)   Used in transactional replication. It runs on the distributor and monitors published databases for incoming transactions, which it copies as required from the published database's transaction log to the distribution database.
Merge Agent (replmerge.exe)   Used by merge replication; it reconciles conflicts depending on its configuration.
Queue Reader Agent (qrdrsvc.exe)   Used by transactional replication with the queued updating option.
Snapshot Agent (snapshot.exe)   Used by all replication types. It is responsible for the initial schema and snapshot of data that forms the basis of replication.
The SQL Server Agent is the actual process that is used to schedule and control how the various replication agents run inside the replication topology.
Understanding the Replication Types
SQL Server replication is extremely flexible, making it a great choice for building complex automated replication scenarios. Replication can occur continuously, or it can be scheduled to run periodically, which makes more sense where you need to replicate across wide area network (WAN) links after-hours. In any case, three types of replication exist:
Merge replication   With merge replication, data modifications are tracked via triggers on both the publisher and the subscriber. When the publisher and subscriber try to synchronize, they each send a list of modified rows and attempt to merge the changes to get a consistent view of the data. In this type of replication, data modification conflicts can occur, so you need to configure some form of conflict resolution.
Transactional replication   With transactional replication, you stream all the DML operations as required from the publisher to the subscriber. As indicated, transactional replication has a hierarchy: transactions are replicated from the publisher to the subscriber, and you should not update the subscriber. However, you can update subscriptions without data conflicts by using one of the following options:
Immediate updating   With immediate updating subscriptions, the subscriber and publisher are updated in a single distributed transaction using the Microsoft DTC. There is a minimal chance of a conflict with this option, but it requires reliable network connections.
Queued updating   With queued updating subscriptions, you queue the DML operations, which means you can potentially have a conflict because you effectively allow simultaneous modification of the same data. Consequently, you have to configure some conflict resolution; the options are as follows:
Publisher wins (default)
Publisher wins, and subscription gets reinitialized
Subscriber wins
Snapshot replication   With snapshot replication, you send the entire contents of the articles that make up the publication across the network according to the schedule, instead of incremental changes. Snapshot replication is not commonly used, but in certain cases it makes more sense to send a snapshot, let's say nightly, than a stream of all the DML statements that occurred within the database throughout that day.
Peer-to-peer transactional replication   SQL Server 2005 Enterprise Edition offers a new capability called peer-to-peer replication (PTPR). With PTPR, each node acts as a peer, so it can both publish and subscribe to the same schema and data. This is different from "traditional" transactional replication, where you have a strict hierarchy. Figure 9.15 shows how data flows between the nodes in a PTPR topology based on the number of participating nodes.
I do not know anyone who has implemented a solution based on PTPR technology. If you have, I’d love to hear from you!
Replication Frequency
Don't forget that with replication you also need to configure the frequency at which the replication occurs. For all these replication types, you can have either a continuous time frame or a scheduled time frame. Do not make the mistake of thinking that transactional replication has to be continuous. Also, do not make the mistake of configuring snapshot replication to be continuous either! Seriously…one of my customers did!
FIGURE 9.15   Peer-to-peer replication (replicated data flows in both directions between every pair of participating nodes, whether the topology has three nodes, A, B, and C, or four nodes, A, B, C, and D)
Replication Is for You!
For some reason, a lot of people do not think they can use replication within their organization. However, they tend to forget you do not have to replicate the entire database. Any organization that has multiple instances of databases residing on potentially multiple instances of SQL Server can use replication as a means of administering reference data. Consider common reference data such as ZIP codes, employee lists, calendars with public holidays, stock prices, weather details, company holidays, telephone lists, customer lists, monetary exchange rates, and so on. Many times you have the same information residing on multiple servers. So, how do you maintain this reference data? Do you have different DBAs required to update ZIP codes as required? Who in the human resources department maintains the various Employee tables in different systems? Why not use replication? For many clients, I have set up a "central" table that is maintained by a single person within the company (or through some automated process/feed), and then the table (or a subset of it) gets automatically replicated to the various database systems that require it. Just set it, and forget it! It's so much easier (and more reliable) than sending company memos/emails or relying on multiple personnel.
Summary
In this chapter, you looked at the different data integration technologies available in SQL Server 2005. Modern enterprises typically have data stored in all sorts of heterogeneous data systems that need to be integrated with SQL Server–based database solutions. You covered SQL Server Integration Services in detail and saw how it provides an extremely powerful and customizable environment for extracting, transforming, and loading data between various data sources. SQL Server 2005 offers a rich set of utilities and commands that allow you to quickly and efficiently manage data. This chapter covered the best technique to use given certain business and technical requirements. Finally, you looked at how replication provides a powerful and flexible platform for automatically synchronizing data between database instances across the enterprise.
Exam Essentials
Know how to use the SQL Server Agent.   It is important to know what SQL Server Agent is and what it does.
Understand when to use SSIS.   Make sure you understand when to implement a SQL Server Integration Services solution.
Know what makes up SSIS package components.   Make sure you understand what an SSIS package consists of and why you would use the data flow and control flow components.
Know when to use BCP.   Know when to use the BCP utility and some of the basic switches such as -T, -S, -U, -P, -c, -n, and -E.
Understand replication.   Make sure you understand when to implement a replication solution and what replication models exist.
Review Questions
1. You are designing a database solution for a financial institution. You receive a data file daily containing the latest interest rates from a market data provider. You are designing an SSIS package that needs to load this reference data into several different database solutions used by various departments. What transformation should you use?
A. Conditional Split transformation
B. Multicast transformation
C. Merge transformation
D. Lookup transformation
2. Which component of the SSIS designer allows you to control which workflow will execute depending on what event is generated as a result of an executing package, task, or container?
A. Control flows
B. Data flows
C. Event handlers
D. Connection managers
3. Which component of SQL Server 2005 allows you to schedule administrative tasks such as database checks and database backups?
A. The SQL Server Agent
B. SQL Server Analysis Services
C. SQL Server Browser
D. SQL Server Integration Services
4. A developer calls you from home requesting some data from a table in your testing environment. You plan to email her a compressed text file of the data from the table she has requested. What is the best way of generating the text file?
A. Create an SSIS package.
B. Use the INSERT…SELECT statement.
C. Use the BCP utility.
D. Use the DTSRUN utility.
5. Which component within an SSIS package allows you to control the order in which the control flows?
A. Event handlers
B. Containers
C. Tasks
D. Precedence constraints
6. You are designing an SSIS package that needs to load census information collected from the 13 administrative districts in East Timor on a daily basis. The 13 data files will be transferred via FTP nightly using a consistent name. The package will be scheduled to run at 6 a.m. What container should you use to load the 13 data files?
A. The Foreach Loop container
B. The For Loop container
C. The Sequence container
D. The Task Host container
7. What options do you have in Business Intelligence Development Studio for debugging SSIS package control flows? (Choose all that apply.)
A. Breakpoints
B. Data viewers
C. Debug windows
D. Progress reports
E. Row counts
8. You are developing a database solution using SQL Server 2005 Enterprise Edition for a multinational call center. There will be offices in Florence, Sydney, and Cuzco (my favorite cities in the world). Users of the call center system will need to modify their "local data" but see everyone's data for reporting purposes such as performance and statistics. There should be a minimum delay with data being replicated. What replication model should you use?
A. Merge replication
B. Transactional replication
C. Snapshot replication
D. Peer-to-peer transactional replication
9. You are developing a database solution using SQL Server 2005 Standard Edition. You have the same database schema located on two different SQL Server instances on different floors in the same building. The sales department uses one database instance; the forecasting department uses the other. They modify mutually exclusive data sets but want to be able to report on all the data within the database solution. You have determined you will use replication to synchronize the data. What replication model should you use?
A. Merge replication
B. Transactional replication
C. Snapshot replication
D. Peer-to-peer replication
10. What must an SSIS package have as a minimum? (Choose all that apply.)
A. Connections
B. Control flows
C. Data flows
D. Event handlers
E. Log providers
11. What command would you run to export the [Products] table in the [SalesDB] database to a text file called C:\Products.txt using a trusted connection on a server called Katsbox?
A. BCP [SalesDB]..[Products] out C:\Products.txt -S Katsbox -c -T
B. BCP [SalesDB]..[Products] in C:\Products.txt -S Katsbox -n -T
C. BCP [SalesDB]..[Products] out C:\Products.txt -S Katsbox -n -k
D. BCP [SalesDB]..[Products] out C:\Products.txt -S Katsbox -c -k
12. You are developing a database solution that uses SSIS. Your initial analysis has determined that the functionality required does not exist in SSIS. What object model should your developers use?
A. DMO
B. SMO
C. ISOM
D. ISAM
13. In what order of drivers should you try to connect to various database systems including SQL Server, if possible, when writing distributed queries and working with heterogeneous database systems in SSIS?
A. DB-Library
B. ODBC
C. OLE DB
D. SQLCLI
14. Which component of the SSIS designer allows you to acquire data from sources, manipulate it potentially through a series of transformations, and send it to a destination?
A. The control flow component
B. The data flow component
C. The event handler component
D. The connection manager component
15. You receive a text file from the head office based in Sydney, Australia, containing all the new customers from the six states and two territories in Australia (New South Wales, Victoria, Western Australia, Tasmania, Queensland, South Australia, Northern Territory, and Australian Capital Territory). You need to load the new customers within the text file into separate databases depending on the state in which they live. Each state has its own SQL Server instance. You are currently designing the SSIS package. What transformation should you use to load the data into the respective database?
A. The Conditional Split transformation
B. The Multicast transformation
C. The Merge transformation
D. The Lookup transformation
16. You are developing a database solution that needs to access reference data on the fly that is located on another SQL Server instance. This reference data is a small data set and needs to be up-to-date when accessed. You plan to create a view that will reference the appropriate table on the other SQL Server instance. You plan to create additional views based on tables located on the other SQL Server instance. What should you do?
A. Create a linked server using the sp_addlinkedserver and sp_addlinkedsrvlogin system stored procedures.
B. Use the OPENDATASOURCE function.
C. Use the OPENROWSET function.
D. Use the BCP utility to export the data from the SQL Server instance and load it into your database solution.
17. Which component of the SSIS designer determines the order in which containers and tasks are executed?
A. The control flow component
B. The data flow component
C. The event handler component
D. The connection manager component
18. You are developing a new database solution on a SQL Server 2005 instance. You need to populate a reference table called [Calendar] that basically contains all the public holidays and other important dates until 2070. You have determined that this static data with the same schema already exists in another database on the same SQL Server instance. What is the easiest way of populating the [Calendar] table?
A. Create an SSIS package.
B. Use the INSERT…SELECT statement.
C. Use the BCP utility.
D. Use the DTSRUN utility.
19. Which BCP switch should you use to control whether IDENTITY values are maintained or regenerated?
A. -e
B. -E
C. -n
D. -N
20. Your developers want to write a stored procedure that will load a text file into a database table and run some tests to ensure the data is fine before deciding to either commit or roll back the operation. What statements and/or command should you use? (Choose all that apply.)
A. BCP
B. BULK INSERT
C. OPENDATASOURCE
D. OPENROWSET
E. xp_cmdshell
Answers to Review Questions
1. B. The Multicast transformation distributes the same data set to multiple destinations. The Conditional Split transformation routes inputs from a data set to different destination outputs depending on an expression. The Merge Join transformation joins two sorted inputs, and the Lookup transformation performs a lookup.
2. C. Event handlers allow you to control what executes as a result of a runtime event such as OnError, OnProgress, or OnPostExecute occurring.
3. A. The SQL Server Agent executes scheduled tasks such as database checks and database backups.
4. C. The BCP utility is the quickest and easiest tool to use to export a table to a text file. The INSERT statement will not export data to a text file. You do not need to create an SSIS package. The DTSRUN utility was available in SQL Server 2000 to run DTS packages.
5. D. Precedence constraints control the order in which the control flows within an SSIS package.
6. A. The Foreach Loop container repeats a control flow within a package based on some enumeration, such as enumerating a set of files in a folder. The For Loop container is more for testing for a particular condition.
7. A, C, D. Breakpoints, debug windows, and progress reports help you debug SSIS package control flows. Data viewers and row counts help you debug data flows.
8. D. Peer-to-peer transactional replication will allow the call centers to modify their data yet have the external data replicated to them. Transactional and snapshot replication use a hierarchy and therefore are inappropriate. Merge replication is not as easy to set up.
9. A. Merge replication will allow the two departments to modify their data and allow you to synchronize the modifications without needing conflict resolution. Transactional and snapshot replication have a hierarchy and are not appropriate. You cannot use peer-to-peer transactional replication because you are not using Enterprise Edition.
10. A, B, C. An SSIS package must at least have connections, control flow components, and data flow components.
11. A. Option A represents the correct syntax. The -k switch specifies that empty columns retain null values; it does not request a trusted connection.
12. C. The Integration Services Object Model (ISOM) allows you to programmatically customize your SSIS solution. The Distributed Management Objects (DMO) and SQL Server Management Objects (SMO) are designed for the SQL Server engine and administration. The Indexed Sequential Access Method (ISAM) is used to access nonrelational database systems.
13. D, C, B, A. The SQL Server native client will be the best choice, but it allows you to connect only to SQL Server instances; then comes OLE DB followed by the older technology of ODBC. DB-Library should be your last choice because it is the oldest and is deprecated.
14. B. The data flow component of the SSIS designer allows you to create the data path through which the data will be sourced, transformed, and sent.
15. A. The Conditional Split transformation routes inputs from a data set to different destination outputs depending on an expression. The Multicast transformation distributes the same data set to multiple destinations. The Merge transformation joins two sorted inputs, and the Lookup transformation performs a lookup.
16. A. Creating a permanent linked server relationship allows you to create the current and future views required. The OPENDATASOURCE and OPENROWSET functions are more appropriate for infrequent ad hoc access to remote data. You cannot use BCP because the data needs to be up-to-date when accessed.
17. A. The control flow component in the SSIS designer allows you to control the order in which the containers and tasks that make up the SSIS package are executed.
18. B. Using an INSERT statement with a nested SELECT statement allows you to quickly copy data from one database table to another in SQL Server 2005. Creating an SSIS package or using BCP would be a more complex, multistep operation. The DTSRUN utility was available in SQL Server 2000 to run SQL Server 2000 packages.
19. B. The -E switch controls the IDENTITY values. The -e switch is for error logging. The -n switch generates a native BCP file, and the -N switch generates Unicode characters.
20. B. The BULK INSERT statement allows you to easily write DML code that will insert data from a text file and run tests before committing the transaction.
Chapter 10
Case Studies
The 70-441 exam is a case study–based exam. It requires you to read the existing environment, business requirements, and technical requirements and design some facet of the database solution to meet these requirements. The questions at the end of the individual chapters in this book tested concepts and theory within each chapter. These case studies, however, as in the 70-441 exam, will test your combined knowledge across all chapters. As I said in the Introduction, 70-441 is easily the most difficult SQL Server 2005 exam due to its breadth and depth. A design exam by nature is difficult. Trust me! I've sat across a lot of heated discussions in boardrooms all over the world. Furthermore, the 70-441 exam requires knowledge of technology that you might not be familiar with or have no experience with, because there has never been a need to implement it in your environment. However, it does not require you to have in-depth knowledge about all the different SQL Server 2005 technologies. Although if you do, that will certainly help, as will practical experience! It does require you to recognize the appropriate technology to fulfill the business and technical requirements within the context of the existing environment. This chapter is designed to help you recognize these high-level requirements and architect the appropriate database solution. In other words, it is designed to help you pass the exam. All the very best of luck!
Case Study 1: United Nations
As part of the rebuilding of East Timor through a United Nations project, the German Foreign office has sent a team of “specialists” to design and implement a civil registry database solution for East Timor’s one million inhabitants. This civil registry database will be the basis for passports and other state documents such as marriage, birth, and death certificates.
Existing Environment
Data was gathered in an exhaustive process throughout the various districts of East Timor and was burned to CD-Rs that were then shipped to the capital. The data gathered for each individual consists of an XML file, a JPEG photo, and a TIFF image of a document that was printed onsite and that contains all the participant's details with a signature or fingerprint. The JPEG is always less than 8,000 bytes in size. The TIFF file ranges from 690KB to 1MB. These three files are zipped up together in a single file, which has a strict naming convention consisting of a prefix of the district ID where the information was collected and an incrementing number. The database server that runs Microsoft Windows 2003 Advanced Server and Microsoft SQL Server 2005 Enterprise Edition is located in the capital, Dili. All the data from the CD-Rs has been copied to a directory on the server, but no data-loading processes have been designed. All users are running Microsoft Office 2003 Professional.
Business Requirements
The tables keeping track of adults and children will include a date-of-birth field, called [DOB]. Investigations have found that some people were born in the 19th century. These tables have a computed field called [Age], which is based on the (DATEDIFF(year, [DOB], GETDATE())) calculation. The database design must be able to keep track of a husband, who can have multiple wives. The database design does not need to keep track of historical information about marriages. State documents will be printed on paper that has special security features, including a serial number that has been preprinted on the paper. The UN has purchased a COM component that will be used to generate a unique passport number. The organization has no budget to purchase an updated version of this external software module. The system will also be responsible for issuing diplomatic passports where security is critical. Additionally, the Minister for State wants to be notified via email whenever a diplomatic passport is issued. Various government employees will want to have read access to the database to be able to run various standard reports that have simple SARGs. A commonly run aggregate summary report is as follows:
SELECT   COUNT(*)
FROM     [...]
WHERE    District = @District
AND      Gender = @Gender
AND      Age BETWEEN @LowerAge AND @UpperAge
GROUP BY District, Gender, Age
Future plans include buying some graphics-processing and graphics-scrubbing software, which will capture the fingerprints from the TIFF files.
Technical Requirements
The database design must be as simple as possible and use the fewest joins to maximize performance and minimize complexity. Likewise, the overall design must try to be as simple as possible so it's easy to understand and maintain because of operational requirements. The SQL Server solution will use Windows Authentication only. All users will use their own login accounts. Two main types of users exist: those who enter (and correct) data and those who print documents. All the printing of documents and passports must be audited. As much functionality as possible must be contained within the database for security reasons. Any reporting functionality must be performed on a separate server as an additional security measure.
Review Questions
1. How should you implement the printing of state documents? The solution must meet all the business and technical requirements.
A. Implement a Reporting Services solution. Use a parameterized report for the different types of reports. Use the IDENTITY property in a field to generate a serial number.
B. Create a CLR procedure that prints the state documents to a TIFF file. Store the TIFF file in the database with the username and date printed. Use the IDENTITY property in a field to generate a serial number.
C. Create a CLR procedure that prints the document and records the username and date printed in the database. In the application that calls the CLR procedure, require the user to add the serial number.
D. Use the Mail Merge feature of Microsoft Word to print the documents.
2. How do you generate the passport ID within the database? The solution must meet all the business and technical requirements.
A. Use the xp_cmdshell system stored procedure to call the COM component.
B. Use the sp_OA system stored procedures to call the COM component.
C. Implement a CLR function to generate the passport ID.
D. Use DEFAULT to generate the passport ID.
3. What kind of solution should you use for generating the reports for government employees?
A. Implement a Reporting Services solution. Create a number of parameterized reports that can be accessed as required by government employees.
B. Create an application role called [GovernmentEmployees]. Develop an application that activates this application role when run and has a number of predefined reports that can be run.
C. Implement a web service that exposes the report data. Government employees can then write applications that can access this web service.
D. Create a SQL Server Agent job that emails the employees reports at the end of each week.
4. What is the best way to implement the minister's notification?
A. Use the Service Broker.
B. Use event notifications.
C. Use the sp_send_dbmail stored procedure in a DML trigger.
D. Use the sp_send_dbmail stored procedure in a DDL trigger.
5. What technologies do you need to use to load the CD data into the database? (Choose all that apply.)
A. sp_addlinkedserver
B. SSIS
C. BULK INSERT
D. OPENQUERY
E. An unzip utility
F. OPENXML
6. How do you design the tables to store relationships between husbands and wives? The design must meet all the technical and business requirements.
A. Use this: Husband (HusbandId, WifeId, Name, Surname, …) Wife (WifeId, Name, Surname, …)
B. Use this: Husband (HusbandId, Name, Surname, …) Wife (WifeId, Name, Surname, …) Marriage (HusbandId, WifeId)
C. Use this: Husband (HusbandId, Name, Surname, …) Wife (WifeId, HusbandId, Name, Surname, …)
D. Use this: Marriage (HusbandId, HusbandName, HusbandSurname, WifeId, WifeName, WifeSurname, …)
7. How should the JPEG photo of individuals be stored?
A. In the same table as the individual to whom it belongs.
B. In a separate table that has a one-to-one relationship with the person to whom the JPEG belongs.
C. As an external file. A field in the table will indicate the location and name of each JPEG photo.
D. As an external file. Each JPEG photo filename must have a name that matches the person's unique identifier.
8. What data type should you use for the [DOB] field?
A. CHAR(8)
B. DATETIME
C. CLR user-defined data type
D. SMALLDATETIME
9. What index should you create to improve the performance of the aggregate summary query?
A. (District)
B. (Gender)
C. (District, Gender)
D. (District, Gender, Age)
10. How should the TIFF document be stored?
A. In the same table as the person to whom it belongs.
B. In a separate table that has a one-to-one relationship to the person to whom it belongs.
C. As external files. A field in the table will indicate the location and name of each TIFF document.
D. As external files. The filename must have a name that matches a person's unique identifier.
Answers to Review Questions
1. C. A CLR procedure will be the most secure and quickest way of developing a method of printing a state document. The IDENTITY property will not help because the paper has a serial number on it that must be tracked.
2. B. You must use the sp_OA stored procedures to call the purchased COM component.
3. A. A Reporting Services solution will allow government employees to access parameterized reports and run their reports. It is easier to develop than a custom application. Government employees might not necessarily know how to write applications.
4. C. The easiest way to notify the minister that a diplomatic passport is being issued is to write a DML trigger that uses the sp_send_dbmail system stored procedure. It requires less coding than setting up a Service Broker solution. Event notifications and DDL triggers will not do the job.
5. B, E. The data needs to be unzipped. You can easily create an SSIS package to unzip the file and load the data.
6. C. Since a husband can have multiple wives, the HusbandId must exist in the same table as the WifeId.
7. B. The JPEG photos are quite small so can be stored easily within the database. Therefore, you have less chance of "losing" them because you're not relying on filenames or tracking/manipulating them. You should keep them in a separate table to improve performance. In addition, the join operation has no impact because it is a simple lookup of just one row.
8. B. DATETIME is specifically designed to store dates and will be able to store all the inhabitants' birth dates. SMALLDATETIME will not be able to store the birth dates of people born before 1900. CHAR(8) and a CLR UDDT would not be efficient.
9. C. A compound index on District and Gender will give the best improvement in performance. The Age field cannot be indexed because it is not deterministic.
10. B. The TIFF files are best stored externally because they are much larger than the 8KB page limit. It will also be easier for future applications to access them directly for manipulation without relying on SQL Server.
Case Study 2: Energy Trading
You have been hired to help develop a database solution for an electricity trading company. The energy market is divided into five-minute periods, and the consumer demand and supply both need to be stored and analyzed. This data is provided by a national regulator, which makes the data available in a number of formats:
XML files
BCP files
CSV files
XLS files
You can pull this data from the national regulator's site through proprietary technology, but you need to request a file format. Each data set contains more than 250,000 rows. You will use a custom OLE Automation–extended stored procedure to load the data until the national regulator upgrades the process to a CLR component.
Existing Environment
The database solution will be hosted on Windows Server 2003 x64 Enterprise Edition with SQL Server 2005 x64 Enterprise Edition installed. The server has four CPUs and 16GB of RAM.
Business Requirements
This database has two critical tables called [PeriodDemand] and [PeriodSupply], which are responsible for tracking the demand and supply of energy to the market. The two tables are uniquely identified via a period identifier, which has the following format: YYYYMMDDxxx, where xxx is a number ranging from 0–288 to indicate a five-minute period.
This period identifier is also the foreign key to several important child tables that are going to be heavily queried through join operations to the parent tables. Several SARGs are used by queries running against the [PeriodSupply] and [PeriodDemand] tables, but all queries have a time-based SARG. The energy traders want the SQL Server solution to notify them when the energy demand rises beyond a certain level. This threshold will change over the year depending on the season, so the solution must be flexible. They might want to be notified via email or SMS messages to their mobile phones. The energy traders also have a number of critical reports that must return the latest data on the position of the market. Management needs to be notified when the trading desk attempts a trade greater than $10,000,000 because of the requirements of the auditors and the industry regulatory bodies.
Technical Requirements
Because of the real-time nature of the market, performance is critical. A SQL Server Reporting Services solution will provide reports for both the traders and the managers. All the DDL code needs to be encrypted to protect the intellectual property of the algorithms and models being used. The development manager will review all the code before it is executed against the production environment. No developer should be able to modify any object because the development manager will be responsible for change control. The indexing strategy on the [PeriodDemand] and [PeriodSupply] tables seems to be adequate in the testing phase. So the [PeriodDemand] and [PeriodSupply] tables have a number of nonclustered indexes. It is expected that more nonclustered indexes will be created as required. The database will have to hold more than five years' worth of historical information.
Review Questions
1. What format should you request for the data files in order to load the data in the shortest possible time?
A. XML file
B. CSV file
C. BCP file
D. XLS file
2. How can you improve the performance of the join operations between the [PeriodSupply] and [PeriodDemand] tables and their related child tables? (Choose all that apply.)
A. Create a nonclustered index on all the foreign key columns.
B. Create a clustered index on all the foreign key columns.
C. Add a surrogate key to the [PeriodSupply] and [PeriodDemand] tables. Create this surrogate key on the child tables, and add a nonclustered index to it.
D. Add a surrogate key to the [PeriodSupply] and [PeriodDemand] tables. Create this surrogate key on the child tables, and add a clustered index to it.
E. Drop the existing foreign keys.
3. What technique should you use to maximize the performance of the [PeriodDemand] and [PeriodSupply] tables?
A. Create more nonclustered indexes on the tables.
B. Create more database files to spread the disk I/O.
C. Implement partitioned views.
D. Implement partitioned tables.
4. After deploying the database solution to the production server, the proprietary data-loading process does not work. What utility should you run to troubleshoot this problem?
A. The SQL Server Upgrade Advisor
B. The SQL Server Surface Area Configuration tool
C. The Database Engine Tuning Advisor
D. SAC.EXE
5. What components should you use to implement the source control mechanisms according to the business and technical requirements? (Choose all that apply.)
A. Create all database objects using the SCHEMABINDING clause.
B. Create all database objects using the ENCRYPTION clause.
C. Create all database objects using the WITH CHECK clause.
D. Use DDL triggers at the server scope.
E. Use DDL triggers at the database scope.
F. Keep the source code in a source code control system.
6. What kind of indexing strategy should you have on the Period Identifier column of the [PeriodDemand] and [PeriodSupply] tables?
A. Create a clustered index on it.
B. Create a nonclustered index on it.
C. Create a nonclustered index on it that includes all SARGs.
D. Do not index the columns.
7. After going live, the users are complaining about the performance of the Reporting Services solution. How can you improve the performance of the reports?
A. Configure report caching.
B. Implement reporting snapshots.
C. Configure a report execution timeout.
D. Improve the indexing of the underlying tables.
8. How would you implement the alerting system for the energy traders?
A. Define a SQL Server Agent job that runs hourly and emails them if the demand reaches the threshold.
B. Write a DML trigger that runs an extended stored procedure that emails them if the demand reaches the threshold.
C. Create a stored procedure that queries the relevant tables for the threshold. Write an application that periodically runs the stored procedure.
D. Write a Notification Services solution that emails them when the demand reaches the threshold.
9. What data type should you use for the period identifier?
A. INT
B. BIGINT
C. CHAR(11)
D. NCHAR(11)
10. How would you implement the alerting system for management?
A. Define a SQL Server Agent job that runs hourly and emails them if the trade has been attempted.
B. Write a DML INSTEAD OF trigger that emails them if the trade is attempted.
C. Write a DML AFTER trigger that emails them if the trade is attempted.
D. Write an event notification solution that emails them when the trade is attempted.
Answers to Review Questions
1. C. Native BCP files are the smallest and quickest files to load.
2. D, E. A new surrogate key of INT would take up only 4 bytes as opposed to the current 8 bytes, so indexes will be more efficient. A clustered index on the surrogate foreign key will be best because there are lots of joins and because all queries have a time-based SARG.
3. D. Implementing partitioned tables will maximize performance. Partitioned views are being deprecated. Creating more database files or nonclustered indexes would not necessarily maximize performance.
4. B. The SQL Server Surface Area Configuration tool will show whether OLE Automation has been enabled on the SQL Server 2005 instance, which is what the data-loading process relies on. The SAC.EXE utility is used to import and export your surface area settings.
5. B, E, F. Encrypting the source code and maintaining it in a source control system together with DDL triggers at the database scope will prevent developers from modifying the database objects without authorization and will secure the intellectual property.
6. B. A nonclustered index will be the optimal index here. A clustered index would be too big and would negatively impact the nonclustered indexes, including all the columns used by SARGs.
7. D. Indexing should improve the performance of the reports. Report caching and snapshots cannot be used because there is a real-time requirement when reports are generated. You have no indication that a report execution timeout will improve performance.
8. D. The Notification Services architecture will give the energy traders the flexibility they need to customize their notification requirements.
9. B. The BIGINT data type will be sufficient to hold the domain for the period identifier and takes up 8 bytes. The INT data type is not sufficient to hold the required range of values. The CHAR and NCHAR fields will take up more than 8 bytes so would not be as efficient as BIGINT.
10. B. A DML INSTEAD OF trigger will be the quickest mechanism to notify the managers when the trade is attempted because it starts as soon as the transaction hits the transaction log.
Case Study 3: Nigerian Census
The Delegation of the European Commission in Nigeria is planning a census in March 2006. The Nigerian Population and Housing Census data capturing will take three to five months and is estimated to capture 130 million records. The data capturing and processing will take place in seven networked data processing centers (DPCs) that are responsible for their geopolitical zone. After the data capturing and processing, the data needs to be sent to the central server, which is based at the head office in Abuja. The captured data will be in the form of flat files generated by a custom application, a questionnaire, and an XML file. The population database has not yet been developed. There are plans to integrate the population database with a spatial GIS database solution, which is currently being developed on a separate server. Initially, the database will have 100 internal concurrent users utilizing a custom-built .NET application to access static, summarized reports based on the population database. Eventually, it is anticipated that more than 10,000 external users will be accessing the database solution using a web-based solution.
Existing Environment
All servers used for the database solution will be running Windows 2000 Advanced Server and SQL Server 2005 Standard Edition. All servers will belong to the same domain. The DPCs are connected to the central office in Abuja via satellites, which have a bandwidth of 128kbps duplexed. These links are used predominantly for email traffic and occasional Internet access. Data from an earlier census in 1991 is stored on a number of Zip disks. This data is in the CSV format and totals approximately 800MB.
Business Requirements The database solution must include validation checking, including simple range checks for inconsistencies. For example, an individual identified as a “Spouse” cannot have a marital status of “Never Married.” The 1991 census data needs to be loaded into the database solution. Data from the new census needs to be correlated via foreign key constraints with data from the 1991 census. However, two separate sets of users will be accessing the 1991 data and the 2006 data, so the two data sets need to be kept separate. Over the three-to-five-month data collection and processing period, the DPCs will be required to send the data that has been collected and processed at the end of each week. No one is in the offices during the weekends. The Legal department has stipulated that the XML census documents that come with the questionnaire are considered legal documents. The Legal department requires that both the raster images of the scanned questionnaires and the associated XML files are stored for legal reasons. The database will be developed initially in Abuja and then deployed to the seven DPCs after testing and loading the reference data and lookup tables. The DBAs in the central office have sized the initial database file at 5GB and the transaction log at 5GB. The DBAs at the DPCs will have minimal experience, so every effort should be made to simplify components of the database solution. In addition, they might be working in the field a lot, so they will not always be available at the DPCs.
Technical Requirements Because of the low bandwidth between the DPCs and the central office, only emails and “valid” Internet traffic are allowed across the satellite links. A separate server should be used whenever possible to isolate performance-related and security-related issues.
Review Questions
1. How should the collected data be migrated to the Abuja office?
A. Create an SSIS package that runs weekly on the weekends to send newly collected data to the central site.
B. Get the DBAs at the DPCs to use the BCP utility to export the table data at the end of the week and FTP the data across the satellite links.
C. Back up the database at the DPCs. Get the DBAs to copy the backups across the satellite links weekly.
D. Set up a replication solution. Schedule the replication solution to replicate the data weekly on weekends.
2. What technique can you use to improve the performance of the main tables that will hold 130 million records?
A. Create a filegroup with a separate database file.
B. Create more database files to spread the disk I/O.
C. Use partitioned tables.
D. Use indexed views.
3. What data type(s) should you use for storing the census XML documents?
A. Shred the XML document into base types.
B. Use the XML data type.
C. Use the VARCHAR(MAX) data type.
D. Use the CHAR(5000) data type.
4. What technique should you use to deploy the database to the various DPC sites?
A. Use the Copy Database Wizard to copy the database across the satellite links.
B. Detach the database in Abuja. Copy the detached database files across the satellite links. Attach the database files at the DPCs.
C. Script the database and database objects. Use the BCP utility to export the reference data. Zip up the T-SQL scripts and BCP files, and send them via email to the DPCs. Get the DBAs at the DPCs to execute the Transact-SQL scripts and load the BCP data.
D. Back up the database at Abuja. Email the compressed backup file to the DBAs, and get them to restore the database files.
5. How should the 1991 census data be separated from the 2006 data?
A. Use views with 1991_ and 2006_ prefixes to separate the respective data sets.
B. Use a naming convention for the tables that make up the respective data sets.
C. Use schemas to separate the respective data sets.
D. Use separate databases to store the separate data sets.
6. What database object should you use to ensure that a spouse never has a status of “Never Married”?
A. A primary key constraint
B. A foreign key constraint
C. A default constraint
D. A check constraint
E. A default
F. A rule
7. What solution should you choose for the 10,000 users wanting to access the summarized reports?
A. Create views of the summarized reports. Expose these views as web services.
B. Install a Reporting Services solution on a separate server.
C. Implement a Service Broker solution.
D. Install IIS on a separate server. Use the Web Assistant stored procedures to generate HTML pages on the IIS server.
8. Where should the 1991 census data be stored?
A. In the 2006 census database
B. In a separate database on the same server
C. In a separate database on a separate server
D. As CSV files on the same server with links created to them via sp_addlinkedserver
9. What data type should you use to store the GIS-related data?
A. Use a Transact-SQL user-defined data type.
B. Use a CLR user-defined data type.
C. Use the XML data type.
D. Use the FLOAT data type.
10. What is the best way to load the 1991 census data into the database solution?
A. Use the BULK INSERT statement.
B. Create an SSIS package that transfers data from the CSV file to the relevant tables.
C. Create a link to a CSV file using the sp_addlinkedserver system stored procedure. Use an INSERT statement with a nested SELECT statement to insert the data into the relevant tables.
D. Use the BCP OUT command.
Answers to Review Questions
1. D. A replication solution is the best solution because, once set up, it automatically sends the new data every weekend. With an SSIS package you would need to identify the new data yourself, which replication does automatically. Database backups and BCP files would needlessly consume too much network bandwidth. An FTP operation might fail, and no DBAs are working on the weekend to restart it.
2. D. Indexed views materialize the results of expensive aggregations and joins over the large tables, so queries can be answered from a much smaller, precomputed structure. Partitioned tables are not supported on SQL Server 2005 Standard Edition, and extra files or filegroups alone will not address query performance.
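For example, a minimal indexed view over a hypothetical Person table might look like this:
CREATE VIEW dbo.vw_PersonsByState
WITH SCHEMABINDING
AS
SELECT StateCode, COUNT_BIG(*) AS PersonCount
FROM dbo.Person
GROUP BY StateCode;
GO
-- Materializing the view requires a unique clustered index
CREATE UNIQUE CLUSTERED INDEX ix_vw_PersonsByState
ON dbo.vw_PersonsByState (StateCode);
Note that on Standard Edition, queries must reference the view with the NOEXPAND hint for the optimizer to use the materialized results.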
3. C. The VARCHAR(MAX) data type will efficiently store the XML documents and meet the legal requirements because it preserves an exact textual copy of each document. Shredding the XML document or using the XML data type will not retain the documents in their original form, so neither meets the legal requirements. The CHAR(5000) data type is not efficient.
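For example, a hypothetical table for the legal documents might be defined as follows:
CREATE TABLE dbo.Questionnaire
(
    QuestionnaireID INT IDENTITY PRIMARY KEY,
    XmlDocument     VARCHAR(MAX)   NOT NULL, -- exact textual copy of the legal XML file
    ScannedImage    VARBINARY(MAX) NOT NULL  -- raster image of the scanned questionnaire
);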
4. C. Scripting the database and using BCP files will minimize the impact on the satellite links. Database backups and detached database files would be substantially larger sets of files.
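For example, the reference data could be exported at Abuja and reloaded at a DPC with commands similar to the following (the database, table, and server names are assumptions):
bcp CensusDB.dbo.StateLookup out state.dat -n -T -S ABUJASQL01
bcp CensusDB.dbo.StateLookup in state.dat -n -T -S DPCSQL01
The -n switch uses the compact native format, and -T uses a trusted (Windows) connection.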
5. C. Schemas are the perfect way to separate the data sets for the respective sets of users. Naming conventions for views and tables will not be as elegant. You cannot use separate databases because foreign key constraints cannot span databases.
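For example, assuming hypothetical schema and role names:
CREATE SCHEMA Census1991 AUTHORIZATION dbo;
GO
CREATE SCHEMA Census2006 AUTHORIZATION dbo;
GO
-- Each set of users can then be granted access to only its own data set
GRANT SELECT ON SCHEMA::Census1991 TO Census1991Analysts;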
6. D. A check constraint can ensure that the marital status is valid for the corresponding relationship value in the same row. You should not use standalone defaults or rules because they are deprecated, and a default supplies a value rather than validating one.
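For example, assuming hypothetical Relationship and MaritalStatus columns on a Person table:
ALTER TABLE dbo.Person
ADD CONSTRAINT ck_SpouseMaritalStatus
CHECK (NOT (Relationship = 'Spouse' AND MaritalStatus = 'Never Married'));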
7. B. A Reporting Services solution on a separate server will allow 10,000 users to access the summarized reports without impacting the performance of the production server. The Web Assistant stored procedures have been deprecated.
8. A. The 1991 census data should be stored in the 2006 census database. There is no compelling reason to store it in a separate database, and keeping it in the same database means you can use foreign key constraints to maintain referential integrity.
9. B. A CLR user-defined data type is designed to be used for a complex data type such as GIS data.
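The CLR type itself is written in a .NET language and then registered in the database; a sketch of the registration, with hypothetical assembly and type names, follows:
-- CLR integration must first be enabled on the instance
EXEC sp_configure 'clr enabled', 1;
RECONFIGURE;
GO
CREATE ASSEMBLY GisTypes FROM 'C:\Deploy\GisTypes.dll' WITH PERMISSION_SET = SAFE;
GO
CREATE TYPE dbo.GpsPoint EXTERNAL NAME GisTypes.[GpsPoint];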
10. A. The BULK INSERT statement will easily and quickly insert the CSV data. You do not need to create an SSIS package or a linked server because this is a one-off operation. The BCP OUT command exports data rather than importing it.
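For example, assuming a hypothetical staging table and file path:
BULK INSERT dbo.Census1991Staging
FROM 'D:\Import\census1991.csv'
WITH (FIELDTERMINATOR = ',', ROWTERMINATOR = '\n');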
Case Study 4: Shark Conservation
In an effort to understand more about sharks and promote shark conservation, three organizations—based in Adelaide (Australia), Cape Town (South Africa), and San Francisco (United States)—are developing a scientific database. Great white sharks off the coasts of Adelaide, Cape Town, and San Francisco will be tagged with radio transmitters; these transmitters will record the sharks’ movements and gather other scientific data such as the salinity and temperature of the ocean. The data will be collected every five minutes until a day’s worth of data has accumulated, at which point it will be compressed and buffered, and then transmitted when next possible. (The transmitters can store up to two weeks’ worth of data before they start purging the oldest data.) As part of raising awareness, the organizations are planning a publicly available website where the public can see the plotted courses of these sharks. Additionally, recreational scuba divers and conservationists will be able to submit sightings of and encounters with sharks via the website as a means of collating more data about sharks.
Existing Environment Because of the complete lack of government funding and minimal public donations, the three organizations have to develop a database solution that is price sensitive. Luckily, Victor Isakov, a database architect and Microsoft Certified Trainer who happens to be a keen scuba diver, has volunteered to develop the database once the initial database design has been modeled.
The organizations have been able to procure three servers that have Windows 2003 Standard Edition and IIS preinstalled on them. Each server has two CPUs and 4GB of RAM. There is no budget to purchase more hardware. They have yet to purchase SQL Server 2005.
Business Requirements The transmitters send a lot of raw data in any given period. The raw data can be gathered by any one of the sites based in Adelaide, Cape Town, or San Francisco, depending on which sharks they are tracking. This raw data then goes through a complex cleansing process at the site that collected it before being sent to Cape Town. The Great White Shark Institute, based at the Cape Town site, then performs CPU-intensive, complex proprietary calculations on the received data. As soon as these calculations are performed, which can take a long time, the resultant summarized data needs to be sent back to the site that requested the calculations. These complex processes need to be fully automated so they can operate 24 hours a day as data comes in from the shark transmitters. Once the processed data comes back, it gets populated into a set of tables used by scientists. The database solution also needs to populate a set of tables that will be used by the public community on the website. This data needs to be summarized and stored in a column using a complex calculation that will be used for generating graphs and plots on the website. Any site needs to be able to retrieve collected raw data from another site. This process will be on an ad hoc basis as required. A stored procedure called [sp_SharkSighting] has been written. It is responsible for submitting user sightings from the website into the database.
Technical Requirements The scientists need to know whether the complex calculations consume more than 95 percent of CPU capacity or more than 95 percent of tempdb’s log. Retrieving the raw data by one site from the other sites must be via pull technology. All tables are owned by the chief scientist’s database user account, [Karla]. All views and stored procedures will be owned by [Victor]. External users connect to the database solution using a single Windows account that has a corresponding database user account. External users are not allowed to access the tables directly under any circumstances.
Review Questions
1. What edition of SQL Server should the organizations use to meet the technical and business requirements of this solution?
A. Express Edition
B. Workgroup Edition
C. Developer Edition
D. Standard Edition
E. Enterprise Edition
2. What technology should the organizations use to allow a site to be able to get collected raw data from another site? The solution must meet all the business and technical requirements.
A. Create a web service that allows the remote site to connect and exchange data.
B. Use FTP to transfer data between the sites.
C. Create an SSIS package that periodically runs and pushes data to the remote sites.
D. Create a transactional replication model.
E. Create a peer-to-peer replication model.
3. What fixed database role should Victor be made a member of?
A. db_datareader
B. db_ddladmin
C. db_datawriter
D. sa
4. How can the scientists be notified of a CPU performance bottleneck?
A. Through Performance Monitor alerts
B. Through SQL Server Agent alerts
C. Through the SQL Server Profiler
D. Through the Database Engine Tuning Advisor
5. What data type should the organizations use to store the reported sightings by recreational scuba divers and conservationists?
A. TEXT
B. NTEXT
C. VARCHAR(MAX)
D. NVARCHAR(MAX)
E. XML
6. How should the organizations implement the summarized information?
A. Use the SUM function.
B. Use the AVG function.
C. Use a CLR user-defined aggregate.
D. Use the CUBE clause.
7. What technology should the organizations use to send data between the various sites for processing?
A. DDL triggers
B. DML triggers
C. Service Broker
D. SQL Server Integration Services
8. How can the scientists be notified of a tempdb log capacity problem?
A. Through Performance Monitor alerts
B. Through SQL Server Agent alerts
C. Through the SQL Server Profiler
D. Through the Database Engine Tuning Advisor
9. What security options should be configured for the stored procedure to ensure that users can add sightings? (Choose all that apply.)
A. Grant the EXECUTE permission on the stored procedure to the user.
B. Grant INSERT permissions on the table used for sightings.
C. Create the procedure using the EXECUTE AS SELF option.
D. Create the procedure using the EXECUTE AS SELF option.
E. Create the procedure using the EXECUTE AS Karla option.
10. What fixed database role should the organizations use for the public users who access the data?
A. db_datareader
B. db_ddladmin
C. db_datawriter
D. sa
Answers to Review Questions
1. B. The Workgroup Edition is the minimum edition that will meet the requirements. The Express Edition does not run on two CPUs and does not support the SQL Server Agent. The Standard and Enterprise Editions are more expensive.
2. A. A web service allows each remote site to connect and retrieve the collected data on its own schedule, which satisfies the pull-technology requirement.
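A sketch of exposing such a web service as a native XML web services endpoint in SQL Server 2005 follows; the endpoint, site, database, and stored procedure names are assumptions:
CREATE ENDPOINT GetRawDataEndpoint
STATE = STARTED
AS HTTP
(
    PATH = '/rawdata',
    AUTHENTICATION = (INTEGRATED),
    PORTS = (CLEAR),
    SITE = 'CAPETOWNSQL01'
)
FOR SOAP
(
    WEBMETHOD 'GetRawData' (NAME = 'SharkDB.dbo.up_GetRawData'),
    BATCHES = DISABLED,
    WSDL = DEFAULT,
    DATABASE = 'SharkDB'
);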
3. B. The db_ddladmin fixed database role will allow Victor to create, alter, and drop database objects.
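For example:
EXEC sp_addrolemember 'db_ddladmin', 'Victor';
This adds Victor’s database user account to the db_ddladmin fixed database role.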
4. A. Performance Monitor alerts allow you to raise an alert when CPU utilization exceeds a threshold. SQL Server Agent alerts cannot be used here because they monitor only SQL Server performance object counters, not operating system counters such as overall CPU utilization.
5. D. NVARCHAR(MAX) will enable people from all over the world to submit sightings in their native language. The TEXT and NTEXT data types are being deprecated. XML is too difficult to work with.
6. C. A CLR user-defined aggregate can be used to develop custom aggregations beyond the simple T-SQL aggregates such as SUM and AVG.
7. C. Service Broker is designed for building loosely coupled, asynchronous applications, which suits the fully automated, 24-hour processing requirement.
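A minimal sketch of the Service Broker objects involved follows; all of the names are hypothetical:
CREATE MESSAGE TYPE [//Sharks/RawData] VALIDATION = WELL_FORMED_XML;
GO
CREATE CONTRACT [//Sharks/ProcessingContract] ([//Sharks/RawData] SENT BY INITIATOR);
GO
CREATE QUEUE dbo.ProcessingQueue;
GO
CREATE SERVICE [//Sharks/ProcessingService]
ON QUEUE dbo.ProcessingQueue ([//Sharks/ProcessingContract]);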
8. B. SQL Server Agent alerts allow you to raise an alert when a SQL Server performance object counter, such as the Percent Log Used counter for tempdb, exceeds a threshold.
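For example, a SQL Server Agent alert on the Percent Log Used counter could be defined as follows; the alert name and the Scientists operator are assumptions, and the operator must already have been created with sp_add_operator:
EXEC msdb.dbo.sp_add_alert
    @name = 'tempdb log above 95 percent',
    @performance_condition = 'SQLServer:Databases|Percent Log Used|tempdb|>|95';
GO
EXEC msdb.dbo.sp_add_notification
    @alert_name = 'tempdb log above 95 percent',
    @operator_name = 'Scientists',
    @notification_method = 1; -- 1 = email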
9. A, E. The users need the EXECUTE permission on the stored procedure, and the procedure needs to execute under Karla’s security context to overcome the broken ownership chain, because the tables are owned by Karla while the procedure is owned by Victor.
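A minimal sketch follows; the Sightings table, the @Description parameter, and the WebUser database user are assumptions beyond what the case study states:
CREATE PROCEDURE dbo.sp_SharkSighting
    @Description NVARCHAR(MAX)
WITH EXECUTE AS 'Karla'  -- runs under Karla's security context
AS
INSERT INTO dbo.Sightings (Description) VALUES (@Description);
GO
GRANT EXECUTE ON dbo.sp_SharkSighting TO WebUser;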
10. A. Public users of the database solution should only be able to read the data, so the db_datareader fixed database role is appropriate.
Glossary
A
ACID Stands for Atomicity, Consistency, Isolation, and Durability. These properties are considered compulsory in relational database theory to ensure the RDBMS executes transactions correctly.
ACL Stands for Access Control List. This is a list maintained by the Windows operating system that you can use to control what permissions users and groups have to a resource. Activity Monitor A component of the SQL Server Management Studio that shows what processes are currently connected to the SQL Server instance, what objects they are accessing, what locks have been acquired, and related information. AD Stands for Active Directory. This is an information store used by a Windows 2003
network. It is used by administrators to manage security accounts and control security. Ad hoc report A report in SQL Server Reporting Services that is generated by users through
the Report Builder application. After trigger A SQL Server trigger that fires after the DML operation. Aggregate function A function that performs a calculation at the column level on a set of rows to return a single value. Examples of T-SQL aggregate functions include AVG, MIN, and SUM. Agile An adaptive, iterative method or framework for developing software. Alert A user-defined response to a SQL Server event. Application role A SQL Server role used by the application, instead of the user, to authenticate against a database solution. Article A component of a publication used in replication. This represents a table or a subset
of data from the table you want to replicate. Assembly A managed application module, comprising class metadata and managed code, that you can embed in a database solution as a database object in SQL Server 2005. Asymmetric key An encryption key that is used to encrypt and decrypt data when there is no shared encryption key (secret). It is implemented through a private key and public key system where the keys are related mathematically. The private key is kept secret, while the public key may be widely distributed. One key “locks” the data, and the other is required to unlock it. Attribute See field. Authentication A challenge/response mechanism that ensures that a user connecting to SQL
Server is authorized to do so. Authorization A process that verifies the set of permissions granted to a user and consequently what they can do.
B
Baseline A set of metrics gathered during a performance analysis process that forms the basis of a performance-tuning methodology. BCP Stands for Bulk Copy Program. BCP is a command utility you can use to import and export data from a view or table. Type BCP -? at the command prompt to get a list of supported switches. Benchmark A standard test or set of metrics gathered through a performance analysis
process used to compare the performance of similar systems. Bottleneck A particular subsystem that is slowing down the potential throughput of the
entire system. B-tree Stands for Balanced Tree. This is an abstract data type used to store indexes in
SQL Server.
C
Cascade delete/cascading delete A delete operation of a row in a parent table that automatically deletes all the related rows in the child table(s). Cascade update/cascading update An update operation to a primary key value in the parent table that automatically updates all the related foreign key values in the child table(s). Certificate An asymmetric key that has an expiration date and provides the ability to authenticate its holder. Certificates are used in SQL Server 2005 to secure logins or other database objects. Check constraint A constraint that enforces domain integrity. Clustered index A SQL Server index where the sorted order is the physical order. A table
can have only a single clustered index. Computed column A virtual column defined at the table level through a T-SQL expression. Constraint A SQL Server object used to enforce data integrity at the column level. Contract A component of a Service Broker that defines what messages types can be sent in
a dialog and which dialog endpoint is used. Control flow A component of a SQL Server Integration Services package that controls the
flow of tasks within the package. Conversation A SQL Server Service Broker component that exchanges messages in a reliable ordered manner.
Conversation group A SQL Server Service Broker component that facilitates the grouping
of conversations. CRUD Stands for Create, Read, Update, and Delete. Refers to the major functions that need to be implemented in a database solution.
D
Data flow A component of a SQL Server Integration Services package that controls the flow of data within the package. Data source An information store that can be connected to by various SQL Server technologies such as SQL Server Reporting Services for retrieving data. Data type A database object that allows you to control the type of data that will be stored
in a table’s column. Database Engine Tuning Advisor A SQL Server 2005 utility that recommends an indexing
and partitioning strategy to improve performance. Database Mail A component of SQL Server 2005 that can be used to send email messages. Database role A database object used to group principals. Database snapshot A feature of SQL Server 2005 that enables you to indefinitely store the state of the database at a particular point in time. DBCC Stands for DataBase Console Command. This is a set of commands used by DBAs to
perform administrative tasks. dbo Stands for database owner. This is a special predefined database user account that exists in every database. dbo is also the name of a built-in schema in SQL Server 2005. DCL Stands for Data Control Language. This is the ANSI SQL-92 category of SQL statements that allow you to define access control against database objects. DDL Stands for Data Definition Language. This is the ANSI SQL-92 category of SQL statements that allow you to define database objects. DDL trigger A SQL Server 2005 database object that automatically executes whenever a DDL operation is attempted at either the server level or the database level. Default constraint A database object (data integrity mechanism) that automatically inserts a default value into a column if no value has been provided by the INSERT T-SQL statement. Denormalization The formal process of introducing redundancy back into the database
design to improve performance. Density The degree of duplicate values for the data contained in a table column.
DENY T-SQL statement that explicitly denies/disallows an object or statement permission. Deterministic Whether a function always returns the same output given the same inputs. Dialog A SQL Server Service Broker component that represents a conversation between
two endpoints. Distribution A system database in SQL Server 2005 used by the replication technology. Distributor A component of SQL Server Notification Services that is responsible for formatting and sending notifications to subscribers. Also, a role played by a SQL Server instance in a replication topology. The distributor is responsible for monitoring databases marked for replication and logging all data modifications to the distribution database. DML Stands for Data Manipulation Language. This is the ANSI SQL-92 category of SQL statements that allow you to manipulate data in database entities. DML trigger A T-SQL trigger that fires whenever a DML operation is performed on a table. Domain integrity A relational database integrity mechanism that enforces the validity of
data at the column level. Drill-through The ability in a SQL Server Reporting Services solution to go “through” the
summarized data in the report to the detailed source data used to generate it. DTC Stands for Distributed Transaction Coordinator. The DTC is an operating system
component that is responsible for coordinating distributed transactions using the two-phase commit algorithm. A number of SQL Server technologies, such as transaction replication, take advantage of the DTC. Dynamic management view A SQL Server 2005 database object that exposes various
internal memory structures, database engine components, and database components in a relational format.
E
Endpoint A termination point of a network connection used by SQL Server technologies such as SQL Server Service Broker. Entity An object in a logical model that is used to store information about a “thing of interest.” Entity integrity A relational database integrity mechanism that ensures that duplicate rows do not exist in a table. Event Something of interest that has happened in the context of a SQL Server Notification Services solution of which you want to be notified.
Event provider A component of SQL Server Notification Services that is responsible for collecting event data and submitting it to SQL Server Notification Services. Execution context The security permission set within which a SQL Server module
is executed. Extended stored procedure A stored procedure in SQL Server that provides an interface to
external COM components (typically DLL files). Extent A collection of eight contiguous pages used internally by the storage engine in SQL
Server 2005 to track the allocation of file space to tables and indexes.
F
Field A column of a table typically used to store an indivisible element of an entity. Filegroup A logical means of controlling the placement of database objects on a file or set of files. Fixed database role A predefined role at the database level that is used to control access to
the database. Fixed server role A predefined role at the server level that is used to control administrative
access to the SQL Server instance. Foreign key Used to maintain relationships between entities. This is implemented as a field in a child table that has a corresponding primary key that it references. Foreign key constraint A database object (data integrity mechanism) that maintains refer-
ential integrity.
G
Generator A component of SQL Server Notification Services that is responsible for generating notifications when it matches an event to a subscription. GRANT T-SQL statement that grants/allows an object or statement permission. Guest A predefined special database user who has minimal privileges.
H
Heap A table in SQL Server 2005 that has no clustered index and whose rows are consequently stored in no particular order. HTTP Stands for HyperText Transfer Protocol. A protocol used by clients and servers on the Internet to exchange data.
I
Impersonation The ability of a principal to assume the security context of another principal. Index A SQL Server database object that is used primarily to speed up data access. This is also used by SQL Server as a means of enforcing uniqueness. Index fragmentation A process that degrades performance because of data not being contiguous as a result of data being modified. Indexed view A view that physically exists through a number of underlying indexes that
have been created on it. Instead of trigger A SQL Server trigger that fires before the DML operation.
K
Kerberos The Internet-based authentication protocol used by Windows 2000 and
Windows 2003 to authenticate users in AD.
L
Linked report A report in SQL Server Reporting Services that takes its definition through a
link to another report. Locking A mechanism used by a concurrent system to prevent data anomalies by isolating transactions from each other. Log File See Transaction log file.
M
master System database in SQL Server 2005 used as the system catalog. Materialized view A view that physically exists through a number of underlying indexes that have been created on it. Matrix layout A component of a report used in SQL Server Reporting Services to display
data using a cross-tab format. Merge replication A replication type that relies on DML operations being captured from
both the published database and the subscriber database(s) and automatically synchronized. model A system database in SQL Server 2005 used as a template for creating new databases.
msdb A system database in SQL Server primarily used to provide support for the SQL Server Agent.
N
Namespace An abstract container providing context for the items it holds through unique names. Nonclustered index A SQL Server index that is separate from the table. Normalization A formal process of removing redundancy from a database design by separating it into child tables from the parent table. Notification A personalized, timely message sent in a SQL Server Notification Services solution to a subscriber. n-tier architecture A client-server architecture with a number of software layers that are
responsible for a particular function in a software solution.
O
Object permission A permission on a database object that controls how the object can be accessed. One-to-many relationship Used in a relational database to denote that a single row in the parent table can be related to one or more rows in the related child table, but a row in the child table can be related only to a single row in the referenced parent table. One-to-one relationship Used in a relational database to denote that a single row in the
parent table can be related to only one row in the related child table and that a row in the child table can be related only to a single row in the referenced parent table. Ownership chaining Ownership chaining is where you have the same object owner across all the dependency chains between the objects.
P
Page Basic unit of I/O within SQL Server 2005. A page is 8KB (8,192 bytes) in size and is
used to store data rows. Parameterized report A report in SQL Server Reporting Services that accepts input
parameters. Partition function A database object that determines how data within a partitioned table
will be split. Partition scheme A database object that maps the partitions of a partition function to a set
of filegroups.
Partitioned table A table that has been separated into a number of logical horizontal partitions (which are mapped to different physical files) through a partition scheme. Pass-through query
A query that is passed through uninterrupted to an external
database engine. Performance counters Windows operating system components used to monitor the performance of a particular operating system subsystem or application. Performance Logs and Alerts tool A Windows operating system tool that can be used to capture performance information into logs and generate alerts when threshold criteria are met. Primary data file The first file used by SQL Server’s storage engine to store the database.
A primary data file stores the system tables and data. PRIMARY filegroup The default filegroup within a SQL Server database. Primary key A column (or set of columns) that does not allow NULL values and uniquely identifies the rows in a table. Primary key constraint A database object (data integrity mechanism) that maintains
entity integrity. Principal An entity, such as a user or group, that can request SQL Server resources. Publication A collection of articles that acts as a basis for subscription in replication. Publisher A role played by a SQL Server instance in a replication topology. The publisher contains a database that is available for replication.
Q
Queue A buffer used by SQL Server Service Broker to store messages temporarily. Queue reader A SQL Server Service Broker component that receives messages from a queue.
R
RDL Stands for Report Definition Language. This is a programming language used to describe a report in SQL Server Reporting Services. RDBMS Stands for Relational Database Management System. Referential integrity Relational database integrity that dictates that all foreign key values in a child table must have a corresponding matching primary key value in the parent table. Relationship An association between entities in a database design, typically implemented through a primary key in the parent table and a matching foreign key in the child table.
Replication A SQL Server technology that can automatically send data modifications made to a database to other databases located on separate SQL Server instances or other heterogeneous database engines. Replication topology The different roles played by various SQL Server instances in a replication solution. Report Builder A SQL Server 2005 utility used in SQL Server Reporting Services to build ad
hoc reports. Report model A semantic description of business entities and their relationships in a SQL
Server Reporting Services solution. This is used to create ad hoc reports through the Report Builder application. Report snapshot A SQL Server Reporting Services report that contains data that was queried at a particular point in time and has been stored on the report server. ReportServer A system database in SQL Server used to provide support for SQL Server Reporting Services. ReportServerTempDB A system database used by SQL Server Reporting Services as a temporary workspace for reports. REVERT A T-SQL statement that returns the execution context of a T-SQL batch to the previous calling principal. REVOKE A T-SQL statement that revokes/removes a granted object or statement permission. Role A SQL Server security account that is a collection of other security accounts that can be treated as a single unit when managing permissions. A role can contain SQL Server logins, other roles, and Windows logins or groups. Rollback To reverse or undo changes made so far.
S
Scalar An atomic quantity, such as a field or variable, that can hold only one value at a time. Schema A collection of database objects forming a single namespace. Scope An enclosing context. Secondary data file The second or subsequent file used by SQL Server’s storage engine to
store the database. A secondary data file stores only data. Securable A SQL Server resource that can be accessed by a principal and thus have security permissions assigned to it. Selectivity The degree of unique values for the data contained in a table column.
Semistructured data Data that has flexible metadata, such as XML. Service Broker A component of SQL Server 2005 that facilitates asynchronous messaging.
You can use the Service Broker to build SOA-based database solutions. Snapshot replication A replication type that relies on a snapshot of the entire article (table) to be automatically sent from a published database to the subscriber database(s). SOA Stands for Service-Oriented Architecture. This is an application architecture that uses
a loosely coupled, or asynchronous, paradigm. SOAP Stands for Simple Object Access Protocol. This is a protocol used to exchange XML-
based messages over the network, usually using HTTP. Sort order The rules SQL Server uses to return a record set in an ordered fashion. SQL injection A security vulnerability occurring at the database application level that allows
unintended T-SQL code to be executed against a SQL Server instance. SQL Mail A SQL Server 2000 component used to send email messages. This is included in SQL Server 2005 for backward compatibility only. (See Database Mail.) SQL Profiler A SQL Server 2005 utility used to capture network traffic/trace activity
between client applications and a SQL Server instance. SQL Server Agent A component of SQL Server 2005 that runs as a Windows service and is
responsible for running scheduled tasks, notifying operators of events, and generating alerts based on SQL Server Performance object counters. SQL Server authentication An authentication process that relies on SQL Server as the
repository of credentials in the form of a username and password. SQL trace A set of SQL Server stored procedures that allows you to create a trace of SQL Server activity without relying on SQL Server Profiler. SQLXML Designed for SQL Server 2000, allowing databases to be viewed via XPath as XML documents. This was deprecated with MDAC 2.6. SSIS Stands for SQL Server Integration Services. This is a component of SQL Server 2005
designed to extract, transform, and load data. Statement permission A permission on a T-SQL statement that controls who can execute it. Stored procedure An executable code module in SQL Server. Structured data Data that has a strict metadata defined, such as a SQL Server table’s column. Subreport A component of a report used in SQL Server Reporting Services to display a
report within a report. Subscriber A client who has requested a notification to be sent to them in a SQL Server Notification Services solution when a particular event of interest occurs.
Also, a role played by a SQL Server instance in a replication topology. The subscriber will receive the replicated data from a publisher. Subscription An expressed interest in a specific type of event. SVF Stands for Scalar-Valued Function. This is a user-defined function that returns a single value. Symmetric key An encryption key (shared secret) that is shared between multiple parties to encrypt and decrypt data. Synonym An alternative name for a schema-scoped object. System Monitor A Windows operating system utility used to monitor performance
object counters.
T
Table layout A component of a report used in SQL Server Reporting Services to display data
using a tabular format. Task Manager A Windows operating system tool that shows what processes are running;
the amount of memory, CPU, and networking resources used; and other performance-related metrics. TCP/IP Stands for Transmission Control Protocol/Internet Protocol. This is an Internet-based network protocol that is used for communicating between network nodes. TDD Stands for Test-Driven Development. This is a software development process that involves first writing a test case and then implementing only the code necessary to pass the test. TDS Stands for Tabular Data Stream. This is a low-level protocol used between a client application and a SQL Server instance. tempdb System database used by SQL Server 2005 as a global temporary workspace. Trace A collection of events and related performance data returned by SQL Server’s database engine.
wholly committed or rolled back. (See ACID.) Transaction log A write-ahead log used by the SQL Server storage engine for recording
transactions made to the database. Transaction log files One or more files used by SQL Server’s storage engine to store the database’s transaction log.
Transactional replication A replication type that relies on DML operations being captured from a published database and automatically sent to the subscriber database(s). T-SQL Stands for Transact-SQL. This is a variation of the SQL language used by Microsoft
SQL Server. TVF Stands for Table-Valued Function. This is a user-defined function that returns a table.
U
Unique constraint A database object (data integrity mechanism) that ensures that values within a column or combination of columns are unique. Uniqueifier column An internally used 4-byte column that SQL Server’s database engine automatically adds to a row to make each index key unique. Unit test A procedure used to validate that a particular module is working properly, typically in regression testing. Unstructured data Data that has no metadata, such as text files. User-defined database role A database role created by the DBA. User-defined function A custom function written by developers in SQL Server.
V
Victor Isakov SQL Server database architect and Microsoft Certified Trainer. He specializes
in architecting new database solutions, analyzing existing database solutions, performance tuning SQL Server solutions, and training SQL Server courses. You can contact him for consulting via
[email protected]. View A database object in SQL Server used to encapsulate a query. This is commonly referred to as a virtual table. VLDB Stands for Very Large DataBase. This is an industry term describing databases that are typically so large that you need to take special considerations into account, such as in the case of designing a DRP strategy.
W
Web method A database object used to expose stored procedures or user-defined functions
through an HTTP SOAP endpoint for web access.
Web service A software component based on Internet standards designed to support interoperable machine-to-machine interaction over a network. Windows Authentication An authentication process that relies on the Windows operating system or AD as the repository of the credentials in the form of a Windows user or group account. WSDL Stands for Web Services Description Language. This is an XML-based document that describes the interface of a web service so external processes are able to communicate with and interact with the web service.
Index
Note to the reader: Throughout this index boldfaced page numbers indicate primary discussions of a topic. Italicized page numbers indicate illustrations.
Symbols and Numbers
##MS_SQLResourceSigningCertificate##, 187 @@CONNECTIONS function, 154 @@PACK_SENT function, 154 @@TOTAL_READ function, 154 1NF (first normal form), 4, 4 2NF (second normal form), 5, 5 3NF (third normal form), 3, 5, 6
A ActiveX script, SQL Server Agent for executing, 105 ActiveX Script Task, 505 Activity Monitor, 150, 151 ad hoc distributed queries, 538–541 ad hoc reports, for Report Builder, 419 Add New Item dialog box, for report, 439, 439 Add Web Reference dialog box, 319 ADF file. See application definition file (ADF) for Notification Services application ADO Connection Manager, 488 ADO.NET Connection Manager, 488 AdventureWorks sample database, 309 AFTER DML triggers, 85–90 agents for replication, 543 aggregate data deriving to improve performance, 7 indexed views for, 67 aggregate functions, CLR user-defined, 79–80 Aggregate transformation, 499
alerts, from SQL Server Agent, 104, 113, 113–117 alias types, 15 ALTER DATABASE statement to enable Service Broker, 357–358 SET DB_CHAINING ON, 236 ALTER ENDPOINT statement, 311 for native XML web services, 328 ALTER permission, 229 ALTER statements, DLL triggers and, 90 ALTER TABLE statement, 22–23 for creating foreign key constraint, 29–30 for creating primary key or unique constraint, 18, 27 ALTER USER statement, to assign default schema, 240 American National Standards Institute (ANSI), 13 Analysis Services Connection Manager, 489 Analysis Services event provider, 381–382 Analysis Services tasks in SSIS, 494 Apache web server, 306 application definition file (ADF) for Notification Services application, 365–373 Analysis Services event provider, 381–382 database-specific information, 373–374 event distributor configuration, 383 event generator, 383 file system watcher, 379–380 notification class configuration, 377–378 SQL Server event provider, 380–381 subscription class information, 375–376 application roles, 185, 219–221
applications administration requirements, 221–224 goals for developers, 266–267 security, 192–193 article in replication, 542 ASMX web services, 306 vs. native XML web services, 336–337 ASP.NET ASMX web services, 306, 336 assemblies, 77 attributes, 9–18 data types and sizes, 11–17 domain integrity, 18 in first normal form, 4 in second normal form, 5 types of data, 10 Audit transformation, 500 auditing database design to enable, 193–208 of data access, 194 of data changes, 196 of object changes, 196–208 triggers for, 83 authentication, 185 in native XML web services, 334 authority, 185 authorization, in native XML web services, 332–333
B B-trees, 54 BackgroundColor element for report, 452 BackgroundGradientEndColor element for report, 452 BackgroundGradientType element for report, 452 BackgroundImage element for report, 452 backup and restore, for database deployment, 284–285 baseline, 135–136, 165
Basic authentication, 185, 334 batch file, SQL Server Agent for, 105 BATCHES argument, in CREATE ENDPOINT statement, 327 BCP utility, 528–531 BEGIN DIALOG CONVERSATION statement, 355 benchmarks, 136 best practices in native XML web services, 337–338 security, 335 BIGINT data type, 11 BINARY data type, 12 BIT data type, 11, 13 BLOBs (Binary Large Objects), 6 and web services, 338 body of report, 445 bookmark links in reports, 451 BorderColor element for report, 452 BorderStyle element for report, 452 BorderWidth element for report, 452 bottlenecks, 133, 166 CPU as, 169 breakpoints, 506 BULK INSERT statement, 532–534 bulkadmin fixed server role, 222 business intelligence (BI) platform, 337 business intelligence (BI) projects, 480 business rules, triggers for enforcing, 83
C C2-level auditing, 194 cache procedure, 81 for reports, 458, 460, 460 Calendar element for report, 452 calendar scheduling application, Notification Services for, 392 candidate keys, 25 CanGrow property, of TextBox control, 454 CanShrink property, of TextBox control, 454
CASCADE clause, 232 for DENY statement, 231 cascade update and delete operations, 28 certificates creating, 249 need for, 186–188 CHAR data type, 12 Character Map transformation, 498 chart layout for report, 446 check constraints, 18 Check Database Integrity Task dialog box, 483, 483, 484, 484 vs. DBCC CHECKDB command, 487 CHECK_EXPIRATION option, for CREATE LOGIN statement, 213 CIM (Common Information Model) initiative, 96 ClickOnce installer, 455, 455 client applications and performance, 138 Visual Studio for building, 316–323 CLR. See common language runtime (CLR) clustered indexes, 57–58 code macros, 349 CodeSmith, 62, 233 Collect Model Statistics window (Report Model Wizard), 427 Color element for report, 453 columns in database table, creating, 21–22 Common Information Model (CIM) initiative, 96 common language runtime (CLR) dynamic management views (DMVs), 142–143 execution context strategy, 248–249 stored procedures, 80 user-defined aggregates, 79–80 user-defined data types, 17 user-defined functions, 77–79 Common Object Resource Broker Architecture (CORBA), 302, 349 compilations of query plans, and CPU problems, 170–171 composite indexes, 59 compound indexes, 59 compound key, 26
COMPRESSION argument, in CREATE ENDPOINT statement, 327 computed fields, persisting, 23–24 condition actions for notifications, 363 conditional formatting in report, 454 Conditional Split transformation, 499 Connection Manager, 508 Connection Properties dialog box, for report, 434, 434, 440, 440 @@CONNECTIONS function, 154 container for control flow, 490, 491 contract, in Service Broker, 202 control flow components in SSIS, 490–495, 491 Control Flow designer pane, 507 CONTROL permission, 229 conversation architecture in Service Broker, 351–357, 352 conversation groups, 352, 356–357 defining messages, 352–354 dialogs, 354, 354–356 Copy Column transformation, 498 Copy Database Wizard, 275–282, 277–282 CORBA (Common Object Resource Broker Architecture), 302, 349 costs of licensing, reducing, 324 covering index, 60–61 CPU (central processing unit), troubleshooting performance problems with, 169–171 CREATE AGGREGATE statement, 79–80 CREATE APPLICATION ROLE statement, 220 CREATE ASSEMBLY statement, 77–78 CREATE CERTIFICATE statement, 249 CREATE CONTRACT statement, 353–354, 358 CREATE DATABASE statement, for attaching database, 283–284
CREATE ENDPOINT statement, 311 for native XML web services, 325–328 for SOAP endpoint, 312–313 CREATE EVENT NOTIFICATION statement, 202 CREATE FUNCTION statement for CLR user-defined function, 78 for inline table-valued functions, 75–76 for multistatement table-valued functions, 76–77 for scalar function, 73–74 CREATE LOGIN statement, 212–213 CREATE MESSAGE TYPE statement, 353, 358 CREATE NONCLUSTERED INDEX statement, 59 CREATE PARTITION FUNCTION statement, 49 CREATE PARTITION SCHEME statement, 49 CREATE QUEUE statement, 358 create, read, update, and delete (CRUD) procedures, 265 CREATE ROLE statement, 219 CREATE SCHEMA statement, 239 CREATE SERVICE statement, 358 CREATE statements, DLL triggers and, 90 CREATE TABLE statement, 21 DLL triggers to prevent, 93–95 with partition scheme, 49–50 CREATE TRIGGER statement, 31, 224 for DDL trigger, 91, 196 for DML trigger, 85 CREATE TYPE statement, 16 CREATE USER statement, 216 to assign default schema, 240 CREATE VIEW statement, 64–65 credentials, for report rendering, 441–442 cross-database ownership chain, 236 cross-tab reports, 446 CRUD (create, read, update, and delete) procedures, 265
custom error messages, 115 custom tasks in SSIS, 505
D DAC (dedicated administrator connection), 157–158 data access layer, 233. See also stored procedures; user-defined functions basics, 61–63 benefits, 62, 234 views, 63–66 indexed, 67–71 data-centric applications, XML web services for, 338 Data Connections window, in Visual Studio, 439 Data Conversion transformation, 498 data-driven subscription, 466 data flow components in SSIS, 495–501, 496 data flow destinations, 500–501 data flow sources, 497 data flow transformations, 497–500 business intelligence transformations, 497–498 row transformations, 498 rowset transformations, 498–499 split and join transformations, 499 data flow tasks in SSIS, 492, 492 data integrity, 2 triggers for enforcing, 83 Data Manipulation Language (DML), 527. See also DML (Data Manipulation Language) triggers index impact on operations, 55 Data mining model training data flow destination, 500 Data Mining Query transformation, 498 data preparation tasks in SSIS, 492 data regions, in report, 445–446 Data Removal setting, in Notification Services, 384 data-rendering extensions, 451
595
data row storage by SQL Server, 19–20 Data Source dialog box, 440, 441–442 Credentials tab, 441 General tab, 440 data source, for report model, 420 Data Source View Wizard, 423–425, 423–425 data source views, 423 Data Source Wizard, 420–422, 421 Data Transformation Services (DTS), 478 data types, 9, 11–17 CLR user-defined, 17 T-SQL user-defined, 15–16 data viewers in SSIS, 506 database dynamic management views (DMVs), 143–144 sample databases, 309 security subsystem for, 215 database access requirements, 214–228 application administration requirements, 221–224 application roles definition, 219–221 database roles definition, 217–219 database users definition, 215–216 users and groups requiring read-only access to data, 224–227 users and groups responsible for modifying data, 227–228 database applications and performance, 138 sending email from. See Database Mail Database Consistency Checker, 151 Database Console Commands. See DBCC ... database deployment plan, 272–287 with backup and restore, 284–285 with Copy Database Wizard, 275–282, 277–282 detaching and attaching databases, 283–284 with scripts, 285–287
database design to enable auditing, 193–208 of data access, 194 of data changes, 196 of object changes, 196–208 as interactive process, 9 database diagram, generating, 8–9 Database Engine Tuning Advisor, 155–156, 156, 175 database-level context, 242 database-level permissions, 333 Database Mail, 348, 393–407 architecture, 395, 395–407 configuring, 396–401 managing, 401–403 migrating from SQL Mail to, 405–406 using, 403–405 Database Mail Configuration Wizard, 398–401 database object permissions, 228–232 CASCADE clause, 232 DENY statement, 230–231 GRANT OPTION for REVOKE, 232 GRANT statement, 230 REVOKE statement, 231–232 database objects for maintaining security, 233–236 object ownership chaining, 234–236, 235 database owner (dbo), 185 database permissions, 223 database roles, 217–219 database rule objects, deprecation, 18 database scope, and DDL triggers, 91 database snapshots, and detaching database, 283 database user accounts, 215–216 DataReader data flow destination, 500 DataSet populating for report, 442 for SSRS report, 445 DataSet results window, 447 DATETIME data type, 11, 13 function to strip time component from, 74 db_accessadmin fixed database role, 217 db_backupoperator fixed database role, 217
DBCC CHECKDB command, 152 vs. Check Database Integrity Task dialog box, 487 DBCC CLEANTABLE command, 153 DBCC DROPCLEANBUFFERS command, 153 DBCC FREEPROCCACHE command, 153 DBCC FREESYSTEMCACHE command, 153 DBCC HELP command, 153 DBCC INDEXDEFRAG command, 153 DBCC LOGINFO command, 152 DBCC MEMORYSTATUS command, 168 DBCC OPENTRAN command, 152 DBCC PROCCACHE command, 152 DBCC SHOWCONTIG command, 152 output, 153 DBCC SHOW_STATISTICS command, 152 DBCC TRACEOFF command, 152, 155 DBCC TRACEON command, 152, 155 DBCC TRACESTATUS command, 155 DBCC UPDATEUSAGE command, 153 DBCC USEROPTIONS command, 153 dbcreator fixed server role, 222 db_datareader fixed database role, 217, 225, 228 db_datawriter fixed database role, 218, 228 db_denydatareader fixed database role, 218 db_denydatawriter fixed database role, 218 dbo schema, 237 dbo (database owner) user account, 185, 215 dbo.sp_send_dbmail system stored procedure, 403–404 db_owner fixed database role, 218, 228 db_securityadmin fixed database role, 218 DCOM (Distributed Component Object Model), 302, 349
DDL triggers, 83–84, 90–95, 196–198 for auditing object changes, 198–201 creating, 91 to prevent database schema changes, 224 deadlock, 159 capturing information, 100 debugging packages in SSIS, 506, 506 DECIMAL data type, 12 dedicated administrator connection (DAC), 157–158 default schemas, 239–240 DELETE permission, 229 delete trigger, providing data for undelete, 86–87 delivering reports, 464–466 denormalization, 6–8, 7 density of data, 57 DENY statement, 230–231 DLL triggers and, 90 dependencies, transitive, 5 dependency chain, 234. See also ownership chains deploying database. See database deployment plan Derived Column transformation, 498 Derived Transformation Editor, 516 detaching and attaching databases, 275, 283–284 deterministic functions, 72 dialog timer messages, in Service Broker, 353 dialogs, in Service Broker, 351, 354, 354–356 Digest authentication, 334 digital certificates. See certificates Dimension processing data flow destination, 500 Direction element for report, 453 DISABLE TRIGGER statement, 84 discovery, in SOA, 350 disk drives, I/O bottlenecks, 172–173 diskadmin fixed server role, 222 distributed application development, history, 302–303 distributed clients, XML web services for exposing data to, 338
Distributed Component Object Model (DCOM), 302, 349 Distributed Management Objects (DMO) object model, 276 Distributed Management Task Force (DMTF), 96 distributed queries, ad hoc, 538–541 Distributed Transaction Coordinator (DTC), 303 Distribution Agent (distrib.exe), 543 distributor in replication, 542 Distributor Logging setting, in Notification Services, 384 DML (Data Manipulation Language), 527 DML (Data Manipulation Language) triggers, 6, 83–84, 84–90, 196 AFTER, 85–90 creating, 85 INSTEAD OF, 87–90 order of execution, 85, 86 for referential integrity maintenance, 30–31 DMO (Distributed Management Objects) object model, 276 DMTF (Distributed Management Task Force), 96 DMVs. See dynamic management views (DMVs) document maps in reports, 451 documentation of indexes, 58 of SQL Server Agent jobs, 106 domain integrity, 18 drill-through link, 451 DROP ENDPOINT statement, 311 for native XML web services, 328 DROP statements, DLL triggers and, 90 DTC (Distributed Transaction Coordinator), 303 .dtsx file extension, 479 dynamic management views (DMVs), 141–150 common language runtime (CLR), 142–143 database, 143–144 full-text index, 145–146 I/O, 146 query, 144–145 query notification, 148 replication, 148–149
Service Broker, 142 SQL operating system, 146–148 transaction, 149–150
E Effective Permissions dialog box, 226 email. See also Database Mail for database notifications, importance of alternative checks, 394 to send reports, 465 sending from SQL Server Agent, 407 sending using mail profile, 404 encryption need for, 188–189 of programming objects, 290–291 SQL Server hierarchy for, 188 ENCRYPTION option, in DDL code, 63 END CONVERSATION statement, 355 end dialog messages, in Service Broker, 353 endpoints, 311 authentication types, 334 granting permissions on, 333 types, 312 entities, 62, 233 definition, 21–25 entity integrity, 25–27 entity relationships, 28–31 foreign key constraints, 29–30 procedural code, 30–31 referential integrity, 28–29 error logs, in Management Studio, 151 error messages custom, 115 for memory problems, 167 in Service Broker, 353 Event Generator Quantum setting, in Notification Services, 384 event handlers in SSIS, 502, 503 event notifications, 202–208, 203 components in Service Broker, 203 Event Processing Order setting, in Notification Services, 384
Event Throttle setting, in Notification Services, 384 EVENTDATA() structure, 92 events, in Notification Services, 363 Excel Connection Manager, 488 Excel data flow destination, 500 executable process, SQL Server Agent for, 105 EXECUTE AS CALLER, 246–247 EXECUTE AS OWNER, 247 EXECUTE AS SELF, 247 EXECUTE AS user_name, 247–248 EXECUTE permission, 229 Execute SQL Task Editor, 510 EXECUTE statement, 241–242 execution context strategy, 240–253 common language runtime (CLR), 248–249 explicitly switching, 241–245 implicitly switching, 245–253 explicit namespace reservations, 331 explicit permissions, 226 Export Column transformation, 500 exporting table, BCP utility for, 530 Express Edition of SQL Server 2005, 317 of Visual Studio, 317 extended stored procedures, 81 Extensible Markup Language (XML). See XML (Extensible Markup Language) EXTERNAL_ACCESS permission, for CLR assemblies, 248
F fields, persisting computed, 23–24 File and Multiple File Connection Manager, 489 File menu (Visual Studio) ➢ New ➢ Project, 317 file size, for XML, 304 file system watcher, in Notification Services, 379–380
FILLFACTOR option, for index, 56–57 filters, for report, 451 FIPS-140-2 Common Criteria, 194 first normal form (1NF), 4, 4 fixed database roles, 217–218, 222 fixed database users, 215–216 fixed server roles, 221 Flat File Connection Manager, 488 Flat file data flow destination, 500 FLOAT(n) data type, 11, 12 fn_virtualfilestats function, 154 output, 155 FontFamily element for report, 453 FontSize element for report, 453 FontStyle element for report, 453 FontWeight element for report, 453 footer for report, 445 For Loop container, 491 Foreach Loop container, 491 foreign key, 6 duplicating to reduce number of joins, 7 foreign key constraints, 29–30 Format dialog box (Report Builder), 457, 458 Format element for report, 453 formatting in reports, 452–454 free-form text, data type for, 13 FTP Connection Manager, 489 full-text index, dynamic management views (DMVs), 145–146 functions, deterministic vs. nondeterministic, 72 Fuzzy Grouping transformation, 497 Fuzzy Lookup transformation, 497
G GET CONVERSATION GROUP statement, 356–357 GRANT OPTION for REVOKE, 232 GRANT statement, 223–224, 230 DDL triggers and, 90
graphical elements in reports, 452 graphical execution plan, in Management Studio, 151, 152 Grouping and Sorting Properties dialog box, 449, 449 grouping data in reports, 446–450 groups. See also database roles requiring read-only access to data, 224–227 responsible for modifying data, 227–228 in Windows, vs. users, 212 guest database user account, 216, 225 GUIDs, UNIQUEIDENTIFIER data type for storing, 13
H hash (digest), 185 header for report, 445 heap, 57 hidden controls in reports, 451 hierarchy in business, 211 in reports, 449, 450 history of report snapshots, 461, 462 HTTP (Hypertext Transfer Protocol), 304 processing stack, 309 HTTP Connection Manager, 489 Httpcfg.exe utility, 331 HTTP.SYS kernel mode driver, 309, 330
I I/O bottlenecks, troubleshooting performance problems with, 172–173 I/O (input/output), dynamic management views (DMVs), 146 IF EXISTS ... DROP statement, 273 IIS (Internet Information Services), 306 and SQL Server 2005, testing, 330 stopping, 312 IMAGE data type, 12
immediate updating subscriptions, 544 IMPERSONATE permissions, 242 implicit namespace reservations, 331 implicit permissions, 225 Import Column transformation, 500 importing table, BCP utility for, 530 Index Tuning Wizard (ITW), 155 indexed views, 67–71 creating, 70–71 indexes, 53–61, 54 clustered, 57–58 on computed fields, 24 covering, 60–61 DMVs for monitoring, 175 documentation of, 58 for foreign key columns, 30 fragmentation, 138 implementing during table creation, 25 nonclustered, 58–60 syntax for creating, 55–56 information strategy plan (ISP), 210 initial build scripts, 273 inline table-valued user-defined functions, 75–76 IN_ROW_DATA allocation unit, 20 INSERT permission, 229 INSERT statement, nesting SELECT statement within, 527 instance configuration file (ICF), for Notification Services, 385–388 instance of Notification Services, deploying, 388–392 INSTEAD OF triggers, 87–90 for updating view, 64 INT data type, 11 Integrated authentication, 334 Integration Services. See SQL Server Integration Services (SSIS) Integration Services Object Model, 505 integrity of data, 2 triggers for enforcing, 83 interactive layout-rendering extensions, 451
Internet Information Services (IIS), 306 and SQL Server 2005, testing, 330 stopping, 312 ISP (information strategy plan), 210 ITW (Index Tuning Wizard), 155
J join operations duplicating foreign key to reduce number, 7 indexed views for, 67 normalization and, 3, 6 join transformations, 499
K Kerberos, 187, 188, 334 KISS principle, 210
L Language element for report, 453 layout-rendering extensions, interactive, 451 leaf level of B-tree, 54 “least privilege” concept, 249 licensing, reducing costs, 324 lifetime for dialog, 355 LineHeight element for report, 453 linked servers, 534–537 architecture, 535 querying, 537 links, in reports, 451 list layout for report, 446 log files, disk space for, 140 Log Reader Agent (logread.exe), 543 logical database design, 2–8 denormalization, 6–8 normalization, 3–6 first normal form (1NF), 4, 4 second normal form (2NF), 5, 5 third normal form (3NF), 5, 6 logical page-rendering extensions, 451
login access requirements, 212–214 importance of, 214 for report rendering, 441–442 login token, 240 viewing information about, 241 LOGIN_TYPE argument, in CREATE ENDPOINT statement, 327 logs, monitoring, 137 Lookup transformation, 499 “loosely coupled services,” 306
M maintenance tasks in SSIS, 494–495 Management Studio. See SQL Server Management Studio master system database, system stored procedures in, 80 materializing views, 67 limitations, 68–69 matrix reports, 446 from Report Wizard, 433 Memory– Available Bytes counter, 167 Memory– Pages/sec counter, 135 memory pressure, 166 memory, troubleshooting performance problems with, 166–168 Merge Agent (replmerge.exe), 543 Merge Join transformation, 499 merge replication, 543 Merge transformation, 499 message type, in Service Broker, 202 messages in Service Broker, 351, 352–354 as SOA component, 350 Microsoft Baseline Security Analyzer (MBSA), 189 Microsoft ClickOnce WinForms application, 419 Microsoft Internet Information Services (IIS), 306 and SQL Server 2005, testing, 330 stopping, 312 Microsoft Management Console (MMC), to view certificates, 186
Microsoft .NET data provider for mySAP Business Suite, 489 Microsoft Product Support Services (PSS), 141 Microsoft Report Builder. See Report Builder Microsoft Solution Framework (MSF), 265 model system database UDFs in, 73 user-defined data types in, 15 modular design techniques, 349 MONEY data type, 11 MOVE CONVERSATION statement, 357 msdb system database, 395 MSF (Microsoft Solution Framework), 265 MSMQ Connection Manager, 489 ##MS_SQLResourceSigningCertificate##, 187 Multicast transformation, 499 Multiple Flat File Connection Manager, 488 multistatement table-valued user-defined functions, 76–77
N N-tier architecture design, 264 NAME argument, in CREATE ENDPOINT statement, 327 named instance, restarting, 329 namespaces, 188 organizing database objects into, 237 reservation for native XML web services, 329–331 native XML web services ALTER ENDPOINT statement for, 328 vs. ASMX web services, 336–337 best practices, 337–338 CREATE ENDPOINT statement for, 325–328 DROP ENDPOINT statement for, 328 namespace reservation, 329–331 querying endpoint metadata, 328–329
security implementation, 332–335 authentication, 334 authorization, 332–333 best practices, 335 in SQL Server, 307–308 vs. SQLXML, 335–336 natural key, 26 NCHAR data type, 12, 13 nesting triggers, 84 Network Interface– Bytes Total/ sec counter, 135 Network Monitor Agent, 139 network resources, and performance, 138 New Connection dialog box, 483 New Notification Services Instance dialog box, 389, 390 New Project dialog box (Visual Studio), 318 for business intelligence projects, 480 NEWID() function, 13 nonclustered indexes, 58–60 nondeterministic functions, 72 normalization, 3–6 first normal form (1NF), 4, 4 second normal form (2NF), 5, 5 third normal form (3NF), 5, 6 Northwind sample database, 309 Notification Services. See SQL Server Notification Services (SSNS) Notification Services Management Objects (NMO), 365 notifications, from SQL Server Agent, 104 NTEXT data type, 12, 13 NTLM authentication, 334 null delivery provider, 460 NumeralLanguage element for report, 453 NumeralVariant element for report, 453 NUMERIC data type, 12 NVARCHAR data type, 12, 13, 20
O object counters, user settable, 117
object design, 48–61 comparing versions, 288, 289 indexes, 53–61 clustered, 57–58 nonclustered, 58–60 partitioned tables, 48–53 object instances, 140 object ownership chaining, 234–236, 235 ODBC Connection Manager, 488 OLE DB Connection Manager, 488 OLE DB data flow destination, 501 OLE DB Source Editor dialog box, 512–514 OLE DB transformation, 498 OPENDATASOURCE function, 539 OPENQUERY function, 538 OPENROWSET function, 539–541 operating system and performance, 138 security, 189 operators, SQL Server Agent for defining, 118, 119 ORDER BY clause, in view definitions, 65 orphaned records, 28 overhead from monitoring, minimizing, 158 ownership chains broken, 185 example, 236
P packages in SSIS, 479, 479 creating, 480–487 exercise, 507–526 data flow design, 501 executing, 525 structure, 487–502 connection managers, 488–490 control flow components, 490–495, 491 data flow components, 495–501, 496 testing and debugging, 506, 506 @@PACK_SENT function, 154 PaddingBottom element for report, 453
PaddingLeft element for report, 453 PaddingRight element for report, 453 page density calculation, 20–21 page-rendering extensions, 451 paging, excessive, 166 parameters for report, 451 Partition processing data flow destination, 501 partitioned tables, 48–53 partitioned views, 49 pass-through queries, 537 peer-to-peer transactional replication, 544, 545 Pending Checkins window (Management Studio), 288 Percentage Sampling transformation, 499 Perforce, 287 performance, 2, 79 DML triggers and, 84 factors affecting, 137–138 optimizing report execution, 458–462 caching reports, 460, 460 report snapshots, 461, 461–462 troubleshooting, 138–139, 165–175 CPU problems, 169–171 I/O bottlenecks, 172–173 memory problems, 166–168 queries, 174–175 tempdb problems, 173–174 performance counters, 140 for baseline, 135–136 Performance Logs and Alerts, 139, 140, 159 performance monitoring, 133 minimizing overhead, 158 proactive monitoring, 133–137 benchmark, 136 establishing baseline, 135–136 ongoing monitoring, 136–137 with SQL Server tools, 141–158 Database Engine Tuning Advisor, 155–156, 156 DBCC commands, 151–154
dedicated administrator connection (DAC), 157–158 dynamic management views and functions, 141–150 Management Studio, 150–151 SQL Server Profiler, 141 SQL Trace, 141 system functions, 154 system stored procedures, 156–157 trace flags, 154–155 tool selection, 158–159 with Windows tools, 139–140 performance object, 140 Performance Object counters, creating custom, 116 performance objectives, 132–139, 268 performance profiles developing, 267–268 testing, 268 Performance Query Interval setting, in Notification Services, 384 permissions. See also database object permissions displaying effective, 226 physical entities’ design, 8–31 attributes, 9–18 data types and sizes, 11–17 domain integrity, 18 types of data, 10 data row storage by SQL Server, 19–20 entity definition, 21–25 entity integrity, 25–27 entity relationships, 28–31 foreign key constraints, 29–30 procedural code, 30–31 referential integrity, 28–29 page density calculation, 20–21 physical page-rendering extensions, 451 PhysicalDisk– Disk Transfers/sec counter, 135 PhysicalDisk Object– Avg. Disk Queue Length counter, 172 PhysicalDisk Object– Avg. Disk Sec/Read counter, 172
PhysicalDisk Object– Avg. Disk Sec/Write counter, 172 PhysicalDisk Object– % Disk Time counter, 172 pivot-table reports, 446 Pivot transformation, 499 PORTS argument, in CREATE ENDPOINT statement, 326 post-build scripts, 274 precedence constraints for control flow, 490, 495 preloading cache with reports, 460 primary key, 25–26 creating, 18, 26–27 in first normal form, 4 importance of, 28 primitive XML data types, 381 principals, 185, 210, 211 granting database permissions to, 223–224 private mail profiles, 396 procedure cache, 81 procedures, 233. See also stored procedures; user-defined functions Process– Working Set counter, 167 processadmin fixed server role, 222 Processor– % Processor Time counter, 135, 169 Product Support Services (PSS), 141 Profiler. See SQL Server Profiler programmable object scripts, 274 progress reporting, in SSIS, 506 PSS (Product Support Services), 141 public fixed database role, 217 public mail profiles, 396 publication in replication, 542 publisher in replication, 542 pushing reports to users, 464
Q quality attributes for application, 267 quantum, 383 Quantum Limits application setting, in Notification Services, 384 queries ad hoc distributed, 538–541
dynamic management views (DMVs) for, 144–145 of endpoint metadata, 328–329 of linked servers, 537 performance, 267 testing performance, 269–272 troubleshooting performance problems with, 174–175 Query Builder, 443 query notification, dynamic management views (DMVs), 148 query plans, efficiency of, 171 queue, in Service Broker, 202 Queue Reader Agent (qrdrsvc.exe), 543 queued updating subscriptions, 544
R RAISERROR statement, 115–116, 405 Raw file data flow destination, 501 RDL (Report Definition Language), 419 .rdl file, 419 report definition in, 445 read-only access to database, 225 REAL data type, 11, 12 RECEIVE permission, 229 RECEIVE statement, 355–356 recompilations of query plans, and CPU problems, 170, 170–171 RECOMPILE option, for stored procedures, 81 Recordset data flow destination, 501 recursive triggers, 85 Recycle Bin, triggers for using, 83 redundancy, removing, 3 Reference.cs file, 320 Reference.map file, 320 REFERENCES permission, 229 referential integrity, 8, 28–29 foreign key constraints for, 29–30 triggers for enforcing, 83 Register Instance dialog box, 391, 391 registering assemblies, 77–78
relationships. See entity relationships Remote Method Invocation (RMI), 302 remote monitoring, performance overhead impact, 158 rendering, 450–451 replication, 541–545 agents, 543 benefits, 545 components, 542–543 for database deployment, 285 dynamic management views (DMVs), 148–149 frequency, 544 types, 543–544 report body, 445 Report Builder, 418, 419, 455–457, 456, 457 ad hoc reports for, 419 selecting report model in, 431 Report Definition Language (RDL), 419 report designer in Visual Studio, 437 report footer, 445 report header, 445 Report Manager, 463, 463 report model, 419–431 building, 420–428 deploying, 429 publishing, 429 for Report Builder, 456 semantic objects, 420 using, 429–431, 430 Report Model Wizard, 425–428, 425–428 report query window, 442 Report Server Project Wizard, 431, 432 Report Wizard in SSRS, 431–437, 432 Choose the Table Layout window, 436, 436 Choose the Table Style window, 436, 436 Completing the Wizard window, 437 Design the Query window, 434, 434 Select the Data Source window, 433 Select the Report Type window, 435, 435 welcome page, 433 Reporting Services Configuration Manager, 466
Reporting Services reports from Visual Studio, 437–445 credentials, 441, 441–442 layout, 443, 443 reports controlling user interaction with, 451 data regions in, 445–446 delivering, 464–466 graphical elements, 452 grouping data, 446–450 layout and rendering options, 445–455 nongraphical elements, 454–455 optimizing execution, 458–462 caching reports, 460, 460 report snapshots, 461, 461–462 rendering process at runtime, 450–451 style elements, 452–454 reportserver system database, 462 reportservertempdb system database, 462 resources application utilization, 267, 268 authorization, 187 RETURN statement, in scalar functions, 73 REVOKE statement, 231–232 DDL triggers and, 90 RMI (Remote Method Invocation), 302 roles, 185 root of B-tree, 54 route, in Service Broker, 202 Row Count transformation, 500 row counts in SSIS, 506 Row Sampling transformation, 499 row transformations, 498 ROW_OVERFLOW_DATA allocation unit, 20 rowset transformations, 498–499 “runaway” reports, preventing, 458
S sa (system administrator) account, 185 mapping to dbo account, 215 SAC.EXE, 190, 191
SAFE permission, for CLR assemblies, 248 sample databases, AdventureWorks and Northwind, 309 SAP data, connecting to, 489 scalar aggregates, in indexed view, 67 scalar-valued functions (SVFs), 73–74 scenario, 265 scheduled jobs report execution as, 458, 459 from SQL Server Agent, 104, 105–113 defining job, 106, 106–107 defining job steps, 107, 107–109 notifications, 111, 111–112 scheduling, 109, 109–111 target server for, 112, 112–113 SCHEMABINDING option, in DDL code, 63 schemas, 188, 211 for object ownership management, 237–240 default schemas, 239–240 Script component data flow destination, 501 Script Component transformation, 498 scripting tasks in SSIS, 494 scripts for database deployment, 272–273, 285–287 to extend SSIS, 505 SDM (System Definition Model), 350 search arguments (SARGs), based on multiple columns, 59 second normal form (2NF), 5, 5 securables, 185, 210 security application security, 192–193 BATCHES argument for CREATE ENDPOINT, 327 for calling web service, 321 certificates, need for, 186–188 database access requirements, 214–228 application administration requirements, 221–224
application roles definition, 219–221 database roles definition, 217–219 database users definition, 215–216 users and groups requiring read-only access to data, 224–227 users and groups responsible for modifying data, 227–228 database design to enable auditing, 193–208 of data access, 194 of data changes, 196 of object changes, 196–208 database object permissions, 228–232 CASCADE clause, 232 DENY statement, 230–231 GRANT OPTION for REVOKE, 232 GRANT statement, 230 REVOKE statement, 231–232 database objects for maintaining, 233–236 object ownership chaining, 234–236, 235 encryption need for, 188–189 SQL Server hierarchy for, 188 execution context strategy, 240–253 explicitly switching, 241–245 implicitly switching, 245–253 exercise, 249–253 KISS principle, 210 login access requirements, 212–214 in native XML web services, 332–335 authentication, 334 authorization, 332–333 best practices, 335 schemas for object ownership management, 237–240 default schemas, 239–240
security model definition, 210–211 in SQL Server hierarchy, 209 history, 185–189 stored procedures and, 80 surface area of attack, 189–190 views and, 63 securityadmin fixed server role, 222 SELECT permission, 229 Select Report Model Metadata Generation Rules window (Report Model Wizard), 426 SELECT statement, nesting within INSERT statement, 527 selectivity of data, 57 semistructured data, 6 Send Mail Task Editor, 523–524 SEND statement, 355 Sequence container, 491 server-level context, 242 server-level events, 204–207 server-level permissions, 333 Server Management Objects (SMO) method, 275–276 server scope, and DDL triggers, 91 serveradmin fixed server role, 222 Service Broker, 83, 202, 348, 350–362 conversation architecture, 351–357, 352 conversation groups, 356–357 defining messages, 352–354 dialogs, 354, 354–356 dynamic management views (DMVs), 142 event notification components, 203 message-queuing architecture for Database Mail, 396 sample application, 357–362 service contract, in Service Broker, 353 service-level agreement (SLA), 132 service-oriented architecture (SOA), 349–350 application, and XML web service, 306 services
certificates for, 187 monitoring, 132 security for, 192 in Service Broker, 202 as SOA component, 349 setupadmin fixed server role, 222 SETUSER statement, 242 Simple Mail Transfer Protocol (SMTP), 348 Simple Object Access Protocol (SOAP), 304–305 SITE argument, in CREATE ENDPOINT statement, 326 SLA (service-level agreement), 132 Slowly Changing Dimension transformation, 500 SMALLDATETIME data type, 11, 13 SMALLINT data type, 11 SMALLMONEY data type, 11 SMO Connection Manager, 490 SMO (Server Management Objects) method, 275–276 SMTP (Simple Mail Transfer Protocol), 348 SMTP Connection Manager, 489 Snapshot Agent (snapshot.exe), 543 snapshot replication, 544 SOA (service-oriented architecture), 349–350 application, and XML web service, 306 SOAP (Simple Object Access Protocol), 304–305 endpoints, 311 creating, 325–328 software configuration management (SCM), 287 implementing, 288–289 Solution Explorer, References folder, 319, 319 Sort transformation, 499 source code management, 287–290 benefits, 287 sp_ prefix, 80 sp_add_job system stored procedure, 106, 112 sp_add_jobserver system stored procedure, 113 sp_add_jobstep system stored procedure, 108–109 sp_addlinkedserver system stored procedure, 535–536
sp_addlinkedsrvlogin system stored procedure, 536–537 sp_addlogin system stored procedure, 213 sp_addmessage system stored procedure, 115 sp_add_operator system stored procedure, 118 sp_addrolemember stored procedure, 217 sp_add_schedule system stored procedure, 110 sp_addsrvrolemember system stored procedure, 221 sp_addtype system stored procedure, 16 spam engine, creating, 507–526 sp_attach_schedule system stored procedure, 111 sp_cycle_errorlog system stored procedure, 137 sp_delete_http_namespace_reservation system stored procedure, 331 sp_detach_db stored procedure, 283 sp_grantlogin system stored procedure, 213 sp_helptext system stored procedure, 80 split and join transformations, 499 sp_lock system stored procedure, 157 sp_monitor system stored procedure, 157 output, 157 sp_recompile system stored procedure, 81 sp_reserve_http_namespace system stored procedure, 331 sp_send_dbmail system stored procedure, 396, 405 sp_setapprole system stored procedure, 220 sp_settriggerorder system stored procedure, 85 sp_trace_create system stored procedure, 141, 194 sp_trace_generateevent system stored procedure, 141 sp_trace_setevent system stored procedure, 141, 194 sp_trace_setfilter system stored procedure, 141, 194 sp_trace_setstatus system stored procedure, 141, 194
sp_unsetapprole stored procedure, 220 sp_update_job system stored procedure, 112 sp_who system stored procedure, 157 SQL Destination Editor, 520 SQL login, 212 for report rendering, 442 SQL Mail, 393 migrating to Database Mail, 405–406 SQL operating system, dynamic management views (DMVs), 146–148 SQL Server 2005 basic unit of input/output, 19 data page structure, 19 Express Edition of, 317 and IIS, testing, 330 new features, 48 restarting, 329 security hierarchy, 209 SQL Server Agent, 83, 104–118, 527, 528 alerts from, 113, 113–117 for defining operators, 118, 119 email, 407 scheduled jobs, 105–113 defining job, 106, 106–107 defining job steps, 107, 107–109 notifications, 111, 111–112 scheduling, 109, 109–111 target server for, 112, 112–113 SQL Server Business Intelligence Development Studio, 420 for creating SSIS package, 480–487 SQL Server data flow destination, 501 SQL Server Database Mail. See Database Mail SQL Server Event Alerts, 114–116, 115 SQL Server event provider, in Notification Services, 380–381 SQL Server Integration Services (SSIS), 105, 478–526 architecture, 503–505, 504 connection managers, 488–490
control flow components, 490–495, 491 containers, 491 precedence constraints, 495 tasks, 491–495 event handlers, 502, 503 log providers, 502 Toolbox, 481, 481–482 variables, 502, 503 SQL Server Management Studio for compiling ICF and creating instance, 389, 389, 390 for Database Mail configuration, 398 defining job schedule, 109, 109 and endpoint management, 311, 311 Notification Services integration, 362 Pending Checkins window, 288 for performance monitoring, 150–151 to reverse engineering data as batch of INSERT statements, 275 for scripting database objects, 272–273, 273 software configuration management with, 287, 288 for viewing logs, 137 SQL Server Mobile Connection Manager, 489 SQL Server mobile data flow destination, 501 SQL Server Notification Services (SSNS), 348, 362–393 application building, 365–385 application database information, 373–374 application execution settings, 384–385 architecture, 363–364, 364 deploying instance, 388–392 event distributor properties, 383–384 event generator properties, 382–383 event provider information, 379–382 instance configuration file (ICF), 385–388 notification classes, 376–379
sample applications, 392–393 subscription class information, 375–376 SQL Server Performance Condition Alerts, 116, 116–117 SQL Server Profiler, 136 capturing SQL workloads, 268, 269 correlating trace with Windows performance log data with, 159–165, 165 for performance monitoring, 141 performance overhead impact, 158 for trend analysis, 158 SQL Server Reporting Services (SSRS), 418 options for report development, 419 Report Wizard, 431–437, 432 Choose the Table Layout window, 436, 436 Choose the Table Style window, 436, 436 Completing the Wizard window, 437 Design the Query window, 434, 434 Select the Data Source window, 433 Select the Report Type window, 435, 435 welcome page, 433 SQL Server Surface Area Configuration (SSAC) tool, 189–190, 190, 248 SQL Server tasks in SSIS, 493 SQL Trace, for performance monitoring, 141 SQLServer: Access Methods–Full Scans/sec counter, 135 SQLServer: Buffer Manager–Buffer Cache Hit Ratio counter, 135, 166, 167 SQLServer: Buffer Manager–Page Life Expectancy counter, 166, 167 SQLServer: Databases Application Database–Percent Log Used counter, 135
SQLServer: Databases Application Database–Transactions/sec counter, 135 SQLServer: Databases–Log Growths counter, 135 SQLServer: General Statistics–User Connections counter, 135 SQLServer: Latches–Average Latch Wait Time counter, 135 SQLServer: Locks–Average Wait Time counter, 136 SQLServer: Locks–Lock Waits/sec counter, 136 SQLServer: Locks–Number of Deadlocks/sec counter, 136, 159 SQLServer: Memory Manager–Memory Grants Pending counter, 136 SQLServer: SQL Statistics–Batch Requests/sec counter, 171 SQLServer: SQL Statistics–SQL Compilations/sec counter, 171 SQLServer: SQL Statistics–SQL Recompilations/sec counter, 171 SQLServer: User Settable–Query counter, 136 SQL_VARIANT data type, 12, 20 SQLXML, vs. native XML web services, 335–336 SSAC. See SQL Server Surface Area Configuration (SSAC) tool SSIS. See SQL Server Integration Services (SSIS) SSNS. See SQL Server Notification Services (SSNS) SSRS. See SQL Server Reporting Services (SSRS) standard subscription, 466 STATE argument, in CREATE ENDPOINT statement, 326 statement-level permissions, 223 status report, in Notification Services instance registration, 392 stock-tracking application, Notification Services for, 392 stopping Internet Information Services (IIS), 312
stored procedures, 62, 80–82, 83. See also specific names of procedures creating, 310–311 for Database Mail, 401–403 modifying DDL statements, 82 for testing query performance, 270–272 types, 80–81 structured data, 10 subreport, 454 subscriber in Notification Services, 363 in replication, 542 subscriptions creating, 465–466, 466 displaying, 467 in Notification Services, 363 in replication, 543 for report delivery, 464 Surface Area Configuration tool, 538, 538 to enable Database Mail, 397, 397 surface area of attack, 189–190 surrogate key, 26 SVFs (scalar-valued functions), 73–74 sysadmin fixed server role, 222 sysadmin server role, mapping to dbo account, 215 sys.dm_broker_activated_tasks DMV, 142 sys.dm_broker_connections DMV, 142 sys.dm_broker_forwarded_messages DMV, 142 sys.dm_broker_queue_monitors DMV, 142 sys.dm_clr_appdomains DMV, 142 sys.dm_clr_loaded_assemblies DMV, 143 sys.dm_clr_properties DMV, 143 sys.dm_clr_tasks DMV, 143 sys.dm_db_file_space_usage DMV, 143, 174 sys.dm_db_index_operational_stats DMV, 143, 175 sys.dm_db_index_physical_stats DMV, 143, 175 sys.dm_db_index_usage_stats DMV, 143, 175 sys.dm_db_mirroring_connections DMV, 143 sys.dm_db_missing_index_columns DMV, 143, 175
sys.dm_db_missing_index_details DMV, 143, 175 sys.dm_db_missing_index_groups DMV, 144, 175 sys.dm_db_missing_index_group_stats DMV, 144, 175 sys.dm_db_partition_stats DMV, 144 sys.dm_db_session_space_usage DMV, 144 sys.dm_db_task_space_usage DMV, 144, 174 sys.dm_exec_background_job_queue DMV, 144 sys.dm_exec_background_job_queue_stats DMV, 144 sys.dm_exec_cached_plans DMV, 144 sys.dm_exec_connections DMV, 144 sys.dm_exec_cursors DMV, 144 sys.dm_exec_plan_attributes DMV, 144 sys.dm_exec_query_optimizer_info DMV, 145, 171 sys.dm_exec_query_plan DMV, 145 sys.dm_exec_query_stats DMV, 145, 169, 171, 172 sys.dm_exec_requests DMV, 145 sys.dm_exec_sessions DMV, 145 sys.dm_exec_sql_text DMV, 145 sys.dm_fts_active_catalogs DMV, 145 sys.dm_fts_crawl_ranges DMV, 145 sys.dm_fts_crawls DMV, 146 sys.dm_fts_memory_buffers DMV, 146 sys.dm_fts_memory_pools DMV, 146 sys.dm_io_backup_tapes DMV, 146 sys.dm_io_cluster_shared_drives DMV, 146 sys.dm_io_pending_io_requests DMV, 146, 172 sys.dm_io_virtual_file_stats DMV, 146, 172 sys.dm_os_buffer_descriptors DMV, 146 sys.dm_os_cache_counters DMV, 168 sys.dm_os_child_instances DMV, 147 sys.dm_os_cluster_nodes DMV, 147 sys.dm_os_hosts DMV, 147
sys.dm_os_latch_stats DMV, 147 sys.dm_os_loaded_modules DMV, 147 sys.dm_os_memory_cache_clock_hands DMV, 147, 168 sys.dm_os_memory_cache_counters DMV, 147 sys.dm_os_memory_cache_entries DMV, 147 sys.dm_os_memory_cache_hash_tables DMV, 147 sys.dm_os_memory_clerks DMV, 147, 168 sys.dm_os_memory_objects DMV, 147, 168 sys.dm_os_memory_pools DMV, 147 sys.dm_os_performance_counters DMV, 147 sys.dm_os_ring_buffers DMV, 168 sys.dm_os_schedulers DMV, 147, 169 sys.dm_os_stacks DMV, 148 sys.dm_os_sys_info DMV, 148 sys.dm_os_tasks DMV, 148 sys.dm_os_threads DMV, 148 sys.dm_os_virtual_address_dump DMV, 148 sys.dm_os_waiting_tasks DMV, 148 sys.dm_os_wait_stats DMV, 148, 172 sys.dm_os_workers DMV, 148 sys.dm_qn_subscriptions DMV, 148 sys.dm_repl_articles DMV, 149 sys.dm_repl_schemas DMV, 149 sys.dm_repl_tranhash DMV, 149 sys.dm_repl_traninfo DMV, 149 sys.dm_tran_active_snapshot_database_transactions DMV, 149 sys.dm_tran_active_transactions DMV, 149 sys.dm_tran_current_snapshot DMV, 149 sys.dm_tran_current_transaction DMV, 149 sys.dm_tran_database_transactions DMV, 149 sys.dm_tran_locks DMV, 149 sys.dm_tran_session_transactions DMV, 149 sys.dm_tran_top_version_generators DMV, 150 sys.dm_tran_transactions_snapshot DMV, 150 sys.dm_tran_version_store DMV, 150 sys.endpoints catalog view, 328
sys.endpoint_webmethods catalog view, 329 sys.http_endpoints catalog view, 328 sysmail_ stored procedures, 401–403 sysmail_add_profileaccount_sp stored procedure, 402, 403 sys.server_permissions database catalog view, 221 sys.server_triggers catalog view, 84 sys.soap_endpoints catalog view, 329 sys.sysmessages system catalog, 114 System Definition Model (SDM), 350 System Monitor (Windows), 139, 140 performance overhead impact, 158 for testing performance, 268, 270 for trend analysis, 158 System– Processor Queue Length counter, 169 system resources, application utilization, 267, 268 system stored procedures, 80 to manage SQL trace, 141 for performance monitoring, 156–157 System– % Total Processor Time counter, 169 system variables, 502 sys.triggers catalog view, 84
T T-SQL script, 105 T-SQL stored procedures, 80 T-SQL user-defined data types, 15–16 Table control, in report, 443, 444 table integrity. See entity integrity table layout for report, 445 designer, 436 Table Properties dialog box, Groups tab, 448 table-valued user-defined functions (TVFs) inline, 75–76 multistatement, 76–77 tables, physical order for, 57
Tabular Data Stream (TDS), 307, 308 authentication in, 334 tabular reports from Report Wizard, 433 design, 435 TAKE OWNERSHIP permission, 229 Task Host container, 491 Task Manager, 139, 168 on SQL Server use of CPU, 169 tasks for control flow, 490, 491–495 Analysis Services tasks, 494 data flow tasks, 492, 492 data preparation tasks, 492 maintenance tasks, 494–495 scripting tasks, 494 SQL Server tasks, 493 workflow tasks, 492–493 TCP/IP (Transmission Control Protocol/Internet Protocol), 304 TDD (test-driven development), 264 TDS (Tabular Data Stream), 307, 308 authentication in, 334 tempdb system database, troubleshooting performance problems with, 173–174 templates, for reports, 436 Term Extraction transformation, 498 Term Lookup transformation, 498 test-driven development (TDD), 264 testing IIS and SQL Server, 330 packages in SSIS, 506, 506 TEXT data type, 12 TextAlign element for report, 453 TextBox controls, 454 TextDecoration element for report, 453 third normal form (3NF), 3, 5, 6 thread pool size, for processing Notification Services, 382 throughput, 268 measuring query, 269 timeout parameter, in report execution, 458, 459 TIMESTAMP data type, 11 TINYINT data type, 11
@@TOTAL_READ function, 154 trace, 141 correlating with Windows performance log data with SQL Profiler, 159–165, 165 trace flags, 154–155 Trace Properties dialog box, 195 Transact-SQL. See T-SQL transaction, dynamic management views (DMVs), 149–150 transactional replication, 544 transformations, data flow, 497–500 transitive dependencies, 5 Transmission Control Protocol/Internet Protocol (TCP/IP), 304 trend analysis, 158 triggers, 83. See also DDL triggers DML, 85–90 vs. event notifications, 204 nesting, 84 recursive, 85 WMI (Windows Management Instrumentation), 95–104 troubleshooting performance, 138–139 CPU problems, 169–171 I/O bottlenecks, 172–173 memory problems, 166–168 queries, 174–175 tempdb problems, 173–174 TSQLUnit open source project, 264 TVFs (table-valued user-defined functions) inline, 75–76 multistatement, 76–77 typed XML, 15
U UDDI (Universal Description, Discovery, and Integration), 350 Unicode encoding, 13 UnicodeBiDi element for report, 453 Union All transformation, 499, 517, 517
unique constraint, creating, 18 UNIQUEIDENTIFIER data type, 12, 13 unit test framework, 264 unit tests benefits, 265 plan design, 264–274 assessing components, 265–267 performance profiles, 268 for query performance, 269–272 Universal Description, Discovery, and Integration (UDDI), 350 Unpivot transformation, 499 UNSAFE permission, for CLR assemblies, 248, 249 unstructured data, 6 untyped XML, 15 UPDATE permission, 229 UPDATE STATISTICS statements, DDL triggers and, 90 URL links in reports, 451 user-defined data types CLR, 17 T-SQL, 15–16 user-defined database roles, 219 user-defined database users, 216 user-defined functions, 62, 71–77 CLR, 77–79 CLR aggregate, 79–80 in indexed view, 67 inline table-valued, 75–76 multistatement table-valued, 76–77 scalar, 73–74 user-defined modules, security and, 193, 193 user token, 240 viewing information about, 241 users controlling interaction with reports, 451 data usage patterns, and indexing strategy, 55 prompting for input to report creation, 451 pushing reports to, 464 requiring read-only access to data, 224–227 responsible for modifying data, 227–228 in Windows, vs. groups, 212
V Vacuuming setting, in Notification Services, 384 VARBINARY data type, 12, 13, 20 VARCHAR data type, 12, 13, 20 variables, in SSIS, 502, 503 VeriSign certificates, 186 version control. See source code management VerticalAlign element for report, 454 Victor's Notation, 3 views, 63–66 advantages, 63 creating, 64–65 indexed, 67–71 INSTEAD OF DML triggers and, 87 INSTEAD OF trigger for updating, 64 restrictions, 63–64 virtual memory, 168 Visual Basic.NET, Express Edition, 317 Visual C#.NET, Express Edition, 317 Visual SourceSafe, 287 Visual Studio for building client application, 316–323 Express Edition, 317 Report Designer, 419 Report Wizard, 419 for Reporting Services reports, 437–445 credentials, 441, 441–442 layout, 443, 443 SOA-based development in, 350
for SSIS, Data Flow tab, 510, 511 Team Foundation Server, 287 Visual Web Developer, Express Edition, 317 volatility of denormalized data, 6
W weather application, Notification Services for, 392 web-based solutions, importance of understanding process, 315 web reference, adding to client application, 319–320 web server, for XML web services, 306 Web Services Description Language (WSDL), 305, 314–315, 316 web services, security for, 321 WEBMETHOD argument, in CREATE ENDPOINT statement, 327 Windows groups vs. users, 212 performance monitoring tools, 139–140 Windows authentication, for report rendering, 442 Windows login, 212 WITH ENCRYPTION clause, 290–291 WMI (Windows Management Instrumentation) Connection Manager, 490 event alerts, 95–104, 100, 117 architecture, 95 triggers, 95–104 workflow tasks in SSIS, 492–493
workloads, capturing with SQL Profiler, 268, 269 WQL (WMI Query Language), 96 WritingMode element for report, 454 WSDL (Web Services Description Language), 305, 314–315, 316 WSDL argument, in CREATE ENDPOINT statement, 327
X XML (Extensible Markup Language), 304 index, syntax for creating, 56 for semistructured data, 6 XML data type, 12, 14–15 XML web services, 303–307. See also native XML web services building with SQL Server, 309–314 standards for, 304 using, 314–323 client application, 316–323 xp_ prefix, 81 xp_cmdshell extended stored procedure, 248 xp_sendmail extended stored procedure, 405
The Best MCITP Book/CD Package on the Market!
Get ready for the new PRO: Designing Database Solutions by Using Microsoft SQL Server 2005 (70-441) exam with the most comprehensive and challenging sample tests anywhere! The Sybex Test Engine includes the following features:
Chapter-by-chapter exam coverage of all the review questions from the book.
Challenging questions representative of those you’ll find on the real exams.
Bonus case study questions, available only on the CD.
Use the more than 100 Electronic Flashcards for PCs or Palm devices to jog your memory and prep for the exam at the last minute!
Search through the complete book in PDF!
Access the entire MCITP Developer: Microsoft SQL Server 2005 Database Solutions Design (70-441) Study Guide, complete with figures and tables, in electronic format.
Search the MCITP Developer: Microsoft SQL Server 2005 Database Solutions Design (70-441) Study Guide chapters to find information on any topic in seconds.
Reinforce your understanding of key concepts with these hardcore flashcard-style questions.
Download the Flashcards to your Palm device, and go on the road. Now you can study anywhere, anytime.
Wiley Publishing, Inc. End-User License Agreement

READ THIS. You should carefully read these terms and conditions before opening the software packet(s) included with this book “Book”. This is a license agreement “Agreement” between you and Wiley Publishing, Inc. “WPI”. By opening the accompanying software packet(s), you acknowledge that you have read and accept the following terms and conditions. If you do not agree and do not want to be bound by such terms and conditions, promptly return the Book and the unopened software packet(s) to the place you obtained them for a full refund.

1. License Grant. WPI grants to you (either an individual or entity) a nonexclusive license to use one copy of the enclosed software program(s) (collectively, the “Software”) solely for your own personal or business purposes on a single computer (whether a standard computer or a workstation component of a multi-user network). The Software is in use on a computer when it is loaded into temporary memory (RAM) or installed into permanent memory (hard disk, CD-ROM, or other storage device). WPI reserves all rights not expressly granted herein.

2. Ownership. WPI is the owner of all right, title, and interest, including copyright, in and to the compilation of the Software recorded on the physical packet included with this Book “Software Media”. Copyright to the individual programs recorded on the Software Media is owned by the author or other authorized copyright owner of each program. Ownership of the Software and all proprietary rights relating thereto remain with WPI and its licensers.

3. Restrictions On Use and Transfer. (a) You may only (i) make one copy of the Software for backup or archival purposes, or (ii) transfer the Software to a single hard disk, provided that you keep the original for backup or archival purposes. You may not (i) rent or lease the Software, (ii) copy or reproduce the Software through a LAN or other network system or through any computer subscriber system or bulletin-board system, or (iii) modify, adapt, or create derivative works based on the Software. (b) You may not reverse engineer, decompile, or disassemble the Software. You may transfer the Software and user documentation on a permanent basis, provided that the transferee agrees to accept the terms and conditions of this Agreement and you retain no copies. If the Software is an update or has been updated, any transfer must include the most recent update and all prior versions.

4. Restrictions on Use of Individual Programs. You must follow the individual requirements and restrictions detailed for each individual program in the About the CD-ROM appendix of this Book or on the Software Media. These limitations are also contained in the individual license agreements recorded on the Software Media. These limitations may include a requirement that after using the program for a specified period of time, the user must pay a registration fee or discontinue use. By opening the Software packet(s), you will be agreeing to abide by the licenses and restrictions for these individual programs that are detailed in the About the CD-ROM appendix and/or on the Software Media. None of the material on this Software Media or listed in this Book may ever be redistributed, in original or modified form, for commercial purposes.

5. Limited Warranty. (a) WPI warrants that the Software and Software Media are free from defects in materials and workmanship under normal use for a period of sixty (60) days from the date of purchase of this Book. If WPI receives notification within the warranty period of defects in materials or workmanship, WPI will replace the defective Software Media. (b) WPI AND THE AUTHOR(S) OF THE BOOK DISCLAIM ALL OTHER WARRANTIES, EXPRESS OR IMPLIED, INCLUDING WITHOUT LIMITATION IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE, WITH RESPECT TO THE SOFTWARE, THE PROGRAMS, THE SOURCE CODE CONTAINED THEREIN, AND/OR THE TECHNIQUES DESCRIBED IN THIS BOOK. WPI DOES NOT WARRANT THAT THE FUNCTIONS CONTAINED IN THE SOFTWARE WILL MEET YOUR REQUIREMENTS OR THAT THE OPERATION OF THE SOFTWARE WILL BE ERROR FREE. (c) This limited warranty gives you specific legal rights, and you may have other rights that vary from jurisdiction to jurisdiction.

6. Remedies. (a) WPI’s entire liability and your exclusive remedy for defects in materials and workmanship shall be limited to replacement of the Software Media, which may be returned to WPI with a copy of your receipt at the following address: Software Media Fulfillment Department, Attn.: MCITP Developer Study Guide, Wiley Publishing, Inc., 10475 Crosspoint Blvd., Indianapolis, IN 46256, or call 1-800-762-2974. Please allow four to six weeks for delivery. This Limited Warranty is void if failure of the Software Media has resulted from accident, abuse, or misapplication. Any replacement Software Media will be warranted for the remainder of the original warranty period or thirty (30) days, whichever is longer. (b) In no event shall WPI or the author be liable for any damages whatsoever (including without limitation damages for loss of business profits, business interruption, loss of business information, or any other pecuniary loss) arising from the use of or inability to use the Book or the Software, even if WPI has been advised of the possibility of such damages. (c) Because some jurisdictions do not allow the exclusion or limitation of liability for consequential or incidental damages, the above limitation or exclusion may not apply to you.

7. U.S. Government Restricted Rights. Use, duplication, or disclosure of the Software for or on behalf of the United States of America, its agencies and/or instrumentalities “U.S. Government” is subject to restrictions as stated in paragraph (c)(1)(ii) of the Rights in Technical Data and Computer Software clause of DFARS 252.227-7013, or subparagraphs (c) (1) and (2) of the Commercial Computer Software - Restricted Rights clause at FAR 52.227-19, and in similar clauses in the NASA FAR supplement, as applicable.

8. General. This Agreement constitutes the entire understanding of the parties and revokes and supersedes all prior agreements, oral or written, between them and may not be modified or amended except in a writing signed by both parties hereto that specifically refers to this Agreement. This Agreement shall take precedence over any other documents that may be in conflict herewith. If any one or more provisions contained in this Agreement are held by any court or tribunal to be invalid, illegal, or otherwise unenforceable, each and every other provision shall remain in full force and effect.
MCITP Developer: Microsoft SQL Server 2005 Database Solutions Design Study Guide
Exam 70-441: PRO: Designing Database Solutions by Using Microsoft SQL Server 2005

OBJECTIVE (CHAPTER)

Designing Database Testing and Code Management Procedures

Assess which components should be unit tested. (Chapter 5)
  Design tests for query performance. (Chapter 5)
  Design tests for data consistency. (Chapter 5)
  Design tests for application security. (Chapter 5)
  Design tests for system resources utilization. (Chapter 5)
  Design tests to ensure code coverage. (Chapter 5)

Create a performance baseline and benchmarking strategy for a database. (Chapter 3)
  Establish performance objectives and capacity planning. (Chapter 3)
  Create a strategy for measuring performance changes. (Chapter 3)
  Create a plan for responding to performance changes. (Chapter 3)
  Create a plan for tracking benchmark statistics over time. (Chapter 3)

Create a plan for deploying a database. (Chapter 5)
  Select a deployment technique. (Chapter 5)
  Design scripts to deploy the database as part of application setup. (Chapter 5)
  Design database change scripts to apply application patches. (Chapter 5)
  Design scripts to upgrade database data and objects. (Chapter 5)

Control changes to source code. (Chapter 5)
  Set file permissions. (Chapter 5)
  Set and retrieve version information. (Chapter 5)
  Detect differences between versions. (Chapter 5)
  Encrypt source code. (Chapter 5)
  Mark groups of objects, assign version numbers to them, and devise a method to track changes. (Chapter 5)

Designing an Application Solution for SQL Server 2005

Select and design SQL Server services to support business needs. (Chapters 6, 9)
  Select the appropriate services to use to support business needs. (Chapter 6)
  Design a SQL Web services solution. (Chapter 6)
  Design a Notification Services solution to notify users. (Chapter 7)
  Design a Service Broker solution for asynchronous database applications. (Chapter 7)
  Design a Microsoft Distributed Transaction Coordinator (MS DTC) solution for distributed transactions. (Chapter 6)
  Design a Reporting Services solution. (Chapter 8)
  Design an Integration Services solution. (Chapter 9)
  Design a SQL Server core service solution. (Chapter 6)
  Design a SQL Server Agent solution. (Chapters 6, 9)
  Design a DatabaseMail solution. (Chapter 7)

Design a logical database. (Chapters 1, 2)
  Design a normalized database. (Chapter 1)
  Optimize the database design by denormalizing. (Chapter 1)
  Design data flow architecture. (Chapter 1)
  Optimize queries by creating indexes. (Chapter 2)
  Design table width. (Chapter 1)
  Design index-to-table-size ratio. (Chapter 2)

Design an application solution to support security. (Chapter 4)
  Design and implement application security. (Chapter 4)
  Design the database to enable auditing. (Chapter 4)
  Design objects to manage user access. (Chapter 4)
  Design data-level security that uses encryption. (Chapter 4)

Design an application solution that uses appropriate database technologies and techniques. (Chapters 6, 7, 8, 9)
  Design a solution for storage of XML data in the database. (Chapter 1)
  Choose appropriate languages. (Chapter 2)
  Design a solution for scalability. (Chapters 2, 6, 7, 8, 9)
  Design interoperability with external systems. (Chapters 6, 7, 8, 9)
  Develop aggregation strategies. (Chapter 2)

Design an application solution that supports reporting. (Chapter 8)
  Design a snapshot strategy. (Chapter 8)
  Design the schema. (Chapter 8)
  Design the data transformation. (Chapter 8)
  Design indexes for reporting. (Chapters 2, 8)
  Choose programmatic interfaces. (Chapter 8)
  Evaluate use of reporting services. (Chapter 8)
  Decide which data access method to use. (Chapter 8)
Design data distribution. (Chapter 6)
  Design a DatabaseMail solution for distributing data. (Chapter 7)
  Design SQL Server Agent alerts. (Chapters 3, 9)
  Specify a Web services solution for distributing data. (Chapter 6)
  Specify a Reporting Services solution for distributing data. (Chapter 8)
  Specify a Notification Services solution for distributing data. (Chapter 7)

Designing Database Objects

Design objects that define data. (Chapter 2)
  Design user-defined data types. (Chapter 2)
  Design tables that use advanced features. (Chapter 2)
  Design indexes. (Chapter 2)
  Specify indexed views to meet business requirements. (Chapter 2)

Design objects that retrieve data. (Chapter 2)
  Design views. (Chapter 2)
  Design user-defined functions. (Chapter 2)
  Design stored procedures. (Chapter 2)

Design objects that extend the functionality of a server. (Chapter 2)
  Design scalar user-defined functions to extend the functionality of the server. (Chapter 2)
  Design CLR user-defined aggregates. (Chapter 2)
  Design stored procedures to extend the functionality of the server. (Chapter 2)

Design objects that perform actions. (Chapter 2)
  Design DML triggers. (Chapter 2)
  Design DDL triggers. (Chapter 2)
  Design WMI triggers. (Chapter 2)
  Design Service Broker applications. (Chapter 7)
  Design stored procedures to perform actions. (Chapter 2)

Designing a Database

Design attributes. (Chapter 1)
  Decide whether to persist an attribute. (Chapter 1)
  Specify domain integrity by creating attribute constraints. (Chapter 1)
  Choose appropriate column data types and sizes. (Chapter 1)

Design entities. (Chapter 1)
  Define entities. (Chapter 1)
  Define entity integrity. (Chapter 1)
  Normalize tables to reduce data redundancy. (Chapter 1)
  Establish the appropriate level of denormalization. (Chapter 1)

Design entity relationships (ER). (Chapter 1)
  Specify ER for referential integrity. (Chapter 1)
  Specify foreign keys. (Chapter 1)
  Create programmable objects to maintain referential integrity. (Chapter 1)

Design database security. (Chapter 4)
  Define database access requirements. (Chapter 4)
  Specify database object security permissions. (Chapter 4)
  Define schemas to manage object ownership. (Chapter 4)
  Specify database objects that will be used to maintain security. (Chapter 4)
  Design an execution context strategy. (Chapter 4)

Developing Applications That Use SQL Server Support Services

Develop applications that use Reporting Services. (Chapter 8)
  Specify subscription models, testing reports, error handling, and server impact. (Chapter 8)
  Design reports. (Chapter 8)
  Specify data source configuration. (Chapter 8)
  Optimize reports. (Chapter 8)

Develop applications for Notification Services. (Chapter 7)
  Create Notification Services configuration and application files. (Chapter 7)
  Configure Notification Services instances. (Chapter 7)
  Define Notification Services events and event providers. (Chapter 7)
  Configure the Notification Services generator. (Chapter 7)
  Configure the Notification Services distributor. (Chapter 7)
  Test the Notification Services application. (Chapter 7)
  Create subscriptions. (Chapter 7)
  Optimize Notification Services. (Chapter 7)

Develop packages for Integration Services. (Chapter 9)
  Select an appropriate Integration Services technology or strategy. (Chapter 9)
  Create Integration Services packages. (Chapter 9)
  Test Integration Services packages. (Chapter 9)
Exam objectives are subject to change at any time without prior notice and at Microsoft’s sole discretion. Please visit Microsoft’s website (microsoft.com/learning) for the most current listing of exam objectives.