Mobile Displays Technology and Applications
Edited by
Achintya K. Bhowmik Intel Corporation, USA
Zili Li Motorola, Inc., USA
Philip J. Bos Liquid Crystal Institute, Kent State University, USA
Mobile Displays
Wiley-SID Series in Display Technology
Series Editor: Anthony C. Lowe
Consultant Editor: Michael A. Kriss

Display Systems: Design and Applications – Lindsay W. MacDonald and Anthony C. Lowe (Eds)
Electronic Display Measurement: Concepts, Techniques, and Instrumentation – Peter A. Keller
Projection Displays – Edward H. Stupp and Matthew S. Brennesholtz
Liquid Crystal Displays: Addressing Schemes and Electro-Optical Effects – Ernst Lueder
Reflective Liquid Crystal Displays – Shin-Tson Wu and Deng-Ke Yang
Colour Engineering: Achieving Device Independent Colour – Phil Green and Lindsay MacDonald (Eds)
Display Interfaces: Fundamentals and Standards – Robert L. Myers
Digital Image Display: Algorithms and Implementation – Gheorghe Berbecel
Flexible Flat Panel Displays – Gregory Crawford (Ed.)
Polarization Engineering for LCD Projection – Michael G. Robinson, Jianmin Chen, and Gary D. Sharp
Fundamentals of Liquid Crystal Devices – Deng-Ke Yang and Shin-Tson Wu
Introduction to Microdisplays – David Armitage, Ian Underwood, and Shin-Tson Wu
Mobile Displays: Technology and Applications – Achintya K. Bhowmik, Zili Li, and Philip J. Bos (Eds)
Copyright © 2008 John Wiley & Sons Ltd, The Atrium, Southern Gate, Chichester, West Sussex PO19 8SQ, England
Telephone (+44) 1243 779777
Email (for orders and customer service enquiries):
[email protected] Visit our homepage at www.wiley.com All Rights Reserved. No part of this publication may be reproduced, stored in a retrieval system or transmitted in any form or by any means, electronic, mechanical, photocopying, recording, scanning or otherwise, except under the terms of the Copyright, Designs and Patents Act 1988 or under the terms of a licence issued by the Copyright Licensing Agency Ltd, 90 Tottenham Court Road, London W1T 4LP, UK, without the permission in writing of the Publisher. Requests to the Publisher should be addressed to the Permissions Department, John Wiley & Sons Ltd, The Atrium, Southern Gate, Chichester, West Sussex PO19 8SQ, England, or emailed to
[email protected], or faxed to (+44) 1243 770620.

This publication is designed to provide accurate and authoritative information in regard to the subject matter covered. It is sold on the understanding that the Publisher is not engaged in rendering professional services. If professional advice or other expert assistance is required, the services of a competent professional should be sought.

Other Wiley Editorial Offices

John Wiley & Sons Inc., 111 River Street, Hoboken, NJ 07030, USA
Jossey-Bass, 989 Market Street, San Francisco, CA 94103-1741, USA
Wiley-VCH Verlag GmbH, Boschstr. 12, D-69469 Weinheim, Germany
John Wiley & Sons Australia Ltd, 42 McDougall Street, Milton, Queensland 4064, Australia
John Wiley & Sons (Asia) Pte Ltd, 2 Clementi Loop #02-01, Jin Xing Distripark, Singapore 129809
John Wiley & Sons Canada Ltd, 6045 Freemont Blvd, Mississauga, ONT, L5R 4J3

Wiley also publishes its books in a variety of electronic formats. Some content that appears in print may not be available in electronic books.

Library of Congress Cataloguing-in-Publication Data

Mobile displays : technology and applications / edited by Achintya K. Bhowmik, Zili Li, Philip Bos.
p. cm.
Includes index.
ISBN 978-0-470-72374-6 (cloth)
1. Liquid crystal displays. 2. Flat panel displays. 3. Smartphones–Equipment and supplies. 4. Pocket computers–Equipment and supplies. I. Bhowmik, Achintya K. II. Li, Zili. III. Bos, Philip J.
TK7872.L56M63 2008
621.3815'0422–dc22
2008003735

British Library Cataloguing-in-Publication Data

A catalogue record for this book is available from the British Library

ISBN 978-0-470-72374-6
Typeset in 9/11 pt Times by Thomson Digital, India. Printed and bound in Great Britain by Antony Rowe Ltd, Chippenham, Wiltshire.
Contents

About the Editors
List of Contributors
Series Editor's Foreword
Preface

1  Introduction to Mobile Displays (Zili Li, Achintya K. Bhowmik, and Philip J. Bos)
   1.1 Introduction; 1.2 Advances in Mobile Applications; 1.3 Mobile Environment and its Impact on the Display; 1.4 Current Mobile Display Technologies; 1.5 Emerging Mobile Display Technologies; 1.6 Summary; References

2  Human Factors Considerations: Seeing Information on a Mobile Display (Jim Larimer)
   2.1 Introduction; 2.2 The Perfect Image; 2.3 The JND Map and Metric; 2.4 Image Bandwidth or Considering a Display or the Eye as an Information Channel; 2.5 The Control Signal and Scaling for Rendering; 2.6 Jaggies; 2.7 Hyperacuity; 2.8 Bar Gratings and Spatial Frequency; 2.9 Three Measures of Contrast and Weber's Law; 2.10 Contrast Sensitivity Function (csf); 2.11 Veiling Ambient Light: Contrast Reduction from Glare; 2.12 Dither: Trade Offs between Spatial Scale and Intensity; 2.13 Three Display Screens with Text Imagery; 2.14 Color; 2.15 Making Color on Displays; 2.16 Luminance and Tone Scale; 2.17 Concluding Remarks; References

3  Advanced Mobile Display Technology (Kee-Han Uh and Seon-Hong Ahn)
   3.1 Introduction; 3.2 Advanced Mobile Display Technology; 3.3 Summary; References

4  In-Plane Switching (IPS) LCD Technology for Mobile Applications (InJae Chung and Hyungki Hong)
   4.1 Introduction; 4.2 LCD Modes; 4.3 Operational Principle of IPS Mode; 4.4 LC Equation of Motion under an Electric Field; 4.5 Schematic Diagram of IPS Pixel Structures; 4.6 Characteristics of IPS Mode; 4.7 Light Efficiency; 4.8 Viewing Angle Characteristics; 4.9 Color and Gray Level; 4.10 IPS Mode for Outdoor Applications; 4.11 Summary; References

5  Transflective Liquid Crystal Display Technologies (Xinyu Zhu, Zhibing Ge, and Shin-Tson Wu)
   5.1 Introduction; 5.2 Classification of Transflectors; 5.3 Classification of Transflective LCDs; 5.4 Discussion; 5.5 Conclusion; References

6  Wide Viewing Angle and High Brightness Liquid Crystal Displays Incorporating Birefringent Compensators and Energy-Efficient Backlight (Claire Gu, Pochi Yeh, Xingpeng Yang, and Guofan Jin)
   6.1 Introduction; 6.2 WVA (Wide-Viewing-Angle) LCDs with Birefringent Compensators; 6.3 High Brightness LCDs with Energy-Efficient Backlights; 6.4 Conclusions; Acknowledgements; References

7  Backlighting of Mobile Displays (Philip Watson and Gary T. Boyd)
   7.1 Introduction; 7.2 Edge-lit Backlight Components and Function; 7.3 Light Source; 7.4 Lightguide; 7.5 Back Reflector and Bulb Reflector; 7.6 The Optical Film Stack; 7.7 Prisms-Up Systems; 7.8 Prisms-Down Systems; 7.9 Reflective Polarizers and Polarization Recycling; 7.10 System Efficiencies in Highly Recycling Backlights; 7.11 Trends in Mobile Display Backlighting; References

8  LED Backlighting of LCDs in Mobile Appliances (Josef Hüttner, Gerhard Kuhn, and Matthias Winter)
   8.1 Introduction; 8.2 Basic Physics of LED Technology; 8.3 Basic Physics of Semiconductor Light Emission; 8.4 LED Efficiency and Light Extraction; 8.5 Packaging Technologies and White LED Light; 8.6 Requirements and Designs for LED-based Backlight Solutions; 8.7 LED-Backlighting Products; 8.8 LED Backlighting of Notebook LCDs; 8.9 Summary and Outlook; References

9  Advances in Mobile Display Driver Electronics (James E. Schuessler)
   9.1 Introduction; 9.2 Rapid Evolution; 9.3 Requirements; 9.4 Packaging Techniques; 9.5 Passive Matrix LCD; 9.6 Active Matrix LCD Operation; 9.7 Requirements for Driving Example Emerging Display Technologies; 9.8 Summary; References

10 Mobile Display Digital Interface (MDDI) (George A. Wiley, Brian Steele, Salman Saeed, and Glenn Raskin)
   10.1 Introduction; 10.2 MDDI Advantages; 10.3 Future Generations of MDDI; 10.4 MDDI Roadmap; 10.5 MDDI Technical Overview; 10.6 Conclusion; References

11 MIPI High-Speed Serial Interface Standard for Mobile Displays (Richard Lawrence)
   11.1 Introduction; 11.2 Scope of MIPI DSI Specification; 11.3 DSI Layers; 11.4 DSI Protocol; 11.5 Dual-Display Operation; 11.6 Conclusion; Notes and Acknowledgements; References

12 Image Reconstruction on Color Sub-pixelated Displays (Candice H. Brown Elliott)
   12.1 The Opportunity of Biomimetic Imaging Systems; 12.2 Sub-pixel Image Reconstruction; 12.3 Defining the Limits of Performance: Nyquist, MTF and Moiré Limits; 12.4 Sub-pixel Rendering Algorithm; 12.5 Area Resample Filter Generation; 12.6 RGBW Color Theory; 12.7 RGBW Sub-pixel Rendering; 12.8 RGBW Sub-pixel Rendering Algorithm; 12.9 Gamma Correction and Quantization Error Reduction; 12.10 Conclusion; References

13 Recent SOG (System-on-Glass) Development Based on LTPS Technology (Tohru Nishibe and Hiroki Nakamura)
   13.1 Introduction; 13.2 Added Value; 13.3 Requirements for TFT Characteristics and Design Rule; 13.4 Display with Fully-integrated Circuit; 13.5 'Input Display' with Scanning Function; 13.6 'Input Display' with Touch-panel Function; 13.7 Future Application of 'Input Display'; 13.8 Summary; References

14 Advances in AMOLED Technologies (Y.-M. Alan Tsai, James Chang, D.Z. Peng, Vincent Tseng, Alex Lin, L.J. Chen, and Poyen Lu)
   14.1 Introduction; 14.2 OLED Technology; 14.3 Backplane for AMOLED Display; 14.4 AMOLED Pixel Circuit Design; 14.5 Summary and Outlook; References

15 Electronic Paper Displays (Robert Zehner)
   15.1 Introduction: The Case for Electronic Paper; 15.2 What is Electronic Paper?; 15.3 Particle-based Electro-optic Materials for Electronic Paper; 15.4 Particle-based Electronic Paper Products; 15.5 Conclusion; References

16 Reflective Cholesteric Liquid Crystal Displays (Deng-Ke Yang)
   16.1 Introduction; 16.2 Basics of Ch Liquid Crystals; 16.3 Optics of Ch Liquid Crystals; 16.4 Bistable Reflective Ch Display; 16.5 Drive Schemes of Ch Displays; 16.6 Conclusion; References

17 BiNem Displays: From Principles to Applications (Jacques Angelé, Cécile Joubert, Ivan Dozov, Thierry Emeraud, Stéphane Joly, Philippe Martinot-Lagarde, Jean-Denis Laffitte, François Leblanc, Jesper Osterman, Terry Scheffer, and Daniel Stoenescu)
   17.1 Introduction; 17.2 Liquid Crystal Textures of BiNem Displays; 17.3 Optics of BiNem Displays; 17.4 Physical Mechanisms; 17.5 Specific BiNem Materials; 17.6 BiNem Manufacturing Process; 17.7 Passive Matrix Addressing; 17.8 Performance of BiNem Displays; 17.9 Other Developments; 17.10 Applications of BiNem Displays; 17.11 Conclusion; References

18 Electrowetting Displays for Mobile Multimedia Applications (Johan Feenstra)
   18.1 Introduction; 18.2 Electrowetting: The Technology; 18.3 Electrowetting as a Display Technology; 18.4 Product Platforms; 18.5 Summary; Acknowledgements; References

19 3D Displays for Portable Handheld Devices (Adrian Travis)
   19.1 Introduction; 19.2 The Perception and Pixelation of 3D Images; 19.3 Stereo Pair 3D; 19.4 Multiview Displays; 19.5 Holographic Displays; 19.6 Future Developments; References

20 Eyewear Displays (Paul Travers)
   20.1 Introduction; 20.2 The Optical Design and Considerations for the Near-Eye Display; 20.3 Summary; References

21 Mobile Projectors Using Scanned Beam Displays (Randy Sprague, Mark Champion, Margaret Brown, Dean Brown, Mark Freeman, and Maarten Niesten)
   21.1 The Need for a Bigger Display in a Smaller Package; 21.2 Principles of Operation; 21.3 Operation of a Bi-Magnetic Scanner; 21.4 Operation of an Electrode Comb Scanner; 21.5 Lasers – New Technology Enabling the Scanned Laser Projector; 21.6 Image Quality Considerations; 21.7 Summary; References

22 Plastic Backplane Technology for Mobile Displays (Cathy J. Curling and Seamus E. Burns)
   22.1 Introduction; 22.2 Flexible Display Applications and Specifications; 22.3 Active Matrix Backplane Requirements to Drive Bistable Media in E-Paper Applications; 22.4 Review of Flexible Active Matrix Backplane Processes; 22.5 The Plastic Logic Process for Fabricating Flexible Active Matrix Backplanes; 22.6 The Future of E-Paper Display Technologies for Mobile Applications; Acknowledgements; Note; References

Index
About the Editors

Achintya K. Bhowmik is a Senior Manager at Intel Corporation, where he leads advanced video and display technology research and development, focusing on power-performance optimized mobile computer architecture. He has been an Adjunct Professor in the Department of Information Display at Kyung Hee University in Seoul, Korea. His prior work includes the development of high-definition display systems based on an all-digital Liquid-Crystal-on-Silicon microdisplay technology, electro-optic modulation in organic molecular crystals, novel light-matter interactions, and integrated optical circuits for high-speed communication networks. He received his PhD and BTech from Auburn University, Alabama, USA, and the Indian Institute of Technology, Kanpur, India, respectively. He has authored more than 70 publications, including 16 issued patents. He is a Program Committee Member for SID and IEEE, and has been a session chair and invited speaker at a number of international conferences.

Zili Li is a Distinguished Member of the Technical Staff at Motorola Labs, where he leads research groups developing advanced mobile display technologies ranging from direct-view displays to heads-up displays and microprojector displays. Prior to Motorola, he was with Rockwell International, where he developed advanced avionic displays and display manufacturing processes. He received his PhD and BS, both in Physics, from Case Western Reserve University in the US and Shandong University in China, respectively. He has more than 35 refereed publications and 17 issued US patents. He has been an invited speaker, seminar lecturer, planner, and chair at major international conferences. He is a co-founder of the SID Mobile Display Conference (since 2006) and serves as vice-Chair of the SID MW Chapter. He is a member of SID, SPIE, and OSA, and also serves as a member of the Motorola Science Advisory Board Associates.

Philip J. Bos is a Professor of Chemical Physics and Associate Director of the Liquid Crystal Institute at Kent State University. Before joining Kent State in 1994, he was a principal scientist in the Display Research Laboratory of Tektronix Inc. He received his PhD in Physics from Kent State in 1978. He has authored more than 100 papers in the field of liquid crystals and liquid crystal displays, and has over 25 issued patents. His field of interest is applications of liquid crystals, with contributions to fast liquid crystal electro-optical effects including the invention of the pi-cell. He is active in the field of displays and was twice the general chair of the International Display Research Conference. He is a Fellow of the SID, and has received the Distinguished Scholar Award from Kent State University.
List of Contributors

Seon-Hong Ahn Mobile LCD Division Samsung Electronics Co., Ltd. San #24 Nongseo-dong, Giheung-gu Yongin-City, Gyeonggi-Do, 449-711 KOREA Jacques Angelé VP Technology Programs NEMOPTIC 1 rue Guynemer 78114 Magny-les-Hameaux FRANCE Achintya K. Bhowmik Intel Corporation 2200 Mission College Blvd. Santa Clara, CA 95054 USA Philip J. Bos Liquid Crystal Institute Kent State University Kent, OH 44242 USA Dean Brown Microvision, Inc. 6222 185th Ave NE Redmond, WA 98052 USA Margaret Brown Microvision, Inc. 6222 185th Ave NE
Redmond, WA 98052 USA Seamus E. Burns 34 Cambridge Science Park Milton Road Cambridge CB4 0FX UK Gary T. Boyd 3204 Canterbury Drive Woodbury, MN 55125 USA Mark Champion Microvision, Inc. 6222 185th Ave NE Redmond, WA 98052 USA James Chang Apple Panel Process and Optics Engineering 9F, No. 499, Sec. 3, Pei Hsin Rd. Chutung Hsinchu TAIWAN L.J. Chen JTouch Corporation Energy Project Division No.8, Zi Qiang 1st Road, Zhong Li Industrial Park, Taoyuan Hsien 320 TAIWAN
InJae Chung CTO, EVP LG Display, Seoul, Korea 18F West Tower, LG Twin Building 20 Yoido-dong, Yongdungpo-gu Seoul 150-721 KOREA Cathy J. Curling 10 High Ditch Road Fen Ditton Cambridge CB5 8TE UK
Claire Gu Department of Electrical Engineering MS: SOE2, University of California Santa Cruz, CA 95064 USA Hyungki Hong LG Display, R&D Center 533 Hogae-dong, Dongan-gu Anyang-shi, Gyongki-do 431-080 KOREA
Ivan Dozov NEMOPTIC 1 rue Guynemer 78114 Magny-les-Hameaux FRANCE
Josef Hüttner Business Unit LED Marketing Communication & Consumer OSRAM Opto Semiconductors GmbH Wernerwerkstrasse 2 93049 Regensburg GERMANY
Candice H. Brown Elliott Clairvoyante, Inc. 874 Gravenstein Hwy South Suite 14 Sebastopol, CA 95472 USA
Guofan Jin Department of Precision Instruments Tsinghua University, Beijing CHINA
Thierry Emeraud NEMOPTIC 1 rue Guynemer 78114 Magny-les-Hameaux FRANCE Johan Feenstra Liquavista De Witbogt 10 5652 AG Eindhoven THE NETHERLANDS
Stéphane Joly NEMOPTIC 1 rue Guynemer 78114 Magny-les-Hameaux FRANCE Cécile Joubert NEMOPTIC 1 rue Guynemer 78114 Magny-les-Hameaux FRANCE
Mark Freeman Microvision, Inc. 6222 185th Ave NE Redmond, WA 98052 USA
Gerhard Kuhn OSRAM China Lighting Ltd. (Shanghai Office) Room 2301–2302 Harbour Ring Plaza No. 18 Xi Zang (M.) Road Shanghai 200001 CHINA
Zhibing Ge School of Electrical Engineering and Computer Science University of Central Florida, Orlando Florida 32816 USA
Jim Larimer ImageMetrics LLC 569 Alto Ave. Half Moon Bay CA 94019 USA
Jean-Denis Laffitte NEMOPTIC 1 rue Guynemer 78114 Magny-les-Hameaux FRANCE
Redmond, WA 98052 USA
Richard Lawrence 276 River Road Hudson, MA 01749 USA
Tohru Nishibe Research & Development Center Toshiba Matsushita Display Technology Co., Ltd. 1-9-2 Hatara-cho, Fukaya-shi Saitama 366-0032 JAPAN
François Leblanc NEMOPTIC 1 rue Guynemer 78114 Magny-les-Hameaux FRANCE
Jesper Osterman NEMOPTIC 1 rue Guynemer 78114 Magny-les-Hameaux FRANCE
Zili Li Motorola, Inc. 1301 East Algonquin Road Schaumburg, IL 60196 USA
D.Z. Peng TPO Displays Corp. AMOLED Panel Development Department No. 12, Ke Jung Rd., ChuNan Miao-Li County TAIWAN
Alex Lin MStar Semiconductor, Inc. IC R&D Group 4F-1, No. 26, Tai-Yuan St., ChuPei Hsinchu Hsien 302 TAIWAN Poyen Lu Ultra-Pak Industries Co., Ltd. R&D Department No. 2, Gungye 10th Rd., Pingjen Industrial Park, Pingjen City, Taoyuan County 324 TAIWAN Philippe Martinot-Lagarde NEMOPTIC 1 rue Guynemer 78114 Magny-les-Hameaux FRANCE Hiroki Nakamura 1-9-2 Hatara-cho, Fukaya-shi Saitama 366-0032 JAPAN Maarten Niesten Microvision, Inc. 6222 185th Ave NE
Glenn Raskin Qualcomm Incorporated 5775 Morehouse Drive San Diego, CA 92121 USA Salman Saeed Qualcomm Incorporated 5775 Morehouse Drive San Diego, CA 92121 USA Terry Scheffer Motif Corp., Hilo, Hawaii USA James E. Schuessler National Semiconductor 488 Crown Point Circle Grass Valley, CA 95945 USA Randy Sprague Microvision, Inc. 6222 185th Ave NE Redmond, WA 98052 USA
Brian Steele Qualcomm Incorporated 6180 Spine Road Boulder, CO 80301 USA Daniel Stoenescu NEMOPTIC 1 rue Guynemer 78114 Magny-les-Hameaux FRANCE Paul Travers Icuiti Corporation 2166 Brighton Henrietta Tl Rd Rochester, NY 14623 USA Adrian Travis Clare College University of Cambridge Cambridge, CB3 9AJ UK Y.-M. Alan Tsai DuPont Displays Inc. 600 Ward Drive Santa Barbara, CA 93111 USA Vincent Tseng TPO Displays Corp. OLED Engineering Department No. 12, Ke Jung Rd., ChuNan, Miao-Li County, TAIWAN Keehan Uh Mobile LCD Division Samsung Electronics Co., Ltd. San #24 Nongseo-dong, Giheung-gu Yongin-City, Gyeonggi-Do, 449-711 KOREA Philip Watson 3M Center Bldg. 235-2S-62, Maplewood MN 55144 USA George A. Wiley Qualcomm Incorporated 5775 Morehouse Drive San Diego, CA 92121 USA
Matthias Winter Business Unit LED Marketing Communication & Consumer OSRAM Opto Semiconductors GmbH Wernerwerkstrasse 2 93049 Regensburg GERMANY Shin-Tson Wu CREOL & FPCE The College of Optics and Photonics University of Central Florida 4000 Central Florida Blvd. P.O. Box 162700, Orlando Florida 32816-2700 USA Deng-Ke Yang Liquid Crystal Institute Kent State University Kent, OH 44242 USA Xingpeng Yang Department of Precision Instruments Tsinghua University, Beijing CHINA Pochi Yeh Department of Electrical and Computer Engineering University of California Santa Barbara, CA 93106 USA Rob Zehner E Ink Corporation 733 Concord Ave Cambridge, MA 02138 USA Xinyu Zhu CREOL & FPCE The College of Optics and Photonics University of Central Florida 4000 Central Florida Blvd. P.O. Box 162700, Orlando Florida 32816-2700 USA
Series Editor's Foreword

A transformation is taking place. Hitherto, mobile displays were regarded as the poor cousins of larger, higher resolution, faster, wider color gamut monitor and TV displays, being smaller, with lower resolution, slower response times, narrower viewing angles and less saturated colors. Now, with the advent of high bandwidth mobile communications, innovative low energy ICs (often developed specifically for mobile applications) and new architecture and display developments, the world is changing. Mobile devices are increasingly becoming the drivers of new product opportunities. One might argue that this transformation is already well underway; mobile phones now combine telephony with still and video photography, touch, email and TV. In our increasingly mobile-centric world, customers in growing numbers now expect that all information – telephony, text, email, audio, radio, TV and video – should not just be accessible on mobile devices, but should be accessible at high audio and visual quality. That is the demand. Satisfying it will be far from easy, but such is the scale of research, development and product introduction that changes are now taking place and will accelerate.

A large measure of enthusiasm is required to push developments into new product opportunities. This can sometimes lead to an overstatement of opportunities and, of course, the manipulation of product specifications is as rife in this highly competitive market as in others, so it is important that a book such as this presents a rational discussion of the visual requirements and the limitations of the often small displays used in mobile devices in terms of pixel density, luminance or reflectance, dynamic range and gray level capabilities; Chapter 2 does this elegantly.

There follow chapters which describe how liquid crystal, viewing angle control and backlight technologies, developed primarily for non-mobile applications, are being adapted and optimized for the mobile market. Then the extent to which the mobile market is increasingly driving its own developments begins to become apparent, with chapters on low power electronics, mobile-specific serial interface architectures and innovative pixel designs which can reduce pixel count requirements whilst maintaining display legibility. Note that two serial interface architectures are described. There is competition here as well as between different display technologies. Indeed, one might speculate that serial interfaces developed for low power consumption, low connection cost and mechanical flexibility might begin to find application in the increasingly cost and power conscious non-mobile markets.

A chapter on the use of polysilicon backplane technology to produce an entire system on glass elaborates on the benefits of being able to add such functions as scanner and touch input capability whilst still being able to minimize the mass, volume and the number of interconnections in a mobile device. Then a number of new or less-established display technologies – OLED, electrophoretic, bistable cholesteric and nematic LCs, and electrowetting – are described. All but electrowetting have found application in fixed devices, but it is reasonable to assume that all these technologies will find major, probably their dominant, applications in the mobile sector because of low power requirements
combined with good visual performance. 3-D is discussed, albeit with a rather low expectation of finding wide application soon, but it is interesting to see that 3-D techniques could be developed for lightweight low power devices. Chapters on eyewear displays and scanned beam projectors follow, and the book concludes with a chapter on polymer backplane active matrix technology, which is now moving from development into production, bringing the prospect of rugged, thin and flexible displays a step closer.

This is a substantial book and it covers its broad subject matter in considerable depth and detail. Inevitably with such a large multi-author volume there is some overlap between chapters, but this has the advantage that each chapter is substantially self-contained, avoiding the need for the reader to keep referring from one chapter to another. Most chapters include a detailed theoretical and technical description of their subject before progressing to descriptions of products and applications, so the reader has considerable choice of the level at which to read the book.

As the editors remark in their Preface, this is the first comprehensive treatment of all aspects of mobile display technology to be published in a single volume. Achintya Bhowmik, Zili Li, and Phil Bos have done an excellent job in bringing this project to a successful conclusion. Written by acknowledged experts in their fields, and arriving just as new technological developments begin to find their way into mobile products, this book fills a much-needed gap in the literature.

Anthony Lowe
Series Editor
Braishfield, UK
Preface

The mobile display industry has been witnessing rapid growth in recent years, spurred by the tremendous proliferation of mobile communications and computing applications. This has been exemplified by the over 1 billion units of mobile phones and over 100 million units of mobile computers sold in 2007, besides other categories of mobile devices such as MP3 players, digital cameras, PDAs, GPS map readers, portable DVD players, electronic books, etc. This has fuelled a significant investment into the research and development of the display technologies needed to meet the requirements of this burgeoning product category, with key research labs across the display industry and academia producing many exciting technological advancements.

Although at first glance one may think of the mobile display as just a smaller and portable counterpart of large displays such as desktop monitors or flat panel televisions, the widely varying usage and viewing conditions, coupled with stringent power consumption and form factor constraints, impose a different set of challenges for the mobile display. Thus, the architects and designers of mobile devices are increasingly demanding unique attributes for mobile displays, thereby setting them apart from domestic tethered terminals and requiring specific developments in the technology. As a result, display technologies have been advancing rapidly to keep pace with the evolving mobile communications and computing devices. Besides the impressive advancements in the incumbent active matrix liquid crystal display (AMLCD) technologies, the mobile display arena has also been a hotbed for the exploration and development of new technologies, including the emerging active matrix organic light emitting diode (AMOLED) displays, eyewear and mobile projector displays, as well as flexible displays, among many others.

The objective of this book is to present a comprehensive coverage of the mobile display in a single volume, spanning from an in-depth analysis of the requirements that the displays must meet, through current devices, to emerging technologies. Some of the topics covered are: applications of mobile displays; human-factors considerations; advances in liquid crystal display technologies; backlighting and light manipulation techniques; mobile display driver electronics and interface technologies; as well as detailed analysis of a number of new display technologies that have been emerging in recent years with promises to bring unique capabilities to the landscape of mobile devices and applications.

While there are a number of excellent books on display technologies that cover the fundamentals and applications in many other areas, there is, surprisingly, no title dedicated to the important category of mobile displays. Thus, we believe this book will benefit the reader by providing a detailed update on the state-of-the-art developments in this burgeoning field. The chapters have been authored by well-known experts working in the field, selected from both industry and academia in order to present a balanced view of both the fundamentals and applications, to benefit both the general and the expert reader.
We are grateful to the authors who worked with us diligently to produce high-quality chapters with in-depth and broad coverage of the various topics related to all aspects of the mobile display, including both technology and applications. We would especially like to thank the series editor, Anthony Lowe, for his encouragement to pursue the idea of this book, and for his conscientious editing of the final manuscript. We thank the colleagues who assisted us in shaping the outline of this book, especially Thomas Holder of Intel Capital, who helped enlist authors to cover several of the emerging technologies. We also appreciate the support that the staff at John Wiley have provided us throughout this project. Finally, we would like to thank our wives, Shida Tan, Min Jiang, and Barbara Bos, for their support and patience during the course of preparation of this manuscript.

Achintya K. Bhowmik, Intel Corporation
Zili Li, Motorola, Inc.
Philip J. Bos, Kent State University
1 Introduction to Mobile Displays

Zili Li,¹ Achintya K. Bhowmik,² and Philip J. Bos³

¹ Motorola, Inc., Schaumburg, Illinois, USA
² Intel Corporation, Santa Clara, California, USA
³ Kent State University, Kent, Ohio, USA
1.1 Introduction

Mobile displays have been undergoing tremendous advances in recent years, in terms of both technology and applications. In the past, the amount of information that needed to be displayed on mobile devices was small enough that the display did not limit the mobile device applications. For example, an early cellular phone only had a 1-line monochrome display consisting of about 10² pixels. However, as the mobile applications became richer, the displays also needed to evolve in order to keep pace with increased requirements. Displays with more than 10⁵ pixels and up to 12 bits of color depth are common in today's mobile handsets. On the performance front, the mobile phone display is rapidly approaching the performance of a desktop monitor in terms of brightness, contrast, and color saturation, among other important display parameters. These changes are the result of rapid advances in technology for both the wireless network infrastructure and the mobile handsets, which are intertwined to provide a far-superior overall communication experience for the consumer.

Another mobile application area that has witnessed significant improvements in display characteristics is the mobile computer. The early notebook computers sported embedded displays of relatively modest attributes, whereas today the majority of laptops include a WXGA resolution (1280 × 800) screen with 18 bits-per-pixel color depth, and the state-of-the-art mobile computer boasts a high-definition screen (WUXGA,
1920 × 1200) with 24 bits-per-pixel color depth. This rapid enhancement of the display is needed to match the increased richness of the visual content offered by the modern mobile computer.

The term 'Mobile Display' has only recently appeared in the technical literature. A quick scan through the back issues of the major display technical publications, such as the SID Digest of Technical Papers, the International Display Workshop Proceedings, and the International Display Research Conference Proceedings, reveals that the earliest session devoted to 'Mobile Display' was at the 2002 International Display Workshop held in Hiroshima, Japan [1]. In the past, terms such as mobile display, portable display, or handheld display were often used interchangeably in the display field with little or no distinction. They were often invoked to refer simply to a low-power or small-size display. Although power and size are important attributes in considering a mobile display, as will be discussed later, they are only a subset of the display parameters of importance.

In this chapter, we will first take a look at the burgeoning applications that are driving the increasingly challenging requirements for the mobile display, in particular the ever-growing mobile phone and mobile computer applications; we will then analyze the mobile environment and its impact on the display, and review the advances in display technologies made to meet the stringent requirements imposed by the mobile environment and applications. We will also introduce and place in context the subsequent chapters of the book.
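To put the display formats quoted above on a common footing, the following back-of-the-envelope sketch (in Python, added here purely for illustration; it is not part of the original text) computes pixel counts, frame-buffer sizes, and the raw data rate needed to refresh each format. The QVGA and VGA rows and the 60 Hz refresh rate are assumptions; the WXGA and WUXGA formats and their color depths are those quoted in the text.

    # Illustrative arithmetic only: pixel counts, frame-buffer sizes, and raw
    # (uncompressed) refresh bandwidth for a few display formats.
    formats = {
        "QVGA handset, 12 bpp (assumed)":  (320, 240, 12),
        "VGA handset, 18 bpp (assumed)":   (640, 480, 18),
        "WXGA notebook, 18 bpp":           (1280, 800, 18),
        "WUXGA notebook, 24 bpp":          (1920, 1200, 24),
    }

    REFRESH_HZ = 60  # assumed refresh rate for the bandwidth estimate

    for name, (w, h, bpp) in formats.items():
        pixels = w * h
        frame_bytes = pixels * bpp / 8
        raw_mbps = pixels * bpp * REFRESH_HZ / 1e6
        print(f"{name}: {pixels / 1e6:.2f} Mpixels, "
              f"{frame_bytes / 2**20:.2f} MiB per frame, "
              f"{raw_mbps:.0f} Mbit/s uncompressed at {REFRESH_HZ} Hz")

Even at handheld resolutions, the uncompressed refresh bandwidth runs from tens to hundreds of megabits per second, which is part of the motivation for the display driver electronics and high-speed serial display interfaces (MDDI, MIPI DSI) covered in later chapters.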
1.2 Advances in Mobile Applications

Mobile communication and computing have seen drastic changes over the past decades. As average consumers, we have all witnessed and benefited from these changes. Let's start with a look at recent trends in mobile communication: smaller and thinner mobile phones with slick designs, much longer battery life for prolonged talk time and standby time between charges, higher-resolution color displays replacing the monochrome type, much better voice quality, and the wild popularity of text messages. All these advances are happening while the cost per minute is only a small fraction of what it was a decade ago. In 2007 the annual mobile phone production reached a new peak of more than 1 billion units and is projected to maintain double-digit annual growth worldwide for the next several years, as shown in Figure 1.1, below, provided by DisplaySearch, a leading display market research firm. This is an astronomical number that few other electronic devices can ever compete with.

Besides the mobile phone, which enjoys the largest share of the mobile communications market, other mobile devices have also seen significant growth. Notable among them are mobile personal computers, MP3 players, digital cameras, PDAs, GPS map readers, portable DVD players, electronic books, etc.
Figure 1.1 Mobile display growth (courtesy: DisplaySearch).
Figure 1.2 Annual notebook computer shipment data and forecast, in millions of units, for 2002-2011 (courtesy: DisplaySearch).
The mobile computer segment, in particular, has been witnessing strong growth in recent years. The shipment of notebook computers exceeded the 100-million-unit mark in 2007, as shown in Figure 1.2, above, also provided by DisplaySearch. This is approximately five times the volume shipped in 2000. The mobile computer volume is forecast to continue to grow at about five times the rate of its desktop counterpart in the coming years. The notebook market share, as a percentage of the overall personal computer (PC) market, is steadily increasing in all regions of the world, already accounting for more than half of the overall PC volume in mature markets. Sustained growth in notebook computers in recent years has been fueled by widespread consumer interest in high-bandwidth access to the full Internet, anytime and anywhere, combined with the computational power to drive PC applications at levels of processor capability that were common only in desktop systems a few years ago. Another new category of mobile computer is the emerging Ultra-Mobile PC (UMPC), or its consumer-focused variant termed the Mobile Internet Device (MID), promising to pack the processing and Internet capabilities of a full computer into attractive ultra-portable form factors.

One of the key technological enablers for the rapid expansion in mobile computing has been the wide availability of color thin-film transistor active matrix liquid crystal displays (TFT AMLCDs) [2], along with remarkable innovations in low-power processors, much improved electronic packaging technologies, and the arrival of the lithium-ion battery, among many other electronics advancements. Until recently, mobile computing was primarily for use in the office, home, or similar environments that were not initially targeted for wireless communication applications. The technical landscape for mobile communication, on the other hand, is more complex, because both the wireless communication network and the wireless communication device need to work together to deliver the rich experience that consumers demand. In fact, the introduction of the Intel Centrino mobile platforms in 2003, which integrated broad wireless network connectivity and interoperability into notebook PCs, spurred the subsequent rapid growth in the mobile computer market. In the next section we will explain how the interplay at both the network and device levels laid the technological foundation for the rapid expansion in mobile communication applications, and how these new applications drive the technological advancement of displays used in mobile devices.

Taking the evolution of the cellular network as an example, Figure 1.3, below, depicts the application expansion as the wireless network bandwidth increases. The initial wireless network was based on analog technology and was only capable of providing voice calls due to its limited bandwidth. As we moved to the 2nd generation (2G) network, not only was a large amount of channel capacity added to enable more users, but service quality in voice communication also improved significantly. However, non-voice services such as text messaging were not enabled until the cellular network was further upgraded to a packet data service network, or 2.5 generation (2.5G) network, primarily due to the low bandwidth capability of the earlier networks.
Figure 1.3 Applications enabled by increased network bandwidth.
Nowadays the landscape of wireless communication is much more complex than just a few years ago, when the cellular network held a near monopoly. Cellular networks are still dominant, and hundreds of billions of dollars have been spent to move from 2G to 3rd generation (3G) networks; in some parts of the world the 3G network has already entered service. A 3G network enables 10-100 times the bandwidth of a 2G network, ranging from a few megabits per second (Mbps) up to about 50 Mbps. In research labs around the world, next-generation cellular networks beyond 3G have started to move away from the drawing board into limited field testing. On the other hand, among the developments using non-cellular networks in unlicensed spectrum bands, WiMAX has a large coverage distance of up to multiple miles and could provide more bandwidth than a 3G network. At short range, the now-ubiquitous Wi-Fi systems also provide high bandwidth, in the range of tens of Mbps. These increased bandwidths are able to support data and image transmission in addition to voice transmission. In parallel, improvements in signal processing techniques, such as data compression and decompression, have allowed further optimization of the available bandwidth to enhance transmission quality.

In parallel with the network bandwidth explosion, in the past decade we have also witnessed vast enhancements in mobile terminal devices. In this regard, there has been a tremendous similarity between mobile communication and mobile computing at the device level, and much of the technological advancement discussed here is also applicable to mobile computing devices. The early analog mobile phone had very limited processing power and memory, capable of handling only voice communication. As we move to 2G and 3G handsets, there have been several important technological developments at the component and subsystem level. First of all, rapid progress has been made in miniaturization, which makes much-needed compact packaging readily available to designers. In addition to smaller components, interconnections among components have also become much denser through a variety of high-density interconnection technologies. The optimized use of miniaturized active and passive components through design innovation has further driven down the overall package volume. Reduction of the power consumption of these devices has been another major advancement, reflecting the combined results of drastic improvements in baseband and other processors, power amplifiers, and more efficient power-management chipsets. Memory devices have also seen major power reductions over the years, in addition to their own size reduction. All of this has made it possible to integrate a much more powerful processor, high-capacity memory, and other active and passive components into the compact package of a handheld device. Figure 1.4, below, depicts this general trend across different generations of mobile devices. At the time of writing, a 3G handset can deploy as much as 256 MB of memory, with processing power equivalent to that of an early mobile computer processor. Some handsets even have a hard drive in addition to flash memory. With all these advances, it may not be an exaggeration to say that a modern mobile communication handset can be regarded as a PC in your palm.
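To get a rough sense of what these bandwidth figures mean for the image-centric applications discussed below, the following sketch (illustrative Python, not from the original text; both the photo size and the effective throughputs are assumptions) estimates the time to transfer a compressed camera photo over the classes of network mentioned above.

    # Back-of-the-envelope sketch with assumed, illustrative throughputs.
    photo_bytes = 0.5 * 2**20          # ~0.5 MB compressed photo (assumption)

    effective_rates_mbps = {           # assumed sustained throughputs
        "2.5G packet data": 0.05,      # tens of kbit/s
        "3G":               2.0,       # a few Mbit/s, per the range in the text
        "Wi-Fi":            20.0,      # tens of Mbit/s, per the text
    }

    for network, rate_mbps in effective_rates_mbps.items():
        seconds = photo_bytes * 8 / (rate_mbps * 1e6)
        print(f"{network}: about {seconds:.1f} s to transfer the photo")

Under these assumptions a single photo that takes over a minute on a 2.5G link transfers in a couple of seconds on 3G and almost instantly on Wi-Fi, which is why richer visual content only became practical as the networks evolved.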
Figure 1.4 Processing power and memory trends for mobile communication devices.
These combined advances in network and handset provide the technological foundation for many new market applications far beyond simple voice communication. Nowadays, although voice communication is still the bread and butter for service providers, non-voice applications have become increasingly important to retain existing customers and entice new ones. These applications take the form of text messaging, multimedia messaging with images, mobile TV, and mobile on-line gaming, to name but a few. Another development that accelerated the use of images for communication was the introduction of cameras into handsets, which began around 2000 and created a need for users to be able to send and receive pictures. All these applications demand a much more advanced display on a mobile device than a traditional voice communication device requires. Take the camera phone as an example: there is a huge disparity, more than an order of magnitude in fact, between the resolutions of the camera and the display on a mobile phone. The first camera phones already had VGA camera resolution, and a few-megapixel still-image camera on a phone is now the norm. On the display side, VGA-resolution displays, with about 0.3 megapixels, have only recently emerged from trade-show floors and prototypes. As a result, a high-quality photo taken with a mobile phone cannot be fully rendered on the phone's own display. These new image- and data-centric applications targeted at mobile communication create the need for a new category of display: the 'mobile display'.

It is important to realize that a new trend has been forming over the past few years: the convergence of mobile communication and mobile computing. In other words, the boundary between mobile communication and mobile computing devices has been increasingly blurring. This trend is accelerating due to the arrival of the so-called non-cellular wireless networks, most notably the now-ubiquitous Wi-Fi and the emerging WiMAX. We have seen the emergence of power- and performance-optimized mobile computer platforms with high-bandwidth wireless connectivity, ushering in the era of 'personal Internet on the go'. The rapid evolution in the computational and communications capabilities of modern mobile computers has also been accompanied by increased richness of media processing and playback quality, as well as of the visual characteristics of the display embedded within the system.
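To make the camera/display mismatch concrete, here is a small illustrative calculation (Python; the specific camera and screen formats are assumptions chosen to be consistent with the figures quoted above, not data from the text).

    # Illustrative only: how much a multi-megapixel camera photo must be
    # downscaled to fit the handset's own screen.
    camera_w, camera_h = 2048, 1536            # ~3-megapixel still camera (assumed)
    display_formats = {"QVGA": (320, 240), "VGA": (640, 480)}

    for name, (disp_w, disp_h) in display_formats.items():
        pixel_ratio = (camera_w * camera_h) / (disp_w * disp_h)
        linear_ratio = min(camera_w / disp_w, camera_h / disp_h)
        print(f"{name} display: {pixel_ratio:.0f}x fewer pixels than the photo; "
              f"each screen pixel stands in for roughly "
              f"{linear_ratio:.1f} x {linear_ratio:.1f} camera pixels")

Under these assumptions, even a VGA display shows only about a tenth of the pixels the camera captured, so the photo must be heavily decimated for on-device viewing.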
1.3 Mobile Environment and its Impact on the Display

What we define as a mobile display here is a display system that adequately performs in a mobile communication and computing environment. So, to better understand what constitutes a mobile display, we need first to examine the mobile environment and its impact on display performance and requirements, and how these environments differ as we move from a low-mobility environment to a high-mobility environment.

Table 1.1 Summary of key environmental differences of display importance.

                        Personal Mobile     Automotive      Office/Home
  Illumination Range    huge                huge            minimum
  Device Dimension      small               middle          large
  Power                 limited             better          unlimited
  Relative Motion       non-stationary      in-between      stationary
  Temperature Range     wide                wider           narrow
  Viewer                1-2                 a few           multiple
  User Interface        difficult           difficult       easy

Table 1.1, above, provides a summary of the key mobile environmental considerations and compares the differences among the three main environmental conditions: personal mobile, automotive, and office/home. These considerations are primarily selected for their relevance in the context of visual communication. Personal mobile includes displays used in the mobile phone, laptop computer, or similar handheld devices; this category has the highest-mobility environment. At the other end, office/home presents an essentially static situation. As one moves across an environment boundary, or as mobility increases or decreases, a large set of parameters unique to that environment also changes accordingly. With increased mobility, the environment generally imposes more restrictive and demanding requirements on a display. Among those listed in Table 1.1, we believe illumination range, power, handheld form factor, and combinations of these pose the greatest challenges for a mobile display. An ideal mobile display should render vivid color and be fully readable from pitch dark to direct sunlight, consume a tiny fraction of the energy from a portable power source, and provide a large image size and resolution similar to an office display, while still maintaining a handheld form factor. Human factors play a major role in how we see information on a display, since image quality is highly dependent on the viewing environment and context, more so for a display on a mobile device due to the widely varying viewing conditions. Related to this, Larimer details the major issues that impact the visibility of images reconstructed on a mobile display in Chapter 2. In the following sections, we will discuss how the environment impacts display requirements and the related visual performance issues. As by far the largest market segment for mobile displays belongs to personal mobile communication terminals, we will mostly focus our discussion on the personal mobile communication display.
1.3.1 Illumination Considerations

The first major environmental difference between the mobile and office/home environments is in ambient illumination. This difference has multiple aspects, including illumination level, illumination source, viewing angle with respect to the illumination, and surroundings. Figure 1.5, below, illustrates surface luminance, measured using a white reflectance standard, at various locations where a typical consumer uses or desires to use a mobile device such as a mobile phone or a laptop computer [3]. Here we have used luminance, instead of the conventional illuminance, as the measure of display brightness, since for a mobile display, especially a reflective display, the perceived image is strongly influenced by comparison with nearby surfaces: with the same illumination source at a fixed intensity, the display may be perceived differently depending on the surface against which it is viewed. The mobile illumination range is extremely large, spanning more than six orders of magnitude from a dark night to bright outdoor conditions. To further illustrate this condition and its impact on the display, we also plot, at each location, the calculated nominal value of the display surface reflection. These are indicated by the short columns in Figure 1.5, below.
Figure 1.5 Surface luminance measured using a white reflectance standard, and reflected luminance from the display, at typical locations and conditions (offices, hotel room, cafeteria, shadow, cloudy day, partly cloudy, sunny afternoon; luminance plotted on a log scale from 1 to 100,000 nits).
Indoors, not only is the level of illumination much lower than outdoors, but there is also little variation from location to location, in sharp contrast with the outdoor case. Finally, unlike an electronic book display, a mobile communication display must be readable in low or zero ambient light. To that end, some form of auxiliary lighting is needed if the display in use is purely reflective. This simple requirement can actually prevent certain display technologies, which otherwise could be very promising, from entering the mobile display market. For example, displays that target purely the electronic book application need to find feasible lighting solutions before they can become commercially viable for mobile communication and computing applications. This large illumination range poses a huge challenge for any electronic display. A mobile display, with only a limited power source available to increase its brightness, therefore needs special attention and a new set of solutions; in many cases, this may determine the viability of a display technology for the mobile application space. In addition to the illuminance level or intensity, we also need to pay attention to other differences in illumination, again particularly between indoor and outdoor lighting. These differences include the illumination pattern or profile, the spectral distribution or color temperature of the light sources, and the angular relationship between the display and the light source. Figure 1.6 shows the measured surface luminance under a typical indoor lighting condition and compares it with a typical point-source illumination condition; the latter is very similar to outdoor conditions under bright sun. The data show that outdoor conditions on a sunny day closely resemble point-source illumination, while typical indoor lighting is much more diffuse in nature. Substantial illumination remains even when a display surface is turned 90 degrees away from the orientation of maximum illumination, which is along the display normal. In other words, a display in indoor conditions receives a larger portion of the total ambient illumination at large angles than it does under outdoor illumination. This arises from the multiple light sources and confined space of the indoor environment. Moreover, these multiple sources may differ in their spectral distribution, irradiation profile, and other characteristics. These considerations are particularly important when designing a reflective type display which, as will be discussed later, depends on effective
Figure 1.6 Relative luminance as a function of the viewing angle (0–90 degrees) in outdoor and indoor conditions.
collection and redistribution of ambient light. The source color temperature is another consideration, especially for color rendering and color fidelity.
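To make the effect of ambient reflection concrete, the following short Python sketch (ours, not from the book; the display luminance, black level, and 4.5% diffuse reflectance are assumed purely for illustration) estimates how the contrast ratio of an emissive display collapses as the ambient illuminance rises from office levels to direct sunlight.

```python
# Illustrative sketch (not from the book): how ambient reflection erodes
# effective contrast.  All numbers below are assumptions for illustration.

import math

def reflected_luminance(illuminance_lux, reflectance):
    """Luminance (cd/m^2) reflected by a diffuse surface of given reflectance."""
    return reflectance * illuminance_lux / math.pi

def ambient_contrast(white_nits, black_nits, illuminance_lux, reflectance):
    """Contrast ratio once reflected ambient light adds to both states."""
    l_amb = reflected_luminance(illuminance_lux, reflectance)
    return (white_nits + l_amb) / (black_nits + l_amb)

# Assumed emissive display: 300 nits white, 0.5 nits black, 4.5% diffuse reflectance.
for lux in (100, 1_000, 10_000, 100_000):       # office ... direct sunlight
    cr = ambient_contrast(300.0, 0.5, lux, 0.045)
    print(f"{lux:>7} lux  ->  contrast ratio ~ {cr:5.1f}:1")
```

Under these assumptions the contrast ratio falls from well over 100:1 in an office to barely above 1:1 in direct sunlight, which is why the illumination range dominates mobile display design.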
1.3.2 System Power Considerations

Mobile devices are all battery operated, and this imposes a paramount limitation on the power consumption of the display in use. Fundamentally different from non-communication types of handheld devices, such as a pocket calculator or a Personal Digital Assistant (PDA), a communication device calls for a special power budget for the display. The reason lies in its intrinsic architectural complexity. Take the mobile communication device as an example; Figure 1.7 shows a simplified mobile communication terminal architecture diagram. Multiple major high-power components share the energy provided by a battery. Among these, the baseband processor, the RF transceiver and associated components, and the power amplifier for the RF transceiver are particularly power hungry. When considering the power issue in this context, we ought to keep the total energy in mind: it is the total energy capacity that limits the power/energy budget for individual components such as the display and the power amplifier. The Li+ (lithium-ion) battery has become the only energy source for today's mobile devices. A battery is rated by its total energy capacity in mAh; a typical cell phone battery has a capacity in the range of 500–1500 mAh. One of the major attributes of a Li+ battery is its suitability for high peak-power applications. Early displays on mobile devices were all monochrome and reflective with limited resolution and, as a result, their energy impact on the mobile device was small.
Figure 1.7 Simplified mobile communication terminal architecture diagram (power management, battery, antenna, audio, display). PA: Power Amplifier, RF: Radio Frequency Components, BB: Baseband Processor.
As we move from the early analog devices to today's digital devices in the form of 2G and 3G mobile terminals, we see a reversal in the energy considerations for the display power budget. This reversal in power demand is the result of several factors. On the source side, the commercial release of Li+ batteries provided a large jump in energy density and peak-power handling capability compared with earlier battery technologies. The past decade has seen a steady increase in the energy density of Li+ batteries; however, this improvement is limited to a few per cent annually, and without a breakthrough in new energy source materials or chemistry we should not expect a major leap. Moreover, as explained earlier, rapid miniaturization has drastically reduced the dimensions of all major components to enable much thinner designs and sleek form factors for mobile devices. This puts pressure on device designers to use a smaller battery in order to maintain the required form factor. These trends have limited the available energy on the source side. Meanwhile, on the demand side, the opposite situation applies. Increased wireless bandwidth enables more applications, and mobile communication devices become more complex, with more processing power and more memory needed to handle these added applications. Many of the new components are power hungry and are not depicted in Figure 1.7. The wide adoption of cameras in cell phones is just one of many examples of added components; along with the camera comes the need for a large amount of digital storage and processing, both of which require a power budget out of the total energy capacity. In addition, other new functionalities – for example, short-range communication links such as Bluetooth, as well as GPS and others – require multiple radio frequency components. All of these compete for the fixed amount of available energy. With these new applications, the monochrome low-resolution display is replaced by a high-resolution color display that needs more power to operate. Even with a reflective liquid crystal display, which is by far the most power efficient, at least an order of magnitude increase in power consumption has been required compared with early displays. To further compound this energy shortage problem, the average use duration of mobile devices has been steadily increasing as well, driven by these new applications and functionalities: people use their phone for calling, playing games, sending and receiving messages, playing music as an MP3 player, and more. The combined effect of higher power consumption from added hardware and extended usage has caused the rate of power consumption to outpace the limited improvement in battery energy density. As a result, device designers are increasingly forced to impose a much more restricted power budget on the display subsystem. Figure 1.8 shows a generic energy model depicting display energy consumption as a function of display resolution for different display technologies. In the model, we used representative data for the battery and for display power consumption. A battery capacity of 10,000 joules is assumed. Power consumption data for the different types of displays are based on popular LCDs and taken from published data in Society for Information Display publications [4].
We assume eight hours of on-time for the display, which is also the time between two successive chargings of the device's battery. With this information, Figure 1.8 plots the energy consumption by the display as a percentage of the total battery capacity versus display resolution [3].
Figure 1.8 Ratio of the energy consumption of the display alone over the total battery capacity as a function of the display resolution (pixel count from 100 to 1,000,000; QVGA ≈ 77K pixels), for STN transflective, p-Si TFT transflective, and a-Si TFT transmissive displays. Assumptions: battery capacity = 10,000 J, display on-time = 8 hours.
This model yields many insights into the power management considerations involved in choosing a display technology for a mobile device. At very low resolution, as was the case for early displays, power consumption was not a cause for concern regardless of the display technology in use. The very first mobile phone had a one-line electroluminescent display, which was known for its high power consumption due to its emissive nature. As we move to higher resolution, say 170 × 120 pixels, and to color, which in most liquid crystal displays triples the sub-pixel count relative to a monochrome counterpart, a transmissive type display could easily exceed 10% of the total energy budget. As will be discussed in Section 1.3.3, Display Resolution Considerations, display resolution is increasing to QVGA (320 × 240) and beyond. At such resolutions, to maintain a manageable power budget for the display, we are left with only the reflective or transflective options if a liquid crystal display is used. The high power consumption of a transmissive display is primarily due to the backlight. With rapid advancement in solid state lighting, we expect transmissive display power to continue to decline, so that with certain phone form factor designs or clever case design, only a moderate power drain from a transmissive LCD is expected in the future. In mobile computers, the display subsystem poses a significant power consumption issue as well. Figure 1.9 shows the breakdown of average power consumption of a typical mainstream notebook computer by major component. A mainstream notebook computer includes a 14.1–15.4 inch diagonal transmissive LCD panel of WXGA resolution, which consumes as much as 30–50% of the overall system average power depending on the brightness setting, thereby significantly limiting the battery life. The largest power consuming component within the notebook LCD panel is the backlight. For example, the backlight of a typical notebook LCD panel accounts for as much as 80% of the total panel power consumption at 200 cd/m2 brightness, and 67% at 60 cd/m2 brightness, while the rest of the display power consumption is due to the panel electronics. This has prompted the development of system-level display power and performance optimization techniques integrated into notebook graphics chipsets, such as dynamic backlight modulation with image enhancement, and dynamic content- and power-policy-dependent display refresh rate management, in addition to the more traditional approaches that achieve display module-level power reduction by incorporating higher-efficiency optical components [5].
Figure 1.9 Average power consumption breakdown for a typical lightweight notebook computer with a 14.1-inch WXGA display (average power shown as a percentage, 0–35%, for each major component). CPU: Central Processing Unit, GMCH: Graphics and Memory Control Hub, ICH: Input and Output Control Hub, Comm: Communication Components, HDD: Hard Disk Drive, ODD: Optical Disk Drive, VR: Voltage Regulators.
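The energy model behind Figure 1.8 can be reproduced in a few lines. The sketch below is ours, not the book's: the per-pixel power coefficients are round numbers assumed only to reproduce the general shape of the published curves, and the comment comparing the 10,000 J assumption with a 900 mAh, 3.7 V cell is simply the standard mAh-to-joule conversion.

```python
# Hedged sketch of the Figure 1.8-style energy model.  The per-pixel power
# coefficients are assumptions chosen only to reproduce the general trend of
# the published curves; they are not measured data.

ON_TIME_S = 8 * 3600          # eight hours of display on-time between charges
BATTERY_J = 10_000            # book's assumption; cf. 900 mAh x 3.7 V ~ 12,000 J

# Assumed display power model: fixed overhead + per-pixel term (watts).
display_models = {
    "reflective/transflective LCD": (0.002, 3e-8),
    "transmissive LCD (backlit)":   (0.020, 4e-7),
}

for name, (overhead_w, per_pixel_w) in display_models.items():
    for pixels in (4_000, 38_400, 76_800):    # early display, 1/8 VGA, QVGA
        energy_j = (overhead_w + per_pixel_w * pixels) * ON_TIME_S
        share = 100.0 * energy_j / BATTERY_J
        print(f"{name:30s} {pixels:>7,d} px -> {share:5.1f}% of battery capacity")
```

With these assumed coefficients the transmissive display crosses roughly 10% of the battery capacity near 1/8 VGA, while the reflective/transflective display stays around 1%, consistent with the trends described in the text.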
1.3.3 Display Resolution Considerations

The data shown in Figure 1.8 raise an important display resolution issue for mobile devices, as resolution significantly impacts the display power consumption. In this section we discuss another aspect of the resolution of mobile displays, namely, the resolution limit imposed by the intrinsically small dimensions of a handheld display. It is well known that the human visual system has finite angular resolution; typically, one arc minute is assumed. The accommodation limit of our visual system is about 25 cm, while a comfortable reading distance is around 40–50 cm. These conditions impose a useful pixel density limit on an electronic display. For a small mobile display, the situation is far different from that of a desktop monitor in terms of reaching the resolution limit of the human eye. Let us illustrate this with the example of a display with an area of 3.5 square inches, which by mobile display standards is fairly large. One can easily calculate that the eye resolution limit corresponds to about 0.16 megapixels at a 40 cm viewing distance, or about 0.40 megapixels at the accommodation limit. Since most displays on mobile handsets are smaller than this, these pixel counts are already at the higher end of what can be resolved. Moreover, these are the pixel counts we can effectively resolve under ideal lighting conditions; as discussed earlier, the mobile lighting environment is far from ideal, and we expect an average user to reach the eye resolution limit at lower pixel counts than those stated above. Looking now at the display side, at half-VGA resolution a display already has 0.20 megapixels. This indicates that we are already pushing eye resolution to its limits with this large a display on a mobile device. One can do the same calculation for a desktop monitor to show that the reverse is true: a 16-inch diagonal display requires more than 5 megapixels to reach the eye resolution limit, while an ultra high resolution display (e.g. WUXGA at 1920 × 1200) offers only slightly more than 2 megapixels. Thus, for a desktop monitor, there is a long way to go before reaching the eye resolution limit. The single factor that makes this difference is the large disparity in size between a mobile display and a desktop monitor. A further increase in the pixel resolution of a mobile display will therefore bring no real visual benefit while placing a substantial additional strain on the power budget. The technological advances in mobile display technology during the past decade have been primarily focused on addressing the first two constraints imposed by the high-mobility environment: the huge illumination range and the limited power availability combined with ever increasing power demand. Rapid progress on both fronts has been achieved, and display performance on mobile devices now meets end-user expectations for most applications, with some exceptions. With the accomplishments on these fronts, and with a flood of new applications that call for high information content, i.e. more resolution, a shift of research effort towards addressing the resolution considerations has been observed in the last few years.
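The pixel-count limits quoted above follow from simple geometry. The short sketch below is ours, for illustration; it assumes a one arc-minute acuity limit and treats the 3.5 square inch figure as the display area.

```python
# Illustrative back-of-the-envelope check of the eye-resolution limits quoted
# in the text.  Assumes 1 arc-minute visual acuity; numbers are approximate.

import math

ARC_MINUTE_RAD = math.radians(1.0 / 60.0)       # ~2.909e-4 rad
MM2_PER_SQ_INCH = 25.4 ** 2                     # 645.16 mm^2

def resolvable_pixels(display_area_sq_in, viewing_distance_cm):
    """Pixel count at which pixel pitch matches the 1 arc-minute limit."""
    pitch_mm = viewing_distance_cm * 10.0 * ARC_MINUTE_RAD   # smallest useful pitch
    pixels_per_mm2 = 1.0 / pitch_mm ** 2
    return display_area_sq_in * MM2_PER_SQ_INCH * pixels_per_mm2

# 3.5 sq. in. handheld display at reading distance and at the accommodation limit
print(f"{resolvable_pixels(3.5, 40) / 1e6:.2f} Mpixel at 40 cm")   # ~0.17
print(f"{resolvable_pixels(3.5, 25) / 1e6:.2f} Mpixel at 25 cm")   # ~0.43

# 16-inch diagonal 4:3 desktop monitor (12.8 x 9.6 in) at 40 cm
print(f"{resolvable_pixels(12.8 * 9.6, 40) / 1e6:.1f} Mpixel for a 16-inch monitor")  # ~6
```

The results land close to the 0.16, 0.40, and >5 megapixel figures quoted in the text; small differences simply reflect the rounded acuity and distance assumptions.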
1.4 Current Mobile Display Technologies

1.4.1 Overview

The dominant technology for mobile displays is currently the color TFT AMLCD. In addition to the many other benefits that the LCD provides, its inherent ability to adopt reflective, transflective, and transmissive device configurations, as described in the next section, makes it uniquely suitable for mobile applications. The liquid crystal layer in an LCD, in conjunction with the polarizers, serves as an optical shutter to modulate the intensity of light created by a separate light source. Liquid crystals respond to very low voltages, a few volts in most cases, and require sub-microampere currents to switch, thus enabling very low power devices that consume a fraction of a microwatt per pixel. A liquid crystal device can be illuminated from either the front or the back to form an image.
Mobile displays started to migrate to color versions in the early 2000s and the adoption rate has been accelerating ever since. By 2002–2003, color displays had overtaken their monochrome counterparts to become the overwhelming majority; these days nearly all mobile devices ship with color displays. Color performance in terms of saturation has made major strides over the same period. The early color displays had rather limited color saturation – color gamuts below 20% of the NTSC color space were common. With the technological progress discussed throughout this book, along with the wide adoption of AMLCDs, the norm is currently well over 40% of NTSC. Although the early versions of color mobile displays used passive matrix super-twisted nematic (STN) liquid crystal displays, owing to their low power and low cost, their performance lagged behind that of active matrix liquid crystal displays using directly driven liquid crystal effects such as the twisted nematic (TN) mode, particularly in terms of video and color performance. With new applications increasingly calling for higher display performance, major improvements have been made in power consumption, electrical connection and mechanical packaging, as well as in the visual performance of active matrix devices. There are two active matrix technologies available in the market today, based on amorphous silicon (a-Si) and poly-crystalline silicon (p-Si) backplanes. The a-Si based thin-film transistor (TFT) was the first active matrix technology introduced to mobile displays, and it provided better video and color performance than STN. The p-Si technology was introduced to provide increased resolution, a thinner package, simpler connection to display drivers, and improved luminance. These benefits of p-Si are, by and large, due to its much higher electron mobility compared with a-Si. More recently, even higher performance active-matrix backplane technologies have been introduced, targeting single crystal silicon performance for high-level integration. All of these are driven by the unique mobile display requirements: compactness, mechanical integrity, and high visual performance. Higher cost has been an issue for the active matrix, especially for the p-Si version, but as the cost differential between passive matrix and active matrix has diminished in recent years, the better performance provided by TFTs has become very attractive and the active matrix has gone mainstream. With the active matrix replacing the passive matrix, the mobile entertainment era is finally here, enabling exciting new applications such as mobile TV and mobile on-line gaming, to name just two. Chapters 3 and 4 provide detailed discussions of the important advances and trends related to color AMLCD devices.
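Gamut coverage figures such as "over 40% of NTSC" are usually quoted as the area of the display's RGB triangle in the CIE 1931 xy diagram relative to the NTSC triangle. The sketch below is ours; the NTSC primaries are the standard 1953 values, while the "example display" primaries are assumed for illustration and are not taken from the book.

```python
# Gamut area ratio in the CIE 1931 xy chromaticity diagram.
# NTSC primaries are the standard 1953 values; the "example display" primaries
# below are assumed for illustration only.

def triangle_area(p1, p2, p3):
    """Area of a triangle given three (x, y) chromaticity points."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    return abs(x1 * (y2 - y3) + x2 * (y3 - y1) + x3 * (y1 - y2)) / 2.0

NTSC = [(0.67, 0.33), (0.21, 0.71), (0.14, 0.08)]             # R, G, B
example_display = [(0.59, 0.34), (0.31, 0.56), (0.15, 0.12)]  # assumed mobile LCD

ratio = triangle_area(*example_display) / triangle_area(*NTSC)
print(f"Gamut area = {100 * ratio:.0f}% of NTSC")
```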
1.4.2 Operational Modes of LCDs

As mentioned earlier, one of the major advantages of LCDs is their flexibility in the way they manipulate light. An LCD can employ three distinct operating modes to utilize light, namely the reflective, transflective, and transmissive modes (see Figure 1.10, below). As the backlight module often accounts for over half of the total power consumption of a typical transmissive liquid crystal display used in a mobile device, the reflective design is used when maximum power savings are required. It uses only ambient light to illuminate the display. This approach was popular for early displays, when only simple segmented and monochrome displays were required. As it has no backlight, its cost is also reduced.
Figure 1.10 Three types of LCD modes: (a) reflective, (b) transflective, and (c) transmissive (LC cell with internal reflector and/or backlight).
However, as visual applications became more complex with new network and mobile device capabilities, these new application trends drove the introduction of color displays for mobile terminals, and the requirement for color brought performance issues for reflective designs. All mature LCD technologies use a direct-drive effect – such as the twisted nematic mode with an active matrix backplane, or a multiplexed passive matrix drive for super-twisted nematic configurations – in which a polarization effect in the liquid crystal layer controls the intensity, so only one polarization state of the ambient light can be utilized to form the image. To achieve color, a spatial color filter is added, as this remains the most reliable technique for synthesizing color. The color filter is highly absorbing, with an insertion loss for unpolarised light in excess of 60%. The combined loss from the polarizer and color filter results in a very dim display under most lighting conditions except good outdoor lighting. To create saturated colors on a reflective LCD in low light conditions, a front light is often required, but extensive research and development into front lighting technology has yielded very limited success. One challenge is how to couple light very efficiently from the light source into the LCD while keeping the light guide free of visual artifacts, because the viewer must look through the front light guide to see the displayed image; this requires tough trade-offs in thickness, shape and other parameters. After trials over several years, the industry appears to have come to a halt on the front lighting approach. In addition, the display in many modern mobile devices serves a dual role as the input device, often functioning as a touch panel, and the combination of a front light and a touch panel further worsens visual usability. Without a good solution for color, reflective displays, although the most power-efficient, ceased to be used soon after mobile displays switched to color in the early 2000s. To meet both the power and the lighting requirements, a compromise used in some designs is the transflective configuration. As many current mobile displays use transflective LCDs, Chapter 5 focuses on the fundamentals of the transflective display. Early versions of this configuration used a semi-transparent reflector, and the ratio between transmittance and reflectance was tailored to optimize either the transmissive or the reflective mode. Newer versions of the transflective LCD take a different approach by physically dividing a single pixel into two regions: one part for the transmissive mode and the other for the reflective mode, as depicted in Figure 1.10(b). The transmissive region has no reflector and provides an opening for the backlight to pass through, while in the reflective region a reflector is built in to reflect incident ambient light. Again, the ratio between the transmissive and reflective areas can be controlled to emphasise the reflective mode for outdoor conditions or the transmissive mode for indoor conditions. This flexibility in tuning the transmissive-to-reflective ratio provides a good compromise between power saving and readability in all lighting conditions. It meets both requirements reasonably well and has thus become a dominant display technology for mobile devices over the past decade.
Early on, the transmission-to-reflection area ratio was about 30:70, oriented primarily towards reflective-mode usage for its power saving; later, 50:50 became quite popular to shift the balance towards the transmissive mode. Recently, this ratio has been reversed to around 70:30 which, together with good improvements in lighting control and material technologies, offers an even better balance between indoor and outdoor conditions. As discussed below, a trend towards even more transmissive designs seems to be taking place owing to the rapid progress in LED efficiency and light collection technology. Lit from the back side, away from the viewer, as in a laptop, a transmissive display can provide performance close to that of a desktop monitor. It outperforms the reflective design in most indoor and outdoor lighting conditions except for the brightest outdoor conditions. However, the always-on backlight can impose severe power requirements on the battery. Using the generic model example of Figure 1.8, at a moderate pixel count, say 1/8 of VGA, the display energy consumption already approaches 10% or more of the total battery capacity, whereas at the same pixel count a reflective display requires less than 1% of the total battery capacity. In addition to its high power consumption, the transmissive display also suffers from poor outdoor viewability. Unable to meet both the power and the lighting requirements, the transmissive approach found only limited usage in mobile handsets in the past. However, the last few years have seen a new trend towards more transmissive designs due to the power reductions achieved through improvements in
backlighting technology. On the application side, there has been a re-thinking of the sunlight readability problem: it may not be as important as good indoor performance, since mobile devices are used indoors far more often than outdoors. This has prompted more transmissive designs. Also, today's transmissive LCD power consumption has been reduced to the point where a designer is willing to trade off some of the power budget for its superior visual performance.
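The dimness of a color reflective LCD described above can be made more concrete with a rough optical budget. The sketch below is ours; the component efficiencies (polarizer, color filter, reflector, and a catch-all for other losses) are assumed illustrative values, not measured data, and the ambient levels are typical round numbers.

```python
# Rough, hedged estimate of why a color reflective LCD looks dim indoors.
# Component efficiencies are assumptions for illustration only.

import math

polarizer    = 0.45   # fraction of unpolarized ambient light passed on entry
color_filter = 0.33   # in-band transmission of an absorbing RGB color filter
reflector    = 0.90   # internal reflector efficiency
other_losses = 0.80   # ITO, glass interfaces, aperture ratio, return pass, etc.

# The color filter is traversed twice (in and out); the polarizing loss for
# unpolarized ambient light is counted once, the rest folded into other_losses.
effective_reflectance = polarizer * (color_filter ** 2) * reflector * other_losses
print(f"Effective reflectance ~ {100 * effective_reflectance:.1f}%")   # a few per cent

def reflected_white_luminance(illuminance_lux):
    """Peak 'white' luminance of the reflective display under diffuse ambient."""
    return effective_reflectance * illuminance_lux / math.pi

for lux, label in ((300, "office"), (5_000, "overcast outdoors"), (50_000, "sunlight")):
    print(f"{label:18s} {lux:>6} lux -> ~{reflected_white_luminance(lux):6.0f} cd/m^2")
```

Under these assumptions the display reaches only a few candelas per square metre in an office but hundreds outdoors, which is exactly the behaviour described for reflective and transflective designs.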
1.4.3 Viewing Angle and Illumination of AMLCDs

As a result of advancements in light control, dual-cell-gap designs, improved color filter technologies, and better backlighting, among many other innovations, the luminance level of mobile displays has improved dramatically. Within a few years, it is anticipated that the luminance, color saturation, and contrast of a mobile communication device such as a cell phone will approach those of a mobile computing device such as a laptop for most lighting conditions. One important advance for mobile displays is in light control and manipulation [6]. This is based on the consideration that a mobile device such as a cellular phone is largely a personal device intended primarily for a single user, and the need to share its display content is secondary. Taking advantage of this, the limited illumination energy – whether from the ambient or from the display's own light source – is channelled into a view cone designed for the user, in order to maximize the perceived brightness, color, and contrast, among other parameters. In doing so, we also gain power savings, as less power is needed from the backlight to achieve the same level of visual performance. To achieve this objective, the light control technique needs to accomplish three tasks. First, it needs to collect the maximum amount of incoming illumination and concentrate it into a pre-defined viewing cone; this cone is designed for the best viewing experience for the device user, primarily by enhancing the perceived brightness of the display. Second, it needs to steer the light in this narrow viewing cone away from the glare angle of the display surface; this is primarily for contrast enhancement, as undesirable glare provides a background illumination which reduces the contrast (see Figure 1.3). Third, the light control needs to create a large degree of diffusion within this viewing cone, approximating a Gaussian distribution; this property is highly desirable in order to eliminate the angular dependence of the display performance within the pre-defined view cone and so provide a good viewing experience. For reflected light, one has the option of controlling the light either on its way into the display or on its way out of the display, and in some instances a combination of both could be used. Figure 1.11, below, illustrates the concept of light control. For a monochrome display, meeting these three requirements is relatively easy. One widely used technique is a thin optical film, generally attached in a separate process step after the display is fabricated. This often favours an approach that controls the reflected light on its way out of the display by attaching the film to the lower substrate. In early implementations, the film replaced the conventional metallic reflector, serving as a modified reflector. For example, a holographic reflector has been shown to be very effective in meeting these requirements for a reflective display: both contrast and luminance have been increased several times over within the pre-defined viewing cone. By incorporating diffusion into the hologram, through diffused exposure during the hologram creation process, the angular dependence of the display viewing experience can also be reduced.
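The brightness benefit of confining light to a narrower viewing cone follows from conservation of flux. The sketch below is ours and is an idealized upper bound: it assumes lossless redirection of a Lambertian (hemispherical) emission into a cone of half-angle theta, which real optical films only approach.

```python
# Idealized illustration (ours) of the brightness benefit of confining light
# into a narrower viewing cone.  Assumes lossless redirection of a Lambertian
# (hemispherical) emission into a cone of half-angle theta; real optical films
# fall well short of this limit.

import math

def luminance_gain(cone_half_angle_deg):
    """Luminance gain from conserving flux while shrinking the emission cone.

    For a Lambertian source the projected solid angle is pi * sin^2(theta),
    so squeezing the same flux into a smaller cone scales luminance by
    1 / sin^2(theta).
    """
    theta = math.radians(cone_half_angle_deg)
    return 1.0 / math.sin(theta) ** 2

for half_angle in (90, 60, 45, 30, 20):
    print(f"+/-{half_angle:2d} deg cone -> up to {luminance_gain(half_angle):4.1f}x luminance")
```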
Similar to the reasoning by which a backlight is preferred to a front light, as discussed earlier, another benefit of controlling the reflected light on its way out is reduced sensitivity to visual artifacts, as the polarizer and other layers are quite effective in shielding undesirable artifacts from a typical backlight. As the display on the mobile device migrates to color, however, the optical film approach becomes less effective than for a monochrome display. One challenge is parallax, which results in reduced reflectivity and color cross-talk. In a typical color display, each color pixel is made up of three sub-pixels of the primary colors; color filters with red, green, and blue transmission spectra are used to create each primary color in a planar, triad arrangement. With this arrangement, a sub-pixel is three times smaller than a single monochrome pixel.
Figure 1.11 Concept of light control for reflective display.
Off-normal reflected light from an optical film affixed externally to the substrate partially impinges upon the adjacent color filters. A typical glass substrate used in a mobile display is about 0.7 mm thick, and the sub-pixel dimension is a small fraction of the substrate glass thickness, so even light rays at slightly off-normal angles cause parallax. For mobile applications, the display is often illuminated over a large range of angles, and this worsens the parallax problem to a point that is no longer acceptable to the viewer. Although in theory a front-film approach could solve this problem, it demands much tighter control of film quality in terms of clarity and freedom from defects. An attempt has been made to move the film from the outside of the substrate to inside the liquid crystal cell, but this has had limited success. With advanced photolithographic techniques, a method of building a reflector with microstructures on the inner surface of a substrate has proven effective in reducing parallax. The resulting structure is also able to incorporate all three light control functions into this single internal reflector. This improvement enables much higher luminance and color saturation for reflective or transflective displays, and as displays move to higher resolution, such internal structures will be critical to achieving the required performance. Chapter 6 provides insight into the physics governing the viewing angle properties of LCD devices, and the fundamentals of backlighting technology. In Chapter 7, Watson et al. describe the details of the product implementation aspects of the backlight structure and brightness enhancing technologies. While the illumination source for LCDs has predominantly been the cold cathode fluorescent lamp (CCFL), the increasingly popular LED provides important benefits in power, color, and form factor, which are discussed in Chapter 8.
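The parallax problem can be quantified with simple ray geometry. The sketch below is ours; the glass thickness, refractive index, sub-pixel pitch, and incidence angles are typical values assumed for illustration, and it estimates how far a ray reflected by an external back-side film lands from the sub-pixel it entered through.

```python
# Rough parallax estimate for a reflector placed outside a glass substrate.
# All dimensions and angles are assumed typical values, for illustration only.

import math

GLASS_THICKNESS_MM = 0.7     # typical mobile display substrate
GLASS_INDEX        = 1.5     # ordinary display glass
SUBPIXEL_PITCH_MM  = 0.06    # ~60 um sub-pixel pitch (assumed)

def lateral_offset_mm(incidence_deg):
    """Lateral walk-off of a ray that enters the glass, reflects off an
    external film on the back surface, and exits: twice the in-glass shift."""
    theta_air = math.radians(incidence_deg)
    theta_glass = math.asin(math.sin(theta_air) / GLASS_INDEX)  # Snell's law
    return 2.0 * GLASS_THICKNESS_MM * math.tan(theta_glass)

for angle in (10, 30, 45, 60):
    offset = lateral_offset_mm(angle)
    print(f"{angle:2d} deg incidence -> offset {offset:4.2f} mm "
          f"(~{offset / SUBPIXEL_PITCH_MM:4.1f} sub-pixels)")
```

Even at moderate incidence angles the offset spans several sub-pixels, which is why an external film produces color cross-talk and why moving the reflector to the inner surface of the substrate is so effective.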
1.4.4 Display Driving Electronics

Several technical advances have resulted in major improvements in mobile display visual performance while maintaining or improving power performance. One of these areas is the electrical circuitry, or driving electronics, for small displays. As discussed earlier, the actual panel power consumption is only a small fraction of the final display module power for mainstream LCD technology based on the TN or STN liquid crystal modes. To reduce system power, much research and development has gone into optimizing how the display is driven electronically. In addition, as data rates increase, enabled by higher-bandwidth networks, how to effectively channel this large amount of image
data becomes critical. Chapters 9, 10, and 11 provide broad and in-depth discussions on the issues and trends regarding mobile display driving electronics and interfaces. Clearly, increasing the data channel capacity is one way to accommodate increasing resolution, but another approach is to explore alternative sub-pixel layout schemes with considerations of the human visual system. Sub-pixel rendering is a technology that enables perceived high display resolution at comparatively low pixel counts on the display panel. In Chapter 12, Elliot describes both the theoretical and practical aspects of this technology along with recent applications and future trends.
1.5 Emerging Mobile Display Technologies

Among display technologies, the mobile display has probably been the most dynamic in its technological progress over the past decade. In the direct-view display category, this technological advancement, in conjunction with the established display manufacturing infrastructure of passive matrix STN and active matrix TN TFT, has proven to be a formidable barrier for new display technologies. Nevertheless, much research and development investment has been devoted to creating a vast number of new display mechanisms, ranging from different liquid crystal modes for bi-stable operation to more exotic device architectures such as rotating micro-spheres with each hemisphere coated in a different color or shade. Several promising contenders are moving out of research labs and into production. As an overview, we have chosen a promising subset from this plethora of emerging technologies for inclusion in this book. A major consideration in selecting a topic for detailed analysis has been an assessment of its maturity and of its impact on meeting the challenging mobile application requirements set out earlier in this chapter. In this section we provide a brief overview of, and discuss the challenges facing, several emerging technologies that have developed paths to commercialization. These topics are followed up with detailed discussions in the subsequent dedicated chapters.
1.5.1 System-on-Glass Technologies

The tough mobile requirements – power, lighting, a compact form factor, the size limitation on ever-increasing resolution, and extreme cost-sensitivity – call for unique system-level solutions. One approach is to integrate more functionality onto the panel itself. With the traditional a-Si backplane technology this is difficult to achieve because of the low performance of a-Si circuitry, in terms of electron mobility among other transistor parameters. With the introduction of p-Si backplane technology, integration of many circuits, such as column or row drivers, becomes feasible. Limited by equipment and defect issues, large p-Si panels are still difficult to fabricate at low cost, but the small size of mobile displays makes them a very attractive application for this technology, and one that also makes economic sense. Moving beyond p-Si, several performance-enhancement technologies are also moving out of research labs around the world into the commercial phase, enabling even higher levels of integration to build multiple functionalities into a single device, or to explore dual functionality for a single component through system integration. Such attempts have recently resulted in integrated display and image reader combinations, display and touch panel combinations, display and speaker combinations, display and energy harvesting device integrations, and so on. Chapter 13 details some of the notable technological advancements that use this integration approach. When considering what functionality should be integrated onto a display panel, one needs to evaluate the trade-offs carefully. Some important guidelines need to be established, and functionality priorities should be considered with the intended use cases and applications in mind. One must keep in mind that ultimate priority should be given to the display function, and the resulting device ought to
maintain at least the same level of visual performance as a standalone unit. To illustrate this further with an example, consider the approach of integrating a solar panel into a display panel for energy harvesting, in an attempt to address the power constraints imposed by the limited battery capacity of a mobile device. The concept of using a solar panel to power small electronic devices has been used successfully in many applications; the solar powered calculator is a good example. However, powering a mobile communication device such as a cell phone is quite different from powering a calculator in many respects. Even in standby operation, a typical cell phone requires from a few to more than ten milliwatts (mW) of power, compared with a fraction of a mW for a calculator. Moreover, the standby state of a mobile phone runs at a 100% duty cycle to maintain communication. Solar power generation is proportional to the solar panel dimension, or surface area, and surface area on a mobile communication device is at much more of a premium than on a calculator, as other user interface components such as the keypad compete for it. This makes mounting a solar panel side-by-side with the display – as is done on a solar powered calculator – impractical for a mobile device. To overcome this, a new approach was proposed: placing a solar panel behind the display, taking advantage of the trend towards larger displays [7]. A conventional reflective display is virtually non-transparent, as it contains many light absorbing layers such as the polarizer and the reflector; of all the layers, the reflector blocks the most light. Overall, a reflective LCD may transmit less than 5% of the incoming light over the entire solar spectrum. As only the irradiation within relatively narrow bands around the three primary colors is needed for visual perception, a method was devised to selectively control the reflected wavelengths for image formation while maximizing the transmission of all other wavelengths. A photon-to-electron converting device, such as a solar panel, can then be placed behind the display for power generation. This approach has been demonstrated using a conventional monochrome STN liquid crystal display in which the conventional metallic reflector was replaced with a green holographic reflector. Optical modeling and optimization, taking into account the spectral responses of all the layers and of the solar panel, show that the majority of the incoming light, about 60%, can pass through to reach the solar panel. A prototype was built to verify the theoretical modeling, and power measurements with simulated solar irradiation were carried out. Figure 1.12, below, shows the measured results with a green reflector as a function of the solar irradiation level. As a reference, the power collection of a bare solar panel was also measured, and approximately 60% reception was verified. With such a large throughput, outdoor lighting conditions can generate substantial power per unit display area, enough to sustain standby power usage. With typical indoor lighting, however, the resulting power collection is too small to be useful for power budget purposes. These characteristics point to a highly use-dependent scenario for this technology; one good application could be emergency use, to charge a dead battery.
A mobile phone with a small display of 1 square inch has been demonstrated to show that, starting from a dead battery, enough energy can be generated under sunlight conditions in three minutes of irradiation to allow the user to make a short phone call of 30 seconds to 1 minute. As mobile display sizes keep increasing, more power will become available with this approach. With color displays becoming the mainstream option, the amount of light available for collection will decrease accordingly; however, with a broad spectrum source, there is still much energy that can be extracted outside and between the red, green, and blue filter spectral bands.
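A rough energy estimate shows why this scheme is useful outdoors but marginal indoors. The sketch below is ours; the irradiance levels, the 60% display transmission, and the 15% photovoltaic efficiency are assumed round numbers for illustration, not the measured data of Figure 1.12, and the result is compared against a handset standby demand of a few to ten milliwatts.

```python
# Hedged estimate of power harvested by a solar cell placed behind a display.
# Irradiance, transmission and cell efficiency are assumed illustrative values.

SQ_INCH_M2       = 0.0254 ** 2   # 6.45e-4 m^2
DISPLAY_TRANSMIT = 0.60          # fraction of light passed to the cell (text: ~60%)
CELL_EFFICIENCY  = 0.15          # assumed photovoltaic efficiency

scenarios = {
    "direct sunlight":   1000.0,  # W/m^2, typical clear-sky irradiance
    "overcast outdoors":  100.0,
    "bright office":        3.0,  # a few hundred lux is only a few W/m^2
}

for name, irradiance_w_m2 in scenarios.items():
    harvested_mw = (irradiance_w_m2 * SQ_INCH_M2 *
                    DISPLAY_TRANSMIT * CELL_EFFICIENCY * 1000.0)
    print(f"{name:18s} -> ~{harvested_mw:6.2f} mW per square inch of display")
```

With these assumptions, direct sunlight yields tens of milliwatts per square inch, comfortably above standby demand, while indoor lighting yields well under a milliwatt, matching the use-dependent picture described above.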
1.5.2 Organic Light-Emitting Diode (OLED) Displays

There has been much activity in the research and development of OLED technology in recent decades, with great progress in material properties, device architectures, and manufacturing technologies. OLED provides a set of attractive attributes for use as a mobile display. It offers vivid color with a Lambertian emitting profile that is very easy on the eye.
Figure 1.12 Power collection versus solar illumination with a monochrome solar display (power collection in mW per square inch versus surface luminance up to ~7000 nits; curves: bare solar panel + lens, and solar display + broad spectrum light source).
Used indoors, it offers excellent contrast and vivid color. Unlike liquid crystal displays, which are voltage driven, an OLED is a current-driven device, and its luminance is extremely linear with drive current. This characteristic makes gray levels easier to control and enables a much more faithful rendering of color images. It also has a very fast response time, making it an excellent video display. Mechanically, OLED offers a thinner and simpler structure without the need for a backlight, enabling a thinner display module than a comparable LCD; an OLED display less than 1 mm thick has been shown with high contrast and saturated colors. At the time of writing, active matrix OLED displays of up to 2.8 inches diagonal have started shipping commercially. While OLED technology has the very positive attributes noted above, it also has some challenges to overcome in order to gain wide acceptance in the mobile display market. Although an OLED pixel appears to the viewer as a Lambertian emitter, the organic layers emit light into the full 4π solid angle, and to increase the perceived luminance of an OLED display the emitted light needs to be controlled and confined to the upper hemisphere only. To that end, the cathode in an OLED display often uses highly reflective metallic materials to redirect the downward irradiation into the upper hemisphere after transmission through the other, transparent OLED material layers. As a result, ambient light is also reflected, degrading the contrast of the image. Another challenge is the sensitivity of OLED materials to photon irradiation at shorter wavelengths – a photoluminescence effect that creates secondary emission as a background. Under outdoor illumination, short wavelength irradiation at UV and blue wavelengths will excite the OLED material to emit in the visible range. Several remedies for these problems have been implemented with good visual improvement, for example a circular polarizer to block the reflected ambient light. The roughly 60% luminance penalty from the insertion loss of a circular polarizer is an issue that relates directly to the power consumption of an OLED. The power consumption of an OLED display depends on the number of 'ON' pixels and their gray levels. If the primary use is to show photographs or video content, then the total power consumption can be lower than that of an equivalent transmissive LCD module. A typical picture is known to be equivalent to only about a quarter of a full white field; based on this, OLED power data are often quoted for a 25% white image. For applications that require whiter screen content, the power consumption estimate needs to be adjusted upward accordingly, and can reach comparatively high values. Another issue is the currently higher cost of OLED displays relative to the prevalent AMLCDs. Intense effort is
underway to address these deficiencies, as the mobile display provides the best market and the best opportunity for OLED to gain a foothold, given the attractive attributes discussed earlier. In Chapter 14, Tsai et al. provide a detailed account of the material, device, and driving scheme fundamentals, as well as summarizing recent progress and future trends in this promising technology area.
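Because an emissive display's power tracks image content, the comparison with a backlit LCD depends on what is shown. The sketch below is ours; the panel power figures are assumed illustrative values, not vendor data, and it simply contrasts an OLED whose power scales with average picture level (APL) against an LCD whose backlight power is constant.

```python
# Illustrative comparison of content-dependent OLED power vs. constant LCD
# backlight power.  The wattage figures below are assumptions, not measurements.

OLED_POWER_FULL_WHITE_W = 0.60   # assumed OLED panel power at 100% white
LCD_MODULE_POWER_W      = 0.35   # assumed transmissive LCD module power (any content)

def oled_power(average_picture_level):
    """OLED power scales roughly linearly with average picture level (0..1)."""
    return OLED_POWER_FULL_WHITE_W * average_picture_level

for apl, label in ((0.25, "typical photo/video (25% APL)"),
                   (0.50, "bright UI / maps"),
                   (1.00, "full white page")):
    p = oled_power(apl)
    verdict = "OLED lower" if p < LCD_MODULE_POWER_W else "LCD lower"
    print(f"{label:32s} OLED ~{p:4.2f} W vs LCD ~{LCD_MODULE_POWER_W:4.2f} W -> {verdict}")
```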
1.5.3 Bistable Displays

One way around the stringent power requirements for mobile displays is to use a bistable display. A bistable display, as the name indicates, is a display that has two separate, energetically stable states. An electrical voltage is required to switch from one state to the other; in the absence of an applied voltage, the display remains in either state. Bistability enables a display to maintain a static image without consuming power. In some cases, these states may not be absolute energy minima and could, with time, drift to other states; however, for all practical purposes these devices can be treated as truly bistable for mobile applications. Examples include cholesteric, zenithal bistable, twist bistable, and surface stabilized ferroelectric liquid crystal displays, as well as electrochromic, electrophoretic, and MEMS displays. In these displays, very different electro-optical effects have been explored to create visual images that are stable at zero power. The electrophoretic display, based on a well known electro-optical effect, has gained renewed interest during the last several years as its encapsulation technology has matured. The commercial launches of the Motofone from Motorola and the Reader from Sony, both using an electrophoretic display, signal its entry, finally, into the mobile marketplace. Zehner covers this technology in Chapter 15, along with a detailed overview of electronic paper displays in general. Yang and Angeles et al. cover the liquid crystal based approaches in Chapter 16 and Chapter 17, respectively. While much progress has been made in these emerging technologies, challenges remain before they can become the mainstay technology choice for mobile displays. One issue has to do with the required frame update rate: in a true bistable display, no power is required to maintain a static image, but the power required to update an image can be higher than for conventional LCDs. Another challenge for bistable technologies is color generation; in some bistable devices, continuous gray scale and non-monochrome operation are difficult to achieve. In summary, bistable display technologies have great potential for new mobile display applications, but the gains resulting from their use depend on the particular application being considered.
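The trade-off between zero holding power and a costlier update can be made concrete. The sketch below is ours; the energy-per-update and power numbers are assumptions for illustration, and it compares the average power of a bistable display with that of a conventional LCD as the image update interval varies.

```python
# Hedged sketch: average power of a bistable display vs. a conventional LCD
# as a function of how often the image changes.  All numbers are assumptions.

BISTABLE_UPDATE_ENERGY_J = 0.05   # energy to rewrite the whole bistable screen
BISTABLE_HOLD_POWER_W    = 0.0    # zero power to hold a static image
LCD_POWER_W              = 0.10   # assumed continuous LCD module power

def bistable_average_power(update_interval_s):
    """Average power when the screen is rewritten every update_interval_s."""
    return BISTABLE_HOLD_POWER_W + BISTABLE_UPDATE_ENERGY_J / update_interval_s

for interval in (0.1, 1, 10, 60, 600):     # video-like ... e-book page turns
    p = bistable_average_power(interval)
    better = "bistable wins" if p < LCD_POWER_W else "LCD wins"
    print(f"update every {interval:6.1f} s -> bistable ~{p:7.4f} W "
          f"vs LCD ~{LCD_POWER_W:.2f} W ({better})")
```

Under these assumptions, bistability pays off only for slowly changing content such as page turns or static images, which is precisely the application dependence noted above.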
1.5.4 Electrowetting Displays

The electrowetting display is an emerging technology targeted at mobile applications. Currently still in the early stages of development, this approach promises to deliver low power consumption and full-color video capability in the future. In principle, displays based on this technology can be constructed in all optical modes – transmissive, reflective and transflective – while current implementation efforts are focused on low-power reflective structures that do not require a backlight. In Chapter 18, Feenstra discusses the fundamentals of electrowetting display technology, as well as its potential applications and advantages in future mobile devices.
1.5.5 Three-Dimensional (3D) Displays

Another area of emerging application interest for mobile displays is the 3D display. While the current bandwidth of wireless devices and the lack of content have limited the application of this technology thus far, it continues to generate excitement for future applications. In Chapter 19, Travis describes the
visual perception fundamentals of a 3D display and explains several enabling technologies for the 3D display system, emphasizing its application to the small display format.
1.5.6 Beyond Direct-View and Rigid Displays

As we have discussed, the pace of display resolution increase is accelerating and is outpacing the growth in the size of mobile displays needed to meet increasing visual application demands. Early mobile displays used STN technology, with typical pixel counts of several thousand; since then we have seen the pixel count double every few years. Even at this fast pace, the resolution increase is still outpaced by even faster increases in wireless network bandwidth, as well as in the processing and memory capabilities of mobile devices. More importantly, as discussed earlier, resolution increases on a mobile display will reach a limit imposed by the small display size of a handheld form factor and by the resolution limit of human vision. This is the next technical frontier to challenge the mobile display industry. To meet these challenges, multiple approaches have been researched and explored. There are three general approaches to obtaining a large image size without increasing the physical size of the display: eyewear or virtual displays, mobile projection displays, and flexible/foldable displays. These three categories of technology are comparatively immature compared with mainstream LCDs and the emerging display technologies discussed earlier; their immaturity is reflected not only in the technology itself but also, in many cases, in the applications that these devices target. However, given the major trends and constraints discussed earlier, the need to enable large images on a mobile device is truly fundamental, and the associated challenges in technology and applications need to be met. After a brief introduction to these technologies in this chapter, three chapters are devoted to these topics in the last section of the book.
1.5.6.1 Eyewear Displays

As the name implies, an eyewear display uses optics to magnify the image from a microdisplay to form a much larger perceived image. The availability of high resolution micro-imagers from a variety of microdisplay technologies laid the foundation for this approach, and advances in micro-optical elements have provided further paths to compact sizes. SVGA and higher resolution microdisplays with diagonal dimensions of less than 1 inch are commercially available. A major advantage of this approach is the relatively low power needed to create a large and bright virtual image. Chapter 20 provides a system-level analysis to help readers understand the issues and challenges in integrating the micro-imager, optics, and electronics into an ultra-compact form factor that meets consumer demands.
1.5.6.2 Mobile Projection Displays

To counter the issues with a virtual display and meet the increasing demand for high resolution information, attempts to create an extremely compact projection display have gained momentum in the last few years. Both micro-imager based and laser-scan based approaches have been explored and demonstrated. In addition to facing miniaturization challenges similar to those of a virtual display, from the illumination system to the projection optics, a micro-projection display must overcome another formidable barrier: producing a large light output, in terms of total lumens, from a small device volume. Projection devices require more power than eyewear devices because they must form a real image. Thus far, prototypes have been demonstrated with more than 10 lumens of output, some using high power LEDs and others using lasers. However, they still require multiple watts of electrical power to operate and have system volumes of several tens to one hundred cubic centimeters, based on published reports.
Clearly, major improvements are needed at the component and system integration levels before this new category of display can be widely adopted for mobile devices. However, many industry trends favor the projector approach, primarily rapid progress in LED efficacy, semiconductor lasers, micro-optical elements, and optical system integration. Chapter 21 provides an in-depth analysis focusing on the laser scan approach. In the near future, the micro-imager based solutions are expected to become commercially available, as they are based on relatively mature components and better-understood system operation.
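A quick efficacy calculation suggests why tens of lumens from a pocket-sized projector implies watts of electrical power. The sketch below is ours; the end-to-end system efficacies and the electronics overhead are assumed illustrative values, not figures from the book or from any specific product.

```python
# Hedged estimate of electrical power needed by a miniature projector.
# The end-to-end efficacies (lumens of screen light per electrical watt,
# including light source, optics and modulation losses) are assumptions.

TARGET_LUMENS = 10.0    # order of magnitude demonstrated by early prototypes

assumed_system_efficacy_lm_per_w = {
    "LED-illuminated micro-imager": 4.0,
    "laser-scan engine":            6.0,
}

for engine, efficacy in assumed_system_efficacy_lm_per_w.items():
    optical_power_w = TARGET_LUMENS / efficacy        # electrical watts for light
    electronics_w = 0.5                               # assumed drive/video overhead
    total_w = optical_power_w + electronics_w
    print(f"{engine:30s} -> ~{total_w:.1f} W electrical for {TARGET_LUMENS:.0f} lm")
```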
1.5.6.3 Flexible/Rollable Displays

Flexible and rollable displays are generic terms used to describe displays that are not rigid. Early on, the display industry paid little attention to non-rigid displays; only very recently, driven by the large differences in both technology and application among different flexible display configurations, has the industry started to make clear distinctions. To provide a large image, scrollable or rollable flexible displays are needed. Rollable displays are being developed aggressively in both industry and government-funded research efforts. Electrophoretic, electrochromic, cholesteric, and OLED displays are candidate technologies for a display that can be rolled or folded into a smaller form factor for storage and then extended when needed. Recent years have seen substantial progress in this area, with the demonstration of rollable reflective black and white displays. The rollable display is still at an early stage of maturity, and it remains to be seen whether it will meet the full set of tough requirements imposed by the mobile environment and the high-end performance expected of such displays; substantial research and development, as well as infrastructure building, are needed. To enable a truly rollable display, the single most challenging and formidable task is to develop a rollable backplane technology. Much progress has been made in flexible electronics as well as in the corresponding front plane display media [8]. Chapter 22 provides a review and an update on the organic transistor technology that will enable the circuitry for flexible/rollable displays in future mobile devices.
1.6 Summary

Mobile display applications have been one of the fastest growing segments among all display applications, and we expect this trend to continue for the foreseeable future. Mobile display applications can be broadly divided into two classes: mobile communication and mobile computing. Mobile communication has primarily been represented by mobile phones and smart phones, while mobile computing has been carried out on the laptop or mobile computer. However, a new trend is forming that blends these applications and devices: increasingly, we see mobile communication devices with more computing capability and larger displays, and mobile computing devices with communication links to broadband wireless networks and smaller form factors. However these trends continue, these devices provide fertile ground for new and emerging display technologies – ranging from new types of LCDs to OLEDs to micro-projection displays – which are discussed in detail in the rest of the book. As the nature of the mobile environment dictates, displays intended for mobile applications face a more stringent set of performance requirements than their domestic counterparts. For the time being, the AMLCD remains the dominant display technology choice owing to its flexible device architectures and maturity. With time, a number of emerging technologies are expected to enter mobile applications, for example system-on-glass technologies, OLED, bistable displays, and electrowetting displays. Another actively researched area is beyond-direct-view display technologies. Among these, the eyewear display needs to overcome the human ergonomics constraints associated with a virtual display arrangement, while micro-projection and flexible displays are new frontiers in mobile display technology aimed at
providing high display resolutions in a compact mobile form factor. The chapters in the rest of the book provide detailed analysis of the fundamental technologies, applications, recent progress, future trends and challenges in each of these areas.
References
[1] Proceedings of International Display Workshops (IDW) 2002, Hiroshima, Japan.
[2] Castellano, J.A. (2005) Liquid Gold: The Story of Liquid Crystal Displays and the Creation of an Industry, New Jersey: World Scientific.
[3] Li, Z. (2004) Applications Tutorial Notes, Society for Information Display, A5.
[4] Pollack, J. (2003) Information Display, 19 (7), 12.
[5] Bhowmik, A.K. (2007) SID Symposium Digest of Technical Papers, 38 (2), 1539–1542.
[6] Valliath, G., Coleman, Z., Jelley, K. and Akins, R. (1998) SID Symposium Digest of Technical Papers, 29, 139.
[7] Li, Z. and Smith, M. (2002) SID Symposium Digest of Technical Papers, 33 (1), 190–193.
[8] Crawford, G.P. (ed.) (2005) Flexible Flat Panel Displays, New Jersey: John Wiley & Sons, Ltd.
2 Human Factors Considerations: Seeing Information on a Mobile Display
Jim Larimer, ImageMetrics, Half Moon Bay, California, USA
2.1 Introduction For the sake of persons of different types of mind, scientific truth should be presented in different forms and should be regarded as equally scientific whether it appears in the robust form and vivid colouring of a physical illustration or in the tenuity and paleness of a symbolic expression. James Clerk Maxwell, from The Scientific Papers of J. Clerk Maxwell [1]
Maxwell is a good place to start when embarking on a consideration of the human factors that make a good display; his earliest work was on color. Insights into the factors that determine image quality rely upon analogies and metaphors as well as formulas and rules. Image quality is highly situational, so the rules are complex and must be understood within the viewing context. The goal of this chapter is to develop a conceptual basis for framing the issues and some quantitative detail, when that can be easily used, to inform engineering goals. Some of the issues encountered are due to the nature of human vision, and some are due to the way light interacts with materials. This chapter will attempt to bridge
those domains but only where a bridge is necessary to make a point about image quality and seeing information on the display. Are mobile displays different than other kinds of displays? The obvious answer is no: an image is an image, and a device that reproduces images and must be plugged into a wall cannot be fundamentally different, when it comes to imagery, from one that contains its own power source or needs none. That said, there are some significant differences. A mobile display, at least currently, is smaller than displays that you plug in for power; that may change if flexible electronic paper emerges from the lab as a product. Currently you would have to search a great deal to find a display that does not require a rigid substrate to work, so 'unfolding' a small device to produce a large-area display is still a dream to be realized in the future. Mobile displays also need to be very efficient. Liquid crystals obtained their central role in reconstructing dynamic imagery by virtue of their efficient use of power; indeed, materials like cholesteric liquid crystals and electrophoretic materials can sustain an image once written for days, weeks and even years. Of course, most of the mobile displays that are integrated today into cell phones and PDAs use backlights. The constraints of physical materials and the eye together set a hard limit here. The number of photons one can squeeze out of a material is limited by the physics, and the eye's ability to convert watts into lumens is limited by the physiology. No device will ever be more efficient than 680 lumens per watt, and then only if the light is a narrow band of greenish yellow at 555 nm – such a light source would convert a colorful world into one of blacks and whites. This is a problem that only evolution can fix. The other path is to develop displays that require only natural light sources, and that points to the only important difference, other than the power budget, between mobiles and pluggables. Controlling ambient light is more critical to success in the mobile world than in the pluggable one. Here, the problem is dealing with the impedance light encounters when it enters a material such as plastic or glass; the index mismatch reflects a substantial amount of that light, and this can be a problem depending upon our viewing angle to the display, the diffusely reflecting properties of the display surface, and the likelihood that the viewer's head blocks the source of the most problematic rays. Some day there will be a material system that can match the performance of paper and ink, which modulates the light immediately upon encountering the display; only paper with glossy topcoats shares the glare problem with electronic displays. We will discuss glare later in this chapter, but fundamentally all displays produce images and all images have the same fundamental properties when formed on the retina. Images are formed when light reflects off objects or is emitted by them and a lens or optical device forms a pattern from this light onto a two-dimensional surface. The light is the distal signal; the pattern or image formed is the proximal signal. The pattern formed depends upon the lighting, the physical properties and positions of the objects, and the location of the optical device. The image formed on an observer's retina is the proximal signal that will ultimately determine the image quality and the visibility of information on a display. Our visual system encodes and processes the image formed on our retinas to generate perceptions.
Perception is a cognitive process. We see chairs, people, trees and not patterns of light on our retina. The information about objects within our field of view is encoded in the distal signal; this signal is sampled by aiming our eyes and by the optical phenomena that produce the proximal signal, the image, on our retinas. In this early stage of perceptual processing a great deal of information is lost. The nervous system then samples the retinal image, converting it into a neural signal, and more information is lost. A display's function is to reconstruct a signal that would be generated in the environment and to produce the same or a similar pattern of light on our retina when we are looking at the display. The display may be reconstructing a natural scene or a pattern of light similar to viewing a page in a book; the object represented may only exist when the display is functioning. The reconstruction need not be identical to the naturally occurring proximal signals for the displayed image to look like the objects it portrays. The perceptual sampling process is inherently lossy, so only the information that will not be lost needs to be reconstructed by the display. The visual object environment exists within four physical dimensions that locate objects in space and time. Like a camera, our eyes form images in space-time. Our two eyes, slightly displaced from each other, form two slightly different images by virtue of this displacement. Our retinas are
two-dimensional surfaces tessellated with photoreceptors that discretely sample the energy at different locations within the retinal image and convert it to a neural signal. Although our visual system has many properties similar to a camera, it is not a camera. What we see in the scene, our conscious experience, is not the image formed on our retinas. We may not notice an object in the scene although the signal produced by it is present in the retinal image. Our eyes are constantly moving as we move through the environment. Even standing still our eyes continue to move relative to the scene, yet our perception of the scene is stable. We do not see stationary objects moving as their images do on our retinas as a result of our eyes' constant motion, and when objects move we correctly infer their motion as separate from our self-motion. The visual system's spatial and temporal sampling processes are unlike the sampling in a camera. Cameras produce homogeneous samples in space and time. Our visual system samples the scene with an array of photoreceptors that are not uniformly distributed on the retina. The visual system's rate of temporal sampling depends upon the spatial scale of objects in the scene. We rapidly sample the large and gross features, i.e. blobs, in the image to estimate closing rates when the blobs are moving away from or towards us. We slowly sample the fine structure in the retinal image, so image motion reduces our ability to see details. We see small objects moving only when the low spatial frequency components of the object's proximal signal have enough contrast to activate a blob detector in the visual pathway. Try to read a newspaper while the page is moving; the small text is much more difficult to read but the headlines remain legible.
Figure 2.1 Micrographs from a paper published in 1990 [2] of small 30 μm square sections of human retinae are shown at different retinal eccentricities from the optical axis centered on the fovea. Three different individual foveae are shown in the center of the figure, revealing the large individual differences in spatial sampling densities across people. The numerical values refer to the location of the small sections of retina relative to the optical axis at 0 mm. The larger cells in the eccentric sections are cones and the small cells are rods. The foveal sections shown in the center of the figure are all rod-free regions of the fovea containing only cone photoreceptors. Reproduced from [2] by permission of John Wiley & Sons Inc.
Our visual experience includes a sense of depth. We rarely notice the blur that results from focusing our eyes on a near or distant object. Depth perception is a potent aspect of our visual perceptions even when one eye is closed and despite the flat projection of the object world onto the retinal surface. The signal for depth is primarily generated by parallax produced either by object motion or our self-motion. Disparities in the relative position of objects within the images formed on the two retinas play an important role in the near field or when distant objects are very large and far apart. Like a camera, the optics of our eye geometrically distort the spatial relationships among objects in the scene, but this too goes unnoticed in our perceptual experience. Remarkably, if you sit to the far right or left of the projection screen in a movie theater you will not notice the keystone distortion that would be apparent and distracting in a photograph or video of the screen made from the same location. We are not radiometric sensors; our visual perceptions are limited in several ways. We are unaware of surface properties whose reflection or emission signature is outside of the VIS band of the spectrum, i.e. 380 nm ≤ λ ≤ 780 nm. The colors we experience are only loosely correlated with wavelength. Our ability to detect variations in intensity is limited and dependent upon spatial scale.
Figure 2.2 Two images of a rug taken from different angles illustrate keystone distortion. The image on the right, taken from an acute angle relative to the plane of the rug, reveals the characteristic trapezoidal shape typical of keystone. The image on the left was taken at a normal angle to the rug. These two images were taken at approximately the same distance from the center of the rug. Standing where the camera was located did not produce this dramatic keystone distortion in perception; the rug appeared to be a rectangle at both locations.
When the spectral composition of the illumination changes, for example during the diurnal cycle, we rarely notice this as a change in the color of object surfaces. The apparent surface color is more or less invariant and appears to us as a property of the object rather than the light illuminating it. Although we see shadows we tend to be unaware of illumination variations and gradients. Our absolute sense of brightness is very poor, but unlike a camera we can see details in the shadows where a camera would darken the scene making details invisible.
Figure 2.3 The Gelb illusion illustrates that human vision is not radiometric in nature. The two squares are uniform and have exactly the same flux values, yet one appears brighter than the other. Viewing this figure in bright or dim illumination has little effect on the perceived brightness of the squares, and they appear to be of constant brightness over a wide range of illumination levels.
Electronic displays, printers and photographs attempt to reproduce natural scenes, printed text, and graphics. Distortion and artifacts produced by the capture and reconstruction process can be very apparent and even distracting, preventing the observer from fully utilizing the information present in the displayed image. A distortion rearranges the signal but does not necessarily lose information; keystone and blur are examples of distortions. An artifact is a signal produced during the sampling1, processing, and reconstruction of the image signal. Flicker, jaggies, judder, tone scale banding, metamerism, and noise are all examples of artifacts added to the signal during the capture, communication or reconstruction process. Artifacts matter when we can see them and they do not matter when they are below the threshold for perception. 1 Sampling is being used somewhat loosely in this essay. It refers to any process that converts a signal to some other form or scale. In this sense sampling occurs when an image is gathered with an imaging sensor, or a graphic object is converted to a bit map, and when digital codes are converted back to an analog signal, for example, to produce the electric field to control a liquid crystal light valve.
2.2 The Perfect Image What does a perfect image look like? Any criterion for image quality depends ultimately on what we can see. A difference that is not visible to the observer is not a difference that matters or impacts image quality. For example, a large electronic sign made from an array of LEDs looks very good at a distance where the LED pixel elements are too small to notice. On a laptop or cell phone display, if the viewing distance is large enough, the user will not notice the faint grill-like structure made by the discrete pixel boundaries on the screen. The use of a magnifier will make this grillwork very apparent. In 1978, Carlson and Cohen [3] proposed a definition for image quality and the perfect image. They suggested that '. . . a "perfect" image is one that looks like a piece of the world viewed through a picture frame'. They went on to propose that the discriminability of the reconstructed image relative to the actual scene should be the measure of image quality. We should measure image quality in terms of the human visual system's ability to see differences. A Just Noticeable Difference (JND) is defined to be the physical signal difference required to see a difference at or near the threshold of discrimination. This idea requires some refinement because images that look real are not necessarily ideal. The artist creates the scene to have a look that appeals to her aesthetic values and tastes. Portrait photos never look exactly like the person viewed through a picture frame. Blemishes are removed and skin tones improved to make the subject look healthy. The cinema artist purposely distorts the image to achieve an effect that contributes to the story. Film noir purposely darkens shadows and distorts the tone scale to evoke a surrealistic aura for the story. A Coen brothers movie released in 2000, O Brother, Where Art Thou?, used digital techniques to change the colors in the scene to make the images appear as if the movie had been created during a prolonged drought. In the movie Donnie Darko the Director of Photography, Steven Poster, purposefully used judder at a climactic moment in the story. The appropriate measure of the quality of an image on a display, the 'perfect image', is how close it is to the intended image. The intended look is predetermined, it is a standard, and the reconstructed image must not be noticeably different from this standard. The desired image may contain jaggies or geometric distortion, and it may be blurry, but it is blessed in the sense that it is the standard against which a reproduction should be judged. The JND is a measure of the visibility of differences between the rendered image on the display and the standard image. If there is no noticeable difference between the standard and displayed images, the reproduction is judged perfect with respect to the human visual system; if there are noticeable differences it could be better. The rendered image may be noticeably different from the standard at one location or many locations and at one time or many times. Changing the viewing distance may reveal differences that had been below the threshold for perception at a greater distance. These examples illustrate the situational nature of image quality. Understanding how to measure noticeable differences and how they depend upon the display and the viewer is the subject of the rest of this chapter.
The direction we will take is to consider the sampling processes of the human visual system, how the percept is formed, and what object properties in the image formed on the retina can be sensed and will rise to a perception. We will not claim to define the perfect image as an aesthetic quality, but we will describe what makes the rendering of a test pattern appear noticeably different from the desired standard. An important point to remember is that the ability to see differences depends upon situational factors like the viewing distance, lighting, and contrast. A useful measure of the image quality of a display is not invariant; it will change with the situation in which the display is viewed. A good display for signage will not be a good display for a laptop computer or cell phone because of situational factors.
2.3 The JND Map and Metric It is possible to define a measure that compares two images in terms of an observer’s ability to see differences between them. This measure is a JND map. A JND map is a function of the two images, and the appropriate situational parameters that describe the viewing conditions. We can write the measure
as JND(Iarb, Istd, x, y, t, vd, h, l, u), where Iarb is an arbitrary image and Istd is the standard image; x and y are the spatial sampling locations relative to the observer's retinal sampling and not the spatial resolution of the images; t is the observer's temporal sampling rate; vd is a vector describing the viewing distance to each image; h is a heading vector describing the observer's line of sight to the respective images being compared; l is a vector describing the ambient lighting; and u is a vector describing observer uniqueness. An example of a JND map comparing a standard image to the same image reconstructed at a lower resolution is shown in Figure 2.4. The computation
Figure 2.4 On the left is a quarter zone plate as the standard image and on the right is the same image reconstructed on a simulated display with one-third the number of pixels. The upper image is the difference between the standard and the lower resolution image. The bottom image is a JND map where just noticeable differences are displayed in pseudo color from black (no difference) through red to yellow (very noticeable difference). The JND map in this figure was created by the Sarnoff vision model [4] and assumes that an image h units high is being viewed from a distance of 9h.
of a JND map is described in [5]. A perfect image by this definition occurs whenever the JND map summed over the whole image is zero, i.e. Σimage JND(Iarb, Istd, x, y, t, vd, h, l, u) = 0. In this narrow sense all images are trivially perfect at some viewing distance, usually quite a large distance, or for reflective displays when viewed in a dark environment. Although this is obviously a trivialization, it points out the fundamental situational dependency of the concept of image quality; it will always depend upon situational factors, many of which cannot be controlled by the display. The visibility of differences or JNDs always diminishes with increasing viewing distance. This is illustrated in Figure 2.5. The darker JND maps signify reductions in discriminability as viewing distance increases from 9 to 38 picture heights. The reader can verify this prediction by increasing the viewing distance to the right and left images in Figure 2.4.
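The Sarnoff vision model used to compute Figure 2.4 is not reproduced here, but the structure of the measure can be sketched in code. The following Python fragment is only a schematic illustration: toy_weber_jnd is a made-up placeholder standing in for a real vision model, and the situational viewing parameters are reduced to keyword arguments.

```python
import numpy as np

def toy_weber_jnd(i_arb, i_std, weber_fraction=0.02, **viewing_params):
    """Placeholder visibility model: a pixel difference counts as one JND when it
    equals a fixed Weber fraction of the standard image's local intensity.
    A real model (e.g. the Sarnoff model cited in the text) would also use the
    viewing distance, heading, ambient light, and observer parameters."""
    i_arb = np.asarray(i_arb, float)
    i_std = np.asarray(i_std, float)
    threshold = weber_fraction * np.maximum(i_std, 1e-6)
    return np.abs(i_arb - i_std) / threshold

def jnd_map(i_arb, i_std, model=toy_weber_jnd, **viewing_params):
    """Per-location visibility of differences between an arbitrary image and the
    standard image under the given viewing conditions."""
    return model(i_arb, i_std, **viewing_params)

def is_perceptually_perfect(i_arb, i_std, **viewing_params):
    """'Perfect' in the sense of the text: the map of visible (supra-threshold)
    differences sums to zero."""
    jnd = jnd_map(i_arb, i_std, **viewing_params)
    return np.where(jnd >= 1.0, jnd, 0.0).sum() == 0

std = np.full((4, 4), 100.0)
test = std.copy()
test[0, 0] = 104.0                          # a 4% change against a 2% Weber placeholder
print(jnd_map(test, std).max())             # 2.0 -> noticeably different at that pixel
print(is_perceptually_perfect(test, std))   # False
```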
2.4 Image Bandwidth or Considering a Display or the Eye as an Information Channel Today an electronic display consists of r rows and c columns of pixels that can take on a variety of states determined by a control signal. There are r × c pixels altogether on the screen. The number of states a pixel can assume in a digital display can be measured in bits. A pixel may have 2^n states, where n is the number of bits. The control signal is a digital value DC, a counting number, with 0 ≤ DC ≤ 2^n − 1, that controls the intensity of the pixel. In a color display these bits control three or more color primaries, with DC1 + ... + DCi + ... + DCm = n, where DCi is the number of bits controlling the ith primary. In purely physical terms each pixel state is associated with a characteristic energy spectrum of light that depends upon the display and the ambient illumination.
Figure 2.5 Three JND maps comparing the standard image at the left to two lower resolution renderings of this image (top row at one fourth the number of pixels and bottom row at one ninth the number of pixels), shown in the next column, compare the visibility of jaggies as a function of viewing distance. The distances, measured in zone plate heights, are 9, 19, and 38 picture heights away from the image. The JND maps become increasingly dark, signifying that the observer's ability to see differences in the two lower resolution images is diminishing with increasing viewing distance.
The rate at which all pixels on the screen can be changed defines the frame rate or temporal frequency, f, of the display. The channel capacity of a display is r × c × f × n bits per second. It is reasonable to make an analogy between the sampling within the visual system and this data structure for the image on a display. The distal signal in nature is continuous in every dimension, but the punctate spatial sampling by the eye illustrated in Figure 2.1, the finite time required to generate a neural signal, and the limitations in representing the amplitude of that signal at each space-time location determine the channel capacity of the eye. Just as a video signal can be quantified in bit rates, for example a 1080p HD display requires about 3 × 10^9 bits/sec, an equivalent measure can be estimated for the eye. The optic nerve contains about 1.5 × 10^6 fibers. If each fiber generates a signal every 125 msec, or equivalently at 8 Hz, and if the amplitude coding is approximately 10 bits per signal, that gives an upper limit on the channel capacity of about 1.2 × 10^8 bits per second. The eye's channel capacity is at least an order of magnitude less than a typical HDTV set's channel capacity. Compared to the information in the distal signal, the amount of information available on the display or to the visual system of the observer is a great deal smaller. Suppose that every pixel on the display screen is changed randomly at every frame time; this is the most information that the display can convey per unit time. A recognizable object rendered on a display in a series of frames contains significant correlation from pixel to pixel within individual frames and a high degree of correlation between frames. Images of objects that we can recognize as unique and distinct from noise or other objects contain many orders of magnitude less information than a pattern of random noise. The reason images on displays are useful and look like real objects is that so much information about differences is lost in the sampling process.
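A quick back-of-the-envelope check of the capacity figures quoted above, written out in Python. The 60 Hz refresh and 24 bits per pixel used for the 1080p example are assumptions; the text quotes only the resulting ~3 × 10^9 bits/sec.

```python
def display_capacity_bps(rows, cols, frame_rate_hz, bits_per_pixel):
    """Raw channel capacity of a display: r x c x f x n bits per second."""
    return rows * cols * frame_rate_hz * bits_per_pixel

# A 1080p panel, assuming a 60 Hz refresh and 24 bits per pixel:
hdtv_bps = display_capacity_bps(1080, 1920, 60, 24)   # ~3.0e9 bits/sec

# Crude upper bound for the eye, using the numbers in the text:
# ~1.5e6 optic-nerve fibers, each signalling at ~8 Hz with ~10 bits of amplitude coding.
eye_bps = 1.5e6 * 8 * 10                               # ~1.2e8 bits/sec

print(f"1080p display: {hdtv_bps:.1e} bits/sec")
print(f"eye, upper bound: {eye_bps:.1e} bits/sec")
print(f"display/eye ratio: {hdtv_bps / eye_bps:.0f}x")  # more than an order of magnitude
```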
2.5 The Control Signal and Scaling for Rendering The control signal is specified by a set of three independent dimensions: spatial location on the screen, a time of occurrence or frame in the case of video, and the signal intensity at each space-time location. The image capture process converts the continuous natural signal into a digital signal with discrete
values for all three dimensions. The discrete sampling process may generate visible artifacts as a consequence. Reconstructing the image signal on a display may additionally require scaling the control signal. Suppose a control signal consisting of r′ rows, c′ columns, f′ frames, and n′ bits per pixel is to be reconstructed on a display with corresponding parameters r, c, f, and n, where r′ ≠ r, c′ ≠ c, f′ ≠ f, or n′ ≠ n. In order to render this signal it must be re-sampled or scaled to fit the display. The re-sampling process can create additional artifacts. A scaler, a hardware device that processes the signal, is employed for this function. If there are not enough locations, levels or frames in the signal to control all of the pixels, frames or levels available on the display, the scaler must produce the missing control signals, and if there are more data in the signal than required, the scaler must reduce the signal to fit the display. Figure 2.6 illustrates the scaling process changing the number of pixel locations or the number of tone scale levels. Whether or not artifacts created by scaling will be visible depends upon the
Figure 2.6 The center image is a 24-bit 640 × 480 image. On the left the image has been down-sampled by a factor of 8 to a 24-bit 80 × 60 image; notice the stair stepping along slanted edges. On the right the center image has been down-sampled to a 3-bit 640 × 480 image; notice the banding produced by the limited number of intensity levels available. (See Plate 1).
characteristics of the sampling and neural processing performed by the visual system. Artifacts only matter when they are visible to an observer. Visibility will always be situation dependent. Viewing distance, light level in the room, contrast, and temporal rate at which the signal is reconstructed will turn out to be the most important situational variables that determine what can be seen on the display. The visual system samples the retinal image of the display spatially, temporally, and intensively. It gathers data from the retinal image at discrete locations on the retina as shown in Figure 2.1. The individual photoreceptors have a temporal tuning characteristic or impulse response. They have a spectral tuning characteristic with respect to the wavelength of electro-magnetic radiation. Information is lost in this sampling process. Not seeing a feature in a scene imaged on the retina due to a lack of attention is an example of information lost during neural cognitive processing of the visual signal. Information is lost due to the spatial coarseness of the photoreceptor locations in the retina; some is lost due to the narrow spectral tuning; some information about rapid signal change is lost due to the long integration time required by the photoreceptors and neural networks; and amplitude information is lost due to veiling glare in the retinal image and to the limited dynamic range of the neural message and the number of levels that can be represented in the neural signal.
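As a concrete (if deliberately naive) illustration of the scaler's job described earlier in this section, the sketch below re-samples an image to a new pixel grid by nearest-neighbour selection, assuming NumPy; production scalers use filtered interpolation precisely to suppress the artifacts this simple approach produces.

```python
import numpy as np

def nearest_neighbour_scale(image, out_rows, out_cols):
    """Re-sample a 2-D image to a new pixel grid by picking, for every output
    location, the nearest source pixel. Simple and fast, but a common source
    of jaggies and banding when the size ratios are not integers."""
    in_rows, in_cols = image.shape
    row_idx = np.arange(out_rows) * in_rows // out_rows
    col_idx = np.arange(out_cols) * in_cols // out_cols
    return image[row_idx[:, None], col_idx]

# Down-sample a 640 x 480 frame to 80 x 60, as in Figure 2.6, then scale it back up:
frame = np.random.randint(0, 256, size=(480, 640), dtype=np.uint8)
small = nearest_neighbour_scale(frame, 60, 80)
blocky = nearest_neighbour_scale(small, 480, 640)   # visibly blocky reconstruction
```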
2.6 Jaggies Jaggies are a salient spatial artifact produced on digital displays. They appear as a sawtooth- or stair-step-like serration along edges that should be smooth and continuous, and they occur when an edge is oriented at an angle with respect to the rows and columns of pixels on the display screen. The jaggies are illustrated on the right side of Figure 2.7, below. The image on the left is a simulation of how an image signal might appear on a high spatial resolution display, and the image on the right is a simulation of the same signal rendered on a much lower spatial resolution display. Spatial artifacts can
Figure 2.7 The graphic objects, line segments, on the left were rendered at 300 ppi and the right rendered at 15 ppi. The two rendered images illustrate that coarse spatial sampling on a rectangular grid produces jaggies whereas the same signal rendered at a higher ppi does not. The jaggies result along edges oriented at an angle relative to the rectangular pixel grid of the display. The jaggies on the right become less apparent as the viewing distance is increased and/or as light level is reduced. This illustrates the situational nature of image quality.
be produced either as a result of the limitations of the display device or by processing. A low-resolution image signal rendered on a high-resolution display may contain jaggies created by the signal processing required to scale the signal; visible artifacts are not always the consequence of a display limitation. To predict the visibility of a spatial artifact such as the jaggies, two sampling densities and the viewing distance must be known. The number of pixels per unit length on the screen determines the physical dimension of the smallest image features that can be represented on the screen. Pixel pitch is frequently stated in the English units of pixels per inch (ppi). Ppi is a measure of the spatial resolution of the screen. The width of a line drawn on a piece of paper by a pencil depends upon the width of the pencil lead and is independent of the orientation of the line. This is not the case on a digital display. Here, the line thickness depends upon pixel pitch and edge orientation. This feature of digital displays is illustrated in Figure 2.7, above. The slanted lines on the right are thicker than the vertical and horizontal lines because the pixels are squares. The slope −1 line on the right is thicker by a factor of √2 than the vertical or horizontal lines, all of which are one pixel wide. The corresponding diagonal lines on the left are not the same thickness as the vertical and horizontal lines for the same reason, but the variation in thickness on the left is less apparent due to the higher ppi and the eye's inability to see these differences. Higher resolution displays support a finer grain spatial quantization, but the graphics engine used to render the image signal must also be programmed to compensate for line thickness variations due to pixel geometry. How well the algorithm compensates for this effect is not a feature of the display; it is a feature of the algorithms used to control the display. When a visible artifact exists it is not always a simple matter to determine which element, i.e. the algorithm or the display, is most responsible for the outcome. The sensory capabilities of our eyes limit our ability to detect these differences, so ultimately it is the eye's ability to detect these effects, caused either by a limitation inherent to the display or in the rendering algorithm, that determines image quality. Manufacturers market displays by the screen's pixel count, for example 1920 × 1080, and only indirectly by the more important pixel pitch, by stating the screen diagonal and aspect ratio in the product specifications. How many pixels are on the screen determines the information channel capacity of the screen, but it is the pixel pitch that is one of the three primary dimensions needed to determine the likelihood that an observer will see the jaggies and other artifacts. Pixel pitch is a determiner of image quality, not the pixel count.
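A small illustrative sketch (not from the text): rasterizing the same slanted line by simple rounding onto a coarse and a finer pixel grid and printing the result as ASCII makes the stair-stepping, and its dependence on pixel density, directly visible.

```python
def rasterize_line(slope, width_px, height_px):
    """Rasterize y = slope * x one column at a time by rounding to the nearest
    pixel row -- the crudest scheme, and the one that produces stair-step
    jaggies along any edge that is not horizontal or vertical."""
    grid = [['.'] * width_px for _ in range(height_px)]
    for x in range(width_px):
        y = round(slope * x)
        if 0 <= y < height_px:
            grid[height_px - 1 - y][x] = '#'
    return '\n'.join(''.join(row) for row in grid)

print(rasterize_line(0.4, 16, 8))    # coarse grid: large, obvious stair steps
print()
print(rasterize_line(0.4, 64, 32))   # 4x the pixel density over the same area:
                                     # each step is four times smaller
```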
The observer's retinal sampling density – the number of cone photoreceptors that will encode the retinal image of the display – is the second important sampling density that must be considered to determine a display's ability to produce artifact-free imagery. Retinal sampling resolution is stated in samples per degree because images of objects are formed optically on the retina and will scale with the distance of the eye to the objects. To know how many photoreceptors will sample an electronic display, the viewing distance to the display must be known. In Figure 2.1 it is evident that retinal photoreceptor sampling in the fovea achieves the highest sampling density and that individual cone densities vary substantially across people. To perceive a spatial change the observer's retina must gather at least two contiguous samples coincident with the image feature and at roughly half the scale of the feature in the retinal image. This is a biological instantiation of the Nyquist sampling theorem. Every individual has a slightly different cone sampling density in the foveal region of best spatial vision, as is apparent in Figure 2.1. The variation across people can be quite large. The foveal cone spacing in the three foveal samples shown is between 1.5 μm and 2 μm on average. If the eye were perfectly spherical with a radius of 17 mm and centered on the optical nodal point, 1° would correspond to 297 μm, so the sampling density would vary for these three retinas from 200 photoreceptors per degree to 150. It would be reasonable to suppose that the Nyquist limit would therefore be 100 to 75 cycles per degree based upon this observation. This measure is not sufficient to predict the ability to see very small bars. First there is the resolution loss due to the eye's optical diffraction limit. The pupil of the eye in a well-illuminated environment is around 2 mm in diameter for the average person. The Airy disc at 555 nm is approximately 3.8 μm in radius, so diffraction will degrade the eye's spatial resolution. Other optical aberrations such as lens blur and scatter further reduce spatial sensitivity by degrading retinal image focus and by adding veiling glare produced by stray light that reduces image contrast. For these reasons we would expect spatial resolution to be less than the limit inferred by only considering the number of photoreceptors per degree of visual angle based upon the anatomical data from Figure 2.1.
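The arithmetic in the preceding paragraph, written out as a short calculation; the 17 mm eye radius and the 1.5–2 μm foveal cone spacing are the values quoted above.

```python
import math

EYE_RADIUS_MM = 17.0   # idealized spherical eye centered on the optical nodal point

# Retinal distance subtended by one degree of visual angle: ~297 um.
um_per_degree = EYE_RADIUS_MM * 1000.0 * math.pi / 180.0

for cone_spacing_um in (1.5, 2.0):                      # foveal cone spacing from Figure 2.1
    samples_per_deg = um_per_degree / cone_spacing_um   # ~200 and ~150 samples/deg
    nyquist_cpd = samples_per_deg / 2.0                 # ~100 and ~75 cycles/deg
    print(f"{cone_spacing_um:.1f} um spacing: {samples_per_deg:.0f} samples/deg, "
          f"Nyquist limit ~{nyquist_cpd:.0f} cycles/deg")
```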
2.7 Hyperacuity An unusual perceptual ability, hyperacuity, often misleads people to overstate the spatial capabilities of the eye. A small difference in a pair of retinal samples underlying an image on the retina may be repeated many times along the extent of an image feature. The perceptual effect of the sharpness of an edge formed by a feature and its extent are not the same in terms of the visual salience of the feature. The nervous system can use the aggregated differences over many pairs of photoreceptors within oriented receptive fields2 on the retina coextensive with the edge or feature to detect spatial variations in the image. Because we can aggregate retinal samples and process them in complex ways we can see a vernier offset much smaller than the photoreceptor sampling resolution. This perceptual ability depends upon the length of the edge and decreases as the edge length decreases; this is illustrated in Figure 2.8. Westheimer [6] has called this perceptual ability hyperacuity because it exceeds the foveal photoreceptor sampling density by better than an order of magnitude. It depends upon our ability to perceive intensive differences in the signals generated over many photoreceptors. It should not be confused with the sampling density of the retina or the ability to see very small features in an image; hyperacuity is a second order phenomenon related to retinal spatial densities but dependent as well on neural processing over large extents of the retinal surface. Very small image features do not benefit from this neural processing to extract correlation and average over large extents within the image.
2 The study of the visual nervous system using small electrical probes has discovered contiguous spatial regions on the retina that converge upon a single neuron. These regions are called receptive fields because stimulating photoreceptors within the region generates an electrical signal in the microelectrode probe near a second order neuron in the visual sensory pathway. The second order neuron could be a ganglion cell in the optic nerve or something more proximal to the visual cortex.
Figure 2.8 Line pairs stacked vertically with the lines displaced by one pixel horizontally are positioned above each tick mark on the bottom of the figure. Starting on the left of the figure the two lines are each two pixels long. Each subsequent pair moving to the right doubles in line length. All lines are two pixels wide and offset by one pixel. For the longer lines the offset is perceptible whereas it is not for the shorter lines. This phenomenon is called hyperacuity and depends upon the visual nervous system’s ability to use aggregations of data along edges in the retinal image.
2.8 Bar Gratings and Spatial Frequency The behavioral performance measure used to characterize the eye's spatial sensitivity is the threshold contrast required to see a periodic pattern such as a bar grating. A bar grating is a series of light and dark bars of equal width and height, as illustrated in Figure 2.9. The number of bar pairs
Figure 2.9 Two bar patches of 8 bars or 4 cycles are shown in this figure at a distance d and 2d from the eye. The number of cycles/deg will depend upon the viewing distance, so for any periodic grating, the spatial frequency at the eye depends on the viewing distance.
per patch width is one way to characterize the grating. The highest spatial frequency bar pattern that a display is capable of producing occurs when every successive row or column of pixels alternates between two brightness levels. For every viewing distance relative to a particular bar grating this can be characterized as line pairs or cycles per degree of visual angle (cycles/deg). This is illustrated in Figure 2.9. The threshold contrast required to see bars, as opposed to a uniform flat field, at each spatial frequency is a measure of the spatial sensitivity of the eye. This has become the standard method to characterize the spatial tuning characteristic of the eye. There are two common methods used to compute the spatial frequency of the bar grating: the screen-height method and the 1° patch method. They both assume a normal line of sight to the screen and yield virtually identical values when the full screen angle is 1° or less. If pp is the pixel pitch and d the viewing distance measured in the same units, then the number of line pairs per degree is given by lp/deg = (pp/2) · d · tan(1°). Some display standards recommend viewing distances in units of screen heights. If the viewing distance to the screen is h screen heights and the number of rows of pixels forming one screen height is r, then lp/deg = r / (2 arctan(1/h)), with the arctangent expressed in degrees.
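Both conversions as small Python helpers, a sketch assuming the pixel pitch is given in pixels per inch and the viewing distance in inches (any consistent length units work for the first formula); the example values are the handheld viewing conditions quoted later in this chapter.

```python
import math

def lp_per_deg_patch(pixels_per_inch, viewing_distance_inches):
    """1-degree patch method: lp/deg = (pp / 2) * d * tan(1 degree)."""
    return (pixels_per_inch / 2.0) * viewing_distance_inches * math.tan(math.radians(1.0))

def lp_per_deg_screen_heights(rows_per_screen_height, screen_heights):
    """Screen-height method: lp/deg = r / (2 * arctan(1/h)), arctan in degrees."""
    return rows_per_screen_height / (2.0 * math.degrees(math.atan(1.0 / screen_heights)))

# Handheld example quoted later in the chapter: 100 and 170 ppi displays
# viewed from 6 to 30 inches.
print(round(lp_per_deg_patch(100, 6)))    # ~5 lp/deg
print(round(lp_per_deg_patch(100, 30)))   # ~26 lp/deg
print(round(lp_per_deg_patch(170, 6)))    # ~9 lp/deg
print(round(lp_per_deg_patch(170, 30)))   # ~44 lp/deg
```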
Entertainment displays are viewed from a large distance, typically more than two meters. A home viewer survey by the BBC [7] found that the typical viewing distance for television was 8.1 screen heights, with a near distance of 4.8 screen heights. Using the second formula, for PAL TV the typical screen resolution at the viewer's eye point found in the survey was between 20 and 40 lp/deg. Small handheld displays used in cell phones and PDAs have fewer pixels than TVs and laptop screens and they tend to be viewed at close range, although as people age the ability to focus near to the eye diminishes. A younger person can easily view a cell phone or PDA display at short range where the jaggies may be visible. The visibility of the jaggies depends upon the scale of the feature in the retinal image and the contrast at the edge of the feature. Contrast is a physical property of the image. For a bar grating the contrast is a function of the intensity of the dark and light bars. We have not yet defined what is meant by luminance3, the perceptual measure of intensity, but we will see that it is proportional to a weighted sum of the wavelength-dependent radiometric intensities of the light producing the signal at any region in the image. An intuitive definition of luminance is a measure of the eye's ability to see a change in intensity either in space or time, i.e. ∂I/∂x∂y or ∂I/∂t.
2.9 Three Measures of Contrast and Weber's Law Contrast is a critical parameter in predicting the visibility of any image feature, distortion or artifact in a displayed image. The contrast in a bar grating or for any image feature can be computed in several ways. If the luminance of the dark bar is LD and the luminance of the brighter bar is LB, simple contrast is defined as LB/LD. The Weber contrast is (LB − LD)/LB, i.e. ΔL/L, and is sometimes referred to as increment contrast. Michelson contrast is (LB − LD)/(LB + LD). Display manufacturers typically report a variety of simple contrast measures: the ratio of the luminance of a white screen to a black screen, or the contrast ratio comparing different regions of the screen set to the same intensity value, or the simple contrast of black and white checks when the full screen is displaying a checkerboard of very large checks and the same contrast measure when the check size is reduced. These measures evaluate important electrical and optical properties of the display: how uniform the levels are across the screen, and how much electrical or optical cross talk there is in the display as the feature size goes from large to small check sizes. This is useful information because it is correlated with the electronic circuits that control and power the screen and with the optical properties of the display. Simple contrast, however, is not very predictive of image quality. For example, a screen may be considerably brighter in the center than at the edge, yet an observer may not be able to see this variation. The checkerboard contrast may reduce with check size, and here too the observer may not be able to see the difference. In the vision research literature increment and Michelson contrasts have played a central role in characterizing sensory systems. A universal property of sensory systems identified in 1834 by the physiologist E.H. Weber is known today as Weber's Law [9]. This law states that the change in intensity of a physical quantity such as light, sound, or weight required for an observer to perceive the change is a constant fraction of the level or magnitude of the quantity. For example, to feel that an object is heavier or lighter than another object of similar size, the weight difference must be proportional to the weight of one of the objects. For seeing a change in the intensity of light, if I is the intensity of one light then the other light must have an intensity I + ΔI, where ΔI/I = k, a constant. This relationship is known as Weber's Law and the constant k is known as the Weber fraction.
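The three contrast measures and the Weber relation as one-line Python helpers; the sample luminance values are arbitrary and only illustrate the definitions.

```python
def simple_contrast(l_bright, l_dark):
    """Simple contrast: LB / LD."""
    return l_bright / l_dark

def weber_contrast(l_bright, l_dark):
    """Weber (increment) contrast: (LB - LD) / LB."""
    return (l_bright - l_dark) / l_bright

def michelson_contrast(l_bright, l_dark):
    """Michelson contrast: (LB - LD) / (LB + LD)."""
    return (l_bright - l_dark) / (l_bright + l_dark)

def just_detectable_increment(intensity, weber_fraction):
    """Weber's Law: the just-detectable change is a constant fraction of the level."""
    return weber_fraction * intensity

# Arbitrary example luminances in cd/m^2:
print(simple_contrast(120.0, 80.0))      # 1.5
print(weber_contrast(120.0, 80.0))       # ~0.33
print(michelson_contrast(120.0, 80.0))   # 0.2
```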
3 The CIE defines luminous flux, in lumens, as F = Km ∫ V(λ)E(λ)dλ, where E(λ) is the power spectrum of the light in watts, V(λ) is the photopic luminous efficiency function for the standard observer, and Km is the maximum luminous efficiency (680 lumens/watt at λ = 555 nm, for which V(λ) = 1). [8]
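A numerical sketch of the footnote's definition F = Km ∫ V(λ)E(λ)dλ. The Gaussian stand-in for V(λ) and the flat 1 W test spectrum are assumptions made purely for illustration; real calculations use the tabulated CIE V(λ) data.

```python
import numpy as np

K_M = 680.0   # maximum luminous efficiency quoted in the footnote, lumens/watt at 555 nm

def v_lambda_approx(wavelength_nm):
    """Rough Gaussian stand-in for the CIE photopic V(lambda) curve, peaking at
    555 nm; not the real tabulated function."""
    return np.exp(-0.5 * ((wavelength_nm - 555.0) / 45.0) ** 2)

def luminous_flux_lm(wavelengths_nm, spectral_power_w_per_nm):
    """F = K_m * integral of V(lambda) * E(lambda) d(lambda), by a simple Riemann sum."""
    d_lambda = wavelengths_nm[1] - wavelengths_nm[0]
    return K_M * np.sum(v_lambda_approx(wavelengths_nm) * spectral_power_w_per_nm) * d_lambda

wl = np.arange(380.0, 781.0, 1.0)        # the visible band used in the text
flat = np.full_like(wl, 1.0 / 400.0)     # 1 W spread evenly across 380-780 nm
print(luminous_flux_lm(wl, flat))        # well under 680 lm: most of the power is off-peak
```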
W.S. Stiles measured the Weber fraction of the visual threshold for detecting a short-duration increment, a circular spot of light, flashed superimposed onto a larger circular background field. The experimental paradigm required the observer to adjust the intensity of the increment or the field to make the increment just visible on the field. A report published in 1932 by Stiles and Crawford [10] described a characteristic function for the threshold for seeing the increment as a function of the intensity of the field. This characteristic function has been found to fit detection threshold measurements made under a wide variety of conditions, including monochromatic increments and adapting fields, and different spatial and temporal waveforms for the increment or the field [11]. A template for the characteristic function was published in tabular form in Wyszecki and Stiles [8, p. 532] and is shown in Figure 2.10. The graphic in the figure illustrates the visual target arrangement Stiles used in these measurements. The function can be plotted in two ways: as ΔI = g(I), the solid blue curve, or as ΔI/I = g(I), the dashed red curve in the figure.
Figure 2.10 The increment threshold versus field intensity template discovered by W.S. Stiles during his investigation of the eye’s sensitivity to spatially localized changes in intensity. The graphic superimposed upon the graph is the stimulus arrangement used by Stiles. The threshold intensity for a superimposed increment of light, the blue cylinder, upon a steady background light, the yellow flatter cylinder, is determined as a function of the intensity of the background. The blue curve is the template. The red dashed curve is the Weber ratio at each adapting field or background level.
There are three distinct regions in the function. At low field intensities, shown on the left in the figure, the field intensity has no effect on the threshold for seeing the increment – this is most evident in the blue curve of the template in Figure 2.10. At these intensities the absolute sensitivity of the dark-adapted eye is being measured. As the field intensity is increased it begins to control the threshold, requiring more intensity in the increment for the increment to be visible. This region is shown in the blue curve in Figure 2.10 as it lifts off from the dark threshold plateau region at the left of the graph. In this region the intensity of the increment must be increased to make the increment visible on the background field. The ratio of increment to background intensity, the Weber fraction shown as the red dashed line in the figure, is decreasing in this region. As the field intensity continues to increase the Weber fraction continues to decrease, as shown by the red dashed line that plots the Weber fraction as a function of the background intensity. The eye is becoming relatively more sensitive as the background intensity is increased. Finally, a region known as the Weber-Fechner region is reached where the Weber fraction achieves its lowest level and stabilizes; further increases in the field intensity do not produce any additional
improvement in sensitivity in this region. The red symbols on the blue curve or the flattening of the dashed red curve illustrate this region in Figure 2.10. Weber’s Law applies in the Weber-Fechner region and this region is reached at light levels that are encountered in normal work and home lighting environments.
2.10 Contrast Sensitivity Function (csf) In 1967 van Nes and Bouman [12] reported a characterization of the eye’s contrast sensitivity function (csf) using bar gratings. They determined the threshold contrast required to see a bar grating as a function of the grating’s spatial frequency measured in cycles/deg visual angle. Bar gratings similar to those used are shown in Figure 2.11. The size of the patch must be large enough to accommodate at least one cycle of the bar grating and changing the size of the grating and the number of cycles included in the patch changes the threshold contrast required to see the bars especially when the patches are small [13].
Figure 2.11 Bar gratings similar to those used by van Nes and Bouman [12] are illustrated in this graphic. Both sets of nine bar gratings increase in spatial frequency by one octave per column moving from left to right, and increase in bar contrast from top to bottom. Each bar grating within a set of nine has the same average luminance averaged over the entire bar grating. The nine gratings on the left are at a lower average or DC luminance than the nine gratings on the right.
Observers viewed the bar gratings in an apparatus that allowed precise control of the mean intensity level of the retinal image of the grating. The threshold contrast required to see the bars as opposed to a uniform field was determined for a series of mean luminance levels and spatial frequencies. The two arrays of bar grating shown in Figure 2.11 illustrate the conditions of the experiment with three spatial frequencies (columns) at three contrast levels (rows) and two brightness levels (right and left arrays). Data from van Nes and Bouman [12] are re-plotted in Figure 2.12. The threshold Michelson contrast for seeing bars is plotted versus the average grating intensity measured in Trolands4. Each curve in Figure 2.12 corresponds to a unique spatial frequency with the average brightness of the bar
4 The Troland is a measure of retinal luminance. It is proportional to ordinary luminance measured in cd/m2 by a proportionality factor that depends upon pupil size.
Figure 2.12 The data from Figure 5 of van Nes and Bouman (1967) are re-plotted in this figure. Each curve is a spatial frequency; the threshold Michelson contrast is plotted versus the mean luminance of the bar gratings.
gratings increasing from left to right. For each spatial frequency the data follow a path similar to the red dashed curve in Figure 2.10. The Michelson contrast decreases until it reaches a minimum value in the Weber-Fechner region. In the Weber-Fechner region the eye achieves its best sensitivity; an additional increase in intensity will not result in better performance once the Weber-Fechner region is reached. For every spatial frequency, when the mean brightness level falls below the Weber-Fechner region the amount of contrast required to see the bars increases. This is why it is very difficult to read small print in dimly illuminated rooms. Peak performance for spatial frequencies less than 4 cycles/deg occurs at lower light levels than for higher spatial frequencies. The eye requires more light to see texture and detail in the image when these features are small. At high spatial frequencies above 40 cycles/deg more contrast is always required. In the dark areas of an image more relative contrast is required to see a feature, but at the same time a smaller absolute intensity difference may be perceptible. Constant amplitude random noise in the signal will be more apparent in dark areas of the image than in bright areas unless it is also at a very high spatial frequency. These observations follow the Weber's Law behavior of the csf. The data from Figure 2.12 can be plotted as a function of spatial frequency, and this plot is shown in Figure 2.13, below. The peak spatial sensitivity occurs within the frequency range of 4 to 8 cycles/deg. The contrast sensitivity functions in Figure 2.13 are an average over the observers who participated in the van Nes and Bouman measurements; there is no established normative csf, but these curves are typical. Individuals differ in their contrast sensitivity, as is expected from the anatomical data shown in Figure 2.1. At low brightness levels (Td ≤ 0.09) the eye behaves as a low pass spatial filter. As the intensity levels increase this changes and the eye behaves more like a band pass filter with a peak in the 4 to 8 cycles/deg range. By 50 cycles/deg, sensitivity has dropped by more than an order of magnitude from the peak sensitivity; this is to be expected from the sampling density of the cones and the optical diffraction limit of the eye. At very high spatial frequencies, above approximately 60 cycles/deg, even 100% contrast is not sufficient to see spatial details. A handheld display product in a cell phone or PDA will vary in dot pitch from 100 to 170 ppi. A young user may view the display from a distance as close as 6 inches or as far away as 30 inches. Over this range of viewing distances a 100 ppi display will vary between 5 and 26 lp/deg and the 170 ppi
Figure 2.13 The same data shown in Figure 2.12 is plotted as a function of spatial frequency in this graph. The individual curves correspond to different average brightness levels for the bar gratings. (From van Nes and Bouman, Fig. 5).
display will vary from 9 to 44 lp/deg. When the spatial frequency of the pixels falls below 10 lp/deg the viewer will notice the pixel structure of the display as a fine grid structure.
2.11 Veiling Ambient Light: Contrast Reduction from Glare Contrast, spatial scale and brightness determine what we can see. Making the features in the image smaller by increasing the viewing distance, lowering the light level and thereby reducing the average intensity level near a feature in the image, or reducing the contrast of the feature will reduce the visibility of information in the display. Ambient light reflecting off the screen will reduce visibility by introducing two sources of glare, diffuse and specular; both types of glare reduce the contrast of the image. Figure 2.14, below, illustrates the two kinds of veiling glare. The image on the left emulates what an image on a display would look like when viewed in a dark room where the only light is coming from the display. The middle image emulates the diffuse reflection of room light off the surfaces of the display. Any surface which is rough and bumpy relative to the wavelength of light, i.e. about 0.5 μm, will produce a diffuse reflection. The diffuse reflection of the room light appears as a haze added to the image produced on the display. The index mismatch between the air and the material of the display produces about a 4% reflection of the ambient light, and if the surface is rough this reflection will appear as a veiling haze over the image, reducing its contrast. If the display surface is flat the reflection will be specular and similar to the reflection from a mirror. The image on the right emulates both diffuse and specular glare that occurs when the viewing angle and the angle to the light source are matched and complementary. When this happens the light source is reflected directly into the viewer's eye, greatly reducing the contrast of the image produced on the display. Displays attempt to mitigate these two types of glare by putting a diffusing layer on top of the display surface. This works well for indoor applications where room lighting has intentionally been diffused to prevent strong specular reflections off objects from lamps in the room. Out of doors, where bright light sources such as the sun will illuminate the display, diffusers degrade the image produced by the display, especially if the display is not bright relative to the ambient. In this case the two images on the right of Figure 2.14 are typical.
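A small worked example of the contrast compression described above, assuming the ~4% front-surface reflection mentioned in the text and treating the diffusely reflected ambient as a uniform veiling luminance (a Lambertian approximation, L = ρE/π); the display and ambient levels chosen are illustrative only.

```python
import math

def veiled_simple_contrast(l_white, l_black, ambient_lux, reflectance=0.04):
    """Simple contrast with a uniform veiling reflection added to both states.
    The reflected luminance of the ambient is approximated as
    reflectance * illuminance / pi (perfectly diffuse reflection)."""
    l_veil = reflectance * ambient_lux / math.pi
    return (l_white + l_veil) / (l_black + l_veil)

# An illustrative display with a 200 cd/m^2 white and a 0.5 cd/m^2 black state:
print(veiled_simple_contrast(200.0, 0.5, ambient_lux=0))       # 400:1 in the dark
print(veiled_simple_contrast(200.0, 0.5, ambient_lux=500))     # ~30:1 under office lighting
print(veiled_simple_contrast(200.0, 0.5, ambient_lux=50000))   # ~1.3:1 in direct sunlight
```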
Figure 2.14 Diffuse and specular veiling glare are illustrated in this figure. The image on the left represents viewing a display in a dark or low ambient level environment. The middle image illustrates the diffuse reflection of ambient light degrading the contrast of the image produced by the display. Finally, the image on the right illustrates the combination of diffuse and specular reflections that can overwhelm the image produced by the display reducing its contrast sufficiently to obscure image features from view. The figures under each image represent the viewing geometry with the display lying flat and horizontal and with the viewer on the left and light source on the right. The specular angles are represented in yellow. (See Plate 2).
2.12 Dither: Trade Offs between Spatial Scale and Intensity Pixels in an electronic display are capable of producing a finite number of intensities. Dither is a signal processing technique used to expand the apparent number of intensity levels in the rendered image. For example, if a display has intensity levels Ij and Ij+1, the intensity (Ij + Ij+1)/2 can be dithered in a region on the screen by setting half the pixels to level Ij and the other half to Ij+1. For dither to work well the spatial pixel pattern produced by this technique should not be visible. Dither depends upon the properties of the csf: at high spatial frequencies beyond the peak in the 4 to 8 cycles/deg range, more contrast is required to see spatial variations. This is illustrated in Figure 2.15, below, with black and white checkerboards at three spatial frequencies, each increasing by an octave from left to right. Viewed from a distance these three checkerboards will appear to be identical except for a small variation in brightness due to the printing process.
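A minimal sketch of the idea just described: an intermediate level (Ij + Ij+1)/2 is approximated by filling a region with a checkerboard of the two displayable levels, so that the spatial average is the intended value while the alternation itself sits at a spatial frequency the csf attenuates.

```python
import numpy as np

def checkerboard_dither(rows, cols, level_low, level_high):
    """Fill a region with a checkerboard of two displayable levels so that the
    spatial average equals (level_low + level_high) / 2. Viewed from far enough
    away, only that average is seen; the checker pattern itself is invisible."""
    y, x = np.indices((rows, cols))
    return np.where((x + y) % 2 == 0, level_low, level_high)

patch = checkerboard_dither(8, 8, 100, 101)
print(patch.mean())   # 100.5 -- the intended intermediate intensity level
```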
Figure 2.15 Three checkerboard patterns are shown at three different scales on a 300 × 300 grid. When viewed from a distance all three of these checkerboard patterns will look identical because the check contrast falls below the threshold for the spatial frequency of the checkerboard pattern.
In Figure 2.16, below, a gray scale ramp was rendered at five different resolutions. Ramp (c) contains 256 intensity levels at 300 ppi. Ramps (a), (b), and (d) are the same ramp rendered with 16 intensity levels; additionally, in ramp (a) the pixels are two octaves bigger, 75 ppi, than the pixels in the other
Figure 2.16 A continuous grayscale ramp is sampled at 256 levels and displayed as ramp (c); no sampling artifact is visible. If this signal is rendered on a display with 16 gray levels the image signal must be scaled, and if the scaling is performed without dither ramp (d) results, with noticeable banding artifacts. If the 16 gray levels are dithered by signal processing performed with the scaling operation, then ramp (b) results, with few noticeable artifacts relative to ramp (c). If the display is spatially coarse with 16 levels, then ramp (a) results. If the display has binary gray levels, i.e. black and white, then ramp (e) results. All five ramps will look the same viewed from a large distance. The sampling and rendering process creates the visible differences shown in this illustration.
ramps. Ramp (e) contains only two intensity levels, black and white. From a distance these five strips look more or less identical. As the observer moves away from the page the pixels subtend smaller visual angles and the contrast required to see individual pixels increases, so the dithering becomes less apparent. Ramp (d) reveals another sampling artifact that can occur due to scaling, the tone-scale banding artifact. A signal with n intensity levels cannot be rendered on a display with m levels, where n > m, without re-sampling the signal to fit the display. The number of intensity levels in the signal must be down-sampled to match the number of intensity levels that can be displayed. If the down-sampling reassigns all of the intensities between two display levels to one display level, then the banding artifact seen in ramp (d) can occur. This artifact is also apparent in Figure 2.6, and for the same reason. Had signal processing been employed when the tone scale resolution in Figure 2.6 was reduced, the banding artifact could have been suppressed. This is illustrated in Figure 2.17,
Figure 2.17 The center image from Figure 2.6 is shown re-sampled in tone scale resolution from 24 bits to 3 bits in both images in this figure. The re-sampling on the right was combined with dithering, a non-linear signal processing step to reduce the visibility of the banding artifacts that are visible in the image on the left. (See Plate 3).
below. Signal processing to produce intermediate tone scale values, dithering, can be used to either reduce or increase the resolution of the control signal required to render the image signal onto the display. If the image signal is at a lower resolution than the display, signal processing must be used to scale the signal to match the resolution of the display. This kind of processing is required
routinely when a standard television signal at NTSC resolution is reconstructed on a 1080p HDTV display. In this case, if the scaling method simply replicates pixels, banding artifacts and jaggies will almost certainly appear in the image rendered on the higher resolution display. The signal processing used in the scaling step may take advantage of the limitations of human vision, employing techniques based upon the tuning characteristics of the eye to hide artifacts generated by the scaling process from the viewer. The artifacts in Figures 2.16 and 2.17 depend upon the contrast between adjoining levels within the gray scale and the slope of the gradient on the screen. Even these bands will disappear from view with increasing viewing distance; this phenomenon underscores the importance of the csf in determining artifact visibility. Ramp (b) does not exhibit the banding artifact that is amply apparent in ramp (d) because the dithering algorithm attempts to match the intensity level in the full resolution signal by mixing the intensity levels between bin boundaries within small regions of the ramp to achieve the appropriate average intensity in each region. It is also apparent from ramp (b) that the dithering was not always successful. The jaggies, banding, and dither are only apparent when the spatial scale of the pixels and the contrast at the pixel boundaries rise above the threshold of visibility. This point is illustrated with another signal processing technique, anti-aliasing, used to smooth slanted edges in drawings and in text. In Figure 2.18, below, the leftmost triangle demonstrates the jaggies. The pixels filling the stair steps in
Figure 2.18 The slanted edges in this figure have been coarsely anti-aliased by filling the stair-step edge pixels with values slightly darker than white, becoming progressively darker from left to right. By holding this page at increasing distance from the eye, the edges, starting with those on the right, will appear smooth. As the distance increases all of the slanted edges will eventually appear smooth. The upper right triangle is a 2x magnification of the edge to emphasize the anti-aliasing technique by making it more visible.
this triangle are white. The pixels filling the stair steps in the triangles to the right have been set to increasingly darker values relative to the background. The small triangle at the very right is a 2x magnification of the slanted edge. By viewing this figure from slightly different distances the edges can all be made to look smooth; that this occurs at different distances is due to the contrast of the lighter pixels 'filling' the stair steps.
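A minimal version of the coverage-based anti-aliasing emulated by hand in Figure 2.18 is sketched below: each pixel straddling a slanted dark/white edge is shaded in proportion to the fraction of its area on the dark side, here estimated by supersampling. The edge slope, image size, and supersampling factor are arbitrary choices for illustration.

```python
# Sketch of coverage-based anti-aliasing for a slanted dark edge on a white
# background. Each output pixel is shaded by the fraction of its area covered
# by the dark half-plane, estimated by 8x8 supersampling.
import numpy as np

def render_edge(width=64, height=64, slope=0.2, supersample=8):
    n = supersample
    # Sample the half-plane y > slope * x on a fine grid (1 = dark side).
    ys, xs = np.mgrid[0:height * n, 0:width * n] / n
    fine = (ys > slope * xs).astype(float)
    # Box-filter back to the display grid: mean coverage per pixel in [0, 1].
    coverage = fine.reshape(height, n, width, n).mean(axis=(1, 3))
    return 1.0 - coverage            # 1.0 = white background, 0.0 = dark region

aliased = render_edge(supersample=1)     # hard threshold -> jaggies
antialiased = render_edge()              # fractional values along the edge
print(np.unique(aliased).size, "gray levels vs", np.unique(antialiased).size, "gray levels")
```

The aliased rendering uses only black and white along the edge, while the anti-aliased one introduces intermediate grays whose contrast with the background is low enough to fall below threshold at a modest viewing distance.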
2.13 Three Display Screens with Text Imagery Figure 2.19, below, shows microscope images at 50x magnification of three different display screens. Individual pixels are indicated in the figure by the white rectangles. All three screens have square pixels consisting of RGB vertical sub-pixel stripes. On all three screens the sub-pixels can be independently manipulated by the control electronics. Each screen is rendering dark text on a white background. The display on the left is a CRT television and the pixel tessellation rule is called the delta triad. Alternating columns of pixels are staggered by half a pixel height relative to the adjoining column. The image on a CRT is created when the electron beam raster sweeps across the screen. A portion of one of the letters is shown in this image. A unique feature of a CRT display is that a pixel region defined by the shadow mask need not be entirely activated by the electron beam as it sweeps the screen. This is exactly what has happened to the screen pixel on the right that is darkened below its negative diagonal and not above it to form part of a letter character on the screen. Scattering within the phosphor layer also adds to the edge softening effect. These features of a CRT tend to reduce the visibility of the jaggies by naturally anti-aliasing edges within the image.
Figure 2.19 Three micrographs are shown of different display screens at 50x magnification. The left screen is from a shadow mask 36-inch diagonal CRT television with a 4:3 aspect ratio, the middle image is from a 46-inch diagonal LCD television with a 16:9 aspect ratio, and the right screen is from a 15-inch diagonal laptop computer screen with a 15:10 aspect ratio. The white squares indicate the location of RGB triads on each screen. The spatial resolution of these screens increases from left to right. (See Plate 4).
The LCD laptop screen shown on the right has the smallest pixels and the finest pixel pitch of the three screens. A 6-point font text image is shown on this screen without anti-aliasing; at this magnification jaggies are quite visible. The combination of the viewing distance, typically 0.5 m, and fine pixel pitch, approximately 105 ppi, makes jaggies difficult to see with the unaided eye on this screen, so anti-aliasing at this font size would not improve image quality; anti-aliasing at the pixel level would tend to thicken the stroke width within these text letters, making them less legible. The text algorithm that controls how letters are rendered on the screen does not employ anti-aliasing with text smaller than eight points. At eight points and above, anti-aliasing can be applied without distorting the letters by making them appear thicker or less legible; some computer operating systems anti-alias at the sub-pixel level, introducing a small chromatic effect that is below the threshold of visibility at this scale. Text below six points on this screen is illegible. The middle screen is a 1920 × 1080 LCD high definition television screen. A single pixel on this screen consists of six sub-pixels, two reds, two greens, and two blues, all of which can be independently controlled. On this screen a very sophisticated anti-aliasing scheme that includes sub-pixel addressing is being applied to soften the edges of the much larger fonts being displayed. The row of four highlighted pixels reveals how the signal forming a stroke within a letter transitions from white on the left to black on the right. Within this row of pixels the features of the sub-pixel anti-aliasing algorithm are notable. To the unaided eye the letter appears smooth and uniform from the typical viewing distance to this screen of several meters. The laptop screen has the highest resolution because it is used nearest to the eye, at a distance of approximately 0.5 m, putting the resolution at the observer's eye at approximately 15 lp/deg. TV screens are viewed at a greater distance than computer monitors or handheld devices, so the pixels can be bigger. The rule applied during television screen design is to match the resolution of the eye of a typical viewer, 20/20 vision, at the design eye point, or approximately 30 lp/deg. The BBC survey [7] found that the resolution available at the reported viewing distances ranged from 20 to 40 lp/deg. Handheld displays built into cell phones and PDAs are viewed from very short distances. A young user may view the display from as close as 6 or 7 inches. Older users tend not to view at such close range due to presbyopia, the loss with age of the ability to focus at short range. In Figure 2.20 the relationship between viewing distance and pixel pitch is shown for fixed resolutions at the observer's eye. The green band corresponds to pixel densities between 75 and 175 ppi, and the red band corresponds to user viewing distances. A display with 150 ppi will never fall below 10 lp/deg within this range of viewing distances.
Figure 2.20 The pixel pitch required as a function of viewing distance to produce a fixed resolution measured in lp/deg at the observer’s eye is shown in this figure. The green band corresponds to the range of resolutions found in PDAs and cell phones and the red band corresponds to the range of user viewing distances.
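The relationship plotted in Figure 2.20 follows from simple geometry: one line pair spans two pixels, so the resolution delivered to the eye is the reciprocal of twice the visual angle of one pixel. The sketch below computes it for a few of the ppi and distance values mentioned in the text; the exact lp/deg figures quoted in the chapter differ slightly because of rounding and the precise distances assumed.

```python
# Sketch of the pixel-pitch / viewing-distance relationship behind Figure 2.20.
# One line pair spans two pixels, so the angular resolution delivered to the
# eye is 1 / (2 * pixel angle).
import math

def line_pairs_per_degree(ppi, viewing_distance_m):
    pitch_m = 0.0254 / ppi                                # pixel pitch in meters
    pixel_deg = math.degrees(math.atan(pitch_m / viewing_distance_m))
    return 1.0 / (2.0 * pixel_deg)

print(round(line_pairs_per_degree(105, 0.5), 1))          # laptop screen, ~18 lp/deg
print(round(line_pairs_per_degree(150, 0.25), 1))         # 150 ppi handheld at 25 cm, >10 lp/deg
print(round(line_pairs_per_degree(300, 0.3), 1))          # print-like density at 30 cm
```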
2.14 Color Consumers prefer color displays to monochrome or limited color displays. Very few products succeed in the marketplace today that do not provide color imagery. Why do we see color, and what value does color have for communicating information? Helmholtz conjectured that 'Colours have their greatest significance for us in so far as they are properties of bodies and can be used as marks of identification of bodies' [14, p. 286]. Colors help us to identify objects and find them quickly in a cluttered scene. The color of food or the pallor of a face can suggest that the food should not be ingested because it could be spoiled, or that the person with the sickly pallor might be ill and best avoided. It is reasonable to suppose that color must be a property of the objects viewed. Color results from light interacting with objects, by being absorbed and reflected by them, but it also results from the way the nervous system encodes and processes the light imaged onto the retina. Isaac Newton, in his experiments with light, concluded that color is a perceptual and not a physical property of light. Newton's conclusion is based upon the observation that a mixture of different bands of the spectrum can look as if it is the same color as an entirely different and unique band of the spectrum. Based upon this observation he wrote, The homogeneal Light and Rays which appear red, or rather make Objects appear so, I call Rubrifick or Red-making; those which make Objects appear yellow, green, blue, and violet, I call Yellow-making, Green-making, Blue-making, Violet-making, and so of the rest. And if at any time I speak of Light and Rays as coloured or endued with Colours, I would be understood to speak not philosophically and properly, but grossly, and accordingly to such Conceptions as vulgar People in seeing all these Experiments would be apt to frame. For the Rays to speak properly are not coloured. In them there is nothing else than a certain Power and Disposition to stir up a Sensation of this or that Colour. For as Sound in a Bell or musical String, or other sounding Body, is nothing but a trembling Motion, and in the Air nothing but that Motion propagated from the Object, and in the Sensorium 'tis a Sense of that Motion under the Form of Sound; so Colours in the Object are nothing but a Disposition to reflect this or that sort of Rays more copiously than the rest; in the
Rays they are nothing but their Dispositions to propagate this or that Motion into the Sensorium, and in the Sensorium they are Sensations of those Motions under the Forms of Colours.
Sir Isaac Newton [15]
Light by definition is electromagnetic radiation that is visible to humans without any mechanical or physical aids. The primary sources of light in our environment until recently have been the sun, the stars, and emissions released by objects when they burn or are heated. The light reflecting off objects produces wave fronts that our eyes use to form images on our retinas. Since different materials in nature are differentially reflective with respect to wavelength, Helmholtz's conjecture seems plausible. When the nature of the radiation illuminating the objects changes, however, so does the image formed on the retina; so for the visual system to precisely sense the differential nature of reflected light it would also have to sense the illuminant and compensate for changes in it. Newton discovered that we are not very good at detecting the composition of light. The VIS band is bounded on each end for reasons determined by evolution. Photons of wavelengths slightly shorter than the VIS band, the UV band, interact with biological tissues and damage them. Our eyes have developed filters, the cornea and lens, which absorb these photons before they reach the retina and damage it. At the other end of the VIS band the longer wavelengths approach the black body (thermal) emission from our own bodies. If we could see into the IR band we would see emissions from our bodies, and if they were captured by the photoreceptors they would overwhelm the image formed by shorter wavelengths, just as the light in a brightly illuminated room will overwhelm the image on a display, veiling it with reflected room light. Our photoreceptors are tuned to be insensitive to light at wavelengths above 780 nm, thus avoiding this otherwise constant saturating signal source, our body heat. During the Cambrian explosion 540 million years ago hard-bodied animal life forms rapidly evolved. These were the first life forms capable of self-locomotion. Their emergence was rapidly followed by predatory behavior and the evolution of the biological sensory systems that are required to enable predation. Zoologists Michael Land and Dan-Eric Nilsson speculate that soon after this 'find and eat' or 'hide to survive' cat-and-mouse strategy evolved among living organisms, both the complex and the compound eye rapidly evolved, in as little as 500,000 years. Once vision became important for survival, Darwinian evolution produced many different eyes, including an eye similar to the human eye in function and form [16]. The VIS band is radiation that can be absorbed and sensed non-destructively by biological molecules, it is strongly reflected by a broad range of materials, and it is suitable for detecting moving objects. This is all consistent with Helmholtz's conjecture. Light reflects off objects in our field of view, forming the distal signal in the environment. The eye forms an image from the distal signal on the retina, the proximal signal. The proximal signal is encoded and processed by the visual nervous system. The image contains edges defined by the juxtaposition of more or less homogeneous regions differentiated by the characteristics of the signals within them. Cognitive processing categorizes the edges and regions, inferring objects in our field of view. We perceive the objects, the lightness and color of their surfaces, and the relative brightness of areas within the object surface that may be partially shaded by a shadow cast by another object within the scene.
The light in images formed on the retina can be characterized at each location in the image as a function on the VIS band, E(λ). The set of all possible functions {E(λ)} is large but constrained by the characteristics of natural light sources and the materials that reflect light. There are no naturally occurring emissive light sources in our environment that emit narrow bands or single wavelengths within the VIS spectrum. Historically, most of the materials that produce light when heated or burned are broadband emitters with smoothly varying emission spectra. Man-made light sources, invented in the last 200 years, however, can produce narrow band emission, for example lasers, LEDs and sodium vapor lamps, or broadband emission with narrow bands of intense radiation, for example the lines in a CCFL lamp or the emission spectrum of an electro-fluorescent phosphor. Few naturally occurring materials reflect light in
narrow bands – oil slicks and butterfly wings are the rare exceptions. Most materials encountered in nature are broadband reflectors whose reflectivity varies slowly as a function of wavelength. Most natural light sources are also broadband and smooth. The dimensionality of the set of light functions within the VIS band, {E(λ)}, characterizing the wave fronts produced by naturally occurring light sources and their light reflected from objects, is surprisingly low. Cohen [17] and more recently Maloney [18] suggest that a very low dimensional linear basis set will reconstruct most of these functions almost exactly. This observation helps us understand why human color vision is tri-chromatic, or three-dimensional. The eye has three photoreceptor classes, each with overlapping, broadly tuned action spectra. Photoreceptors are specialized neurons that contain biological molecules that are bleached by light. The bleaching of these molecules sets off a process that leads to the generation of a neural signal. The chemistry and kinetics of these molecules and of the bleaching and regeneration process are such that the amount of photopigment bleached is proportional to a weighted sum of the light, E(λ). The spectral tuning of these materials is now known [19] and the signals generated in the cone photoreceptors can be written as
S = ∫ E(λ)U(λ)S(λ) dλ,        [1a]
M = ∫ E(λ)U(λ)M(λ) dλ, and    [1b]
L = ∫ E(λ)U(λ)L(λ) dλ,        [1c]
where S(λ), M(λ), and L(λ) are the spectral tuning curves for the S-, M-, and L-cone photoreceptors, respectively. The function U(λ) is an absorption function unique to every individual; it represents the colorations, scattering, and absorption of light within the optical media of the eye. This function changes over every individual's lifetime and is different for each eye. The CIE standardization of these functions combines a normative individual's uniqueness function, U_norm(λ), with a linear transformation of U_norm(λ)S(λ), U_norm(λ)M(λ), and U_norm(λ)L(λ), so separating these functions to represent an individual observer is not practical. The CIE standard observer represents an average observer and not a particular person, so when one or two displays are set to a particular specification individuals can disagree about the quality of the rendered colors, or even about whether two displays set to the same standard look identical to each other; because of the uniqueness function they may not [20]. The triple ⟨S, M, L⟩ represents the momentary quantum catch of photons in the respective cone photoreceptors' outer segments where a homogeneous patch of light is imaged on the retina. A process of bleaching cone pigments underlies the transduction of light into a neural signal within the cone photoreceptor. These signals are linear with light, but the resulting neural action is not necessarily a linear function of this signal. Neurons exhibit strikingly non-linear behavior, so linear connectivity within the visual pathway is surprising if it exists at all. There is, however, behavioral evidence that suggests that an implicit linear signal may exist at a second order synaptic junction within the neural visual pathway [21]. The set {E(λ)} is very large. Consider what happens when two lights, a and b, with respective spectral power functions E_a(λ) and E_b(λ), are encoded into neural signals by the mapping functions [1a], [1b] and [1c]. Two functions that were potentially different at every wavelength are reduced to three quantum catch values, ⟨S_a, M_a, L_a⟩ and ⟨S_b, M_b, L_b⟩ respectively; these three values need not be different. Newton observed that by combining a narrow spectral band of light near 540 nm, which appeared green, with another near 600 nm, which appeared red, the mixture of these two bands in the correct proportion appeared indistinguishable to his eye from a narrow spectral band near 575 nm, which appeared yellow. Passing the mixture that looked yellow through a prism again
separated the light into red and green spectral bands, but the yellow spectral band, when passed through the prism, remained yellow. If a is the mixture of red and green and b the pure spectral yellow, what Newton observed was the condition ⟨S_a, M_a, L_a⟩ = ⟨S_b, M_b, L_b⟩. This phenomenon is called metamerism and is the result of the low dimension of the spectral sampling of the eye. This is why Newton proclaimed 'the rays . . . are not coloured'. Metamerism is also the reason that artists can make paintings and drawings that appear to have the same colors as objects in nature. It is also the reason why color photography and color video work: the colors seen on the screen are rarely spectrally identical to those of the objects they represent in the images reconstructed by the display device. The prisms that Newton used in his experiments are able to produce a quality of light that is rare in nature, monochromatic spectral bands. To perform these experiments Newton had to shutter his rooms to darken the space and thereby control the quality of the light he saw and measured in the spectral bands formed by the prism. Had the room been illuminated, the light in the spectral bands would have been combined with and veiled by light reflected off the many surfaces within the room. This would have distorted the quality of the light and rendered the measurements and observations Newton derived from these experiments very difficult or even impossible to interpret or record accurately. This was especially true because the spectra that Newton was able to create with sunlight and prisms were very dim compared to the average intensity of the light flowing into a room with windows. The daylight illumination would have washed out the spectra just as it washes out the image on an electronic display if that display is not very bright. Metamerism is the result of the low dimensionality of the mapping functions [1a], [1b] and [1c] relative to the potentially high dimensionality of the set {E(λ)}. Cohen and Maloney provide statistical evidence that the space {E(λ)} is highly constrained by the ways in which light is produced in nature and interacts with materials. The interaction of naturally occurring light sources with materials in nature has populated the set {E(λ)} with power spectral density functions that tend to be highly correlated, reducing the variety of lights encountered in nature to a relatively low dimensional linear space that can be well approximated by three samples and possibly perfectly spanned by not more than seven or eight basis elements. The colors we perceive as a quality of the surface of objects are not simply related to the local proximal signal generated on the retina. If the perception of color depended solely on the local proximal signal then we would rarely identify object colors correctly because, as the illumination changes throughout the day, so does the light reflected from surfaces. Our visual systems use not only the signals generated in each region within the retinal image to estimate the color of surfaces but also the relative relationships among the regions within the image of an object and the neighboring objects in the scene, and even the shadows and gradients within the object's boundary. All of these relationships change with the illuminant, and this global information is used by the nervous system to generate the color percept. How exactly this process works is still the subject of investigation.
What is known is that when the image is sparse with respect to familiar objects and relationships, for example if only an annulus and a center disk are visible, then people are very poor at correctly separating the illumination from the object surface. When the image is of a complex scene typical of what is constantly visible in our visual environment, people are quite good at this color-naming task. Tri-chromatic color vision evolved because three sampling primaries, combined with the ability to spatially and temporally process the image, satisfy the Darwinian imperative to correlate color experience with the surface properties of objects in the distal signal most of the time. Thomas Young in 1802 [22] was the first to appreciate the limitations of color perception and to propose that it was based upon stimulating three mechanisms in the eye. James Clerk Maxwell [23], who also made significant contributions to color science, said that Young 'saw that since this triplicity has no foundation in the theory of light, its cause must be looked for in the constitution of the eye . . .' Young '. . . attributed (trichromacy) to the existence of three distinct modes of sensation in the retina, each . . . produced in different degrees by different rays.'
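Equations [1a]–[1c] and the red-plus-green metamer of Newton's experiment can be made concrete with a short numerical sketch. The Gaussian cone curves and narrow-band lights below are illustrative stand-ins rather than the measured fundamentals of [19], and the eye-media function U(λ) is simply set to one; the point is only that two physically different spectra can produce the same ⟨S, M, L⟩ triple.

```python
# Sketch of the quantum-catch integrals [1a]-[1c] and a red + green = yellow
# metamer. The Gaussian "cone" curves and narrow-band lights are toy stand-ins.
import numpy as np

wl = np.arange(400, 701, 1.0)                       # wavelength grid, nm
gauss = lambda peak, width: np.exp(-0.5 * ((wl - peak) / width) ** 2)

S_fun, M_fun, L_fun = gauss(445, 25), gauss(540, 40), gauss(565, 45)   # toy cone curves
U = np.ones_like(wl)                                # eye-media absorption, assumed flat

def catches(E):
    """Numerically integrate E(l)U(l){S,M,L}(l) dl -> (S, M, L)."""
    return np.array([np.trapz(E * U * f, wl) for f in (S_fun, M_fun, L_fun)])

green, red, yellow = gauss(540, 5), gauss(600, 5), gauss(575, 5)       # narrow bands

# Solve for the green/red proportions whose M and L catches equal yellow's.
A = np.column_stack([catches(green)[1:], catches(red)[1:]])
w_green, w_red = np.linalg.solve(A, catches(yellow)[1:])
mixture = w_green * green + w_red * red

print("yellow :", np.round(catches(yellow), 3))
print("mixture:", np.round(catches(mixture), 3))    # same M and L; the spectra differ
```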
2.15 Making Color on Displays Displays use two methods to make color. The magnified images of the display screens in Figure 2.19 plainly show the red, green and blue sub-pixel structure underpinning the spatial multiplex method for making colors. It is not possible to program a single pixel to replicate a broad range of reflected lights, but a mixture of three more or less fixed primaries, each with a unique spectral characteristic, can be added together to 'look like' a broad range of reflected lights. This follows directly from the trichromacy of the eye. Whenever the 3-cone quantum catches within corresponding regions of the retinal image are the same, the color perceived will be the same. Because of this trivariant equivalence, every color made the same way anywhere on a display screen will always look the same to all observers as long as the display's color primaries are invariant over the display surface. Such matches are called on-screen isomers [20], because all regions set to the same RGB values will be physically identical if the display is uniform. The 'adding together' of the spatial sub-pixels results from the eye's inability to spatially resolve the sub-pixel elements. The laptop screen in Figure 2.19 has a pixel pitch of approximately 105 ppi. At a viewing distance of 19 inches (0.5 m) this produces 17 lp/deg. The sub-pixels in the vertical direction have the same spatial frequency, but in the horizontal direction, where the pixels are further subdivided into red, green and blue stripes, the spatial frequency is about 52 lp/deg. There is not enough contrast between the red, green and blue stripes for them to be visible. On this screen, making the entire image green and viewing from 12 inches reveals the vertical stripe pattern. The CRT in this figure has much larger pixels, and at a viewing distance of 12 to 20 inches from the screen the color structure is quite visible, as expected from the data shown in Figures 2.12 and 2.13. Spatial color multiplexing works because, at the viewing distances and scale used to construct the sub-pixels, the eye is incapable of resolving this fine detail. So a slightly displaced – by two sub-pixel widths – red, green and blue image is written to the screen and the eye 'adds' these images together because of its inability to resolve small details at these high spatial frequencies. The temporal response of the eye is also not very fast; only comic book characters can see a speeding bullet. Measurements of the sensitivity of the eye to flickering lights at a variety of intensity levels from dim to very bright have provided empirical evidence that the eye is insensitive to temporal variations much above 60 Hz [24]. If three color primaries are flashed one after another in rapid succession above this frequency, for example at a subfield frequency of 180 Hz, the eye's temporal response will not be fast enough to resolve the temporal variation and once again the color primaries will be 'added together'. Maxwell used this method with spinning sectored disks to study color mixing [23]. Time sequential methods have been used to reconstruct spatially complex color images. The CBS color system proposed in the late 1940s was a time sequence of three color separations of the image. This system was called color sequential television. For color sequential reconstruction to work well the eye must cooperate and remain relatively fixed in its direction of regard during the time sequential reconstruction.
Suppose an eye movement is evoked by some event off the screen that occurs during the time sequential presentation of the red, green and blue sub-fields. A loud sound or someone entering a room will often capture the attention of people in the room and produce an eye movement in the direction of the sound or person. These pre-attentive reflexive behaviors are hard-wired into all of us. The eye movement will result in a displacement of the color sub-frames, and an image similar to the one shown on the right in Figure 2.21 will momentarily result. This is called color break-up. Increasing the temporal rate reduces this effect.
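The size of the color break-up error can be estimated from the eye's rotation between successive sub-fields, as in the back-of-envelope sketch below; the saccade velocity, pixel density, and viewing distance are assumed values chosen only to show the trend, which is that the displacement between color fields shrinks in proportion to the sub-field rate.

```python
# Back-of-envelope sketch of color break-up: during an eye movement the red,
# green, and blue sub-fields land on different retinal positions. The saccade
# velocity, sub-field rates, ppi, and viewing distance are assumed values.
import math

def subframe_separation_pixels(saccade_deg_per_s, subfield_hz, ppi, distance_m):
    shift_deg = saccade_deg_per_s / subfield_hz          # eye rotation between sub-fields
    shift_m = math.tan(math.radians(shift_deg)) * distance_m
    return shift_m / (0.0254 / ppi)                      # expressed in display pixels

# A modest 200 deg/s saccade viewed at 0.3 m on a 150 ppi field-sequential display:
for hz in (180, 360, 540):
    px = subframe_separation_pixels(200, hz, 150, 0.3)
    print(f"{hz} Hz sub-fields -> ~{px:.0f} pixel offset between color fields")
```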
2.16 Luminance and Tone Scale There are many non-intuitive aspects of human vision which impact how imagery appears on a display. We have noted the highly inhomogeneous sampling performed by the retina in both space and time, yet our perception of the world is uniform. The realization that color is perceptual although
Figure 2.21 In a time sequential color system the image is broken into color sub-frames that are written to the screen sequentially. If the eye moves during this screen writing process the color sub-frames will be blurred and displaced on the retina producing color fringes. If the imagery is mostly dark with a few bright objects in the scene observers will report seeing red and green flashes when these eye movements occur. (See Plate 5).
generally correlated with the spectral albedo of materials is another non-intuitive aspect of human vision, but no aspect of human vision is stranger than the relationship between luminance and brightness. Figure 2.22 contains three patches, two grays and one green. The green and darker patches
Figure 2.22 The center and right patch are matched in lightness and the center and left patch are matched in luminance. Luminance is the visual quantity that defines edges in the image. (See Plate 6).
have approximately the same luminance values, whereas the lighter gray and green patches have the same lightness/brightness values. As illustrated by the Gelb effect in Figure 2.3, we do not experience the intensity of the light reflected from objects forming the image on our retinas; lightness and brightness are inferred properties that depend upon the relationships among the objects in the scene. Luminance is a measure of the visibility of an edge in either space or time, and it is characterized by the V(λ) function of the CIE. V(λ) is a measure of the proportional radiometric flux required to produce a visible edge in space or time between a narrow band light and a broadband light. It is also called the luminosity function, in that it measures the ability of each wavelength of light to evoke an edge. For any particular wavelength in the VIS band it provides the conversion from radiant flux, measured in watts, to luminous flux, measured in lumens. It is also linear, so the luminance of any two arbitrary lights, a and b, is the V(λ)-weighted sum of their respective power density functions. If E_a(λ) and E_b(λ) are the respective spectral power density functions for the two lights a and b, then they will match in luminance, but not necessarily in brightness, whenever F_a = K_m ∫ V(λ)E_a(λ) dλ = K_m ∫ V(λ)E_b(λ) dλ = F_b. Luminance is linear with light, so if a and b are equivalent in luminance, adding a third light to each by superposition will not change their equivalence. The V(λ) function was first derived through flicker photometry. A broadband white is exchanged temporally with a narrow band spectral light at around 8 Hz. Each observer sets the exchange frequency so that they can just see a variation in color; increasing the exchange frequency would blend the two lights into a single experience. At a temporal frequency near 8 Hz most individuals can see a variation in both brightness and color. The observer is required to adjust the intensity of the narrow band light until the only remaining temporal variation is a change in color, with no variation in brightness. People do this with surprising precision. Often when this 'equal apparent brightness' is achieved both the brightness flicker and the chromatic flicker vanish and the observer sees a steady
Figure 2.23 The V(λ) curve is plotted on the left and the flicker photometric matches of the ITU-709 color primaries are illustrated in the bar graph representing the flicker match on the right. The blue bars on the right would appear too dark in the relative units of this example and would have to be increased to null the flicker.
appearing field when the luminances of the two lights are matched; however, the extinction of the temporal color variation depends upon adaptation states and the phase relationships, so it does not always occur [24]. What is being equated in this procedure is the ability of the eye to see an edge in time. An empirical procedure involving only spatial edges that produces a luminosity function similar to the flicker photometric derivation of V(λ) is the minimally distinct boundary method of Boynton et al. [25]. They asked observers to make the separation between two adjoining spatial patches of light, one a broadband white and the other a narrow band spectral light, as indistinct as possible. This procedure generates a luminosity function similar to V(λ). The flicker photometric method of determining V(λ) is illustrated in Figure 2.23, below. On the right of this figure is an illustration of the temporal exchange at 8 Hz of a white and a colored field. The colored fields in this illustration are meant to be similar to the ITU-R BT.709-3 reference color primaries for electronic displays [26]. Luminance is additive. The relative contributions to the peak white in a display with red, green, and blue primaries that satisfies the 709-3 specification are respectively 71% from green, 21% from red, and 8% from blue. These ratios are indicated by the heights of the bars on the right side of Figure 2.23, with the exception of the blue bars, which at this scale would be too dark. On the left in this figure is a graph of the V(λ) function showing the approximate location of the dominant wavelengths of the 709-3 color primaries. The height of the green bar relative to the red and blue simply indicates the luminous efficiency of light near 555 nm, the peak of the V(λ) function. The choice of a green with a dominant wavelength near the V(λ) peak is not accidental; it was a deliberate choice to enable efficient, bright displays because this light produces the most lumens per watt. Luminance is the visual quantity that determines edge salience. Artifacts are only visible when they generate sufficient luminance contrast to define an edge at the appropriate temporal and spatial frequency. Any artifact produced by the display that does not generate sufficient contrast to be visible can be ignored, as it will be outside the window of visibility. The tone scale of a display is produced by the control of the spacing between levels. With digital displays the gaps between levels are discrete. Figures 2.6, 2.16, and 2.17 contained examples of banding artifacts in the rendered images that were generated by reducing the number of tone scale levels available on the display. Many display products are delivered with 64 levels of tone scale for each of the three color primaries. To provide additional levels, signal processing in the controlling electronics is used to dither additional levels in space and time. This works because of the limitations of the eye to spatial and temporal details. As displays improve their dynamic range and
become larger in physical size, the question of how many tone scale levels are required to avoid banding artifacts must once again be raised. Currently, the IEC standards [e.g. 26] standardize the tone scale function and color primaries in an attempt to move the industry towards uniform image appearance across display types and manufacturers. The goal is device-independent imagery. The industry has adopted a standard for the tone scale growth function that can be described as a power function with an exponent of approximately 2.2 relating the digital command values to the screen luminance. Weber's law suggests that the optimal relationship between steps would follow a geometric growth function of the digital command levels, to assure that each step increment in intensity is less than the threshold for seeing a step, similar to the steps that are visible in ramp (d) of Figure 2.16. It is apparent from the discussion above that as the size of the screen and the dynamic range of the display increase, so will the visibility of banding artifacts. The solution will be to add more levels or to increase the complexity of the signal processing algorithms that render imagery to the screen.
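The interaction of the power-law tone scale with additive luminance can be seen in a short sketch, below, that assumes a pure gamma of 2.2 and the Rec. 709 luminance coefficients (which round to roughly the 71/21/8% green/red/blue split quoted earlier). It also shows why equal code steps are far coarser, in Weber-fraction terms, near black than near white.

```python
# Sketch of the power-law tone scale and additive luminance discussed above,
# assuming a pure gamma-2.2 transfer function and the Rec. 709 luminance
# coefficients for the red, green, and blue primaries.
WEIGHTS = {"R": 0.2126, "G": 0.7152, "B": 0.0722}      # Rec. 709 luminance coefficients

def relative_luminance(code, peak_white=1.0, levels=256, gamma=2.2):
    """Map a digital command value to relative screen luminance."""
    return peak_white * (code / (levels - 1)) ** gamma

def pixel_luminance(r, g, b, **kw):
    """Luminance adds across primaries, so white is the sum of the three weights."""
    return sum(WEIGHTS[c] * relative_luminance(v, **kw) for c, v in zip("RGB", (r, g, b)))

print(round(pixel_luminance(255, 255, 255), 3))        # 1.0: full white
print({c: f"{w:.1%}" for c, w in WEIGHTS.items()})     # contribution of each primary to white

# Weber-style check: relative size of one code step near black vs. near white.
for code in (16, 128, 240):
    step = relative_luminance(code + 1) - relative_luminance(code)
    print(code, "->", f"{step / relative_luminance(code):.1%}", "luminance increment")
```

With a 2.2 power law the single-code increment near black is more than ten times larger, as a fraction of the local luminance, than the increment near white, which is why banding is most visible in dark gradients and why more levels or dithering are needed as dynamic range grows.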
2.17 Concluding Remarks This chapter has attempted to touch on the major issues that impact the visibility of images reconstructed on an electronic display. The technology is rapidly evolving and each technology has its own unique and special problems, so this chapter has at most touched upon important concepts; it has certainly not provided the answers to most of the relevant and pressing questions any engineer faces when designing a display into a product. The chapter could easily have been expanded in a number of directions, but the goal was not to replicate other essays devoted more narrowly to specific topics. Much more could be said about the eye's temporal performance. Nothing was said about flicker. In the world before the active matrix and self-latching, bi-stable optical devices such as the cholesteric liquid crystal, the video display world was dominated by film and CRT-based television. Both of these devices reconstruct the image in a series of brief impulses lasting only a few msec. Today, most displays write imagery to the screen that remains there more or less steadily until a new control signal changing the image is written to the screen. This has eliminated the dark-field phenomenon that produced flicker on CRTs, but it has replaced it with a new problem: eye motion now produces more salient retinal image blur, and judder on electronic displays can be as noticeable as it sometimes is in cinema presentation. This and many other topics could have been developed or expanded here because the topic is truly vast. Encountering a display artifact for the first time usually excites the detective in most engineers and scientists; this essay may help that engineer track down the cause and maybe even fix it.
References
[1] Maxwell, J.C. (1890/2003) The Scientific Papers of James Clerk Maxwell, New York: Dover.
[2] Curcio, C.A., Sloan, K.R., Kalina, R.E. and Hendrickson, A.E. (1990) Human photoreceptor topography, Journal of Comparative Neurology, 292, 497–523.
[3] Carlson, C.R. and Cohen, R.W. (1978) Visibility of Displayed Information: Image Descriptors for Displays. Tech. Report to the Office of Naval Research, Contract No. N00014-74-C-0184; Carlson, C.R. and Cohen, R.W. (1980) A simple psychophysical model for predicting the visibility of displayed information, Proceedings of the Society for Information Display, 21 (3), 229–245.
[4] The JND map was provided by Dr. Jeffrey Lubin of Sarnoff Corporation.
[5] Lubin, J. (1995) A visual discrimination model for imaging system design and evaluation. In E. Peli (Ed.) Vision Models for Target Detection and Recognition, Singapore: World Scientific.
[6] Westheimer, G. (1975) Visual acuity and hyperacuity, Investigative Ophthalmology and Visual Science, 14 (8), 570–572; Westheimer, G. (2005) The resolving power of the eye, Vision Research, 45, 945–947.
[7] Tanton, N.E. (2004) Results of a survey on television viewing distance, BBC R&D White Paper, WHP 090.
[8] Wyszecki, G. and Stiles, W.S. (1982) Color Science: Concepts and Methods, Quantitative Data and Formulae, New York: John Wiley & Sons, Ltd.
[9] Weber, E.H. (1834) De pulsu, resorptione, auditu et tactu annotationes anatomicae et physiologicae, Leipzig: C.F. Koehler.
[10] Stiles, W.S. and Crawford, B.H. (1932) Equivalent adaptation levels in localized retinal areas, Discussion on Vision: The Physics and Optics Societies, London, 194–211.
[11] Stiles, W.S. (1978) Mechanisms of Colour Vision, London: Academic Press.
[12] van Nes, F.L. and Bouman, M.A. (1967) Spatial modulation transfer in the human eye, J. Opt. Soc. Am., 57, 401–406.
[13] Hoekstra, J., van der Groot, D.P.J., van den Brink, G. and Bilsen, F.A. (1974) The influence of the number of cycles upon the visual contrast threshold for spatial sine patterns, Vision Research, 14, 365–368.
[14] Helmholtz, H. (1896/1962) Helmholtz's Treatise on Physiological Optics (ed. J.P.C. Southall), New York: Dover.
[15] Newton, I. (1730/1952) Opticks, New York: Dover.
[16] Land, M.F. and Nilsson, D.E. (2002) Animal Eyes, New York: Oxford.
[17] Cohen, J. (1964) Dependency of the spectral reflectance curves of the Munsell color chips, Psychonomic Science, 1, 369.
[18] Maloney, L.T. (1999) Physics-based approaches to modeling surface color perception. In K.R. Gegenfurtner and L.T. Sharpe (Eds) Color Vision: From Genes to Perception, Cambridge, UK: Cambridge University Press, pp. 387–422; Maloney, L.T. (1986) Evaluation of linear models of surface spectral reflectance with small numbers of parameters, Journal of the Optical Society of America, 3, 1673–1683.
[19] Smith, V.C. and Pokorny, J. (1975) Spectral sensitivity of the foveal cone photopigments between 400 and 500 nm, Vision Research, 15, 161–171.
[20] Brill, M.H. and Larimer, J. (2007) Metamerism and multi-primary displays, Information Display, 23, 16–21.
[21] Cicerone, C.M., Krantz, D.H. and Larimer, J. (1975) Opponent-process additivity-3: Effect of moderate chromatic adaptation, Vision Research, 15, 1125–1135.
[22] Young, T. (1845) Course of lectures on natural philosophy and the mechanical arts, London: Kelland.
[23] Maxwell, J.C. (1890) On the theory of three primary colours. In W.D. Niven (Ed.) The Scientific Papers of James Clerk Maxwell, Cambridge University Press, pp. 445–450.
[24] de Lange, H. (1958) Research into the dynamic nature of human fovea-cortex systems with intermittent and modulated light. II. Phase shift in brightness and delay in color perception, Journal of the Optical Society of America, 48, 784–789; Kelly, D.H. (1961) Visual response to time-dependent stimuli. I. Amplitude sensitivity measurements, Journal of the Optical Society of America, 51, 422–429.
[25] Boynton, R.M. and Kaiser, P.K. (1968) Vision: The additivity law made to work for heterochromatic photometry with bipartite fields, Science, 161, 366–368.
[26] IEC 61966-2-1 Colour management – Default RGB colour space – sRGB.
3 Advanced Mobile Display Technology Kee-Han Uh and Seon-Hong Ahn Mobile LCD Division, Samsung Electronics Co., Ltd., Gyeonggi-Do, Korea
3.1 Introduction The mobile display market is expected to continue expanding as flexible broadcasting and communication services become available, and mobile product applications converge with enhanced functionality. Because of this trend, mobile devices require new features with advanced mobile technologies to provide ‘smartness’, high quality displays, low power consumption, and slim design. As illustrated in Figure 3.1, there are many mobile device applications, but it is possible to categorize them into four groups: communication, digital imaging, entertainment, and telematics. The communication group includes cellular phones and PDAs. The digital imaging group contains the DSC (Digital Still Camera) and DVC (Digital Video Camera), which are becoming very popular tools for taking pictures nowadays. The entertainment group contains products for listening to music or playing games, such as MP3 players or game players with displays. Lastly, in telematics, the PND (Portable Navigation Display) and CNS (Car Navigation System) are being used for locating places where people want to go. In reviewing the current trends of mobile devices, the touch screen function of the user interface is one of the key issues. It has become almost impossible to think of mobile displays without this function. Figure 3.2 illustrates some of the current and emerging requirements for mobile displays. As a sensational example of a touch screen application, Apple has made extensive use of multi-touch functionality in its new product, the iPhone. As for physical specifications, the displays are becoming bigger, slimmer, and more compact. To achieve a higher quality of display, a wider viewing angle, high color gamut, and finer pixel pitch are being pursued.
Figure 3.1 Mobile display applications.
Mobile displays need wider viewing angles, greater than 160 degrees, with no gray inversion. To achieve this, mPVA (mobile PVA), AFFS (Advanced Fringe Field Switching), or IPS could be a solution in the future. In terms of color purity, the color gamut presently available on the market is generally 60%, but this increases to more than 80% for high-end displays in various mobile applications.
Figure 3.2 Requirements for mobile displays.
Another trend is convergence. As an example, we can look at the history of Samsung’s cellular phones. Until 1997, the only major functions were voice communication and a black-and-white display for basic system status such as remaining battery life and signal strength. In 2000, the adoption of color STN LCD (Super Twisted Nematic LCD) led to color text and color still images, followed by the TFT-LCD (Thin Film Transistor LCD) with 4096 colors in 2002, which had better display quality and allowed mobile phones to play moving pictures. Over the years, cellular phones began to feature a camera, mobile TV, and an MP3 player – in 2004, 2005, and 2006, respectively – as the communication infrastructure evolved. Recently, the touch screen function has become a key user interface in high-end phones. In short, functions such as camera, gaming, mobile TV, and MP3 playback are converging into one personal mobile application. This trend necessitates even higher quality mobile displays than conventional ones. One aspect of the trend toward convergence of mobile applications is the Smartphone. The CAGR of the Smartphone market is expected to be about 56% between 2006 and 2010. In other words, Smartphones will become mainstream and occupy one-third of the cellular phone market, or 505 million units out of 1,550 million units in 2010, as Figure 3.3 shows.
Figure 3.3 Growth of the Smartphone market.
The smart phone environment is shifting from an e-mail base to a multimedia base as the mobile infrastructure evolves, with improved wireless communication infrastructure, accelerated web personalization, and integrated hardware. To meet the multimedia needs of the market, mobile displays for smart phones must realize long battery life, high resolution, slim shape, a touch screen, as well as fancy looks. As we can see in Figure 3.4, below, display sizes are increasing from the 2.2″–2.8″ range to the larger 2.6″–3.5″ range, along with the higher resolutions from QVGA to WQVGA, and up to WVGA.
3.2 Advanced Mobile Display Technology In this section we will consider the components of the display module and how they affect its performance.
Figure 3.4 Display size and pixel count.
3.2.1 Liquid Crystal Display Mode Most of the LCDs available on the market use the normally white twisted nematic (TN) mode because of its low price, excellent structural stability, and wide process margin. However, the TN mode shows performance limitations in viewing angle and contrast ratio, and suffers from gray inversion [1]. When a TN LCD is viewed from different oblique angles, the incident light experiences different effective birefringence, which is the main reason for image deterioration. To achieve better optical image quality, normally black LC modes with a good dark state have been developed, such as the vertical alignment (VA) mode, in-plane switching (IPS) mode, and fringe-field switching (FFS) mode [2–3]. In the IPS/FFS modes, the LC molecules are homogeneously aligned between the substrates; in the VA mode, negative dielectric LC molecules are aligned vertically between the substrates. In 1997, an MVA mode LCD was produced commercially for the first time by Fujitsu [4], and it had a high contrast ratio and faster response time. Subsequently, Sharp announced the advanced super VA (ASV) mode, which has a protrusion on the color filter (CF) and patterned ITO (Indium Tin Oxide) in the TFT layer for multi-domain alignment of the liquid crystal [5]. Samsung Electronics (SEC) also developed the patterned VA (PVA) mode LCD [6], in which the multi-domain alignment is generated by the fringe field between the patterned electrodes on the CF and TFT substrates. The structures of the MVA, ASV, and PVA mode LCDs are schematically shown in Figure 3.5, below. In these modes, electrode protrusions or patterned electrodes are used to generate the fringe field, which drives the vertically aligned LC molecules to form multiple domains.
Figure 3.5 The schematic view of VA mode LCD structure (field-on state).
One major difference in the manufacture of a VA mode LCD is that no rubbing process is needed. In the IPS and TN modes, the LC alignment layer (normally polyimide) needs to be rubbed to avoid LC disclination. In the VA mode LCD, a negative dielectric LC and a vertical alignment material are used to achieve the vertical alignment. In the initial state, the LC molecules are vertically aligned to the substrate, and since the LC molecules' tilt direction is controlled by the fringe field, the VA mode LCD does not need a rubbing process. A rubbing-free process not only means a reduction in manufacturing steps, but also raises the yield of mass production, especially for large-sized panels where rubbing could be a serious problem. During the rubbing process, particles from the rubbing cloth may be incorporated and electrostatic charge may cause damage to the TFT array. In addition, the topography of the TFT and CF arrays inside the pixel area makes it difficult to align the LC molecules in one direction, which can cause light leakage and decrease the contrast ratio. Generally, a VA mode LCD can achieve a very high contrast ratio and higher transmittance than an IPS mode LCD.
3.2.2 Operating Principle of the VA Mode The VA LCD operates in a normally black mode. When no driving voltage is applied, the LC molecules are aligned vertically in the cell. The incident light experiences no retardation and is blocked by the polarizer. When a driving voltage is applied, the LC molecules tilt within the cell. The incident light then experiences retardation from the switched LC and is transmitted through the polarizer to display the white state. The LC molecules' alignment states and the corresponding transmittance are shown in Figure 3.6.
Figure 3.6 Operation of a single domain VA cell and its transmittance property: (a) field-off state; (b) field-on state; (c) gray level transmittance at different viewing directions.
The transmissive properties of a single domain VA cell are asymmetric at different viewing directions. Due to the optical anisotropy of LCs, gray inversion will occur at certain viewing angles, as shown in Figure 3.6(c). A multi-domain pixel structure can be used to suppress gray inversion and maintain stable transmittance at all viewing angles. Although each domain has a gray shade inversion at one viewing angle, the other domains compensate for it so that the pixel as a whole does not exhibit a gray inversion problem, as shown in Figure 3.7. The PVA mode uses four-domain alignment, which produces a symmetrical response at all viewing angles. Figure 3.8, below, schematically shows the LC director profile under a driving voltage of the
Figure 3.7 The mechanism of multi-domain compensation.
PVA and IPS modes. For the PVA mode in Figure 3.8(a), LC molecules – having negative dielectric anisotropy – are aligned vertically by a vertical alignment layer in the off state. When the driving voltage is applied, a fringe field is formed by the patterned ITO to create multi-domains. As the LC molecules have negative dielectric anisotropy, the LC aligns perpendicular (or normal) to the fringe electric field. Subsequently, the bulk of the LC molecules align parallel to those aligned by the fringe field and hence a uniform white state is obtained. For the IPS mode in Figure 3.8(b), the LC molecules with positive dielectric anisotropy are aligned in-plane (y–z direction) with a horizontal alignment layer in the off state. When an in-plane electric field is applied in the x–z direction, the LC molecules rotate laterally so that they are aligned parallel to the applied electric field. However, LC molecules located directly adjacent to the pixel electrodes may not align parallel to the electrodes because the field is tilted away from the plane of the electrodes there. When light propagates through the LC molecules under crossed polarizer conditions, the transmittance of the VA and IPS modes can be derived as follows [5]:

T = sin²(2ψ) sin²(Γ/2)    (1)

where Γ = 2πΔnd/λ is the phase retardation, λ is the wavelength, ψ is the angle between the x axis and the LC molecules, Δn is the LC birefringence, and d is the cell gap.
Figure 3.8 The operating principle of (a) VA and (b) IPS mode.
In the VA mode, as the LC molecules are in a 45° direction (ψ = 45°) to the polarizer axis, the transmittance depends on the phase retardation of the LC under the driving voltage. For the IPS mode, as the LC molecules are in-plane, the phase retardation (Γ) is constant. The transmittance depends on the twist angle, which shows a maximum at 45°. From Equation (1), the maximum transmittance will be achieved at a retardation value of 275 nm for a 550 nm wavelength at a 45° LC direction to the polarizer axis. However, in a real LCD, since the LC molecules do not align in one direction under a saturation voltage, the effective retardation for maximum transmittance is selected to be more than 275 nm.
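Equation (1) is easy to evaluate numerically. The sketch below fixes ψ at 45° and sweeps the retardation Δn·d, reproducing the half-wave maximum at 275 nm for 550 nm light noted above; the particular retardation values swept are arbitrary.

```python
# Sketch of Equation (1) for the VA mode, with psi fixed at 45 degrees so that
# transmittance is governed only by the retardation dn*d relative to the wavelength.
import math

def va_transmittance(dn_times_d_nm, wavelength_nm=550.0, psi_deg=45.0):
    """T = sin^2(2*psi) * sin^2(Gamma/2), with Gamma = 2*pi*dn*d/lambda."""
    gamma = 2.0 * math.pi * dn_times_d_nm / wavelength_nm
    return math.sin(math.radians(2.0 * psi_deg)) ** 2 * math.sin(gamma / 2.0) ** 2

for ret in (100, 200, 275, 350, 550):          # retardation dn*d in nm
    print(f"{ret:3d} nm -> T = {va_transmittance(ret):.3f}")
# 275 nm (a half wave at 550 nm) gives the maximum T = 1, as stated in the text.
```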
3.2.3 Super PVA (S-PVA) Technology PVA technology has the advantage of a wide viewing angle and native high contrast ratio for various applications. However, there is an issue of a gamma shift at large inclination angles for the PVA mode, which results from the V–T curve distortion at oblique directions. For LCD TV applications, the off-axis image quality is a critical factor. The screen must maintain consistent luminance and color when viewed from any angle. Based on PVA technology, Samsung Electronics developed super PVA (S-PVA) technology [8–9]. S-PVA technology overcomes the limitations of PVA and has become one of the most appropriate solutions to fulfill the demands for large area flat panel displays. Compared to PVA, S-PVA divides each sub-pixel into two parts, so S-PVA has twice as many domains as PVA. This concept is illustrated in Figure 3.9, below. One sub-pixel consists of two separate sub-pixels, A and B. Sub-pixel A and sub-pixel B have different applied voltages and therefore different tilt angles. The two divided domains effectively construct an eight-domain VA cell, which can compensate and minimize gamma distortion for images viewed off-axis. The S-PVA structure can use an integrated capacitor to charge the secondary (B) sub-pixel from the primary (A) sub-pixel. This structure is referred to as capacitive-coupled S-PVA, or CC type S-PVA. To enable complete and independent control over the A and B sub-pixels, another new type of S-PVA called two transistor type S-PVA, or TT S-PVA, has also been developed. In TT S-PVA, the number of
Figure 3.9 Motion of LC molecules in an 8-domain S-PVA cell. (See Plate 7).
gate lines is doubled, and sub-pixels A and B share the same data line but are controlled by different transistors. TT S-PVA preserves peak luminance, and enables maximum performance by providing independent control of each of the sub-pixels. The equivalent circuits and a comparison of luminance performance in CC type and TT type S-PVA are shown in Figure 3.10 and Figure 3.11, respectively. In CC S-PVA, the peak luminance of the secondary sub-pixel B is always lower than that of sub-pixel A because sub-pixel B cannot reach its full white saturation condition. Note that although the number of gate lines is doubled in TT type S-PVA, the number of row drivers has not increased. The multi-channel gate driver ICs are used to effectively
Figure 3.10 Equivalent circuits of CC type and TT type S-PVA.
Figure 3.11 Comparison of luminance performance in S-PVA.
address the double-density gate line structure without any increase in the number of driver ICs compared to conventional LCDs. In the S-PVA mode, each pixel is thus changed from four domains to a total of eight domains. Sub-pixels A and B have different gamma curves, and the off-axis gamma distortion is minimized through gamma-mixing technology: sub-pixel A is optimized for the normal direction, while sub-pixel B is optimized for the off-axis view and has a higher gamma. These mixed gammas are tuned to an ideal value for all directions. The multi-domain structure and the separate sub-pixel driving method improve performance at off-normal viewing angles. Figure 3.12, below, shows a comparison of the viewing angle dependence of the gamma curves for conventional PVA and S-PVA panels. The viewing angle properties of S-PVA are improved considerably compared to normal PVA, as shown in Figure 3.12(b).
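The gamma-mixing idea can be illustrated with a toy Python calculation. This is only a sketch: the two sub-pixel response curves below are invented placeholders, not measured S-PVA data. The point is that the pixel luminance is the area-weighted sum of the A and B sub-pixel luminances, so two differently driven curves can be blended toward the target 2.2 gamma.

import numpy as np

gray = np.linspace(0.0, 1.0, 256)        # normalized gray level

# Hypothetical sub-pixel responses: A tuned for the normal direction,
# B driven to a higher effective gamma to compensate off-axis distortion.
lum_A = gray ** 1.8
lum_B = gray ** 3.0

area_A, area_B = 0.5, 0.5                # equal-area sub-pixels assumed
lum_pixel = area_A * lum_A + area_B * lum_B

target = gray ** 2.2                     # ideal gamma-2.2 response
print("worst-case deviation from gamma 2.2: %.3f" % np.max(np.abs(lum_pixel - target)))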
3.2.4 Mobile PVA (mPVA) Technology Generally, the LCD screen size for monitor and TV applications is 17″ to 50″, or larger, with pixel pitches of around 200–500 µm for the red, green, and blue sub-pixels. However, in mobile applications, especially cellular phones, the panel is smaller but has a large number of pixels, with a pixel pitch of around 100–200 µm. Although the PVA mode could be used for mobile applications, its aperture ratio decreases significantly as the pixel size decreases. To improve the aperture ratio for mobile applications, mobile PVA technology has been developed. Figure 3.13, below, shows the PVA and the newly developed mobile PVA (mPVA) [10] structures. For less than 130 pixels per inch, the PVA structure can be used, as shown in Figure 3.13(a). For more than 130 ppi, the mPVA pattern is used to obtain a higher aperture ratio. For a 2.0″ qVGA panel (320 × 240, about 200 ppi), the aperture ratio of mPVA is increased by about 20% over the PVA structure. In the mPVA structure of Figure 3.13(b), the fringe electric field direction is along the circumference of the round common electrode at the color filter when the driving voltage is applied, and the LC molecules incline perpendicular to the electric field direction. To minimize the free energy, the LC molecules around the hole move downwards, followed by a rotational movement in the horizontal plane; LC molecules far from the CF hole align similarly. The field-on LC director distributions of the PVA and mPVA modes are shown schematically in Figure 3.14, below. The PVA mode has four-domain LC alignment, while the mPVA mode has many more domains. Therefore, their polarizer structures are different. A linear polarizer can be used in the PVA mode; a
Figure 3.12 The comparison of viewing angle dependence of the gamma curves for conventional PVA and S-PVA.
Figure 3.13 Pixel design of (a) PVA and (b) mobile PVA and (c) cross-sectional view of mobile PVA.
circular polarizer is necessary for the mPVA mode to increase the transmittance. If a linear polarizer is used for the mPVA LCD, the LC texture will be seen at the white state, and transmittance will decrease due to the texture. The polarizer structure difference is shown in Figure 3.15, below. Furthermore, a new design rule and LC materials have been suggested to improve the aperture ratio. To minimize the electric field distortion between the sub-pixel and the contact hole, a 2-sub-pixel design has been introduced, as shown in Figure 3.16(a), below. The contact hole is shifted to the center of the pixel
Figure 3.14 Schematic view of LC molecule director in field-on state.
Figure 3.15 The polarizer structure of (a) PVA and (b) mPVA LCD.
Figure 3.16 Structure (a) and LC textures (b) of novel mPVA pixel.
and each pixel is divided into two sub-pixels. Very stable LC textures are achieved and the momentary image retention is removed regardless of the electric field condition, as shown in Figure 3.16(b). Figure 3.17(a) shows the simulation results of LC molecule movement with an applied voltage for an mPVA mode pixel structure. For the mPVA structure, it is not necessary to use a color filter black
Figure 3.17 Cross-sectional view of novel mPVA pixel design and its director profile.
matrix (BM) for blocking light leakage, because the LC molecules on the data line show stable movement. Therefore, it is possible to obtain a higher aperture ratio without an extra black matrix on the CF substrate.
3.2.5 Transflective VA LCD for Mobile Application Transflective LCDs [11] have been widely used because of their low power consumption and good legibility in both indoor and outdoor environments. The pixels of transflective LCDs are separated into transmissive and reflective regions: to show the displayed image, the transmissive regions control the backlight illumination and the reflective regions control the reflection of ambient light. Most transflective LCDs use homogeneous cells (also known as ECB mode); however, their contrast ratio (CR) is limited and their viewing angle is narrow in the transmissive region. Recently, VA [12] and IPS/FFS [13] mode transflective LCDs have also been developed to achieve better image quality. There are various methods of making transflective VA LCDs. To switch the transmissive and reflective regions separately, dual driving VA, dual cell gap VA and dual electrode VA transflective types have been proposed. One example of a dual cell gap transflective VA LCD structure is shown in Figure 3.18. In the dark state, LC molecules are aligned vertically in both the transmissive and reflective regions. Because the cell gap of the reflective region is half that of the transmissive region, the optical paths of the reflective and transmissive regions are the same. In the VA transflective LCD, there is an organic embossed layer in the reflective region, but not in the transmissive region. On this organic layer, a pixel electrode and reflector are formed to reflect
Figure 3.18 Schematic view of the operating principle of transflective VA LCD.
ambient light back towards the viewer. The LC cell gap in the reflective region is made half that of the transmissive region by covering the color filter resin with an overcoat layer. In the case of the IPS transflective LCD, since the reflector must be below the pixel electrode and its passivation layers due to the pixel structure [14], reflectivity is reduced. To obtain high reflectance while maintaining the transmissive performance, an in-cell retarder [15] can be used in the IPS panel structure, but the retardation of the in-cell retarder cannot be controlled as accurately as that of a film-type retarder because of the manufacturing process, and a mismatch of retardation also causes reflective light loss. For these reasons, it is difficult to achieve high reflectance and a high reflective CR for an IPS transflective LCD. The transflective VA mode may therefore be a better solution for high image quality mobile applications, both indoors and outdoors.
3.2.6 Backlight The color gamut of an LCD results from the combination of the backlight spectrum, the LC optical properties, and the color filter. However, the spectrum of the white LEDs conventionally used as the backlight is broad in the green and red regions, which leads to a lower LCD color gamut when combined with broad-spectrum color filters, as shown in Figure 3.19. At the time of writing, mobile displays available on the market have a color gamut between 50% and 70%, with some customers requiring higher than 90%. The red and green peaks of a white LED spectrum are not as distinct as those of a CCFL backlight, as shown in Figure 3.19. That is why it is more difficult to increase the color gamut of a mobile LCD by improving its white-LED backlight than to
Figure 3.19 Light spectrum of LED and color filter.
increase that of its color filter. It is possible to obtain more than 90%, even with a normal 72% color filter, by matching the transmission spectrum of the CF and the spectrum of RGB LED or RG LED. SEC has its own backlight technology for achieving high color gamut, which is called 51GT. The RG LED applied in 51GT allows SEC’s mobile LCDs to have an excellent optical performance with a brightness of 500 nit and 100% color purity. The result is that LCDs can have comparable display quality to OLEDs.
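The way the backlight spectrum and the color filter jointly set the primaries can be sketched numerically. Everything below is illustrative: the spectra, the filter bands, and the single-Gaussian stand-ins for the CIE color matching functions are made-up approximations, not measured 51GT data; a real gamut calculation should use tabulated CMFs.

import numpy as np

wl = np.arange(380.0, 781.0, 1.0)                       # wavelength grid (nm)
g = lambda mu, sig: np.exp(-0.5 * ((wl - mu) / sig) ** 2)

# Crude single-Gaussian stand-ins for the CIE 1931 colour matching functions.
xbar = 1.06 * g(599, 38) + 0.36 * g(446, 20)
ybar = g(556, 47)
zbar = 1.78 * g(449, 22)

# Hypothetical backlight spectra (placeholders, not measured data).
white_led = g(450, 12) + 0.65 * g(560, 60)              # blue chip + broad phosphor
rgb_led = g(460, 12) + g(530, 18) + g(630, 10)          # three narrow emitters

# Hypothetical colour-filter transmission bands.
cf = {"R": 0.85 * g(620, 35), "G": 0.80 * g(540, 35), "B": 0.75 * g(460, 25)}

def xy(spd):
    X, Y, Z = (np.trapz(spd * cmf, wl) for cmf in (xbar, ybar, zbar))
    return X / (X + Y + Z), Y / (X + Y + Z)

def gamut_area(p):
    (x1, y1), (x2, y2), (x3, y3) = p["R"], p["G"], p["B"]
    return 0.5 * abs(x1 * (y2 - y3) + x2 * (y3 - y1) + x3 * (y1 - y2))

ntsc = {"R": (0.67, 0.33), "G": (0.21, 0.71), "B": (0.14, 0.08)}
for name, backlight in (("white LED", white_led), ("RGB LED", rgb_led)):
    primaries = {c: xy(backlight * t) for c, t in cf.items()}
    print(name, "gamut ~ %.0f%% of NTSC" % (100 * gamut_area(primaries) / gamut_area(ntsc)))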
3.2.7 Substrates In order to make LCD modules thinner, every layer from the backlighting unit to the polarizer on top must be reduced in thickness. There are four major contributors to the total thickness: the glass, the LGP (Light Guiding Plate), the LED, and the polarizer. Take glass, for example: glass has continuously become thinner, and some LCD suppliers use 0.1 mm CF glass on top of 0.2 mm or 0.3 mm TFT glass. The LED is expected to shrink from 0.4 mm in 2007 to 0.3 mm in 2010, and the LGP from 0.3 mm to 0.25 mm by 2010. The total thickness of an LCD module could be less than 1.0 mm by 2009, compared with 1.2 mm in 2008.
Figure 3.20 Substrate thickness.
However, the most important consideration is how to design a robust LCD module because thinness and robustness need to be traded off against each other.
3.2.8 Drive Electronics The active matrix of an AMLCD is basically an array of transistor switches. Using this transistor fabrication capability, one can integrate the driver circuits onto the LCD panel. The types and scale of circuit integration vary with the active matrix material (a-Si or LTPS) and the needs of the application.
3.2.8.1 Amorphous Silicon Conventionally, circuit integration using a-Si TFT on an LCD panel is very difficult because of its low mobility compared with LTPS (Low Temperature Poly Silicon). However, in 2003, Samsung developed
a novel integration technology called ASG (Amorphous Silicon Gate), which replaces the previous gate driver IC with a circuit integrated on the glass using a-Si TFTs. This provides cost benefits by removing one gate driver IC, turning a two-chip LCD module into a one-chip module (Figure 3.21). In addition, ASG technology allows a slim LCD module design with a symmetrical, narrower bezel, because the ASG circuitry can be integrated compactly under the black matrix at the edge of the CF. One of the critical obstacles to overcome is how to obtain stable ASG circuitry by compensating each ASG stage, which consists of more than five inherently unstable a-Si transistors. ASG technology has been applied up to a 2.6″ VGA panel of 300 ppi resolution and has demonstrated its potential for high-end, high-resolution products.
Figure 3.21 Benefit of gate IC removal.
Since ASG first emerged in 2003, Triple-Gate ASG was manufactured in 2006, and ALS (Active Level Shifting) was derived from the ASG technology in 2007. Triple-Gate ASG and ALS will be explained in more detail later in this chapter.
3.2.8.2 Poly Silicon (LTPS) As far as LTPS circuit integration is concerned, many variations of SOG (System On Glass) technology can be considered. Due to the high electron mobility of LTPS, the integration level of LTPS SOG is higher than that of a-Si ASG, which integrates only the gate driver; the biggest difference is that LTPS can integrate analog circuits. According to the level of integration, LTPS SOG is conventionally categorized into four types, and Samsung has developed all four. Each has its own advantages and disadvantages in terms of cost, power consumption, design, etc. For type I, the gate driver and de-multiplexing parts are integrated onto the LCD panel. The other parts, such as the source driver, SRAM, timing control, and DC/DC, are left in the driver IC, which gives low power consumption and high yield in spite of the cost of the driver IC. In the case of type II, the high voltage DC/DC is additionally integrated on the TFT-LCD, and the driver IC no longer needs the high voltage
process, reducing driver IC delivery time and cost by about 30%. However, this type may cause power instability and increased power consumption. For type III, a block or point addressing function is added, and the driver IC containing the source driver, SRAM, timing control, and low voltage DC/DC can be relocated from the LCD panel to the FPC; in other words, it could be called COG-less. Adopting block or point addressing in type III brings wide design freedom and cost efficiency in the driver IC; on the other hand, power consumption is increased and highly stable TFT quality is necessary in fabrication. Lastly, in type IV all the functions of the driver IC are shifted and integrated onto the LCD panel. Practically speaking, however, LTPS has some limitations: full frame memory occupies too much space on the LCD, the integrated DAC reaches 6 bits at most, and the circuits are not fast enough for high-speed serial interfaces such as MIPI (Mobile Industry Processor Interface), CDP (Compact Display Port), and MDDI (Mobile Display Digital Interface), even though type IV is the most cost efficient compared with the other types.
3.2.9 Triple-Gate Triple-Gate ASG, in which the RGB sub-pixels form horizontal stripes rather than vertical ones, was developed in 2006 as an advanced ASG technology (Figure 3.22). Due to this horizontal scheme, the number of gate lines is three times higher, but the number of source (data) lines is only a third. As a result, the eight-chip module of a 7″ WVGA panel is reduced to only one chip. The benefits are a compact design and the cost reduction from removing seven driver ICs, together with a resultant slim FPC with a reduced number of components.
Figure 3.22 Triple-Gate displays.
3.2.10 ALS (Active Level Shifting) The power consumption of an LCD module becomes larger with increasing panel size and resolution, and is no longer negligible for high resolution panels when compared with the power consumption of the LEDs. ALS reduces the power consumption by using a DC common voltage, Vcom. Vcom driving accounts for about 30% of the panel power consumption, so a power reduction of up to 30% is theoretically achievable. In the ALS driving technique, pixel data is output as in the line inversion case, but the pixel voltage is boosted up or down after the pixel charging period. The boosting circuitry is also integrated with a-Si TFTs on the panel. An additional benefit of ALS is that an ALS-driven LCD panel is free of audible noise; in conventional driving, the audible noise is attributed to mechanical vibration caused by the toggling of the common voltage, Vcom. ALS therefore allows lower power consumption as well as audible-noise-free operation (Figure 3.23).
Figure 3.23 ALS (Active Level Shifting).
3.2.11 hTSP (Hybrid Touch Screen Panel) The touch screen panel (TSP) attached to the display has been widely used as an input device for mobile systems such as CNS (Car Navigation System), PDA (Personal Digital Assistant), digital cameras, and hand-held phones. A TSP enables devices to implement an adaptive user interface in which the functionality of the input area changes depending on the device's application. It also makes the user interface more convenient and flexible, and can provide extra functionality such as drawing, writing, and multi-touch. On the other hand, a conventional TSP has not overcome the degradation in display quality and the increase in module thickness that the mobile market will no longer accept. A TSP-attached LCD suffers a reduction of between 10% and 20% in brightness, a significant reduction in contrast ratio in outdoor conditions, and an increase in module size. These problems have impeded its adoption in leading hand-held phones on the market, where slimness and brightness are the most critical quality factors. Whether the application can be expanded to other mobile devices, including hand-held phones, depends on how well future TSP technologies can meet the requirements of the mobile market in a cost-effective way. Strong demands for high display quality and smaller size have accelerated integration of the TSP function into the TFT-LCD itself, aptly called 'hybrid TSP'. This integration enables devices to have better optical and mechanical performance, since the upper TSP layers are removed and replaced with integrated sensor arrays. Various integration technologies for the touch function have been introduced using optical, capacitive, or resistive sensor arrays. At SEC, the resistive type is being developed and will soon be mass produced. The merits of hybrid TSP are described in Figure 3.24, below. Hybrid TSP makes the panel clearer and slimmer. Also, the external controller can be removed because the TSP control logic can be integrated into the existing display driver IC. This hybrid technology can also meet the growing need for a multi-touch TSP function. A conceptual description of hybrid multi-touch TSP is presented in Figure 3.25, below. Without much of an increase in cost, the multi-touch function, which is greatly demanded in today's mobile equipment, can be realized on hTSP.
Figure 3.24 Hybrid TSP.
Figure 3.25 Multi-touch hybrid TSP.
3.2.12 ABC (Adaptive Brightness Control) Benefits can be obtained if the brightness of the panel, which is controlled by the backlighting unit (BLU) and gamma adjusting curve, is controlled appropriately. The benefits are two-fold. Firstly, power consumption is lowered by reducing excessive amounts of light in a dark environment.
Secondly, better visibility is realized by making the panel brighter in a sunny environment and adjusting the gamma curve according to the image content to be displayed (Figure 3.26). These benefits are currently being realized in two different ways, as follows.
3.2.12.1 Light Sensor Based Ambient light sensors on the LCD panel can contribute to lowering power consumption or increasing visibility by detecting the luminance around the LCD panel and finding a suitable brightness for the backlight. This is called light sensor based adaptive brightness control (sABC). Both a-Si and LTPS can be used for this purpose, but PIN diodes made in the LTPS process have an advantage as light detectors because they can be produced in the same process as the LTPS TFTs. LTPS PIN diodes also have a wide dynamic range in light sensing, making them well suited as ambient light sensors. However, their photo current in reverse bias changes easily with the ambient temperature. As the ambient light sensor is expected to operate at temperatures as high as 70 °C, an LTPS sensor needs a temperature compensation function or a higher efficiency in detecting ambient light.
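A minimal sketch of the sABC idea, assuming a simple three-level mapping and a constant dark-current offset (both placeholders; a real design would calibrate and temperature-compensate the sensor per panel):

def sabc_backlight_level(ambient_lux, dark_current_offset=0.0):
    # Map an ambient-light-sensor reading to a backlight level (0..1).
    lux = max(ambient_lux - dark_current_offset, 0.0)
    if lux < 50:        # dim room
        return 0.3
    if lux < 1000:      # indoor lighting
        return 0.6
    return 1.0          # outdoor / direct sunlight

print([sabc_backlight_level(l) for l in (10, 300, 20000)])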
3.2.12.2 Image Contents Based Adaptive Brightness Control (cABC) While sABC only adjusts the BLU’s brightness, image content based ABC (cABC) controls the gamma curve as well. The algorithm determines the color combination of the image to be
Figure 3.26 Adaptive brightness control.
displayed, and accordingly chooses a gamma curve which best enhances the visibility of that particular image. For example, a 1.2 or 1.4 gamma curve can be used and the backlight brightness lowered, saving 50% or 30% of the original backlight power when the image is dark, as in Figure 3.27. On the other hand, the normal 2.2 gamma and full backlight power must be applied when the image data contains a bright picture, such as a snowy scene.
Figure 3.27 Image contents based adaptive brightness control.
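A minimal Python sketch of the cABC flow described above; the thresholds, gamma values, and backlight levels are illustrative stand-ins, not SEC's actual algorithm.

import numpy as np

def cabc_settings(frame):
    # Pick (gamma, backlight) from the image content (illustrative thresholds).
    apl = frame.mean()                 # average picture level, 0..1
    if apl < 0.25:
        return 1.2, 0.5                # dark image: lift tones, halve backlight
    if apl < 0.5:
        return 1.4, 0.7
    return 2.2, 1.0                    # bright image (e.g. snow): full power

def displayed_luminance(frame, gamma, backlight):
    # The panel applies the chosen gamma curve; the backlight scales everything.
    return backlight * frame ** gamma

dark_frame = np.full((240, 320), 0.15)          # synthetic dark test image
gamma, bl = cabc_settings(dark_frame)
before = displayed_luminance(dark_frame, 2.2, 1.0).mean()
after = displayed_luminance(dark_frame, gamma, bl).mean()
print("gamma %.1f, backlight %.0f%%, mean luminance %.3f -> %.3f"
      % (gamma, bl * 100, before, after))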
3.3 Summary Newly developed technologies for mobile displays, ranging from enhanced display quality to value-added functions, were reviewed in this chapter. For cost competitiveness, SEC developed ASG using a-Si technology; by enhancing the reliability of the a-Si TFTs and the circuit stability, a compact and slim a-Si module has been realized. For value-added functions, an a-Si based photo sensor and hTSP have been developed, along with ALS for low power consumption and audible-noise-free operation. SOG with LTPS materials is currently being realized through four different types of circuit integration. For hTSP applications of an integrated touch screen, the resistive type has been studied. The technical barriers have almost been overcome and, by optimizing the processes, mass production will soon be possible.
References
[1] Mori, H., Nagai, M., Nakayama, H. et al. (2003) SID'03, p. 1058.
[2] Kondo, K., Kinugawa, K., Konishi, N. and Kawakami, H. (1996) SID'96, p. 81.
[3] Lee, S.H., Lee, S.M. et al. (2001) SID'01, p. 484.
[4] Koma, N., Nishigawa, R. and Tarumi, K. (1996) SID Digest of Tech. Papers, p. 558.
[5] Mizushima, S., Watanabe, N., Shiomi, M. et al. (1999) AMLCD'99, p. 177.
[6] Kim, K.H., Song, J.K. and Souk, J.H. (1998) Proceedings of Asia Display '98, p. 383.
[7] Yeh, P. (1988) Optical Waves in Layered Media, John Wiley & Sons, Ltd.
[8] Kim, S.S. (2004) SID Symposium Digest, p. 760.
[9] Kim, S.S. (2005) SID Symposium Digest, p. 1842.
[10] Kim, J.H., Yeo, Y.S., Park, W.S. et al. (2006) IDW'06, p. 181.
[11] Back, H.I., Kim, Y.B., Ha, K.S. et al. (2000) IDW'00, p. 41.
[12] Choi, W.K., Wu, Y.H., Lee, M.L. et al. (2006) IDW'06, p. 165.
[13] Song, J.H., Lim, D.H., Park, J.B. et al. (2005) IDW'05, p. 103.
[14] Lim, Y.J., Lee, M.H., Lee, G.D. et al. (2007) J. Phys. D: Appl. Phys., 40, p. 2759.
[15] Doornkamp, C., van der Zande, B.M.I., Roosendaal, S.J. et al. (2004) Journal of the SID, p. 233.
4 In-Plane Switching (IPS) LCD Technology for Mobile Applications
InJae Chung and Hyungki Hong
LG Display, Seoul, Korea
4.1 Introduction Liquid crystals were first discovered in 1888 by Friedrich Reinitzer, but it was not until 1974 that the technological and commercial potential of liquid crystals was realized. At that time, the first electronic calculators using LCDs for alphanumeric display were introduced and major electronic companies began to explore the use of LCDs in consumer products such as wrist watches and calculators. It wasn't long before a color active matrix LCD using amorphous-silicon TFTs (Thin Film Transistors) appeared on the scene in the mid-1980s in portable TVs. Another significant development occurred in the late 1980s, when the first notebook computers with TFT LCDs were introduced by several companies, including IBM, NEC, and Sharp. This breakthrough was made possible by technological advances in the semiconductor and PC industries [1, 2]. As notebooks required lightweight and thin flat-panel displays, active matrix TFT LCDs provided significant advantages in form design for portability. Now, more than a century after the discovery of liquid crystals, the mass market for consumer electronics has allowed increasing investment in manufacturing and a further expansion of applications, from mobile devices to large-size TVs, as Figure 4.1, below, illustrates.
Figure 4.1 Various display applications using TFT LCD.
Nowadays, LCD technology has been refined and the extensive development of infrastructure for manufacturing equipment and materials, especially in Asia, has propelled the technology towards practical application in everyday life. Traditionally known shortcomings of LCD compared with CRT for realistic image reproduction included limited viewing angle and color characteristics, slow response time, etc. Limited viewing angle, where the displayed image looked different in color and gray level depending on the viewing direction, was quite a severe limitation and ‘in-plane switching (IPS)’ technology was developed especially to improve the viewing angle characteristics of LCD. Due to its wider viewing angle characteristics and later known stable gray-to-gray response time, IPS mode became widely used for high-end LCD displays and TV applications [3–6]. These days, in mobile applications, the importance of multimedia is increasing. For the display to meet this trend, new characteristics should be considered, such as good viewing angle for portrait and landscape displays, high resolution, and good response time even at lower temperatures. In this chapter, the IPS technology of LCD will be reviewed from the viewpoint of how the IPS mode can satisfy these new requirements of mobile applications. The differences between IPS and VA (Vertical Alignment) modes will be compared and the operational principles and electro-optical characteristics of the IPS mode will be described in more detail.
4.2 LCD Modes A 'positive LC', with positive Δε, aligns parallel to the direction of the applied electric field, and a 'negative LC', with negative Δε, aligns normal to the direction of the electric field, as Figure 4.2 illustrates. Considering two alignment directions of the LC and two directions of the electric field, a total of eight configurations are possible, as described in Table 4.1. The horizontal direction and the vertical direction with respect to the substrates are defined in Figure 4.2. Among these possible combinations of LCD type, only a few are currently used for display applications. For the horizontal direction of the electric field, three possible configurations exist in which the LC molecules move under the applied electric field. The IPS type using positive LC and horizontal
Figure 4.2 Movement of LC under electric field for (a) positive Δε and (b) negative Δε. Dotted ellipse represents the initial LC alignment. (c) Notations of direction are represented.
alignment has inherently wide viewing angle characteristics: as the LC molecules move in-plane, they provide the viewer with a wider angle of view and a more stable color representation over that range of angles than the other modes. These and other merits made this configuration commercially successful, so the terms 'IPS' and 'LC operation with a horizontal electric field' are currently used almost interchangeably. For the vertical direction of the electric field, two configurations are possible for display applications, and the LC molecules move out-of-plane when the electric field is applied. The structure with positive LC molecules and an initially horizontal alignment is known as TN (Twisted Nematic) or ECB (Electrically Controlled Birefringence), depending on whether the LC molecules inside the cell are twisted or not. The structure with negative LC molecules and an initial vertical alignment is known as VA. Figure 4.3 shows the schematic structure of these widely used LC modes with their respective cell configurations. Typically, an LCD has two electrodes, one called the common electrode and the other the pixel electrode. In IPS mode, the two electrodes are formed on only one side of the substrate, and horizontal electric fields are thereby induced. On the other hand, in VA, ECB, and TN modes, the pixel electrodes are formed on the same substrate as the TFT and the common electrodes are placed on the opposite side. So for the IPS mode, positive LC rotates in-plane to align parallel with the horizontal electric field; negative LC in VA mode aligns perpendicular to the vertical electric field; and positive LC in TN or ECB mode aligns parallel to the vertical electric field. The viewing angle dependence is a typical characteristic of LCDs which is hardly observed in emissive displays such as CRT or PDP. This dependence is caused by the angular dependence of the retardation which
Table 4.1 Possible configurations of LC type, alignment direction, and electric field.

LC type    Direction of       Initial direction    Rotation   Example of
           electric field     of LC alignment      of LC      LC mode
Positive   Horizontal         Horizontal           Yes        IPS
Positive   Horizontal         Vertical             Yes        -
Positive   Vertical           Horizontal           Yes        TN, ECB
Positive   Vertical           Vertical             No         -
Negative   Horizontal         Horizontal           Yes        -
Negative   Horizontal         Vertical             No         -
Negative   Vertical           Horizontal           No         -
Negative   Vertical           Vertical             Yes        VA
Figure 4.3 Schematic configurations of various LCD modes and principle of viewing angle dependence.
determines the transmittance of LCD. In the lower part of Figure 4.3, the viewing angle characteristics of different LC modes are also shown. In the case of out-of-plane switching, LC moves in polar and azimuth directions. Especially when the viewing direction is parallel with the optic axis of LC molecules at any gray level, a large decrease of transmittance is observed. In the case of in-plane switching, the optic axis is always in-plane and variation of the effective refractive index is relatively smaller than that of out-of-plane switching. For the VA mode, a multi-domain method had been used to improve the viewing angle characteristics. Figure 4.3 shows the schematic concept for the case of two domains. Figures 4.4 and 4.5 illustrate a schematic pixel structure for IPS and VA modes. For the IPS mode, inter-digital electrodes are placed parallel to each other on one side of the substrate, and the pixel electrode is connected to a data bus through the TFT, while the common electrode is connected to the voltage source. Rubbing angle is defined by the angle between the direction of the planar electrode and the initial alignment direction of the LC. For the VA mode, the viewing angle in one domain is quite narrow so multi-domain structures are generally used, where the LC is tilted to a different direction for each domain. The tilting direction of the VA mode is controlled by fringe fields induced by ITO slit pattern or geometric protrusion as Figure 4.5(b) illustrates. In locating electrode patterns inside pixels of fixed area, the aperture ratio for larger transmittance and prevention of pixel voltage fluctuation due to bus line electric fields should always be considered to get a good display performance. For VA, common electrodes are sometimes placed on the lower substrate to make a storage capacitor and to prevent any disturbing effect on tilt direction due to the electric field caused by bus lines. Regarding the fabrication aspect, even though the shapes of ITO pixel electrodes are different for each mode, the fabrication processes on the TFT substrate side are quite similar for these modes. The process of a five-mask TFT structure is shown in Figure 4.6(a). In the first-mask process, the gate bus and common electrodes patterns are made. In the second-mask process, the active layer on TFT is patterned. In the
Figure 4.4 A schematic pixel structure of top view for (a) IPS and (b) VA mode.
third-mask process, source/drain and data bus electrodes are made. In the fourth-mask process, passivation and gate insulating layers are patterned at the same time to make contact hole and pad contact. In the fifth-mask process, ITO electrodes are made inside each pixel and pad contacts are covered. A transparent low-dielectric material layer such as photo acryl has been used before the pixel
Figure 4.5 A schematic pixel structure of cross-section for (a) IPS and (b) VA mode.
Figure 4.6 Production process of IPS and VA for (a) five-mask TFT process with optional photo acryl process; (b) C/F process; and (c) cell process.
ITO mask step to improve light efficiency for most currently mass-produced VA modes and some models of the IPS mode. The fabrication processes on the color filter substrate side differ for each mode. IPS mode does not need an ITO electrode, though a planarization layer is used to flatten the color filter layer and to prevent direct contact between the LC and the color filter material. In VA mode, construction of a domain-control structure is necessary to make multiple domains; such a structure is generally made by the additional process of ITO patterning or protrusion generation. In the cell process, IPS needs the rubbing process to align the LC molecules horizontally. VA has a simpler cell process, as rubbing is unnecessary. However, in VA mode, unequal distances between domain boundaries cause non-symmetric viewing angle characteristics, so position control at substrate assembly should be more accurate in VA mode than in other modes.
4.3 Operational Principle of IPS Mode 4.3.1 Voltage Transmittance Relation A birefringent medium induces a phase change in incident light and a uniaxial medium placed between crossed polarizers causes non-zero transmittance, as Equation (1) and Figure 4.7 show. Here, ψ in the first sine term on the right is the angle of the optic axis in the uniaxial medium and R in the second sine term is the size of the retardation. Maximum transmittance occurs when ψ is 45° and retardation R is half the wavelength of incident light [4–6]:

T = sin²(2ψ) sin²(πR(θ, φ)/λ)     (1)
Assuming that the LC molecules inside the LC cell align together at the same angle under an electric field, the LC cell can be treated as a uniaxial medium whose optic axis changes depending on the driving voltage. In IPS mode, the LC cell is placed between crossed polarizers, and the initial alignment direction of the LC is parallel to the optic axis of the polarizer or analyzer. As the LC moves only in-plane, the retardation
Figure 4.7 Uniaxial medium between crossed polarizers.
of the LC cell can be treated as constant, independent of the driving voltage. Under this simplified model of a uniaxial medium, the transmittance can be written as follows:

T(V) = sin²(2ψ(V)) sin²(πdΔn(θ, φ)/λ)     (2)
As the optic axis of the LC molecules approaches 45° from the initial alignment, the transmittance reaches a maximum. If the voltage is higher than the voltage of maximum transmittance, the transmittance decreases, as Figure 4.8(a) illustrates.
Figure 4.8 Typical shape of the voltage–transmittance curve for (a) IPS mode and (b) VA mode.
In the case of an LC mode like VA or ECB using out-of-plane switching of the LC molecules, the transmittance changes due to the change of retardation while the azimuth angle remains unchanged, as follows:

T(V) = sin²(2 × 45°) sin²(πdΔn_eff(θ, φ, V)/λ)     (3)
In Equation (2) of the IPS mode, the transmittance is mostly determined by rotation of the LC optic axis through the driving voltage. For normalized transmittance, the second sine term can be treated as constant. This implies a uniform shape of spectral transmittance under various driving voltages. On the other hand, transmittance of out-of-plane switching is determined by the second sine term of Equation (3), whose wavelength dependence cannot be reduced by normalization of the transmittance. This means that the transmittance ratio between different colors cannot be kept constant under different driving voltages, and that other methods, such as signal modification, should be employed to reduce this phenomenon.
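The contrast between Equations (2) and (3) can be made concrete with a short numerical sketch (the cell gap and birefringence below are assumed values, not from the text): normalizing the IPS spectra at two drive levels gives identical shapes, while the VA spectra do not.

import numpy as np

wl = np.linspace(450e-9, 650e-9, 5)      # a few visible wavelengths (m)
d = 4.0e-6                               # assumed cell gap (m)
dn = 0.0688                              # assumed birefringence, so dn*d ~ 275 nm

def t_ips(psi_deg):
    # Eq. (2): retardation fixed, azimuth psi set by the drive voltage.
    return np.sin(np.radians(2 * psi_deg)) ** 2 * np.sin(np.pi * dn * d / wl) ** 2

def t_va(dn_eff):
    # Eq. (3): azimuth fixed at 45 deg, effective retardation set by the drive.
    return np.sin(np.pi * dn_eff * d / wl) ** 2

for label, lo, hi in (("IPS", t_ips(20), t_ips(45)), ("VA", t_va(0.3 * dn), t_va(dn))):
    # Normalizing to the peak shows whether the spectral shape changes
    # between a mid gray (lo) and the white state (hi).
    print(label, np.round(lo / lo.max(), 3), np.round(hi / hi.max(), 3))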
Figure 4.9 Trajectory of polarization change between minimum and maximum transmittance on Poincare´ sphere for (a) IPS mode and (b) VA mode.
Another interesting difference between in-plane switching and out-of-plane switching is the polarization state at gray levels. In the case of out-of-plane switching like VA or TN, incident linear polarization changes into elliptical polarization at intermediate gray levels, except at maximum transmittance. In IPS mode, incident polarization is still linearly polarized after passing through the LC cell at any driving voltage, while the polarization direction changes. Figure 4.9 illustrates these differences: the Stokes vector of the IPS mode remains on the equator of the Poincaré sphere, while the Stokes vector of the VA mode changes from one linear polarization through elliptical polarization to another linear polarization [7].
4.4 LC Equation of Motion under an Electric Field In order to understand LC motion under an electric field, the Gibbs free energy is defined and an equation of motion is derived from the Euler–Lagrange equation [4, 6]. As Figure 4.10 illustrates, a coordinate system is chosen such that the x–y plane is parallel with the substrates and the rubbing direction is chosen to be parallel with the x axis.
Figure 4.10 Schematic x, y, z coordinate representation of LC cell for (a) z coordinate; and (b) x,y coordinate.
In a real IPS cell, a non-zero rubbing angle is necessary to make the LC molecules rotate in the same direction. In that sense, it plays a role similar to the pre-tilt angle of out-of-plane switching LC modes, in which a non-zero pre-tilt prevents the generation of domains. As the threshold condition is known to exist only when the rubbing angle is zero for the former, and the pre-tilt angle is zero for the latter, the rubbing angle is selected as zero here. Assuming that only a uniform horizontal field (Ex, Ey, Ez) = E(cos θ, sin θ, 0) exists (with the field here directed along y, perpendicular to the rubbing direction), the LC rotates only in the x–y plane, and the elastic free energy density fe and dielectric energy density fE depend only on the azimuth angle φ and can be written as follows:

fe = (1/2) K22 (dφ/dz)²
fE = −(1/2) ε0|Δε| E² sin²φ     (4)
Here K22, D, and Δε represent the twist elastic constant, the electric displacement vector, and the dielectric anisotropy. From Equation (4), the Gibbs free energy density can be written as

f = fe + fE = (1/2) K22 (dφ/dz)² − (1/2) ε0|Δε| E² sin²φ     (5)
By applying the Euler–Lagrange relation to Equation (5), the following differential equation is obtained:

∂f/∂φ − d/dz[∂f/∂(dφ/dz)] = −ε0|Δε|E² sinφ cosφ − K22 d²φ/dz² = 0,
i.e. d²φ/dz² + (1/ξ²) sinφ cosφ = 0, where 1/ξ² = ε0|Δε|E²/K22     (6)
When the twist is small and the anchoring energy is strong enough, the LC is fixed at the boundaries and the boundary condition is given as φ(0) = φ(d) = 0, where d is the cell gap. Equation (6) can then be approximated as a harmonic equation, Equation (7), and the following threshold condition can be derived:

d²φ/dz² + (1/ξ²) sinφ cosφ ≈ d²φ/dz² + φ/ξ² = 0     (7)

φ(z) = 0 for E < Ec;   φ(z) = φm sin(πz/d) = φm sin(z/ξc) for E > Ec     (8)

where Ec = (π/d)·√(K22/(ε0|Δε|)),   ξc = d/π,   1/ξc² = ε0|Δε|Ec²/K22
The threshold voltage can be represented as

Vc = Ec·l = (πl/d)·√(K22/(ε0|Δε|)), where l is the distance between the electrodes     (9)
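Plugging representative (assumed) material and cell parameters into Equations (8) and (9) gives a feel for the magnitudes involved; the Python snippet below uses typical textbook values, not data from this chapter.

import numpy as np

eps0 = 8.854e-12        # vacuum permittivity (F/m)
K22 = 7.0e-12           # twist elastic constant (N) -- assumed typical value
d_eps = 8.0             # dielectric anisotropy |delta eps| -- assumed
d = 4.0e-6              # cell gap (m) -- assumed
l = 10.0e-6             # electrode distance (m) -- assumed

Ec = (np.pi / d) * np.sqrt(K22 / (eps0 * d_eps))   # Eq. (8)
Vc = Ec * l                                        # Eq. (9)
print("E_c = %.2e V/m, V_c = %.2f V" % (Ec, Vc))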
The equations above imply that the LC material parameters K22 and Δε and the cell structure parameters d and l are the key factors determining the LC molecular motion. Figure 4.11(a) illustrates the effect of cell structure design on the T–V curve: an increase of the electrode distance increases the overall transmittance and shows a saturation trend. Though not included in the above equations, the rubbing angle is also an
Figure 4.11 Dependence of voltage–transmittance curve on (a) the electrode distance and (b) the rubbing angle.
important parameter determining the electro-optical characteristics. The rubbing angle changes the component of the electric field interacting with the LC molecules, which in turn affects the EO characteristics for the same electrode structure, as Figure 4.11(b) illustrates [8]. For visible light, humans are most sensitive to wavelengths around 550 nm, corresponding to green, so the LCD design is optimized for maximum transmittance at 550 nm. From the transmittance equation, this corresponds to a retardation of Δn·d = 550 nm/2 = 275 nm; experimentally, however, the maximum transmittance of IPS was observed at a retardation roughly 20% larger, as Figure 4.12(a) illustrates [9]. The cause of this discrepancy was explained by the twisting nature of the LC molecules inside the LC cell. The anchoring force at the boundary of an LC cell determines the alignment state of the LC molecules inside the cell and the response time, so strong anchoring is preferred. From Equation (8), the LC motion can be approximated by sine functions, as Figure 4.12(b) illustrates. Such twisting of the LC molecules causes a decrease in the effective retardation compared with the retardation value of the initial non-twisted LC alignment. It has been reported that the effective refractive index at the maximum transmittance condition is reduced to 70–80% of the initial refractive index. Similar phenomena of the effective refractive index have also been reported in FFS (Fringe Field Switching) mode, another type of in-plane switching structure where two types of electrodes overlap in
Figure 4.12 (a) Dependence of voltage–transmittance curve on cell gap; and (b) schematic distribution of optic axis of LC molecule along the direction of cell gap at various voltages.
Figure 4.13 (a) Two-dimensional LC alignment under an electric field and (b) simulated transmittance along the direction normal to the electrodes. Regions of low transmittance correspond to the electrode positions.
the vertical direction [10]. In this case, LC molecules move in the polar direction as well as azimuth direction and the optimum initial retardation condition is reported to be around 400 nm. When electric fields of three-dimensional distributions are considered, the motions of LC molecules inside pixels become more complicated. Figure 4.13 shows LC alignment under an electric field and the transmittance change along the direction normal to the electrodes, which is parallel to the y direction of Figure 4.10. As the driving voltage increases, the transmittance between electrodes increases to a maximum, while the transmittance directly on the electrodes is relatively unchanged. This low transmittance is caused by the direction of the electric field as the electric fields on the upper region of electrodes are mostly in the vertical direction. To reduce these regions of low transmittance, the width of electrodes should be decreased compared to the distance between electrodes.
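The transmittance of a non-uniform azimuth profile such as that of Figure 4.12(b) can be evaluated by slicing the cell into thin wave plates and multiplying their Jones matrices. The Python sketch below does this for a sinusoidal twist profile between crossed polarizers; it is a simplified illustration (normal incidence, strong anchoring, no pretilt, no fringe-field effects) and is not the simulation used for the figures in this chapter.

import numpy as np

def twisted_cell_transmittance(dnd_nm, phi_mid_deg, wavelength_nm=550.0, n_layers=200):
    # Stack of thin wave plates (Jones calculus) for the azimuth profile
    # phi(z) = phi_m * sin(pi*z/d), phi_m being the mid-plane twist angle.
    z = (np.arange(n_layers) + 0.5) / n_layers
    phi = np.radians(phi_mid_deg) * np.sin(np.pi * z)
    gamma = 2.0 * np.pi * dnd_nm / wavelength_nm / n_layers   # retardation per slice
    jones = np.eye(2, dtype=complex)
    for p in phi:
        c, s = np.cos(p), np.sin(p)
        rot = np.array([[c, -s], [s, c]])
        wave_plate = np.diag([np.exp(-1j * gamma / 2), np.exp(1j * gamma / 2)])
        jones = rot @ wave_plate @ rot.T @ jones
    e_out = jones @ np.array([1.0, 0.0])     # input polarized along the rubbing (x) axis
    return abs(e_out[1]) ** 2                # crossed analyzer along y

# Best achievable transmittance over the drive (parameterized here by the
# mid-plane azimuth) for a few retardation values, to locate the optimum dn*d.
for dnd in (275, 300, 330, 360):
    best = max(twisted_cell_transmittance(dnd, a) for a in range(0, 91, 5))
    print("dn*d = %3d nm: max T = %.3f" % (dnd, best))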
4.5 Schematic Diagram of IPS Pixel Structures In applying concepts of the IPS mode to real display applications, pixel structure and production process should be considered. At least three kinds of electrodes should be made on the lower substrate
of TFT LCD: gate bus electrodes to send gate pulse signals; data bus electrodes to send data signals; and ITO electrodes to affect the LC and make pad contact. Interdigital electrodes in IPS mode can be made from any of these three electrodes. Of the electrodes in IPS mode, one electrode – the ‘common electrode’ – is connected to a constant voltage, while the other – the ‘pixel electrode’ – is connected to the data signal. The pixel electrode is not connected to an outside voltage source at gate-off state and is affected by the electric field of bus lines surrounding pixels. To prevent unwanted interaction between the pixel electrode and bus lines, pixel electrodes should be placed far from bus lines [11]. Another general rule is to use a symmetric design of electrode structure. During the production process, minute misalignments among electrodes are inevitable and the EO characteristics of the symmetric structure are less affected compared with those of the asymmetric one.
Figure 4.14 Example of IPS pixel structure of storage on common type.
Figure 4.14 provides an example of an IPS pixel structure [13]. Common electrodes and gate bus lines are placed on the same layer; pixel electrodes and data bus lines are formed on the same layer from metallic material. For the IPS mode, LC molecules located directly on top of the pixel electrodes, or the common electrodes, may not switch in-plane, as the electric field direction at that position is not horizontal. In this regard, the pixel electrodes in IPS mode do not need to be made of transparent ITO. Common electrodes in the vertical direction are placed adjacent to the bus lines to prevent a coupling effect between pixel electrodes and bus lines. To increase the aperture ratio and obtain good LCD performance, the design of the storage capacitor is an important issue. In the examples in Figures 4.4(a) and 4.14, the storage capacitor is formed at the overlapping area of the common electrode and pixel electrode. The storage capacitor can also be formed on top of the gate bus lines by overlapping gate bus lines and pixel electrodes. Another structure is hybrid storage, in which storage on the common electrode and storage on the gate electrode are employed together. Figure 4.15 shows various examples of electrode location for the horizontal cross-section of IPS pixels. Common electrodes are generally made on the same layer as gate electrodes. Making common electrodes on the same layer as data bus lines is not recommended, because a short can easily occur between two adjacent electrodes on the same layer. Pixel electrodes can be made from ITO or from metal layers on the same layer as the gate or data bus lines. Figure 4.16 illustrates the schematic electrode structures reported to improve the original IPS performance. For improving the viewing angle, a two-domain structure using zigzag electrodes has been proposed, as Figure 4.16(b) shows [13]. Horizontal IPS (H-IPS) and FFS have been reported as ways of improving the performance of IPS [14, 10]. For FFS, an additional ITO step is necessary to make two ITO layers, and the storage capacitance is formed between these two ITO layers, so areas used solely for a capacitor in other pixel configurations can be removed. Similar viewing angle characteristics have been reported for these variations of IPS mode.
Figure 4.15 Examples of electrode location for the cross-section of IPS pixel structure.
Figure 4.16 Schematic electrode configurations in the active area of the pixel for (a) original IPS; (b) two-domain IPS; (c) horizontal IPS; (d) FFS.
4.6 Characteristics of IPS Mode 4.6.1 Response Time Characteristics A model of the response mechanism was first proposed by Oh-e et al. [4, 5]. From the equation of motion derived above, the response time is derived from the equilibrium condition where elastic and electrical deformation are matched by a viscous force. For simplicity, the flow effect is neglected:
K22 ∂²φ/∂z² + ε0|Δε|E² sinφ cosφ = γ1 ∂φ/∂t     (10)
Here γ1 is the twist-viscosity coefficient. With the boundary condition of φ(0) = φ(d) = 0, the solution of the above equation can be derived as

φ(z, t) = φm sin(πz/d) exp(−t/τ)     (11)
where τ is a relaxation time constant. When the voltage changes from electric field E to zero, the falling time constant τf is determined as

τf = γ1 d²/(π² K22) = γ1/(ε0|Δε|Ec²)     (12)
The above equation implies that the falling time is determined by the viscosity coefficient, cell gap, and twist elastic constant, but not by the initial LC state or the initial electric field. On the other hand, when the voltage changes from zero to a non-zero value E, the rising time constant is determined as

τr = γ1 / [ε0|Δε|(V/l)² − π²K22/d²] = γ1 / [ε0|Δε|(E² − Ec²)]     (13)

where l is the distance between the electrodes. The rising time constant shows a strong dependence on the applied electric field of the final state. For the specification of an LCD, the transient times between 10% and 90% of the initial and final transmittance are generally used. Though the time constants above differ from this specification, the general trends are known to agree with each other [15]. In early office applications of monitor displays, which were mostly used for presenting text and graphics, the response time between the black and white states was sufficient to describe the characteristics of the LCD. However, as moving images became increasingly used for multimedia content in TVs and other displays, the response time between different gray levels became more important. For that reason, gray-to-gray (G2G) response time (RT) and average G2G RT were introduced. For mobile applications, the importance of multimedia is also increasing and new applications such as PMP (Personal Multimedia Player) and mobile TV are gaining popularity. Figure 4.17 illustrates an example of the G2G RT of LC modes, where 9 of the 256 gray levels are represented. The values on the left represent the initial gray levels and the values on the right represent the final gray levels. For normally black LC modes like IPS and VA, the change from a non-zero gray level to the zero gray level corresponds to the falling time of Equation (12). In Figure 4.17, this is the case where the final state is gray 0, and relatively small variation is observed; this corresponds to the independence of the falling time with respect to the initial LC state. However, if the final state is not gray 0, the ratio of G2G RT to black/white RT is known to be significantly larger than 1, especially for the VA mode. For mobile applications, the working temperature can be lower than room temperature.
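As a rough numerical illustration of Equations (12) and (13), the sketch below uses the same assumed parameters as the threshold example earlier, plus an assumed twist viscosity; the resulting millisecond-scale values are indicative only.

import numpy as np

eps0, K22, d_eps = 8.854e-12, 7.0e-12, 8.0   # as in the threshold example above
gamma1 = 0.08       # twist viscosity (Pa.s) -- assumed typical value
d, l = 4.0e-6, 10.0e-6
V = 5.0             # drive voltage for the rising transition (assumed)

Ec = (np.pi / d) * np.sqrt(K22 / (eps0 * d_eps))
tau_f = gamma1 * d ** 2 / (np.pi ** 2 * K22)                     # Eq. (12)
tau_r = gamma1 / (eps0 * d_eps * ((V / l) ** 2 - Ec ** 2))       # Eq. (13)
print("tau_fall = %.1f ms, tau_rise = %.1f ms" % (tau_f * 1e3, tau_r * 1e3))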
Figure 4.17 Response time characteristics from one gray level to another gray level for (a) IPS mode and (b) VA mode at room temperature.
The discrepancy between IPS and VA is reported to increase at lower temperatures [16]. In TV and monitor applications, reduction of RT by a driving scheme has been reported [17]. However, this method requires additional circuits and ICs and seems to be unsuitable for mobile applications, which must meet strict requirements on mechanical dimensions. The rising and falling time constants show a strong dependence on the parameters of the materials and the pixel structure as well as the driving voltage, yet these parameters also affect other EO characteristics. For example, a decrease in electrode distance reduces the rising time but decreases the transmittance as well. Optimization of these parameters has continued toward faster response times, for example by designing materials for a low cell gap and patterned spacers for a stable process condition. Design of new LC components is another important factor, as Figure 4.18 illustrates.
Figure 4.18 Approach for reducing response time. Reproduced from [18] by permission of SID.
4.7 Light Efficiency For mobile applications, low power consumption is an important issue to ensure longer working time. Various methods have been reported to this end. One example is an increase in the panel transmittance by the arrangement of subpixels. Figure 4.19 illustrates the pixel image of a quad subpixel structure where each pixel consists of four rectangular subpixels of red, green, blue, and white. There is no C/F layer on the white subpixel; consequently, the
Figure 4.19 Schematics of subpixels of (a) RGB stripe patterns and (b) RGBW quad patterns; (c) photo of RGBW quad subpixel.
transmittance through the white subpixel is much higher than the red, green, and blue subpixels, which absorb about 70% of the incident light. Therefore, the maximum luminance can theoretically increase by about 50% compared with the RGB subpixel structure of the same pixel size. For the purpose of increasing the aperture ratio of each subpixel, a high aperture structure has been reported which uses a thick insulating layer such as photo acryl to reduce the electric field derived from bus lines. Figure 4.20(a) illustrates a high aperture structure for the IPS mode. The thick insulating layer prevents deterioration of the data signal caused by coupling between overlapped electrodes, so common electrodes can be placed on top of bus lines without degrading the signal. As bus lines prevent light leakage, the area of black matrix (BM) can be removed or reduced. In the case of a high aperture structure for the VA mode of Figure 4.20(b), electric coupling between adjacent pixel electrodes on the same layer cannot be prevented by an insulating layer. So BM is necessary.
Figure 4.20 High aperture structure for (a) IPS mode and (b) VA mode.
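Returning to the RGBW quad of Figure 4.19, the quoted luminance gain of roughly 50% can be reproduced with simple area bookkeeping; the numbers below (about 30% filter transmission, an ideal white sub-pixel, and equal aperture ratios) are assumed round values, not measured data.

# Rough area bookkeeping for the RGBW quad structure (illustrative only).
cf_transmission = 0.30
rgb_stripe = 3 * (1.0 / 3.0) * cf_transmission                   # three 1/3-width sub-pixels
rgbw_quad = 3 * (1.0 / 4.0) * cf_transmission + (1.0 / 4.0) * 1.0
print("white-state luminance gain: %.0f%%" % (100 * (rgbw_quad / rgb_stripe - 1)))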
Nowadays, the need for higher ppi (pixels per inch) is increasing in order to display a larger quantity of information on a limited screen size. To meet such requirements, LCDs of higher ppi have been reported, such as 300 ppi (VGA at a 2.6″ diagonal size) and 400 ppi (VGA at a 2.0″ diagonal size). A comparison between IPS mode and TN mode under high aperture structures has been reported, where the aperture ratio of IPS becomes larger than that of TN above 340 ppi [19]. As the aperture ratio of VA mode is always smaller than that of TN mode, it is expected that IPS mode will be better suited for mobile applications of high ppi.
4.8 Viewing Angle Characteristics For mobile applications such as digital cameras and hand-held phones, users can use the display in the landscape and portrait positions. As the left and right eyes see the display at slightly different angles, viewing angle characteristics should be symmetric in the vertical direction and horizontal direction, respectively. Often, several people watch a small-size display together, so a wide viewing angle is still an important requirement for mobile applications.
Figure 4.21 Angular dependence of gray level–brightness curve for (a) IPS mode; and (b) VA mode (film compensated 4 domain).
As mentioned earlier in this chapter, the IPS mode has inherently wide viewing angle characteristics. Thus LCDs using the IPS mode can be made thinner than LCDs using TN or VA, which need compensation films a few hundred µm thick to obtain a wide viewing angle. To view the same image from various positions, the relation between gray level and brightness should remain constant, regardless of viewing angle. Figure 4.21 illustrates the brightness change for the same gray level at different angles: the curve of the IPS mode remains unchanged, while that of the VA mode shows brightness deformation at lower and medium gray levels. This means that the relative brightness inside one image is deformed for the VA mode. This deviation affects not only the contrast ratio but also the color characteristics. In LCDs, color is represented by the combined luminance of red, green, and blue subpixels at different gray levels. If the luminance of each subpixel changes by a different ratio at oblique angles, the user will perceive a deformed color. While a VA pixel structure using two TFTs has been reported to decrease this deformation for TVs of less than 50 ppi, it is questionable whether this two-TFT VA structure can be applied to mobile applications of much higher resolution. While a wide viewing angle is an important issue, a narrow viewing angle is sometimes needed for the privacy of the user. Many methods have been reported to obtain narrow viewing angle characteristics or to switch between narrow and wide viewing angles. Among these, a method using the viewing angle characteristics of the LC mode has been reported, where subpixels of the IPS and ECB modes coexist inside one pixel. The viewing angle is controlled by combining the different viewing angles of the IPS and ECB modes, as Figure 4.22 illustrates [20].
Figure 4.22 Viewing angle control method which combines (a) subpixels of IPS mode and (b) subpixels of ECB in the same panel.
4.9 Color and Gray Level
While the color performance of a display is generally measured with pure red, green, and blue colors, most of the colors we see and use are mixed colors. In addition, the relation between gray level and
Figure 4.23 Viewing angle dependence of the effective color gamut in the CIE (u′, v′) chromaticity diagram for viewing angles from 0° to 80°: (a) IPS mode; (b) VA mode. The color gamut of the VA mode decreases due to the deformation of the gamma curve.
perceived brightness is not linear, so the luminance at low gray levels should be controlled more accurately. Figure 4.23 illustrates how the color deviation described in the previous section affects the effective size of the color gamut. The luminance of gray level 16 is only about 0.22% of the luminance of gray level 255 for a gamma setting of 2.2. If a mixed color of (R, G, B) = (255, 16, 16) is displayed, for example, it is perceived as pure red and is used as the red of the standard color chart. However, at an oblique angle the relative luminance of the red, green, and blue subpixels changes, and this results in a decrease in the chroma and vividness of the color.

The equation of motion described earlier in this chapter does not include the effect of the rubbing angle for the IPS mode and the pre-tilt angle for the VA mode. It is known that a threshold voltage exists when these angles are zero for each corresponding mode. For the VA mode, the pre-tilt angle is near zero, and threshold behavior at low gray levels therefore remains, as Figure 4.24(a) illustrates. For the IPS mode, a rubbing angle of 15–20° is mostly used for display applications, so the transmittance changes smoothly at low gray levels. In LCDs, the transmittance of each subpixel is controlled by the driving voltage assigned to each gray level. Figure 4.24(b) illustrates the relationship between gray level and driving voltage. When the shape of the curve is steep, as it is for the VA mode, it is difficult to reproduce smooth transmittance changes because of the limited voltage resolution of the driving IC.
Figure 4.24 (a) Transmittance–voltage characteristics and (b) standard gamma characteristics for the IPS and VA modes. The IPS mode shows a smooth curve in each case.
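To make the gray-level figures above concrete, the sketch below evaluates the standard power-law relation between digital gray level and relative luminance for a gamma of 2.2; the gray levels 16 and 255 are the ones used in the text.

```python
def relative_luminance(gray: int, gamma: float = 2.2, max_gray: int = 255) -> float:
    """Relative luminance of a digital gray level under a power-law gamma curve."""
    return (gray / max_gray) ** gamma

ratio = relative_luminance(16) / relative_luminance(255)
print(f"L(16)/L(255) = {ratio:.4%}")  # about 0.22%, as stated in the text
```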
IN-PLANE SWITCHING (IPS) LCD TECHNOLOGY FOR MOBILE APPLICATIONS
93
Figure 4.25 IPS technology for outdoor applications.
4.10 IPS Mode for Outdoor Applications
To obtain good outdoor performance, various approaches have been reported; these are summarized in Figure 4.25. When an LCD is used under strong illumination, such as direct sunlight, light reflected at the LCD overlaps the light coming from the LCD, and this degrades the image quality. Light reflection at the LCD is caused by the metal lines inside the LC cell and by the refractive-index difference between the polarizer and the air. The former cause can be reduced by using metal lines of low reflectance. The latter can be reduced by applying an anti-reflection coating to the surface of the polarizer; the coating consists of multiple layers of different refractive indices and exploits destructive interference. Temporarily increasing the backlight luminance for outdoor usage is also effective in improving outdoor readability.

Pixels of the transflective mode consist of reflective and transmissive regions. Transflective structures using compensation films outside the LC cell were first commercialized for the ECB mode, and various structures for other LC modes were reported later. The key consideration in compensation film design is to maintain a low black luminance for both reflection and transmission. Wideband circular polarizers were designed for this purpose [21]. Figure 4.26 illustrates an example of a proposed structure [22]. In this configuration, the LC layer works together with half-wave plate (HWP) compensation films to form a wideband quarter-wave plate (QWP). One limitation of this kind of structure is that the wideband polarizer affects the viewing performance of the transmissive region.
Figure 4.26 A schematic configuration for the transflective dual-gap IPS mode using compensation films outside the LC cell. The stack consists of a polarizer (0°), an HWP (15°), the LC layer (75°), an HWP (165°), an HWP (105°), and a polarizer (90°).
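The anti-reflection coating mentioned in Section 4.10 relies on destructive interference. As a minimal illustration only, the sketch below treats it as a single-layer quarter-wave coating on a surface of assumed index 1.5 (real coatings are multilayer, as noted in the text).

```python
import math

n_air = 1.0
n_surface = 1.5      # assumed index of the polarizer's outer surface
wavelength_nm = 550  # assumed design wavelength

# Uncoated surface reflectance at normal incidence (Fresnel).
r_uncoated = ((n_surface - n_air) / (n_surface + n_air)) ** 2

# Ideal single-layer AR coating: index = sqrt(n_air * n_surface) and a
# quarter-wave optical thickness, so the reflections from the two interfaces
# are equal in amplitude and opposite in phase.
n_coat = math.sqrt(n_air * n_surface)
t_coat = wavelength_nm / (4 * n_coat)

print(f"uncoated reflectance ~ {r_uncoated:.1%}")  # ~4%
print(f"ideal coating index  ~ {n_coat:.2f}")      # ~1.22
print(f"coating thickness    ~ {t_coat:.0f} nm")   # ~112 nm
```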
To apply different compensation structures to the reflective and transmissive regions, the compensation layer should be made inside the LC cell to prevent parallax problems. One method of making such a patterned compensation layer inside an LC cell has been reported [23]. As the compensation layers are made inside the LC cell, compatibility with the existing process becomes important. If the compensation layer is made on the color-filter substrate side, compatibility with cell processes, such as the alignment-layer and sealant baking temperatures, becomes an important issue. If the compensation layer is made on the TFT substrate side, it can be placed either on top of, or below, the pixel electrodes. The electric field through the compensation layer and the position of the contact holes between the storage capacitor and the pixel electrode should then be considered. When the patterned compensation layer is placed only over the reflective region, the electro-optic performance of the transmissive region is equivalent to that of a conventional transmissive LC mode. Figure 4.27 illustrates a schematic configuration for the transflective dual-gap IPS mode with patterned compensation layers placed on the color-filter substrate side inside the LC cell [24, 25].
Figure 4.27 A schematic configuration of the transflective dual-gap IPS mode with patterned compensation layers placed on the color-filter substrate inside the LC cell for (a) the reflective region and (b) the transmissive region.
Figure 4.28 Contour map of reflectance as a function of the retardations of the compensation layer and the LC layer. The optic axis of the first retarder is at 22.5° and the optic axis of the LC is at 90°. The horizontal and vertical axes represent the retardation of the first retarder and of the LC cell, respectively, divided by 275 nm.
The wideband characteristics of the reflective region should be optimized under the constraint that the optic axis of the LC is parallel or perpendicular to the polarizers. Figure 4.28 shows a contour map of reflectance as a function of the retardation values of the LC and the compensation film for the configuration of Figure 4.27(a). A low-reflectance condition exists for a compensation film with its optic axis at 22.5° and a retardation of 275 nm, combined with an LC retardation of half of 275 nm (about 137 nm).
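The low-reflectance condition quoted above can be checked with a simple normal-incidence Jones-calculus sketch. The model below is an idealization: interface reflections and dispersion are ignored, the reflector is treated as an ideal mirror, and each retarder is represented by the same lab-frame matrix on the forward and return passes.

```python
import numpy as np

def rot(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

def retarder(ret_nm, axis_deg, wavelength_nm):
    """Jones matrix of a linear retarder with the given retardation and axis angle."""
    gamma = 2 * np.pi * ret_nm / wavelength_nm
    d = np.diag([np.exp(-0.5j * gamma), np.exp(0.5j * gamma)])
    th = np.radians(axis_deg)
    return rot(th) @ d @ rot(-th)

def reflectance(wavelength_nm):
    pol = np.array([[1, 0], [0, 0]])                   # polarizer axis at 0 deg
    film = retarder(275.0, 22.5, wavelength_nm)        # compensation film
    lc = retarder(275.0 / 2, 90.0, wavelength_nm)      # LC layer, reflective region
    e = lc @ film @ (pol @ np.array([1.0, 0.0]))       # forward pass to the mirror
    e = film @ lc @ (-e)                               # ideal mirror, then return pass
    e_out = pol @ e                                    # back through the polarizer
    return float(np.real(np.vdot(e_out, e_out)))

for wl in (450, 550, 650):
    print(wl, "nm :", round(reflectance(wl), 4))
# At the 550 nm design wavelength the reflectance is essentially zero,
# consistent with the low-reflectance condition of Figure 4.28.
```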
4.11 Summary
The performance requirements for mobile displays are gradually becoming higher. The IPS mode shows good gray-to-gray response time characteristics without depending on a complicated driving
scheme. Also, the IPS mode has inherently wide viewing angle characteristics, in contrast to other wide viewing angle LC modes that need layers of compensation films a few hundred micrometers thick. Considering its good electro-optical performance, simple driving scheme, and thin form factor, we consider the IPS mode to be the most suitable LC mode for mobile display applications.
References
[1] de Gennes, P.G. (1974) The Physics of Liquid Crystals, London: Oxford University Press.
[2] Bahadur, B. (Ed.) (1990) Liquid Crystals: Applications and Uses, Vol. 1, World Scientific.
[3] Yeo, S.D., Oh, C.H., Lee, H.W. and Park, M.H. (2005) SID 05, p. 1738.
[4] Oh-e, M. and Kondo, K. (1995) Appl. Phys. Lett., 67, 3895.
[5] Oh-e, M. and Kondo, K. (1996) Appl. Phys. Lett., 69, 623.
[6] Oh-e, M. and Kondo, K. (1997) Liquid Cryst., 22, 379.
[7] Bigelow, J.E. and Kashnow, R.A. (1977) Applied Optics, 16, 2090.
[8] Hong, H.K. and Seo, C.R. (2004) Jpn. J. Appl. Phys., 43, 7639.
[9] Satake, T., Nishioka, T., Saito, T. and Kurata, T. (2001) Jpn. J. Appl. Phys., 40, 195.
[10] Lee, S.H., Lee, S.L. and Kim, H.Y. (1998) Appl. Phys. Lett., 73, 2881.
[11] Ohta, M., Oh-e, M. and Kondo, K. (1995) Asia Display '95, Hamamatsu, Japan, p. 707.
[12] Jung, S.M., Jang, S.H., Park, K.B., Lee, S.C., Seo, C.R. and Park, W.S. (2004) SID '04, p. 618.
[13] Aratani, S., Klausmann, H., Oh-e, M. et al. (1997) Jpn. J. Appl. Phys., 36, L27.
[14] Kim, D.S., Kang, B.G., Choi, W.S. et al. (2004) SID '04, p. 245.
[15] Wang, H., Wu, T.X., Zhu, X. and Wu, S.T. (2004) Journal of Applied Physics, 95, 5502.
[16] http://www.sanyo-epson.com.
[17] Kawave, K. and Furuhashi, T. (2001) SID 2001, p. 998.
[18] Lim, C.S. et al. (2003) IMID 2003 (Daegu, Korea), p. 68.
[19] Hwang, H.W. et al. (2005) SID 2005 (Boston), p. 344.
[20] Jin, H.S. et al. (2006) SID 2006 (San Francisco), p. 729.
[21] Baek, H.I., Kim, Y.B., Ha, K.S. et al. (2000) IDW '00 (Japan), p. 41.
[22] Lee, G.S., Kim, J.C., Yoon, T.H. et al. (2006) SID 06 (San Francisco), p. 813.
[23] van der Zande, B.M.I., Nieuwkerk, A.C., van Deurzen, M. et al. (2003) SID 03, p. 194.
[24] Hong, H.K. and Shin, H.H. (2006) IMID 06 (Daegu), p. 822.
[25] Tanno, J., Norimoto, M., Igeta, K. et al. (2006) IDW 06 (Japan), p. 635.
5 Transflective Liquid Crystal Display Technologies
Xinyu Zhu,1 Zhibing Ge,2 and Shin-Tson Wu1
1 College of Optics and Photonics, University of Central Florida, FL, USA
2 School of Electrical Engineering and Computer Science, University of Central Florida, FL, USA
5.1 Introduction
Transmissive liquid crystal displays (LCDs) have been widely used in laptop computers, desktop monitors, high-definition televisions (HDTVs), and so on. The most commonly used transmissive 90° twisted-nematic (TN) LCD exhibits a high contrast ratio due to the self-phase compensation effect of the orthogonal boundary layers in the voltage-on state. However, its viewing angle is relatively narrow since the liquid crystal (LC) directors are switched out of the plane and the oblique incident light experiences different phase retardations at different angles. For TV applications, a wide viewing angle is highly desirable. Currently, in-plane switching (IPS) mode [1], fringe-field switching (FFS) mode [2], multi-domain vertical alignment (MVA) [3], and patterned vertical alignment (PVA) mode [4] are the mainstream display modes for wide-view LCDs. A major drawback of the transmissive LCD is that its backlight source needs to be kept on all the time as long as the display is in use; therefore, the power consumption is relatively high. Moreover, the image of a transmissive LCD is easily washed out by strong ambient light such as direct sunlight. The reflective LCD, on the other hand, has no built-in backlight source. Instead, it utilizes ambient light for displaying images. The detailed introduction of available operating modes for reflective
LCDs can be found in Wu and Yang, 2001 [5]. In comparison to transmissive LCDs, reflective LCDs have the advantages of lower power consumption, lighter weight, and better outdoor readability. However, a reflective LCD relies on the ambient light and thus is inapplicable under low or dark ambient conditions.

In an attempt to overcome the above drawbacks and take advantage of both reflective and transmissive LCDs, transflective LCDs have been developed to use ambient light when available and backlight only when necessary. A transflective LCD can display images in both transmissive mode (T-mode) and reflective mode (R-mode), simultaneously or independently. Since the LC material itself does not emit light, the transflective LCD must rely on either ambient light or backlight to display images. Under bright ambient circumstances, the backlight can be turned off to save power, and the transflective LCD then operates in the R-mode only. Under dark ambient conditions, the backlight is turned on for illumination and the transflective LCD works in the T-mode. In low to medium ambient surroundings, the backlight is still necessary; in this case, the transflective LCD runs in both T- and R-modes simultaneously. Therefore, the transflective LCD can accommodate a large dynamic range. Currently, the applications of transflective LCDs are mainly targeted at mobile display devices, such as cell phones, digital cameras, camcorders, personal digital assistants (PDAs), pocket personal computers (PCs), and global positioning systems (GPS).

In this chapter, we first explain the transflector classifications, which will help us to understand the transflective mechanism. Then, based on the development history of transflective LCDs, we will address their underlying operating principles and analyze their pros and cons. Finally, we will discuss those factors that affect the image quality of transflective LCDs. The main portion of this chapter is from Zhu et al., 2006 [6], and is reproduced here with the authorization of the IEEE.
5.2 Classification of Transflectors
Since a transflective LCD should possess dual functions (transmission and reflection) simultaneously, a transflector is usually required between the LC layer and the backlight source. The main role of the transflector is to partially reflect incident ambient light back and to partially transmit the backlight to the viewer. From the device structure viewpoint, transflectors can be classified into four major categories: (1) openings-on-metal transflector; (2) half-mirror metal transflector; (3) multilayer dielectric film transflector; (4) orthogonal polarization transflector.
5.2.1 Openings-on-Metal Transflector
The concept of an openings-on-metal transflector was first proposed by Ketchpel at the Rockwell International Corporation [7]. Figure 5.1(a), below, shows the schematic structure. Typical manufacturing steps include first forming wavy bumps on the substrate, then coating a metal layer, such as silver or aluminum, on the bumps and, finally, etching the metal layer according to the predetermined patterns. After etching, those etched areas become transparent so that the incident light can be transmitted, while the unaffected areas are still covered by the metal layer and serve as reflectors. The wavy bumps function as diffusive reflectors to steer the incident ambient light away from surface specular reflection. Thus, the image contrast ratio is enhanced and the viewing angle is widened in the R-mode. Due to the simple
Figure 5.1 Schematic illustration of the first three major types of transflectors: (a) openings-on-metal transflector; (b) half-mirror metal transflector; and (c) multilayer dielectric film transflector. © 2005 IEEE. Reproduced from [6] by permission of IEEE.
manufacturing process, low cost, and stable performance, this type of transflector is by far the most popular one implemented in commercial transflective LCD products.
5.2.2 Half-Mirror Metal Transflector
The half-mirror has been widely used in optical systems as a beam splitter. It was implemented in a transflective LCD by Borden [8] and Bigelow [9], with the basic structure shown in Figure 5.1(b). When depositing a very thin metallic film on a transparent substrate, one can control the reflectance and transmittance by adjusting the metal film thickness. The film thickness can vary, depending on the metallic material employed; typically, it is around a few hundred angstroms. Since the transmittance/reflectance ratio of such a half-mirror transflector is very sensitive to the metal film thickness, the manufacturing tolerance is very narrow and volume production is difficult. Consequently, this kind of transflector is not widely used in commercial products.
5.2.3 Multilayer Dielectric Film Transflector
The multilayer dielectric film is a well-developed technique in thin-film optics, but it was only recently incorporated into transflective LCDs [10]. As illustrated in Figure 5.1(c), two inorganic dielectric materials with refractive indices n1 and n2 are periodically deposited as thin films on the substrate. By controlling the refractive index and thickness of each thin layer as well as the total number of layers, one can obtain the desired reflectivity and transmissivity. Similar to the half-mirror transflector, the transmittance/reflectance ratio of the multilayer dielectric film is sensitive to each layer's thickness. In addition, successively depositing several layers increases the manufacturing cost. Therefore, the multilayer dielectric film transflector is rarely used in current commercial transflective LCDs.
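How the reflectance/transmittance split of such a dielectric stack follows from the layer parameters can be sketched with the standard characteristic-matrix method for normal incidence. The indices below (2.30 and 1.46 on a glass substrate) are illustrative assumptions, not values taken from the text.

```python
import numpy as np

def layer_matrix(n, d_nm, wavelength_nm):
    """Characteristic matrix of one homogeneous layer at normal incidence."""
    delta = 2 * np.pi * n * d_nm / wavelength_nm
    return np.array([[np.cos(delta), 1j * np.sin(delta) / n],
                     [1j * n * np.sin(delta), np.cos(delta)]])

def stack_reflectance(layers, n_in=1.0, n_sub=1.52, wavelength_nm=550.0):
    """Normal-incidence reflectance of a lossless stack; layers = [(n, d_nm), ...]."""
    m = np.eye(2, dtype=complex)
    for n, d in layers:
        m = m @ layer_matrix(n, d, wavelength_nm)
    b, c = m @ np.array([1.0, n_sub])
    r = (n_in * b - c) / (n_in * b + c)
    return float(abs(r) ** 2)

wl = 550.0
n_hi, n_lo = 2.30, 1.46                                     # assumed layer indices
pair = [(n_hi, wl / (4 * n_hi)), (n_lo, wl / (4 * n_lo))]   # one quarter-wave pair

for n_pairs in (1, 2, 3):
    r = stack_reflectance(pair * n_pairs, wavelength_nm=wl)
    print(f"{n_pairs} pair(s): R = {r:.2f}, T = {1 - r:.2f} (lossless)")
# Adding layer pairs raises R and lowers T, so the transmission/reflection
# split of the transflector can be tuned, as described above.
```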
5.2.4 Orthogonal Polarization Transflector
The orthogonal polarization transflector has the special characteristic that the light it reflects and the light it transmits have mutually orthogonal polarization states. For instance, if a transflector reflects horizontal linearly (or right-handed circularly) polarized light, then it transmits the complementary vertical linearly (or left-handed circularly) polarized light. Figures 5.2(a)–(c), below, show three such examples: the cholesteric reflector [11], the birefringent interference polarizer [12], and the wire grid polarizer [13].
5.2.4.1 Cholesteric Reflectors
The cholesteric LC layer forms a planar texture with its helix perpendicular to the cell substrates when the boundary conditions on both substrates are tangential. If the incident wavelength is comparable to the product of the average refractive index and the cholesteric pitch, then the cholesteric LC layer exhibits a strong Bragg reflection [14]. Figure 5.2(a) shows the schematic configuration of a right-handed cholesteric reflector, where the cholesteric LC polymer layer is formed on a substrate. For unpolarized incident light, the right-handed circularly polarized component, which has the same sense as the cholesteric helix, is reflected, while the left-handed circularly polarized component is transmitted through such a right-handed cholesteric reflector.
5.2.4.2 Birefringent Interference Polarizer
The birefringent interference polarizer transflector consists of a multilayer birefringent stack with alternating low and high refractive indices, as shown in Figure 5.2(b). One way to produce such a transflector is to stretch a multilayer stack in one or two dimensions. The multilayer stack consists of birefringent materials with low/high index pairs [15]. The resultant transflective polarizer exhibits a high reflectance for light polarized along the stretching direction and, meanwhile, a high transmittance for light polarized perpendicular to the stretching direction. By controlling the three refractive indices of each layer, nx, ny, and nz, the desired polarizer behavior can be obtained. For practical applications, an ideal reflective polarizer should have 100% reflectance along one axis (the so-called extinction axis) and 0% reflectance along the other axis (the so-called transmission axis), at all incident angles.
5.2.4.3 Wire Grid Polarizer
Wire grid polarizers (WGPs) are widely used in the infrared [16]. They are constructed by depositing a series of parallel metal strips onto a dielectric substrate, as shown in Figure 5.2(c). To operate in the
Figure 5.2 Schematic illustration of the three examples of orthogonal polarization transflectors: (a) cholesteric reflector; (b) birefringent interference polarizer; and (c) wire grid polarizer (WGP). © 2005 IEEE. Reproduced from [6] by permission of IEEE.
visible spectral region, the pitch of the metal strips P should be about 200 nm, which is approximately half the wavelength of blue light [17]. In general, a WGP reflects or transmits light with its electric field vector respectively parallel or perpendicular to the wires of the grid. In practice, the wire thickness t, wire width W, and grid pitch P play important roles in determining the extinction ratio and acceptance angle of the polarizer.
Unlike the first three transflectors discussed above, the entire area of the orthogonal polarization transflector can be utilized for reflection and transmission simultaneously. Nevertheless, the transmitted light and reflected light possess mutually orthogonal polarization states so that the reflective and transmissive images exhibit a reversed contrast. Although an inversion driving scheme may correct such a reversed contrast problem [13], the displayed images are still unreadable under moderate brightness surroundings when both ambient light and backlight are in use. Thus, orthogonal polarization transflectors have not yet been widely adopted in current high-end commercial transflective LCD products.
5.3 Classification of Transflective LCDs
LCDs rely on ambient light or backlight to display images. Based on the light modulation mechanisms, transflective LCDs can be classified into four categories: (1) absorption type; (2) scattering type; (3) reflection type; and (4) phase-retardation type. The first three categories do not modulate the phase of the incident light; rather, they absorb, scatter, or reflect light. In some cases, it is possible to use one polarizer or none at all to achieve high brightness. As for the phase-retardation type, two polarizers are usually indispensable in order to make both transmissive and reflective modes work simultaneously.
5.3.1 Absorption Type Transflective LCDs
Guest–host LCDs utilize the absorption mechanism to modulate light. In a guest–host LCD, a few percent (~2%) of dichroic dye is doped into an LC host. As the LC directors are reoriented by the electric field, the dye molecules follow. Because of the dye's dichroism, the absorption of the LC cell is modulated. This mechanism was first introduced in the nematic phase by Heilmeier and Zanoni [18] and later on in the cholesteric phase by White and Taylor [19]. In the twisted or helical LC structure, the guest–host display does not require a polarizer. A major technical challenge of guest–host displays is the tradeoff between reflectance/transmittance and contrast ratio. A typical contrast ratio for the guest–host LCD is 5:1 with 40–50% reflectance. The low contrast ratio is limited by the dichroic ratio (DR) of the dye.
5.3.1.1 Nematic Phase
Bigelow [9] devised a transflective LCD structure using a half-mirror metallic transflector, two quarter-wave films, and nematic phase LC/dye mixtures, as illustrated in Figure 5.3(a), below. In the figure, the upper half and lower half show the voltage-off and voltage-on states, respectively. When no voltage is applied, the LC/dye mixtures are homogeneously aligned within the cell. In the R-mode, the unpolarized incident ambient light becomes linearly polarized after passing through the LC/dye layer. Then, its polarization state becomes right-handed circularly polarized after the inner quarter-wave film. Upon reflection from the transflector, its polarization state becomes left-handed circularly polarized due to a π phase change. When the left-handed circularly polarized light passes through the inner quarter-wave film again, it becomes linearly polarized, with a polarization direction parallel to the LC alignment direction. As a result, the light is totally absorbed by the dye dopant and a dark state is achieved. In the T-mode, the unpolarized light from the backlight source becomes linearly polarized after the polarizer. Then it changes to left-handed circularly polarized light after emerging from the outer quarter-wave film. After penetrating the transflector, it still keeps the same left-handed circular polarization state. Thereafter, its travel path is identical to that of the R-mode. Finally, the light is totally absorbed by the dye mixture, resulting in a dark state.
Figure 5.3 Schematic configurations and operating principles of two absorption type transflective LCDs with (a) nematic phase LC (host) and dye (guest) mixtures, and (b) cholesteric phase LC (host) and dye (guest) mixtures. © 2005 IEEE. Reproduced from [6] by permission of IEEE.
In the voltage-on state, the LC directors and dye molecules are reoriented nearly perpendicular to the substrates, as illustrated in the lower half of Figure 5.3(a). Therefore, the light passing through it experiences little absorption and no change in polarization state. In the R-mode, the unpolarized ambient light passes through the LC/dye layer and the inner quarter-wave film successively without a change in polarization state. Upon reflection from the transflector, it is still unpolarized light and goes all the way out of the transflective LCD. Consequently, a bright state with very little attenuation is
achieved. In the T-mode, the unpolarized backlight becomes linearly polarized after passing through the linear polarizer, the outer quarter-wave film, the transflector, and the inner quarter-wave film, successively. Since the vertically aligned LC/dye layer causes very little absorption of it, the linearly polarized light finally emerges from the transflective LCD, resulting in a bright state. In the abovementioned transflective guest–host LCD, the inner quarter-wave film is put between the transflector and the guest–host layer. There are two optional positions for the transflector. If the transflector is located inside the LC cell, then the quarter-wave film should also be sandwiched inside the cell. Nevertheless, it is difficult to fabricate such a quarter-wave film and assemble it inside the cell. On the other hand, if the transflector is located outside the cell, then both the quarter-wave film and transflector can be laminated on the outer surface of the LC cell. In this case, however, a serious parallax problem occurs, as will be explained in Section 5.3.4.1.
5.3.1.2 Cholesteric Phase
To eliminate the quarter-wave film between the transflector and the LC layer, Cole proposed a transflective LCD design using a half-mirror metallic transflector and a cholesteric LC/dye mixture [20], as illustrated in Figure 5.3(b). As we can see, only one quarter-wave film is employed, which is located between the transflector and the linear polarizer. Consequently, the quarter-wave film can be put outside the cell, while the transflector can be sandwiched inside the cell. As a result, no parallax occurs. The upper and lower portions of this figure demonstrate the voltage-off and voltage-on states, respectively. In the voltage-off state, the LC/dye layer forms a right-handed planar texture with its helix perpendicular to the substrates. In the R-mode, the unpolarized light is largely attenuated by the LC/dye layer and only a weak light passes through it. Upon reflection from the transflector, it is further absorbed by the guest dye molecules, resulting in a dark state. In the T-mode, the unpolarized backlight first becomes linearly polarized and then right-handed circularly polarized after it passes through the polarizer and, in turn, the quarter-wave film. The circularly polarized light is further attenuated after it penetrates the transflector. Such weak right-handed circularly polarized light is absorbed by the same-twist-sense cholesteric LC/dye mixture, resulting in a dark state.

In the voltage-on state, both the LC directors and the dye molecules are reoriented perpendicular to the substrates. As a result, little absorption of the incident light occurs. In the R-mode, the unpolarized light is unaffected throughout the whole path, resulting in a very high reflectance. In the T-mode, the unpolarized backlight becomes right-handed circularly polarized after passing through the polarizer, the quarter-wave film, and the transflector. It finally penetrates the LC/dye layer with little attenuation. Again, a bright state is obtained.

In the abovementioned two absorption type transflective LCDs, only one polarizer is employed instead of two. Therefore, the overall image in both T- and R-modes is relatively bright. However, due to the limited dichroic ratio of dye materials (DR ≈ 15:1), a typical contrast ratio of the guest–host LCD is around 5:1 [21], which is inadequate for high-end full-color LCD applications. Thus, the absorption type transflective LCDs occupy only a small part of the hand-held LCD market.
5.3.2 Scattering Type Transflective LCDs
Polymer-dispersed LC (PDLC) [22], polymer-stabilized cholesteric texture (PSCT) [23], and LC gels [24] all exhibit optical scattering characteristics and have wide applications in displays and optical devices. The LC gel-based reflective LCD, which was proposed by Ren et al., can also be extended to transflective LCDs [25]. Figure 5.4, below, shows the schematic structure and operating principles of the LC gel-based transflective LCD. The device comprises an LC gel cell, two quarter-wave films, a transflector, a polarizer, and a backlight. The cell was filled with a homogeneously aligned nematic LC
and monomer mixture. After UV-induced polymerization, polymer networks are formed and the LC materials are confined within the polymer networks. When no voltage is applied, the LC directors exhibit a homogeneous alignment. Consequently, the LC gels are highly transparent for the light traveling through, as illustrated in the upper portion of Figure 5.4.
Figure 5.4 Schematic configuration and operating principles of the scattering type transflective LCD with homogeneously aligned LC gel. © 2005 IEEE. Reproduced from [6] by permission of IEEE.
In the R-mode, the unpolarized ambient light remains unpolarized all the way from entering to exiting the LC cell. As a result, a fairly bright state is obtained. In the T-mode, the unpolarized backlight turns into a linearly polarized p-wave after the polarizer. After passing the first quarter-wave film, penetrating the transflector, and passing the second quarter-wave film whose optical axis is orthogonal to that of the first one, the p-wave remains linearly polarized. Since the LC gel is highly transparent in the voltage-off state, the linearly polarized p-wave finally comes out of the display panel, resulting in a bright output. On the other hand, when the external applied voltage is high enough, the LC directors deviate from the original homogeneous alignment by the exerted torque of the electric field. Therefore, micro-domains are formed along the polymer chains such that the extraordinary ray, i.e. the linear polarization along the cell rubbing direction, is scattered, provided that the domain size is comparable to the incident light wavelength. In the meantime, the ordinary ray passes through the LC gels without being scattered. In the R-mode, the unpolarized ambient light becomes a linearly polarized s-wave after passing the activated LC cell since the p-wave is scattered. After a round trip of passing the quarter-wave film, being reflected by the transflector, and passing the quarter-wave film again, the s-wave is converted into a p-wave. Due to the scattering of LC gels, this p-wave is scattered again. Consequently, a scattering translucent state is achieved. In the T-mode, the unpolarized backlight turns into a linearly polarized p-wave after passing the polarizer, the second quarter-wave film, the transflector, and the first quarter-wave film, successively. Thereafter, similar to the case of the R-mode, the p-wave is scattered by the activated LC gels, resulting in a scattering translucent output.
This scattering type transflective LCD needs only one polarizer; therefore, it can achieve a very bright image. However, there are three major drawbacks in the above LC gel-based transflective LCD. First, the light scattering mechanism usually leads to a translucent state rather than a dark state. Therefore, the image contrast ratio is low and highly dependent on the viewing distance to the display panel. Although doping a small concentration of black dye into the LC gels can help to achieve a better dark state, the contrast ratio is still quite limited due to the limited dichroic ratio of the dye dopant. Second, the insertion of the first quarter-wave film causes a parallax problem similar to that of the nematic-phase absorption type transflective LCD. Third, the required driving voltage is usually over 20 volts due to the polymer network constraint, which is beyond the capability of current thin-film transistors developed for LCD applications. These drawbacks hinder the scattering type transflective LCD from commercialization.
5.3.3 Reflection Type Transflective LCDs
As mentioned in Section 5.2.4.1, the cholesteric LC layer exhibits a strong Bragg reflection at a central reflection wavelength λ0 = nP0, where n and P0 are the average refractive index and the cholesteric helix pitch, respectively. The reflection bandwidth Δλ = Δn·P0 is proportional to the birefringence Δn of the cholesteric LC employed. Apparently, to cover the whole visible spectral range, a high-birefringence (Δn > 0.5) cholesteric LC material is needed, assuming the pitch length is uniform. Because the transmitted and reflected circular polarization states are orthogonal to each other, the cholesteric LC layer must rely on some additional elements to display a normal image without a reversed contrast ratio. By adopting an image-enhanced reflector (IER) on the top substrate, as well as a patterned ITO layer and a patterned absorption layer on the bottom substrate, the transflective cholesteric LCD can display an image without a reversed contrast ratio [26, 27], as shown in Figures 5.5(a) and (b). The opening areas of the patterned absorption layer on the bottom substrate match the IER on the top substrate. In addition, the opening areas of the patterned ITO layer lie directly above the opening areas of the patterned absorption layer and below the IER. Therefore, the cholesteric LC directors below the IER are not reoriented by the external electric field.

In operation, when no voltage is applied, the cholesteric LC layer exhibits a right-handed planar helix texture throughout the cell, as shown in Figure 5.5(a). In the R-mode, when unpolarized ambient light enters the cholesteric LC cell, the left-handed circularly polarized light passes through the right-handed cholesteric LC layer and is absorbed by the patterned absorption layer. At the same time, the right-handed circularly polarized light is reflected by the same-sense cholesteric LC layer and a bright state results. In the T-mode, when the unpolarized backlight enters the cholesteric LC layer, similarly, the right-handed circularly polarized light is reflected and it is either absorbed by the patterned absorption layer or recycled by the backlight system. In the meantime, the left-handed circularly polarized light passes through the cholesteric LC layer and impinges onto the IER. Due to a π phase change upon reflection, it is converted to right-handed circularly polarized light, which is further reflected by the cholesteric LC layer to the viewer. Consequently, a bright state occurs.

In the voltage-on state, the planar helix texture above the bottom-patterned ITO layer becomes a focal conic texture, while the LC directors between the IER and the opening area of the bottom-patterned ITO layer are still unaffected, as shown in Figure 5.5(b). The focal conic texture, if the domain size is well controlled, exhibits forward scattering for the incident light [5]. In the R-mode, the unpolarized incident ambient light is forward scattered by the focal conic texture. It is then absorbed by the patterned absorption layer, resulting in a dark state. In the T-mode, the unpolarized light still experiences a right-handed planar helix texture before it reaches the IER on the top substrate. Thus, the right-handed circularly polarized light is reflected back and is either absorbed by the patterned absorption layer or recycled by the backlight system. At the same time, the left-handed circularly polarized light passes through the planar texture and impinges onto the IER.
Upon reflection, it turns into right-handed circularly polarized light. Then it is forward scattered by the focal conic texture and finally absorbed by the patterned absorption layer. As a result, the dark state is obtained.
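The Bragg-reflection relations quoted at the start of this section, λ0 = nP0 and Δλ = Δn·P0, fix the pitch and birefringence needed for a broadband cholesteric reflector. The sketch below assumes an average index of 1.6 and a 450–650 nm target band purely for illustration.

```python
n_avg = 1.6               # assumed average refractive index of the cholesteric LC
band_nm = (450.0, 650.0)  # assumed target reflection band covering the visible

lambda_0 = 0.5 * (band_nm[0] + band_nm[1])           # central reflection wavelength
pitch = lambda_0 / n_avg                             # from lambda_0 = n_avg * P0
delta_n_needed = (band_nm[1] - band_nm[0]) / pitch   # from delta_lambda = delta_n * P0

print(f"pitch P0         ~ {pitch:.0f} nm")        # ~344 nm
print(f"required delta n ~ {delta_n_needed:.2f}")  # ~0.58, i.e. > 0.5 as stated
```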
Figure 5.5 Schematic configuration of the reflection type transflective cholesteric LCD and its operating principles in (a) the off-state and (b) the on-state. © 2005 IEEE. Reproduced from [6] by permission of IEEE.
In the abovementioned reflection type transflective cholesteric LCD, no polarizer is employed; therefore, its light efficiency is high. However, producing the IER array on the top substrate increases the manufacturing complexity. In addition, the IER should be well aligned with the patterned absorption layer; otherwise, light leakage will occur. More importantly, the forward scattering of the focal conic texture is incomplete: some backward-scattered light causes a translucent dark state, which deteriorates the image contrast ratio. Therefore, the reflection type transflective cholesteric LCD is not yet popular for high-end transflective LCD applications.
5.3.4 Phase-Retardation Type Transflective LCDs
The operating principle of the phase-retardation type transflective LCD is based on voltage-induced modulation of the LC phase retardation. Since a transflective LCD consists of both T- and R-modes, two polarizers are usually required. Compared to the absorption, scattering, and reflection types, the phase-retardation type transflective LCDs have the advantages of higher contrast ratio, lower driving voltage, and better compatibility with current volume manufacturing techniques. Therefore, the phase-retardation type transflective LCDs dominate current commercial products, e.g. cellular phones and digital cameras. In this section, we will describe the major transflective LCD approaches based on the phase-retardation mechanism.

To gain a better understanding of the underlying operating principle and electro-optical (EO) performance of each transflective LCD approach, we carried out numerical simulations based on the extended Jones matrix method [28]. Hereafter in this chapter, unless otherwise specified, we assume that: (1) the LC material is MLC-6694-000 (from E. Merck); (2) the polarizer is a 190 μm thick dichroic linear polarizer with complex refractive indices ne = 1.5 + i0.0022 and no = 1.5 + i0.000032; (3) the transflector does not depolarize the polarization state of the impinging light upon reflection and transmission; (4) the transflector does not cause any light loss upon reflection and transmission; (5) the ambient light and backlight enter and exit the panel in the normal direction; (6) the wavelength of the light is 550 nm.

It should be pointed out that the brightness of a transflective LCD depends on three major factors: (1) the intensity of the backlight source and ambient light; (2) the area ratio of the transmission and reflection regions; (3) the reflectance of the R-mode and the transmittance of the T-mode. For the first factor, the backlight intensity mainly depends on the application; some transflective LCD devices even come with a control to adjust the backlight intensity. On the other hand, the ambient light intensity is determined by the surrounding light conditions. The second factor mainly depends on the application of the transflective LCD. For indoor applications, the area of the transmission region is usually larger than that of the reflection region. On the contrary, if the transflective display is intended for outdoor applications, the reflection region should be comparable to or even larger than the transmission region. The third factor, however, mainly relies on the operating mode employed. Throughout this chapter, we focus our discussion on optimizing the reflectance of the R-mode and the transmittance of the T-mode.
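The polarizer assumed in these simulations can be characterized quickly from its complex refractive indices: the imaginary part k attenuates the intensity by exp(−4πkd/λ) over the thickness d. The back-of-the-envelope sketch below uses the 190 μm thickness and k values listed above; it is not the extended Jones matrix calculation itself.

```python
import math

def axis_transmittance(k, thickness_um, wavelength_nm=550.0):
    """Single-pass intensity transmittance along one polarizer axis."""
    alpha = 4 * math.pi * k / wavelength_nm   # absorption coefficient, 1/nm
    return math.exp(-alpha * thickness_um * 1e3)

t_abs = axis_transmittance(0.0022, 190)      # absorbing (extraordinary) axis
t_pass = axis_transmittance(0.000032, 190)   # transmitting (ordinary) axis

print(f"T along absorbing axis    ~ {t_abs:.1e}")                # ~7e-5
print(f"T along transmitting axis ~ {t_pass:.2f}")               # ~0.87
print(f"T for unpolarized input   ~ {(t_abs + t_pass) / 2:.2f}") # ~0.43
```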
5.3.4.1 Transflective TN and STN (Super-Twisted Nematic) LCDs
The 90° TN cell can be used not only in transmissive and reflective LCDs [29], but also in transflective LCDs [30]. The device configuration of a transflective TN LCD is shown in Figure 5.6(a). A 90° TN LC cell, which satisfies the Gooch–Tarry minima conditions [31], is sandwiched between two crossed
Figure 5.6 Transflective TN LCD: (a) schematic device configuration (from top to bottom: polarizer 1 at 0°, TN LC cell, polarizer 2 at 90°, transflector, backlight); and (b) voltage-dependent transmittance and reflectance curves. © 2005 IEEE. Reproduced from [6] by permission of IEEE.
polarizers. In addition, a transflector is laminated on the outer side of the bottom polarizer and a backlight is used for low ambient light conditions.

In the null voltage state, the LC directors exhibit a uniform twist throughout the cell from the lower substrate to the upper substrate. In the T-mode, the incoming linearly polarized light, which is generated by the bottom polarizer, closely follows the twist profile of the LC directors and continuously rotates by 90° with respect to its original polarization state. This is known as the polarization rotation effect of the TN cell. Thus the linearly polarized light can pass through the top polarizer, resulting in a bright output known as a normally white mode. In the R-mode, the incoming linearly polarized light, which is generated by the top polarizer, rotates by 90° as it passes through the TN LC layer. It then penetrates the bottom polarizer and reaches the transflector. A portion of the linearly polarized light is reflected back by the transflector and passes the bottom polarizer again. This linearly polarized light then follows the twisted LC directors and its polarization axis is rotated by 90°, i.e. parallel to the transmission direction of the top polarizer. Accordingly, a bright state is achieved.

In the voltage-on state, the bulk LC directors are reoriented substantially perpendicular to the substrates, leaving two orthogonal boundary layers. The perpendicularly aligned bulk LC directors do not modulate the polarization state of the incoming light. At the same time, those two orthogonal boundary layers compensate each other. Consequently, the incoming linearly polarized light still keeps the same polarization state after it passes through the activated TN LC layer. In the T-mode, the linearly polarized light generated by the bottom polarizer propagates all the way to the top polarizer without changing its polarization state. Therefore, it is blocked by the top polarizer, resulting in a dark state. In the R-mode, the linearly polarized light produced by the top polarizer passes through the activated LC layer without changing its polarization state. Consequently, it is absorbed by the bottom polarizer and no light returns to the viewer's side. This is the dark state of the display.

Figure 5.6(b) plots the calculated voltage-dependent transmittance and reflectance curves of a typical transflective TN LCD. Here a twist angle of 90° and the first Gooch–Tarry minimum condition dΔn = 476 nm are employed, where d is the cell gap and Δn is the LC birefringence. We can see that the grayscales of both T- and R-modes overlap well with each other. This is because the reflected beam in the R-mode experiences the bottom polarizer, the LC layer, and the top polarizer in turn, just as the transmitted beam does in the T-mode. Compared to the conventional transmissive TN LCD, the above transflective TN LCD only requires one additional transflector outside the bottom polarizer. Naturally, this transflective LCD device configuration can also be extended to an STN-based transflective LCD [32]. Different from the so-called polarization rotation effect in the TN LCD, the STN LCD utilizes the birefringence effect of the STN LC layer [33]. Therefore, a larger twist angle (180°–270°), a different LC cell gap, and a different polarizer/analyzer configuration are required. The abovementioned TN and STN type transflective LCDs have the advantages of a simple device structure and matched grayscales; however, their major drawbacks are parallax and low reflectance.
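The Gooch–Tarry minima quoted above follow from the normalized off-state transmittance of a 90° TN cell between crossed polarizers with the input polarization along the front director, T = 1 − (π/2)²·sin²X/X², where X = sqrt((π/2)² + (Γ/2)²) and Γ = 2πdΔn/λ; T = 1 whenever X = mπ, i.e. dΔn = (λ/2)·sqrt(4m² − 1). The sketch below evaluates the first two minima at 550 nm.

```python
import math

WAVELENGTH_NM = 550.0

def t_off_90tn(d_delta_n_nm, wl=WAVELENGTH_NM):
    """Normalized off-state transmittance of a 90-deg TN cell between crossed
    polarizers, input polarization along the front director (Gooch-Tarry)."""
    gamma = 2 * math.pi * d_delta_n_nm / wl
    x = math.sqrt((math.pi / 2) ** 2 + (gamma / 2) ** 2)
    return 1 - (math.pi / 2) ** 2 * math.sin(x) ** 2 / x ** 2

for m in (1, 2):
    d_delta_n = 0.5 * WAVELENGTH_NM * math.sqrt(4 * m * m - 1)
    print(f"m = {m}: d*delta_n = {d_delta_n:.0f} nm, T = {t_off_90tn(d_delta_n):.4f}")
# m = 1 gives d*delta_n = 476 nm with T = 1, the condition used in Figure 5.6(b).
```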
Parallax is a deteriorated shadow image phenomenon seen in the oblique view of a reflective LCD [34]. Similarly, it also occurs in some transflective LCDs, such as the above described transflective TN and STN LCDs. Figure 5.7 demonstrates the cause of parallax in the R-mode of a transflective TN LCD when the polarizer and transflector are laminated at the outer side of the bottom substrate. The switched-on pixel does not change the polarization state of the incident light because the LC directors are reoriented perpendicular to the substrate. From the observer side, when a pixel is switched on, it appears dark, as designated by a′b′ in the figure. The dark image a′b′, generated by the top polarizer, actually comes from the incident beam ab. Meanwhile, another incident beam cd passes through the switched-on pixel and likewise does not change its linear polarization state. Therefore, it is absorbed by the bottom polarizer, resulting in no light reflection. Accordingly, a shadow image c′d′ occurs from the observer's viewpoint. Different from the dark image a′b′, which is generated by the top polarizer, the shadow image c′d′ is actually caused by the bottom polarizer. This is why the shadow image c′d′ appears to lie beneath the dark image a′b′. Because the bottom polarizer and transflector are laminated outside the bottom substrate, the incident ambient light beams ab and cd must traverse the
Figure 5.7 Schematic view of the cause of the parallax phenomenon in the R-mode of a transflective LCD with the polarizer and transflector laminated outside the bottom substrate. © 2005 IEEE. Reproduced from [6] by permission of IEEE.
bottom substrate before they are reflected back. Due to the thick bottom substrate, the reflected image beams a′b′ and c′d′ are shifted away from the pixel area in which the incoming beams ab and cd propagate, resulting in the shadow image phenomenon called parallax. Such a parallax problem becomes more serious with decreasing pixel size as well as increasing bottom substrate thickness. Therefore, transflective TN and STN LCDs with the abovementioned structures are not suitable for high-resolution full-color transflective LCD devices.

To overcome the parallax problem in transflective TN and STN LCDs, the bottom polarizer and transflector must be located inside the LC cell. Recently, a burgeoning in-cell polarizer technology, based on thin crystal film (TCF) growth from an aqueous lyotropic LC of supramolecules, has attracted some transflective LCD manufacturers' interest [35]. By depositing both the transflector and the polarizer inside the cell, the abovementioned parallax problem can be eliminated.

Nevertheless, transflective TN and STN LCDs still have another shortcoming, which is low reflectance in the R-mode. As shown in Figure 5.6(b), although the grayscales of both modes overlap each other, one can clearly see that the reflectance in the R-mode is much lower than the transmittance in the T-mode. This is because the light accumulatively passes through polarizers four times in the R-mode but only twice in the T-mode. Due to the absorption of the polarizers, the light in the R-mode suffers much more loss than that in the T-mode. Accordingly, the reflectance of the R-mode is reduced substantially.
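The magnitude of the parallax shift described above can be estimated from simple refraction geometry: a ray viewed at angle θ in air refracts into the substrate and is displaced laterally by roughly 2·t·tan(θ_glass) after the round trip to the external transflector. The thicknesses, viewing angle, and pixel pitch below are illustrative assumptions only.

```python
import math

def parallax_shift_mm(t_mm, view_angle_deg, n_glass=1.5):
    """Approximate lateral offset between the incoming and reflected rays after a
    round trip through a plate of thickness t_mm in front of the transflector."""
    theta_glass = math.asin(math.sin(math.radians(view_angle_deg)) / n_glass)
    return 2 * t_mm * math.tan(theta_glass)

t_bottom_mm = 0.5        # assumed bottom glass + polarizer thickness
pixel_pitch_mm = 0.15    # assumed pixel pitch of a small mobile panel

shift = parallax_shift_mm(t_bottom_mm, view_angle_deg=30)
print(f"shift ~ {shift:.2f} mm, i.e. about {shift / pixel_pitch_mm:.1f} pixels")
# A shift of a few pixels produces the shadow image of Figure 5.7 and explains
# why parallax worsens with finer pixels or a thicker bottom substrate.
```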
5.3.4.2 Transflective MTN (Mixed-Mode Twisted Nematic) LCD
To overcome the parallax and low reflectance problems of transflective TN and STN LCDs, the bottom polarizer for the R-mode should be moved to the outer surface of the bottom substrate. Thus, the
transflector can be implemented on the inner side of the LC cell, acting as an internal transflector. Under such a device configuration, the R-mode operates as a single-polarizer reflective LCD. More importantly, both the ambient light and the backlight pass through the polarizer twice; therefore, both T- and R-modes experience the same light absorption from the polarizer. Nevertheless, the conventional TN LC cell does not work well in the single-polarizer reflective LCD [5]. This is because, after the light travels a round trip in the LC layer, the light polarization state in the voltage-on state is identical to that in the voltage-off state. By reducing the dΔn value of the TN LC layer to around half of that required in a conventional transmissive TN LCD, the MTN mode overcomes this problem [6, 36]. Unlike the TN LCD, the twist angle of the MTN mode can vary from 0° to 90°, and its operating mechanism is based on the proper mixing of the polarization rotation and birefringence effects.

Molsen and Tillin of Sharp Corp. incorporated the MTN mode into their transflective LCD design [37], as shown in Figure 5.8(a), below. Compared to the transflective TN LCD shown in Figure 5.6(a), this transflective MTN LCD exhibits two different features. First, the transflector is located inside the LC cell, thus no parallax problem occurs. Second, a half-wave film and a quarter-wave film are inserted on each side of the MTN LC cell. These two films together with the adjacent linear polarizer function as a broadband circular polarizer over the whole visible spectral range [38]. Thereby, a good dark state can be guaranteed over the whole visible range for the R-mode.

In the voltage-off state, the MTN LC layer is equivalent to a quarter-wave film. In the R-mode, the incident unpolarized ambient light is converted into linearly polarized light after passing through the top polarizer. After penetrating the top two films and the MTN LC layer, the linearly polarized light still keeps its linear polarization except that it has been rotated by 90° from the original polarization direction. Upon reflection from the transflector, it experiences the MTN LC layer and the top two films once again. Hence its polarization state is restored to the original one, resulting in a bright output from the top polarizer. In the T-mode, the unpolarized backlight turns into linearly polarized light after passing through the bottom polarizer. After it passes through the bottom two films, penetrates the transflector, and continues to traverse the MTN LC layer and the top two films, it becomes circularly polarized light. Finally, a partial transmittance is achieved from the top polarizer.

In the voltage-on state, the bulk LC directors are almost fully tilted up and the two unaffected boundary layers compensate each other in phase. Therefore, the LC layer does not affect the polarization state of the incident light. In the R-mode, the linearly polarized light generated by the top polarizer turns into the orthogonal linearly polarized light after a round trip through the top two films and the activated LC layer. Accordingly, this orthogonal linearly polarized light is blocked by the top polarizer, leading to a dark state. In the T-mode, the linearly polarized light, generated by the bottom polarizer, passes through the bottom two films, penetrates the transflector, then continues to pass through the activated LC layer and the top two films.
Before it reaches the top polarizer, its linear polarization state has been rotated by 90°, i.e. it is perpendicular to the transmission axis of the top polarizer, and a dark state results. As an example, Figure 5.8(b) depicts the voltage-dependent transmittance and reflectance curves of a transflective MTN LCD with a twist angle of 90° and dΔn = 240 nm. Here both T- and R-modes operate in a normally white mode. Generally speaking, for TN- or MTN-based LCDs, the normally white display mode is preferable to the normally black mode because the dark state of the normally white mode is controlled by the on-state applied voltage. Thus, the dark state of the normally white mode is insensitive to cell gap variation. Such a large cell gap tolerance is highly desirable for improving the manufacturing yield. When comparing Figure 5.8(b) with Figure 5.6(b), one can see two distinctions between the transflective MTN LCD and the transflective TN LCD. First, without bottom polarizer absorption, the reflectance of the transflective MTN LCD is higher than that of the transflective TN LCD. Second, the transmittance of the transflective MTN LCD is much lower than that of the transflective TN LCD. This is because the maximum obtainable normalized transmittance is always less than 100% for a transmissive TN cell sandwiched between two circular polarizers [39]. Figure 5.9 shows the maximum obtainable
Figure 5.8 Transflective MTN LCD: (a) schematic device configuration (from top to bottom: polarizer 1 at 0°, λ/2 film at 15°, λ/4 film at 75°, MTN LC layer, internal transflector, λ/4 film at 75°, λ/2 film at 15°, polarizer 2 at 0°, backlight); and (b) voltage-dependent transmittance and reflectance curves. © 2005 IEEE. Reproduced from [6] by permission of IEEE.
Figure 5.9 The maximum obtainable normalized reflectance and transmittance in the transflective MTN LCD and the transflective TN LCD as a function of the twist angle of the LC layer: T-mode with two circular polarizers, T-mode with two linear polarizers, and R-mode with one circular polarizer. © 2005 IEEE. Reproduced from [6] by permission of IEEE.
normalized reflectance and transmittance in optimized transflective MTN and TN LCDs as a function of twist angle. Here the normalized reflectance and transmittance represent only the polarization modulation efficiency; the light losses caused by the polarizers and the reflector are all neglected. Due to the effect of the circular polarizer on each side of the MTN cell, as long as the twist angle is larger than 0° the maximum obtainable normalized transmittance gradually decreases, regardless of the dΔn value of the MTN LC layer, as represented by the solid gray line in Figure 5.9. For instance, in the 90° MTN cell with a circular polarizer on each side, the maximum obtainable normalized transmittance is 33%. On the other hand, the dark dashed line shows that the maximum obtainable normalized reflectance remains steadily at 100% until the twist angle exceeds 73°. In short, although the transflective MTN LCD overcomes the parallax problem, its maximum obtainable normalized transmittance in the T-mode is too low. Such a low normalized transmittance demands a brighter backlight which, in turn, consumes more battery power and reduces the battery lifetime.
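The 33% figure for the 90° MTN T-mode can be reproduced with the standard Jones matrix of a uniformly twisted nematic layer placed between crossed ideal circular polarizers; polarizer absorption, film dispersion, and the transflector are all ignored in this sketch. The same routine also confirms the near-100% transmittance of the 90° TN cell between crossed linear polarizers at dΔn = 476 nm.

```python
import numpy as np

def tn_jones(twist_deg, d_delta_n_nm, wavelength_nm=550.0):
    """Lab-frame Jones matrix of a uniformly twisted nematic layer,
    with the front director along x."""
    phi = np.radians(twist_deg)
    gamma = 2 * np.pi * d_delta_n_nm / wavelength_nm
    x = np.sqrt(phi ** 2 + (gamma / 2) ** 2)
    a = np.cos(x) - 1j * (gamma / 2) * np.sin(x) / x
    b = phi * np.sin(x) / x
    m_local = np.array([[a, b], [-b, np.conj(a)]])
    c, s = np.cos(phi), np.sin(phi)
    return np.array([[c, -s], [s, c]]) @ m_local

rcp = np.array([1, -1j]) / np.sqrt(2)   # right-circular input state
lcp = np.array([1,  1j]) / np.sqrt(2)   # orthogonal (left-circular) analyzer state
x_lin, y_lin = np.array([1.0, 0.0]), np.array([0.0, 1.0])

def t_circular(d_delta_n):   # 90-deg MTN between crossed circular polarizers
    return abs(np.vdot(lcp, tn_jones(90, d_delta_n) @ rcp)) ** 2

def t_linear(d_delta_n):     # 90-deg TN between crossed linear polarizers
    return abs(np.vdot(y_lin, tn_jones(90, d_delta_n) @ x_lin)) ** 2

print("MTN T-mode, d*delta_n = 240 nm:", round(t_circular(240), 2))  # ~0.33
print("MTN T-mode, no retardation    :", round(t_circular(0), 2))    # 0.00 (dark)
print("TN  T-mode, d*delta_n = 476 nm:", round(t_linear(476), 2))    # ~1.00
```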
5.3.4.3 Patterned-Retarder Transflective MTN/TN LCD
If we replace the circular polarizers with linear polarizers on both sides of the MTN/TN cell, the maximum normalized transmittance can be boosted to 100% for any twist angle from 0° to 100°, as designated by the solid dark line in Figure 5.9. With a linear polarizer on each side of the cell, the T-mode operates in the same way as a conventional transmissive TN LCD. The Philips Research Group proposed a dual-cell-gap transflective MTN/TN LCD using patterned phase retarders [39]. Figure 5.10(a) shows the schematic device structure. Each pixel is divided into a transmission region and a reflection region by a derivative of the openings-on-metal type transflector. A patterned broadband phase retarder is deposited on the inner side of the top substrate. More specifically, the patterned phase retarder is located right above the reflection region, while no phase retarder exists above the transmission region. In addition, the cell gap in the transmission region is around twice that of the reflection region and the LC layer twists by 90° in both regions. The patterned phase retarder actually comprises a half-wave film and a quarter-wave film fabricated by wet coating techniques [40]. In the transmissive region, the cell is identical to the traditional transmissive TN LCD, while in the reflective region it operates in an MTN mode. Figure 5.10(b) shows the voltage-dependent transmittance and reflectance curves with dΔn = 476 nm in the transmission region and dΔn = 240 nm in the reflection region.
Figure 5.10 Patterned-retarder transflective MTN/TN LCD: (a) schematic device configuration; and (b) voltage-dependent transmittance and reflectance curves. © 2005 IEEE. Reproduced from [6] by permission of IEEE.
from the figure, both T- and R-modes have very good grayscale overlap. Since the maximum normalized reflectance of the 90° MTN mode is around 88% (see the dark dashed line in Figure 5.9), the reflectance is slightly lower than the transmittance. This patterned-retarder transflective MTN/TN LCD has the advantages of matched grayscales, high contrast ratio, and low color dispersion. However, it still requires a dual-cell-gap configuration, which might produce a distorted twisted LC director profile on the border of the transmission and reflection regions.
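The broadband ('achromatic') circular polarizers used throughout these transflective designs are typically built from a linear polarizer followed by a half-wave film at 15° and a quarter-wave film at 75° [38]; the same 0°/15°/75° angles appear in the film stacks of Figures 5.11 and 5.12. The sketch below is an illustration written for this discussion (the 550 nm design wavelength and the assumption that each film's retardation simply scales as 1/λ are simplifications, and the actual film parameters in [39, 40] may differ); it compares the circularity of the light produced by the two-film stack with that of a single quarter-wave film at 45°.

```python
import numpy as np

def retarder(delta, psi):
    """Jones matrix of a linear retarder with retardation delta (rad), axis at angle psi (rad)."""
    c, s = np.cos(psi), np.sin(psi)
    rot = np.array([[c, -s], [s, c]])
    return rot @ np.diag([np.exp(-1j * delta / 2.0), np.exp(1j * delta / 2.0)]) @ rot.T

def circularity(jones_vec):
    """|S3|/S0 of a Jones vector; equals 1.0 for perfectly circular polarization."""
    ex, ey = jones_vec
    return abs(2.0 * np.imag(np.conj(ex) * ey)) / (abs(ex)**2 + abs(ey)**2)

lam_design = 550.0                 # design wavelength in nm (assumed)
e_in = np.array([1.0, 0.0])        # linearly polarized light leaving the polarizer at 0 deg

for lam in (450.0, 500.0, 550.0, 600.0, 650.0):
    scale = lam_design / lam       # film retardation assumed to scale as 1/lambda
    stack  = retarder(scale * np.pi / 2.0, np.radians(75.0)) @ retarder(scale * np.pi, np.radians(15.0))
    single = retarder(scale * np.pi / 2.0, np.radians(45.0))
    print(f"{lam:5.0f} nm : |S3| = {circularity(stack @ e_in):.3f} (lambda/2 at 15 deg + lambda/4 at 75 deg)"
          f"  vs  {circularity(single @ e_in):.3f} (single lambda/4 at 45 deg)")
```

At the design wavelength both configurations give perfectly circular light, but towards the blue end of the visible band the single quarter-wave film drops to roughly |S3| ≈ 0.94 while the two-film stack stays above about 0.99, which is why the half-wave/quarter-wave combination is preferred in practice.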
5.3.4.4 Transflective Mixed-Mode LCDs
To compensate for the intrinsic optical path differences between the transmission and reflection regions, Sharp Corp. proposed an approach that generates different director configurations simultaneously in the two regions [41]. The different director configurations can be realized by, for example, applying different alignment treatments, exerting different driving voltages, generating different electric fields, producing different cell gaps in the two regions, and so on. Thus, the transmission region may, in principle, operate in a different LC mode from the reflection region, which leads to the name transflective mixed-mode LCDs.

If two circular polarizers are indispensable on both sides of the cell, one solution to maximize the normalized transmittance is to decrease the LC twist angle to 0° in the transmission region while still maintaining a twist profile in the reflection region. The transmission region can then operate in the electrically controlled birefringence (ECB) mode while the reflection region still runs in the MTN mode. Figure 5.11(a) shows the device configuration of a transflective MTN/ECB LCD using the openings-on-metal transflector [42]. The top substrate is uniformly rubbed while the bottom substrate has two rubbing directions: in the reflection region the LC layer twists 75°, while in the transmission region the LC layer has zero twist, i.e. homogeneous alignment. Therefore, the reflection region works in the 75° MTN mode while the transmission region operates in the ECB mode. Coincidentally, their dΔn requirements are very close to each other; therefore, a single-cell-gap device configuration is adopted in both regions. Figure 5.11(b) plots the voltage-dependent transmittance and reflectance curves with dΔn = 278 nm in both regions. Through such a dual-rubbing process, both T- and R-modes of the transflective MTN/ECB LCD reach their maximum light efficiency almost simultaneously. Still, one might notice that the T-mode has slightly lower light efficiency than the R-mode. This is because the dΔn requirements of the T- and R-modes are slightly different, and a compromise is made to optimize the R-mode.

Besides the above-demonstrated dual-rubbing transflective MTN/ECB LCD, other similar dual-rubbing transflective mixed-mode LCDs have also been reported by different research groups, such as dual-rubbing transflective VA/HAN (Vertical Alignment/Hybrid Aligned Nematic) LCDs [43] and dual-rubbing transflective ECB/HAN LCDs [44]. The common characteristic of these dual-rubbing transflective LCDs is that different rubbing directions or different alignment layers are required on at least one of the substrates. This leads to two obstacles for commercial applications. First, the dual-rubbing requirement implies a complicated manufacturing process and hence increased cost. More seriously, the dual rubbing usually introduces a disclination line on the border between the different rubbing regions, which lowers the image brightness and deteriorates the contrast ratio as well.

To avoid the dual-rubbing process while still maintaining a single-cell-gap device configuration, an alternative way to achieve different director configurations in the two regions is to introduce different electric field intensities in the two regions. For example, the transflective VA LCD utilizes periodically patterned electrodes to generate different LC tilt angle profiles in the two regions [45].
Nevertheless, the metal reflector therein is insulated from its surrounding ITO electrodes, which increases the manufacturing complexity. On the other hand, the patterned reflector is either connected with the common electrode or electrically floated, which results in either a dead zone in the reflection region or charge stability uncertainties. Another example is the transflective IPS LCD, which uses the different
Figure 5.11 Dual-rubbing transflective MTN/ECB LCD: (a) schematic device configuration (from the ambient side: polarizer 1 (0°), λ/2 film (15°), λ/4 film (75°), ITO/substrate, LC layer, reflector, ITO, substrate, λ/4 film (75°), λ/2 film (15°), polarizer 2 (0°), backlight); and (b) voltage-dependent transmittance and reflectance curves. © 2005 IEEE. Reproduced from [6] by permission of IEEE.
twist angle profiles in the horizontal direction of the interdigitated electrodes for the transmission and reflection regions [46]. In this design, an in-cell retarder is used between the transflector and the LC layer. When no voltage is applied, the LC layer is homogeneously aligned. The LC cell together with the in-cell retarder plays the role of a broadband quarter-wave film [38]. Such a design has two shortcomings. First, unlike in the conventional transmissive IPS LCD, here the dark state is very sensitive to the LC layer thickness. Second, the in-cell retarder is still difficult to assemble inside the cell, even with state-of-the-art fabrication techniques.
5.3.4.5 Dual-Cell-Gap Transflective LCDs
Unless identical display modes are adopted in the T- and R-modes, there are always some discrepancies between their voltage-dependent transmittance and reflectance curves. This is the reason why none of the above-mentioned transflective mixed-mode LCDs have perfectly matched voltage-dependent transmittance and reflectance curves. Different from the mixed display modes employed between the transmission and reflection regions as described above, Sharp Corp. also introduced the dual-cell-gap concept for transflective LCDs [41, 47]. Figure 5.12(a) shows the schematic device configuration of a dual-cell-gap transflective ECB LCD. Similar to the dual-rubbing transflective MTN/ECB LCD, this dual-cell-gap transflective ECB LCD also has a circular polarizer on each side of the cell. The role of the circular polarizers is to make the display operate in a 'normally white' mode so that its dark state is not too sensitive to cell gap variations, as explained in Section 5.4.3.2. Each pixel is divided into a transmission region with cell gap dT and a reflection region with cell gap dR. The LC directors are homogeneously aligned within the cell; therefore, no dual-rubbing process is necessary and both regions operate identically in the ECB mode. Since the homogeneously aligned LC layer imposes only a pure phase retardation on the incident polarized light, dR is set to around half of dT to compensate for the optical path difference between the ambient light and the backlight. Figure 5.12(b) depicts the voltage-dependent transmittance and reflectance curves with dRΔn = 168 nm and dTΔn = 336 nm. As one can see, both curves perfectly match each other and both modes reach the highest transmittance and reflectance simultaneously. Here dRΔn and dTΔn are designed to be slightly larger than λ/4 and λ/2, respectively, in order to reduce the on-state voltage (a quick numerical check of these values is given at the end of this subsection).

The downside of the dual-cell-gap approach is two-fold. First, the thicker cell gap in the transmission region results in a slower response time than in the reflection region. However, the dynamic response requirement in mobile applications is not as strict as that for video applications, so this response time difference, although not ideal, is still tolerable. Second, the viewing angle of the T-mode is quite narrow because the LC directors are tilted up in one direction by the external electric field. By substituting a biaxial film for the quarter-wave film on each side of the cell, the viewing angle can be greatly improved [48]. Because the manufacturing process is completely compatible with state-of-the-art techniques, the dual-cell-gap transflective ECB LCD is so far the mainstream approach for commercial transflective LCD products.

Besides the dual-cell-gap transflective ECB LCD, other dual-cell-gap transflective LCDs have also been proposed, such as the dual-cell-gap transflective VA LCD [49], the dual-cell-gap transflective HAN LCD [50], and the dual-cell-gap transflective FFS (fringe-field switching) LCD [51, 52]. Similar to the dual-cell-gap transflective ECB LCD, both the dual-cell-gap transflective VA LCD and the dual-cell-gap transflective HAN LCD also operate in the ECB mode, although their initial LC alignments are different. In the dual-cell-gap transflective FFS LCD, on the other hand, the LC directors are switched in the plane parallel to the supporting substrates, and its dark state is achieved by a half-wave film together with the initially homogeneously aligned LC layer. Consequently, the dark state is very sensitive to the LC cell gap, which makes it difficult to maintain a good dark state in both the transmission and reflection regions in a dual-cell-gap device configuration.
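As a quick check on the quoted retardation values (the design wavelength is not stated in the text; 550 nm is assumed here for illustration), the single-pass/double-pass argument behind the dual cell gap can be written out explicitly:

\[
\delta_T = \frac{2\pi\, d_T\, \Delta n}{\lambda}, \qquad
\delta_R = \frac{2\pi\,(2 d_R)\,\Delta n}{\lambda},
\]

so matching the two phase retardations requires d_R = d_T/2. At λ = 550 nm a half-wave single pass corresponds to d_TΔn = λ/2 = 275 nm and hence d_RΔn = λ/4 ≈ 138 nm; the quoted design values of 336 nm and 168 nm keep the 2:1 ratio but are slightly larger, consistent with the stated aim of lowering the on-state voltage.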
Figure 5.12 Dual-cell-gap transflective ECB LCD: (a) schematic device configuration (from the ambient side: polarizer 1 (0°), λ/2 film (15°), λ/4 film (75°), ITO/substrate, LC layer with cell gaps dR and dT, metal reflector in the reflection region, ITO/substrate, λ/4 film (75°), λ/2 film (15°), polarizer 2 (0°), backlight); and (b) voltage-dependent transmittance and reflectance curves. © 2005 IEEE. Reproduced from [6] by permission of IEEE.
5.3.4.6 Single-Cell-Gap Transflective LCDs
Different from the dual-cell-gap transflective LCD, the single-cell-gap transflective LCD has a uniform cell gap throughout the cell. Therefore, the dynamic responses of the T- and R-modes are close to each other.
Wire grid polarizer-based single-cell-gap transflective LCDs
Based on the unique features of the wire grid polarizer (WGP), Wu [53] and Ge et al. [54] proposed a single-cell-gap transflective LCD design. Figures 5.13(a) and (b) show the device configurations operating in the VA and TN modes, respectively. In both, the LC layer is sandwiched between two crossed linear polarizers, and the WGP is formed on the bottom substrate with its wire grid direction parallel to the transmission direction of the bottom polarizer. In addition, the transflective cell is divided into transmissive and reflective regions, and the WGP covers only the reflective region. Such a design overcomes the reversed image problem of the conventional WGP-based transflective LCDs as described in Section 5.2.4.3.

Figure 5.13(a) depicts the WGP-based single-cell-gap transflective LCD operating in the normally black VA mode. The boundary LC directors are initially rubbed at 45° with respect to the transmission direction of the top polarizer. In the voltage-off state, the linearly polarized light from the top polarizer keeps its polarization state after traversing the homeotropic LC cell. With its polarization direction perpendicular to the wire grids, the light passes through the WGP and is then absorbed by the bottom polarizer, resulting in a dark state in the R-mode. For the T-mode, the backlight transmitted through the bottom polarizer is blocked by the crossed top polarizer. Thus, a common dark state is achieved for both T- and R-modes. As the applied voltage exceeds the threshold voltage, the LC directors are reoriented by the electric field. Under proper reorientation, the cell can be tuned to a state that is functionally equivalent to a half-wave plate. In the R-mode, the polarization axis of the incoming ambient light from the top polarizer is rotated by 90° by the LC cell and becomes parallel to the wire grids. The light is reflected back by the WGP, passes through the activated LC cell again, and finally exits the top polarizer. Similarly, in the T-mode the backlight from the bottom polarizer is also rotated by the LC cell and transmitted by the top polarizer. As a result, a common bright state is achieved.

Figure 5.13(b) shows another WGP-based single-cell-gap transflective LCD operating in the normally white 90° TN mode. In the voltage-off state, the linearly polarized light generated by the top polarizer experiences a polarization rotation by the TN cell and becomes parallel to the wire grids. After it is reflected by the WGP, its polarization state experiences a second conversion by the TN cell and it then exits the top polarizer at maximum light efficiency, resulting in a bright state in the R-mode. In the T-mode, the backlight from the bottom polarizer is rotated by the TN cell and is transmitted by the top polarizer. Thus, without voltage, the cell is in a bright state. In the high-voltage regime, the LC directors are reoriented perpendicular to the substrates. Neither the ambient light nor the backlight sees any phase retardation, so a common dark state is achieved in both T- and R-modes.

To validate this device concept and investigate its performance, Figures 5.14(a) and (b) show the voltage-dependent transmittance and reflectance curves when the WGP-based single-cell-gap transflective LCD runs in the VA and TN modes, respectively. For the VA mode, a negative LC mixture MLC-6608 (from Merck) with dΔn = 360 nm is used. As shown in Figure 5.14(a), in the voltage-off state both T- and R-modes are dark. At V ≈ 5.5 Vrms, both T- and R-modes reach their highest light efficiency simultaneously. Nevertheless, at the intermediate gray levels, the R-mode has lower light efficiency than the T-mode. This is because, at the intermediate gray levels, the phase retardation of the LC cell is less than a half-wave. Therefore, when the polarized light reaches the WGP, it is generally elliptically polarized, with electric field components both parallel and perpendicular to the wire grids. Since the perpendicular component can pass through the WGP and is further absorbed by
Figure 5.13 Schematic illustration of the WGP-based single-cell-gap transflective LCD operating at (a) normally black VA mode and (b) normally white TN mode; the panels show the layer stack (polarizer, ITO/substrate, VA-LC layer with 45° rubbing or 90° TN-LC layer, WGP, ITO/substrate, polarizer, backlight) in the voltage-off and voltage-on states, with retardations δ = 0 and δ = λ/2. From Reference [54], copyright © 2006 IEEE.
the bottom polarizer, only the parallel electric field component of the light is reflected by the WGP. Consequently, in the R-mode there is always some extra loss from the WGP in the intermediate gray levels. Only when the LC cell is fully turned on to be equivalent to a half-wave retarder can both R- and T-modes reach the same light efficiency level.
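The extra loss can be made explicit with an idealized two-beam model. The sketch below is an illustration written for this discussion rather than the full simulation behind Figure 5.14: the activated VA layer is treated as a uniform retarder at 45° with retardation δ, the WGP as a lossless reflective polarizer, and surface reflections are ignored; because this retarder's Jones matrix is symmetric, the same matrix can be used for the forward and return passes.

```python
import numpy as np

def waveplate(delta, psi):
    """Jones matrix of a linear retarder with retardation delta (rad), axis at angle psi (rad)."""
    c, s = np.cos(psi), np.sin(psi)
    rot = np.array([[c, -s], [s, c]])
    return rot @ np.diag([np.exp(-1j * delta / 2.0), np.exp(1j * delta / 2.0)]) @ rot.T

pol_x = np.array([1.0, 0.0])        # top polarizer transmission axis (0 deg)
pol_y = np.array([0.0, 1.0])        # bottom polarizer transmission axis (90 deg, crossed)
wgp_reflect = np.diag([0.0, 1.0])   # ideal WGP: reflects only the field component along the wires (y)

for delta in np.linspace(0.0, np.pi, 5):          # LC retardation from 0 (off) to half-wave (on)
    lc = waveplate(delta, np.radians(45.0))       # activated VA layer modeled as a retarder at 45 deg
    T = abs(pol_x @ (lc @ pol_y))**2                        # T-mode: backlight, single pass
    R = abs(pol_x @ (lc @ wgp_reflect @ lc @ pol_x))**2     # R-mode: double pass via the WGP
    print(f"delta = {delta / np.pi:.2f} pi :  T = {T:.3f}   R = {R:.3f}")
```

The printout shows T = sin²(δ/2) but R = sin⁴(δ/2): the two coincide only at δ = π (the half-wave condition), which is exactly the gray-level behavior described above.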
Figure 5.14 WGP-based single-cell-gap transflective LCD; voltage-dependent transmittance and reflectance curves at (a) VA mode with dΔn = 360 nm and (b) TN mode with dΔn = 480 nm. From Reference [54], copyright © 2006 IEEE.
A similar phenomenon is also observed when the device operates in the 90° TN mode, as shown in Figure 5.14(b). In this case a positive LC material, ZLI-4792 (from Merck) with dΔn = 480 nm, is employed. Different from the VA cell shown in Figure 5.14(a), the 90° TN cell operates in the normally white mode: the T- and R-modes both have their highest light efficiency at V = 0 and reach a good dark state at high voltage. Similarly, because of the extra loss from the WGP at the intermediate gray levels, the R-mode declines faster than the T-mode. In addition, because the positive LC material has a larger dielectric anisotropy (Δε), the TN cell has a lower operating voltage than the VA cell, which is favorable for the low power consumption required in mobile display devices.
Besides high light efficiency for both T- and R-modes, viewing angle is another important requirement for transflective LCDs. The iso-contrast plots for the T- and R-modes of the VA cell without any film compensation are shown in Figures 5.15(a) and (b), respectively. The R-mode shows a wider viewing angle (CR > 10:1 within a 50° viewing cone) than the T-mode (CR > 10:1 within a 35° viewing cone). A similar phenomenon is observed in the iso-contrast plots for the TN cell-based design shown in Figures 5.16(a) and (b).

The major factor contributing to the better viewing angle of the R-mode is that the WGP functions similarly to an E-type reflective polarizer. Conventional LCD devices use the O-type sheet polarizer, whose absorption direction is parallel to the principal axis of the absorptive dye, which is also the optical axis with the larger light attenuation. Therefore, the O-type sheet polarizer absorbs the extraordinary ray and transmits the ordinary ray. In the E-type polarizer, on the other hand, the absorption directions of the dye molecules lie in a plane perpendicular to the optical axis, which is the axis of smaller light attenuation. Therefore, under normal view the E-type sheet polarizer transmits the extraordinary ray but absorbs the ordinary ray, while under off-axis view it absorbs the entire ordinary ray and a portion of the extraordinary ray [55]. Thus, in the dark state of the R-mode, the obliquely incident ambient light experiences more absorption than the backlight does, resulting in a better dark state in the R-mode. This explains why the viewing angle of the R-mode is wider than that of the T-mode. In practice, surface reflections may degrade the R-mode image contrast ratio under strong ambient illumination. Fortunately, the R-mode is mainly used outdoors and its image quality requirements are more forgiving than those of the T-mode; therefore, viewing angle optimization usually focuses on the T-mode. Owing to the simple device structure, with no achromatic quarter-wave film involved, both the VA and TN cells here can be compensated without much difficulty. Typical compensation for the normally black VA cell can be achieved by a negative C-film [56], while the normally white TN cell can be well compensated by the Fuji film [57].

This WGP-based single-cell-gap transflective LCD shows high light efficiency in both T- and R-modes. With a single-cell-gap configuration, the surface alignment of the LC cell is simple and both T- and R-modes can have the same response time. Furthermore, in the T-mode, no achromatic quarter-wave film is necessary, which makes the optical compensation scheme fairly easy. On the other hand, although a WGP is employed in the device, the WGP itself cannot help to boost the light efficiency in the T-mode unless an additional light recycling scheme is implemented. In the meantime, fabricating a WGP for the visible spectral region is still quite challenging with current technologies.
Other single-cell-gap transflective LCDs
Besides the WGP-based single-cell-gap transflective LCD design, Huang et al. also proposed a single-cell-gap transflective LCD using an IER [58], which is similar to the structure described in Section 5.3.3. In their design, the backlight is reflected by the IER towards the reflection area; as a result, the transmitted beam from the backlight traverses a similar optical path to that of the ambient beam, which leads to the same color saturation in both T- and R-modes. However, similar to the transflective cholesteric LCD described in Section 5.3.3, producing an IER on the top substrate increases the manufacturing complexity. Besides, any mismatch between the IER and the bottom pixel layout may cause light leakage.

As a matter of fact, several of the transflective LCDs described in the above sections also belong to this single-cell-gap category, such as the transflective TN and STN LCDs, the transflective MTN LCD, the dual-rubbing transflective MTN/ECB LCD, the dual-rubbing transflective VA/HAN LCD, the dual-rubbing transflective ECB/HAN LCD, the transflective VA LCD utilizing periodically patterned electrodes, and the transflective IPS LCD. Because the ambient light travels twice through the LC layer while the backlight propagates through it only once, the light efficiencies of the T- and R-modes cannot reach their maxima simultaneously unless mixed display modes are employed. This leads to the transflective
Figure 5.15 Iso-contrast plots for the (a) T-mode and (b) R-mode of the WGP-based single-cell-gap transflective LCD using the VA cell. From Reference [54], copyright © 2006 IEEE.
Figure 5.16 Iso-contrast plots for the (a) T-mode and (b) R-mode of the WGP-based single-cell-gap transflective LCD using the TN cell. From Reference [54], copyright © 2006 IEEE.
mixed-mode LCDs as described in Section 5.3.4.4. As discussed therein, the transflective mixed-mode LCDs require either a dual-rubbing process or complicated electrode designs. Consequently, such single-cell-gap transflective mixed-mode LCDs have not yet been commercialized.
5.3.4.7 Other Transflective LCDs
In addition to the above-described transflective LCDs, some other miscellaneous transflective LCDs have been proposed as well. For instance, Philips Research Group reported a transflective LCD using a cholesteric reflector [59]. However, it displays reversed images between the reflected ambient light and the transmitted backlight due to the intrinsic orthogonal polarization features of the cholesteric reflector. By using a circular polarizer on each side of the cell, the cholesteric half reflection mode (CHARM) LCD overcomes the reversed image problem [60]. Nevertheless, it must use a half cholesteric transflector, which only partially reflects and transmits the desired circularly polarized light. As a result, its light efficiency is close to that of transflective LCDs using non-cholesteric reflectors. Consequently, the CHARM LCD design loses the high light efficiency property of the cholesteric reflectors. Besides, a transflective LCD using ferroelectric and antiferroelectric LC materials has also been reported [61]. Nevertheless, due to the intrinsic limitations of ferroelectric LC technology, these efforts are still at the laboratory demonstration stage.
5.4 Discussion
We have described the basic operating principles of some of the main transflective LCDs. The simulation results are based on somewhat idealized assumptions. Understandably, many other factors can affect the displayed image quality, such as color balance, image brightness, and viewing angle.
5.4.1 Color Balance
Because the reflected beam passes through the color filter (CF) twice while the transmitted beam passes through it only once, the transflective LCD generally suffers from a color balance difference between the T- and R-modes. To solve this color imbalance problem, different CF approaches have been developed. Sharp Corp. proposed a multi-thickness CF (MT-CF) design for the transflective LCD [62]. In that design, the CF thickness in the reflection region is around half of that in the transmission region, because the ambient beam passes through the thinner CF twice while the transmission beam passes through the thicker CF only once. As a result, these two beams experience almost the same spectral absorption. Such a CF thickness difference therefore ensures almost identical color saturation in the transmission and reflection regions, resulting in a good color balance between the T- and R-modes.

In addition to the MT-CF design, a pinhole-type CF design has also been proposed [62]. Here, the CF thicknesses in the two regions are equal, but the CF in the reflection region is punched with pinholes. A portion of the ambient light therefore does not 'see' the CF; instead, it passes directly through the pinholes. The problem with such a pinhole-type CF is its narrow color reproduction area, because the unfiltered ambient light mixes with each of the RGB primary colors and degrades the color purity.

An alternative approach to obtaining the same color balance between the T- and R-modes is to fill the CF in the reflection region with scattering materials [63]. The filled scattering materials have two functions. First, the equivalent CF thickness in the reflection region decreases to around half of that in the transmission region. Second, the scattering materials can steer the reflected beam away from the
specular reflection direction; therefore, a pure flat metal reflector can be used in the reflection region, which simplifies the manufacturing process.
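The MT-CF argument can be made quantitative with a simple Beer–Lambert estimate (a schematic model assumed here for illustration, ignoring interface reflections and the angular spread of the reflected beam). If the color filter material has absorption coefficient α(λ) and the transmissive region uses the full thickness t, then

\[
T_{\mathrm{R}}(\lambda) = e^{-\alpha(\lambda)\,\cdot\,2\,(t/2)} = e^{-\alpha(\lambda)\,t} = T_{\mathrm{T}}(\lambda),
\]

i.e. a double pass through a half-thickness filter gives the same spectral attenuation as a single pass through the full thickness, which is why the reflective and transmissive sub-pixels end up with nearly identical color saturation.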
5.4.2 Image Brightness
Image brightness is a very important attribute of transflective LCDs, but many factors decrease the overall image brightness. For instance, the red, green, and blue color filters have different light attenuations, which affects the overall brightness of the display panel. Besides, the reflective region of the openings-on-metal transflector, usually made from aluminum, has 92% reflectivity over the visible spectral region [64], which leads to a slightly lower light efficiency in the R-mode.

In the case of the openings-on-metal transflector, each portion of the transflector serves either the reflection region or the transmission region. To increase the backlight utilization efficiency while keeping the ambient light efficiency unchanged, Yang et al. proposed a transflective LCD design using a microtube array below the transmission pixel regions [65]. The microtube structure, which is similar to a funnel in shape, allows most of the backlight to enter through a larger lower aperture and exit through a smaller upper aperture. Consequently, the backlight utilization efficiency can be greatly enhanced while the transmission/reflection area ratio remains unchanged. After optimization, the average backlight utilization efficiency is improved by a factor of 1.81.

Except for the WGP-based single-cell-gap transflective LCD [53, 54] discussed in Section 5.3.4.6 and the transflective LCDs using cholesteric transflectors [59, 60] discussed in Section 5.3.4.7, all of the other transflective LCDs discussed in this chapter have the issue of an 'exclusive transflector': the transflector is used exclusively for either the T- or the R-mode. As a result, the light loss from the transflector itself is fairly considerable. To further improve the light efficiency of both T- and R-modes simultaneously, orthogonal polarization transflectors might be a potential solution; however, this route can only be pursued once the reversed image problem is resolved.
5.4.3 Viewing Angle
Viewing angle is another important concern for transflective LCDs. As mentioned in Section 5.3.4.5, the dual-cell-gap transflective ECB LCD has a narrow viewing angle in the T-mode, but by substituting a biaxial film for the quarter-wave film on each side of the cell, the viewing angle of the T-mode can be greatly widened [48]. In the R-mode, surface reflection is the main factor degrading the image contrast ratio and viewing angle. To solve this problem, a bumpy reflector in the reflection region is commonly employed. The bumpy reflector serves two purposes: (1) to diffuse the reflected light, which is critical for widening the viewing angle; and (2) to steer the reflected light away from the specular reflection direction so that the displayed images are not washed out by surface reflections. When designing bumpy reflectors, one needs to consider that the incident and reflected beams may form different angles with respect to the panel normal. In optical modeling of the R-mode, these asymmetric incident and exit angles should be taken into account [66].
5.5 Conclusion In this chapter, we first introduced the transflector classifications. Then, based on the development history, we investigated some mainstream transflective LCDs, including their operating principles,
Table 5.1 Comparison between different types of transflective LCDs.

Type | Dual-rubbing requirement | Dual-cell-gap requirement | Grayscale overlap capability | Advantages and disadvantages
1 | No | No | Good | High brightness but low contrast ratio
2 | No | No | Good | High brightness, but no dark state, parallax problem, and high driving voltage
3 | No | No | Good | Good color balance, but complicated structure and low contrast ratio
4 | No | No | Good | Simple device structure, but low reflectance and parallax problem
5 | No | No | Good | Simple device structure, but low transmittance
6 | No | Yes | Good | High brightness, but poor oblique viewing performance
7 | Yes | No | Good | High brightness, but complicated manufacturing process and disclination line
8 | No | No | Good | High brightness, but complicated device structure
9 | No | Yes | Perfect | High contrast and high brightness, but different response speed in T- and R-modes
10 | No | Yes | Good | High brightness but low cell gap tolerance
11 | Yes | No | Good | Same response speed, but different rubbings or complicated structure/electrodes required
12 | No | No | Good | Limitation in commercialization

Note: 1: Absorption type transflective LCDs [9], [20]; 2: Scattering type transflective LCD [25]; 3: Reflection type transflective cholesteric LCD [26], [27]; 4: Transflective TN and STN LCDs [30], [32]; 5: Transflective MTN LCD [37]; 6: Patterned-retarder transflective MTN/TN LCD [39]; 7: Dual-rubbing transflective MTN/ECB LCD [41], [42], dual-rubbing transflective VA/HAN LCD [43], and dual-rubbing transflective ECB/HAN LCD [44]; 8: Transflective VA LCD using periodically patterned electrode [45], and transflective IPS LCD [46]; 9: Dual-cell-gap transflective ECB LCD [41], [47], dual-cell-gap transflective VA LCD [49], and dual-cell-gap transflective HAN LCD [50]; 10: Dual-cell-gap transflective FFS LCD [51], [52]; 11: Single-cell-gap transflective LCDs [30], [32], [37], [41]–[46], [53], [54]; 12: Transflective LCD using cholesteric reflector [59], CHARM LCD [60], and transflective LCD using FLC and AFLC [61]. © 2005 IEEE. Reproduced from [6] by permission of IEEE.
advantages, and disadvantages. For the convenience of comparison, Table 5.1 lists all of the transflective LCDs discussed in this chapter [67]. Among them, the dual-cell-gap transflective ECB LCD has the best overall performance. This explains why it dominates the commercial transflective LCD market. In addition, the image quality issues of transflective LCDs, such as color balance, image brightness, and viewing angle, are discussed as well.
References [1] Oh-e, M. and Kondo, K. (1995) Electro-optical characteristics and switching behavior of the in-plane switching mode, Appl. Phys. Lett., 67, pp. 3895–3897. [2] Lee, S.H., Lee, S.L. and Kim, H.Y. (1998) Electro-optic characteristics and switching principle of a nematic liquid crystal cell controlled by fringe-field switching, Appl. Phys. Lett., 73, pp. 2881–2883. [3] Ohmuro, K., Kataoka, S., Sasaki, T. and Koike, Y. (1997) Development of super-high-image-quality verticalalignment-mode LCD, SID Digest Tech. Papers, 28, pp. 845–848. [4] Kim, K.H., Lee, K.H., Park, S.B. et al. (1998) ‘Domain divided vertical alignment mode with optimized fringe field effect’, in Proc. 18th International Display Research Conference (Asia Display’98), pp. 383–386. [5] Wu, S.T. and Yang, D.K. (2001) Reflective Liquid Crystal Displays, New York: John Wiley & Sons, Ltd. [6] Zhu, X., Ge, Z., Wu, T.X. and Wu, S. -T. (2005) Transflective liquid crystal displays, IEEE/OSA J. Displ. Techn., 1 (1) pp. 15–29. [7] Ketchpel, R.D. and Barbara, S. (1977) ‘Transflector’, U.S. Patent 4,040,727. [8] Borden Jr., H.C. (1973) ‘Universal transmission reflectance mode liquid crystal display’, U.S. Patent 3,748,018. [9] Bigelow, J.E. (1978) ’Transflective liquid crystal display’, U.S. Patent 4,093,356. [10] Furuhashi, H., Wei, C.K. and Wu, C.W. (2004) ‘Transflective liquid crystal display having dielectric multilayer in LCD cells’, U.S. Patent 6806934. [11] Hall, D. R. (1998) ‘Transflective LCD utilizing chiral liquid crystal filter/mirrors’, U.S. Patent 5,841,494. [12] Schrenk, W.J., Chang, V.S. and Wheatley, J.A. (1997) ‘Birefringent interference polarizer’, U.S. Patent 5,612,820. [13] Hansen, D.P. and Gunther, J.E. (1999) ‘Dual mode reflective/transmissive liquid crystal display apparatus’, U.S. Patent 5,986,730. [14] de Gennes, P.G. and Prost, J. (1993) The Physics of Liquid Crystals (2nd edn), New York: Oxford University Press. [15] Ouderkirk, J., Cobb Jr., S., Cull, B.D. et al. (2000) ‘Transflective displays with reflective polarizing transflector’, U.S. Patent 6,124,971. [16] Bass, M., Van Stryland, E.W, Williams, D.R. and Wolfe, W.L. (1995) Handbook of Optics, vol. II, Devices, Measurements, & Properties (2nd edn), New York: McGraw-Hill, pp. 3.32–3.35. [17] Perkins, R.T., Hansen, D.P., Gardner, E.W. et al. (2000) ‘Broadband wire grid polarizer for the visible spectrum’, U.S. Patent 6,122,103. [18] Heilmeier, G.H. and L. A. Zanoni (1968) Guest-host interactions in nematic liquid crystals. A new electro-optic effect, Appl. Phys. Lett., 13, pp. 91–92. [19] White, D.L. and Taylor, G.N. (1974) New absorptive mode reflective liquid-crystal display device, J. Appl. Phys., 45, pp. 4718–4723. [20] Cole, H.S. (1983) ‘Transflective liquid crystal display’, U.S. Patent 4,398,805. [21] Morozumi, S., Oguchi, K., Araki, R. et al. (1985) Full-color TFT-LCD with phase-change guest-host mode, SID Digest Tech. Papers, pp. 278–281. [22] Doane, J.W., Vaz, N.A., Wu, B.-G., and Zumer, S. (1986) Field controlled light scattering from nematic microdroplets, Appl. Phys. Lett., 48, pp. 269–271. [23] Yang, D.K., Doane, J.W., Yaniv, Z. and Glasser, J. (1994) Cholesteric reflective display: Drive scheme and contrast, Appl. Phys. Lett., 64, pp. 1905–1907. [24] Hikmet, R.A.M. (1990) Electrically induced light scattering from anisotropic gels, J. Appl. Phys., 68, pp. 4406–4412. [25] Ren, H. and Wu, S.-T. (2002) Anisotropic liquid crystal gels for switchable polarizers and displays, Appl. Phys. Lett., 81, pp. 1432–1434. 
[26] Huang, Y.-P., Zhu, X., Ren, H. et al. (2004) Full-color transflective Ch-LCD with image-enhanced reflector, SID Digest Tech. Papers, pp. 882–885.
[27] Huang, Y.-P., Zhu, X., Ren, H. et al. (2004) Full-color transflective cholesteric LCD with image-enhanced reflector, J. SID, 12, pp. 417–422. [28] Lien, A. (1990) Extended Jones matrix representation for the twisted nematic liquid-crystal display at oblique incidence, Appl. Phys. Lett., 57, pp. 2767–2769. [29] Kahn, F.J. (1978) Reflective mode, 40-character, alphanumeric twisted-nematic liquid crystal displays, SID Digest Tech. Papers, pp. 74–75. [30] McKnight, W.H., Stotts, L.B. and Monahan, M.A. (1982) ‘Transmissive and reflective liquid crystal display’, U.S. Patent 4,315,258. [31] Gooch, C.H. and Tarry, H.A. (1975) The optical properties of twisted nematic liquid crystal structures with twist angles 90 degrees, J. Phys. D: Appl. Phys., 8, pp. 1575–1584. [32] Kawasaki, K., Yamada, K., Watanabe, R. and Mizunoya, K. (1987) High-display performance black and white supertwisted nematic LCD, SID Digest Tech. Papers, pp. 391–394. [33] Scheffer, T.J. and Nehring, J. (1984) A new, highly multiplexable liquid crystal display, Appl. Phys. Lett., 45, pp. 1021–1023. [34] Maeda, T., Matsushima, T., Okamoto, E. et al. (1999) Reflective and transflective color LCDs with double polarizers, J. SID, 7, pp. 9–15. [35] Ohyama, T., Ukai, Y., Fennell, L. et al. (2004) TN mode TFT-LCD with in-cell polarizer, SID Digest Tech. Papers, pp. 1106–1109. [36] Wu, S.-T. and Wu, C.-S. (1996) Mixed-mode twisted nematic liquid crystal cells for reflective displays, Appl. Phys. Lett., 68, pp. 1455–1457. [37] Molsen, H. and Tillin, M.D. (2000) ‘Transflective Liquid Crystal Displays’, International Patent Application No. PCT/JP99/05210, International Publication No. WO 00/17707. [38] Pancharatnam, S. (1955) Achromatic combinations of birefringent plates: Part I. An achromatic circular polarizer, Proceedings of the Indian Academy of Science, Section A, 41, pp. 130–136. [39] Roosendaal, S.J., van der Zande, B.M.I., Nieuwkerk, A.C. et al. (2003) Novel high performance transflective LCD with a patterned retarder, SID Digest Tech. Papers, pp. 78–81. [40] van der Zande, B.M.I., Nieuwkerk, A.C., van Deurzen, M. et al. (2003) Technologies towards patterned optical foils, SID Digest Tech. Papers, pp. 194–197. [41] Okamoto, M., Hiraki, H. and Mitsui, S. (2001) ‘Liquid crystal display’, U.S. Patent 6,281,952. [42] Uesaka, T., Yoda, E., Ogasawara, T. and Toyooka, T. (2002) ‘Optical design for wide-viewing-angle transflective TFT-LCDs with hybrid aligned nematic compensator’, in Proc. 9th International, Display Workshops, pp. 417–420. [43] Lee, S.H., Park, K.-H., Gwag, J. S. et al. (2003) A multimode-type transflective liquid crystal display using the hybrid-aligned nematic and parallel-rubbed vertically aligned modes, Jpn. J. Appl. Phys., part 1, 42, pp. 5127– 5132. [44] Lim, Y.J., Song, J.H., Kim, Y.B. and Lee, S.H. (2004) Single gap transflective liquid crystal display with dual orientation of liquid crystal, Jpn. J. Appl. Phys., part 2, 43, L972–L974. [45] Lee, S.H., Do, H.W., Lee, G.-D. et al. (2003) A novel transflective liquid crystal display with a periodically patterned electrode, Jpn. J. Appl. Phys., part 2, 42, L1455–L1458. [46] Song, J. H. and Lee, S. H. (2004) A single gap transflective display using in-plane switching mode, Jpn. J. Appl. Phys., part 2, 43, L1130–L1132. [47] Shimizu, M., Itoh, Y. and Kubo, M. (2002) ‘Liquid crystal display device’, U.S. Patent 6,341,002. [48] Shibazaki, M., Ukawa, Y., Takahashi, S. et al. (2003) Transflective LCD with low driving voltage and wide viewing angle, SID Digest Tech. 
Papers, pp. 90–93. [49] Liu, H. D. and Lin, S. C. (2002) A novel design wide view angle partially reflective super multi-domain homeotropically aligned LCD, SID Digest Tech. Papers, pp. 558–561. [50] Yang, C.L. (2004) Electro-optics of a transflective liquid crystal display with hybrid-aligned liquid crystal texture, Jpn. J. Appl. Phys., part 1, 43, pp. 4273–4275. [51] Jung, T.B., Kim, J.C. and Lee, S.H. (2003) Wide-viewing-angle transflective display associated with a fringefield driven homogeneously aligned nematic liquid crystal display, Jpn. J. Appl. Phys., part 2, 42, L464–L467. [52] Jung, T.B., Song, J.H., Seo, D.-S. and Lee, S.H. (2004) Viewing angle characteristics of transflective display in a homogeneously aligned liquid crystal cell driven by fringe-field, Jpn. J. Appl. Phys., part 2, 43, L1211–L1213. [53] Wu, S.T. (2005) ‘Reflective and transflective liquid crystal display using a wire grid polarizer,’ U.S. Patent 6,977,702. [54] Ge, Z., Zhu, X. and Wu, S. T. (2006) Transflective liquid crystal display using an internal wire grid polarizer, IEEE/OSA J. Display Technol. Lett., vol. 2, pp. 102–105.
[55] Yeh, P. and Gu, C. (1999) Optics of Liquid Crystal Display, New York: John Wiley & Sons, Ltd, pp.70–71. [56] Chen, J., Kim, K. -H., Jyu, J. -J. et al. (1998) Optimum film compensation modes for TN and VA LCDs, SID Digest Tech. Papers, pp. 315–318. [57] Mori, H. (2005) The wide view (WV) film for enhancing the field of view of LCDs, IEEE/OSA J. Display Technology, 1, pp. 179–186. [58] Huang, Y. P., Su, M. J., Shieh, H. P. D. and Wu, S. T. (2003) A single cell-gap transflective color TFT-LCD by using image-enhanced reflector, SID Digest Tech. Papers, pp. 86–89. [59] van Asselt, R., van Rooij, R. A. W. and Broer, D. J. (2000) Birefringent color reflective liquid crystal displays using broadband cholesteric reflectors, SID Digest Tech. Papers, pp. 742–745. [60] Hisatake, Y., Ohtake, T., Oono, A. and Higuchi, Y. (2001) ‘A novel transflective TFT-LCD using cholesteric half reflector’, in Proc. 8th International Display Workshops, pp. 129–132. [61] Park, W. S., Kim, S.-C., Lee, S. H. et al.(2001) A new design of optical configuration of transflective liquid crystal displays using antiferroelectric liquid crystals and frustelectric ferroelectric liquid crystals, Jpn. J. Appl. Phys., part 1, 40, pp. 6654–6657. [62] Fujimori, K., Narutaki, Y., Itoh, Y. et al. (2002) New color filter structures for transflective TFT-LCD, SID Digest Tech. Papers, pp. 1382–1385. [63] Kim, K.-J., Lim, J. S., Jung, T. Y. et al. (2002) ‘A new transflective TFT-LCD with dual color filter’, in Proc. 9th International Display Workshops, pp. 433–436. [64] Bass, M., Van Stryland, E. W., Williams, D. R. and Wolfe, W. L. (1995) Handbook of Optics, vol. II, Devices, Measurements, & Properties (2nd edn), New York: McGraw-Hill, pp. 35.28–35.42. [65] Yang, Y. S., Huang, Y. P., Shieh, H. P. D., Tsai, M. C. and Tsai, C. Y. (2004) Applications of microtube array on transflective liquid crystal displays for backlight efficiency enhancement, Jpn. J. Appl. Phys., part 1, 43, pp. 8075–8079. [66] Ge, Z., Wu, T. X., Zhu, X. and Wu, S. T. (2005) Reflective liquid crystal displays with asymmetric incidence and exit angles, J. Opt. Soc. Am. A, 22, pp. 966–977.
6 Wide Viewing Angle and High Brightness Liquid Crystal Displays Incorporating Birefringent Compensators and Energy-Efficient Backlight
Claire Gu,1 Pochi Yeh,2 Xingpeng Yang,3 and Guofan Jin3
1 Department of Electrical Engineering, University of California, Santa Cruz, California, USA
2 Department of Electrical and Computer Engineering, University of California, Santa Barbara, California, USA
3 Department of Precision Instruments, Tsinghua University, Beijing, China
6.1 Introduction
6.1.1 Overview
As we step into the information age, displays have become an integral part of our daily life. They are everywhere – from watches, calculators, cameras, cellular telephones, televisions, and computer monitors, to panels of global positioning systems (GPS), oscilloscopes, medical equipment, pilot
displays, and beyond. In the computer world, a display is essential for us (human beings) to interact with all the digital electronics. For decades, CRT (cathode ray tube) monitors have been the preferred displays for televisions and desktop computers. For about a century now, since its invention in 1897 by the German scientist Karl Ferdinand Braun and its first application in a television by the Russian scientist Boris Rosing in 1907, CRT technology has been steadily improving and maturing. When the first commercial computer was available in 1951, CRTs became the natural choice for displays. Even after the invention of liquid crystal displays (LCDs) in 1971 by James Fergason, CRTs remained the most popular computer displays for many years, owing to their true colors, wide viewing angles, high resolutions, and low cost. In recent years, LCDs have quickly moved from watches and calculators to televisions and computer monitors. In computers, LCDs were first used in laptops. As their performance improves and their price becomes comparable with that of CRTs, LCDs are finally replacing CRTs for most computer monitors. Similar replacement is also happening in televisions. In 2003, LCDs took over half of the display market. By 2005, LCDs exceeded CRT sales by a ratio of more than two-to-one. The advantages of LCDs have given them the dominant position in the display market. Compared with CRTs, they are lighter, thinner, flicker and radiation free, energy efficient, and portable. Compared with other flat panel displays (FPDs) such as Plasma Display Panels, Electroluminescent Displays, Field Emission Displays, and LED (light emitting diode) Displays, LCDs are more mature, therefore, provide higher performance, have a longer life, and cost less. Currently, about 99% of the shipped flat panel displays are LCDs. Liquid crystal displays have become the dominating technology in flat panel displays in recent years, even though they are not perfect. In the world of mobile products, such as cell phones, PDAs, handheld games, MP3 players, digital cameras, and navigation systems, it is essential to have a display which is small, light weight and, most importantly, has high energy efficiency. New technologies such as OLED (organic light emitting diode) are emerging and becoming promising as they develop. However, they still need to compete with LCDs that are currently dominating the display market at all sizes and levels, for both technological and economical reasons.
6.1.2 LCD Performance Limitations
The main reason why LCDs are the most successful of the current flat panel displays is that their viewing angle, contrast ratio, color, and brightness performance has been improved to the extent that they are now widely acceptable [1]. Also, LCDs have the greatest installed base, which has been built up over many years. However, there is still room for improvement. Further improvements in brightness and viewing quality are necessary to make LCDs more enjoyable and more widely applicable.

The most significant drawback of conventional LCDs is their limited viewing angle. When looking at an LCD from the normal direction, its colors, gray levels, and contrast ratio are 'picture perfect'. However, when viewed from an angle, colors change, pictures become less clear, and dark areas appear grayish. As we will discuss in the following sections, this is caused by the leakage of light through a pair of crossed polarizers in the dark state and by the birefringence of the liquid crystal cell.

Another disadvantage of LCDs is their low optical efficiency and limited brightness. As LCDs work with polarized light, a polarizer is usually used to generate a linearly polarized beam. The absorption or reflection loss at the polarizer is 50% for a completely unpolarized light source. The total loss, including color filter loss and miscellaneous losses, can be as high as 95% (a rough loss budget is sketched at the end of this subsection). This significantly limits the brightness and energy efficiency of LCDs. Although viewing angle and brightness have improved over the years, these limitations still prevent LCDs from being adopted in certain applications. To maintain their leadership position in the display market, the performance of LCDs needs to be further improved.
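To see how these losses compound, a rough multiplicative budget helps; the individual numbers below are illustrative assumptions rather than values quoted in this chapter:

\[
\eta \;\approx\; \underbrace{0.5}_{\text{polarizers}} \times \underbrace{0.3}_{\text{color filters}} \times \underbrace{0.5}_{\text{aperture ratio}} \times \underbrace{0.7}_{\text{other optics}} \;\approx\; 5\%,
\]

which is consistent with the statement that the total loss can be as high as 95% of the backlight output.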
6.1.3 Solutions
To improve LCDs’ viewing angle characteristics, optical compensators and various liquid crystal (LC) modes have been used. An optical compensator is a birefringent optical film placed at proper locations in an LCD. The optical compensator, in conjunction with the LC cell, ensures the same polarization change for light coming from all directions of viewing. In addition, the compensators must also eliminate the leakage of light at large viewing angles due to the polarizers. Various alignment arrangements inside LC cells (known as LC modes) can also be used to improve viewing angle characteristics. The conventional LC mode in LCDs is the twisted-nematic (TN) mode. To improve the viewing angle characteristics, other LC modes, such as vertical alignment (VA) mode (including Multidomain, MVA) and in-plane-switching (IPS) mode, have been used for large LCD screens and are becoming increasingly important in the LCD market.

To improve LCD’s brightness and energy efficiency, efforts have been focused on the backlights. The idea is to generate a polarized beam without losing much light. Since only one polarization state is used in an LC cell, the other polarization state needs to be reflected, converted into the useful polarization, and directed to illuminate the LC cell. 3M’s dual brightness enhancement film (DBEF) has been widely used in LCDs in this process. Alternative approaches have also been explored to achieve polarized backlight, as will be shown in section 6.3 of this chapter.

In what follows, we discuss the principles and applications of various birefringent compensators for wide-viewing-angle (WVA) LCDs in section 6.2. In section 6.3, we introduce various approaches to accomplish energy-efficient generation of polarized beams in high-brightness LCD backlights.
6.2 WVA (Wide-Viewing-Angle) LCDs with Birefringent Compensators
6.2.1 Overview
It is generally recognized that conventional liquid crystal displays (LCDs) have been and will continue to be very important display devices in conjunction with active matrix (AM) addressing technology such as thin-film transistors (TFTs). High quality (contrast, gray scale stability) information display can be obtained within a limited range of viewing angles centered on the normal incidence by using conventional LCDs. The degradation of the display quality at large viewing angles is a fundamental problem of virtually all modes and configurations of LCDs. Generally speaking, the angular dependence of the viewing characteristics is due to the fact that both the phase retardations and the optical path in most LC cells are functions of the viewing angles. Furthermore, the leakage of light due to polarizers at large viewing angles leads to a poor contrast at these angles of viewing. The narrow angular viewing range has been a significant problem in advanced applications requiring high quality displays, such as avionics displays and wide-screen displays. In the case of avionics displays, the LCDs must provide the same (or nearly the same) contrast and gray scale for viewing angles from both the pilot and the copilot. Such high information content and high quality displays require LCDs whose contrast and gray levels are as invariant as possible with respect to viewing angles.

Various methods and modes of operation have been proposed so far to improve the viewing angle characteristics of LCDs. In the method of optical birefringence compensation, a thin film of birefringent material is inserted into the LCD at a proper location to eliminate or minimize the angular dependence and to eliminate the leakage of light at the dark state. It was first suggested that a film of negative birefringence can be employed to improve the viewing characteristics of LCDs based on vertically aligned nematic cells (VA-LCDs) [2,3]. The same negative birefringence film can also be employed in normally white twisted-nematic (TN)-LCDs to improve the viewing angle
characteristics [4,5]. There are several methods of producing the films of negative birefringence. These include the use of negative form birefringence in multilayers of alternating thin films [5–7]; negative form birefringence of co-planar alignment of polymers (e.g. spin-coated polyimide) [8]; and negative birefringent films of discotic compound with inclined optical axes [9]. Using multi-domain (e.g. 2 or 4) with different liquid crystal orientations in each pixel of TFT-LCDs, the angular dependence of the optical transmission characteristics can also be significantly reduced [10,11]. The viewing characteristics can also be improved by using various LCD modes of operation. These include optically compensated birefringence (OCB) mode LCDs [12,13]; in-plane switching (IPS) mode LCDs [14,15]; and the half-tone gray scale (HTGS) method[16]. The angular dependence of the optical transmission can be reduced or eliminated by using a beam of collimated light as the backlight. A high quality diffuser is needed in this case for wide angle viewing [17].
6.2.2 Extended Jones Matrix Method for Analyzing Large Viewing Angle Characteristics
The Jones calculus, invented in 1941 by R. C. Jones [18] for studying the transmission characteristics of birefringent networks, is a powerful technique in which the state of polarization is represented by a two-component column vector and each optical element (e.g. wave plate, polarizer, liquid crystal cell) is represented by a 2×2 matrix. This matrix method, however, is limited to normally incident and paraxial rays only. To illustrate this situation, we point out, as an example, that the two polarization states of an incident beam that excite only the ordinary mode or the extraordinary mode of a birefringent crystal plate, respectively, are, in general, not mutually orthogonal for off-axis light. This results from the Fresnel refraction and reflection of light at the plate surfaces, which are neglected in the Jones calculus. In addition, the conventional Jones matrix method does not offer an explanation for the leakage of off-axis light through a pair of crossed ideal polarizers. Many modern optical systems (e.g. liquid crystal displays) call for a birefringent network with a wide field of view. To accurately calculate the transmission characteristics of these systems for off-axis light, the effects of refraction and reflection at the plate interfaces cannot be ignored.

The extended Jones matrix method, first introduced in 1982 by Yeh [19], is a powerful technique for treating the transmission of off-axis light in a general birefringent network. In this section, we describe the extended Jones matrix method and demonstrate its applications in the analysis of compensators for liquid crystal displays (LCDs). The transmission of light through birefringent networks has been treated using various methods. Exact solutions can be obtained by using the 4×4 matrix method [20,21]. The 4×4 matrix method takes into account the effects of refraction and multiple reflections between plate interfaces. If the effect of multiple reflections is neglected, a simple 2×2 matrix (known as the extended Jones matrix) method [19, 22–27] is adequate for all practical purposes. The extended Jones matrix method is much easier to manipulate algebraically and yet accounts for the effects of the Fresnel refraction and the single reflection at the interfaces. The assumption of no multiple reflections is legitimate for most practical LCD networks. When a spectral averaging is employed in the 4×4 matrix method, the results obtained using the two different methods are almost identical. The Fabry–Perot effect due to multiple reflections at the interfaces, which is strongly frequency dependent, is often unobservable because of the spectral averaging in the detection process. For analyzing the transmission properties of LCDs, the 2×2 extended Jones matrix method is adequate and easier to use.

In this section, we provide a mathematical formulation of the extended Jones matrix (2×2 matrix) method. We then compare it with the conventional Jones matrix method, and discuss its applications. We first discuss the reflection and refraction at the interface between an isotropic medium and a uniaxial medium. The results are then employed to derive the extended Jones matrix method. For the purpose of illustration, we first examine the situation when the c-axis is parallel to the plate interfaces. Then we discuss the general case of an arbitrary c-axis orientation.
As an example, the extended Jones matrix method is used to treat twisted nematic liquid crystal (TNLC) cells.
6.2.2.1 Reflection and Refraction at the Interface
To understand the transmission of light through a birefringent network, we begin by studying the reflection and refraction of electromagnetic radiation at an interface between an isotropic medium (e.g. air) and a uniaxial medium (e.g. a polarizer or an LC layer). Unlike the Fresnel reflection and refraction at a dielectric interface between two isotropic media, the s and p waves are no longer independent. They are coupled because of the optical anisotropy of the uniaxial medium. In other words, an incident wave polarized along the s direction will generate a reflected wave that is a mixture of s and p waves. In addition, an o wave incident from the inside of the uniaxial medium will generate a reflected wave that is a mixture of o and e waves.
c φ
z
θ k
(a) z
ke θe θo
β
ko β y
θ
θ
k
k'
(b) Figure 6.1 (a) xyz coordinate systems and the c-axis. The incident wave vector k lies in yz-plane and the c-axis of the uniaxial medium lies in xy plane; (b) reflection and refraction at the interface between an isotropic medium and a uniaxial medium. b ¼ k sin y.
Referring to Figure 6.1, we consider the incidence of a plane wave from the lower half-space ðz < 0Þ. Here we define c-axis as the optical axis of the uniaxial medium. This definition will be used through out this chapter. The coordinates are chosen such that the (x,y)-plane contains the interface and the z-direction is perpendicular to the interface. Let k, k0 be the wave vectors of the incident and reflected waves, respectively, and ko, ke be the wave vectors of the ordinary and extraordinary refracted waves, respectively. The electric fields of incident, reflected, and refracted waves are Incident: E ¼ ðAs s þ Ap pÞ exp½iðot k rÞ;
ð1Þ
138
MOBILE DISPLAYS: TECHNOLOGY AND APPLICATIONS Reflected: E ¼ ðBs s þ Bp p0 Þ exp½iðot k0 rÞ;
ð2Þ
Refracted: E ¼ ðCo oeiko r þ Ce eeike r Þ expðiotÞ;
ð3Þ
where s is a unit vector perpendicular to the plane of incidence (yz-plane) and is given by s ¼ x in the chosen coordinate (see Figure 6.1) and p and p0 are unit vectors parallel to the plane of incidence and are given by p¼
ks ; jkj
ð4Þ
p0 ¼
k0 s : jk0 j
ð5Þ
The terms o and e represent unit vectors parallel to the electric field vector of the ordinary mode and extraordinary mode, respectively, in the uniaxially anisotropic medium. The wave vectors can be written as k ¼ by þ kz z; 0
k ¼ by kz z; ko ¼ by þ koz z; ke ¼ by þ kez z;
ð6Þ ð7Þ ð8Þ ð9Þ
where b, which remains constant throughout the media, is the tangential component of all the wave vectors. We note that all the wave vectors are in the plane of incidence. In Equations (1)–(3), As ; Ap ; Bs ; Bp ; Co ; and Ce are constants, where As and Ap are amplitudes of the incident s and p waves, respectively; Bs and Bp are those of the reflected waves; and Co , and Ce are the amplitudes of the transmitted o and e waves, respectively. The magnetic fields of the incident, reflected, and refracted waves can be derived from Equations (1–3) and Maxwell’s equation H¼
i r E; om
ð10Þ
and are given, respectively, by 1 k ðAs s þ Ap pÞ exp½iðot k rÞ; om 1 0 k ðBs s þ Bp p0 Þ exp½iðot k0 rÞ; Reflected: H ¼ om 1 ðCo ko oeiko r þ Ce ke eeike r Þ expðiotÞ: Refracted: H ¼ om Incident: H ¼
ð11Þ ð12Þ ð13Þ
The tangential component of E and H must be continuous at the boundary z ¼ 0. In terms of the fields (1–3) and (11–13), these boundary conditions at z ¼ 0 can be written As þ Bs y pAp þ y p0 Bp x ðk pÞAp þ x ðk0 p0 ÞBp y ðk sÞAs þ y ðk0 sÞBs
¼ x oCo þ x eCe ; ¼ y oCo þ y eCe ; ¼ x ðko oÞCo þ x ðke eÞCe ; ¼ y ðko oÞCo þ y ðke eÞCe :
ð14Þ ð15Þ ð16Þ ð17Þ
WIDE VIEWING ANGLE AND HIGH BRIGHTNESS LIQUID CRYSTAL DISPLAYS
139
These four equations can be used to solve for the four unknowns Bs ; Bp ; Co ; and Ce in terms of the amplitudes As and Ap of the incident wave. Using the expressions for k, k0 , p and p0 from Equations (4–7), these equations can be written as kðAs þ Bs Þ ¼ x okCo þ x ekCe ;
ð18Þ
kz ðAp þ Bp Þ ¼ y okCo þ y ekCe ;
ð19Þ
kðAp Bp Þ ¼ x ðko oÞCo þ x ðke eÞCe ;
ð20Þ
kz ðAs Bs Þ ¼ y ðko oÞCo þ y ðke eÞCe ;
ð21Þ
where k ¼ jkj, kz ¼ k cos y. We now eliminate Bs and Bp and obtain ACo þ BCe ¼ 2kz AS ;
ð22Þ
CCo þ DCe ¼ 2kz AP ;
ð23Þ
where A, B, C and D are constants given by A ¼ o ðy kÞ þ o ðy ko Þ; B ¼ e ðy kÞ þ e ðy ke Þ; ðy kÞ ðko oÞ ; C ¼ o yk k ðy kÞ ðke eÞ D ¼ e yk : k
ð24Þ ð25Þ ð26Þ ð27Þ
In arriving at these expressions, we have used xkz ¼ ðy kÞ. Equations (22) and (23) can now be solved for Co and Ce in terms of As and Ap . This leads to the following linear relations: Co ¼ As tso þ Ap tpo ; Ce ¼ As tse þ Ap tpe ;
ð28Þ ð29Þ
where tso ; tpo ; tse , and tpe are the Fresnel transmission coefficients given by 2kz D ; AD BC 2kz B ; tpo ¼ AD BC 2kz C ; tse ¼ AD BC 2kz A ; tpe ¼ AD BC tso ¼
ð30Þ ð31Þ ð32Þ ð33Þ
with A, B, C, and D given by Equations (24–27). Notice that we now have four transmission coefficients. Here tso is the transmission coefficient for the case of an s-polarized incident wave and a transmitted o wave. The other coefficients have their similar physical meaning according to their subscripts.
140
MOBILE DISPLAYS: TECHNOLOGY AND APPLICATIONS
The other two unknowns, Bs and Bp , which are the amplitudes of the reflected waves, can now be obtained by substituting Equations (28) and (29) into Equations (18–21). Similar linear relations are obtained: Bs ¼ As rss þ Ap rps ; Bp ¼ As rsp þ Ap rpp ;
ð34Þ ð35Þ
where rss ; rsp ; rps ; and rpp are the reflection coefficients given by A0 D BC ; AD BC AB A0 B rps ¼ ; AD BC 0 0 CD C D ; rsp ¼ AD BC 0 0 AD BC rpp ¼ ; AD BC rss ¼
ð36Þ ð37Þ ð38Þ ð39Þ
with A0 , B0 , C0 , and D0 given by A0 ¼ o ðy kÞ o ðy ko Þ ¼ 2x okz A; 0
B ¼ e ðy kÞ e ðy ke Þ ¼ 2x ekz B; ðy kÞ ðko oÞ ¼ 2o yk C; C 0 ¼ o yk þ k ðy kÞ ðke eÞ D0 ¼ e yk þ ¼ 2e yk D: k
ð40Þ ð41Þ ð42Þ ð43Þ
Among these four reflection coefficients, rss and rpp are the direct reflection coefficients, whereas rsp and rps can be viewed as the cross-reflection coefficients. The cross-reflection coefficients rsp and rps vanish when the anisotropy disappears (i.e. when ne ¼ no ). We are now ready to introduce some important concepts. According to Equations (28) and (29), both ordinary and extraordinary waves are, in general, excited by the incidence of a beam of polarized light. It is interesting to note that there exist two input polarization states of the incident wave that will excite only normal modes (either ordinary or extraordinary). According to Equations (28) and (29), these two polarization states are given by Ap tse O-wave excitation: ¼ ; As Ce ¼0 tpe Ap tso ¼ : E-wave excitation: As Co ¼0 tpo
ð44Þ ð45Þ
The expressions for the four reflection coefficients reduce to simple forms at normal incidence. When the incidence angle y is zero, all the wave vectors are parallel to the z axis, and the unit vectors o and e can be written o ¼ x sin f þ y cos f;
ð46Þ
e ¼ x cos f þ y sin f;
ð47Þ
WIDE VIEWING ANGLE AND HIGH BRIGHTNESS LIQUID CRYSTAL DISPLAYS
141
where f is the angle between c-axis and x-axis. Referring to Figure 6.1, we note that e is parallel to c-axis, o is parallel to z c. In addition, we also note that s ¼ x, p ¼ y at normal incidence. Substituting Equations (46) and (47) into Equations (22–27) and Equations (40–43) and according to Equations (36–39), we obtain ðn2 no ne Þ nðne no Þ cos 2f ; ðn þ no Þðn þ ne Þ ðne no Þn sin 2f rps ¼ ; ðn þ no Þðn þ ne Þ ðne no Þn sin 2f rsp ¼ ; ðn þ no Þðn þ ne Þ ðn2 no ne Þ nðne no Þ cos 2f rpp ¼ ; ðn þ no Þðn þ ne Þ rss ¼
ð48Þ ð49Þ ð50Þ ð51Þ
where n is the index of refraction of the incident medium. Notice that the cross-reflection coefficients vanish when the c axis is either perpendicular ðf ¼ 0Þ or parallel ðf ¼ p=2Þ to the plane of incidence. At these special angles, the s and p waves are not coupled. The transmission coefficients for normal incidence are obtained in a similar fashion and are given by 2n sin f; n þ no 2n cos f; tpo ¼ n þ no 2n tse ¼ cos f; n þ ne 2n tpe ¼ sin f: n þ ne tso ¼
ð52Þ ð53Þ ð54Þ ð55Þ
According to Equations (46), (47), and (4), these expressions of transmission coefficients at normal incidence can also be written as 2n s o; n þ no 2n tpo ¼ p o; n þ no 2n s e; tse ¼ n þ ne 2n p e: tpe ¼ n þ ne tso ¼
ð56Þ ð57Þ ð58Þ ð59Þ
We notice that the transmission coefficients are mostly dependent on the scalar product between the electric field of the normal modes in the incident medium and that of the uniaxial medium (e.g. polarizer or liquid crystal).
6.2.2.2 Matrix Formulation Referring to Figure 6.2, we now consider a plate of uniaxial medium of finite thickness d. When the ordinary and extraordinary waves arrive at the output face of the plate, both the reflected and
142
MOBILE DISPLAYS: TECHNOLOGY AND APPLICATIONS (A's s + A'p p) (Bs s + Bp p) y
Coo Ce e
z (As s + Ap p) d Figure 6.2 Transmission of light through a uniaxial plate.
transmitted waves are generated. If we ignore the multiple reflections that are due to the two parallel faces of the plate, the amplitudes of the transmitted waves can be written as A0s ¼ Co tos eikoz d þ Ce tes eikez d ;
ð60Þ
A0p
ð61Þ
¼ Co top e
ikoz d
þ Ce tep e
ikez d
;
where tos ; tes ; top , and tep are another set of transmission coefficients that can also be derived from the continuity conditions. The transmission of light through such a plate can be managed by using Equations (28), (29), (60), and (61), which are now rewritten in the 2 2 matrix form
A0s A0p
¼
tos top
tes tep
eikez d o
0
eikez d
tse tso
tpe tpo
As ; Ap
ð62Þ
This matrix equation relates the transmitted wave amplitudes A0s and A0p in terms of the amplitudes of the incident waves As and Ap . Note that this equation is valid provided that the multiple reflections in the plate can be neglected. Equation (62) can be conveniently written as
A0s A0p
¼ Do PDi
As ; Ap
ð63Þ
where P is the propagation matrix and Di and Do are the input and output dynamical matrices, respectively. P, Di, and Do are given, respectively, by P¼
eikez d
Di ¼
o tse tso
0
eik0 tpe ; tpo
; d Do ¼
ð64Þ
tes tep
tos : top
ð65Þ
We note that both Equations (62) and (63) resemble the Jones matrix formulation. In fact, these equations can be viewed as the generalization of the Jones matrix method. Therefore, this method is
WIDE VIEWING ANGLE AND HIGH BRIGHTNESS LIQUID CRYSTAL DISPLAYS
143
called the extended Jones matrix method. We must remember that Di and Do are no longer rotation matrices. In the special case of normal incidence with n no ne , (s, p) and (e, o) become two sets of coplanar coordinate axes. These two matrices Di and Do reduce to rotation matrices, according to Equations (56–59). The matrix formalism developed above can be used for calculating the transmission characteristics of a series of birefringent elements (i.e. liquid crystal cells, wave plates and polarizers) for off-axis light. To do this, we write down the Jones vector of the incident beam and then write down the 2 2 matrices of the various elements. The Jones vector of the emerging beam is obtained by carrying out the following matrix multiplication
A0s A0p
¼
M11 M21
M12 M22
As ; Ap
ð66Þ
where Mij ’s are the matrix elements of the overall transfer matrix obtained by the multiplication of all the 2 2 matrices in sequence. The fraction of energy transmitted is
T¼
jA0s j2 þ jA0p j2 jAs j2 þ jAp j2
;
ð67Þ
which depends on the matrix elements Mij as well as on the polarization state of the incident beam (i.e., As, Ap). In practice, one often deals with an incident beam of unpolarized light. In this case, the Jones vector of the incident beam can be written
As Ap
E0 ¼ pffiffiffi 2
eia1 eia2
;
ð68Þ
where a1 and a2 are random variables in a sense that the time-averaged quantities ½cosða1 a2 Þ and ½sinða1 a2 Þ vanish simultaneously. The fraction of energy transmitted in this case is thus, according to Equations (66–68), given by T ¼ 12 ðjM11 j2 þ jM12 j2 þ jM21 j2 þ jM22 j2 Þ:
ð69Þ
6.2.2.3 Small Birefringence Approximation As we have seen, the general expression for the reflection and transmission coefficients are complicated, especially for off-axis light with the c axis of the crystal oriented at an arbitrary angle f. It is desirable to have approximate expressions for these eight coefficients. When the birefringence is small (i.e. jne no j no ; ne ), the eight coefficients can be greatly simplified by an approximation. In this approximation, we derive the expressions for the reflection and transmission coefficients, disregarding the anisotropy of the elements. This is legitimate because the wave vectors ko and ke are almost equal (i.e. ko ke ), provided that jne no j no ; ne . Also, the refraction angles yo and ye are almost equal (i.e. ye yo ), and the polarization vectors o and e can be given approximately by o¼
ko c jko cj
ð70Þ
144
MOBILE DISPLAYS: TECHNOLOGY AND APPLICATIONS
and e¼
o ko ; jo ko j
ð71Þ
respectively [note that Equation (70) is exact]. Here we select a proper sign for these two unit vectors so that (e, o, ko) form a right-handed set of orthogonal vectors. As a result of the small birefringence, an s wave retains its s polarization upon refraction from the interfaces. The same thing happens to the p wave. Consequently, these eight coefficients can be given approximately by rss ¼ rs ; rsp ¼ rps ¼ 0; rpp ¼ rp ;
ð72Þ ð73Þ ð74Þ
¼ s ots ; ¼ s ets ; ¼ p otp ; ¼ p etp ;
ð75Þ ð76Þ ð77Þ ð78Þ
and tso tse tpo tpe
where o and e are unit vectors of the normal modes in the uniaxial medium and are given by Equations (70) and (71), and rs , rp , ts , and tp are the Fresnel reflection and transmission coefficients given by n cos y no cos yo ; n cos y þ no cos yo n cos yo no cos y ; rp ¼ no cos y þ n cos yo rs ¼
ð79Þ ð80Þ
and 2n cos y ; n cos y þ no cos yo 2n cos y ; tp ¼ n cos yo þ no cos y ts ¼
ð81Þ ð82Þ
where we recall that n is the index of refraction of the incident medium (usually glass), no is the index of refraction of the birefringent element (since ne no ), y is the incident angle, and yo is the refraction angle ðne no Þ. Here ts and tp are the transmission coefficients for the s and p waves, respectively, on entering the birefringent element disregarding the anisotropy. In a similar approach, the transmission coefficients of the exit interface upon leaving the uniaxial plate can be written tos ¼ o sts0 ; tes ¼ e sts0 ;
ð83Þ ð84Þ
top ¼ o ptp0 ;
ð85Þ
tep ¼ e ptp0 ;
ð86Þ
WIDE VIEWING ANGLE AND HIGH BRIGHTNESS LIQUID CRYSTAL DISPLAYS
145
where ts0 , and tp0 are the corresponding Fresnel transmission coefficients given by 2no cos yo no cos yo þ n cos y 2no cos yo tp0 ¼ no cos y þ n cos yo ts0 ¼
ð87Þ ð88Þ
These approximate expressions may be used to study the wide-field property of birefringent optical systems. In most liquid crystals, jne no j 0:1. Small birefringence approximation leads to satisfactory results in practical calculations. By using Equations (75–78) and (83–86), the input and output dynamical matrices Di and Do [Equation (65)] can be written as s ets p etp ð89Þ Di ¼ s ots p otp and Do ¼
e sts0 e ptp0
o sts0 ; o ptp0
ð90Þ
respectively.
p o
e
ψ
s
Figure 6.3 Rotation of coordinates.
In the case when (s, p) and (e, o) are two sets of coplanar orthogonal unit vectors, we may define an angle c such that (see Figure 6.3) e ¼ s cos c þ p sin c; o ¼ s sin c þ p cos c:
ð91Þ ð92Þ
This is the case when the refractive indices are similar (i.e. n no ). In fact,the (e, o) axes may be obtained by a right-handed rotation of the (s, p) axes by an angle c about the wave vector ko (see Figure 6.3). If we further define two diagonal transmission matrices as Ti ¼ To ¼
0 ; tp
ts 0 ts0 0
0 tp0
ð93Þ ! ;
ð94Þ
146
MOBILE DISPLAYS: TECHNOLOGY AND APPLICATIONS
then the dynamical matrices Di and Do can be written, according to Equations (89–94), as Di ¼
cos c sin c sin c cos c
ts 0
0 tp
RðcÞTi
ð95aÞ
To RðcÞ;
ð95bÞ
and Do ¼
ts0 0
0 tp0
cos c sin c sin c cos c
respectively. Here, RðcÞ is simply the transformation matrix for the rotation of the coordinate by an angle c. Let f be the angle between the c axis and the x axis (see Figure 6.1) so that the unit vector c representing the c axis can be written as c ¼ x cos f þ y sin f:
ð96Þ
Then the angle c can be expressed in terms of f and yo as sin c ¼ cos c ¼
cos yo sin f ð1 sin2 yo sin2 fÞ1=2 cos f ð1 sin2 yo sin2 fÞ1=2
;
ð97Þ
;
ð98Þ
where we recall that yo is the refraction angle in the uniaxial medium ðno sin yo ¼ n sin yÞ. For normal incidence ðy ¼ yo ¼ 0Þ, this angle is simply c ¼ f. By using these new definitions, Equation (63) can now be written as
A0s A0p
¼ To RðcÞPRðcÞTi
As ; Ap
ð99Þ
where each of the five matrices has its own physical meaning. Counting from the right-hand side, the first matrix Ti accounts for the Fresnel refraction (or transmission) of the incident beam at the input surface; the second matrix RðcÞ accounts for the transformation that decomposes the light beam into a linear combination of the normal modes of propagation of the birefringent element (i.e., o wave and e wave). The third matrix P accounts for the propagation of these normal modes through the bulk of the plate; the fourth matrix RðcÞ transforms the light beam back into a linear combination of the s and p waves at the exit end of the plate; and the fifth matrix To accounts for the Fresnel refraction (or transmission) of the light beam at the output surface of the plate. In the conventional Jones calculus, the input and output vectors are related by
A0s A0p
¼ Rðco ÞPRðco Þ
As ; Ap
ð100Þ
where R is the rotation matrix, co is the angle between the c axis and the x axis ðco ¼ fÞ and P is the propagation matrix. In comparison with Equation (99), we note that the newly developed 2 2 matrix method includes the effect of unequal transmittances for the s and p waves as well as the angular
WIDE VIEWING ANGLE AND HIGH BRIGHTNESS LIQUID CRYSTAL DISPLAYS
147
dependence of the azimuth angle c, which is different from co for off-axis light. Strictly speaking, the conventional Jones calculus is valid only at normal incidence ðy ¼ yo ¼ 0Þ when the azimuth angle c becomes co ðco ¼ fÞ and the transmission coefficients become independent of the polarization states (i.e. ts ¼ tp ; ts0 ¼ tp0 ) provided that jne no j ne ; no .
6.2.2.4 Arbitrary c-axis Orientation In the case of transmission through a uniaxial plate with an arbitrary c-axis orientation, the extended Jones matrix method [Equations (63–65)], as well as small birefringence approximation, are still applicable. The z-components of the wave vectors of the ordinary and the extraordinary waves koz and kez can be derived from the expression of the normal surface. Here we recall that the z-axis is perpendicular to the plate and that the x- and y-components of the wave vectors are the same as those of the incident wave vector. Referring to Figure 6.4, we define yc as the angle between the c-axis and the z-direction and fc as the angle between the projection of the c-axis on the (x,y)-plane and the xdirection. To calculate the z-component of the extraordinary wave kez , we use the principal coordinate system of the uniaxial medium. The unit vector c can be written as c ¼ x sin yc cos fc þ y sin yc sin fc þ z cos yc :
ð101Þ
where x, y, and z are unit vectors of the coordinate axes. The wave vector components of the extraordinary wave, in the principal coordinate system (see Figure 6.4), can be written as kea ¼ ða cos fc þ b sin fc Þ cos yc kez sin yc ; keb ¼ a sin fc þ b cos fc ; kec ¼ ða cos fc þ b sin fc Þ sin yc þ kez cos yc :
ð102Þ
z c k
θc b y φc a x
Figure 6.4 Orientation of the c-axis. yc is the angle between the c-axis and the z-direction and fc is the angle between the projection of the c-axis on the (x,y)-plane and the x-direction respectively.
148
MOBILE DISPLAYS: TECHNOLOGY AND APPLICATIONS
where a and b are the x- and y-components of the incident wave vector, respectively, a and b directions are chosen such that b is perpendicular to z while both a and b are perpendicular to c. The normal surface for the extraordinary waves is given by o 2 2 2 kea þ keb k2 þ ec2 ¼ ; 2 c ne no
ð103Þ
where no and ne are the ordinary and the extraordinary indices of refraction. Substituting Equations (102) into Equation (103), we obtain 2 vkez þ w ¼ 0; ukez
ð104Þ
where sin2 yc cos2 yc þ ; n2e n2 o 1 1 v ¼ kd sin 2yc 2 2 ; ne no
u¼
w¼
ð105Þ
2 kd2 cos2 yc þ keb kd2 sin2 yc o2 þ ; c n2e n2o
with kd ¼ a cos fc þ b sin fc :
ð106Þ
Solving Equation (104), we obtain the z-component of the extraordinary wave kez ¼
vþ
pffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffi v2 4uw ; 2u
ð107Þ
where we have taken the positive sign for the square root, since the light is transmitting in the þz-direction. The z-component of the ordinary wave koz is independent of the orientation of the c-axis and is given by koz ¼
qffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffi ðno o=cÞ2 a2 b2 ;
ð108Þ
where a and b are the x- and y-components of the incident wave vector, respectively.
6.2.2.5 Application to Liquid Crystal Displays Most liquid crystal display (LCD) devices are made of a cell of liquid crystals sandwiched between transparent electrodes and polarizers. As a result of the applied electric field and the boundary conditions, a continuous distribution of the orientation of the liquid crystal director (c-axis) exists in the cell. Although the liquid crystal medium is not homogeneous, it can still be analyzed as a birefringent
WIDE VIEWING ANGLE AND HIGH BRIGHTNESS LIQUID CRYSTAL DISPLAYS
149
network. This is done by dividing a liquid crystal cell into N layers (see Figure 6.5). Each layer can be considered as a homogeneous and uniaxially birefringent medium. The orientation of the director (c-axis) may change from layer to layer. The ordinary and extraordinary indices of refraction, no and ne , are constants for all layers. To derive the extended Jones matrix for such a LCD medium, we first examine the 2 2 matrix for a plane wave entering the ðn þ 1Þ-th layer from the n-th layer. The 2 2 matrix can be derived by matching the boundary conditions directly. However, since we have already obtained the 2 2 dynamical matrix for a boundary between an isotropic medium and a uniaxial medium, we can employ the previous results to derive the dynamical matrix between the n-th and the ðn þ 1Þ-th layers.
n=N+1 (polarizer) n=N n=N-1 n=N-2 Liquid Crystal
n=3 n=2 n=1 n=0 (polarizer)
c
(a)
n-th layer
A'e A'o
imaginary layer
(n+1)-th layer
Ae Ao
n
n+1
zero thickness (b) Figure 6.5 (a) A liquid crystal display medium divided into N layers. Each layer can be considered as a uniaxial medium. The orientation of the c-axis may change from layer to layer (e.g. a twisted nematic LCD). The ordinary and extraordinary indices of refraction, no and ne , are constants for all layers. (b) Schematic drawing showing the imaginary layer used in deriving the interlayer dynamical matrix.
A simple approach to derive the dynamical matrix between the n-th and the ðn þ 1Þ-th layers is to introduce an imaginary isotropic layer which is sandwiched between the n-th and the ðn þ 1Þ-th layers (see Figure 6.5b). The imaginary isotropic layer has an index of refraction of no , with an infinitesimal thickness (zero thickness) so that it does not introduce additional refraction or reflection. Denote the
150
MOBILE DISPLAYS: TECHNOLOGY AND APPLICATIONS
0 Ae field amplitudes of the light before leaving the n-th layer as and those after entering the A0o n Ae . Following the recipe described in Section 6.2.2 [e.g. Equation (62)], they ðn þ 1Þ-th layer as Ao nþ1 are related by
Ae Ao
¼
nþ1
tpe tpo
tse tso
nþ1
tes tep
tos top
n
A0e A0o
;
ð109Þ
n
where the transmission coefficients are given by Equations (75–78) and Equations (81–88). Notice that since the index of refraction of the imaginary isotropic layer is n ¼ no , the Fresnel transmission coefficients under the small birefringence approximation are 1. Equation (109) can thus be written as Ae Ao
! ¼ nþ1
¼ ¼
se
pe
so po en enþ1
!
es
os
ep op ! ! A0e on enþ1 nþ1
en onþ1 on onþ1 ! ! A0e tee toe ; A0o n teo too
A0o
!
A0e n
A0o
! n
ð110Þ
n
where en and on represent the unit polarization vectors of the extraordinary and the ordinary waves in the n-th layer, and tij (i, j ¼ e, o) is the transmission coefficient between the i-component in the n-th layer and the j-component in the (nþ1)-th layer. Therefore, the dynamical matrix between the n-th and the ðn þ 1Þ-th layers is given by Dn;nþ1 ¼
en enþ1 en onþ1
on enþ1 : on onþ1
ð111Þ
It is important to note that the transmission coefficients of the eigenmodes through the interface between the two layers are determined by the projections (inner products) of the corresponding polarization state vectors. The overall Jones matrix for the LCD medium can thus be written as
A0s A0p
¼M
As ; Ap
ð112Þ
with M ¼ Do PNþ1 DN; Nþ1 PN DN1; N D1;2 P1 D0;1 P0 Di ;
ð113Þ
where the output and input dynamical matrices Do and Di , the propagation matrix Pn, and the interlayer dynamical matrix Dn;nþ1 ðn ¼ 1; 2; 3; . . . NÞ are given by Equations (65), (64) and (111), respectively; and D0;1 and DN;Nþ1 are the dynamical matrices for the interfaces between the corresponding polarizer and its adjacent LC layer [i.e. Equation (111), assuming the index of refraction of the polarizer is nearly the same as that of the liquid crystal]; the propagation matrices Po and PNþ1 are those for the entrance
WIDE VIEWING ANGLE AND HIGH BRIGHTNESS LIQUID CRYSTAL DISPLAYS
151
and exit polarizers, respectively. We assume that the sheet polarizers are characterized by their complex refractive indices with ne ¼ no , o ¼ 0, 0 < e 1, and thickness d such that 1 2pe d=l. In the case of ideal polarizers, the propagation matrix is given by P¼
0 0
0 : 1
For each layer, the o and e vectors can be calculated by using Equations (70) and (71), and the zcomponent of the wave vector koz and kez can be obtained from Equations (108) and (107) respectively.
6.2.2.6 Generalized Jones Matrix Method The results obtained above can be further generalized to cover any media including biaxial crystals and gyrotropic materials which exhibit optical rotatory power and Faraday rotation. In the case of small anisotropy, the 2 2 dynamical matrix at the boundary can be derived as follows. Consider anisotropic materials with their principal axes oriented at arbitrary directions. The laboratory coordinate system is again chosen such that the z axis is normal to the interfaces. We will first derive the expressions of the eigen polarization states. Then we give the generalized Jones matrix formulation. Since the medium is not isotropic, the propagation characteristics depend on the direction of propagation. The orientations of the crystal axes are described by the Euler angles yc , fc , and cc with respect to a fixed xyz coordinate systems. The dielectric tensor in the xyz coordinate system is given by 0
e1 e ¼ R@ 0 0
0 e2 0
1 0 0 AR1 ; e3
ð114Þ
where e1 , e2 , and e3 are the principal dielectric constants and R is the coordinate rotation matrix given by 0
cos cc cos fc cos yc sin fc sin cc
B R ¼ @ cos cc sin fc þ cos yc cos fc sin cc sin yc sin cc
sin cc cos fc cos yc sin fc cos cc sin cc sin fc þ cos yc cos fc cos cc sin yc cos cc
sin yc sin fc
1
C sin yc cos fc A: cos yc ð115Þ
Since R is orthogonal, the dielectric tensor e in the xyz coordinate must be symmetric, that is, eij ¼ eji . The electric field can be assumed to have exp½iðot ax by gzÞ dependence in each crystal layer, which is assumed to be homogeneous. Since the whole birefringent layered medium is homogeneous in the xy plane, a and b remain the same throughout the layered medium. Therefore, the two components ða; bÞ of the propagation vector are chosen as the dynamical variables characterizing the electromagnetic waves propagating in the layered media. Given a and b, the z component g is determined directly from the wave equation in momentum space:
k ðk EÞ þ o2 meE ¼ 0;
ð116Þ
152
MOBILE DISPLAYS: TECHNOLOGY AND APPLICATIONS
or equivalently, 0
o2 mexx b2 g2 @ o2 meyx þ ab o2 mezx þ ag
o2 mexy þ ab o2 meyy a2 g2 o2 mezy þ bg
10 1 Ex o2 mexz þ ag o2 meyz þ bg A@ Ey A ¼ 0: Ez o2 mezz a2 b2
ð117Þ
To have nontrivial plane-wave solutions, the determinant of the matrix in Equation (117) must vanish. This gives a quadratic equation in g that yields four roots , g , ¼ 1; 2; 3; 4. These roots may be either real or complex. Since all the coefficients of this quadratic equation are real, complex roots are always in conjugate pairs. These four roots can also be obtained graphically from Figure 6.6 if they are real. The plane of incidence is defined as the plane formed by ax þ by and z. The intersection of this plane with the normal surface yields two closed curves that are symmetric with respect to the origin of the axes. Drawing a line from the tip of the vector ax þ by parallel to the z direction yields, in general, four points of intersection. Thus, we obtain the four roots, g1 , g2 , g3 , and g4 . The four wave vectors k ¼ ax þ by þ y z all lie in the plane of incidence, which also remains the same throughout the layered medium because a and b are constants. However, the four group velocities associated with these partial waves are, in general, not lying in the plane of incidence. If all the four wave vectors k are real, two of them have group velocities with positive z components, and the other two have group velocities with negative z components. The z component of the group velocity vanishes when g becomes complex.
αx +βy k4 k3
k2 k1
z
Figure 6.6 Graphic method to determine the propagation constants from the normal surface.
The polarization of these waves can be written 0 1 ðo2 meyy a2 g2 Þðo2 mezz a2 b2 Þðo2 meyz þ bg Þ2 p ¼ N @ ðo2 meyz þ bg Þðo2 mezx þ ag Þðo2 mexy þ abÞðo2 mezz a2 b2 Þ A; ðo2 mexy þ abÞðo2 meyz þ bg Þðo2 mexz þ ag Þðo2 meyy a2 g2 Þ
ð118Þ
where ¼ 1; 2; 3; 4 and N’s are the normalization constant such that p p ¼ 1. The electric field of the plane electromagnetic waves can thus be written as 4 X E¼ A p exp½iðot ax by g zÞ; ð119Þ ¼1
where A’s are constants.
WIDE VIEWING ANGLE AND HIGH BRIGHTNESS LIQUID CRYSTAL DISPLAYS
153
In the case of propagating modes (four real g’s), two of them will be positive and the other two will be negative. It can be shown that, for applications in LCDs, all four roots are real. To discuss the transmission properties using 2 2 matrix method, we only need the two positive eigenvalues g1 and g2 (see Figure 6.6). The propagation matrix of each layer of thickness d is simply given by P¼
eig1 d 0
ig2 d : 0
e
ð120Þ
Once the eigenmodes are obtained, the dynamical matrix at the interface between two media can be derived. Suppose that the electric fields inside the two media are given by Incident: E ¼ ðA1 pi1 eiki1 r þ A2 pi2 eiki2 r Þeiot Reflected: E ¼ ðB1 pr1 eikr1 r þ B2 pr2 eikr2 r Þeiot Refracted: E ¼ ðC1 pt1 e
ikt1 r
þ C2 pt2 e
ikt2 r
Þe
ð121Þ
iot
where the subscripts i, r, and t indicate the incident, reflected, and transmitted waves, respectively; pmj (m ¼ i, r, t; j ¼ 1, 2) is the unit vector representing the polarization state of the mode of propagation; kmj (m ¼ i, r, t; j ¼ 1, 2) is the corresponding wave vector; Aj (j ¼ 1, 2) is the corresponding amplitude. The dynamical matrix, which relates the refracted and the incident waves, can be written
C1 C2
¼ D12
A1 A2
¼
t11 t12
t21 t22
A1 ; A2
ð122Þ
where t11, t12, t21, and t22 are transmission coefficients. To determine D12, we assume that the anisotropy is small for both media so that the average indices of refraction can be written as two constants n1 and n2 for medium 1 and medium 2, respectively. For the purpose of obtaining D12, we insert two imaginary isotropic layers of zero thickness between the two media (see Figure 6.7). The layer on the side of medium 1 (2) has an index of refraction n1 (n2). The Fresnel transmission coefficients between the two imaginary layers can be written 2n1 cos y1 n1 cos y1 þ n2 cos y2 2n1 cos y1 tp ¼ n1 cos y2 þ n2 cos y1 ts ¼
i
pi1
n1
n2
s, p1
s, p2
pi2
ð123Þ
t
pt1 pt2
zero thickness
zero thickness
Figure 6.7 Schematic drawing showing the two imaginary isotropic layers.
154
MOBILE DISPLAYS: TECHNOLOGY AND APPLICATIONS
where y1 and y2 are angles of incidence and refraction (angles between the propagation direction and the surface normal) respectively. From the previous discussion, the dynamical matrix between two media of the same refractive index is determined by the projection of polarization states of the eigenmodes. Therefore, we obtain, according to the method described previously,
ts 0 pi1 s pi2 s s pt1 p2 pt1 0 tp pi1 p1 pi2 p1 s pt2 p2 pt2 ð124Þ ts ðpi1 sÞðs pt1 Þ þ tp ðpi1 p1 Þðp2 pt1 Þ ts ðpi2 sÞðs pt1 Þ þ tp ðpi2 p1 Þðp2 pt1 Þ ¼ ts ðpi1 sÞðs pt2 Þ þ tp ðpi1 p1 Þðp2 pt2 Þ ts ðpi2 sÞðs pt2 Þ þ tp ðpi2 p1 Þðp2 pt2 Þ
D12 ¼
where pi1, pi2, pt1, pt2, s, p1 and p2 are unit vectors representing polarization states of the two incident eigenmodes, the two transmitted eigenmodes, s waves in the imaginary layers, and p waves in the imaginary layers 1 and 2, respectively. Notice that when n1 ¼ n2, ts ¼ tp ¼ 1, p1 ¼ p2, and the above equation reduces to D12 ¼
pi1 pt1 pi1 pt2
pi2 pt1 : pi2 pt2
ð125Þ
We note that again the dynamical matrix consists of elements which are inner products of the normalized polarization state eigenvectors. This is always valid in the small birefringence approximation. With the dynamical and propagation matrices now available, the overall Jones matrix for a birefringent system containing parallel plates of any dielectric media can be written in the form of Equations (112) and (113).
6.2.3 Viewing Symmetry in LCDs It is known that the viewing characteristics, including contrast ratio and gray scales of most liquid crystal displays depend on the viewing angles. It is also known that some modes of TN-LCDs and STN-LCDs exhibit a left-right viewing symmetry. The exact viewing symmetry of NW TN-LCDs was first discussed in 1989 by using the rotation symmetry of the display structures and the principle of reciprocity [28,29]. The viewing symmetry of LCDs may be affected by the insertion of phase retardation films. Recently, there has been a significant interest in the development of thin film phase retardation compensators to improve the viewing characteristics (contrast ratios and gray level stability) at large viewing angles. It has been proven that inserting phase retardation compensators in series with the LC cell can minimize the dependence of contrast ratio and gray scales on the viewing angles. This has led to a significant improvement of the contrast ratio and gray scale stability [29, 2–9, 30–32] over large viewing angles. In addition to high contrast ratios and gray scale stability, it is desirable to have a left-right viewing symmetry. Although the symmetry of the viewing characteristics can be investigated by using various numerical techniques involving extended Jones matrix methods and 4x4 matrix methods [19–27,33–35], the calculations are time-consuming and the results are only approximate. In this chapter, we extend the discussion of the viewing symmetry of TN-LCDs to include STN-LCDs and various modes of LCDs. In addition, we discuss the effect of the phase compensator on the viewing symmetry of the displays and describe the split-element compensator configurations, which can be employed to preserve the viewing symmetry. Referring to Figure 6.8, we investigate the transmission properties of a normally white (NW) TN-LCD. Let the wave vector of the incident beam be written k ¼ ðkx ; ky ; kz Þ
ð126Þ
WIDE VIEWING ANGLE AND HIGH BRIGHTNESS LIQUID CRYSTAL DISPLAYS
155
y
x z
Polarizer
TN-LC cell
Analyzer
Figure 6.8 A schematic drawing of a normally white TN-LCD. The double arrows indicate the transmission axes of the polarizers. The dashed arrows indicate the rubbing directions at the surfaces of the LC cell.
where the Cartesian components depend on the angle of incidence ðy; fÞ, where y is measured from the z-axis, and f is measured from the x-axis. Specifically, these components can be written kx ¼ k sin y cos f ky ¼ k sin y sin f kz ¼ k cos y For the purpose of mathematically deriving the viewing symmetry, we will write the transmission of the LCD as a function of the incident wave vector k. In other words, T ¼ Tðkx ; ky ; kz Þ
ð127Þ
Based on the principle of reciprocity, the transmission function exhibits the following symmetry Tðkx ; ky ; kz Þ ¼ Tðkx ; ky ; kz Þ
ð128Þ
We now examine the display structure shown in Figure 6.8. For the purpose of our discussion, we define a symmetry operation Cna as follows. Cna ¼ a rotation through an angle of 2p/n around an axis along the a-direction It is easily seen that the display structure shown in Figure 6.8 remains invariant under a rotation of 180 around the x-axis. In other words, the display structure possesses rotation symmetry of C2x. As a result, the transmission function exhibits the following relationship Tðkx ; ky ; kz Þ ¼ Tðkx ; ky ; kz Þ
ð129Þ
We note that the rotation C2x leads to a reversal of sign of the components ky and kz. The C2x rotation symmetry of the display structure requires that the tilt angle and twist angle distribution functions of the TN or STN cell (yLC and fLC, respectively) satisfy the following conditions, yLC ðzÞ ¼ yLC ðd-zÞ fLC ðzÞ ¼ p fLC ðd-zÞ
ð130Þ ð131Þ
156
MOBILE DISPLAYS: TECHNOLOGY AND APPLICATIONS
where d is the thickness of the LC cell and z is the distance measured from the entrance surface of the LC cell. According to Equation (130), yLC ð0Þ ¼ yLC ðdÞ, i.e. the LC cell has symmetric pretilt. According to Equation (131), the LC director at mid-layer (z ¼ d/2) lies in the yz-plane. In addition, the transmission axes of the polarizers must be oriented in a symmetric arrangement as shown in Figure 6.8. By combining the results from the reciprocity principle [Equation (128)] and the C2x rotation symmetry [Equation (129)], we obtain Tðkx ; ky ; kz Þ ¼ Tðkx ; ky ; kz Þ
ð132Þ
which is exactly the left-right viewing symmetry. It is important to note that the left-right viewing symmetry will be broken in a LC cell that has asymmetric pretilt, where Equation (130) is no longer true. It can be shown that exact left-right and up-down viewing symmetry exists in a field-off state of NW TN-LCD with a zero tilt angle. This can be seen from Figure 6.8 where the system possesses both C2x and C2y rotation symmetry when there is no pretilt, i.e. the arrows on the dashed lines can be ignored and the tilt angle is zero throughout the cell. Just as the left-right viewing symmetry is a direct result of the C2x rotation symmetry, the up-down viewing symmetry is a direct result of the C2y rotation symmetry. It is important to note that normally black (NB) TN-LCDs, however, do not exhibit an exact leftright viewing symmetry. The loss of symmetry is a result of the parallel polarizer arrangement, which is not invariant under the rotation of 180 around the x-axis. A more careful examination of the NB-TN-LCD system shows that exact C2z viewing symmetry, i.e. Tðkx ; ky ; kz Þ ¼ Tðkx ; ky ; kz Þ, exists in a fieldoff state with a zero tilt angle. This is a direct result of the C2z rotation symmetry of the system. The same viewing symmetry can be extended to STN cells provided the cells are oriented with a C2x symmetry. Figure 6.9 shows a 270 STN-LCD cell that has the C2x symmetry. We notice that in an STNLCD, the polarizers are not aligned parallel or perpendicular to the adjacent LC directors. In order for a STN-LCD to have horizontal viewing symmetry, the rubbing directions of the entrance and exit layers of the LC cell need to be symmetric under C2x rotation. In addition, the polarizer and analyzer must be aligned symmetrically with respect to the x-axis, i.e. the bisector of the rubbing directions. More detailed discussions on the transmission properties of TN- and STN-LCDs can be found in Ref. [1]. y
x z
Polarizer
TN-LC cell
Analyzer
Figure 6.9 A schematic drawing of a STN-LCD. The double arrows indicate the transmission axes of the polarizers. The dashed arrows indicate the rubbing directions at the surfaces of the LC cell. The short gray arrows indicate the twisting direction of the LC molecules.
The same technique can be applied to examine the viewing symmetry of other LCDs involving the use of parallel aligned (PA) cells, vertically aligned (VA) cells, and bend-aligned (BA) cells, etc. Figure 6.10 shows a nematic liquid crystal display (N-LCD) using a PA cell operating in the vertical switching mode and the corresponding director distributions. The structure in Figure 6.10 in the field-off state
WIDE VIEWING ANGLE AND HIGH BRIGHTNESS LIQUID CRYSTAL DISPLAYS Polarizer
N-LC Cell
157
Analyzer
y x z
(a)
Transparent Electrodes
z
E
V=0
Vth < V (b)
Figure 6.10 (a) Schematic drawing of a nematic liquid crystal display (N-LCD) using a parallel aligned cell. (b) Director distribution in the yz-plane in the field-off and field-on states. V is the applied voltage, Vth is the threshold voltage.
possesses both C2x and C2y rotation symmetry. Therefore, it has both horizontal and vertical viewing symmetry. In the field-on state, since it possesses C2x rotation symmetry but not the C2y rotation symmetry, it has only horizontal viewing symmetry. Figure 6.11 shows a PA cell operating in the in-plane switching mode. We notice that in both the field-off and the field-on states, only C2z rotation symmetry exists. The corresponding viewing symmetry is the inversion symmetry, i.e. Tðkx ; ky ; kz Þ ¼ Tðkx ; ky ; kz Þ. Figure 6.12 shows a VA cell operated in normally black mode and Figure 6.13 shows the corresponding LC director orientations. Although the LC cell has continuous rotation symmetry around the z-axis in the field-off state (assuming no pretilt), the crossed polarizers have only C2x and C2y rotation symmetry. That leads to the horizontal and vertical viewing symmetry in the field-off state. In the field-on state, the viewing symmetry is horizontal as a result of the C2x rotation symmetry of the structure. In a VA cell, usually the polarizers are oriented in the horizontal and vertical directions, respectively, and the LC molecules are tilted in the f ¼ 45 , 135 , 225 , or 315 plane. In that case, the viewing symmetry is in the plane perpendicular to the plane in which the LC molecules are tilted. Figure 6.14 shows a BA-LCD operated in the normally white mode and Figure 6.15 shows the corresponding director distributions. We notice that the structure has a C2y rotation symmetry resulting in the vertical viewing symmetry. Other LCD cells can be examined for their rotation symmetry to determine the viewing symmetry in the same fashion. In summary, we have presented an analytical proof of the exact left-right viewing symmetry of NW TN-LCDs and STN-LCDs by using the principle of reciprocity and the C2x rotation symmetry of the cell configuration. In addition, we have shown the effect of the phase retardation compensators on the
158
MOBILE DISPLAYS: TECHNOLOGY AND APPLICATIONS
Figure 6.11 Schematic drawing of the in-plane switching mode of the N-LCD with normally black operations. The shaded areas are electrodes. (a) In the field-off state, the LC director is parallel to the transmission axis of the polarizer. The transmission is zero due to the crossed polarizers. (b) In the field-on state, the director is aligned parallel to the electric field. Maximum transmission occurs with a proper choice of the cell thickness. V is the applied voltage, Vth is the threshold voltage.
viewing symmetry, and described the split-element compensator configuration which can maintain the left-right viewing symmetry. We have also pointed out that normally black TN-LCDs do not exhibit an exact left-right viewing symmetry and we have examined the rotation symmetry as well as the corresponding viewing symmetry of various modes of N-LCDs.
6.2.4 Birefringent Compensators for Liquid Crystal Displays 6.2.4.1 Leakage of Light in LCDs There are two fundamental sources of leakage of light in LCDs – leakage of light through crossed polarizers, and leakage of light through the birefringent medium inside the crossed polarizers. The leakage of light makes it impossible to achieve a completely dark state. An incomplete dark state leads to poor contrast ratios. Here in this section, we describe the physical origin of the leakage of light. This will pave the ground for the discussion on means of eliminating the leakage of light in LCDs.
WIDE VIEWING ANGLE AND HIGH BRIGHTNESS LIQUID CRYSTAL DISPLAYS
159
Figure 6.12 Schematic drawing of a vertically aligned liquid crystal display (VA-LCD) cell. (a) In the field-off state, the LC director is parallel to z-axis. (b) In the field-on state, the director is aligned perpendicular to the electric field. Maximum transmission occurs with a proper choice of the cell thickness.
Transparent Electrodes
z
E
V=0
Vth < V
Figure 6.13 Director distribution in a VA cell, in the yz-plane, under the influence of an applied electric field E which is parallel to the vertical alignment. V is the applied voltage, Vth is the threshold voltage.
Leakage of light through crossed polarizers Most of the sheet polarizers in LCDs are made of uniaxial materials which exhibit a strong attenuation for the extraordinary wave. Such polarizers, known as O-type polarizers, thus transmit ordinary wave and extinguish the extraordinary wave [1]. In a uniaxial medium, the polarization states of normal
160
MOBILE DISPLAYS: TECHNOLOGY AND APPLICATIONS
Figure 6.14 A schematic drawing of a BA-LCD in the normally white operation. (a) The plane of bending is oriented at 45 relative to the transmission axes of the polarizers. (b) In the field-on state, the liquid crystal director is homeotropically aligned along the z-axis, leading to a dark state.
z E
V=0
Vth < V
Figure 6.15 A schematic drawing of the director distribution in a bend-aligned cell under the influence of an applied electric field. The shaded areas are transparent electrodes.
WIDE VIEWING ANGLE AND HIGH BRIGHTNESS LIQUID CRYSTAL DISPLAYS
161
modes of propagation (displacement vectors) can be written ko c jko cj o ke De ¼ jo ke j
Do ¼
ð133Þ ð134Þ
where c is a unit vector along the direction of the c-axis of the medium, ko and ke are wave vectors of the waves of propagation in the medium. E-type polarizers which transmit extraordinary waves also exist [36]. However, they are not commercially available at the moment. Referring to Figure 6.16, we consider a pair of crossed polarizers. We assume that the polarizers are ideal such that the extraordinary wave is completely extinguished whereas the ordinary wave is completely transmitted. Let the absorption axis (the c-axis) of the front polarizer be oriented along 45 measured from the horizontal axis, and that of the rear polarizer be oriented along 135 measured from the horizontal axis. The orientations of these absorption axes are indicated by the double arrows.
Figure 6.16 Transmission of light through a pair of crossed polarizers. c1 and c2 are absorption axes of polarizers, respectively.
Let Do1 and Do2 be the unit vectors representing the polarization states of the ordinary waves in the polarizer 1 and 2, respectively. The transmission of a beam of unpolarized light can be written 1 T ¼ jDo1 Do2 j2 2
ð135Þ
Using Equation (133) and the vector identity ðA BÞ ðC DÞ ¼ ðA CÞðB DÞðA DÞðB CÞ, we obtain 2 1 k2 ðc1 c2 Þ ðk c1 Þðk c2 Þ T ¼ ð136Þ 2 jk c1 jjk c2 j where k is the wave vector of the ordinary wave in the medium, and c1, c2 are the absorption axes (c-axes) of the polarizers. For crossed polarizers, c1 c2 ¼ 0, the transmission becomes 1 ðk c1 Þðk c2 Þ 2 ð137Þ T ¼ 2 jk c1 jjk c2 j
162
MOBILE DISPLAYS: TECHNOLOGY AND APPLICATIONS
For normal incidence, the transmission is zero according to Equation (137). Thus a completely dark state is obtained for normal incidence. However, for oblique incidence, the transmission according to Equation (137) is finite. The leakage is maximum when the viewing plane is 45 from the absorption axes and is an increasing function of the angle of incidence (y measured from the normal). Assuming a refractive index of n ¼ 1.5 for the polarizer medium, the maximum leakage can reach up to 4% at extreme oblique angles. This is a severe leakage of light. Figure 6.17 shows a contour plot of the leakage of light through a pair of crossed polarizers.
Figure 6.17 Contour plot of leakage of light through a pair of ideal crossed polarizers.
We can discuss further the leakage of light in terms of the orientation of the polarization state measured relative to the plane of incidence. For this purpose, we define the following unit vectors, kn jk nj sk p¼ js kj s¼
ð138Þ ð139Þ
where n is a unit vector perpendicular to the plane of the polarizers, k is the wave vector of propagation in the medium. We note s is a unit vector perpendicular to the plane of incidence, and p is a unit vector parallel to the plane of incidence. For the pair of crossed polarizers shown in Figure 6.16, we examine the orientation of Do1 and Do2 relative to s and p. In particular, we will examine the angle measured from the unit vector p. For normal incidence (with infinitesimal angle of incidence), the angles are 45 measured from unit vector p. When the plane of incidence is perpendicular to the V-axis, i.e., the horizontal plane, the angle between the unit vector o1 and the unit vector p is an increasing function of the angle of incidence. The increase from 45 can be as big as 8 at y ¼ 80 . This corresponds to a leakage of 3.8%. Figure 6.18 illustrates the orientation of the D-vectors of the transmitted polarization state in the polarizers and the Poincare representation of the polarization states o1 and o2.
WIDE VIEWING ANGLE AND HIGH BRIGHTNESS LIQUID CRYSTAL DISPLAYS
Do1
53o
53o
163
A Do2 o2
V
H a
o1
θ = 80 o Figure 6.18 Orientation of polarization states of transmission modes (Ordinary mode) and their representations on Poincare sphere. Unit vectors a, o1 and o2 are Stokes vectors in Poincare space. The figure on the left is drawn in the plane perpendicular to the k vector. The vertical dashed line indicates the direction of p.
Leakage of light through c-plates A c-plate is a plate of uniaxial material with its surface perpendicular to the c-axis of the medium. In liquid crystal displays, this corresponds to a vertically aligned (VA) cell in its voltage-off state and a twisted-nematic (TN) cell in its voltage-on state. For normal incidence when the propagation is along the c-axis, both modes of propagation are propagating at exactly the same speed. As a result, the plate produces no phase retardation and the state of polarization of the light remains unchanged. When a c-plate is sandwiched between a pair of crossed polarizers, the transmission is zero at normal incidence. However, at large angle of incidence, the modes of propagation are no longer propagating at the same speed. Thus, a phase retardation exists
Figure 6.19 Leakage of light through a c-plate sandwiched between a pair of crossed polarizers.
164
MOBILE DISPLAYS: TECHNOLOGY AND APPLICATIONS
between these two modes of propagation (ordinary and extraordinary waves). It can be shown that the phase retardation is given by ¼d
2p ne þ no 2 ðne no Þ sin y l 2no n2e
ð140Þ
where y is the angle of incidence, d is the thickness of the plate and no, ne are the refractive indices of the c-plate. Figure 6.19 shows the transmission of light through a c-plate sandwiched between a pair of crossed polarizers. Biaxial retardation plates (films) Although negative c-plates can be employed to compensate the phase retardation of positive c-plate of liquid crystals, additional plates are needed to eliminate the leakage of light due to the polarizers. These include a-plates and biaxial plates. For the purpose of describing various compensators, we start with a discussion of optical properties of biaxial media. The discussion is aimed at understanding both the phase retardation as well as the polarization states of the modes of propagation (slow mode and fast mode). The results are useful for the design of compensators. Referring to Figure 6.20, we consider a biaxial plate which is cut such that the plate surface is perpendicular to one of the principal axes (z-axis). s
y
p q θ
φ
x
θ
z
w
Figure 6.20 Here, xyz is the principal coordinate. z-axis is perpendicular to the plate surface. w is along the direction of propagation k. q is along the direction of the projection of k in the xy-plane. wq is in zp-plane and q is in xy-plane. ðy; fÞ can be considered as the angle of viewing (or angle of incidence). p is in the plane of incidence, s is perpendicular to the plane of incidence.
Let the principal refractive indices be nx, ny, and nz. There are two optic axes along which the two modes of propagation have exactly the same speed. These two axes lie in the principal plane that contains the largest and the smallest refractive indices. This plane is also known as optic axes plane (OAP). For example, if nz is between nx and ny, then xy-plane is the OAP. Many biaxial plates (or films) employed in LCDs are oriented in such a way that the surface of the plates (or films) is parallel to OAP. In what follows we examine the phase retardation as well as the polarization states of the slow wave and the fast wave. Here, we assume ny < nz < nx . The direction of the optic axes in OAP is given by
tan V ¼
nx ny
n2z n2y n2x n2z
!1=2 ð141Þ
WIDE VIEWING ANGLE AND HIGH BRIGHTNESS LIQUID CRYSTAL DISPLAYS
165
where the angle V is measured from the x-axis, or axis of the largest index (nx). If V is less than 45 , the medium is said to be positive biaxial. On the other hand, if V is greater than 45 , then the medium is said to be negative biaxial. If V is exactly 45 , the medium is said to be neutral biaxial. In the limit when V approaches zero, the positive biaxial medium becomes positive uniaxial. On the other hand, if V approaches 90 , then the negative biaxial medium becomes negative uniaxial. Let the incident wave vector in air be written ki ¼ ðsin y cos f; sin y sin f; cos yÞ
ð142Þ
and the wave vector in the biaxial medium be written k ¼ ðsin y cos f; sin y sin f; kz Þ ðkx ; ky ; kz Þ
ð143Þ
Here, for simplicity we normalized the wave vector to o=c ¼ 1. we note that kix ¼ kx and kiy ¼ ky . This is a result of boundary conditions that require the continuity of the tangential components of E and H vectors. Given an angle of incidence ðy; fÞ, we need to find the z-component of the propagation wave vector kz in the biaxial medium. Closed form expression for kz can be obtained by solving the Fresnel equation of wave normals [1]. For simplicity, we define a ¼ kx2 ¼ sin2 y cos2 f; a¼
b ¼ ky2 ¼ sin2 y sin2 f;
n2x ;
b¼
n2y ;
g ¼ kz2 c ¼ n2z
ð144Þ
where we recall that y is the angle of incidence measured from the z-axis., f is the angle between plane of incidence (plane formed by z-axis and the incident wave vector) and the x-axis. Then the solutions for kz is governed by the following quadratic equation Ag2 þ Bg þ C ¼ 0
ð145Þ
A¼c B ¼ aða þ cÞ þ bðb þ cÞ cða þ bÞ C ¼ ða þ bÞðaa þ bbÞ aaðb þ cÞ bbða þ cÞ þ abc
ð146Þ
with
The exact solutions are
g ¼ kz2 ¼
B
pffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffi B2 4AC 2A
ð147Þ
Since both a and b are less than unity, we obtain approximate solutions for kz by expanding Equation (147) in terms of series of a and b. Assuming nx 6¼ ny this leads to
k2z k1z ¼ ðny nx Þ þ
1 nx 1 1 1 ny 2 2 sin sin2 y sin2 f þ Oðsin4 yÞ y cos f þ 2 n2z ny 2 nx n2z
ð148Þ
166
MOBILE DISPLAYS: TECHNOLOGY AND APPLICATIONS
where O(sin4y) stands for high order terms. Here we note ðy; fÞ are angles in air. A similar expression was derived by John Evans in 1949 [37]. We notice that ðk2z k1z Þ in Equation (148) is virtually independent of the angle of incidence when pffiffiffiffiffiffiffiffiffi nz ¼ nx ny , in other words, when nz is the geometrical average of nx and ny. The angle of the optic axes in this case is tan2 V ¼
nx >1 ny
ðfor nz ¼
pffiffiffiffiffiffiffiffiffi nx ny Þ
ð149Þ
So an approximately constant ðk2z k1z ) would require a negative biaxial medium. This is a wide-field wave plate whose phase retardation is virtually independent of the angle of incidence. For most biaxial films employed in LCDs, nx ny , so the optic angle V is approximately 45 . For the case when nx ¼ ny no 6¼ nz ne (c-plates), B2 4AC in Equation (147) becomes ða þ bÞ2 ðn2e n2o Þ2 , we obtain the following exact expression sffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffi qffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffi n2 k2z k1z ¼ kez koz ¼ n2o o2 sin2 y n2o sin2 y ne
ð148aÞ
We note that the phase retardationis a function of y only in c-plates. For the case when ny ¼ nz no 6¼ nx ne (a-plates), B2 4AC in Equation (147) becomes ða n2o Þ2 ðn2e n2o Þ2 , we obtain the following exact expression sffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffi sffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffi sin2 y cos2 f sin2 y sin2 f sin2 y k2z k1z ¼ kez koz ¼ ne 1 no 1 2 2 2 no ne no
ð148bÞ
We next examine the polarization state of the mode of propagation in biaxial plates. Particularly, we will find the inclination angle of the polarization state measured from the plane of incidence (zwqp plane in Figure 6.20). Although the polarization state (in terms of electric field vector) can be obtained by using Fresnel’s formula as soon as kz is obtained. It is easier to obtain the polarization state (in terms of displacement vector) by using the index ellipsoid, x2 y2 z2 þ þ ¼1 n2x n2y n2z
ð150Þ
Following the procedure of the method of index ellipsoid, we draw a plane (ps-plane in Figure 6.20) perpendicular to the incident wave vector through the origin (center of ellipsoid). This plane intersects with the index ellipsoid in an ellipse. The ellipse is given by 2 0 2 0 cos y cos2 f0 cos2 y0 sin2 f0 sin2 y0 cos2 f0 2 sin f þ þ þs þ p a b c a b 1 1 cos y0 cos f0 sin f0 ¼ 0 þ 2ps b a 2
Here, we note that ðy0 ; f0 Þ are angles inside the biaxial medium.
ð151Þ
WIDE VIEWING ANGLE AND HIGH BRIGHTNESS LIQUID CRYSTAL DISPLAYS
167
The major and minor axes of the ellipse give the eigen indices of refraction of the modes of propagation. The orientation of the major axis of the ellipse is the polarization state of the slow mode. The above equation can be diagonalized via a rotation of the coordinate axes, as shown in Figure 6.21. The diagonal form in UV-coordinate yields both the eigen refractive indices of the modes as well as the polarization state of the slow mode (and the fast mode). s
v
u ψ p Figure 6.21 A rotation of coordinate. Here, p and s are unit vectors for the s-component and p-component of the electric field vector of the beam. The axis of rotation is the direction of propagation. u and v are unit vectors along the modes of propagation (slow and fast modes).
A few steps of algebra yields the following angle of rotation measured from the plane of incidence, 1 1 1 cos y0 sin 2f0 2 b a tan 2c ¼ 1 1 sin2 y0 cos2 y0 cos 2f0 þ c a b
ð152Þ
For angle of incidence near normal ðy0 1Þ, the angle c can be written c ¼ f0 sin 2f0
2ab cða þ bÞ 02 y 4cða bÞ
ð153Þ
This is the inclination (or azimuth) angle of the slow mode. The inclination (or azimuth) angle of the fast mode is ðc þ p=2Þ. A good knowledge of this angle is useful in the design of compensators. For example, we consider the case of a pair of O-type sheet polarizers as shown in Figure 6.16. For uniaxial media, we have a 6¼ b ¼ c. Let the plane of incidence be the horizontal plane. Using Equation (153), we obtain Modes Polarizer 1 Polarizer 2
E1 O1 E2 O2
Azimuth Angle
c(0)
c
p=4 þ y2 =4 3p=4 þ y2 =4 p=4 y2 =4 p=4 y2 =4
p=4 3p=4 p=4 p=4
y2 =4 y2 =4 y2 =4 y2 =4
Note: y0 here is the angle of beam measured from z-axis inside the medium
We note the azimuth angles obtained from Equation (153) are consistent with those in Figure 6.18. Both transmission modes are tilted toward the plane of incidence as the angle of incidence y0 increases.
168
MOBILE DISPLAYS: TECHNOLOGY AND APPLICATIONS
6.2.4.2 Compensators for Eliminating Leakage of Light through Crossed Polarizers As we can see from the example above, the leakage of light through a pair of crossed polarizers is due to the change of the azimuth angle of the polarization states of the transmission modes in polarizers as a function of the angle of incidence. At normal incidence, the Stokes vectors of the transmission modes o1 and o2 are along the S2 axis on the equator. As the angle of incidence increases, the polarization states of these transmission modes begin to tilt toward the horizontal (see Figure 6.22) plane. This is the origin of the leakage.
S3
S2
Q
e1 o2 S1 V
H
e2
o1
e2 o1
Figure 6.22 Polarization states of normal modes in crossed polarizers and their representation on the Poincare sphere [39].
To eliminate the leakage, we must transform the polarization state from o1 to e2 by using retardation plates. It was first pointed out by Seiji and co-workers that two a-plates can be employed to eliminate the leakage of light through crossed polarizers in liquid crystal displays [38]. The principle of operation of the dual a-plates can be described with the help of Figure 6.23. The c-axis of the first a-plate (positive birefringence with ne > no ) is oriented perpendicular to the absorption axis of the first polarizer. The slow axis of this a-plate is parallel to e2. With a retardation of p/3, the polarization state o1 is rotated by 60 to an intermediate polarization state as represented by the point Q on the Poincare sphere. The c-axis of the second a-plate is parallel to the absorption axis of the first polarizer. As a result of its negative birefringence ðne < no Þ, the slow axis of this plate is parallel to o1. Using a retardation of p=3, the polarization state is then transformed from Q to e2 [39]. This state is orthogonal to the transmission state o2 of the second polarizer. Thus, the beam is completely extinguished. There are several other schemes that can be employed to eliminate the leakage of light through crossed polarizers. Figure 6.24 illustrates an example involving the use of two positive a-plates and a positive c-plate. The retardance of the c-plate can be estimated as follows. The geodesic distance between p(o ffiffiffi 1, Q) is approximately 4cp=3. So, the geodesic distance between (Q, Q’) is approximately 4 3cp=3. According to Equation (140), the phase retardation of the c-plate is given approximately as d
pffiffiffi 2p 1 ðne no Þ 2 y2 4 3cp=3 l n
ð154Þ
WIDE VIEWING ANGLE AND HIGH BRIGHTNESS LIQUID CRYSTAL DISPLAYS
169
Polarizer
abs axis
- 92 nm
c-axis
92 nm
Q λ/6
−λ/6
c-axis
Equator Polarizer
e2
abs axis
2∆Ψ
2∆Ψ
o1
Backlight Figure 6.23 Seiji’s scheme employs a combination of a positive a-plate and a negative a-plate. Retardance of each plate is: nd ¼ l=6 ð¼ 92 nm at l ¼ 550 nmÞ. The þa plate transforms the polarization state from o1 to Q. The a plate then transforms the polarization state from Q to e2.
where we assume ne no n. Using the table in Section 6.2, we have c y2 =4 y2 =ð4n2 Þ. Thus we obtain the following retardance for the c-plate dðne no Þ
pffiffiffi l 3 6
ð155Þ
Reversing the sign of phase retardations in the schemes described in Figures 6.23 and 6.24 is equivalent to reversing the signs of the arrows. Thus, they work equally well in the elimination of the leakage of light. Figure 6.25 describes a scheme involving the use of one a-plate and one c-plate. This scheme was first proposed by Chen and co-workers [40].
Figure 6.23a Equi-transmittance contours of unpolarized light of the scheme in Figure 6.23 using Extended Jones matrix method.
170
MOBILE DISPLAYS: TECHNOLOGY AND APPLICATIONS Polarizer
abs axis 92 nm
c-axis 159 nm
Q
+c λ/6
Equator
92 nm
c-axis
e2
Polarizer
2∆Ψ
λ/6
abs axis
2∆Ψ
o1 3 λ/6
Q'
Backlight
Figure 6.24 (þa, þc, þa) combination. Retardance of a plate is: nd ¼ l=6ð¼ 92 nm at l ¼ 550 nmÞ. pffiffiffi Retardance of c plate is: nd ¼ 3l=6 ðd ¼ 159 nm at l ¼ 550 nmÞ. The þa plate transforms the polarization state from o1 to Q. The þc plate then transforms the polarization state from Q to Q’. The last þa plate then transforms the polarization state from Q’ to e2.
Figure 6.26 describes a scheme involving the use of a single biaxial plate for the elimination of the leakage of light. This method was first proposed by Uchida and co-workers [41]. A proper selection of the principal refractive indices can ensure that the slow axis is always aligned at 45 measured from the plane of incidence for all angles of viewing. Similarly, a proper selection of the principal refractive indices can ensure that the phase retardation is virtually independent of the viewing angle. According to Equation (148), the phase retardation is virtually independent of the angle of viewing provided nz is a geometric average of nx and ny. Similarly, according to Equation (153), the azimuth angle can be kept at 45 degrees ðc ¼ f ¼ 45 Þ, provided n2z ¼
2n2x n2y n2x þ n2y
Figure 6.24a Equi-transmittance contours of unpolarized light for the scheme of Figure 6.24.
ð156Þ
WIDE VIEWING ANGLE AND HIGH BRIGHTNESS LIQUID CRYSTAL DISPLAYS
171
Polarizer
abs axis +c
86 nm
Equator
λ/(2π)
e2
138 nm
λ/4
Q
∆Ψ
∆Ψ
o1
c-axis Polarizer
abs axis Backlight Figure 6.25 (þa, þc) combination. Retardance of a plate is: nd ¼ l=4ð¼ 138 nm at l ¼ 550 nmÞ. Retardance of c plate is: nd ¼ l=ð2pÞ ð¼ 86 nm at l ¼ 550 nmÞ. The þa plate transforms the polarization state from o1 to Q. The þc plate then transforms the polarization state from Q to e2.
According to Equation (141), this condition corresponds to a neutral biaxial plate with V ¼ 45 . Most biaxial films employed in LCDs have three principal indices of refraction that are quite close to each. In this case, it can be shown that Equation (156) is equivalent to a geometric average (or arithmetic average). So, the choice of either arithmetic average or geometric average can ensure both a constant slow axis and a constant phase retardation. In the above discussion, the elimination of the leakage of light is based on a single color (e.g. Green light at l ¼ 550 nm). As the retardance of wave plates depends on the wavelength, a complete extinction of green light usually means a small leakage of red or blue light due to a difference of wavelength of more than 10%. As illustrated in Figure 6.27, the biaxial plate designed for green light may over-compensate for blue light and under-compensate for red light. The leakage of light in the blue and red spectral regimes can be estimated by the geodesic distance between (A, R) and (A, B). Let be the difference between the phase retardation of green light and red (or blue). The geodesic distance AR (or AB) is given approximately by AR 2c
ð157Þ
Polarizer
abs axis 275 nm λ/2 Equator
Polarizer
e2
∆Ψ
∆Ψ
o1
abs axis Backlight Figure 6.26 Biaxial plate compensator. A single biaxial plate with its principal axis of the largest index aligned parallel to the absorption axis of the first polarizer. Furthermore, the plate surface is parallel to OAP.
172
MOBILE DISPLAYS: TECHNOLOGY AND APPLICATIONS
Figure 6.26a Equi-transmittance contours of unpolarized light for the scheme of Figure 6.26 with nx ¼ 1.52, ny ¼ 1.50, nz ¼ 1.51 and d(nx-ny) ¼ 275 nm.
R
∆Γ A
e2
Equator
2∆ψ
2∆ψ
o1
B Figure 6.27 A compensator (e.g. biaxial plate) designed for Green light may not work as well for blue and red light. The biaxial compensator will transform the polarization from o1 to A which is orthogonal to o2 leading to a complete extinction. The plate will transform the polarization state to R for red light and B for blue light. These two points (R, B) are not orthogonal to o2. Note: o2 is antipodal to e2.
The transmission (leakage) due to nonzero AR is given by 1 1 1 1 T ¼ jDA Do2 j2 ¼ ð1 þ SA So2 Þ ð1 cosðARÞÞ ðcÞ2 2 4 4 2
ð158Þ
where SA and So2 are Stokes vectors corresponding to DA and Do2. To minimize the leakage at red and blue while maintaining a complete extinction of green light, an achromatic design is needed. It was pointed out by Uchida and co-workers that the use of two biaxial plates (one positive biaxial and one negative biaxial) can be employed to minimize the leakage of light
WIDE VIEWING ANGLE AND HIGH BRIGHTNESS LIQUID CRYSTAL DISPLAYS
173
at red and blue. Figure 6.28 illustrates the principle of operation of such an achromatic compensator. By properly choosing the principal refractive indices, the slow axes of the biaxial plates can be oriented at the desired angles as shown in Figure 6.28 based on Equation (153).
bs2
A B,R
e2
∆Γ
2∆ψ Equator
bs1 o1
2∆ψ
Figure 6.28 Achromatic biaxial compensator for the elimination of leakage of light through crossed polarizers. bs1 is slow axis of first biaxial plate, bs2 is slow axis of the second biaxial plate.
If we define
¼
2ab cða þ bÞ 2n2x n2y n2z ðn2x þ n2y Þ ¼ cða bÞ n2z ðn2x n2y Þ
ð159Þ
then we note ¼ 0 for the case of Figure 6.26 where only one biaxial plate is employed. For the achromatic design with two biaxial plates, the first plate must be positive biaxial with ¼ 1=2, and the second plate be negative biaxial with ¼ 1=2. The slow axes of the plates are parallel to the absorption axis of the first polarizer. Figure 6.29 shows the variation of and optic angle o
1
90
σ
V
0.5
V 45 o
σ 0
–0.5
Positive Biaxial –1
Negative Biaxial
0o n y = 1.5
1.51
1.52
1.53
nx = 1.54
nz Figure 6.29 Optic angle V and biaxial parameter as functions of nz with a given set of (nx, ny). In this example, nx ¼ 1:54; ny ¼ 1:50.
174
MOBILE DISPLAYS: TECHNOLOGY AND APPLICATIONS
V as a function of nz, for a given set of (nx, ny). It can be easily shown that ¼þ
1 2
when
n2z ¼
4n2x n2y 3n2x þ n2y
¼
1 2
when
n2z ¼
4n2x n2y n2x þ 3n2y
ð160Þ
The leakage of light at red and blue can be estimated by the geodesic distance AB or AR. The geodesic distance AR (or AB) in this case is given approximately by
AR cðÞ2
ð161Þ
The transmission (leakage) of red and blue light due to nonzero AR is thus given by 1 1 1 1 T ¼ jDB Do2 j2 ¼ ð1 þ SB So2 Þ ð1 cosðARÞÞ ½cðÞ2 2 2 4 4 8
ð162Þ
where SB and So2 are Stokes vectors corresponding to DB and Do2. Comparing Equations (158) and (162), we note that the drop in the leakage in the red and blue light is by a factor of ð=2Þ2 . For (B ¼ 480 nm, G ¼ 550 nm, R ¼ 630 nm), is approximately p=14. The drop in the leakage is approximately two orders of magnitude. This is a significant decrease in the leakage of red and blue light.
6.2.4.3 Compensators for the Retardation of LC Cell The extended Jones matrix can now be employed to investigate various birefringent optical thin films that can be used to improve the viewing angle characteristics and gray scale stability of conventional twisted nematic liquid crystal displays (TN-LCDs). As discussed in the previous section, birefringent thin films with the proper retardation values can be employed to eliminate the leakage of light at the dark state. The leakage of light can be due to either the polarizers or the birefringence of the LC cell. In what follows we discuss the principle of phase retardation compensation using various birefringence thin films, including uniaxial and biaxial films, to achieve high contrast ratios and gray level stability in LCDs. We first summarize the basic properties of TN-LCDs at large viewing angles. Viewing angle characteristics of LCDs As mentioned earlier, the transmission properties of most LCDs depend on the angle of viewing for virtually all modes of operations based on a thin film of liquid crystal materials, including VA-LCDs, TN-LCDs, BA-LCDs, STN-LCDs, etc. Consider the simplest case of a normally black LCD based on a vertically aligned liquid crystal cell. In the field-off state (dark state), the cell is effectively a c-plate (c-axis perpendicular to the film surface) with a positive birefringence sandwiched between a pair of crossed polarizers. For on-axis viewing, the light is propagating along the c-axis with a zero phase retardation. The transmission is zero due to the crossed polarizers. As we know, leakage of light occurs at off-axis viewing due to a finite phase retardation of the LC cell, as well as due to polarizers. The leakage becomes severe at large viewing angles, leading to reduced contrast ratios. The leakage of light
WIDE VIEWING ANGLE AND HIGH BRIGHTNESS LIQUID CRYSTAL DISPLAYS
Polarizer
Liquid crystal cell
175
Analyzer
y x z
(a) E-mode Polarizer
y
Liquid crystal cell
Analyzer
x z
(b) O-mode Figure 6.30 Schematic drawing of a NW-TN-LCD. (a) E-mode operation with polarizer transmission axis parallel to the director at adjacent surface of LC cell. (b) O-mode operation with polarizer transmission axis perpendicular to the director at adjacent surface of LC cell. The arrows on the surfaces of the LC cell indicate the rubbing directions.
at dark states and the dependence of the contrast ratios on the viewing angles occur in virtually all modes of operation of LCDs. In the following we consider the viewing angle characteristics of TN-LCDs. Contrast and stability of gray levels are important attributes in determining the quality of a liquid crystal display. The primary factor limiting the contrast ratio achievable in a liquid crystal display is the amount of light which leaks through the display in the dark state. To illustrate the viewing angle characteristics of a typical TN-LCD, we examine the transmission curves at various horizontal and vertical viewing angles. Referring to Figure 6.30, we consider a typical NW (normally white) TN-LCD with either an E-mode or O-mode operation. The liquid crystal cell is expanded to show the rubbing directions and the tilt of the liquid crystal director at mid-layer. In an E-mode operation, the transmission axis of the polarizer is parallel to the director of the liquid crystal on the surface adjacent to the polarizer. In an O-mode operation, the transmission axis of the polarizer is perpendicular to the director of the liquid crystal on the surface adjacent to the polarizer. For an NW display configuration, the 90 twisted nematic liquid crystal cell is placed between a pair of crossed polarizers. In the absence of an applied electric field, the direction of polarization (E-field vector) of an incoming beam of polarized light will follow the twist of the director (known as the waveguiding) in traveling through the liquid crystal layer. The polarization state of the light will thus be aligned parallel to the transmission axis of the analyzer, leading to a white state (high transmission). When a sufficient voltage is applied to the transparent electrodes, a strong electric field along the z-axis is established in the liquid crystal cell. The applied electric filed causes the director of the liquid crystal material to tend to align parallel to the field (z-axis). With the liquid crystal material in this state, the cell exhibits no phase retardation to an
176
MOBILE DISPLAYS: TECHNOLOGY AND APPLICATIONS 0.5
0V
0.4
Transmission
Transmission
0.5
0.3 0.2 0.1
5.49 V
0V
0.4 0.3 0.2 0.1
5.49 V
0
0 -60 -40
-20
0
20
40
-60 -40
60
Vertical Viewing Angle (Deg)
0.4 0.3 5.49 V
0.1 -20
0
20
40
60
Horizontal Viewing Angle (Deg)
(a) Uncompensated
40
60
0V
0.4 0.3 0.2 0.1
0 -60 -40
20
0.5
0V
0.2
0
Vertical Viewing Angle (Deg)
Transmission
Transmission
0.5
-20
5.49 V
0 -60 -40
-20
0
20
40
60
Horizontal Viewing Angle (Deg)
(b) Compensated
Figure 6.31 Viewing characteristics of a typical NW TN-LCD with an O-mode operation. (a) Uncompensated TN-LCD. (b) Compensated TN-LCD with a negative c-plate compensator as in Figure 6.11.
incoming beam of polarized light at normal incidence. The light transmitted through the polarizer is thus extinguished by the analyzer, leading to a dark state (zero or low transmission). As a result of the different optical path, the birefringence, and distribution of liquid crystal director n(z), the transmission of the display is intrinsically dependent on the angle of incidence ðy; fÞ. This may lead to an incomplete extinction of light at the analyzer (leakage). We now discuss the contrast ratios and the gray levels as functions of viewing angles.
Contrast ratios We first discuss the contrast ratio of the normally white TN-LCDs. Referring to Figure 6.31(a), we notice that the display exhibits an exact left-right symmetrical horizontal viewing characteristics. The vertical viewing characteristics, however, is quite asymmetrical due to the distribution of the tilt angle of the liquid crystal director. In the non-select state (0 Volt), the display exhibits a high transmission (50%) at normal incidence (y ¼ 0 ), with a slight decrease at large viewing angles. We recall that 50% of the light energy is absorbed by the polarizer. In the select state (dark state with 5.49 Volts), the display exhibits a zero transmission at normal incidence (y ¼ 0 ) as desired. In the horizontal direction, we note that the leakage of light (dark state with 5.49 Volts) becomes severe for horizontal viewing angles greater than 20 , reaching over 30% at 60 . In the vertical direction, the leakage is extremely severe for the lower viewing angles, reaching almost 50% at y ¼ 60 . The leakage of light (dark state with 5.49 Volts) in the upper vertical viewing angles is not as severe. As a result of the leakage of light at these large angles of viewing, the contrast ratios decrease accordingly. The leakage of light at these large angles can not be eliminated by increasing the applied voltage, even if the liquid crystal molecules are homeotropically aligned.
WIDE VIEWING ANGLE AND HIGH BRIGHTNESS LIQUID CRYSTAL DISPLAYS
177
0.5
0.5 0V
0.4
0.4 Transmission
1.56 1.63
0.3 T
1.72 1.83
0.2
1.95
0.1
0.3 0.2 0.1
2.10 3.35
0
0 -60
-40
-20
0
20
40
60
-60
-40
-20
0
20
40
60
40
60
Vertical Viewing Angle (Deg)
Vertical Viewing Angle (Deg)
0.5
0.5 0V 0.4
0.4
1.56
0.3
Transmission
1.63 T
1.72 1.83
0.2
1.95
0.3 0.2 0.1
0.1 2.10 0
3.35 -60
-40
-20
0
0 20
40
Horizontal Viewing Angle (Deg)
(a) Uncompensated
60
-60
-40
-20
0
20
Horizontal Viewing Angle (Deg)
(b) Compensated
Figure 6.32 Vertical and horizontal viewing characteristics at various gray levels of a typical TN-LCD. (a) Uncompensated TN-LCD. (b) Compensated TN-LCD with the inclusion of a pair of o- and a- plates compensators.
Gray levels We now discuss the issue of gray levels. To ensure a high quality display with gray levels, it is important that the transmission follows the applied voltage in a monotonic fashion. In the case of NW operation, the transmission must decrease monotonically with the applied voltage. This can be easily obtained at normal incidence ðy ¼ 0Þ. Referring to Figure 6.32(a), we note that the transmission at normal incidence indeed decreases with the applied voltage. To ensure a high quality display of gray levels, the transmission curves at various voltages must be well separated. Examining Figure 6.32(a), we find that some of the transmission curves cross at large viewing angles. For viewing angles beyond the crossing point, the gray levels are reversed meaning that the transmission no longer follows the applied voltage monotonically. This leads to a degradation of the so-called gray level stability. In the horizontal viewing angles, we find that the transmission curves at various voltages are well separated, except the transmission curve at 3.35 Volts. In the vertical viewing angles, each of the transmission curves reaches a minimum near zero in the upper viewing directions. The transmission curves rebound beyond these minima. Thus, beyond these large viewing angles, the gray levels are actually reversed. This severely degrades the gray level stability in TN-LCDs. In what follows, we describe some of the techniques of using optical compensators to improve the viewing characteristics at large viewing angles. These include the use of uniaxial c-, a- and o-plates, as well as biaxial films, as compensators. According to conventional definitions, a c-plate has its optic axis perpendicular to the plate surface, an a-plate has its optic axis parallel to the plate surface, and an oplate has an oblique optic axis. The numerical modeling is performed with the extended Jones matrix method discussed earlier. Negative c-Plate compensators As mentioned earlier, a thin film of negative birefringence can be employed as a phase retardation compensator for LCDs based on vertically aligned liquid crystal cells (VA-LCDs) [2,3]. In the field-off
178
MOBILE DISPLAYS: TECHNOLOGY AND APPLICATIONS
state, the phase retardation due to the positive birefringence of the LC cell can be compensated by a thin film of negative birefringence. This leads to a net phase retardation of near zero for virtually all angles of incidence. For a completely dark state, we also need to eliminate the leakage of light due to the polarizers. Thus, a combination of a-plate and c-plate can be employed for this purpose. The c-plate here can be lumped with the negative c-plate mentioned earlier. Figure 6.33 shows the schematic of a VA-LC cell with a negative c-plate compensator and a positive a-plate compensator. The negative c-plate has a retardance of nd ¼ l=ð2pÞ l=2 so that when combined with the VA cell of retardance l=2, the net c-plate becomes l=ð2pÞ. This net c-plate of l=ð2pÞ combined with the positive a-plate of l/4 lead to the elimination of leakage of light at the dark state.
B Polarizer
-c
λ /(2π)
+c
λ /2 LC cell
+a
c axis
λ /4
abs axis Polarizer Figure 6.33 Schematic of a VA-LC cell with a negative c-plate compensator and a positive a-plate compensator. In the field-off state, the VA-LC cell is approximately a positive c-plate.
Figure 6.34 shows the contrast ratio comparison between the VA-LC cell without and with the compensators. Without the compensators, the viewing angles that have a contrast ratio of >500 is limited to a region within 15 both horizontally and vertically. With the compensators, the viewing angles that have a contrast ratio of >500 is extended to a region >40 in all viewing directions. By eliminating the leakage of light due to polarizers and the birefringence of the LC cell, we are able to achieve a nearly complete dark state. This allows the possibility of obtaining high CR as high as 500 or 1000. For viewing with a requirement of CR ¼ 10, the VA-LCDs with compensators can accommodate viewing angles of 170 . The same idea can be employed for phase retardation compensation in the field-on state of TNLCDs. As discussed earlier, the liquid crystal cell is approximately a positive c-plate in the field-on state when the liquid crystal molecules are approximately homeotropically aligned (vertically aligned). When a positive c-plate is sandwiched between a pair of crossed polarizers, a dark state is achieved only for light propagation along the z-axis (normal incidence). As we already know, the phase retardation is zero only for propagation along the c-axis. For a beam of light with an oblique incidence,
WIDE VIEWING ANGLE AND HIGH BRIGHTNESS LIQUID CRYSTAL DISPLAYS
179
Figure 6.34 (a) Contour plot of the contrast ratio of a VA-LC cell without the compensators. The iso-contrast contours indicate CR ¼ 10, 100, 500, respectively. (b) Contour plot of the contrast ratio of a VA-LC cell with the compensators shown in Figure 6.33.
the light is no longer propagating along the c-axis of the liquid crystal cell. The liquid crystal cell thus exhibits a birefringence and a finite phase retardation which can change the polarization state of the light, leading to a leakage of light at the analyzer. The phase retardation increases with the angle of incidence y. This is the main cause of leakage of light at large viewing angles in the field-on state of a normally white (NW) TN-LCD. A simple and convenient way of improving the transmission characteristics of TN-LCDs at large viewing angles is to include a negative c-plate with the same birefringence thickness product (nd). The basic concept of such an optical compensator is illustrated in Figure 6.35. Notice in Figure 6.35, and in Figures 6.36, 6.38, 6.40, 6.41 and 6.42 later, index ellipsoids (also known as ellipsoid of wave normals, optical indicatrix, or the reciprocal ellipsoid [42]) are used to indicate the orientation and birefringence of uniaxial media. The negative c-plate must be placed between the polarizer and the analyzer. Being in a homeotropic alignment, the TN-LC with a positive birefringence (no < ne) can be represented optically by a prolate ellipsoid (sphere elongated along the z-axis). The birefringence property of the negative c-plate is represented by an oblate ellipsoid (sphere flattened along the z-axis). The presence of the negative c-plate with its c-axis aligned along the z-axis does not create any additional phase retardation for normally incident light. For beams with oblique incidence, the positive Polarizer
TN-LC cell
Negative c-plate
Analyzer y
z
Figure 6.35 Basic concept of negative c-plate compensator (side view). In the field-on state, the TN-LC cell is approximately a positive c-plate with a prolate index ellipsoid (elongated sphere). Note this is very much like a vertically aligned cell. The negative c-plate has an oblate index ellipsoid (flattened sphere).
180
MOBILE DISPLAYS: TECHNOLOGY AND APPLICATIONS Polarizer
TN-LC cell
Analyzer
y
z
Figure 6.36 A different scheme of a compensator with two negative c-plates.
phase retardation due to the liquid crystal cell can be compensated by the negative phase retardation of the negative c-plate. A similar compensation scheme is illustrated in Figure 6.36, in which the negative c-plate in Figure 6.35 is divided equally into two plates. The two halves are then placed on both sides of the TN-LC cell. The compensation scheme described in Figure 6.36 preserves the left-right viewing symmetry. To illustrate the effectiveness of the phase retardation compensation, we recalculate the transmission properties of the TN-LCD by including a negative c-plate. Figures 6.31(b) and 6.37(b) show the transmission properties of the TN-LCD (with negative c-plate compensators) as functions of horizontal and vertical viewing angles. Referring to Figures 6.31 and 6.37, we focus our attention on the
0.5
0.5 0V
0.4
Transmission
Transmission
0V
0.3 0.2 0.1
0.4 0.3 0.2 0.1 5.49 V
5.49 V 0 -60
-40
-20
0
20
40
0 -60
60
Vertical Viewing Angle (Deg)
-40
-20
0
20
0.5
0.5
60
0V
0V
0.4 Transmission
0.4 Transmission
40
Vertical Viewing Angle (Deg)
0.3 0.2 5.49 V
0.1
0.3 0.2 0.1 5.49 V
0 -60
-40
-20
0
20
40
Horizontal Viewing Angle (Deg)
(a) Uncompensated
60
0 -60
-40
-20
0
20
40
60
Horizontal Viewing Angle (Deg)
(b) Compensated
Figure 6.37 Viewing characteristics of a typical NW TN-LCD (with two negative c-plates as in Figure 6.36) with an O-mode operation. Notice that the left-right viewing symmetry is preserved in this case.
WIDE VIEWING ANGLE AND HIGH BRIGHTNESS LIQUID CRYSTAL DISPLAYS
181
transmission curves of the field-on state (5.49 Volts). We note that the inclusion of a negative c-plate leads to an improvement in reducing the leakage of light in the dark state at large viewing angles. Further improvement can be obtained at higher applied voltages (e.g. 5.49 Volts). As discussed above, a negative c-plate compensator is designed to correct for the angular dependence of the phase retardation introduced by the optical propagation through the central region of the TN-LC which is almost homeotropically aligned. Such a compensator is effective to the extent that the optical property of this region dominates the field-on state of the liquid crystal cell. This implies that the negative c-plate works best when strong fields are employed for the energized state to achieve nearly homeotropic alignment. The use of a negative c-plate in TN-LCDs has been experimentally demonstrated to significantly reduce the leakage of light in the dark state over an extended field of view, leading to an improved contrast ratio and a reduced color desaturation [7]. Although the negative c-plate is capable of providing a significant improvement in reducing the leakage of light at the dark state, further reduction of the leakage is possible by using more complicated compensators. As we know, homeotropic alignment of the liquid crystal molecules occurs only in the central region of the cell which behaves like a positive c-plate. The regions, near each surface of the cell, behave like positive a-plates each with its optic axis aligned with the rubbing direction of the proximate substrate. Thus, a more precise cancellation of the phase retardation would require the inclusion of two negative c-plates and two negative a-plates. This is illustrated in Figure 6.38. The TNLC is sandwiched between the two negative a-plates which are employed to cancel the phase retardation due to the liquid crystal regions near the surfaces. As shown in Figure 6.38, the optic axes of the negative a-plates must be aligned parallel to the rubbing directions of the adjacent surfaces. The negative c-plates aligned with their optic axes parallel to the z-axis are then placed outside the (-a)plates and between the polarizers. Figure 6.39 shows the improvement of contrast ratio by using such compensators. Polarizer
TN-LC cell
Analyzer y'
z
-c
-a
-a
-c
Figure 6.38 Optical compensators using negative a-plates and c-plates. The index ellipsoids for the TN-LC cell are prolate, and those of the compensators (a, c) are oblate. The c-axes of the negative a-plates are parallel to the rubbing direction of the proximate substrate.
Due to the elastic restoring force inside the LC medium, we find that even with a strong applied field, the liquid crystal cell is a positive birefringent medium with a continuous variation of its director in the cell. Thus, strictly speaking, a complete cancellation of phase retardation would require negative birefringent plates with a similar continuous variation of the tilt and twist angles of the c-axis. Each thin slice of the TN-LC cell with a given tilt and twist can be compensated by a thin layer of the negative birefringent plate with the same tilt and twist angles. A stack of infinite number of thin TN-LC layers can be compensated by a corresponding stack of infinite number of negative birefringence layers in a proper sequence. Such a novel but complicated compensator has been proposed and demonstrated by using films of discotic compound which exhibits a negative birefringence [9]. Such a compensator effectively suppresses the leakage of light in the dark state of TN-LCDs for almost any angle of
182
MOBILE DISPLAYS: TECHNOLOGY AND APPLICATIONS 0.5
0V
Transmission
Transmission
0.5 0.4 0.3 0.2 0.1
0.4 0.3 0.2 0.1
5.49 V
0 -60
-40
-20
0
20
40
0 -60
60
Vertical Viewing Angle (Deg)
0
20
40
60
0.5
0V
0.4
Transmission
Transmission
-20
Vertical Viewing Angle (Deg)
0.5
0.3 0.2 5.49 V
0.1 0 -60
-40
0.4 0.3 0.2 0.1
-40
-20
0
20
40
60
Horizontal Viewing Angle (Deg) Uncompensated
0 -60
-40
-20
0
20
40
60
Horizontal Viewing Angle (Deg) Compensated
Figure 6.39 Viewing characteristics of a typical NW TN-LCD. The TN-LC is sandwiched between two (-c-a)plates.
viewing. The inclusion of such a compensator in TN-LCDs leads to a very high contrast ratio at wideviewing angles. The compensation film also provides a reasonable improvement in the gray scale stability at large viewing angles [9]. Fuji wide viewing films are good approximations of the compensation scheme discussed above [43]. Most TN-LCDs today are equipped with Fuji compensation films. Compensation films with positive birefringence (o-Plate) While the use of the negative c-plate is important for improving the contrast ratio and reducing the color desaturation, the issue of gray scale stability remains unsolved. The problem of gray scale degradation in an uncompensated LCD can be easily seen in Figure 6.32(a), where the transmission curves are not well separated at large viewing angles. A high quality display with gray scale capability requires a gray scale linearity over the field of view. In other words, the brightness levels between the select state (Level 0 with minimum or zero brightness in NW operation) and the nonselect state (Level 7 with maximum brightness in NW operation) must vary linearly (or at least monotonically) with the assigned gray levels. The linearity (or near linearity) requires that the transmission curves be well separated without crossing in the range of viewing angles. To understand the gray scale linearity problem of a conventional TN-LCD, we re-examine the transmission curves in Figure 6.32(a). Notice that the brightness levels near the normal incidence ðy ¼ 0Þ are monotonic functions of the applied voltages. Thus, linearly spaced (brightness) gray levels can be chosen with a set of properly assigned applied voltages. We also note that the linearly spaced brightness levels are reasonably well maintained at large horizontal viewing angles (see Figure 6.32a), except the transmission curve of 3.35 Volts. The gray scale linearity problem appears when the vertical viewing angle varies (see Figure 6.32a). We note that each of the transmission curves decreases initially with the positive vertical viewing angle, and
WIDE VIEWING ANGLE AND HIGH BRIGHTNESS LIQUID CRYSTAL DISPLAYS
183
then rebounds after reaching a minimum (near zero). Notice that the zero transmission angle (point of rebound) decreases with the applied voltage. The transmission curves cross after the rebounds. The crossing of the transmission curves removes the separation needed for linearity. This is the region of gray scale reversal (inversion). In a conventional TN-LCD, the gray scale linearity needed for high quality displays disappears in the region near the zero transmission angles (points of rebound). The gray scale reversal can lead to a severe problem in the display of desired colors as the gray scale is varied. To solve the gray scale linearity problem, we must first understand the variation of the transmission (brightness level) as a function of the applied voltage at various viewing angles. Referring to Figure 6.40, we examine the mid-layer tilt angle of the liquid crystal director in the cell as the applied voltage increases. The mid-layer tilt angle is a measure of the average orientation of the liquid crystal director in the cell. The tilt angle of the liquid crystal director is initially zero (except the pretilt) with no applied voltage. The tilt angle then increases with the applied voltage and reaches almost 90 at the highest applied voltage. We note that for viewing at normal incidence, the phase retardation is highest at the nonselect state voltage (zero voltage) when the liquid crystal director is perpendicular to the viewing direction. Maximum transmission (brightness) is obtained in this state due to the waveguiding and the crossed polarizers. As the applied voltage is increased, the liquid crystal director is tilted toward upper forward direction, leading to a smaller phase retardation and less waveguiding for the viewing at normal incidence. This leads to a correspondingly lower transmission. The phase retardation decreases with the mid-layer tilt angle which increases monotonically with the applied voltage. Thus, for viewing at normal incidence ðy ¼ 0 Þ, the brightness level decreases monotonically with increased voltage. This explains the gray scale linearity for viewing at normal incidence.
Increasing voltage Backlight
y
Normal incidence z θ=0
Mid-layer liquid crystal tilt
Figure 6.40 Schematic drawing of the mid-layer liquid crystal tilt (in yz-plane) in the cell.
Let us now consider the viewing from positive vertical angles (light propagating in the upper forward direction for viewers looking down at the TN-LCD). At some intermediate applied voltage level, the mid-layer tilt angle is pointed toward the viewing direction. For this applied voltage and the viewing angle, the viewer is looking along the director of the liquid crystal at mid-layer in conventional TN-LCDs. The transmission in this case is near zero due to the vanishing phase retardation. For viewing at this angle, the viewer sees a brightness which initially decreases with the voltage, reaching a
184
MOBILE DISPLAYS: TECHNOLOGY AND APPLICATIONS
minimum (near zero) at an intermediate voltage when the mid-layer director is tilted toward this particular direction. The brightness then rebounds beyond this intermediate voltage, destroying the linearity and creating a gray level reversal problem. For the negative vertical viewing angles, the midlayer liquid crystal director is always oriented at a large angle around 90 relative to the viewing direction. The average director at these orientations provides large phase retardations and waveguiding which lead to higher transmissions in the negative viewing angles. As a result of the large phase retardation at these viewing angles, the viewer can see a monotonic decrease in the brightness as the voltage is increased. Based on this discussion, we can eliminate the reversal of gray levels and improve gray scale stability by including a special birefringent thin film as a phase retardation compensator. The birefringent thin film must provide proper positive phase retardations for positive vertical viewing angles. In addition, the film must also provide a zero phase retardation for normal incidence. We first consider the possibility of using a thin film of positive uniaxial birefringent material. The most important issue then is the orientation of its c-axis. As we know, a positive c-plate will severely degrade the contrast ratios at large viewing angles. Thus, such a positive birefringent thin film must be oriented at an angle other than the z-axis. To maintain the left-right symmetry of the viewing, it is natural to orient the c-axis in the central vertical plane (yz-plane). Knowing the severe problem due to the viewing along the average director orientation of the liquid crystal in the positive vertical angles, we can orient the c-axis of the positive birefringent plate along a direction which is perpendicular to the mid-layer liquid crystal director at the middle gray level. The mid-layer tilt angle for most TN-LCDs is around 45 toward the upper forward direction. Thus, a natural choice for the orientation of the c-axis of the positive birefringent compensator is around 45 with the c-axis along the lower forward direction. This is known as an o-plate due to its substantially oblique orientation of the c-axis relative to the display plane (xy-plane). Such an orientation with the proper thickness would somewhat symmetrize the vertical viewing characteristics for the gray level when the mid-layer tilt is about 45 . The o-plate would also push the zero transmission angles toward larger angles beyond the viewing angles of interest. To maintain the same viewing characteristics at normal incidence, the compensator must be configured to ensure that no phase retardation is introduced for light traversing the liquid crystal cell at normal incidence. This can be accomplished by adding a positive a-plate with its c-axis perpendicular to the o-plate. Further numerical simulation can be employed to fine tune the orientations of the c-axes of both the o-plate and the a-plate for optimized overall performance, including gray scales and contrast ratios. Figure 6.41(a) shows a schematic drawing of the concept of o-plate compensator. The birefringent wave plates can also be split into two equal parts and then placed on both sides of the TN-LC cell [see Figure 6.41(b)]. This latter scheme of birefringence compensation preserves the horizontal viewing symmetry. It is clear that by applying the compensator, we can significantly improve the gray level stability and viewing angles [see Figure 6.32(b)]. 
We note that with the compensator, there are no more crossings throughout 60 to 60 horizontal viewing angles. In the vertical viewing direction, the crossings are pushed from þ20 (uncompensated) to around þ35 (compensated). There are several different variations of the arrangement of the o-plate and a-plate, including the addition of c-plates in TN-LCDs. Interested readers are referred to [30,31]. To demonstrate the preservation of left-right viewing symmetry, Figure 6.42 shows the transmission contour plot of a compensated TN-LCD with the split (þo, þa) compensator shown in Figure 6.41, where the o-plate is tilted 65 with respect to the z-axis. The TN-LCD is at the select state corresponding to the dark state. With the compensator the dark area (e.g. with a transmission less than 0.05) has been significantly widened [29] compared to an uncompensated LC cell (e.g. Figure 6.37a).
Biaxial compensation film We consider the possibility of using a biaxially birefringent thin film as a compensator to improve the gray scale characteristics. For the purpose of discussion, let the principal refractive indices be
WIDE VIEWING ANGLE AND HIGH BRIGHTNESS LIQUID CRYSTAL DISPLAYS Polarizer o-plate a-plate
TN-LC cell
185
Analyzer y
z
(a) Polarizer
TN-LC cell
Analyzer y
z
+o
+a
+a
+o
(b) Figure 6.41 (a) Schematic drawing of the concept of o-plate compensator (side view). The c-axis of the a-plate is parallel to the x-axis. (b) A different scheme of a compensator with two positive o-plates and two positive a-plates. The latter scheme preserves the horizontal viewing symmetry. All ellipsoids in the figures are prolate in shape.
Figure 6.42 Transmission contour plot of a compensated TN-LCD with the split compensator shown in Figure 6.41.
186
MOBILE DISPLAYS: TECHNOLOGY AND APPLICATIONS
na < nb < nc. Here, a, b, c are the principal axes of the biaxial medium. Being biaxial, the thin film has two optic axes. There is no phase retardation for the propagation of light along the optic axes. These two optic axes are in the ac-plane on both sides of the c-axis in the direction, nc tan y ¼ na
sffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffiffi ðn2b n2a Þ ðn2c n2b Þ
ð163Þ
where y is measured from the c-axis. We note that in the limit when nb ¼ na, y becomes zero. In this case, the two optic axes become the c-axis. That is why we use the c-axis to represent the optical axis in uniaxial media. We now consider the proper orientation of the principal axes of the thin biaxial film. To maintain the left-right viewing symmetry, we consider the orientation with the b-axis parallel to the x-axis, which is in the horizontal direction, and the two optic axes in the vertical plane (yz-plane) containing the z-axis. Figure 6.43 shows the schematic drawings of the biaxial compensator. For the purpose of maintaining Biaxial film
Polarizer
c
TN-LC cell
Analyzer
y
a z
b
(a)
Polarizer
TN-LC cell
c
a
Analyzer
c
y
a z
b
b
(b) Figure 6.43 Schematic drawings of the biaxial film compensator. The split configuration in (b) preserves the leftright viewing symmetry.
the same viewing characteristics for normal incidence, we can orient the biaxial thin film such that one of the optic axes is pointed along the z-axis. This ensures zero phase retardation for light propagating along the z-axis (normal incidence). The c-axis is pointed in the lower forward (having no polarity, this is the same as upper backward) direction. Figure 6.43(b) is a configuration which preserves the leftright viewing symmetry. For viewing at normal incidence, the biaxial thin film exhibits no phase retardation due to the orientation of one of the optics axes being parallel to the z-axis. Thus, a single biaxial film has the
WIDE VIEWING ANGLE AND HIGH BRIGHTNESS LIQUID CRYSTAL DISPLAYS
187
combined effect of an o-plate plus an a-plate. For positive viewing angles, the light is propagating along the direction of the mid-layer director which offers no phase retardation. The presence of the biaxial film offers a birefringence of ðnc nb Þ as the a-axis is also in the same direction of viewing. This additional phase retardation can remove the zero transmission at these angles. The viewing characteristics of a TN-LCD with a pair of biaxial compensators is similar to those in Figure 6.32(b) with a pair of o- and a-plates compensators. To further improve viewing characteristics, we need to reduce the leakage of light at large angles. This can be achieved by additional negative c- and a-plates. By adjusting various parameters for compensators, we can improve the gray level stability and viewing angles. An example of compensated viewing characteristics is shown in Figure 6.44. We notice that the compensators have dramatically improved the gray level stability of the LCD. In the horizontal direction, the curves are much flatter than before (see Figure 6.32). In the vertical direction, there is no crossing between 20 and þ40 . However, the lowest gray level in the vertical viewing direction has a significant leakage of light. This leads to lower contrast ratio and color desaturation. The viewing angle characteristics can be further improved by adjusting the compensator parameters (including orientation, birefringence and thickness). 0.5
Transmission
0.4 0.3 0.2 0.1 0 -60
-40 -20 0 20 Vertical Viewing Angle (Deg)
40
60
-40 -20 0 20 40 Horizontal Viewing Angle (Deg)
60
0.5
Transmission
0.4 0.3 0.2 0.1 0 -60
Figure 6.44 Vertical and horizontal viewing characteristics at various gray levels of a typical TN-LCD with the inclusion of a pair of biaxial plus uniaxial compensators.
The biaxial film compensators discussed above require a tilted orientation of the principal axes. These oriented films are relatively difficult to manufacture. There are other configurations of using biaxial films in TN-LCDs and VA-LCDs. The simplest scheme involves the use of biaxial film with its normal (z-axis) parallel to one of the principal axes. This principal axis often corresponds to the axis
188
MOBILE DISPLAYS: TECHNOLOGY AND APPLICATIONS
with the largest principal index of refraction. As a result, this films behaves approximately a negative c-plate, and thus can be employed to compensate the positive birefringence of the LC cell. The difference in the other two principal indices (nx, ny) has the effective property of an a-plate. Such a property can be employed to eliminate the leakage of light due to polarizers [43].
6.2.5 Summary of Section 6.2 In summary, we have described the extended Jones matrix method which can be employed to analyze the transmission property of general birefringent networks at large viewing angles. The birefringent network consists of a sequence of arbitrary dielectric plates including uniaxial and biaxial media and gyrotropic materials which exhibit optical rotation and Faraday rotation. The method is employed to obtain an explicit form of the extended Jones matrix of a liquid crystal display medium. In addition, we have discussed the principle of operation of various birefringent compensators that can be employed to improve the viewing angle characteristics and gray scale stability of conventional twisted nematic liquid crystal displays (TN-LCDs). The birefringent phase retardation film compensators include negative uniaxial, positive uniaxial and biaxial films of various orientations. We show that a combination of birefringent thin films can be employed to improve the contrast ratio and gray scale stability at large viewing angles. These birefringent thin films can also be arranged to maintain the leftright viewing symmetry. The extended Jones matrix method described in this chapter offer a simple, intuitive and efficient way to investigate the transmission properties of various LCDs at large viewing angles.
6.3 High Brightness LCDs with Energy-Efficient Backlights This section presents an introduction to the design and analysis of backlights for LCDs. It briefly describes the components of the conventional backlight systems, and introduces light-guide plate (LGP) based on the v-cut technology. It also introduces the concept of the polarized LGP, which emits linear polarized light, and describes the design and fabrication of the polarized LGP.
6.3.1 Overview All transmissive LCDs require a lighting system often referred to as backlight system. The light source is always located at the edge of the backlight system to minimize the thickness of the LCD. This kind of backlight system is called edge-lit backlight system. A conventional edge-lit backlight system consists of a light source, a lamp reflector, a Light Guide Plate (LGP) and some optical sheets, such as the reflection sheet, the diffusion sheet and two crossed brightness-enhancement films (BEF, prism sheets), as shown in Figure 6.45. The light source in backlight systems can be Cold Cathode Fluorescent Lamp (CCFL) or LightEmitting Diode (LED). The optical efficiency of CCFL (about 90lm/W) is much higher than that of LED (about 30lm/W). However, the LED backlight system has the advantage of green technology (no Hg), high reliabilities, distinct colors, long lifetime, etc. compared with the CCFL backlight system. At present, the CCFL is used as light sources in large-size LCDs, and the small-size LCDs use one or more LEDs. With the development of the LED technology, we believe that large size backlight systems will also adopt LEDs as their light sources in the near future. A lamp reflector is used to improve the coupling efficiency of the source and the LGP. In the backlight systems, the LGP is an important component. As shown in Figure 6.44, in the backlight system, the light rays from the source are incident on the side surface of the LGP and propagate in it based on the principle of total internal reflection, and then the rays are scattered by the
WIDE VIEWING ANGLE AND HIGH BRIGHTNESS LIQUID CRYSTAL DISPLAYS
189
Figure 6.45 Schematic diagram of a conventional backlight system for LCDs.
patterns (dots, as shown in Figure 6.44) at the bottom of the LGP. Because of the scattering, the total internal reflection is not satisfied any more, and some rays are extracted and emitted from the top surface of the LGP. By adjusting the patterns distribution density and pattern size, the backlight illumination uniformity can be controlled. The LGP side surface adjacent to the CCFL is named as light-incident surface and the LGP top surface is named as light-emission surface. The pattern distribution density and pattern size can be defined as a fill factor. In the design of the backlight system, one challenge is to uniformly extract the light in the normal direction of the panel. There are several technologies used to extract light from LGP. The most common methods are printed light extraction and molded light extraction. In the printed light extraction method, an array of white dots is printed at the bottom surface of LGP. Some large size backlight systems, such as the backlight for laptop computers, use this technology. For the molded light extraction technology, the molded features are added to the LGP, instead of a printing process. The molded features are added to the light-incident surface, the bottom surface or the light-emission surface of the LGP. Molding techniques include chemical-etching, laser-etching, and microstructures technologies. Molded light extraction is a less expensive than printed light extraction, and is widely used in small size backlight systems. The LGP can be rectangular or wedged in shape, and is made of polymethyl methacrylate (PMMA), polycarbonate (PC) or polyolefins. To reduce the optical loss from the LGP bottom surface, one reflection sheet is placed under the LGP. On the top of LGP, there is at least one diffuser to make the spatial and angular luminance distribution more uniform. Without the diffuser, the patterns at the LGP bottom surface could be seen by the viewer. In conventional backlight systems, two crossed BEFs (prism sheets) are laid above the diffuser to collimate and enhance the brightness along the normal direction of the panel. The backlight system has many optical sheets, such as reflection sheet, diffuser, BEFs, DBEF, et al. The functions of all layers in the LCD are summarized in Table 6.1, below. These optical layers increase the cost and the thickness of the whole LCD system. Today, notebook computers, palm-size PCs, personal digital assistants, mobile phones are all equipped with transmissive color thin-film transistor (TFT) LCDs. For these portable information terminals, high battery-life is very important, but the backlight is the greediest power consumer of all system components. The power consumption of a typical backlight is more than 60% of the total system power of LCD modules. For some notebook computers, the backlight power consumption is more than 30% [44]. However, the optical efficiency in the LCD with conventional backlight is very low, less than 5%, as shown in Figure 6.46. To satisfy the market demand for LCDs, it is more and more necessary to modify the backlight system and make it thinner, lighter, and brighter all at once. The development of the energy-efficient integrated backlight system is especially important.
190
MOBILE DISPLAYS: TECHNOLOGY AND APPLICATIONS Table 6.1 The functions of each layer in an LCD system. Layer
Function
Explanation
AG layer AR layer Front Polarizer Compensator
To diffuse the ambient light. To distract the light from ambient light. Linear polarized light analyzer. To enhance contrast ration and extend viewing angle.
CF Substrate
Anti-glair Anti-reflection Polarization Optical Phase Compensation Color Filter
PI Film
LC Alignment
PI Film
LC Alignment
TFT-array substrate Compensator Rear Polarizer
Drive element Optical Phase Compensation Polarization
DBEF/PCF
Enhance Brightness
Protective Diffuser BEF II
Protection and Scatter Light Collection Light
BEF I
Collection Light
Diffuser
Scatter light
LGP Reflection Sheet
Guidance light Reflect light
Lamp Reflector
Collection and reflection
CCFL
Light source
To offer the Red, Green, Blue color pixel for color display. To offer LC alignment direction and control the direction of LC. To offer LC alignment direction and control the direction of LC. To drive LC pixels. To enhance contrast ration and extend viewing angle. To generate linear polarized light from unpolarized light. Polarizing beam splitter, improve the optical efficiency. Protect front optical film and uniform light guided from prism. To collimate the light toward the viewer in the perpendicular direction. To collimate the light toward the viewer in one direction. Eliminate reflective light shadow of LGP and offer LCD uniform area light. To balance the light on the plate. Reflecting the light to LGP, recycling the leaking out light. To guide the light into LGP, improve optical coupling efficiency. To offer the light.
6.3.2 Backlight without Optical Films
Recently, many researchers have devoted a tremendous amount of effort toward improving the quality of backlight systems. Käläntär, Matsumoto, and Onishi [45] have proposed a backlight system with a modified LGP made of PMMA, in which only one prism sheet is used instead of the one diffusion sheet and two collimating prism sheets of the conventional type. In their paper, the LGP is characterized by optical micro-deflectors and micro-reflectors. As shown in Figure 6.47, there is a prismatic structure on the top surface of the LGP and one prism sheet is set on the LGP; the prisms face away from the viewer. Owing to total internal reflection, the illumination cone of the proposed backlight is limited to a horizontal range of 17 degrees and a vertical range of 11 degrees. It is reported that the peak brightness of this backlight is 1.44 times that of the conventional one. Because only one prism sheet is used in this backlight structure, a thickness reduction of 250 µm is achieved.

Usually, researchers consider transparent polymers, such as PMMA, more suitable for LGPs because they minimize scattering. However, Akihiro Tagaya et al. [46] have designed and fabricated a novel LGP made of a highly scattering optical transmission (HSOT) polymer. The schematic diagram of their backlight is shown in Figure 6.48.
Figure 6.46 Schematic diagram of optical power efficiency in an LCD. The light transmission of the whole system is less than 5%.
Figure 6.47 Schematic diagram of Käläntär's proposed backlight system, in which one prism sheet is used instead of one diffuser and two prism sheets. The right part is the cross-section schematic of the PMMA LGP.
Figure 6.48 Schematic diagram of the HSOT backlight system, in which one prism sheet is used instead of one diffuser and two prism sheets. The right part is the cross-section schematic of the HSOT LGP.
A structure with a refractive index different from that of the surrounding homogeneous medium scatters light. The HSOT polymer contains transparent spherical particles which cause such scattering. Based on Mie scattering theory, the inhomogeneous structure of the HSOT is optimized to produce homogeneously scattered light in the forward direction. In the HSOT LGP, the injected light is scattered and emerges from the LGP. As shown in Figure 6.48, in the y-z plane the light is controlled by the prismatic structure on the bottom surface of the LGP; the incline angle of the output light from the LGP is about 70°. The light is then collimated along the z direction in the x-z plane by the top prism sheet. Compared with a conventional backlight (dot-pattern light guide and two crossed prism-up film sheets), the scattered light is distributed over a very narrow range of angles (i.e. high directivity), and the optical efficiency is higher. It is reported that the HSOT backlight system achieved twice the axial brightness of conventional backlight systems, about 2300 cd/m2 [45]. However, the luminance viewing angle of this solution is not as large as that of the conventional backlight. Because the prism structure is on the bottom surface of the prism sheet, configurations of this kind are often called 'turning film' backlight systems.

Later, Takemitsu Okumura et al. [47] reported another HSOT backlight system that has no optical sheets and can direct the light into the front direction. This backlight system is named HSOT-II, and its schematics are shown in Figure 6.49. As reported, the prismatic structure reflects the light directly into the z direction in the x-z plane. The condensed light distribution is obtained by the scattering of the HSOT and the multiple refractions and reflections of the prism arrays. The prism angles are optimized so that there is little optical loss as the light passes through the prisms. This backlight requires no optical sheets, including the reflection sheet, and its thickness is very small. A high optical efficiency of 62% is achieved.
Figure 6.49 Schematic of the HSOT-II backlight system in which no optical sheet is used.
The LGPs proposed by Takemitsu Okumura are made of an HSOT polymer rather than traditional polymers such as PMMA or polycarbonate (PC), and it is not easy to design such an LGP according to customers' specific requirements. The micro-prismatic structure has the ability to control the illumination angle along one direction; hence, it is common to design the micro-prismatic structure along the x axis on the top surface of an LGP. The v-cut (v-type groove) structure on the bottom surface of the LGP can create well-collimated light in the x-z plane owing to specular reflection. With the prismatic structure and v-cut patterns, the illumination angles in both the x direction and the y direction can be controlled easily. Based on the v-cut technique, a backlight system without prism sheets can be designed as shown in Figure 6.50 and Figure 6.53. The v-cut patterns can be concave (as shown in Figure 6.50) or convex (as shown in Figure 6.53). According to the customer's requirements, the angles of the v-cut can be optimized to achieve a certain angular distribution of illumination. The simulated angular profile of the backlight in Figure 6.50 is shown in Figure 6.51; the emitted light is well collimated. The measured result, shown in Figure 6.52, agrees with the simulation. As shown in Figure 6.53, the convex v-cut LGP can also control the illumination angle.
Figure 6.50 Schematic diagram of the backlight system with the v-cut LGP. The v-cut is concave. The right part is the cross-section schematic of the LGP.
Figure 6.51 Simulated angular profile of the concave v-cut backlight. The angular distribution of illumination is well-collimated.
Figure 6.52 Measured angular profile of the concave v-cut backlight. The angular distribution of illumination is well-collimated.
Figure 6.53 Schematic diagram of the backlight system with the v-cut LGP. The v-cut is convex.
As shown in Figure 6.54, owing to the specular reflection from the high-reflectivity coating on the bottom surface of the v-cut LGP, most of the light is easily redirected into the z direction, and the reflection sheet is no longer needed [48].
Figure 6.54 Schematic diagram of the backlight system with coated LGP. No optical sheet is needed in this system. The right part is the cross-section schematics of the LGP.
In order to control the illumination uniformity, the density and size distribution of the v-cut patterns vary along the x direction. However, because the v-grooves run along the direction of light propagation (the x direction), it is not as easy to obtain a uniform light distribution as with a dot-pattern backlight. If an LED is selected as the light source instead of a CCFL, the illumination uniformity is even more difficult to control. By properly designing the fine structure on the LGP, the optical sheets can be removed from the backlight system, which reduces optical absorption and scattering losses. The brightness along the normal direction of the panel can be significantly improved by controlling the illumination angle. That is to say, the optical efficiency is increased both by shaping the angular distribution and by reducing the number of optical sheets.
6.3.3 Polarized Light-Guide Plate Based on the Sub-Wavelength Grating
In the backlight systems above, the output light is unpolarized; therefore they are called unpolarized backlight systems. These unpolarized backlights have low optical efficiency owing to the lack of polarization conversion: to generate linearly polarized light, about 60% of the energy is absorbed by the rear polarizer of the LCD panel, as shown in Figure 6.46, which greatly decreases the overall energy efficiency. For high optical efficiency, recent research has focused on polarized backlight systems, which recycle the unwanted polarization and emit linearly polarized light directly. In these systems, the two
essential elements are a reflective polarizing beam splitter (PBS) and a polarization converter. The reflective PBS separates light of different polarizations; e.g. it transmits the p-polarized light and reflects the s-polarized light. The reflected s light is then converted into p light by the polarization converter, such as a quarter-wave plate or a polarization scrambler, and is subsequently transmitted by the PBS. Finally, both the p-polarized and the s-polarized light are utilized, and the optical efficiency is improved greatly. In the following figures, a dot inside a circle denotes s-polarized light, and a double arrow denotes p-polarized light. The extinction ratio is defined as the transmittance of p-polarized light divided by that of s-polarized light (Tp/Ts). Pang and Li have proposed a high-efficiency polarized backlight system based on a broadband thin-film PBS on the bottom surface of the LGP [49], as shown in Figure 6.55. The metal-dielectric PBS separates light with different polarization states. The p light transmits through the PBS and is then reflected by the micro-prism sheet toward the z direction. The s light is converted to p light by the quarter-wave plate at the end of the LGP, and the converted p light can then also emerge from the LGP. The light recycling is realized by the PBS and the wave plate. In simulations, this backlight can achieve a high efficiency of 80% across the visible band, which is unattainable for an unpolarized backlight.
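The benefit of this recycling scheme can be illustrated with a small bookkeeping model. The sketch below is not the chapter's ray-tracing simulation; the transmittance, conversion-efficiency and round-trip-loss values are illustrative assumptions.

```python
def polarized_backlight_output(T_p=0.95, R_s=0.95, conv_eff=0.95, loop_loss=0.10):
    """Single-recycle bookkeeping for a PBS + polarization-converter backlight.

    Unpolarized input is split 50/50.  The p half leaves through the PBS
    directly; the s half is reflected (R_s), converted to p with efficiency
    conv_eff, loses a fraction loop_loss on its round trip, and then exits
    through the PBS.  All parameter values are illustrative assumptions.
    """
    direct = 0.5 * T_p
    recycled = 0.5 * R_s * conv_eff * (1.0 - loop_loss) * T_p
    return direct + recycled

def extinction_ratio(T_p, T_s):
    """Extinction ratio as defined in the text: Tp / Ts."""
    return T_p / T_s

if __name__ == "__main__":
    print(f"modelled output: {polarized_backlight_output():.2f} of the injected light")  # ~0.86
    print(f"example extinction ratio (Tp=0.90, Ts=0.009): {extinction_ratio(0.90, 0.009):.0f}")
```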
Figure 6.55 Schematic diagram of polarized backlight based on thin film polarizing beam splitter.
Due to volume scattering and depolarization, some unpolarized light can leak out of the LGP from the top surface, which decreases the extinction ratio. Indeed, the measured extinction ratio is not high: for red light it is about 60, and for blue light it is only 10. Because there are no patterns on the bottom surface of the LGP, the backlight uniformity is difficult to control. Jagt et al. reported a polarized backlight system in which the s-polarized light is extracted by selective total internal reflection at microgrooves in an anisotropic layer [50–52], as shown in Figure 6.56. The PMMA substrate of the LGP has a relief structure, which is designed to have a top angle of 52° and a depth of 6 µm.
Figure 6.56 Schematic of the backlight system based on selective total internal reflection.
A birefringent film made of a uniaxially aligned liquid crystalline polymer (LCP) adheres to the relief layer and is mechanically deformed such that the relief structures are pressed into the LCP. The extraordinary refractive index of this film is 1.66, and the ordinary refractive index is 1.51, which is almost equal to the refractive index of the isotropic substrate. At the interface between the two layers, the s-polarized light is extracted by reflection, while the p-polarized light continues its propagation inside the substrate; this is known as selective total internal reflection. The p-polarized light is converted by the quarter-wave plate at the end of the LGP. In this system, along the normal viewing direction, the extinction ratio can be as high as 64. Compared with an unpolarized backlight, a gain factor of 1.6 is obtained. However, the smoothness requirements on the microgroove (relief structure) surfaces are stringent in order to reduce scattered light, and the smoothness of the microgrooves has an important effect on the extinction ratio.

Ko-Wei Chien et al. reported a polarized LGP based on a slot structure and a two-layer sub-wavelength grating [53], as shown in Figure 6.57. The grating has two layers: aluminum (100 nm thick) and SiO2 (200 nm thick). The period is 200 nm and the duty cycle is chosen as 0.5. The sub-wavelength grating, acting as a PBS, transmits the p-polarized light and reflects the s-polarized light. The slot structure is designed on the bottom surface of the LGP to couple light out of the LGP; this light is then reflected by the reflection sheet. At the LGP top surface, the p component of the light transmits through the sub-wavelength grating to illuminate the LCD panel, while the s component is reflected. The s light is then converted into p light by the quarter-wave plate under the LGP and finally emerges from the top surface of the LGP. In this system, a gain factor of 1.7 in polarization efficiency is achieved compared with an unpolarized backlight system, and the illumination uniformity is 80%.
Figure 6.57 Schematic of the backlight system proposed by Ko-Wei Chien.
As a sub-wavelength grating, however, the period of this grating is too large: optical diffraction occurs at the top surface, which reduces the extinction ratio. Indeed, the measured extinction ratio is only about 10. Additionally, the slot structures on the bottom surface of the LGP cannot easily control the illumination angle.

All of the above polarized backlight systems require achromatic quarter-wave plates to achieve the polarization conversion. Owing to surface reflection and scattering caused by the quarter-wave plate, a reduction of about 10% in the throughput of light usually occurs. If the quarter-wave plate could be removed, the polarized backlight system would be more efficient and more compact. In the following, an integrated polarized backlight system that is thin and highly efficient and requires no quarter-wave plate is described in detail [54]. Moreover, because the extinction ratio is high enough, even the rear polarizer can be removed from the LCD system. This backlight consists of a light source, a novel LGP and a reflection sheet. The novel LGP is developed for three wavelengths, 625 nm, 533 nm and 452 nm, emitted by red, green and blue (RGB) LEDs, respectively.
The LGP substrate with the stress-induced birefringence realizes the conversion of polarized light, and an aluminum SWG designed on the top surface acts as a PBS to extract linearly polarized light.
6.3.3.1 Polarized Light Conversion
In this section, the stress-induced birefringence is introduced and the stress is optimized. The stress-optical law of plane photoelasticity can be expressed as
\[ \Delta n = n_{y'} - n_{x'} = C\,\Delta\sigma \qquad (164) \]
The amount of produced birefringence (\(\Delta n\)) is proportional to the stress difference (\(\Delta\sigma = \sigma_{y'} - \sigma_{x'}\)), provided the stress is not too large. \(C\) denotes the stress-optical coefficient.
Figure 6.58 Coordinate system of the LGP substrate with the applied stress.
As shown in Figure 6.58, the stresses are applied along the \(x'\) and \(y'\) directions, respectively. The \(x'\) axis is perpendicular to the \(y'\) axis, and the angle between the \(x\) axis and the \(x'\) axis is denoted as \(\beta\). The thickness of the LGP substrate is \(a\). For a light wave that passes through the substrate twice, the phase retardation can be written as
\[ \Gamma = 2\pi C\,\Delta\sigma\,L/\lambda, \qquad (165) \]
where \(\lambda\) is the wavelength of the incident light and \(L = 2a\). In the \(x\)-\(y\) coordinates, the Jones matrix \(T\) of the substrate can be expressed as
\[ T = R(\beta)\,T'\,R(-\beta) = \begin{pmatrix} \cos\beta & -\sin\beta \\ \sin\beta & \cos\beta \end{pmatrix} \begin{pmatrix} 1 & 0 \\ 0 & e^{j\Gamma} \end{pmatrix} \begin{pmatrix} \cos\beta & \sin\beta \\ -\sin\beta & \cos\beta \end{pmatrix}. \qquad (166) \]
The incident light polarized in the \(y\) direction can be written as \(E_i = A \begin{pmatrix} 0 \\ 1 \end{pmatrix}\); here \(A\) denotes the light wave amplitude. The light that passes through the substrate twice can then be written as
\[ E_o = \begin{pmatrix} E_{ox} \\ E_{oy} \end{pmatrix} = T E_i = A \begin{pmatrix} \sin\beta\cos\beta - \sin\beta\cos\beta\, e^{j\Gamma} \\ \sin^2\beta + \cos^2\beta\, e^{j\Gamma} \end{pmatrix}. \qquad (167) \]
The intensity of the \(x\)-polarized light transmitted from the PBS is
\[ I_{ox} = A^2 \sin^2 2\beta\,\sin^2(\Gamma/2) = A^2 \sin^2 2\beta\,\sin^2(\pi C\,\Delta\sigma\,L/\lambda). \qquad (168) \]
Under the conditions
\[ \beta = \pi/4, \qquad \Gamma = 2k\pi + \pi \quad (k = 0, 1, 2, 3, 4, \ldots), \qquad (169) \]
the intensity \(I_{ox}\) reaches the maximum value \(A^2\), and the efficiency of polarization conversion is 100%. Equation (169) means that the LGP with the applied stress is similar to a quarter-wave plate. For an achromatic backlight system, the phase retardation value should be close to \(2k\pi + \pi\) for the multiple wavelengths \(\lambda_R\) (625 nm), \(\lambda_G\) (533 nm) and \(\lambda_B\) (452 nm). Hence, the stress difference should be optimized. The optimization problem can be expressed as
\[ y = \min\{\, w_R\,\mathrm{abs}[\mathrm{mod}(\Gamma_R, 2\pi) - \pi] + w_G\,\mathrm{abs}[\mathrm{mod}(\Gamma_G, 2\pi) - \pi] + w_B\,\mathrm{abs}[\mathrm{mod}(\Gamma_B, 2\pi) - \pi] \,\}, \qquad (170) \]
where mod denotes the modulus after division, abs returns the absolute value, and \(\Gamma_R\), \(\Gamma_G\) and \(\Gamma_B\) denote the phase retardation values for wavelengths \(\lambda_R\), \(\lambda_G\) and \(\lambda_B\), respectively. \(w_R\), \(w_G\) and \(w_B\) denote the weight factors for the red, green and blue light, respectively. In our design, all the weight factors are set to 1.0. The LGP substrate of 0.8 mm thickness is made of bisphenol-A polycarbonate (BAPC), which is a traditional plastic material widely used in backlight systems. The value of BAPC's stress-optical coefficient \(C\) is \(8.9 \times 10^{-12}\,\mathrm{Pa}^{-1}\) [55]. The objective function value (\(y\)) with respect to the stress difference is plotted in Figure 6.59.
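As a sanity check, the objective of Equation (170) can be scanned numerically. The sketch below uses the parameter values quoted in the text (C for BAPC, a = 0.8 mm, and the three LED wavelengths); it is meant to illustrate the shape of the objective and the conversion efficiency of Equation (168) at the scan's own optimum, not to reproduce the exact optimum of Figure 6.59, which depends on the precise values of these parameters.

```python
import numpy as np

# Parameters quoted in the text.
C = 8.9e-12                       # stress-optical coefficient of BAPC, 1/Pa
a = 0.8e-3                        # substrate thickness, m
L = 2.0 * a                       # double pass, Equation (165)
wavelengths = np.array([625e-9, 533e-9, 452e-9])   # lambda_R, lambda_G, lambda_B
weights = np.array([1.0, 1.0, 1.0])

def retardations(delta_sigma):
    """Gamma for each wavelength, Equation (165)."""
    return 2.0 * np.pi * C * delta_sigma * L / wavelengths

def objective(delta_sigma):
    """Equation (170): weighted distance of each retardation from an odd multiple of pi."""
    return np.sum(weights * np.abs(np.mod(retardations(delta_sigma), 2.0 * np.pi) - np.pi))

def conversion_efficiency(delta_sigma):
    """Equation (168) with beta = pi/4, evaluated per wavelength."""
    return np.sin(retardations(delta_sigma) / 2.0) ** 2

sigmas = np.linspace(6e7, 12e7, 60001)            # scan range of Figure 6.59
values = np.array([objective(s) for s in sigmas])
best = sigmas[np.argmin(values)]
print(f"best stress difference in this scan: {best:.3e} Pa (objective {values.min():.3f})")
print("conversion efficiencies (R, G, B):", np.round(conversion_efficiency(best), 3))
```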
Figure 6.59 The objective function value with respect to the stress difference.
When \(\Delta\sigma = 10.31 \times 10^{7}\) Pa, the local minimum value of Equation (170) is obtained. The phase retardations of the red, green and blue light are
\[ \Gamma_R = 46\pi + 0.97\pi, \qquad \Gamma_G = 54\pi + 1.07\pi, \qquad \Gamma_B = 64\pi + 0.94\pi. \qquad (171) \]
Compared with the ideal achromatic wave plate, the maximum error is only 7%. When \(\beta = \pi/4\), Equation (168) indicates that the polarization conversion efficiency of this substrate is more than 99%; the 7% error in phase retardation leads to only a 1% decrease in conversion efficiency for the target monochromatic light. The conversion efficiency is a function of wavelength, as shown in Figure 6.60: for a wider spectral band, the efficiency is lower. Nevertheless, the integrated conversion efficiency over the spectrum of a normal red LED, as shown in Figure 6.60, is still higher than 60%, and the narrower spectrum benefits the color saturation of LCDs.
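The spectrum-weighted figure quoted above can be estimated with a short numerical integration. In the sketch below the double-pass retardance is taken from Equation (171), the red-LED spectrum is approximated by a Gaussian of assumed 20 nm width (the measured spectrum of Figure 6.60 is not tabulated here), and the efficiency follows Equation (168) with β = π/4; the printed number therefore depends on these assumptions and is only an order-of-magnitude check.

```python
import numpy as np

# Retardance taken from Equation (171): Gamma_R = 46.97*pi at 625 nm,
# i.e. an effective C*dsigma*L of 46.97 * 625 nm / 2.
cdl = 46.97 * 625e-9 / 2.0

lam = np.linspace(580e-9, 670e-9, 4001)
fwhm = 20e-9                                    # assumed red-LED linewidth (FWHM)
spectrum = np.exp(-4.0 * np.log(2.0) * (lam - 625e-9) ** 2 / fwhm ** 2)
eta = np.sin(np.pi * cdl / lam) ** 2            # Equation (168) with beta = pi/4

weighted = np.trapz(spectrum * eta, lam) / np.trapz(spectrum, lam)
print(f"efficiency at 625 nm: {np.sin(np.pi * cdl / 625e-9) ** 2:.3f}")
print(f"spectrum-weighted conversion efficiency: {weighted:.2f}")
```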
Figure 6.60 The relation between the wavelength and the conversion efficiency. The solid line indicates the spectrum of the red LED, the dotted line indicates the conversion-efficiency curve, and the dashed line indicates the spectrum of the once-converted light.
As such, the LGP substrate with the applied stress can realize the polarization conversion, and the quarter-wave plate can be eliminated. The stress-induced birefringence can be stably retained in the LGP by using stress-freezing techniques [56]. Similarly, strain-induced birefringence can also be applied to achieve the polarization conversion.
6.3.3.2 Polarizing Beam Splitter
The PBS is the other key element in the polarized backlight system. As a PBS, the wire-grid SWG was studied for visible light in both classical and conical mountings by Xu et al. [57]. Here, we introduce an SWG designed on the top surface of the LGP, as shown in Figure 6.61. The SWG acts as a reflective PBS that transmits p-polarized light and reflects s-polarized light. Aluminum is chosen as the grating material, and the substrate material is BAPC, as mentioned in the previous section. The period d of the SWG is chosen as 0.14 µm. In order to achieve achromatism, the SWG should have the same performance for the red, green, and blue light. The design goals are maximum transmission of the p light, minimum transmission of the s light, maximum extinction ratio, and achromatism.
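A quick way to see why such a short period is needed is to check when higher diffraction orders can propagate at the LGP's top surface. The sketch below applies the grating equation to light guided inside the plate; the refractive indices (about 1.59 for BAPC and about 1.49 for the PMMA-like guide of the 200 nm grating discussed earlier) are assumed values, not figures from the text.

```python
def first_order_cutoffs(period_nm, n_guide):
    """Longest vacuum wavelengths for which an m = -1 diffracted order can still
    propagate when light guided in a medium of index n_guide strikes a grating
    of the given period at up to grazing internal incidence.
    Grating equation: n_out*sin(theta_m) = n_in*sin(theta_in) + m*lambda/d."""
    back_into_guide = 2.0 * n_guide * period_nm      # worst case: n*sin(theta_in) -> n
    out_into_air = (n_guide + 1.0) * period_nm       # worst case for the transmitted order
    return back_into_guide, out_into_air

# Assumed refractive indices (not quoted in the text).
for period, n in ((140, 1.59), (200, 1.49)):
    guide, air = first_order_cutoffs(period, n)
    print(f"d = {period} nm, n = {n}: first orders possible for lambda < "
          f"{guide:.0f} nm (back into the guide) or {air:.0f} nm (into air)")
# With d = 140 nm both cutoffs fall below the 452 nm design wavelength, so only
# the zeroth order propagates for the RGB design wavelengths (given the assumed
# index); with d = 200 nm, obliquely guided blue and green light can still be
# diffracted, consistent with the remark above about the earlier design.
```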
Figure 6.61 The coordinate system and the structure of the SWG on the top surface of the LGP. Here, θ denotes the incident angle, φ denotes the azimuthal angle, and f denotes the grating duty cycle.
Rigorous coupled-wave analysis is used to design the structure of the SWG. As shown in Figure 6.62, the transmission of the p- and s-polarized light depends on the duty cycle f of the grating; a duty cycle of 0.5 is appropriate.
Figure 6.62 The calculated dependence of the transmission on the duty cycle for (a) p-polarized light and (b) s-polarized light of red, green and blue. Here, the incident angle θ = 0°.
As shown in Figure 6.63, the transmission of the red, green, and blue light also depends on the depth h of the aluminum. Based on the design requirements, the depth of the grating h is optimized as 0.16 µm. At this depth, the p-polarized transmissions of the red, green, and blue light are all equal to 0.91, as shown in Figure 6.63(a). The calculated results of the designed SWG are shown in Figure 6.64. When θ < 40°, the transmission of the p-polarized light is high and that of the s-polarized light is low; the minimum extinction ratio is as high as 11,000. Because of the high extinction ratio of the SWG, the rear absorbing polarizer of the liquid crystal display panel is no longer necessary.
Figure 6.63 The calculated dependence of the transmission on the grating depth for (a) p-polarized light and (b) s-polarized light of the red, green and blue light. Here, the incident angle θ = 0°.

Figure 6.64 Calculated results of the designed SWG at different wavelengths: (a) transmission of the p-polarized light, (b) transmission of the s-polarized light and (c) extinction ratio, each as a function of the incident angle.

6.3.3.3 Polarized Light-guide Plate
As shown in Figure 6.65, in the LGP the substrate acts as an achromatic quarter-wave plate, and the SWG on the top surface acts as a reflective PBS.
Figure 6.65 Ray tracing in the proposed LGP. The s light is converted into the p light by the stress-induced birefringence of the LGP substrate.
The patterns on the bottom surface of the LGP are v-type grooves; the base angle and the apex angle are optimized as 53° and 74°, respectively. Replacing the prism sheets, they are used to change the incident angle (to θ < 40°) and to control the backlight illumination angle. There is a reflection sheet under the LGP, and the source is an LED array. The polarized backlight system with the novel LGP is simulated with a Monte Carlo non-sequential ray-tracing program. The luminous intensity viewed from the top light-emitting surface of the polarized backlight system and that of a conventional backlight system, which uses an absorbing polarizer, are compared in Figure 6.66. For the polarized backlight system the peak luminous intensity is about 28, whereas that of the conventional backlight system is about 14. The gain factor is 2, and the optical efficiency is substantially improved. By optimizing the prism distribution and the prism size on the bottom surface of the LGP, the polarized backlight system can achieve an illumination uniformity of 78%.
Figure 6.66 Polar angular distribution plots of polarized light emitted from (a) the proposed backlight system and (b) the conventional backlight system.
6.3.3.4 Experiments
The SWG is fabricated by the nanoimprint technique (one of the mold-assisted lithography technologies) and the lift-off technique. The fabrication process is shown in Figure 6.67 [58].
Figure 6.67 Illustration of the fabrication process of the sub-wavelength grating: 1. imprint (press and remove the mold), 2. RIE, 3. aluminum evaporation, 4. lift-off.
The first step in the fabrication is to make an imprint mold using electron-beam lithography. The mold is pressed into the resist (PMMA) to create a thickness contrast and is then separated from the substrate. In the second step, the pattern is transferred into the PMMA by anisotropic oxygen etching, which removes the residual PMMA in the compressed areas. After metal (aluminum) deposition and lift-off in warm acetone, the sub-wavelength grating is obtained.
Figure 6.68 Scanning electron microscope photograph of the designed sub-wavelength grating. The right part (b) is the cross-section photograph.
The SEM photograph of the fabricated grating is shown in Figure 6.68. From the measurement, the period is about 142.5 nm (design value 140 nm, error about 1.8%), and the height is 157.7 nm (design value 160 nm, error about 1.4%). The extinction ratio as a function of the incident angle is measured in our experiment, with red, green, blue and white LEDs as the light sources. The spectra of these sources are shown in Figure 6.69.
Figure 6.69 The spectra of the red, green, blue and white LEDs, measured by a Renishaw 1000 Raman spectrometer.
The measured transmittances of the p-polarized light from red, green and blue LEDs are shown in Figures 6.70, 6.71 and 6.72. They are close to the calculated results. As shown in Figures 6.73, 6.74 and 6.75, the minimum extinction ratios with red, green and blue LEDs are 324, 337 and 283, respectively. The transmittances of the p-polarized and s-polarized light from a white LED are shown in Figure 6.76.
Figure 6.70 The measured transmittance (diamond line) and calculated transmittance (solid line) of p-polarized light using the red LED.
The measured extinction ratio using a white LED is shown in Figure 6.77. The minimum extinction ratio is greater than 264; we believe this is high enough for the rear polarizer to be removed from the LCD system. The visual performance of the grating is shown in Figure 6.78; no chromatic aberration is observed. In this proposed backlight system, the stress-induced birefringence is introduced into the substrate of the LGP, and an SWG is designed on the top surface of the LGP to emit linearly polarized light. Both the stress and the structure of the SWG are optimized for the red, green, and blue light to realize achromatism. The polarization conversion and separation functions are thus combined into a single LGP.
Figure 6.71 The measured transmittance (diamond line) and calculated transmittance (solid line) of p-polarized light using the green LED.

Figure 6.72 The measured transmittance (diamond line) and calculated transmittance (solid line) of p-polarized light using the blue LED.

Figure 6.73 The measured extinction ratio as a function of incident angle using the red LED.

Figure 6.74 The measured extinction ratio as a function of incident angle using the green LED.

Figure 6.75 The measured extinction ratio as a function of incident angle using the blue LED.

Figure 6.76 The measured transmittance of (a) p-polarized light and (b) s-polarized light using the white LED.

Figure 6.77 The measured extinction ratio as a function of incident angle using a white LED.

Figure 6.78 The visual performance of the grating: (a) it transmits the polarized light emitted by the mobile phone; (b) after being rotated by 90°, it does not transmit the polarized light emitted by the mobile phone.
For the backlight system with this LGP, the prism sheets, the diffusion sheet and the quarter-wave plate are no longer necessary. Moreover, the rear polarizer can be safely removed from the LCD system. The simulation results show that the luminous intensity is greatly increased, and experimentally a minimum extinction ratio larger than 264 is achieved for white-light illumination. Although LEDs are used as the light source in our design, other sources such as CCFLs can also be used in similar designs. The material of the LGP is not limited to polycarbonate; other polymers, such as PET, can be chosen too. Mass production is possible using the nanoimprint lithography technique. We believe that the new backlight system will make future LCDs thinner, lighter and brighter.
6.4 Conclusions
To satisfy the demand for high-quality displays, continued performance improvement is critical for LCDs to maintain their leading position among display technologies. The two intrinsic problems of LCDs, limited viewing angle and low brightness, can be addressed by using birefringent compensators and polarized backlights, respectively. In this chapter, we have provided detailed discussions of these two issues. We believe the technologies discussed in this chapter will help achieve wide-viewing-angle LCDs with higher brightness and energy efficiency.
Acknowledgements
Claire Gu would like to acknowledge partial support from the National Science Foundation, ECS-0401206, the UCSC special research grant, and the University Affiliated Research Center's Aligned Research Program.
References
[1] See, for example, Yeh, P. and Gu, C. (1999) Optics of Liquid Crystal Displays, Chichester: John Wiley & Sons, Ltd. [2] Clerc, J.F., Aizawa, M., Yamaguchi, S. and Duchene, J. (1989) Japan Display '89, p. 188; Clerc, J.F. (1991) Digest SID '91, p. 758.
[3] Yamaguchi, S., Aizawa, M., Clerc, J.F., Uchida, T. and Duchene, J. (1989) Digest SID’89, p. 378; Yamamoto, T. et al. (1991) Digest SID’91, p. 762. [4] Ong, H.L. (1992) Digest Japan Display’92, p. 247. [5] Yeh, P. et al. (1993) ‘Compensator for liquid crystal display’, US Patent No. 5,196,953. [6] See, for example, Yeh, P. (1988) Optical Waves in Layered Media, Chichester: John Wiley & Sons, Ltd. [7] Eblen, Jr., J.P., Gunning, W.J., Beedy, J. et al. (1994) Digest SID’94, p. 245; Eblen, Jr., J.P., Gunning, W.J., Taber, D. et al. (1994) Proc. SPIE, 2262, 234–245. [8] Wu, S.-T. (1994) Film-compensated homeotropic liquid-crystal cell for direct view display, Journal of Applied Physics, 76 (10), 5975–5980. [9] Mori, H., Itoh, Y., Nishiura, Y. et al. (1997) Performance of a novel optical compensation film based on negative birefringence of discotic compond for wide-viewing-angle twisted-nematic liquid-crystal display, Japanese Journal of Applied Physics, 36, 143–147; Mori, H. (1997) Novel optical compensators of negative birefringence for wide-viewing-angle twisted-nematic liquid-crystal displays, Japanese Journal of Applied Physics, 36, 1068–1072. [10] Yang, K. H. (1991) IDRC’91 Digest, p. 68. [11] Takatori, K., Sumiyoshi, K., Hirai, Y. and Kaneko, S. (1992) Digest of Japan Display’92, p. 521. [12] Bos, P.J. and Rahman, J.A. (1993) Digest SID’93, p. 273. [13] Yamaguchi, Y., Miyashita, T. and Uchida, T. (1993) Digest SID’93, p. 277. [14] Oh-e, M., Ohta, M., Aratani, S. and Kondo, K. (1995) Digest Asia Display’95, p. 577. [15] Ohta, M., Oh-e, M. and Kondo, K. (1995) Digest Asia Display’95, p. 68. [16] Sarma, K. R., McCartney, R. I., Heinze, B. et al. (1991) Digest SID’91, p. 68. [17] See, for example, McFarland, M., Zimmerman, S., Beeson, K. et al. (1995) Digest Asia Display’95, p. 739. [18] Jones, R.C. (1941) New calculus for the treatment of optical systems. I. Description and discussion of the calculus, Journal of Optical Society of America, 31, 488. [19] Yeh, P. (1982) Extended Jones Matrix Method, Journal of Optical Society of America, 72, 507–513. [20] See, for example, Yeh, P. (1979) Electromagnetic propagation in birefringent layered media, Journal of Optical Society of America, 69, 742–756. [21] Berreman, D.W. (1972) Optics in stratified and anisotropic media: 4 4-matrix formulation, Journal of Optical Society of America, 62, 502–510. [22] Gu, C. and Yeh, P. (1993) Extended Jones Matrix Method II, Journal of Opt. Soc. Am. A., 10, 966–973. [23] MacGregor, A.R. (1990) Method for computing homogeneous liquid-crystal conoscopic figures, Journal of Opt. Soc. Am. A, 7, 337–347. [24] Lien, A. (1990) The general and simplified Jones matrix representations for the high pretilt twisted nematic cell, Journal of Applied Physics, 67, 2853–2856. [25] Lien, A. (1990) Extended Jones matrix representation for the twisted nematic liquid-crystal display at oblique incidence, Applied Physics Letters, 57, 2767–2769. [26] Ong, H.L. (1991) Electro-optics of electrically controlled birefringence liquid-crystal displays by 2 2 propagation matrix and analytic expression at oblique angle, Applied Physics Letters, 59, 155–157. [27] Ong, H. L. (1991) Electro-optics of a twisted nematic liquid-crystal display by 2 x 2 propagation matrix at oblique angle, Japanese Journal of Applied Physics, Part 2 – Letters 30, L1028–L1031. [28] Cuypers, F. and De Vos, A. (1989) Optical symmetry in liquid crystal displays, Liquid Crystal, 6, 11–16. [29] Yeh, P. and Gu, C. 
(2000) Symmetry of viewing characteristics of liquid crystal displays and split compensator configurations, Displays, 21, 31–38. [30] Winker, B.K. et al. (1996) ‘Optical compensator for improved gray scale’, US Patent No. 5,504,603. [31] Taber, D.B., Hale, L.G., Winker, B.K. et al. (1998) Gray Scale and Contrast Compensator for LCDs Using Obliquely Oriented Anisotropic Network, Orlando: SPIE. [32] Yeh, P. and Gu, C. (1998) Birefringent Optical Compensators for TN-LCDs, Proc. SPIE, 3421, 224–235. [33] Ong, H.L. (1999) Broken and Preservation of Symmetrical Viewing Angle in Film Compensated LCDs, SID’99, p. 673. [34] Ong, H.L. (1993) 2 2 propagation matrix for electromagnetic wave propagation obliquely in layeredinhomogeneous uniaxial media, Journal of Optical Society of America A, 10, 283. [35] Gu, C. and Yeh, P. (1999) Extended Jones matrix method and its application in the analysis of compensators for liquid crystal displays, Displays, 20, 237–257. [36] Yeh, P. and Paukshto, M. (2001) Molecular crystalline thin film E-polarizer, Molecular Materials, 14, pp. 1–19. [37] Evans, J. (1949) The birefringent filter, Journal of Optical Society of America, 39, 229–242.
[38] Seiji, U., Yasuo, F., Nagatsuka, T. and Suguru, Y. (1992) ‘Polarizing plate and liquid crystal display device’, Japanese Patent H4-305602, 28 October. [39] There are several conventions employed in polarization transformation on Poincare sphere. If we define the phase retardation as ¼ ðksz kfz Þd, where ksz and kfz are the z-component of the slow and fast waves respectively, then the final polarization is obtained via a rotation of the slow axis by angle equal to . In this convention is always positive. The rotation is always a right-handed rotation around the slow axis. [40] Chen, J., Kim, K.-H., Jyu, J.-J. et al. (1998) Optimum film compensation modes for TN and VA LCDs, Proc. SID, paper 21.2. [41] Ishinabe, T., Miyashita, T. and Uchida, T. (2000) Society for Information Display Int. Symp. Dig. Tech. Papers (SID, Long Beach), p. 1094; Ishinabe, T., Miyashita, T. and Uchida, T. (2002) Wide-viewing-angle polarizer with a large wavelength range, Japanese Journal of Applied Physics, 41, 4553–4558. [42] See, for example, Born, M. and Wolf, E. (1987) Principles of Optics (6th edn), London: Pergamon Press; Yariv, A. and Yeh, P. (1984) Optical Waves in Crystals, Chichester: John Wiley & Sons, Ltd. [43] Abileah, A. and Xu, G. (2001) ‘Normally white LCD including first and second biaxial retarders,’ US Patent No. 6,229,588, US Patent No. 5,570,214 (1996). [44] Naehyuck Chang, Member, Inseok Choi, and Hojun Shim (2004) DLS: Dynamic Backlight Luminance Scaling of Liquid Crystal Display, IEEE Transactions On Very Large Scale Integration (VLSI) Systems, 12, 8. [45] Ka¨la¨nta¨r, K., Matsumoto, S. and Onishi, T. (2001) Functional light-guide plate characterized by optical microdeflector and micro-reflector for LCD backlight, IEICE Trans. Electron,. E84-C, 1637–1646. [46] Tagaya, A., Nagai, M., Koike, Y. and Yokoyama, K. (2001) Thin liquid-crystal display backlight system with highly scattering optical transmission polymers, Applied Optics, 40, 6274. [47] Okumura, T., Tagaya, A., Koike, Y. et al. (2003) Highly-efficient backlight for liquid crystal display having no optical films, Applied Physics Letters, 83, 2515. [48] Di Feng, Yb Yan, Xingpeng Yang, Guofan Jin and Shoushan Fan (2005) Novel integrated light-guide plates for liquid crystal display backlight, Journal of Optics A: Pure and Applied Optics, 7, 111–117. [49] Pang, Z. and Li, L. (1999) Novel high-efficiency polarizing backlighting system with a polarizing beam splitter, SID’99 Technical Digest, California: SID, 916. [50] Jagt, H.J.B., Cornelissen, H.J. and Broer, D.J. (2002) Micro-structured polymeric linearly polarized light emitting lightguide for LCD illumination, SID’02 Technical Digest, 1236–1239. [51] Jagt, H.J.B., Cornelissen, J.B. and Broer, D.J. (2004) Polarized light LCD backlight based on liquid crystalline polymer film: a new manufacturing process, SID’04 Technical Digest, 1178–1181. [52] Ko-Wei Chien, Han-Ping D. Shieh and Hugo Cornelissen (2004) Polarized backlight based on selective total internal reflection at microgrooves, Applied Optics, 43, 4672–4676. [53] Ko-Wei Chien and Han-Ping D. Shieh (2004) Design and fabrication of an integrated polarized light guide for liquid-crystal-display illumination, Applied Optics, 43, 1830–1834. [54] Xingpeng Yang, Yingbai Yan and Guofan Jin (2005) Polarized light-guide plate for liquid crystal display, Optics Express, 13 (21), 8349–8356. [55] Winberger-Friedl, R., de Bruin, J.G. and Schoo, H. F. M. 
(2003) Residual birefringence in modified polycarbonates, Polymer Engineering and Science, 43, 62. [56] Shyu, G. D., Isayev, A.I. and Li, C.T. (2003) Residual thermal birefringence in freely qnenched plates of amorphous polymers: simulation and experiment, Journal of Polymer Science Part B-Polymer Physics, 41, 1850. [57] Xu, M., Urbach, H.P., deBoer, D.K.G and Cornelissen, H.J. (2005) Wire-grid diffraction gratings used a polarizing beam splitter for visible light and applied in liquid crystal on silicon, Optics Express, 13, 2303–2320. [58] Chou, S.Y., Krauss, P.R. and Renstrom, P.J. (1995) Imprint of sub-25 nm vias and trenches in polymers, Applied Physics Letters, 67 (21), 3114–3116.
7 Backlighting of Mobile Displays
Philip Watson and Gary T. Boyd
3M Optical Systems Division, Petaluma, California, USA
7.1 Introduction
Over the last three decades, an impressive amount of progress has occurred in the development of LCD panel technologies. LCDs with diagonal sizes ranging up to 100 inches have demonstrated the full color, high speed, and wide viewing angle performance suitable for displaying high-quality text, graphics, video and other information. The primary purpose of an LCD backlight is to illuminate the panel so that viewers can see the image information addressed to the panel. The LCD panel is an array of red, green, and blue light valves whose transmission is controlled by electrical signals. In order for the viewer to perceive an image, the panel needs to be illuminated with light that passes through the panel to the viewer. The backlight is therefore a critical element of the image creation system, and in fact directly affects many of the key optical properties that are traditionally associated with the panel: color rendition, viewing angle, contrast, and brightness. As panel technology has progressed, the lighting technology that provides the illumination for the panel has similarly evolved to enable improved perception of the intended image. High levels of uniformity and color control, and well-defined angular output distributions are now required of the illumination system. Similarly, as mobile devices become ever more ubiquitous, and as consumers' expectations of reduced weight and longer battery life continue to become more stringent, illumination efficiency, brightness, durability and physical dimensions become all the more important.
Backlight design plays a critical role in determining the total thickness, weight, energy usage, and image quality of a mobile device. A large number of factors go into the design and selection of LCD backlights for mobile devices, which impact the following:

- cost;
- power efficiency;
- voltage/current requirements;
- axial (normal) brightness;
- angular distribution of output light;
- physical parameters: thickness, bezel requirements, weight;
- polarization output;
- spatial uniformity of output light;
- ruggedness;
- waste management;
- thermal stability;
- color gamut;
- contrast.

Requirements on each of these properties place strong demands on backlight design. Their relative importance is highly dependent on the needs of the model and market segment. Often, some of these specifications are keyed to particular industry standards and regulatory requirements, while others may be dictated by style, usage, or OEM specifications.

LCD backlights can be categorized in terms of two standard backlight architectures: edge-lit, utilizing a transparent plastic lightguide; and direct-lit, incorporating either an array of light sources distributed across the backlight area, or an area source that emits light directly toward the viewer. Nearly every portable LCD device today makes use of an edge-lit design, either with a linear fluorescent tube or with one or more white LEDs providing the illumination. It is common for larger devices, such as LCD TVs, to use direct lighting with an array of cold cathode fluorescent bulbs (CCFLs) or LEDs behind the panel, but this type of design is not readily compatible with mobile display lighting due to the requirements on thickness, uniformity and power efficiency.

For a large LCD TV, the total luminous output per unit area of the backlight must often be more than an order of magnitude larger than that of a mobile device. This is due to the fact that a TV is expected to provide 400 to 500 nits of brightness to the viewer in the normal direction, and maintain sufficient brightness across all viewing angles, often through a VA or IPS LCD panel with relatively low transmission. Mobile devices, by comparison, often require only 100 to 250 nits of brightness over a significantly narrower viewing angle intended for a single viewer. Additionally, TV panels may have color filters that are more highly absorbent in order to ensure a wider color gamut. Mobile device panel transmittance varies significantly by application, with some transflective panels having aperture ratios less than 30% and some transmissive LTPS panels having aperture ratios greater than 60%. This directly affects the required brightness of the backlight, which in turn can affect the selection of light sources. If LEDs are used, the quantity is chosen to provide sufficient brightness. Typically in mobile devices using CCFLs, a single bulb is employed; the dimensions and power of the bulb are selected to ensure sufficient brightness.
7.2 Edge-lit Backlight Components and Function
An edge-lit backlight system typically includes five or six foundational components (Figure 7.1). First is the light source with its associated electrical drive circuitry, which converts electricity into light. Often the light source is surrounded on three sides by a bulb reflector. The light from the source is coupled into a transparent lightguide that extends over the area of the LCD panel. The lightguide incorporates some type of optical means for extracting light forward in the direction of the panel and/or backward toward a back reflector. This back reflector, positioned under the lightguide, reflects light toward the display panel. Above the lightguide (on the opposite side from the back reflector) is a stack of optical films that condition the light for more efficient, uniform, or otherwise controlled illumination of the display panel. Finally, a white plastic frame holds all of these components in place and provides some structural stability to the backlight system.
Figure 7.1 Schematic cross section of main backlight components. (A) light source; (B) bulb reflector; (C) lightguide; (D) back reflector; (E) optical film stack. The frame is not pictured. The LCD panel would be located above the optical film stack.
7.3 Light Source
A common light source used in edge-lit backlights for LCD notebook computers and monitors is the cylindrical cold-cathode fluorescent lamp (CCFL), which is placed along an edge of the lightguide. Although early CCFLs were quite large – 5 mm or more in diameter – current commercially available bulbs are often 2 mm or smaller. This reduction in bulb diameter has resulted in thinner, lighter, and more efficient backlights. Because a CCFL emits light with a cylindrical symmetry, it is necessary to have a reflector surrounding the bulb on three sides to direct the majority of the light toward the lightguide. Recently, a row of LEDs has supplanted CCFLs in smaller mobile devices including cell phones, and it is anticipated that within the next several years, LEDs will become an increasingly prevalent light source for notebooks. Very small devices such as cell phones often use a small number of LEDs, sometimes even just one, for illumination. For small portable devices, white LEDs are most common. These are formed by combining a blue LED die and a yellow-emitting phosphor, either embedded in an encapsulating material or conformally coated.
7.4 Lightguide
The lightguide's function is to spread light uniformly across the backlight area with minimal loss. This requires the use of polymers with very low optical absorbance. The most common material used today is PMMA. The lightguide typically incorporates surface features that extract internal light either toward the display or toward a bottom reflector. Without these surface extractors, the light would essentially travel to the end of the lightguide and never reach the viewer. These extraction features can be printed white reflective dots on the back side of the lightguide that scatter light. Molded physical features with refractive or diffractive [1] optical extraction functionality are commonly employed. It is typical for these features to
increase in size or areal density as the distance from the light source increases. The arrangement of these features is optimized for spatial uniformity of the extracted light output. Although early notebook computers utilized slab lightguides with parallel top and bottom surfaces, nearly all of today's lightguides are wedge-shaped, with these surfaces canted to one another. The wedge serves to steepen the internal reflection angle of the light with each reflection, aiding light extraction as it progresses down the lightguide. The wedge shape also reduces the overall weight of the lightguide.
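The need for extraction features that become denser away from the lamp can be made quantitative with a simple energy-balance sketch. The model below assumes a lossless guide and a perfectly uniform target output; it is an idealized illustration, not a description of any particular commercial lightguide.

```python
import numpy as np

L = 100.0                                # guide length, arbitrary units
x = np.linspace(0.0, 0.95 * L, 20)       # positions along the guide (stop short of the far end)

# For uniform extracted power per unit length from injected power P0 with no
# absorption, the remaining power is P(x) = P0*(1 - x/L), so the local
# extraction fraction must follow f(x) = (P0/L) / P(x) = 1/(L - x): it grows
# toward the far end of the guide, which is why dot density increases there.
relative_density = (L - x[0]) / (L - x)  # f(x) normalized to its value at the lamp edge

for xi, fi in zip(x[::5], relative_density[::5]):
    print(f"x = {xi:5.1f}   relative extractor density = {fi:5.2f}")
```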
7.5 Back Reflector and Bulb Reflector
Opposite the panel, to the rear of the lightguide, is the back reflector. Depending on the needs of the optical system, this reflector may be specular like a mirror, or white and diffuse. In addition to its primary function of redirecting backward-traveling light toward the panel, a diffuse reflector also may serve to scramble the polarization state of light, which can boost efficiency in systems incorporating linear reflective polarizers. A diffuse reflector can also randomize the direction of reflected light, which can improve the uniformity of light output, and boost efficiency in systems incorporating prismatic brightness enhancement films, as will be discussed later in this chapter. Multiple bounces off the back reflector necessitate high reflectivity for optimal performance. 3M's multilayer polymeric mirror [2], ESR, has a typical reflectivity of >98%. In backlights with CCFL light sources, a specular reflector usually surrounds the bulb to direct light toward the lightguide. Multiple reflections occur within the bulb reflector. Therefore, its reflectivity is a critical parameter for providing efficient illumination. The bulb reflector can be a multilayer polymeric mirror [3], or may be a silver-coated film [4].
7.6 The Optical Film Stack
Light typically exits the lightguide at high angles, is unpolarized, and may be both spatially and angularly non-uniform. An optical film stack is employed to alleviate these issues. These films may include diffuser sheets, microreplicated prism films, protective cover sheets, and reflective polarizers. A diffuser sheet primarily serves the function of obscuring optical extraction features of the backlight. Most commonly, the diffuser sheet is a plastic film with a roughened top surface [5]. Microreplicated prism films direct obliquely incident light toward the normal direction (i.e. toward the viewer). These films are typically categorized in terms of the prism direction. With prisms oriented toward the viewer, these components are referred to as 'prism-up' films. An example is 3M's brightness enhancement film (BEF). When the prisms are oriented toward the back reflector, there is reference to 'prism-down' films. The latter are also known as Turning Films (TFs). Both system types are found in mobile displays, with the prism-up configuration being more common. These systems are described in more detail in Figure 7.2, below.
Figure 7.2 Film layouts for prisms up and prisms down systems. In the top (prisms down) picture, the parts are: (a) lightguide; (b) turning film; (c) diffuser sheet. In the bottom (prisms up) diagram, the components are: (a) lightguide; (b) diffuser sheet; (c) BEF film; (d) BEF film (axis crossed); (e) optional cover sheet diffuser.
In some backlights, a diffuser known as a ‘cover sheet’ is placed between a BEF film and the LCD panel to improve uniformity. Cover sheets have become less common in recent years as BEF prism structures with variations in the prism height have reduced optical coupling between the prisms and the display polarizer.
7.7 Prisms-Up Systems
One common type of backlight system in mobile devices is the 'crossed BEF' (XBEF) architecture [6]. A backlight of this type employs two prismatic BEF films, with the prism axes aligned perpendicular to each other. These films have several key functions: they direct light upward toward the viewer after it exits the bottom diffuser, homogenize the illumination for better uniformity, and recycle light rays for improved efficiency. One of the more prevalent prism-up films in use today is 3M's BEF II [7, 8]. This film consists of linear polymeric prisms with a 90° included angle formed on a PET substrate. Somewhat counterintuitively, the film, as indicated in Figure 7.3, functions by reflecting normally incident light backwards through a combination of two TIR reflections (off both surfaces of a prism), while refracting a large portion of obliquely incident light forward toward the viewer.
Figure 7.3 Simplified operation of BEF. The ray on the left is incident on the film from the normal direction and is reflected back by two TIR reflections. This ray then travels to the back reflector, where it can be recycled and again intersect the optical film stack. The ray on the right is first refracted by the bottom surface of the film, then refracted toward the forward direction by the prism surface.
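The two ray paths of Figure 7.3 can be checked with a few lines of Snell's-law arithmetic. The prism index used below (~1.59) is an assumed typical value for the prism resin, not a 3M specification, and the calculation follows the simplified two-dimensional geometry of the figure, considering only the facet through which an oblique ray can refract.

```python
import math

N = 1.59                                            # assumed prism refractive index
CRITICAL = math.degrees(math.asin(1.0 / N))         # TIR critical angle, ~39 deg

def bef_response(theta_air_deg):
    """Trace one ray through the flat base and onto a 45-degree facet of a
    90-degree prism.  Only the facet the ray can refract through (incidence
    45 deg minus the internal angle) is considered; rays striking the other
    facet TIR and continue circulating in the system."""
    theta_inside = math.degrees(math.asin(math.sin(math.radians(theta_air_deg)) / N))
    on_facet = 45.0 - theta_inside                  # incidence angle on the prism facet
    if on_facet >= CRITICAL:
        return "reflected back by TIR (recycled toward the back reflector)"
    exit_from_facet = math.degrees(math.asin(N * math.sin(math.radians(on_facet))))
    return f"exits at about {45.0 - exit_from_facet:.0f} deg from the display normal"

for theta in (0, 60, 70, 80):
    print(f"incidence {theta:2d} deg -> {bef_response(theta)}")
# Normally incident light is sent back for recycling, while grazing light
# (~70-80 deg) is redirected to roughly 30 deg of the normal, as in the figure.
```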
Due to the linear nature of the prismatic optical structure, the film tends to narrow the angular transmission of light only in one dimension, perpendicular to the prism axis. If a second sheet of BEF II is situated with its prisms perpendicular to the first sheet, then the combination of the two will narrow the light output in two directions. The net result of placing these two films into the backlight system is a significant increase in the axial intensity at the expense of high-angle light outside the display’s normal viewing cone. The degree of the axial increase of a given backlight is dependent on the angular output distribution of light from the lightguide, transmission characteristics of the bottom diffuser, and the reflectivity and diffusivity of the back reflector. For mobile devices, light exiting the lightguide and diffuser tends to be highly directional and usually hits the film at a very shallow angle. As such, a prism-up film is an integral part of the backlight system, providing brightness enhancement in the process of being a direction-conversion film, as Figure 7.4 illustrates. In addition to BEF II, a variety of other related prism films from 3M are commonly used for prismup backlight systems [9]. Some of these films incorporate diffusion on the substrate or random variations in height along the length of the prism in order to improve spatial uniformity, while other
Figure 7.4 Conoscopic views of the angular light distribution from a lightguide (left) and lightguide with crossed BEF II films (right). Note that nearly all of the energy from the lightguide is at a very grazing angle. The image on the right indicates the angular distribution of light exiting the second sheet of BEF II.
films feature extremely thin substrates to reduce the total thickness of the optical system. Additionally, prism-up films have been introduced in the marketplace with a variety of prism spacings to reduce moiré between these films and the pixel array of the LCD. Light in an XBEF backlight system is repeatedly recycled between the prismatic films and the back reflector. This has several consequences. First, it becomes important that the back reflector has low optical loss. Second, it is valuable to have some diffusion in either the back reflector or between the prism-up films and the back reflector in order to randomize the direction of light reflected from BEF. Finally, with the increased circulation of light in the system, spatial uniformity is increased.
7.8 Prisms-Down Systems
An alternative architecture for edge-lit backlights incorporates a single prismatic Turning Film, with prisms pointed downward toward the back reflector rather than upward toward the panel [10]. In this type of backlight, the prism included angle is significantly sharper than that of prism-up films, ranging typically between 60 and 75 degrees. Additionally, the operation of the film in redirecting light is significantly different. Turning films are designed to transfer light from a shallow input vector to a nearly normal output vector. Light passes through one prism face, as shown in Figure 7.5, and then reflects by way of TIR off of the second prism face. This is an efficient way to redirect light. The output light from a prism-down system tends to be narrowly distributed in angle about the normal direction (see Figure 7.6). This leads to a bright on-axis illumination, with a limited viewing cone suitable primarily for single-viewer usage. Some challenges with designing this type of system have been linked to spatial uniformity of the output. Without recycling of light, careful engineering of the lightguide extraction technique is often required to maintain uniformity. Additionally, the angular extent of the illumination cone is principally defined by the lightguide output distribution, since a prism-down film primarily redirects light. Following from the above discussion, we see that the primary difference between a prism-up system and a prism-down system is that in the former, the output characteristics of the light are determined primarily by the prism films, while in the latter, the output characteristics are determined primarily by the lightguide. Due to this fact, there has been a proliferation of proposed lightguide designs in TF systems. Some of the features found in commercial or experimental TF backlights include 'V-cut' prismatic surfaces and controlled internal scattering [11].
Figure 7.5 Simplified operation of a Turning Film. Light that exits the lightguide passes through the first face of the prism and reflects off the second prism face by TIR.
Figure 7.6 Conoscopic views of the angular light distribution from a lightguide (left) and lightguide with a turning film (right). Note that the input light distribution from the lightguide largely determines the shape of the output light.
7.9 Reflective Polarizers and Polarization Recycling
Another key component that is commonly found in mobile LCD backlights is a reflective polarizer (RP). In virtually all current commercial LCD backlighting systems, the light source and lightguide provide largely unpolarized light toward the panel. Since conventional LCDs have absorbing iodine polarizers on both faces, more than 50% of the light that hits the panel may be wasted. The purpose of the RP is to prevent much of this loss. This is done by reflecting light of the polarization state that would be absorbed by the iodine polarizer back into the backlight for recycling, while transmitting light of a polarization state that will be passed by the polarizer. The reflected light undergoes polarization scrambling or conversion so that on its return trip an additional fraction of the light passes through the RP. The conceptual maximum improvement in brightness of a system by adding an RP is 100%. That is, rather than half of the light successfully passing through the entrance polarizer and the other half being absorbed, all of the light would be converted to the desired polarization. In practice, losses in the
system resulting from multiple passes in the backlight often limit brightness improvements to around 60%. The significance of such losses in the backlight will be discussed in the next section. Reflective polarizers are often used in conjunction with XBEF systems. Note that the additional recirculation of light that occurs with a reflective polarizer in the system also improves the spatial uniformity of luminance. The most common type of reflective polarizer used today is supplied by 3M, and is based on multilayer optical film (MOF) technology [12]. An MOF reflective polarizer consists of hundreds of layers of alternating materials with carefully engineered indices of refraction. Consecutive layers have the index of refraction approximately matched along one physical axis of the film, so that light traveling with its electric field along that axis will pass unabated. Along the orthogonal axis of the film, alternating layers possess a significant index mismatch, so light traveling with its electric field along that axis will reflect if its wavelength meets the Bragg condition [13]. The thickness of each optical layer is varied from the top to the bottom of the film to ensure that all visible wavelengths are reflected. This technology is sold under the DBEF product name [14]. The film may be included as a free-standing film or laminated to the lower LCD absorbing polarizer, which is then laminated to the panel. Another type of reflective polarizer that has been used in LCD systems is based on cholesteric liquid crystal (CLC) polymers [15]. In such a system, a CLC polymer material is formed into a Grandjean (planar) state with a gradient in its cholesteric pitch through the layer thickness. A number of methods have been considered for forming such a pitch gradient, including combining two formed films of polymerized cholesteric liquid crystalline material and heating to encourage diffusion between the layers. A CLC polarizer operates using Bragg reflection as in the MOF polarizers described above, with the difference that it transmits and reflects circularly rather than linearly polarized modes of light. In order to provide linearly polarized light for the LCD, a broadband quarter-wave film is laminated between the CLC and the absorbing polarizer. CLC polarizers have been largely replaced by linear reflective polarizers in commercial applications. Another type of reflective polarizer that has had some success in the market is based on a blend of polymeric materials [16]. In such a polarizer, a birefringent host material contains isotropic inclusions. The index of refraction along one axis of the film matches the index of the inclusions, while the index of refraction along the orthogonal axis is significantly mismatched. In such a system, light with its electric field along the index-matched axis passes through unabated, while light with its electric field along the mismatched index direction is scattered. The material is engineered such that the scattering is primarily back toward the light source, so the light can be recycled. This type of polarizer is sold by 3M, primarily into larger LCDs such as monitors and TVs. It is not commonly used in mobile devices at this time. A final type of reflective polarizer that has been under development for some time is based on wiregrid technology [17]. A wiregrid polarizer operates under the principle of induced currents in a conductor.
If light is directed toward a finely spaced linear array of wires such that the electric field direction is approximately parallel to the wires, a current will be induced. This current in turn acts to reverse the direction of the light, much as a metallic mirror does. However, light incident on the wiregrid with its electric field perpendicular to the wire direction will not induce such a current, and will travel through largely unabated. In order for this type of polarizer to perform efficiently at visible wavelengths, the wires must be very narrow (on the order of 100 nanometers) and the spacing must be carefully controlled. Typically, lithographic processes are used to create such an array. This technology is not currently used in mobile LCDs, but is under development by Moxtek and Asahi Kasei. One trend in LCD backlights is to combine multiple film technologies into a single optical element. In larger displays, it is common to find a reflective polarizer like DBEF laminated between sheets of diffuse polycarbonate; this provides both homogenization of light and physical robustness. In handheld devices, particularly some cell phones, films with BEF prisms on top of a DBEF reflective polarizer are now in use. Creating prismatic structures directly on the DBEF provides a significantly thinner construction than simply layering two separate films, while the combination film provides gain from both angle and polarization recycling. This product is currently being sold by 3M as BEF-RP.
7.10 System Efficiencies in Highly Recycling Backlights
In an LCD backlight system, there are a number of factors that influence the total system efficiency. The percentage of light that successfully reaches the viewer can be thought of in terms of a series of interactions between the light and multiple system components. One approach to modeling these systems is to consider the interactions of light from a Monte Carlo ray evaluation perspective, where each component has a probability distribution of output directions and polarization states for every input direction vector and polarization state of the light. Due to the very large number of potential direction vectors and initial polarization states that may be supplied by a CCFL or LED light source, and the subtleties of the behavior of the various optical elements, accurate prediction of the performance of a new backlight design is very difficult without sophisticated simulation software. A variety of commercial software products are available to meet this need [18, 19, 20]. Nevertheless, experimental verification of the performance of a new design relative to a standard backlight is still the most trusted approach to assessing the efficiency of a system. In this section, we will consider the path of light, from its origin at the light source to the point where it either reaches the viewer or is lost, as a series of interactions with various optical elements, with particular consideration of what happens when it contacts prism-up films and linear reflective polarizers. Light is influenced by both deterministic and probabilistic interactions in its journey through the backlight system. The light source may be thought of as possessing a probability distribution of output ray vectors. Diffusers, diffuse reflectors, and any other diffuse elements can randomize the direction and optical properties of rays. Their output may be thought of in terms of probability distributions that are dependent on wavelength, polarization, and input direction. Additionally, the extraction features on the lightguide may have probabilistic scattering properties. On the other hand, the behavior of a specular reflector is essentially deterministic in that the input ray direction uniquely determines the output ray direction. Prismatic films are often similarly deterministic, as are the planar and prismatic surfaces of lightguides. A full ray trace analysis of prism-up films is essential to understand their recycling characteristics, and the ultimate gain they can provide to a backlight. However, we can explore a more simplified approach to acquire an understanding of their basic operation. Figures 7.7, 7.8, 7.9 and 7.10 show four classes of
Figure 7.7 Optical ray paths for light incident on a BEF film which may result in light being directed into a desired viewing cone.
Figure 7.8 Optical ray paths for light incident on a BEF film which may result in light being predominantly returned into the backlight.
incident light rays being shone onto the planar surface of a prism-up film, in the plane perpendicular to the prism direction. Each figure describes a basic operation of a prism-up film with a top included angle of 90 degrees, as in 3M BEF. In Figure 7.7, below, rays are incident from the right over a wide range of angles up to about 15 degrees from the normal. These rays are refracted by the first surface and by the prism facet shown to produce an output useful for so-called on-axis viewing. The range of these rays is bounded by those incident near + or −90 degrees, as shown in light blue. The rays that exit toward an on-axis viewer account for about 37% of the light, if the light were incident over the full range of +90 to −90 degrees from the normal (a Lambertian source). In Figure 7.8, below, where incidence angles range from about +10 to −10 degrees from the normal, light is predominantly reflected by the prism facets back into the backlight for recycling. For a Lambertian source, approximately 46% of the light is recycled in this manner. At higher incidence angles from the left, the first facet reflection is followed by transmission at the adjacent facet, where the light then re-enters the next prism and returns to the backlight. About 12% of the light from a Lambertian source is recycled in this way. Finally, at still higher angles from the left, the diagram shows that rays which reflect off of a facet miss the next prism, and are sent out of the backlight at high angles. In situations where it is desirable to have the maximum concentration of light near on-axis, these so-called lobe rays are less desirable, and they account for approximately 5% of the light transmitted from a Lambertian source (see Figure 7.9, below). As discussed earlier, most backlighting systems contain one or more light recycling films, such as two BEF (prism-up) films and DBEF. While operating by very different mechanisms, all recycling films share a common process. For simplicity, we'll consider a light source that gives off light equally in all directions and in all states of polarization (unpolarized). The aim is to transform this light to a state of higher collimation (light directed predominantly toward the viewer), as in the case of BEF, or to polarize the light into a state that the LCD will accept, as in the case of DBEF. In either case, there is a desired output state. The purpose of a recycling film is to transmit the desirable output, and reflect the undesirable light. The backlight system below such a film must convert the undesirable state into
Figure 7.9 Optical ray paths for light incident on a BEF film which may result in light being redirected into the undesirable 'lobe' directions.
some portion of desirable light and send it back to the recycling film. As this process of reflection, conversion and transmission repeats, the net effect is an increase in the desired optical state, resulting in a useful gain. In reality, all of these processes are associated with some degree of loss, which limits the gain to be less than 100%. The gain process can be examined more quantitatively by the following analysis. We will divide the backlight into two portions, one containing an Enhancement film and the other a Conversion element. The latter could be a bottom diffuse reflector that randomizes the polarization state or light direction. Let us say the initial light power consists of states that are desirable, which will be transmitted by the Enhancement film, PT, and states which are undesirable, which will be reflected by this film, PR. We will designate the transmission of the Enhancement film for the desirable states as t, its reflection of the undesirable states as r, with associated losses (absorption or stray light) Lt and Lr, respectively. Energy conservation implies that t + Lt = 1 and r + Lr = 1. For a perfectly efficient Enhancement film, r = t = 1. The Conversion element converts light from the undesirable state to the desirable state with a conversion efficiency designated c. The fraction of light returned by the Conversion element which remains in the undesirable state will be designated s. The loss associated with this process is denoted by Lc. Energy conservation dictates that c + s + Lc = 1. The enhancement process can then be described as a series of transmissions, reflections, and conversions, and is shown in Figure 7.10, below. Initially, a power tPT is transmitted through the Enhancement film, while a power rPR is reflected. The latter power is converted by the Conversion element and then transmitted by the Enhancement film, with a power tc(rPR). The fraction of light which was not converted is s(rPR). This will be reflected by the Enhancement film, converted by the Conversion element, and then transmitted by the Enhancement film, with a power tcsr(rPR). This sequence is repeated, and the total light transmitted by the Enhancement film in the desired state is:

Ptotal = tPT + tcrPR + tcsr²PR + tcs²r³PR + tcs³r⁴PR + ...
       = tPT + tcrPR[1 + sr + (sr)² + (sr)³ + ...]
       = tPT[1 + rc(PR/PT)/(1 − sr)]
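As a quick check of this derivation, the short Python sketch below sums the series term by term and compares it with the closed-form result; the parameter values are arbitrary illustrations, not film data from this chapter.

```python
# Sketch: numerically sum the recycling series above and compare it with the
# closed form Ptotal = t*PT*[1 + r*c*(PR/PT)/(1 - s*r)].
# The parameter values are illustrative only, not measured film properties.

t, r = 0.95, 0.97      # transmission of desired / reflection of undesired states
c, s = 0.40, 0.45      # conversion / non-conversion fractions (Lc = 0.15)
P_T, P_R = 1.0, 1.0    # unpolarized input: equal desired and undesired power

series = t * P_T + sum(t * c * P_R * s**k * r**(k + 1) for k in range(200))
closed = t * P_T * (1 + r * c * (P_R / P_T) / (1 - s * r))
print(series, closed)   # the two agree to numerical precision
```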
Figure 7.10 Illustration of the light recycling process through which enhancement film gain occurs.
The gain of an Enhancement film is the ratio of this total power to that without the Enhancement film (PT), so that

Gain = t[1 + rc(PR/PT)/(1 − sr)]

In the case of polarization recycling, we can assume we are dealing with an unpolarized light source, so that PR = PT. For DBEF as the Enhancement film, a reasonable approximation is that t and r ≈ 1 (high reflectivity of the undesired polarization state). For Conversion elements such as a diffuse reflector + lightguide + diffuser film, we might assume that c = s = 0.5, as light reflected in one polarization state becomes unpolarized on its return to the Enhancement film. This would result in the ideal gain of 2, where all of the undesired polarization state is converted. Typically, DBEF will result in gains of about 1.7 when placed on a diffuse light source (measured with a polarizer oriented in the desired polarization state). Using c + s + Lc = 1, and assuming c = s (loss equally shared by the conversion and non-conversion processes), gives Lc ≈ 17% as a typical backlight system loss. In the case of angle recycling, we are interested in transmitting light on-axis. Prism-up films accomplish this by redirecting light incident at a specific angle (for a single sheet of BEF, this is about 30 degrees to normal, in the plane perpendicular to the prisms), while reflecting or scattering light incident at other angles. Loss for such films can come in the form of light absorption or transmission of rays into higher angles. In a typical backlight, the amount of the light incident within the angular range required to produce on-axis illumination from BEF is quite small (PT/PR ≈ 2%). Similarly, the chance of conversion of a light ray's direction into this angle range is also small (c < 2% by ray trace calculation). A detailed ray trace for a single BEF film shows that approximately 45% of light rays incident from all angles are ultimately transmitted into angles that are considered off-axis. This implies that the Enhancement film has an effective loss for reflection of undesired rays of Lr ≈ 45%. A typical gain for a single sheet of BEF film is 1.66, which implies a conversion loss Lc ≈ 20%–30%, and s ≈ 70%–80%. Conversion losses here may be the result of absorption by the backlight elements, or light exiting out of these elements where they are unable to participate in the recycling process. For conceptual purposes, a simpler analysis is sometimes used. The Enhancement film transmits a fraction T of the desired states, and reflects a fraction R of the undesired states. The Conversion element sends a fraction R′ of the light reflected from the Enhancement film back to this film. The loss
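The DBEF numbers quoted above can be reproduced with the same expression. The sketch below is a rough illustration, assuming t = r = 1, an unpolarized source, and c = s; it works backwards from a measured gain of about 1.7 to the implied conversion loss.

```python
# Sketch: polarization-recycling gain for a DBEF-type enhancement film,
# using the closed-form expression derived above. Assumed, illustrative values:
# t = r = 1, unpolarized source (PR = PT), and c = s.

def recycling_gain(t, r, c, s, pr_over_pt):
    """Gain = t * (1 + r*c*(PR/PT) / (1 - s*r))."""
    return t * (1 + r * c * pr_over_pt / (1 - s * r))

# Ideal case: a lossless conversion element (c = s = 0.5) gives the gain limit of 2.
print(recycling_gain(t=1.0, r=1.0, c=0.5, s=0.5, pr_over_pt=1.0))  # -> 2.0

# Work backwards from a measured gain of ~1.7 to the implied conversion loss Lc.
measured_gain = 1.7
c = (measured_gain - 1) / measured_gain   # from Gain = 1 + c/(1 - c) with t = r = 1, s = c
loss = 1 - 2 * c                          # Lc = 1 - c - s with s = c
print(f"c = s = {c:.3f}, implied backlight loss Lc = {loss:.1%}")  # ~17.6%
```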
from the Conversion element is LC = 1 − R′. Using the same idea of adding up the light power transmitted by the Enhancement film after multiple bounces, one obtains:

Gain = T / (1 − RR′)

The two gain expressions are related by:

R = sr / (c + s)
R′ = c + s

This correspondence is only true if:

s/c = PR/PT

This latter assumption states that the likelihood of conversion to a desirable light state is equal to the fraction of such light states from the backlight in the absence of an Enhancement film. This is typically only true for a fully diffuse and unpolarized source. Nevertheless, the RR′ method can be used to quickly estimate system loss under the above assumptions. A key lesson from these analyses is that the gain observed using an Enhancement film will be dependent on the losses of the backlight system. In a lossless cavity, the gain approaches an ideal of 2 for polarization recycling films, and can be greater than 2 for angle recycling films, depending on the degree of collimation performed. In practice, small absorptions or light leaks through the Enhancement films lead to losses, which are compounded over multiple reflections and reduce the net gain. This is one reason why it is important to utilize low-loss elements (such as high-reflectivity films) when considering a backlight design.
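The equivalence of the two bookkeeping schemes can be verified numerically, as in the hedged sketch below; the parameter values are illustrative only, and the agreement holds only when s/c = PR/PT, as stated above.

```python
# Sketch: the simplified R-R' bookkeeping versus the detailed (t, r, c, s) model.
# Illustrative values; the equivalence requires s/c = PR/PT, as noted in the text.

t = T = 0.95
r, c, s = 0.97, 0.40, 0.45
pr_over_pt = s / c                    # condition under which the two models agree

gain_detailed = t * (1 + r * c * pr_over_pt / (1 - s * r))
R, R_prime = s * r / (c + s), c + s   # mapping between the two parameter sets
gain_simple = T / (1 - R * R_prime)
print(gain_detailed, gain_simple)     # identical under the stated condition
```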
7.11 Trends in Mobile Display Backlighting
Over the last decade, mobile LCD backlighting has evolved extremely quickly in parallel with panel development. Backlights have become cheaper, thinner, lighter, brighter, more uniform, and more power efficient. Currently, the two product segments in which the most change is occurring are cell phones and ultra-mobile PCs (UMPCs). For these devices, the most critical product differentiation points are thinness, weight, and battery life. The demand for very thin, efficient designs for small mobile electronics markets has led device integrators to take a full-systems approach to product development. A highly efficient backlight can enable a reduction in the device power budget of a UMPC or cell phone. This may be used to extend the battery life, to reduce the battery size and weight, or to enable the use of an LCD with better viewing performance that is less transmissive. Today, a number of significant trends are becoming prevalent in mobile device backlights. First, we are seeing drastic reductions in the thickness of the system. Samsung SDI announced that they would begin mass production of a 1.9 mm thick module in Q2 2007. This thin module is enabled through a combination of LCD and backlight components with reduced thickness. They also demonstrated a functional prototype module at a mere 0.74 mm thickness [21], 10% thinner than the previous record of 0.82 mm by Samsung SEC. For reference, a single sheet of LCD glass in mobile devices is commonly between 0.2 mm and 0.6 mm thick. In order to enable these extremely thin LCD
systems, suppliers such as 3M now have available optical films for LCDs with thickness less than 0.07 mm [22], and lightguide manufacturers are introducing thinner designs as well. Another trend in mobile devices is the spread of LED lighting to larger and larger screen diagonals. Although LEDs are currently more expensive than CCFLs in terms of lumens/dollar, white LEDs have been improving in efficacy (lumens/watt), and some now exceed that of CCFLs. There are several additional advantages of LED illumination. One of these is the flexibility of design layout. The smallest commercially available CCFLs for device illumination provide more luminous flux than the smallest LCDs require. An LCD backlight may therefore use only one or two LEDs for illumination at a lesser cost than a single CCFL while still meeting product brightness specifications. Another advantage of LEDs is their small size, enabling efficient light coupling into very thin lightguides. The size and shape of LEDs are flexible design parameters, enabling greater design freedom for backlights. A significant focus in the development of mobile LCD backlights relates to the performance of individual components in the backlight. As has been mentioned above, engineering of lightguides for prism-down systems has enabled greater collimation of light, resulting in a brighter image for a viewer directly in front of the LCD, but often at the expense of uniformity and system cost. In addition to lightguides, the efficiency of a number of other key components is continually improving. For example, back reflector films have greatly increased in spectral flatness and efficiency and therefore have reduced light loss. To date, the highest reflectivity specular reflector is offered by 3M. Its spectral average reflectivity is 98.5%, and its spectrum is shown in Figure 7.11, below.
Figure 7.11 The reflectivity of 3M’s current ESR polymeric reflector with 98.5% average reflectivity.
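Because recycled light makes many passes across the back reflector before it leaves the backlight, small differences in reflectivity compound quickly. The sketch below is a rough illustration of this scaling; the bounce counts are assumptions for illustration, not measured values.

```python
# Sketch: why back-reflector reflectivity matters in a highly recycling cavity.
# If a recycled ray makes on average N reflector bounces before exiting, the
# surviving fraction scales roughly as R**N. N and the lower R values are
# illustrative; 98.5% corresponds to the ESR figure quoted above.

for R in (0.90, 0.95, 0.985):
    for N in (5, 10):
        print(f"R = {R:.3f}, N = {N:2d} bounces -> throughput ~ {R**N:.2f}")
```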
Similarly, recent developments in reflective polarizers have led to MOF polarizer products from 3M that have significantly reduced transmission of the undesired polarization state, as shown in Figure 7.12; reducing this transmission leads directly to improved efficiency [23]. Another development focus for mobile backlight design is in the area of alternative lightguide constructions. Backlights for prism-down systems have been engineered to provide a carefully tuned input light profile to the turning film, to ensure strongly forward-directed output light to the viewer. This type of backlight, often featuring a prismatic back face, is found in the small notebook segment, where specifications have become standardized around a limited number of specific sizes and aspect ratios. The majority of mobile system backlights utilize prism-up backlights, often incorporating very thin BEF films. Finally, a variety of efforts have been focused on developing lightguides that emit polarized light or that direct light efficiently toward a viewer without the use of optical films. While many of these systems have shown promise in laboratory prototypes, they have suffered from reduced efficiency and uniformity, and do not currently play a major role in the market. As mobile computing and communications continue to become more and more a part of daily life, the expectations of performance, power efficiency, form factor and cost of LCD panels and their backlights will continue to increase. Recently, we have seen significant improvements in all of these areas; in particular, the rate of reduction of module thickness has been accelerating. The backlight
Figure 7.12 Spectral properties of 3M’s advanced reflective polarizer film. The bottom curve indicates transmission of normally incident light polarized along the polarizer’s index-mismatched (reflective) axis, while the top and middle curves indicate transmission of normal and obliquely incident light polarized along the polarizer’s index-matched (transmissive) axis.
increasingly provides differentiation between display modules and helps define the user experience. We anticipate that the backlights used in future devices will be thinner, brighter, more robust, and more cost effective than today’s devices.
References
[1] Parikka, M., Kaikuranta, T., Laakkonen, P., Lautanen, J., Tervo, J., Honkonen, M., Kuittinen, M. and Turunen, J. (2001) 'Deterministic diffractive diffusers for displays', Applied Optics, 40(14), 2239–2246.
[2] For example, 3M's Vikuiti ESR and Vikuiti ESR PT products.
[3] For example, 3M's Vikuiti DESR-M.
[4] For example, silver-coated PET products from Asahi, Teijin and others.
[5] For example, diffuser sheets provided by Keiwa and Tsujiden.
[6] Whitehead, L. (1985) University of British Columbia, 'Lighting panel with opposed 45 degree corrugations', United States patent US4542449.
[7] For example, 3M's Vikuiti BEF II 90/50.
[8] For example, 3M's Vikuiti BEF II 90/24.
[9] For example, 3M's Vikuiti BEF III 10T 90/50.
[10] For example, Mitsubishi Rayon's DIAART film.
[11] Horibe, A. and Koike, Y. (1997) 'Application of highly scattering optical transmission polymer to LCDs', Institute of Image Information and Television Engineers Technical Report, 21(19), 17–22.
[12] Weber, M. (2000) 'Giant birefringent optics in multilayer polymer mirrors', Science, 287(5462), 2451–2456.
[13] Weber, M. (1992) '23.3: Retroreflecting sheet polarizer', SID 92 Digest, 427–429.
[14] For example, 3M's Vikuiti DBEF P2 film.
[15] Nakajima, T. (1999) 'Development of NIPOCS, enhancing brightness up to 1.5 times for LCDs', Function and Materials, 19(12), 47–53.
[16] For example, 3M's Vikuiti DRPF and DRPF-H products.
[17] For example, Moxtek's PPL Series polarizers.
[18] Optical Research Associates' LightTools.
[19] Breault Research Organization's ASAP.
[20] Lambda Research's TracePro.
[21] SDI press release, as reported in The Korea Times and other channels, 02-26-2007.
[22] For example, 3M's TBEF2 T 62i (90/24).
[23] Denker, M., Ruff, A., Derks, K., Jackson, J. and Merrill, W. (2006) 'Advanced polarizer film for improved performance of liquid crystal displays', SID Symposium Digest of Technical Papers, 37(1), 1528–1530.
[24] Okumura, T., Tagaya, A., Koike, Y., Horiguchi, M. and Suzuki, H. (2003) 'Highly-efficient backlight for liquid crystal display having no optical films', Applied Physics Letters, 83, 2515.
8 LED Backlighting of LCDs in Mobile Appliances
Josef Hüttner,¹ Gerhard Kuhn,² and Matthias Winter¹
¹ OSRAM Opto Semiconductors GmbH, Regensburg, Germany
² OSRAM China Lighting Ltd (Shanghai Office), China
8.1 Introduction
The consumer electronics, wireless appliance, and automotive systems industries all desire enhanced technology for the manufacture of their mid- to large-sized flat-panel displays – and have devoted considerable resources to pursuing it. In reality, this highly sought-after technology already exists, in the form of light emitting diodes (LEDs). A type of LED – white LED – is already standard for small displays precisely because it ideally meets the prevailing desires for miniaturization and superior operating times in cell phones and other small mobile appliances. But LEDs are an increasingly viable solution for mid- to large-sized displays, as well. Yet, despite this strong potential for LEDs, the dominant technology for medium- and large-sized LCDs remains the cold cathode fluorescent lamp (CCFL). Historically, the popularity of CCFLs was driven largely by their superior lumen/cost ratio. But with ongoing (and significant) improvements in LED output, that ratio differential has narrowed considerably, making LEDs a more viable option than ever before. This gap appears poised to continue closing with each succeeding year.
Furthermore, LEDs possess a variety of performance and other advantages compared to CCFLs. They are based on semiconductor dies and are therefore mercury- and lead-free. They require no high ignition voltages to initiate a gas discharge, which reduces the cost of electronic components while improving electromagnetic compatibility (EMC) and reducing interference (EMI). They possess a superior color gamut (range of available colors); in fact, three LEDs of the colors red, green, and blue are able to cover more than 110% of the common color standard for televisions established by the National Television System Committee (NTSC). They can be turned on or off (switched) in fewer than 100 nanoseconds, minimizing the blurring effects common to CCFLs as requirements for contrast and brilliance increase. Finally, they are vibration- and shock-proof, have exceptionally long service lives, and can operate within a temperature range of −40 to +85 °C. This all leads to an obvious question: what is preventing the broader adoption of LED technology for mid- and large-sized flat-panel displays? To a degree, it's a result of CCFLs being such an established technology for these applications. Beyond that, the main challenges relate to LEDs' thermal management and, as intimated above, perceptions regarding their cost. Nevertheless, because of the ongoing improvements in overall LED performance and cost, combined with their numerous existing advantages over CCFLs, LED manufacturers are increasingly optimistic about their ability to further penetrate the market in the near future. The balance of this chapter delves deeper into the existing and emerging benefits of LEDs for larger mobile appliances, as well as the requirements for BLU systems; the physics of LEDs; and potential LED package solutions for LCD backlighting. The chapter also evaluates the primary potential applications for LEDs, such as cell phones and notebook LCDs.
8.2 Basic Physics of LED Technology
The tremendous advances in bright-LED technology are fueling a revolution that has impacted a wide variety of lighting applications. This revolution is particularly driving the development of blue InGaN LEDs, which are poised to open enormous new markets that were previously inappropriate for LED technology, including LCD backlighting, general lighting, and large-scale signs and displays for videowall applications. This section covers the basics of LED physics, beginning with a brief overview of semiconductor diode and current–voltage characteristics. This is followed by similarly brief overviews of radiative recombination, semiconductor bulk light extraction, the electro-optical characteristics of LEDs, and LED packaging technologies.
8.2.1 History of LEDs
The first LED products were commercialized in the early 1970s. Since then, the luminous flux and efficiency of LEDs have improved by an approximate factor of 20 in each subsequent decade (estimated by authors at OSRAM). Figure 8.1 demonstrates this increase in brightness compared to standard-lamp technologies. In Figure 8.1, it can be seen that LED efficacy has already overtaken that of most of the standard technologies and is still increasing steeply. It goes without saying that a similar decrease can be observed for the cost per lumen.
Figure 8.1 Increase of efficacy of different lamp technologies.
8.3 Basic Physics of Semiconductor Light Emission

8.3.1 Semiconductor Basics [1]
In order to truly understand LEDs, it is important to first understand the physics that governs light emission in semiconductors. Generally speaking, all materials can be categorized as conductors, semiconductors, or insulators, depending on their ability to conduct electricity. The conductivity of a material is a measure of how readily the material allows the flow of an electric current; a high conductivity indicates a material that readily allows the movement of electrons. The band theory of materials explains on a qualitative level the differences between conductors, semiconductors, and insulators. Electrons fill the available energy levels starting from the lowest energies. The highest filled band is called the valence band; electrons in the valence band do not participate in the conduction process. The first unfilled band above the valence band is known as the conduction band. Figure 8.2, below, shows the differences between the band structures of metals, semiconductors, and insulators. Elemental semiconductors are semiconductors in which each atom is of the same type. These atoms are bound together by covalent bonds, in which neighboring atoms share electrons, forming strong bonds. So-called compound semiconductors are formed from two or more elements belonging to groups III and V of the Periodic Table. Other types of semiconductors, called ternary semiconductors, are formed by the addition of small quantities of a third element (AlGaAs is an example of a ternary semiconductor). Likewise, the addition of yet another element results in the formation of quaternary semiconductors, such as AlInGaP. P-type or n-type semiconductors can be created by so-called 'doping', the addition of impurity atoms to an essentially pure semiconductor material. N-type semiconductors are doped with atoms that contribute an extra electron to the crystal; these dopant atoms are called 'donors'. P-type semiconductors, on the other hand, are doped with 'acceptor' atoms that have one fewer valence electron, which creates holes.
Figure 8.2 Energy band differences.
8.3.2 The p–n Junction and Photons [2, 3]
A p–n junction is formed at the interface between p-type and n-type semiconductor material. The injection of carriers across this junction is the critical mechanism for LED operation. A schematic sketch of the band gap, both with and without forward bias, is shown in Figure 8.3, below. Under forward bias, electrons are injected into the p-type semiconductor and holes are injected into the n-type semiconductor; their recombination results in near band-gap radiation. This corresponds to the emission of photons, whose wavelength is correlated to the size of the band gap, which sets the amount of energy the photon possesses. Both the band gap and temperature have a significant influence on the probability of radiative recombination occurring, as well as on the wavelength of the emitted photon. The wavelength is determined by the following equation, in which λ is the wavelength of the photon and Eg the band gap given in electronvolts (eV):

λ ≈ 1.24 μm / Eg
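As a rough illustration of this relationship, the sketch below converts a few representative band-gap energies to emission wavelengths; the Eg values are approximate, typical figures rather than data from this chapter.

```python
# Sketch: photon wavelength from band-gap energy, lambda[um] ~ 1.24 / Eg[eV].
# The example band-gap values are rough, representative figures.

def wavelength_nm(eg_ev):
    return 1240.0 / eg_ev   # 1.24 um*eV expressed in nm*eV

for name, eg in [("AlInGaP (red)", 1.9), ("InGaN (green)", 2.3), ("InGaN (blue)", 2.7)]:
    print(f"{name}: Eg = {eg} eV -> lambda ~ {wavelength_nm(eg):.0f} nm")
```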
Temperature has a great impact on the statistical distribution of electrons and holes within a semiconductor: the higher the temperature, the broader the distribution of the electrons and holes. As a result, the temperature influences the radiative recombination between electrons and holes in terms of both wavelength (the effective 'distance' across the band gap) and efficiency (the number of injected carriers required per radiative recombination event). See Figure 8.4, below, for more detail. The temperature of this active region is referred to as the 'junction temperature'. LED efficiency is only a single, albeit important, consequence of this parameter. Another critical consequence relates to lifetime, with respect to the degradation of the device: LED encapsulation and the integrity of the housing are especially sensitive to temperature and blue light. Typical LED efficiencies are shown in Figure 8.5, below.
Figure 8.3 The p–n junction under (a) zero bias and (b) forward bias. Under forward bias conditions, minority carriers diffuse into the neutral regions where they recombine. Reproduced from [1] by permission of Cambridge University Press.
8.4 LED Efficiency and Light Extraction
One important measure of LED efficiency is the internal quantum efficiency, which refers to the radiative recombination within the LED – the conversion of electrical carriers (electrons and holes) into light (photons). Non-radiative recombination, by contrast, converts the carrier energy into phonons (heat). Another critically
Figure 8.4 Carrier distribution at (a) low and (b) high temperatures. Recombination probability decreases at high temperatures due to reduced number of carriers per dk interval. Reproduced from [1] by permission of Cambridge University Press.
Figure 8.5 Typical efficiencies of LEDs as a function of emitted wavelength (dotted line: eye sensitivity).
important characteristic of LEDs is their so-called ‘wall plug’ efficiency. In fact, every process in the LED value chain contributes to this level of efficiency. Figure 8.6 illustrates the entire process chain for opto emitters, and how each process step contributes to wall plug efficiency.
Figure 8.6 Process chain for opto emitters and contribution to wall plug efficiency.
While light can be efficiently generated inside an LED (internal quantum efficiency ≈ 90%), due to the large difference in refractive index between the semiconductor and the surrounding medium, only a small percentage of the generated photons can actually escape from the semiconductor material.
Semiconductor materials have a refractive index of n = 3–3.5, while the surrounding medium has a refractive index of only n = 1–1.5. This means that the semiconductor surface acts as a nearly perfect mirror and, as such, specific actions are required to increase light extraction (ηextraction).
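A minimal sketch of the escape-cone argument, assuming a planar interface, an isotropic internal photon distribution, and no Fresnel or absorption losses, shows why so little light leaves an unstructured chip; the index values are illustrative.

```python
# Sketch: escape-cone estimate for light extraction at a planar semiconductor
# surface. Only rays within the critical angle theta_c = arcsin(n_out/n_in) can
# leave; for an isotropic internal source the extracted fraction per surface is
# roughly (1 - cos(theta_c)) / 2, ignoring Fresnel losses.

import math

def escape_fraction(n_in, n_out):
    theta_c = math.asin(n_out / n_in)
    return theta_c, (1 - math.cos(theta_c)) / 2

theta_c, frac = escape_fraction(n_in=3.5, n_out=1.0)   # semiconductor to air
print(f"critical angle ~ {math.degrees(theta_c):.1f} deg, extracted ~ {frac:.1%}")
theta_c, frac = escape_fraction(n_in=3.5, n_out=1.5)   # semiconductor to encapsulant
print(f"critical angle ~ {math.degrees(theta_c):.1f} deg, extracted ~ {frac:.1%}")
```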
8.4.1 Chip Technology
The aim of chip development is to increase extraction efficiency. Progress toward this goal has been significant in recent years, thanks in large part to the shift from absorbing substrate materials to transparent base materials. In addition, extraction efficiency has doubled as a result of breakthroughs that enable the geometrical shaping of chips. This breakthrough, branded as the ATON® chip, was originally dependent on strong reflector designs to direct the extracted light toward the top of the LED package. So-called flip-chip technologies were eventually developed to mitigate this requirement. Since then, an ongoing series of additional developments have led to the creation of even more efficient and powerful solutions, such as the Thinfilm chips described in Chapter 9.
8.4.2 Thinfilm and ThinGaN® Technology
Common light sources for LED backlighting are based on a new platform generation of high-current and high-brightness chips, called 'Thinfilm' AlGaInP for red light and 'ThinGaN®' for blue and green light (Figure 8.7). These chips have inspired several new designs that, in turn, have led to high efficiency and outstanding power levels. Due to their design, these chips are pure top surface emitters with a true Lambertian radiation pattern, offering all of the benefits associated with this property [4–7].
Figure 8.7 Different views of emitting Thinfilm chips.
Readers should note that, in the following sections, 'Thinfilm' AlGaInP chips as well as 'ThinGaN®' are referred to as Thinfilm chips unless otherwise stated.
8.4.3 Design and Manufacturing
Thinfilm chips are manufactured as flip chips, following standard die-manufacturing procedures. The top surface is then metallized, after which the chip is literally 'flipped' over and attached to a carrier
Figure 8.8 Manufacturing of Thinfilm chips.
material. The base substrate material (now on top) is then removed and the newly exposed top surface is roughened (Figure 8.8). The resulting Thinfilm chip comprises a substrate-less thin active layer centered between a rough surface on the top and a highly reflective mirror on the bottom which is attached to the carrier substrate [4].
8.4.4 Benefits
The special design of Thinfilm chips increases optical efficiency (in terms of light output per electrical input power) and decreases forward voltage (otherwise characterized as an increase in electro-optical efficiency). Because, as previously mentioned, Thinfilm chips are pure top surface emitters (Figure 8.9), the reflector has less importance than in conventional volume-emitting chips. Its top surface emission
Figure 8.9 Top/side emission ratio of a Thinfilm chip (right) compared to a conventional Sapphire chip (left).
enables excellent optical designs to be implemented – for example, the implementation of light-spreading optics in LED backlight systems that minimize stray light and avoid hot spots. As an additional consequence of pure surface emission, luminous flux scales up with the size of the chip. It is therefore possible to implement very efficient large chips for high-flux operation at high currents. Conventional volume-emitting chips with transparent substrates do not scale up at the same ratio and lose efficiency with increasing size [4].
8.5 Packaging Technologies and White LED Light
Almost all LED chips are mounted in packages that provide electrical leads, a transparent encapsulant for light extraction, and a housing body to mount the LEDs. The package design typically features surface-mount technology (SMT), which enables the LED to withstand mechanical and thermal stress during printed circuit board assembly.
8.5.1 Creation of White Light
LEDs emit well-saturated colors because the light is created directly inside the semiconductor. Because white light is actually a mixture of several colors, there are many ways to create it, all of which are based on additive color mixing. Two of those methods follow.
Method 1: Using Red/Green/Blue LEDs
This method is described in Figure 8.10. In order to achieve the required white color, the brightness of every color has to be adjusted and kept stable. Using this method, all colors within the color gamut defined by the emission spectra of the LEDs can be created.
Figure 8.10 RGB color mixing to create white light. (See Plate 8).
In order to achieve a uniform white appearance on a backlight, the light of the various LEDs must be mixed and then guided through the LCD (liquid crystal display). If the requirements for the white appearance are very tight, it is necessary to stabilize the brightness of each LED, over both temperature and lifetime, by using an optical sensor and an electrical feedback loop.
Method 2: Using Blue LED Emission and Phosphor Conversion
In this method, the white light is once again created by mixing several colors. The light of a blue-emitting chip is guided through phosphor particles, which absorb the blue wavelength and emit light in a longer-wavelength range that contains less energy per photon. This effect is commonly called 'phosphor conversion'. Depending on the requirements, either a single yellow-emitting phosphor or a mixture of two (or more) phosphor materials is used. The result is light that appears white to the human eye. In commonly available LEDs, the phosphor is mixed into the resin which covers the chip (Figure 8.11). In standard white LEDs, a yellow phosphor is used, and only two colors are combined: the blue from the LED and the yellow from the phosphor. For the resulting spectrum, refer to Figure 8.14, below.
Figure 8.11 Principle of phosphor conversion.
Using this method, the LED manufacturer has to control the amount of phosphor material in order to achieve the desired white color appearance. For the user, the LED can be driven like any LED with saturated colors. Because a Thinfilm chip emits light only from the surface, phosphor material can be placed only on the top side of the chip. This is called CLC, or chip level conversion. The principle is illustrated in Figure 8.12, below. A thin phosphor layer is placed on top of the chip, which results in a uniform and stable white color appearance. This type of white chip features the highest luminance levels, giving it significant advantages when used in tandem with any kind of optics.
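Even a perfect phosphor gives up some energy in conversion, because the emitted photon carries less energy than the absorbed blue photon. The sketch below estimates this Stokes-shift loss using typical, assumed wavelengths for the blue pump and the yellow emission peak; neither value is taken from this chapter.

```python
# Sketch: intrinsic energy loss of phosphor conversion (the Stokes shift).
# Converting a blue photon to a longer, lower-energy wavelength gives up the
# energy difference as heat, even for a perfect phosphor. Wavelengths are
# typical, illustrative values for a blue pump and a yellow phosphor peak.

blue_nm, yellow_nm = 450.0, 580.0
stokes_loss = 1 - blue_nm / yellow_nm   # photon energy scales as 1/wavelength
print(f"Stokes-shift loss per converted photon ~ {stokes_loss:.0%}")   # ~22%
```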
8.6 Requirements and Designs for LED-Based Backlight Solutions
LEDs are often referred to as 'point light sources': the light of an LED is created in a very small volume and must therefore be distributed evenly in order to create the uniform backlight
Figure 8.12 Principle of chip level conversion.
illumination necessary for mobile flat-panel displays. While this may sound like a disadvantage, it actually gives LEDs tremendous flexibility, allowing them to be adapted to a variety of requirements and applications. The resulting availability of LEDs in different packages, chip sizes, colors, and brightnesses helps make them a truly cost-effective solution for many kinds of backlight applications. In general, LED backlight solutions are highly flexible in terms of backlight size, and can be extended to any diagonal. If the same LED packing density (LEDs per unit area) is maintained, then luminance remains constant for all backlight unit (BLU) sizes. To adapt the luminance to specific requirements, either the packing density or the drive current can be changed [8].
8.6.1 Requirements for BLU Systems
While LEDs have yet to become the dominant technology for backlight design, the wide range of actual and potential benefits described above is leading to a growing number of important specifications. In order to target the mass-production market, basic guidelines must be established to define the specific requirements of LED backlight units. These guidelines have already been established for the LED package design of BLU systems, as well as for the semiconductor die. For example, small backlight applications such as mobile phones and personal digital assistants (PDAs) require luminance levels between 60 cd/m² and 150 cd/m². By using either white LEDs or RGB LEDs (depending on the specific requirements of the application), the brightness level of these devices can be matched to the brightness of the environment in which they are being used. This can be realized by adapting the luminous flux of the LED with the help of an ambient light sensor. Ultimately, the resulting brightness is controlled by the size of the die; LEDs enable thickness levels as low as 0.2 mm (with light guides) for the complete backlight system. As far as larger LCD screens are concerned, the luminance level typically should fall between 200 cd/m² for monitor applications and 500 cd/m² for large-area TV applications, with a uniformity of better than 85% in brightness and color. The wavelength ranges of the red, green, and blue LED dies need to be specified in order to achieve a color gamut of at least 100% NTSC. The mixing ratio of the luminous flux of RGB needs to be chosen for color temperatures between 6,500 K and 12,000 K, depending on specifications and customer preferences in different regions.
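For a feel of how such luminance targets translate into LED count, the sketch below performs a rough sizing calculation. All of the efficiency and flux figures are illustrative assumptions, not values given in this chapter, and the Lambertian-emission approximation is a simplification.

```python
# Sketch: a rough LED-count estimate for a small backlight, under stated
# assumptions: Lambertian display emission (flux = pi * L * A), a lumped LCD
# transmittance, a lumped backlight optical efficiency, and an assumed flux
# per white LED. All numbers are illustrative placeholders.

import math

def leds_needed(target_cd_m2, area_m2, lcd_transmittance, bl_efficiency, lm_per_led):
    display_flux = math.pi * target_cd_m2 * area_m2            # lm leaving the display
    led_flux = display_flux / (lcd_transmittance * bl_efficiency)
    return led_flux, math.ceil(led_flux / lm_per_led)

# Example: 1.5" (4:3) display, 100 cd/m2 target, 7% LCD transmittance,
# 60% backlight optical efficiency, 4 lm per white LED at its rated current.
area = 0.0305 * 0.0229
flux, n = leds_needed(100, area, 0.07, 0.60, 4.0)
print(f"LED flux needed ~ {flux:.1f} lm -> about {n} LED(s)")
```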
All LCD backlights, regardless of application, share one requirement: the number of LEDs must be minimized in order to ensure competitive cost levels and high-quality backlights with very low levels of failures in time (FIT). Special long-life housing materials and robust package and die designs have to be considered to achieve the frequently desired lifetimes ranging from 20,000 hours to more than 50,000 hours, depending on the operating conditions [8]. Going forward, it is clear that different LED arrangements and backlight technologies must be developed in order to adapt to BLU sizes and customer requirements pertaining to power consumption, brightness, thickness of the backlight unit, uniformity, available space, preferred drive conditions, cost, and thermal design capabilities.
8.6.2 LED Component Design
There are two different ways in which the LED components can be arranged (see Figure 8.13): by a light guide arrangement (left), or by a direct backlight arrangement (right).
Figure 8.13 Basic LED backlight technologies.
The light guide arrangement (1) leads to very thin (from 0.2 mm to 10 mm) and uniform backlight units. This design will generally yield good optical efficiencies for backlights up to about a 20-inch diagonal. With the direct backlight arrangement (2), backlight units are thicker (between 25 mm and 40 mm), but the result is low power consumption, good thermal design, and excellent scalability. This type of design is practically limitless in terms of the diagonal. Both technologies can be adapted to different specifications and requirements.
Given that LED backlights use multiple primaries with different opto-semiconductor devices, a color and brightness control system is highly recommended, because LEDs of different wavelengths have different thermal and aging behaviors. To ensure stable white-point settings over the operating life, a sensor feedback loop is needed [8]. For both design types, an optimum must be found between the following parameters: the number of LEDs, which largely determines cost; power consumption, which also influences the effort required for thermal management; and size and thickness, which determine space requirements and brightness/color uniformity. These three parameters are interrelated, with each one strongly influencing the others. Therefore, it is imperative from the outset of the design process that backlight designers consider which of these characteristics should be emphasized.
8.7 LED-Backlighting Products
The following section discusses LED product solutions for LCD backlighting. First, the different benefits of RGB and white LEDs are discussed, followed by an overview of the Micro SIDELED® package and the benefits of an ambient light sensor.
8.7.1 White versus RGB Backlight Units
As mentioned in the introduction to this chapter, among the advantages of LED-backlighting solutions compared to CCFLs are the lack of EMI complications, the color gamut, and the battery life. There are specific LED products that are best suited to generate each of these benefits. The highest efficiencies for LED packages can be achieved by using a blue wavelength and a converter-doped casting material. These GaN-based LEDs are covered by a yellowish phosphor coating, which converts part of the blue LED light efficiently to a broad spectrum centered at about 580 nm (yellow). The combination of yellow and blue gives the appearance of white. Figure 8.14, below, shows the spectral data of white and RGB LEDs. In addition, the V(λ) curve is plotted to demonstrate the sensitivity of the human eye. Since eye sensitivity (in bright environments) has its maximum at 555 nm, the efficiency of any light source is dominated by the intensity in that range. As a result, the efficiency of RGB LEDs is dominated by the emission of the green LED, while white LED efficiency is determined by the conversion of blue LED light into the broad yellowish spectrum. The impact of LED efficiency on the whole system is covered in more detail later in this chapter. Another very important characteristic of the whole LCD system is the color gamut. The Commission Internationale de l'Éclairage (CIE) defines a two-dimensional color space, which represents all possible colors visible to the human eye. This CIE diagram can be used to visualize the color gamut obtainable from a particular light source. In order to evaluate different light sources with respect to the color gamut in an LCD system, it is important to understand the technical background of this technology. As described elsewhere, the LCD system comprises color filters and a matrix display to address each single pixel. When using white light sources, the spectrum must be evaluated with respect to the transmission spectra of the color filters. After this evaluation, the CIE color coordinates can be calculated and displayed. For a typical notebook display the different color gamuts are shown in Figure 8.15, below, in comparison to the NTSC standard. Comparing these areas, the color gamut can be calculated to be
Figure 8.14 Spectra of different LEDs and eye sensitivity.
roughly 45% for CCFLs and marginally higher for white LEDs. The color space for RGB LEDs can be more than 110% of the NTSC standard. This comparison shows that color filters have to be adapted to the light source in order to maximize system efficiency; otherwise, system efficiency will be reduced significantly. Numerous studies demonstrate that RGB LEDs are the optimal solution for achieving the highest possible color gamut.
Figure 8.15 Color gamut comparison between different light sources.
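The gamut comparison of Figure 8.15 amounts to comparing triangle areas in the CIE 1931 xy diagram. The sketch below uses the standard NTSC 1953 primaries together with hypothetical display primaries, chosen only to land near the ~110% figure quoted above; it is not a measured LED gamut.

```python
# Sketch: comparing a display's color gamut area with NTSC in the CIE 1931 xy
# plane. The NTSC 1953 primaries are standard values; the "RGB LED" primaries
# are hypothetical coordinates after the color filters, for illustration only.

def gamut_area(primaries):
    (xr, yr), (xg, yg), (xb, yb) = primaries
    # Shoelace formula for the triangle spanned by the three primaries.
    return 0.5 * abs(xr * (yg - yb) + xg * (yb - yr) + xb * (yr - yg))

ntsc = [(0.67, 0.33), (0.21, 0.71), (0.14, 0.08)]
rgb_led_display = [(0.69, 0.31), (0.19, 0.72), (0.145, 0.06)]   # hypothetical

ratio = gamut_area(rgb_led_display) / gamut_area(ntsc)
print(f"gamut ~ {ratio:.0%} of NTSC")
```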
These LEDs do not need color filters in a conventional sense. In certain systems, each color is switched on quickly for only a brief period, and only the matrix display defines the color of each single pixel. This approach can reduce cost and the brightness loss caused by the three color filters. It is important to bear in mind, however, that such an approach requires highly developed electronic devices to achieve optimal results.
8.7.2 Micro SIDELED® (LW Y1SG and LW Y3SG)
Micro SIDELED® is a phosphor-converted white LED available in two sizes – 0.8 mm (LW Y1SG) and 0.6 mm (LW Y3SG). This small SMD LED emits its light to the side by means of a reflector (Figure 8.16). Therefore, it is fairly easy to direct light into the light guide of an LCD backlight.
Figure 8.16 Micro SIDELED®.
Micro SIDELED® is ideal for notebook applications, and is now a standard product used in small LCDs in cell phones, digital cameras, and other consumer electronics. A variety of shades of white is available (Figure 8.17, below). One could use, for example, group 'EL' to achieve a more bluish appearance on the backlight, while group 'JM' would result in a more yellowish hue. Micro SIDELED® is sorted by its luminous intensity and color coordinates. This kind of binning is very tight: on one reel, there is only a single intensity bin and a single white bin. Therefore, when using this LED for backlighting, especially in the larger displays common to notebooks, it is strongly recommended to mount only LEDs from one reel. This precaution will help to achieve good color and brightness uniformity over the backlight and LCD. When operating the LED, it is recommended to use pulse-width modulation (PWM) for dimming, in order to avoid color shifts (Figure 8.18, below). It is further advisable not to exceed the grouping current of 20 mA for the peak current; the lower the peak current, the better the LED efficiency, which will help reduce power consumption. LED lifetime is less critical for mobile phones, but it is extremely important for notebooks. In either case, it is realistic to expect similar or better performance than would be expected when using CCFLs for backlighting. This means that an LED lifetime of at least 15 kh is required. This is where the LW Y3SG excels, because it can offer a lifetime of up to 40 kh depending on driving conditions. Further details of the optical characteristics of LEDs can be found in the corresponding datasheets. For further details of Micro SIDELED®, refer to the datasheet section of www.osram-os.com.
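A minimal sketch of the recommended drive scheme follows, assuming brightness scales linearly with duty cycle at a fixed peak current equal to the 20 mA grouping current mentioned above; the helper function and values are illustrative and do not represent an OSRAM driver interface.

```python
# Sketch: PWM dimming at a fixed peak current, as recommended above to avoid
# the color shift seen with direct-current dimming. Only the duty cycle is
# varied; the peak current stays at the assumed 20 mA grouping current.

def pwm_duty_cycle(target_fraction, peak_ma=20.0, max_peak_ma=20.0):
    """Return duty cycle and average current for a target brightness fraction."""
    if peak_ma > max_peak_ma:
        raise ValueError("peak current exceeds the grouping current")
    duty = max(0.0, min(1.0, target_fraction))   # brightness ~ duty cycle at fixed peak
    return duty, duty * peak_ma

for brightness in (1.0, 0.5, 0.2):
    duty, avg_ma = pwm_duty_cycle(brightness)
    print(f"brightness {brightness:.0%}: duty {duty:.0%}, average current {avg_ma:.1f} mA")
```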
8.7.3 Ambient Light Sensors: Product Introduction
Sensors are now adding functionality, reducing power consumption, and offering more and more convenient features in a wide range of mobile devices such as cell phones, handhelds, and digital
Figure 8.17 Micro SIDELED® white binning scheme.
Figure 8.18 White shift versus direct current.
cameras. The most widely used sensors are the ones that adjust display backlighting according to ambient lighting conditions. Comparatively new are orientation sensors that detect whether a mobile device is being held vertically or horizontally. Consider the familiar, real-world circumstance of taking photographs. Taking outdoor photos with cell phones or even digital cameras is often complicated by bright sun, which can make it difficult to view the device's display panel. Why? Because the device's backlighting, quite logically, typically defaults to medium lighting levels in order to preserve battery power. Frequently, this presents no problem. But in cases of bright sunshine or, conversely, extreme darkness, displays are often too dark or too bright to accommodate the desired function. Ambient light sensors help avoid this effect by measuring the ambient brightness as perceived by the human eye. This measurement causes the display backlighting to adjust accordingly. The overall result is more effective use of available power, because, for approximately 40% of the time, the display backlighting is otherwise too bright. The concept of 'human eye perception' has been raised more than once in this chapter. It is important for sensors to be sensitive to the same wavelengths that the human eye perceives, because light sources emit more than just visible light; they also emit wavelengths that the human eye cannot see. Normal photo detectors also react to this 'invisible' light. They therefore 'see' the surroundings as brighter than the human eye does, and make incorrect adjustments to the lighting (Figure 8.19; Figure 8.20, below). As a result, displays are often too difficult to read.
Figure 8.19 Conventional silicon photo detectors (blue curve) are also sensitive to invisible infrared light. The left-hand curve shows the spectral sensitivity of the human eye.
Photo detectors have been gradually modified so that their spectral sensitivity (the wavelengths to which a sensor is sensitive) is comparable to that of the human eye. OSRAM was the world’s first manufacturer, with the SFH 5711, to achieve optimum matching with the sensitivity of the eye (Figures 8.21 and 8.22, below). This new sensor measures the ambient brightness perceived by the eye so precisely that the lighting can be adjusted almost seamlessly. Display backlighting can therefore be controlled so that it is always as easy as possible to read. These sensors are consequently also of interest for devices for which energy savings are not the primary concern, such as all types of flat screens and large-format advertising panels.
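As a rough illustration of how such a sensor reading might be applied, the sketch below (Python) maps an eye-weighted ambient level to a backlight setting with simple thresholds. The lux breakpoints and duty cycles are invented for illustration and are not taken from the SFH 5711 datasheet.

# Illustrative ambient-light-to-backlight mapping (all threshold values are assumptions).

def backlight_duty_from_ambient(ambient_lux: float) -> float:
    """Map an eye-weighted ambient illuminance to a PWM backlight duty cycle (0..1)."""
    breakpoints = [           # (upper ambient limit in lux, duty cycle)
        (10.0, 0.10),         # dark room: dim the backlight
        (200.0, 0.40),        # typical indoor lighting
        (1000.0, 0.70),       # bright indoor / overcast outdoor
        (float("inf"), 1.00), # direct sunlight: full brightness
    ]
    for upper, duty in breakpoints:
        if ambient_lux <= upper:
            return duty

if __name__ == "__main__":
    for lux in (5, 150, 800, 20000):
        print(f"{lux:>6} lx -> duty {backlight_duty_from_ambient(lux):.2f}")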
Figure 8.20 Because some light sources emit a large amount of infrared light, a silicon detector receives a much higher signal than an ‘eye-like’ detector would.
8.8 LED Backlighting of Notebook LCDs
Notebook users in particular demand long battery life. This is imperative when, for example, working on a long-distance flight without an external power supply. Such a performance requirement calls for a holistic, energy-saving system architecture. At present, the display subsystem accounts for the highest proportion of the power consumed inside a notebook (about 30%). While the LCD itself needs only a small fraction of this power, the backlight consumes a great deal more. This underscores the great potential of modern and efficient LED technology, which can replace conventional backlighting and lead to a drastic reduction in power consumption, as demonstrated in the following example. The requirements concerning color reproduction are not very high at the moment, so there is no need for RGB backlighting; white LEDs, such as Micro SIDELED®, can be used instead.
Figure 8.21 Development of ambient light sensors at OSRAM OS: relative spectral sensitivity [%] versus wavelength (400–1100 nm) for the V-lambda curve of the eye, the SFH 3410, SFH 2430, SFH 5711 and SFH 3710 sensors, and a standard silicon detector. Starting with standard silicon photo detectors, the ambient light sensors were constantly adapted to the sensitivity curve of the human eye. The optimum response is achieved by the SFH 5711.
Figure 8.22 SFH 5711, the new ambient light sensor from OSRAM.
Example of Indirect LED Backlighting for a 15.4″ Notebook Display
For the following test, two conventional, identical notebooks were used. One was modified with LED backlighting, while the other was not. This affords a perfect opportunity for a direct performance comparison of notebooks with and without LEDs. An overview of the components is shown in Figure 8.23.
Figure 8.23 Overview of the components.
This picture shows the LCD and the backlight unit as separate entities. The backlight comprises the CCFL and the light guide with its associated optical films (diffuser sheet, brightness enhancement films, etc.). The retrofit mainly consisted of replacing the CCFL with a PCB strip carrying a linear LED arrangement. An external power supply was used in this case, but it is generally recommended that the existing onboard voltages in the notebook be used by integrating appropriate LED drivers. When compared side by side, it is difficult to distinguish between the CCFL and LED versions (see also Figure 8.24, below). This is also verified by the measurement results. For both displays, the same maximum luminance of approximately 180 nits was achieved, with a uniformity of approximately 80% (Figure 8.25, below). The color gamut for both versions was about 45% of NTSC (Figure 8.26, below).
Figure 8.24 Display with conventional backlighting (left) and with LED backlighting (right).
Figure 8.25 LCD with LED backlight: luminance and uniformity measurement.
Figure 8.26 Color gamut of CCFL vs. white LED backlighting in notebooks.
In notebook LCD panels, the color filters are overlapping and wider than in desktop panels, which leads to a higher transmission at the cost of a smaller color gamut (Figure 8.27, below).
Figure 8.27 Notebook LCD color filters.
Ultimately, the power consumption of the two notebooks was compared. In Figure 8.28, it is apparent that the notebook with LED backlighting consumed significantly less power at all luminance levels.
Figure 8.28 Power consumption [W] versus luminance [nits] of CCFL vs. white LED backlighting.
For example, at 60 nits, where the CCFL consumed 3.4 W, the LED power consumption was only 0.7 W. This level of difference can extend battery life by 60 minutes or longer. Combined with an ambient light sensor (ALS) which automatically adjusts the screen luminance, even further power savings are possible. The table in Figure 8.29 below shows the number of LEDs and typical power consumption levels for various LCD diagonals. Remarkably, the power consumption is less than 1 W at 60 nits for all display sizes.
Figure 8.29 Specification for various LCD diagonals.
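The savings quoted above are easy to check with back-of-the-envelope arithmetic. The sketch below (Python) reproduces the 60 nit comparison (3.4 W for the CCFL versus 0.7 W for the LED strip) and estimates the extra run time; the battery capacity and the power drawn by the rest of the notebook are illustrative assumptions, not measurements from this chapter.

# Back-of-the-envelope battery life estimate at the 60 nit operating point.

CCFL_BACKLIGHT_W = 3.4   # measured value quoted in the text at 60 nits
LED_BACKLIGHT_W = 0.7    # measured value quoted in the text at 60 nits

BATTERY_WH = 50.0        # assumed notebook battery capacity (illustrative)
REST_OF_SYSTEM_W = 9.0   # assumed power draw of everything except the backlight

def runtime_hours(backlight_w: float) -> float:
    return BATTERY_WH / (REST_OF_SYSTEM_W + backlight_w)

if __name__ == "__main__":
    t_ccfl = runtime_hours(CCFL_BACKLIGHT_W)
    t_led = runtime_hours(LED_BACKLIGHT_W)
    print(f"CCFL backlight: {t_ccfl:.2f} h")
    print(f"LED backlight:  {t_led:.2f} h")
    print(f"Extra run time: {(t_led - t_ccfl) * 60:.0f} minutes")

With these assumed numbers the difference comes out at roughly an hour, consistent with the ‘60 minutes or longer’ figure quoted above; the exact gain obviously depends on the battery and on the rest of the system load.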
8.9 Summary and Outlook
There is ample evidence that LEDs already provide a strong alternative to CCFLs for a variety of LCD-backlighting applications in mobile appliances, and that their potential is expanding rapidly. Based on the application requirements, the best LED solution can be chosen by considering the required thickness of the entire system, the performance of the LED backlight, and the total power consumption of the light source. As manufacturers are increasingly realizing, application requirements should also take actual use into account. Consider so-called ultra-mobile appliances, such as notebook personal computers, which benefit from the lowest power consumption possible in order to extend battery life. For this kind of application, the light source has to be as efficient as possible. As of today, this efficiency is best achieved with single-chip white LEDs, such as the white Micro SIDELED®. Furthermore, the demand for very small solutions is also growing rapidly. The very low profile of Micro SIDELED® (only 0.6 mm in height) makes it very viable where miniaturization is a goal. As described in this chapter, the achievable color gamut of such solutions is currently limited. For ultra-mobile notebook personal computers, the color gamut is not a priority. But for other applications such as picture processing, gaming, or television, higher color gamuts – and, in particular, higher color saturation – are highly desirable. The highest color gamut available today can be achieved through dedicated RGB solutions. As demonstrated in the CIE diagram above, LEDs can achieve more than 110% of the NTSC standard. If high on-screen contrast is also targeted, direct backlighting solutions made from RGB LEDs can deliver superior results. Today’s LCD panels use color filters. As these color filters are limited in transmission and add significant cost, especially for RGB backlighting solutions, field-sequential color should be targeted. Such solutions would lead to at least a doubling of efficiency when compared to solutions using color filters. The main stumbling block to this realization is the switching time of the LC. As each color has to be addressed separately and the picture must not appear to flicker or break up (an artifact known as color breakup, caused by presenting three separate single-color images in sequence), significant improvements have to be targeted at the LC as well. The inherently faster response of optically compensated bend (OCB) LCD panels makes these ideal candidates for use with color-field-sequential technology. Furthermore, LEDs possess different temperature and aging characteristics, which means that the white point of the display has to be actively controlled. Brightness and color feedback sensors are state of the art, but the additional cost to the total system has to be considered. All of these factors lead to the assumption that RGB backlighting solutions will primarily target the high-end market. Finally, in order to cover high-color-gamut needs within low-cost applications, so-called ‘high-color-gamut white’ LEDs are likely to be developed in the future.
Such LEDs will consist of a blue LED with red and green phosphors. They would combine the advantages of a single-chip solution – simple driving conditions, well-matched temperature and aging behavior, and no need for feedback sensors – with a color gamut lying between those of today’s white and RGB LED solutions.
References
[1] Schubert, F. (2006) Light Emitting Diodes, Cambridge: Cambridge University Press.
[2] Nakamura, S. (2001) Introduction to Nitride Semiconductor Blue Lasers and Light Emitting Diodes, London: Taylor & Francis.
[3] Dakin, J. and Streubel, K. (2006) Handbook of Optoelectronics, London: Institute of Physics.
[4] Kuhn, G., Groetsch, S., Breidenassel, N. et al. (2005) ‘A New LED Light Source for Projection Applications’, SID Conference 2005, 58.3.
[5] Schmidt, W., Scherer, M., Jaeger, R. et al. (2001) ‘Efficient light-emitting diodes with radial outcoupling taper at 980 and 630 nm emission wavelength’, Proc. SPIE LEDs: Research, Manufacturing and Applications V, 4278, 109–118.
[6] Illek, S., Jakob, U., Ploessl, A. et al. (2002) ‘Buried microreflectors boost performance of AlGaInP LEDs’, Compound Semiconductors, 8(1), 39–42.
[7] Baur, J., Hahn, B., Fehrer, M. et al. (2002) Phys. Stat. Sol. (a), 194(2), 399–402.
[8] Zeiler, M., Ploetz, L., Schwedler, W. and Ott, H. (2005) ‘Highly Efficient LED Backlight Solutions for Large LCDs’, 1st Crystal Valley Conference LCD Backlight 2005, Korea, 4.2, 51–57.
9 Advances in Mobile Display Driver Electronics
James E. Schuessler, National Semiconductor, Santa Clara, California, USA
9.1 Introduction
Mobile displays are undergoing an accelerating pace of innovation, both in evolutionary improvements to mainstream displays and in revolutionary display types that are seeing volume production for the first time. In fact, it may seriously be argued that small-format mobile displays are experiencing more diversity than any other segment of the vast $100bn-plus [1] display market. Over the past few years, not only have conventional passive and active matrix LCD panels undergone tremendous advances, but completely new display technologies such as electrophoretic, organic light emitting diode (OLED) and organic electro-luminescent (OEL) displays have also gone into production. Even as active-matrix LCDs continue to dominate the main viewing display in handheld devices, a host of even newer technologies are vying for secondary, always-on status. These include displays based on electrowetting (a principle that changes the surface tension of a liquid in the presence of an electric field) and interferometric modulation (utilizing MEMS structures to control the constructive and destructive interference of light waves). The pace of change shows no signs of slowing, as the 1 bn unit per year [2] (and growing) cellular handset market attracts more venture capital for ever newer display concepts.
9.2 Rapid Evolution
Small mobile display technology has evolved in lock step with displays used for cellular telephones, due to the immense volume of small displays used for this broad market segment. Early on, vacuum fluorescent (VF) and character-based LED displays were used. These both gave way to LCD due to their higher power consumption, size, electro-magnetic interference (EMI) or limited information density, although VF displays held on at one manufacturer for a surprisingly long time. Some of the first LCD displays were also character-based and categorized by the number of characters and lines each could display (e.g. three lines of 10 characters). The characters were limited to numbers at first, but then matrices were developed that could display the Roman alphabet as well. However, these were again limited in information density and would not work with non-Roman character sets. Graphic LCD displays were the solution, based on an array of small uniform dots, and by the late 80s were being used in some of the first portable computers. Since the dots – the individual pixels – could be addressed randomly, any character or image could be displayed. Graphic LCD displays in the early 90s were mainly on/off or black and white (B/W) arrays of pixels using passive matrix LCD drive schemes. Ramping quickly, grayscale passive matrix LCD (PM-LCD) displays allowed much better icons and images to be displayed, and in September 1999 color filters overlaid on these grayscale dots allowed the first color LCD displays to arrive in the market [3]. These first color PM-LCDs used the concept of sub-pixels – three of them, actually – to make up a pixel capable of displaying several and then many colors. By using a red, green and blue sub-pixel, coupled with independent grayscale data to each, colors from white to black and a rainbow in between could be created. For instance, if a sub-pixel was capable of resolving 2^3 or 8 shades of gray, then a pixel made up of three of these could create 2^9 or 512 distinct colors. The color filter limited the gamut or range of color; addressing, multiplexing and backlight intensity limited the contrast and brightness of such a display. As displays grew larger – beyond about 96 × 128 pixels – the desire for higher contrast and faster response time drove an early cross-over to Active Matrix LCD (AM-LCD) drive techniques, again taken from methods used in displays for portable personal computers. The chasm was crossed to AM-LCD in the mid-90s with the introduction of QQVGA format (120 × 160) displays used in all of Japan’s i-Mode mobile handsets. These started at 2^4 × 2^4 × 2^4, or four-bit grayscale per primary, producing 12-bit ‘full color’ images with up to 4096 colors never before seen on volume-production handheld mobile devices. Of course PM-LCD development responded, producing better contrast and more colors with new addressing schemes, but the adoption of color displays in cellular handsets mirrored that of color adoption in laptop PCs, and almost completely killed B/W and grayscale displays in a span of only two to three years. The battle continued with lower power and lower cost PM-LCD versus higher contrast and faster AM-LCD. In spite of incredible improvement in the front-of-screen performance of PM-LCD, and while it maintained its low power advantage, AM-LCD continued to make inroads into the market share as the cost difference compressed and mobile content became more about displaying data and less about pure audio functions.
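The color-count arithmetic in the paragraph above reduces to a one-line calculation. The helper below (Python) assumes only what the text states: each of the three sub-pixels resolves 2^n gray shades, so a pixel can display 2^(3n) colors.

# Colors available from an RGB pixel when each sub-pixel resolves `bits` of grayscale.

def colors_per_pixel(bits_per_subpixel: int) -> int:
    shades = 2 ** bits_per_subpixel   # gray shades per sub-pixel
    return shades ** 3                # three independent sub-pixels (R, G, B)

if __name__ == "__main__":
    print(colors_per_pixel(3))   # 512  -- the 3-bit example in the text
    print(colors_per_pixel(4))   # 4096 -- the 12-bit 'full color' i-Mode displays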
By 2001, the use of PM-LCD in mobile handsets had peaked, and it continues to decline today. While the adoption of AM-LCD increased, rapid innovations fragmented matrix design and drive techniques. Amorphous silicon (a-Si) thin film transistor (TFT) glass matrix design has dominated the AM-LCD volume, but in the past couple of years (2005–2007) yields on polycrystalline silicon (p-Si) TFT arrays have improved to the point that still faster speed and a larger aperture ratio¹ are possible as these displays have become cost-competitive.
¹ The ratio of the area within a pixel allowing light to pass versus the area used for the control circuits and routing, which is opaque.
Figure 9.1 A modern p-Si display with compact component positioning, showing the approximate display driver location (COG), power conversion passives, system connector (under the EMI shield) and separate backlight power leads within a typical p-Si-TFT display electrical system.
p-Si TFTs offer other benefits to mobile system designers, such as a reduced bezel or non-display ledge around the active area of the display, a reduced number of interconnects offering the potential for more reliable displays, and lower power consumption, as the larger aperture offers the ability to reduce the backlight power while maintaining brightness equal to a-Si-TFT competitors.
9.3 Requirements
If small-format mobile displays use essentially the same PM and AM LCD array technology as displays used for laptop PCs, then it would make sense that the same functional partitioning, interface and drive techniques would be used. However, small displays differ significantly by these measures. Why is that? To answer the question we must look back at how the end systems differ in their requirements. While it is true that laptop PC designers strive mightily for longer battery life, the different usage models, much larger displays and batteries produce a different set of answers when asking about the optimal way to partition the display drive electronics, interface choices and drive schemes. Probably the biggest difference is the sheer number of rows and columns in a typical laptop display. For instance, a WSXGA+ display contains 1,680 × 1,050 pixels, which requires 1,680 × 3, or 5,040, columns, while today’s typical Smartphone screen size of QVGA is just 320 × 240 pixels, and is sometimes driven in the portrait direction to minimize the number of columns to 240 × 3, or 720. This 7x difference in sub-pixel columns drives the choice to address the display with separate column driver chips and a data routing timing controller in the case of a laptop, while the 720 columns can be driven from one chip for a Smartphone. Even this number of outputs is reduced to 1/3 (or back to 240) when using p-Si-TFT displays that contain on-glass multiplexers. Of course, this architectural cross-point is continually moving up in size as p-Si-TFT multiplex ratios increase and semiconductor process geometries decrease. Already, single-chip VGA (640 × 480) drivers are available.
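The column-count comparison above is just a few multiplications. The sketch below (Python) works through the same numbers, including the effect of the 3:1 on-glass multiplexers mentioned for p-Si-TFT panels.

# Driver output counts for the display formats discussed in the text.

def column_outputs(width_px: int, height_px: int, portrait_drive: bool = False,
                   on_glass_mux: int = 1) -> int:
    """Sub-pixel column outputs a driver must supply (3 sub-pixels per pixel)."""
    columns = height_px if portrait_drive else width_px
    return columns * 3 // on_glass_mux

if __name__ == "__main__":
    print(column_outputs(1680, 1050))                      # WSXGA+ laptop panel: 5040
    print(column_outputs(320, 240, portrait_drive=True))   # QVGA driven in portrait: 720
    print(column_outputs(320, 240, portrait_drive=True,
                         on_glass_mux=3))                  # with a 3:1 p-Si mux: 240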
Historically, another major difference between PC and mobile handset systems has been the amount of main CPU MIPS and specialized display processing available. As recently as five years ago, the majority of cellular handsets contained only one CPU, and this was almost exclusively devoted to maintaining the tight timing requirements of the wide area wireless link. The processor could not be interrupted to refresh the display, and since the display did not change quickly in a phone anyway, displays with full frame memory or buffers were a requirement. The processor could update the display when it had ‘spare’ cycles, completely asynchronously to the timing requirements of display refresh. Another advantage was that the display interface consumed essentially zero power (only CMOS leakage levels) when not being used. For these reasons, and because the generally small display matrices and color depths only required small amounts of memory, it was cost effective to include a full color depth frame buffer in mobile handset displays. However, this aspect is quickly changing as well. In the past couple of years, as more data content (web pages, photo albums, movies, TV, PIM functions and even traditional PC applications) has migrated to handheld devices, displays have been getting larger and able to display more colors. This makes providing a full frame buffer much more expensive, especially as display driver semiconductor process technology does not shrink as rapidly as pure RAM processes. This fact, coupled with the lower cost of providing specialized video and display processors, is driving a set of functional partition changes at the system level. Since the display no longer contains the frame memory, high speed serial interfaces (typically greater than 200 Mb/s per data lane) are being used to lower the power, size and EMI generated by this always-on data link. Since the link is able to convey packetized data, other signaling is possible over the link, further minimizing the number of wires routed in a mobile terminal. This opens the possibility of integrating functions like touch input, backlight control and even audio functions into what had been a sub-system concerned only with the LCD matrix itself.
9.4 Packaging Techniques
In the search for the optimal balance of size, reliability, power dissipation and cost, numerous IC packaging techniques have been used with small-format displays. Some of the first LCD displays used in mobile handsets used conventional plastic encapsulated ICs soldered to the main PCB (also called the printed wiring board, or PWB). The outputs were routed to gold plated pads on the PCB and then an elastomeric connector was used to form a low impedance path to the indium tin oxide (ITO) plated on the glass substrate of the display. An elastomeric connector was constructed with thin layers of insulator and a conductive, flexible rubberized plastic compound. A compression fit between the PCB and glass substrate connected the drive signals from the IC to the LCD matrix. These connectors would sometimes fail in very cold temperatures or with age as they lost their flexibility. A much more reliable interconnect technique uses flexible PCB material (a polyimide) that can withstand the temperatures of soldering. It is used with almost all display interconnects today. The drive IC mounting techniques generally fall into one of two categories: flex mounted or glass mounted. When flex mounted (called chip-on-flex or COF), driver chips can be packaged conventionally with plastic encapsulation in any of a number of configurations. Although almost any plastic package can be mounted to polyimide flexible substrates, edge-mounted pads are preferred because of the flexibility. This aids inspection and increases reliability, as open connections are difficult to identify and rework with a ball grid array (BGA) or any array-style package on a flexible substrate. Examples of these ‘edge soldered’ packages include older quad flat pack (QFP) and more modern leadless leadframe (LLP) style packages. The functions that tend to be more cost effective in these packages include integrated drivers for the smaller matrices used for sub-displays (secondary caller ID displays on mobile handsets) or subsets of the functionality of fully integrated display drivers, such as multi-output power supplies, EEPROM for configuration and identification data, and high speed data interfaces. Over the last several years, mainstream AMLCD displays for mobile terminals have moved to single chip solutions. These are almost exclusively glass mounted, or ‘chip-on-glass’ (COG). This mounting technique had been refined on larger format displays in the mid-90s, yet there it has fallen out of favor as cheaper alternatives with smaller bezel requirements evolved. Yet the methods have migrated largely unchanged to the smaller displays, perhaps because the higher costs of this technique represent a smaller percentage of the overall display cost, since only one driver chip needs to be mounted.
Figure 9.2 Chip-on-Glass (COG) assembly techniques use many types of conductive spheres and carrier adhesives.
First, the semiconductor die is gold bumped, tested and shipped in trays for assembly. At the LCM maker, the glass substrate is plated conventionally with indium tin oxide (ITO), the semi-transparent conductor used for much of the routing of signals within the display glass. (Aluminum and copper are also used.) Next, in the most common process, a rather special anisotropic conductive film (ACF – a film that has a heterogeneous construction) is applied between the two (see ‘1’ in Figure 9.2). It performs the dual function of mechanically stabilizing the die and forming a conductive link between the gold bumps and the glass conductors. The carrier material that forms the ‘glue’ carries numerous small conductive spheres, sometimes surrounded by an insulating layer. They are made from a latex or polymer material that is typically plated with gold. The density of spheres in suspension is critical: enough conductive spheres must be present so that more than one is caught between the ITO and the gold bump on the die, yet too many will cause shorts between pads. This is why an insulating material is often applied to the exterior of the spheres before they are suspended in the epoxy film or paste. As a mounting chuck applies heat and presses the die to the glass, these insulating layers break (or non-insulated spheres are simply crushed), but only under the gold bumps. The conductive plating of the crushed spheres completes the circuit and forms a conductive cylinder between the bump and the glass. With the right ‘recipe’ this works quite reliably, and perhaps the only negative side effect is the relatively high resistance of the resulting connection and the wide tolerance on this resistance. Depending on the conductive pad area, the resistance can be as low as 1 ohm, but with the very tight pad pitch (40 µm) required to minimize die size, the resistances are more often a minimum of 10 ohms and can range as high as 50 ohms. When compared with conventional soldered interconnects, these resistances are quite high, but as most of the inputs and outputs from the driver are high impedance anyway, these additional resistances have been relatively innocuous. The most troublesome area for this type of bond has been in moving the power supplies onto the driver chip. Here, small changes in resistance have significantly degraded power supply efficiency. Double, triple and quad bonds have been tried to mitigate this problem, but there are diminishing returns. See Section 9.6.6.4 on display driver power supplies for more detail.
9.5 Passive Matrix LCD
Comprising the simplest equivalent circuit for an LCD – an array of capacitors between consecutive rows and columns in the display matrix – the passive matrix type LCD was the first to ship in high volume for mobile applications. It did so for several reasons, perhaps most importantly the higher manufacturing yields for the simple pixel circuit. Controlling the brightness and the color could not be easier. By varying the voltage on the capacitor, the twist of the LC material caused more or less light to be blocked, and thus a visual grayscale was created. By combining the three sub-pixels of red, green and blue (for more background on alternatives, see Section 9.6.2, Typical LCD Matrix), theoretically any color could be created.
Yet today the market share of PMLCD for mobile applications continues to decline, and market researchers are predicting the death of monochrome STN (MSTN); color STN (CSTN) has already been almost completely replaced by AMLCD for main displays, and continues to decline from a large base in secondary or sub-displays [4]. In large measure this is ultimately due to costs, but size and performance also play substantial roles. Electrically, STN display drivers have a tough job getting the right voltage onto each pixel in an array, as they must deal with high resistances that vary greatly from top to bottom and right to left. Scanning speed has always been a difficult parameter to maintain, especially as the matrix size increases. With shorter and shorter dwell time per pixel, higher voltages and currents were needed to charge the capacitor correctly. With array sizes at VGA (640 × 480) and smaller, STN enjoyed a substantial power consumption advantage over AMLCD. To improve speed in CSTN displays that moved from VGA to SVGA (800 × 600) for notebook computers, display makers divided the array in half, driving each half from its respective edge to gain a 2X factor in scanning speed. These Dual Scan STN (DS-STN) displays reigned supreme in notebook computers as color was introduced for the first time. Yet, in the span of just a couple of years from 1994 to 1996, AMLCD overtook PMLCD in notebook computers. In the move to XGA (1024 × 768) and higher color depth (moving beyond 12 bpp), the STN array proved too slow to maintain sufficient contrast. (Response time was also very slow, but as video content was not a factor in computers at the time, this was not a reason for the displacement.) However, just a few years earlier, in 1992, a unique scanning method was proposed [5] that over a short period of time became known generically as Multi-line Addressing (MLA). Perhaps too late for large format LCD, it was applied approximately seven years later to smaller displays with great success. As scanning one line at a time was too slow, multiple lines could be addressed through matrix processing of the image data. This technique required N+1 different voltages to be driven for any set of N lines. Driver complexity increased significantly, but as the drivers were dominated by the output circuits and contained relatively little digital circuitry, this was absorbed without substantial cost penalty. This technique benefited the two most important deficiencies of CSTN, namely the contrast and response time. By this time, displays that could reproduce video were becoming more important, and for a time MLA-CSTN displays made inroads against AMLCD for mobile devices. As we approach the situation today, however, CSTN costs are declining at a slower pace compared with both a-Si and p-Si AMLCD. Power consumption has always been dominated by backlight power, so this is becoming less of a differentiator as well. Additionally, although MLA improved the contrast and response time and eased the move to 16-bit color (64K colors), noticeable differences still existed between the two technologies in these areas. Now, with the mobile display volume moving beyond quarter common interchange format plus (QCIF+, or 176 × 220) and color depth to 18 bpp (256K colors), the performance differences are widening. As CSTN PMLCD continues to decline in importance, new designs are focused mainly on different AMLCD techniques.
Yet see Section 9.7.2.1, later in this chapter, for a surprising resurgence of MLA concepts.
9.6 Active Matrix LCD Operation
9.6.1 Generalized AM-LCD System
All AM-LCD displays can generally be broken into the blocks shown in Figure 9.3, below. The frame memory is the only block or function that may not be present. Let’s examine each block, starting with the LCD matrix – the circuit on the glass substrate [6].
Figure 9.3 The major functions within an Active Matrix LCD: grayscale data input, frame memory, timing controller, column drivers, row driver, Vcom buffer, gamma DAC reference, DC/DC and power conversion, and the light source (W-LED or RGB-LED) with its controller/driver, all serving the display.
9.6.2 Typical LCD Matrix
Computer graphics and, by extension, data-centric handheld mobile devices use square-pixel LCD matrices. Other arrangements, such as an RGB triangle (delta), are used in consumer camcorders and some digital camera displays for their reportedly superior performance with images and video, especially when larger pixel pitches are used. More recent advancements to increase brightness, such as the Clairvoyante red, green, blue, white matrix called PenTile™, and to increase color gamut, such as using two green or red sub-pixels, are making inroads against the traditional RGB stripe. However, these advancements continue to use square pixel arrangements to ensure backward compatibility with graphic and video processing algorithms which assume this.
9.6.3 Active Matrix LCD Array
The active matrix LCD (AMLCD) is distinguished from its passive matrix counterpart by building an active element or switch at the intersection of each row and column line within the LCD matrix itself.
Figure 9.4 Colors are created by the eye’s ability to mix different percentages of the red, green and blue stripes from a single square pixel.
Figure 9.5 The single TFT FET passes charge to the capacitor in parallel with the LC cell.
In some instances this has been as simple as a two-terminal diode, but volume solutions today use a three-terminal FET with the source connected to the column line, the gate to the row line, and the drain to the plate of the LC cell and a parallel holding capacitor.
9.6.4 Poly-Silicon LCD Array
The p-Si-TFT array in volume production today locates an analog multiplexer on the glass at the top of every column. With a typical 3:1 ratio, this reduces the number of column connections and DACs within the source (column) driver by this same factor of three. Different multiplex ratios are used – expect this to increase over time!
9.6.5 Row Driver Operation
The row driver is perhaps the easiest element within an LCD display to understand. At its core, it is nothing more than a shift register that walks a ‘1’, or active level, along the axis that contains the gate lines of the TFT array.
Figure 9.6 The p-Si TFT array includes a multiplexer to reduce interconnect and Display Driver complexity. It also integrates the complete row driver function.
Figure 9.7 Rotation by 90° permits fewer column DACs to be used for an equal X × Y pixel format.
Due to the voltage requirements of the TFT, it also contains a level shifter to produce the correct voltage to turn the row of TFTs on. Typically, a-Si TFT LCDs locate this function within the display driver and p-Si TFT LCDs locate it on the glass substrate, but the trend is to build this circuit into the glass in both types of AM-LCD. When the gate driver turns on a row of TFTs, the R-C time constant of the wire and the propagation delay of the signal mean that TFTs near the gate driver turn on before TFTs at the end of the row. The Output Enable (OE) off-width must ensure that all pixels in the row are allowed to fully charge, but also ensure that all TFTs are fully off when the next row is enabled. The family of curves for the entire row is illustrated in Figure 9.9. As with most electronic circuits, if they worked along their first-order principles, the result would be much simpler systems! The first non-ideal element we will examine is the parasitic gate-drain capacitor. This is formed from the crossing of the row and column conductors. Its effect is to remove some of the charge from the Clc and Cstore capacitors, raising the gate line slightly (Figure 9.10) and reducing the gray level voltage on Clc. The typical way of compensating for this downward shift toward the most negative side of the gate voltage is to shift Vcom an equal amount, thus gaining back the intended voltage on Clc. To model this offset we look back to the circuit on the glass and the voltages driving it. Unfortunately, these errors are visible to the human eye, and therefore must be compensated for. To a first order of approximation, this delta V can be calculated by estimating the values of the various capacitors within the on-glass circuit and multiplying the capacitor ratio by Vcom.
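A first-order estimate of this ‘delta V’ follows directly from charge sharing across the capacitors named above. The sketch below (Python) is a generic illustration rather than a formula taken from this chapter: it multiplies the Cgd/(Cgd + Clc + Cstore) ratio by whichever voltage swing is taken as the relevant one (the text uses Vcom; the gate-drive swing is another common convention), and all component values are invented examples.

# First-order pixel 'kickback' estimate from the parasitic gate-drain capacitor.
# All component values below are illustrative only.

def delta_v(c_gd_pf: float, c_lc_pf: float, c_store_pf: float,
            voltage_swing_v: float) -> float:
    """Charge-sharing estimate: the capacitor ratio times the relevant voltage swing."""
    ratio = c_gd_pf / (c_gd_pf + c_lc_pf + c_store_pf)
    return ratio * voltage_swing_v

if __name__ == "__main__":
    # e.g. Cgd = 5 fF, Clc = 150 fF, Cstore = 300 fF and a 25 V swing (assumed values)
    print(f"delta V ~= {delta_v(0.005, 0.150, 0.300, 25.0) * 1000:.0f} mV")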
Figure 9.8 The row driver is a shift register that turns the TFTs on in successive rows.
Figure 9.9 The Output Enable timing is critical to maximize contrast and eliminate crosstalk.
Figure 9.10 The parasitic capacitor Cgd steals charge from the Clc, Cstore pair.
Figure 9.11 The small charge taken from Clc by the parasitic Cgd results in a pixel error voltage ‘delta V’.
Figure 9.12 As the gate voltage turns off the TFT, it can be removed from the equivalent circuit.
Compensation can then be developed and applied inside the column driver, as shown in Figure 9.13, below. To make matters more difficult, the capacitance of the LC cell (Clc) is grayscale (or voltage) dependent. Since the applied gray level is known to the column driver, this dependency can be taken into account.
9.6.6 The AM-LCD Driver
Let’s turn our attention now to another key requirement for any LCD (active or passive): the circuitry that generates the analog voltage on VLC. As we will see, it is not as simple as translating a digital representation of the gray level to an analog voltage. Although that is the core function, the physics of the LC display mean that creating a long-lived and high-quality display requires a bit more complexity in the implementation. As we have discussed, gray level voltages must be alternately applied positive-negative and negative-positive with respect to the two plates of the LC capacitor. This is necessary to avoid an image sticking or image retention problem that would occur if only a single polarity were applied. If only a DC drive (with the different gray levels) were used, any impurities in the LC would electroplate to one side of the capacitor, altering its optical properties and becoming visible as slight but constant gray level differences between pixels. AC drive keeps any impurities in suspension and prevents this phenomenon from causing image retention.
Figure 9.13 Grayscale dependent offset is applied inside the column driver.
Figure 9.14 Light output versus applied voltage (the gamma curve) for a normally white twisted nematic LC.
The curves shown in Figure 9.14 also illustrate the nonlinear relationship between the gray level voltage and the transparency or light output (when a transmissive display is constructed and a backlight is used). In a normally white, twisted nematic type LCD, low voltages allow more light to pass, but as the voltage is increased the LC twists and changes the polarization of the light so that it is blocked by the external polarizer film. This nonlinear curve is called the gamma curve. Without getting into a discussion of gamma itself, we will simply say that different display types (e.g. cathode ray tube, plasma phosphors, etc.) have different intrinsic color responses, or gamma. Much effort goes into matching the source material (the color data driven to a display) to the particular gamma response of that display. What can be learned from these curves? First, observe that relatively large voltage changes at the extreme ends of the curves cause relatively small changes in gray level. One may conclude, therefore, that errors in offset and absolute value for nearly black or nearly white pixels (or saturated red, green or blue sub-pixels) matter less than errors in the middle part of the curve, where very small changes in voltage make for a large difference in the transmission of light. Do the positive and negative (inverse) curves match exactly? Unfortunately they do not, and any mismatch results in flicker. The gamma curve response is where the nonlinear offset (error voltage) discussed above is compensated. The gray level dependent error can be expressed as an offset to the positive or negative gamma curve. Column drivers typically have a programmable way of handling this offset in order to match the response of the glass to which they are attached. Let’s look now at how, in a very practical sense, column driver circuitry achieves these non-linear and inverse drives to the LCD sub-pixel.
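The claim that mid-curve errors matter most can be made concrete with a toy model. The sketch below (Python) uses a logistic, S-shaped function merely as a stand-in for the normally white TN transmission curve of Figure 9.14 – it is not the real electro-optic response – and compares the transmission change caused by the same 10 mV error near the ends and in the middle of the curve.

# Toy model of a normally white TN transmission curve (logistic stand-in, not real LC data).
import math

V_MID, STEEPNESS = 2.5, 3.0   # assumed curve parameters

def transmission(v: float) -> float:
    """Relative light transmission (1 = fully white) versus applied RMS voltage."""
    return 1.0 / (1.0 + math.exp(STEEPNESS * (v - V_MID)))

def sensitivity(v: float, dv: float = 0.01) -> float:
    """Transmission change caused by a small voltage error dv at operating point v."""
    return abs(transmission(v + dv) - transmission(v))

if __name__ == "__main__":
    for v in (0.5, 2.5, 4.5):   # near-white end, mid-gray, near-black end
        print(f"V = {v:.1f} V: delta T for a 10 mV error = {sensitivity(v):.5f}")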
9.6.6.1 DAC Architectures
During the introduction of color AMLCD, when color depths were lower, building 8 or 16 references, or ‘buffer amplifiers’, as a discrete external device was not too difficult. These provided for 512 or 4096 color displays respectively (e.g. 4-bit grayscale on each of three primary colors yields 2^12 colors). These external references were tuned to the desired gamma curve (in piecewise linear fashion), fed into a column driver and connected directly via an analog bus to an analog multiplexer for each column. Thus the DAC was really a big analog multi-pole switch, with the digital grayscale data selecting one of N voltages representing the voltage corresponding to that digital value.
Although some mobile AMLCD displays were built with 12-bit color depths, high volume was first reached with 16-bit (64K color) displays in a 5-6-5 red, green and blue arrangement. This higher color depth, along with the more severe space constraints of mobile displays, forced the use of internal references from the beginning. Up through 6 bits of accuracy, the resistor DAC type column driver has been the most popular. Between VLC(max) and VLC(min), a series string of resistors is assembled. If the parasitic CGD is compensated in the gamma curve, a separate string can be used with slightly different resistor values. If the compensation is elsewhere, the same string can be used and the voltages on either end inverted during each inversion phase of the pixel. All 64 voltages must be delivered to each decoder in each column. Commonly three sub-pixels make up a pixel, and we conclude then that 2^6 × 2^6 × 2^6 = 2^18, or 256K, colors are possible for each pixel. Since including 64 different analog buffers (×2 for the inverse curve) would result in a very large die area, a piece-wise approximation of the gamma curve is typically implemented. More fixed references are included toward the middle of the curve, where accuracy is more important. The number of fixed references varies, but by choosing 4, 6 or 8 points, resistor dividers between them can approximate the curve shape and fill in the gaps, resulting in 64 different values. For the inverse curve, and to avoid another resistor ladder, the voltages at the end points of the ladder are swapped. However, as we concluded earlier, the curve must be altered slightly to compensate for the VLC offset error. This may be done by using different reference values within the ladder for each phase, or more commonly by adjusting VCOM when using row or column inversion techniques. The digital gray level data coming into the column driver becomes essentially an address to a large data selector, an analog demultiplexer, or decoder (see Figure 9.15). Data must come into a holding register first because each decoder must be updated at the same time, when the load output is asserted in the figure. Finally, a unity buffer is usually included before driving the column outputs, to isolate the column loads from the gray level references. Care must be taken that the input loads of these buffers do not influence the accuracy of the references. The error in the reference when only one output uses it, and when all outputs use the same value, should be within acceptable limits. For more on this topic, see Section 9.6.6.3, Moving to 16.8 Million Colors.
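The piece-wise approximation described above can be sketched in a few lines. Assuming, as the text describes, a handful of tuned reference taps with resistor dividers filling in the gaps, the code below (Python) interpolates the 64 gray-level voltages from a small set of (code, voltage) anchor points; the anchor values themselves are invented for illustration.

# Piecewise-linear approximation of a 6-bit gamma reference ladder.
# Anchor points (gray code, voltage) are illustrative, not from a real panel.

ANCHORS = [(0, 4.6), (8, 4.2), (24, 3.1), (40, 2.0), (56, 1.1), (63, 0.6)]

def gamma_tap(code: int) -> float:
    """Voltage for a 6-bit gray code, linearly interpolated between anchor points."""
    if not 0 <= code <= 63:
        raise ValueError("6-bit code expected")
    for (c0, v0), (c1, v1) in zip(ANCHORS, ANCHORS[1:]):
        if c0 <= code <= c1:
            return v0 + (v1 - v0) * (code - c0) / (c1 - c0)
    raise AssertionError("unreachable for valid codes")

if __name__ == "__main__":
    ladder = [gamma_tap(c) for c in range(64)]   # the 64 voltages fed to each column decoder
    print(ladder[0], ladder[32], ladder[63])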
Figure 9.15 Resistor DAC Column Driver Architecture.
Figure 9.16 An AC VCOM panel drive architecture.
9.6.6.2 Inversion Methods
Perhaps the most straightforward method of driving opposite polarities onto CLC is to fix the common plate of the capacitor and drive the top plate in opposite directions in each phase. This is called ‘direct drive architecture’. One historic disadvantage of this method is that the column driver outputs must be capable of the full peak-to-peak CLC voltage – both phases summed. Due to process limitations, the 20 Vpp voltage tolerance has been costly. A more common architecture in small-format displays is an AC VCOM panel drive architecture (see Figure 9.16). By moving the CLC common plate in opposite drive phases, only half the Vpp CLC voltage is presented at any one time to the column driver outputs. This method has allowed lower cost semiconductor processes to be used. However, each method has implications for the polarity reversal methods it is compatible with. An AC VCOM architecture is compatible with frame and row inversion, as illustrated in Figure 9.17. Frame inversion has a low power advantage over all alternatives.
Figure 9.17 Frame and row (line) inversion methods.
Since the panel’s polarity is switched at the relatively slow frame rate, the power to charge and discharge the panel capacitances is minimized. But, in the ‘no free lunch’ department, the method is very sensitive to flicker due to the slight mismatches in the two gamma curves. If the transmission of light through the panel on each polarity does not exactly match, the eye can pick this up as flicker. Exacerbating this problem are the possible beat frequencies created with fluorescent lights at 50 or 60 Hz. The method is also sensitive to both horizontal and vertical crosstalk. Crosstalk is the optical effect of blurring a high contrast edge, and is caused by charge coupling from one column (vertical crosstalk) or row (horizontal crosstalk). A common test for this phenomenon is to locate a white square in a black field, or vice versa, and check for smearing of the edges. To counteract both of these negatives, row or line inversion is often used with AC VCOM panels. It spatially averages the gamma curve mismatches on opposite polarities, and virtually eliminates vertical crosstalk. It is, however, still susceptible to horizontal crosstalk, and it consumes more power due to the polarity reversal at the horizontal line rate. Can we do better? Definitely. If a direct drive panel architecture is used, abandoning the advantage of limited Vpp on the column drive outputs, then both column and dot inversion methods are possible. Column inversion requires less power than row inversion since the panel common plate is fixed. Flicker is eliminated due to the spatial averaging of the opposite polarity on each column. This very simple inversion method also eliminates horizontal crosstalk, yet is susceptible to vertical crosstalk. From a visual quality standpoint, nothing beats dot (sub-pixel) inversion, but as one may expect, it comes with power, complexity and cost penalties. By combining the row and column inversion methods, the crosstalk in both directions is minimized and spatial averaging removes the perception of flicker. However, the driver must keep both gamma curves ‘active’ at the same time and handle the full Vpp CLC voltages. From an implementation standpoint, this driver is also the most complex.
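The four schemes are easiest to compare as polarity maps. The sketch below (Python) generates, for a small array and a single frame, the +/− pattern each method applies; it is a purely illustrative model of the spatial patterns described above, not driver code.

# Polarity maps (+ / -) for the common LCD inversion schemes, for one frame.

def polarity(method: str, row: int, col: int, frame: int) -> int:
    """Drive polarity (+1 or -1) of the pixel at (row, col) during the given frame."""
    base = 1 if frame % 2 == 0 else -1
    if method == "frame":
        return base
    if method == "row":
        return base * (1 if row % 2 == 0 else -1)
    if method == "column":
        return base * (1 if col % 2 == 0 else -1)
    if method == "dot":
        return base * (1 if (row + col) % 2 == 0 else -1)
    raise ValueError(method)

if __name__ == "__main__":
    for method in ("frame", "row", "column", "dot"):
        print(method)
        for r in range(4):
            print(" ".join("+" if polarity(method, r, c, frame=0) > 0 else "-"
                           for c in range(8)))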
9.6.6.3 Moving to 16.8M Colors
Although spatial and temporal dithering demonstrate viable quality of 24-bit source material on 18-bit displays at similar power levels, a full 24-bit data path through to 8-bit-per-primary DACs will grow in popularity as more and more high quality video content is displayed on handheld devices.
Figure 9.18 Column and pixel (or dot) inversion methods.
Figure 9.19 Each level of a 6-bit R-string DAC can be expanded by moving a capacitive divider circuit to the appropriate level, adding four more gray levels. (The figure shows a portion of the R-string, the selector switch that moves the expander circuit to the appropriate resistor, and the final gray level voltage output.)
Just as with large format LCD TV displays that transitioned in volume to 24-bit at the turn of the century (and are moving to 30-bit and beyond now), the fundamental design challenges remain the same, yet there is additional concern over power consumption and cost for integrated mobile display drivers. Looking forward, there are two methods, each with its own trade-offs. First, the RDAC can simply be expanded to 256 levels. Doing this directly, it is extremely difficult to maintain the sub-1 mV accuracy necessary for the full 256 levels to be observed. (For 3 V LC material, 10% p-p accuracy per gray level is 1 mV; for 5 V LC material, it is 2 mV.) One technique being used is to build a single ‘expander’ divider that splits each of the existing 64 R-string levels into four new levels (the idea being 2^2 × 2^6 = 2^8). This expander circuit is then moved via a switch matrix to any level that needs expansion, depending on the input data gray level. So as not to change the total resistance of the 6-bit string when the expander is attached, a capacitor divider is used instead of resistors (see Figure 9.19, above). Another DAC architecture used in 24-bit and above column drivers is the algorithmic or cyclic DAC. The technique here is to successively build the correct voltage on a storage capacitor as the DAC goes through a cycle for each bit in the gray level word [7]. After the voltage on the storage capacitor C2 (see Figure 9.20) is zeroed, the LSB selects the state of SW1: to VREF if a ‘1’ and to ground if a ‘0’.
Figure 9.20 The Cyclic DAC successively builds the proper voltage on C2 as the selector switch moves through the gray level data from LSB to MSB [7].
Figure 9.21 Small format display drivers typically have integrated switched capacitor boost and invert functions coupled with LDOs to create the required supplies.
VREF can be any voltage, but would typically be set to the maximum VLC for the panel. Next, SW2 is closed and VREF appears on C1. Then SW2 is opened and SW1 is free to move on to the next bit. SW3 is then closed and VREF is divided in half [(VREF + 0 V)/2] by the capacitor divider, and saved on C2 by opening SW3. The process repeats for each bit in the word (it would work the same for 6, 8, 10 or 12 bits, simply taking more cycles for greater accuracy), moving toward the MSB, which has the greatest effect due to being last to influence the C2 voltage. Each column or sub-pixel does this in parallel, so while 8 or 10 cycles may be necessary (depending on the gray level bit depth), an entire line time is available for the conversion. It remains to be seen whether this DAC architecture will make the move to mobile display drivers, but it does offer many of the same die area advantages as demonstrated in large format LCD TV drivers.
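The charge-sharing cycle described above is easy to verify with a short simulation. Under idealized assumptions (perfectly matched capacitors, no switch losses), processing the bits LSB first and halving at every step converges to VREF × code / 2^N, with the MSB, applied last, carrying the greatest weight – exactly the behavior the text describes.

# Idealized simulation of the algorithmic (cyclic) DAC described in the text.

def cyclic_dac(code: int, bits: int, vref: float) -> float:
    """Voltage built on the storage capacitor after one cycle per bit, LSB first."""
    v = 0.0                                  # storage capacitor starts zeroed
    for i in range(bits):                    # i = 0 is the LSB
        bit_voltage = vref if (code >> i) & 1 else 0.0
        v = (v + bit_voltage) / 2.0          # equal-capacitor charge sharing
    return v

if __name__ == "__main__":
    VREF, BITS = 5.0, 8
    for code in (0, 1, 128, 200, 255):
        ideal = VREF * code / (1 << BITS)
        print(code, round(cyclic_dac(code, BITS, VREF), 4), round(ideal, 4))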
9.6.6.4 Power Supply Architectures
Small format displays require several regulated voltages, and these have traditionally been supplied via a combination of switched capacitor boost and invert functions combined with low dropout linear regulators (LDOs).
Figure 9.22 Many resistance factors must be taken into account when modeling on-glass power supplies.
Specifically, the VGATE on and off voltages must bracket, or be wider than, the VLC voltages. The supply for the column driver outputs must be virtually noise free and accurate. VCOM, depending on the panel architecture, must sometimes switch rapidly and with higher current than the other supplies. The majority of these supplies are integrated with the display driver circuitry today, but dis-integrated approaches still exist and may offer efficiency advantages. The chief difficulty with the integrated approaches is the relatively high resistance of the interconnects between the driver chip, the external passive components and the supply rails (see Figure 9.22). This additional resistance is especially troublesome with switched capacitor boost supplies, because it adds to the equivalent resistance of the internal switches and reduces the efficiency of the conversion. For displays that can tolerate the space required, separate multi-output supplies are still used in a substantial number of small format displays and can offer approximately 10% efficiency improvements over the integrated alternative. The passive component area and height still exceed the integrated circuit dimensions. Packaged die as thin as 0.4 mm are mounted on the flex tape near the glass interconnect. To both improve display power conversion efficiency and reduce LCM cost, inductive boost architectures are starting to be integrated into small format display drivers. Only recently have inductors become thin enough to make this option viable. The inductor, along with the required resistors and capacitors, is mounted on the flex tape near the glass bond, just as more traditional switched capacitor components are. Use of an initial inductive boost stage gains back much of the efficiency lost to interconnect resistances and also provides a much more consistent conversion efficiency over a wider input voltage range. The wide input range can allow displays to be connected directly to a single-cell Lithium Ion battery – the most common energy source for mobile handsets and handheld portable devices – and thereby reap an additional efficiency improvement. This additional improvement is often not considered in LCM design because it is outside the scope of the display system. In conventional systems, an inductive buck supply is often used to pre-regulate the battery, cleaning the noise from the supply and delivering a constant voltage (typically 1.8 V). Although this supply is quite efficient, even at 85% average efficiency over the input range, the power used by the display is multiplied by this efficiency before being consumed. Thus conventional displays are consuming power at, say, 85% × 70%, or about 60%, overall efficiency, instead of using one conversion at 70% directly from the battery. Of course, an additional consideration when connecting directly to any battery is the power supply rejection ratio (PSRR) in the frequency bands that can produce visual artifacts. This varies widely for every display, but with increased care in this area, lower power consumption can be realized.
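The ‘one conversion versus two’ argument above is simple to quantify. The sketch below (Python) compares the two-stage path (buck pre-regulator followed by the display supply) with a single conversion directly from the battery, using the efficiency figures quoted in the text.

# Cascaded versus direct power conversion efficiency (figures from the text).

def cascaded_efficiency(*stage_efficiencies: float) -> float:
    """Overall efficiency of conversion stages connected in series."""
    eff = 1.0
    for stage in stage_efficiencies:
        eff *= stage
    return eff

if __name__ == "__main__":
    two_stage = cascaded_efficiency(0.85, 0.70)   # buck pre-regulator, then display supply
    direct = cascaded_efficiency(0.70)            # single conversion from the battery
    print(f"two-stage: {two_stage:.1%}, direct: {direct:.1%}")
    for name, eff in (("two-stage", two_stage), ("direct", direct)):
        print(f"{name}: {1.0 / eff:.2f} W drawn from the battery per 1 W delivered")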
Figure 9.23 To gain higher efficiency, some display drivers are moving away from switched capacitor and toward inductive boost architectures for the primary supply. The schematic shows an inductive boost switcher (inductor L1, Schottky diode and external capacitors) followed by LDO and 2×/−1× charge-pump stages that generate the adjustable display supply rails and gamma references.
9.6.7 The System Interfaces
Early displays for mobile devices relied on common low speed serial interfaces such as SPI and I2C, and many small secondary or sub-displays still do. As on-display buffer memory sizes increased for graphic displays, the two common microprocessor bus interfaces were introduced, based on the Intel 8080 (i80) and Motorola 6800 (m68) microprocessors. Amazingly, these are still in use some 25-plus years later, although engineers have taken some license with respect to bus widths; examples are available at 8, 9, 16 and 18 bits. They are commonly used for fully buffered, memory mapped – sometimes called CPU-type or ‘smart’ – displays. As graphic displays became larger still, and became bufferless, the standard raster interface used in notebook computers was introduced, based on an RGB color data bus the width of a full pixel and control signals for the pixel clock (PCLK), horizontal and vertical synchronization (HS, VS) and sometimes a data enable (DE). These bufferless displays are often called ‘dumb’ displays, yet some contain partial frame buffers accessed via a slow serial bus and execute some low-level commands. As displays reach beyond QVGA, several factors converge and multiply the deleterious effects of parallel data interfaces. Most obvious is the increase in PCLK rate, as the number of pixels grows with the square of the linear dimension as the X and Y lengths increase. Color depths are moving beyond 18 bits as devices become more entertainment centric, and novel mechanical designs proliferate as manufacturers seek to engage ever more repeat customers in an upgrade cycle with better ease of use, more comfortable form factors and eye-catching fold-outs. Each of these serves either to increase the data bandwidth to the display or to constrain the dimensions of the physical interconnect. Manufacturers have responded by packing more conductors into smaller spaces, increasing the number of layers in flex interconnects, moving to more expensive micro-coax, and also moving to high speed serial data transfer. These high speed serial interfaces are treated more fully in Chapters 10 and 11.
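The bandwidth pressure described above is easy to put into numbers. The sketch below (Python) computes the raw pixel-data rate for a few formats at an assumed 60 Hz refresh; it ignores blanking and protocol overhead, so real links need somewhat more headroom.

# Raw display data rate: pixels x bits-per-pixel x refresh rate (blanking ignored).

def data_rate_mbps(width: int, height: int, bpp: int, refresh_hz: float = 60.0) -> float:
    return width * height * bpp * refresh_hz / 1e6

if __name__ == "__main__":
    for name, w, h, bpp in (("QVGA, 18 bpp", 320, 240, 18),
                            ("VGA, 18 bpp", 640, 480, 18),
                            ("VGA, 24 bpp", 640, 480, 24)):
        rate = data_rate_mbps(w, h, bpp)
        lanes = rate / 200.0   # lanes needed if each serial lane carries ~200 Mb/s
        print(f"{name}: {rate:.0f} Mb/s (~{lanes:.1f} lanes at 200 Mb/s)")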
Figure 9.24 Wide parallel interconnects add cost, increase power consumption and decrease reliability when compared to their high speed serial alternatives.
9.6.8 Frame Memory and Buffer Architecture
It’s worthwhile examining the system implications of the so-called smart and dumb display alternatives, as they differ not just in the type of system interface, but have profound implications for cost, power consumption and even the operating system of the mobile device. The choice of which type to use represents a complex balance at the system level. To help our discussion, some assumptions are useful, but remember – each has exceptions, and particular implementations can vary widely. Furthermore, the operating mode, or use case, of the mobile device alters the display requirements at any instant. Let’s first consider a ‘full operating’ mode that requires the full capabilities of the display system, in terms of format (using all the pixels available) and color depth. Based on this, we assume the display must be refreshed at about 60 Hz, and that the data necessary for this must come from a frame buffer. Now, in Figure 9.25, we show this in one of three locations marked ‘A’, ‘B’ or ‘C’. Position ‘A’ may represent an Applications or Graphics processor that uses a very large external memory for working space, textures, and perhaps multiple frame buffers that may get summed or blended before final rendering to the physical display. Position ‘B’ represents an architecture that is more common to graphics systems highly concerned about power consumption and cost. Both ‘A’ and ‘B’ architectures can be used with so-called ‘dumb’ or bufferless displays. These have typically been RGB parallel interfaces, sometimes with a low speed serial secondary interface. Within the MIPI Display Serial Interface Specification, they are called ‘video mode’ displays. A frame buffer in position ‘C’ represents a so-called ‘smart’ display that fully decouples its refresh rate, necessary for visual quality, from the update rate, or how often the image content changes. Using architecture ‘C’ means that designers have complete freedom to match the content speed to the interface, i.e. a still image just needs to be sent once, while video can be sent at the natural frame rate of the rendering and does not need to be upsampled to a higher rate dictated by the display itself. To put some bounds on power consumption, the figure shows a 1x factor for a host system with an internal frame buffer displaying a still image, and 3x if an external memory buffer is used. The main display interface is given a 3x power consumption level, and the display – significantly – with or without a frame buffer is given a 3x figure.
ADVANCES IN MOBILE DISPLAY DRIVER ELECTRONICS
271
Figure 9.25 Simplified system power consumption versus frame memory location
consumption, it is estimated at a small percentage of the total display driver and glass power.) For those who must attach an absolute number to these, 1x is about 10mW, where this would be fairly large for a QVGA system, but very good for VGA. The backlight power can easily be 10x that needed for the display itself, based on common RGB stripe formats and the need for 200 to 400nits to display low contrast video content with sufficient brightness. So let’s examine the effect of moving the frame buffer from an internal host side position (position ‘B’) into the display (position ‘C’). For a still image, the display signal path consumes less than half the power (3 verses 7 units), if we neglect the backlight power; a significant improvement. However, the difference reduces to only 12% when we include the backlight power estimate. This may still be significant, as every milliwatt reduction lengthens the run time of the device. The final decision is Table 9.1 Relative power consumption estimates for three frame buffer architectures (Reference Figure 9.25 for ‘A’, ‘B’ and ‘C’ cases). Static Image Frame Buffer Location A B C
Table 9.1 Relative power consumption estimates for three frame buffer architectures (refer to Figure 9.25 for the 'A', 'B' and 'C' cases).

Static image
| Frame buffer location | Host | Interface | Display | Subtotal | Backlight | Total |
| A: Host-side external FB | 3 | 3 | 3 | 9 | 30 | 39 |
| B: Host-side internal FB | 1 | 3 | 3 | 7 | 30 | 37 |
| C: Display FB | 0 | 0 | 3 | 3 | 30 | 33 |

Video at 30 fps
| Frame buffer location | Host | Interface | Display | Subtotal | Backlight | Total |
| A: Host-side external FB | 7 | 3 | 3 | 13 | 30 | 43 |
| B: Host-side internal FB | 5 | 3 | 3 | 11 | 30 | 41 |
| C: Display FB | 5 | 2 | 3 | 10 | 30 | 40 |
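The relative figures in Table 9.1 can be checked with a few lines of arithmetic. The sketch below simply sums the unit estimates above (1 unit is roughly 10 mW) and reproduces the approximately 12% (still image) and 3% (video) differences between the host-buffered and display-buffered cases.

```python
# Relative power units from Table 9.1 (1 unit is roughly 10 mW).
BACKLIGHT = 30
static = {"A": (3, 3, 3), "B": (1, 3, 3), "C": (0, 0, 3)}   # (host, interface, display)
video  = {"A": (7, 3, 3), "B": (5, 3, 3), "C": (5, 2, 3)}

def totals(cases):
    """Return {architecture: (signal-path subtotal, total incl. backlight)}."""
    return {k: (sum(v), sum(v) + BACKLIGHT) for k, v in cases.items()}

for label, cases in (("static image", static), ("video at 30 fps", video)):
    t = totals(cases)
    print(label, t)
    b, c = t["B"][1], t["C"][1]
    print(f"  moving the buffer from host (B) into the display (C) "
          f"saves {(b - c) / c:.1%} of the total")
```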
The final decision is likely to be one based on cost, as the amount of memory required for a full frame buffer on larger mobile displays will impact the die area, even with new small-geometry processes. Furthermore, these small processes tend to have a much higher wafer or processing cost than process generations that are more fully depreciated. For a moving image, for instance when decoding video, the differences in power consumption versus buffer placement are reduced. Although having a full frame buffer on the display does fully decouple the display refresh rate (typically 60 fps) from the video update rate (typically 30 fps or less), the reduction in interface power is still a small percentage of the total system power required for the operating mode (a difference of only about 3% between cases 'B' and 'C'). One may accurately conclude that full frame buffers always save some power in use cases that put a display into a fully-on state. However, the amount of savings may be small on a percentage basis and must always be traded off against the cost.

For this reason, designers have looked to partial frame buffers as a middle ground: they achieve most of the power savings, especially in use cases that do not fully utilize the display capabilities, and, significantly, they do not cost as much as full frame buffers. Partial frame buffers are most beneficial in lowering power consumption in use cases that require only limited display function, such as a smaller active area or lower color depth. An example of such a use case is the standby mode of a mobile phone that uses only one display. In this case, a wallpaper with lower color depth significantly reduces the memory requirements. Take a Wide VGA example (about 850 × 480 pixels): with only three bits per pixel, an 8-color mode can be created with only 12.5% of the memory required for a full 24 bpp display. Drivers that can utilize this memory with some versatility can render icon rows at higher color depth, or choose to spread the memory over the entire display area. Some drivers allow a choice of background color around the area refreshed from on-chip memory, so with careful color selection the user would not perceive any difference between a full and a partial buffer with some user interface designs. In each of these use cases, the graphics processing sub-system and the system interface can be shut down, saving considerable power. Future work in minimizing memory requirements in buffered displays will look at compression technologies.
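The Wide VGA arithmetic is worth making explicit. The sketch below works through the example above; 854 × 480 is assumed here as the WVGA pixel format (the text says 'about 850 × 480').

```python
# Partial frame buffer memory for the Wide VGA standby-wallpaper example.
H, V = 854, 480                       # assumed WVGA format ("about 850 x 480")

def frame_buffer_bytes(h, v, bits_per_pixel):
    return h * v * bits_per_pixel // 8

full = frame_buffer_bytes(H, V, 24)   # full-color 24 bpp buffer
low  = frame_buffer_bytes(H, V, 3)    # 3 bpp, 8-color standby mode

print(f"24 bpp buffer: {full / 1024:7.0f} KiB")
print(f" 3 bpp buffer: {low / 1024:7.0f} KiB ({low / full:.1%} of the full buffer)")
```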
9.6.9 Display Lighting and Lighting Control

The vast majority of non-emissive mobile displays use backlighting of some type. With older character-based STN-LCDs, cheap green LEDs were often used. These were supplanted by various colors of electroluminescent panels in the late '90s, and these are still used as keypad lighting on some mobile phone models. Other LED colors were also used in high volume. With the availability of low-cost white LEDs (WLEDs), also in the late '90s, color LCDs ramped to over 90% penetration within five or six years. Although PDA displays at 3.5" and above continued to use Cold Cathode Fluorescent Tubes (CCFT) for some years, by 2005 these had almost disappeared from volume use, being overtaken by four to six WLEDs. Now, WLEDs have virtually 100% market share in mobile display lighting, yet RGB LEDs are on the horizon.

As we have shown, mobile LCD backlight units may consume on the order of 10x the power consumed by the display glass and driver in full operating modes. Purely transmissive displays are not popular in mobile devices because the backlight power required to make them sunlight-readable would be prohibitive, so some amount of transflective mode is usually implemented. However, this has the unfortunate side effect of lowering the aperture ratio and increasing the power required for a given brightness in the transmissive mode. Likewise, purely reflective LCDs are not popular because they require a frontlight diffuser in low-light conditions, which has introduced moiré artifacts, bright spots where the LED junctions are visible, and increased parallax when using touch overlays. The increased popularity of data-centric or entertainment-centric mobile devices requires higher-brightness displays because lower-contrast image content (still and video images) needs more brightness for proper comprehension. (High-contrast image content such as text and line drawings can use lower brightness for equal perceptibility.) System designers are moving from 200 nit (cd/m2) to 400 nit displays for this reason. As one may imagine, all this puts more pressure on the battery and shortens the operating life of the device, so heroic efforts are being made to find methods to reduce this power. In many cases, knowledge of the image data aids control of the backlight brightness. In 2003, the first driver vendor introduced a production display driver that directly controlled the backlight power supply. This was coupled with a built-in contrast detection circuit that allowed autonomous control of brightness based on image contrast.

A recent area of investigation has been a move to combined red-green-blue LEDs to replace the WLEDs in the backlight diffuser. This was developed first in LCD TV backlights to improve the color gamut beyond that achievable with CCFT; today, production LCD TVs routinely achieve >100% of the NTSC and PAL color spaces. While the same benefits are desired for mobile displays, especially those that are image-centric, another implication of the switch may be more compelling: power savings. To understand this, it is useful to refer to a spectrum plot of the WLED versus the transmission characteristics of the typical RGB filters overlaid on the LCD (see Figure 9.26). The light blue plot line shows the common WLED, with a sharp blue spike as the initial blue junction emits directly, followed by the lower and broader re-emission of the yellow phosphor across the green and red spectrum. Unfortunately, the emission of the WLED matches very poorly with the cutoff wavelengths of the color filters. Anywhere the plot shows crosshatching, light, and consequently power, is being lost. To compensate, WLED power must be increased to achieve the required brightness.
Figure 9.26 White LED spectrum mapped over typical RGB LCD filter response [8].
By substituting red, green and blue LEDs, sometimes mounted in the same physical package, for the multiple WLEDs used in the majority of mobile LCDs, meaningful power savings can be achieved. Published empirical results quantifying this are not yet available, but calculated results range from 10% to 20% savings without changing the color filter response, and savings of 20% to 30% should be achievable when the color filters are tuned to the emission spectra of the LEDs. It is rare that a power-saving technique also results in better color performance, but the same wider color gamut and controlled white point that LCD TVs obtain are realized by the mobile display, for better video and still image performance.
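The filter-overlap argument can be illustrated numerically. The sketch below models the WLED and RGB LED emission spectra and the color-filter pass-bands as simple Gaussians and integrates their overlap; every curve here is an invented illustration, not measured data, so only the qualitative conclusion (narrow emitters aligned with the filters waste less light) should be taken from it.

```python
import math

def gauss(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2)

WAVELENGTHS = range(400, 701)   # nm, 1 nm steps

# Illustrative spectra only -- not measured data.
def wled(nm):        # sharp blue junction plus broad yellow-phosphor re-emission
    return 1.0 * gauss(nm, 450, 10) + 0.7 * gauss(nm, 565, 55)

def rgb_led(nm):     # three narrow emitters aligned with the filters
    return gauss(nm, 465, 12) + gauss(nm, 530, 15) + gauss(nm, 625, 12)

def filters(nm):     # assumed RGB color-filter pass-bands, 0.85 peak transmission
    return [0.85 * gauss(nm, mu, s) for mu, s in ((465, 20), (530, 25), (625, 25))]

def panel_efficiency(spectrum):
    emitted = sum(spectrum(nm) for nm in WAVELENGTHS)
    # each sub-pixel filter covers roughly one third of the pixel area
    passed = sum(spectrum(nm) * sum(filters(nm)) / 3 for nm in WAVELENGTHS)
    return passed / emitted

print(f"WLED backlight efficiency    : {panel_efficiency(wled):.1%}")
print(f"RGB LED backlight efficiency : {panel_efficiency(rgb_led):.1%}")
```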
Figure 9.27 Commonly found RGB color space data is used to create the logical pixels with an RGBW matrix.
Beyond ambient light sensing (ALS), which has been used for several years now (yet is still not deployed in the majority of mobile devices), the next frontier for backlight control is active feedback. LED output varies with age and with temperature. Both of these effects can be compensated with closed-loop feedback based on device age (known from an hour-meter counter) and a temperature sensor. Finally, some systems will use sensing of the light spectrum itself to close a loop around backlight white point and brightness, compensating for ambient light, age and temperature within one system.
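Such a compensation loop is straightforward in software terms. The fragment below is a minimal, hypothetical controller that raises the target luminance with ambient light and derates the LED drive for junction temperature and accumulated operating hours; every constant and derating curve is a made-up placeholder rather than a value from any datasheet.

```python
# Hypothetical backlight compensation loop: all constants are illustrative.
def temp_derate(temp_c):
    """LED efficacy falls as the junction heats up (placeholder model)."""
    return max(0.6, 1.0 - 0.004 * max(0.0, temp_c - 25.0))

def age_derate(hours):
    """Gradual lumen depreciation with operating hours (placeholder model)."""
    return max(0.7, 1.0 - hours / 50_000.0)

def ambient_target(lux):
    """Target luminance (cd/m2) rises with ambient illuminance (placeholder)."""
    return min(400.0, 150.0 + 0.5 * lux)

def led_drive(lux, temp_c, hours, nits_at_full_drive=400.0):
    """Return a 0..1 drive level that hits the target despite temperature/age."""
    needed = ambient_target(lux) / nits_at_full_drive
    return min(1.0, needed / (temp_derate(temp_c) * age_derate(hours)))

# Example: moderate ambient light, warm phone, ~2000 hours of use
print(f"drive = {led_drive(lux=200, temp_c=45, hours=2000):.2f}")
```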
9.7 Requirements for Driving Example Emerging Display Technologies

9.7.1 Sub-pixel Rendering Displays

The vast majority of TFT-LCDs create color through the use of three sub-pixels: one each of red, green and blue. Sub-pixel rendering schemes provide benefits either by processing the grayscale data sent to those sub-pixels differently, or by changing the sub-pixel matrix itself. An example of the former is Microsoft's ClearType™, which uses an algorithm to make the rendering of textual information more readable on conventional RGB sub-pixel displays. The most popular sub-pixel rendering scheme that differs from the RGB stripe is the PenTile™ matrix, which consists of a repeating pattern of RGB and white sub-pixels. The particular arrangement of sub-pixels determines the processing algorithm and, in the case of PenTile, the reuse of physical sub-pixels to create logical pixels. On the surface it seems like data expansion is going on here, as a data set comprising three sub-pixels is used to create grayscale data for four, and that would be true if no sub-pixel reuse were going on. The difference is that for the conventional RGB stripe each sub-pixel belongs to only one pixel (a logical and a physical pixel being the same in this case), whereas PenTile creates physical pixels that reuse each sub-pixel in multiple logical pixels. Because of this, there is actually a net decrease, by a factor of one third, in the number of sub-pixels needed for any given pixel format; in other words, only two-thirds the number of sub-pixels is used to create a display of a given format versus the conventional RGB stripe arrangement. This two-thirds factor brings benefits to the entire display signal path. If the sub-pixel rendering logic is located before a serial data interface, that interface can operate with lower bandwidth and power. Likewise, the data driver contains one-third fewer columns, saving power and space there as well. As brightness is enhanced while color gamut is not degraded with the PenTile matrix, the backlight power savings are especially compelling for small-format mobile displays with very high pixel densities.
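The two-thirds factor follows directly from the sub-pixel counts. The sketch below compares an RGB stripe with a sub-pixel-sharing RGBW layout of the kind described above for a QVGA format; the figures are simple proportional estimates based on the two-thirds claim, not measurements of any particular panel.

```python
# Sub-pixel counts for a QVGA (320 x 240) logical format.
H, V = 320, 240
logical_pixels = H * V

rgb_stripe_subpixels = 3 * logical_pixels    # one R, one G, one B per pixel
rgbw_shared_subpixels = 2 * logical_pixels   # two-thirds of the stripe count,
                                             # per the sub-pixel reuse argument

print(f"RGB stripe : {rgb_stripe_subpixels:,} sub-pixels")
print(f"RGBW shared: {rgbw_shared_subpixels:,} sub-pixels "
      f"({rgbw_shared_subpixels / rgb_stripe_subpixels:.0%} of the stripe count)")

# The same factor carries through to column-driver outputs and, if the
# sub-pixel rendering happens before a serial link, to interface bandwidth.
print(f"Column driver outputs: {3 * H} -> {2 * H}")
```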
Table 9.2 Display drivers for the three general areas of OLED displays: PM with current drive, AM with current drive, and AM with voltage drive.

| | Passive Matrix | Active Matrix |
| Current Drive | Used exclusively | Used with simpler AM circuits, including 2T approaches |
| Voltage Drive | N/A | Used with more complex constant current AM circuits |
9.7.2 OLED/OEL

Although the initial report of light emission from organic materials was published in the oft-cited 1987 Applied Physics Letters paper by C. W. Tang and S. A. Van Slyke [27], it is in the last ten years that ever-increasing amounts of investment capital have been applied to creating commercially viable OLED displays. In 2000 the number of OLED-related papers at the SID Symposium was fewer than ten; in 2007, two full sessions were devoted to AMOLED alone, and a total of over 130 papers mentioned OLEDs. Since the first commercial deployment of an OLED display in automotive stereo equipment in 1997, organic (carbon-based) light-emitting materials have received an immense amount of investment that has paid off with commensurate advancements in the technology. Ten years later, we are now seeing volume shipments of active matrix organic light emitting diode (AMOLED) displays from Samsung based on a poly-silicon matrix. However, the breadth of OLED products is amazingly diverse and still advancing rapidly. Due to this technical diversity, no one display matrix or driver architecture dominates in this space. In almost every aspect, including chemistry (e.g. small- or large-molecule, fluorescent or phosphorescent), substrate (e.g. glass, steel or plastic), matrix circuit (e.g. passive, 2T to 6T active), driver type (e.g. current or voltage) and emissive direction (top, bottom, or both!), one will find active investigation or development. We will explore OLED drivers along the two most revealing axes: passive or active matrix, and current or voltage drive. As the name suggests, the fundamental difference between OLED displays and LCDs is that we are driving a diode junction, not a capacitor as in an LCD. Thus, to control grayscale (or more correctly 'brightness', as this is an emissive display), one must control current and not voltage. (The 'on time', or temporal, brightness control applies equally to both.)
9.7.2.1 PMOLED Drivers

A passive matrix array forms an OLED at each intersection of the row and column lines. Applying an appropriate current between row(n) and column(m) will cause that diode to light. To scan the entire array, each row is turned on for the frame time divided by the number of rows in a frame; thus, the larger the array, the less time is available for each row given a constant frame time. To achieve a constant brightness as the array size increases, more and more current must be applied in a shorter amount of time. Even though the threshold voltage of organic electroluminescent (OEL) junctions is just over 6 V [9], drive voltages far beyond this have been applied to achieve sufficient peak current. In each case, however, current must be controlled along the column line for a specific time period. For instance, if a 60 Hz frame rate is used, the per-row time of 16.66 ms divided by the number of rows can be modulated (0% to 100%) for brightness control. Often this is the higher-quality approach, at the expense again of higher peak currents, as some color shift occurs with varying current in OLEDs.
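The scan-time arithmetic is easy to make concrete. The sketch below computes the per-row 'on' time at a 60 Hz frame rate and the factor by which peak current must rise to hold a constant average brightness; the chosen row counts are arbitrary examples.

```python
FRAME_RATE_HZ = 60.0
FRAME_TIME_MS = 1000.0 / FRAME_RATE_HZ   # about 16.66 ms

def row_on_time_ms(rows):
    """Each row is lit for the frame time divided by the number of rows."""
    return FRAME_TIME_MS / rows

def peak_current_factor(rows):
    """Duty cycle is 1/rows, so holding average brightness requires a peak
    current roughly 'rows' times the DC-equivalent value."""
    return rows

for rows in (16, 64, 128, 240):
    print(f"{rows:4d} rows: {row_on_time_ms(rows):6.3f} ms per row, "
          f"peak current ~{peak_current_factor(rows)}x the average")
```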
Figure 9.28 Simple passive matrix OLED matrix Reproduced from [6] by permission of SID.
Beyond a few tens of rows, the currents become high enough to degrade the lifetime of the OLED, so these two factors of short 'on' time and high peak currents have limited the practical size of PMOLED displays. Nevertheless, the market for small-size monochrome or multi-sector color PMOLEDs in MP3 players, cellular phone sub-displays and similar applications resulted in early success for this type of OLED display. Recently, the general concept of multi-line addressing has been applied to PMOLED displays [10]. This technique, dubbed Total Matrix Addressing (TMA™), addresses multiple lines at a time, which increases the 'on' time for each pixel and thus reduces the peak currents necessary to achieve a given brightness level. This in turn increases the lifetime of the matrix. The technique works by preprocessing the image into a horizontal drive matrix and a vertical drive matrix, the product of which results in the appropriate gray level applied to each pixel or sub-pixel. The total scan time becomes proportional to the number of gray levels desired instead of the size of the array, a fundamental and significant improvement that may re-energize the PMOLED market into larger arrays that compete with some active matrix OLED (AMOLED) displays.
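The idea of building the frame from a horizontal and a vertical drive matrix can be shown in a few lines. The sketch below reconstructs a tiny frame as the sum of outer products of row and column drive vectors; the example vectors are arbitrary, and how TMA actually factorizes a real image into these matrices is proprietary to Cambridge Display Technology and is not reproduced here.

```python
# A frame expressed as the product of vertical and horizontal drive matrices:
# each of the k subframes drives all rows at once (multi-line addressing), and
# the gray level at (r, c) accumulates as the sum over subframes of
# row_drive[s][r] * col_drive[s][c].
row_drive = [            # k = 2 subframes, 3 display rows
    [1.0, 0.5, 0.0],
    [0.0, 0.5, 1.0],
]
col_drive = [            # k = 2 subframes, 4 display columns
    [0.2, 0.4, 0.6, 0.8],
    [0.8, 0.6, 0.4, 0.2],
]

rows, cols, k = 3, 4, 2
frame = [[sum(row_drive[s][r] * col_drive[s][c] for s in range(k))
          for c in range(cols)] for r in range(rows)]

for line in frame:
    print(["%.2f" % v for v in line])
# Scan time grows with k (the number of subframes, i.e. gray-level resolution),
# not with the number of rows, which is the improvement described above.
```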
9.7.2.2 AMOLED Drivers

The first active matrix applied to OLEDs was poly-silicon, due to its higher electron mobility and more repeatable threshold voltage (Vth). Initially, the motivation was to minimize the in-pixel circuit, since for bottom-emitting OLEDs any opaque transistors reduce the aperture ratio, just as in LCDs. Like all diodes, OLEDs have a nonlinear voltage transfer curve and are more easily controlled by constant current, so unfortunately the simple one-TFT constant-voltage source used for LCD pixels does not work for OLEDs. A minimum of two transistors was proposed in 1997 [11], but this had severe uniformity problems due to the difficulty of fabricating consistent Vth in p-Si TFTs and to highly variable OLED thresholds that also change over time [see Figure 9.29 (a) and (b)]. To overcome these uniformity problems in OLED forward voltage and TFT thresholds, current mirror approaches for a-Si and p-Si were developed [12]. Two p-Si examples that require constant current drivers are shown in Figure 9.29 (C) and (D): case C is a high-side driver, and case D a low-side driver. In early 2000, a high-side constant current display driver was introduced by Clare Micronix (MXED101), and 2003 saw a low-side constant current driver described by MEI [13]. However, the lower aperture ratio of bottom-emission 4T circuits required a higher current density in the OLED to achieve sufficient brightness, and this higher current density resulted in faster aging of the OLED.
Figure 9.29 Initial 2T AMOLED circuits, and constant current approaches that followed [8].
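To see why the uncompensated 2T pixel of Figure 9.29(a) struggles, the sketch below applies the textbook square-law model of the drive TFT in saturation and shows how a modest threshold-voltage spread turns into a large brightness error; the transconductance, bias and spread values are illustrative assumptions, not p-Si process data.

```python
# Square-law model of the 2T pixel's drive TFT in saturation:
#   I_oled ~ 0.5 * k * (Vgs - Vth)**2
K = 2.0e-5         # A/V^2, illustrative transconductance factor
VGS = 4.0          # V, programmed gate-source voltage (illustrative)
VTH_NOMINAL = 1.5  # V

def oled_current(vth):
    return 0.5 * K * (VGS - vth) ** 2

nominal = oled_current(VTH_NOMINAL)
for delta in (-0.3, 0.0, +0.3):        # +/-0.3 V threshold spread across the panel
    i = oled_current(VTH_NOMINAL + delta)
    print(f"Vth shift {delta:+.1f} V -> current {i * 1e6:6.2f} uA "
          f"({(i / nominal - 1):+.0%} brightness error)")
```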
This faster aging, together with the strong dependence of current-programmed brightness on panel routing resistance and the lack of a single dominant current drive scheme, resulted in low overall adoption of current-drive AMOLED panels. Voltage drive schemes improve the display by making the long column resistances on the panel less important to producing uniform brightness from top to bottom. Early schemes required several control signals, but important progress was made in 2002 and 2003 toward 4T [14] and 6T circuits that minimized these signals and offered the important improvement of voltage drive. Just like current drive circuits, the voltage drive must also compensate for the non-uniformities of the OLED and the TFTs. With bottom-emission OLEDs these structures would result in an unacceptable aperture ratio, especially the 6T structures, but with top-emission construction the aperture ratio becomes less important, as the emission layers are stacked above the active pixel drive elements. Many 6T structures for p-Si have been reported, but the one shown in Figure 9.30, described early in 2003, reduced control signal routing and produced the highest pixel density (186 ppi) reported to date [15].
Figure 9.30 Example of a six transistor (6T) p-Si AMOLED pixel with voltage drive Reproduced from [15] by permission of SID.
As p-Si AMOLEDs move into volume production, more recent development activity has focused on moving to cheaper a-Si substrates. The rapid progression to ever-larger LCD substrates to improve efficiencies and lower costs has left numerous smaller LCD fabrication lines looking for business. Because these lines are inefficient for large panel sizes yet fully depreciated, there is tremendous economic incentive to apply them to smaller displays with as few changes to the large-panel processes as possible.
This will be shown as a driving factor in several of the non-LCD display types described in the following sections, but it is equally applicable to small-format LCDs for mobile devices. With the addition of a hydrogenation step in the construction of a-Si substrates (called a-Si:H), electron mobility has been improved to the point where AMOLED pixel circuits can be driven by higher-voltage, yet cost-effective, display drivers. An early 3T circuit [16] demonstrated promise for the a-Si:H AMOLED, yet voltages were quite high, with the scan (gate) voltage at 20 V and data voltages ranging from turn-on at 6 V to maximum brightness at over 30 V. Although that circuit compensated for Vth drift, newer proposals [17] claim superior performance: this 5T structure speeds charging time, lowers control and data voltages, and compensates for OLED threshold shifts. Yet in spite of somewhat lower gate voltages, it has been observed that the TFTs degrade over time and that some compensation is necessary for this effect as well. The 6T structure in Figure 9.31 (B) not only compensates for Vth shift, but also applies a so-called negative bias annealing to improve the stability of the array over time [18].
Figure 9.31 (A) An a-Si:H 5T structure that speeds charging time and compensates for TFT and OLED threshold shifts [17]. (B) An a-Si:H 6T structure that applies a negative bias annealing for lifetime improvement, in addition to compensating for Vth shifts. Reproduced from [18] by permission of SID.
These newer a-Si:H circuits still require display driver voltages in excess of those for modern p-Si AMOLED pixels, yet they are not so different from the PMLCD and AMLCD driver requirements of only five years ago. Smaller process geometries for the digital component of display drivers make design of the high-voltage module more difficult, yet cost-effective display drivers for these newer a-Si:H structures are possible. More exciting still are the developments being reported in the creation of organic TFT (OTFT) pixel circuits for OLED emitters. If this can be achieved with low-temperature fabrication methods, such as reported in a paper by Sony Corporation at SID in 2007 [19], plastic-substrate flexible displays could be constructed at very low cost. Still, the technology is relatively new: it utilizes an uncompensated 2T pixel circuit [see Figure 9.29(a)] and requires a scan voltage (turning on the gate TFT) of 30 V, a signal voltage (propagating through the gate TFT) of 12 V, and a VCC-to-VCATH drop (through the current source TFT and the OLED) of 20 V. While these voltages are within the domain of older dual-gate-oxide CMOS, they are difficult to achieve cost-effectively by following the fine-line CMOS trends in driver processes for p-Si TFT LCDs. Yet, as more of the p-Si drive circuit and data path migrate to the glass [20], the functional partition of monolithic CMOS circuits will continually change to rebalance the cost optimization scale.
Figure 9.32 BiNem® drive waveforms vary the falling edge of the waveform to produce white (a) or black (b) pixels. Reproduced from [22] by permission of SID.
Choices for semiconductor makers continue to exist at both ends of the integration spectrum: one of up-integration, which subsumes more functions from other areas of the larger system, and the other of minimalism, which aims to cost-optimize a single function to the exclusion of that wider system. Both have merits and risks: the former offers a lower-cost system as long as the functional integration choices are correct, while the latter offers the lowest display cost while ignoring synergies with other closely related functions (e.g. backlight control, touch, light-sensing feedback, and image content dynamics such as response time compensation and contrast enhancement).
9.7.3 Bistable and Electrophoretic Drive

The beauty of a bistable display is that if the displayed image changes slowly, the power consumption can be very, very low. Bistable displays, which hold one of two states, consume power only when a pixel state is changed; no display of this type is commonplace yet. Many applications are foreseen for displays that can either be 'programmed', or preset, with an image needed over a long period of time, or that sit in systems with their own power supply yet change infrequently enough that batteries enable completely new usage models. For example, supermarket price labels normally change infrequently enough for a clerk to change them via a programmer machine.
Figure 9.33 Spheres which either rotate or contain colored pigments are used to display images based on opposite electric field polarity Reproduced from [24] by permission of SID.
Smartcards could also benefit from the extremely thin and flexible properties of recent developments in plastic bistable displays. eBooks and electronic newspapers are also applications that are mainly black and white (hence well suited to a bistable display) and change infrequently enough that a battery-operated device would last many days; however, the relatively long switching time required by bistable displays has proven a challenge in these applications, with page write times for large matrix displays on the order of one second common today. Bistable display technologies span several physical principles, and perhaps the two most promising at the moment are nematic LCDs and an electrophoretic type based on colored spheres.
9.7.3.1 Bistable LCD Drive

Both cholesteric and nematic LCD materials have been used to make bistable displays, and from a driver perspective they have historically required quite high drive voltages to change states. Precise multilevel row and column waveforms of up to 100 V (200 Vp-p) are sometimes required to achieve high contrast [21]. Due to the high voltages required, it has been difficult to build small-die-area drivers with hundreds of outputs that compare in cost with lower-voltage, dynamically addressed (or transient image) LCDs. However, more recent progress has made impressive gains in lowering the required voltages and bringing the drive circuits more in line with their higher-volume LCD cousins. By lowering the cell gap to about half that used with TFT-LCDs, the drive voltage for the Nemoptic Corp. BiNem® display has been reduced to about 15 Vp-p. This is within the range of current STN, and some TFT, LCD drivers. Changing the pixel state consists of applying a pulse to the pixel that always has a single step to the peak, and then either a single step to return, which produces a white pixel in a reflective display, or a double step to return, which produces a black pixel. By using conventional STN and TFT drivers together, grayscale and full-color 'bistable' displays have been produced [22]. Although the 32 gray levels demonstrated are not strictly bistable, they are 'sticky' in the sense that they are programmed by the step level on the falling edge and stay at that level until addressed again. By varying where the step occurs over a 3 V range, each sub-pixel varies spatially between black and white; in other words, a percentage of the sub-pixel is white and a percentage black, and this varies to produce an averaged gray level to the eye. As both drive voltages and fabrication processes are brought closer to those of high-volume LCDs, the cost of bistable LCDs will decline and new applications will open for these low-power displays.
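The single-step versus double-step distinction can be captured as a simple waveform generator. The sketch below emits time/voltage samples for the two cases; the 15 V peak follows the value quoted above, while the step timing and the intermediate level are illustrative assumptions.

```python
def binem_pulse(target, v_peak=15.0, v_mid=7.5, step_us=50):
    """Return (time_us, volts) samples: a single-step fall produces a white
    pixel, a double-step fall a black one. Timings and v_mid are illustrative."""
    samples = [(0, 0.0), (step_us, v_peak), (2 * step_us, v_peak)]
    if target == "white":                       # one step straight back to 0 V
        samples.append((3 * step_us, 0.0))
    elif target == "black":                     # pause at an intermediate level
        samples += [(3 * step_us, v_mid), (4 * step_us, 0.0)]
    else:
        raise ValueError("target must be 'white' or 'black'")
    return samples

print("white:", binem_pulse("white"))
print("black:", binem_pulse("black"))
```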
9.7.3.2 Electrophoretic Drive

An ink-jet printing technique for the organic polymer TFT structures of an electrophoretic display was demonstrated in 2002 [23]. The spheres, or the pigment contents within the spheres, are polarized and located in an electric field, so their orientation can be changed by reversing the field. Does this sound similar to the conditions under which LC molecules twist? It is, and therefore the drive structures can be very similar to those of LCD panels. As you can see in Figure 9.34, just as in TFT-LCD arrays, a single TFT controls the voltage applied to each pixel or sub-pixel. From a semiconductor standpoint, the difficulty in driving these types of electrophoretic display has historically been in the voltage requirements. The optical switching speed of the electrophoretic display is directly dependent upon the applied voltage [24]. Typical voltages have been reported in the 20 V range, making a higher-voltage driver necessary, and even higher voltages can be desirable to speed up the switching time, which can be between 500 ms and 1 s. However, commercial drivers that achieve +15 V / -15 V are used successfully in 4-level grayscale products. Recent studies are showing that pulsed waveforms applied to the TFT can achieve grayscale performance that leads to a full-color display [24].
Figure 9.34 A single TFT drives an electrophoretic pixel [23].
9.7.4 iMoD: the Interferometric Modulator

Recently, Qualcomm has invested in commercializing a reflective display based on light interference. This type of MEMS pixel was first reported in 1997 [25] and was worked on for several years by Iridigm Display Corporation before the company was purchased by Qualcomm. The structure has several advantages: it is bistable, requiring no power to maintain an image; the MEMS membrane structure itself has hysteresis, so no active matrix driver is required; and, by varying the air gap height in the pixel, colors can be created. In Figure 9.35, pixels (or sub-pixels) are shown in the 'on' or light state and the 'off' or dark state. The 'on' state can be centered at any wavelength by varying the gap between the reflective membrane and the glass substrate.
Figure 9.35 The iMoD™ pixel structure.
Although no drive circuit schematic has been published, it has been reported that the membrane is deformed at voltages below 5V within several microseconds, and that existing STN drivers have sufficient drive and programmability to operate the display [26]. Colors are created in the same way as mainstream LCD panels, using red, green and blue sub-pixels.
9.8 Summary

Although there is incredible diversity among flat display types for mobile applications, paralleled by an equally diverse set of display driver requirements, the high manufacturing volume of AMLCD drivers continues to attract the most investment, and it thus becomes the comparison by which all other techniques are judged. Many new display technologies, whatever their physics, endeavor to reuse the existing LCD panel manufacturing infrastructure, and their display driver electronics likewise tend to converge toward as much similarity with AMLCD drivers as possible. (One example is AMOLED drivers moving to voltage drive and away from current drive.) As we look toward promising new display technologies for mobile appliances, including the BiNem® and cholesteric LCDs, electrophoretic reflective displays, the recently announced time multiplexed
optical shutter (TMOS) from Unipixel, and several variations on OLED fabrication, each tries to minimize the cost of change by reusing drive techniques, voltages and currents, and circuits from AMLCD where possible. Next to the panel manufacturing itself, the drive electronics will play the next largest role in how quickly, and at what cost, a new technology can be adopted. Based on the one-billion-unit-per-year mobile phone market, coupled with growing PMP, music player, GPS, gaming and a myriad of other handheld portable items, the small-format display market shows no signs of slowing innovation. The huge pressure on power consumption and the unfulfilled potential of existing displays offer fertile ground within which new display technologies, and their display driver electronics, will grow into the future.
References
[1] Smenza, P. (November 2006) Display Market Outlook, presentation delivered at iSupply Flat Information Display Conference.
[2] Ford, D. (November 2006) Global Mobile Handset Overview, presentation delivered at iSupply Mobile Handset Briefing.
[3] Nikkei Electronics Asia (2000) Organic EL Panels Challenge LCDs for Mobile Phone Use, June.
[4] Jakhanwal, V. (2006) 'Handset displays – as unit growth slows, increases in content create opportunity', iSuppli North America Briefing, November.
[5] Scheffer, T. J. and Clifton, B. (2002) 'Active Addressing Method for High-Contrast Video-Rate STN Displays', SID Symposium Digest, p. 228.
[6] Drawings courtesy of Dick McCartney, Displays CTO, National Semiconductor Corporation internal training materials, 2007.
[7] McCartney, R. and Bell, M. (2004) A Third Generation Timing Controller and Column Driver Architecture Using Point-to-Point Differential Signaling, SID Symposium Digest, Paper 60.1.
[8] Latvala, J. (2007) National Semiconductor internal communication.
[9] Na, Y., Lim, B. and Kwon, O. (Hanyang University, Seoul, Korea), with Kim, C., Nam, Y., Kim, H. and Kim, S. (LG Electronics Institute of Technology, Seoul, Korea) (2002) 1.8-inch Full-Color OEL Display for IMT2000 Terminals, SID Digest, Session 42.2.
[10] Smith, E.C. (2007) Total Matrix Addressing (TMA™), SID Digest, Session 8.3, Advanced System Development Group, Cambridge Display Technology, Godmanchester, Cambridgeshire, UK.
[11] Tohma, T., SID Proceedings of IDRC 1997, F1–F4, as cited in J. L. Sanford and F. R. Libsch, TFT AMOLED Pixel Circuits and Driving Methods, SID Symposium Digest, 2003, 4.2.
[12] Van de Biggelaar, T. et al. (2001) Passive and Active Matrix Addressed Polymer Light Emitting Diode Displays, Proceedings of SPIE, 4295, pp. 134–140, as cited in J. L. Sanford and F. R. Libsch, TFT AMOLED Pixel Circuits and Driving Methods, SID Symposium Digest, 2003, 4.2.
[13] Date, D. et al. (Matsushita Electric Industrial Co.) and Tsuge, H. et al. (Toshiba Matsushita Display Technology Co.) (2003) Development of Source Driver LSI for AMOLED Displays Using Current Driving Method, SID Symposium Digest 2003, Late News Paper 36.5L.
[14] Jung, S.H., Nam, W.J. and Han, M.K. (2002) A New Voltage Modulated AMOLED Pixel Design Compensating Threshold Voltage Variation of Poly-Si TFTs, SID Symposium Digest, p. 104.
[15] Kwak, W.K. et al. (Samsung SDI Co.) A 5.0-in WVGA AMOLED Display for PDAs, SID Symposium Digest, Session 9.2.
[16] Kim, J.H. and Kanicki, J. (2003) 200dpi 3-a-Si:H TFTs Voltage-Driven AM-PLEDs, Dept. of EECS, University of Michigan, SID Symposium Digest.
[17] Lu, H.Y., Liu, P.T., Hu, C.W. et al. (2007) A Novel a-Si TFT Pixel Circuit with High Immunity to the Degradation of the TFTs and OLEDs Used in AMOLED Displays, SID Symposium Digest, Paper 13.3.
[18] Lee, J.H., Park, H.S., Choi, S.H. et al. (2007) Highly Stable a-Si:H TFT Pixel for Large Area AMOLED by Employing Both Vth Storing and the Negative Bias Annealing, SID Symposium Digest, Paper 13.2.
[19] Yagi, I., Hirai, N., Noda, M. et al. (2007) A Full-Color, Top-Emission AM-OLED Display Driven by OTFTs, SID Symposium Digest, Paper 63.2.
[20] Ohshima, H. and Führen, M. (2007) High-Performance LTPS Technologies for Advanced Mobile Display Applications, SID Symposium Digest, Paper 46.2.
[21] Doutreloigne, J., Vermandel, M., De Smet, H. and Van Calster, A. (2005) A Multifunctional High-Voltage Driver Chip for Low-Power Mobile Display Systems, IEEE International Symposium on Circuits and Systems.
[22] Joubert, C., Angele, J., Bollenbach, N. et al. (2005) Nemoptic: Latest Developments in BiNem® LCDs, SID Symposium Digest, Session 62.1.
[23] Kawase, T. and Newsome, C. (Epson Cambridge Laboratory, Cambridge, UK), Inoue, S., Saeki, T., Kawai, H., Kanbe, S. and Shimoda, T. (Seiko-Epson Corporation, Nagano, Japan), Sirringhaus, H., Mackenzie, D., Burns, S. and Friend, R. (Plastic Logic Limited, Cambridge, UK) (2002) Late-News Paper: Active-Matrix Operation of Electrophoretic Devices with Inkjet-Printed Polymer Thin Film Transistors, Proceedings of the SID.
[24] Zehner, R., Amundson, K., Knaian, A. and Zion, B. (E Ink Corporation), Johnson, M. and Zhou, G. (Philips Research Laboratories) (2003) Drive Waveforms for Active Matrix Electrophoretic Displays, Proceedings of the SID, Paper 20.2, p. 842.
[25] Miles, M.W. (1997) A New Reflective FPD Technology Using Interferometric Modulation, SID Symposium Digest, Session 7.3.
[26] Miles, M., Larsen, E., Chui, C. et al. (2002) Digital Paper™ for Reflective Displays, Iridigm Display Corporation, SID Symposium Digest, Session 10.1.
[27] Tang, C.W. and Van Slyke, S.A. (1987) Applied Physics Letters, 51, 913.
10 Mobile Display Digital Interface (MDDI)

George A. Wiley,¹ Brian Steele,² Salman Saeed,¹ and Glenn Raskin¹
¹ Qualcomm Incorporated, San Diego, California, USA
² Qualcomm Incorporated, Boulder, Colorado, USA
10.1 Introduction

10.1.1 The Need for Speed

The progression to 3G and faster cellular standards is enabling the migration of rich media content onto the handset. In addition, the handset is evolving into the ultimate convergence device, integrating the capabilities of many single-purpose tools into a single device [4]. At the same time, users expect the integrated handset to provide the same level of performance as these dedicated tools. These consumer demands are rapidly driving the migration toward larger, more colorful display screens; better interactivity via touch screens; higher-resolution cameras; and rich, vibrant sound. These features are increasing handset baseband as well as display and camera interconnect clock speeds, resulting in higher electromagnetic interference (EMI) emissions and increased power consumption. The need to reduce EMI is paramount; handset manufacturers are spending tens of thousands of dollars testing and designing methods to clamp down on such emissions.
Figure 10.1 The mobile handset as an ultimate convergence device.

10.1.2 Handset Display and Camera Trends

Handset displays are increasing in physical size and resolution. It took about a decade for the industry to progress from the tiny one-inch monochrome segmented displays to four-thousand-color 1-1/2 inch graphic displays of QCIF or smaller resolution. Today, device capabilities are evolving even more rapidly, offering 24-bit true color and resolutions expanding from QCIF to VGA and beyond. Not surprisingly, the bandwidth requirements of these displays have skyrocketed, as shown in Figure 10.3. Handset cameras are also placing demands on bandwidth. As devices compete for 'pocket space', handset manufacturers are progressively raising the mega-pixel bar.
Figure 10.2 Phone handset display trends.
Figure 10.3 Display bandwidths for various resolutions (QCIF through 1080p) at 24 bpp and 30 Hz refresh.
Camera resolutions increased from 0.3 Mpix (VGA) to 1.3 Mpix (SXGA), the most common resolution in cameras today; 2 Mpix, 3 Mpix, 5 Mpix, and 8 Mpix offerings are on the horizon. Several manufacturers are planning to offer multiple cameras in one handset to support videoconferencing. The combination of handset displays and cameras is pushing the limits of current bandwidth and signal routing capabilities. The large number of traces required to connect the camera and display to the baseband processor cannot be supported by the parallel interfaces that have been used successfully in the past. For example, a 24-bit display requires a total of 28 signals: 24 for red, green, and blue (RGB), 3 for video control, and 1 pixel clock signal. An HVGA 24 bpp display at 60 fps requires a pixel clock (PCLK) of about 10 MHz, and pixel bandwidths approach 220 Mbps. Camera requirements are pushing bandwidth requirements higher as well: a 5 Mpix camera needs just over 1 Gbps of bandwidth with a PCLK of 95 MHz. Increasing pin count and faster pixel clocks create two problems: space constraints and EMI emissions. The 28 pins required for a parallel RGB display must travel from the display driver, which could be a Chip-On-Glass (COG) or Chip-On-Flex (COF) design, through a flexible printed circuit (FPC), and through the phone hinge to where the baseband processor is located. This requires a wide FPC, larger interconnects, and sophisticated routing in the printed circuit board (PCB) to the baseband processor, all of which substantially increase cost. EMI emissions are also an unwanted side effect; the large number of traces and faster pixel clocks result in emission components across a wide frequency range. This could be potentially disastrous for a handset because EMI components can easily degrade (desense) any of the multiple integrated radios for GSM, CDMA, Bluetooth, WiFi, and TV that are becoming commonplace. Although EMI emissions can be reduced with additional capacitive loading, this comes at the expense of higher power consumption.
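These interface figures are easy to reproduce. The sketch below recomputes the HVGA example and two neighboring formats; the 10% blanking allowance used to turn the raw pixel rate into the quoted ~10 MHz PCLK is an assumption.

```python
# Display interface requirements for a parallel 24-bit RGB connection.
BITS_PER_PIXEL = 24
REFRESH_HZ = 60
BLANKING = 1.10        # assumed ~10% blanking overhead on the pixel clock

def interface(h, v):
    raw_mpix = h * v * REFRESH_HZ / 1e6
    return raw_mpix * BLANKING, raw_mpix * BITS_PER_PIXEL   # PCLK MHz, Mbit/s

for name, (h, v) in {"QVGA": (320, 240), "HVGA": (480, 320), "VGA": (640, 480)}.items():
    pclk, mbps = interface(h, v)
    print(f"{name}: PCLK ~{pclk:.1f} MHz, pixel bandwidth ~{mbps:.0f} Mbit/s")
```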
10.1.3 The Solution is Serial

Introduced in 2004, the Mobile Display Digital Interface (MDDI) is a cost-effective, low-power solution. This Video Electronics Standards Association (VESA®)-approved standard [1,2] uses a digital packet data link to enable high-speed, short-range communications with a display device. MDDI supports audio transducers, keyboards, pointing devices and other input devices integrated with a mobile display [3,4]. MDDI can also support devices external to the handset, up to two meters from the host. In external mode,
Figure 10.4 Clamshell phone display connections.
Figure 10.5 MDDI Connections to mobile phone peripherals.
the handset provides power to the external device as well, so head-mounted displays can connect to the handset with a very thin cable. MDDI is extensible and can support user-defined data types, such as:
- full-motion video in the form of full-screen or partial-screen bitmap fields, or compressed video, depending on device capability;
- static bitmaps at low rates to conserve power and reduce implementation cost in some portable devices;
- pulse code modulation (PCM) or compressed audio data at any resolution or rate compatible with the serial link speed;
- pointing device position and selection;
- control and status information in both directions to detect the capability of the opposing device and set its operating parameters.
Since its inception, MDDI has been widely used across a range of baseband processors manufactured by Qualcomm, and is found in many commercial phones.
10.2 MDDI Advantages

10.2.1 Space Constraints

MDDI operates over a minimum of four signals, not including power and ground (as described later in this chapter in Section 10.5). Routing four MDDI signals through the PCB and FPC to the display or camera is a much simpler task for handset designers compared to routing a full parallel interface, which accounts for the growing number of MDDI-based phones in the market. A narrower FPC traversing the hinge enables new form factors and innovative designs. For example, a clamshell phone once limited to only open and closed positions can now offer an upper half that rotates in multiple dimensions to enhance new user experiences. The reduced pin count frees up space originally consumed by the parallel port. This
Figure 10.6 EMI emissions compared with a parallel interface and serial MDDI link.
space can be used to pack other components, enabling thinner, smaller handsets. Since the MDDI host is integrated into the baseband processor, external, third-party serializer–deserializer (SERDES) devices are no longer needed, reducing bill of materials (BOM) cost by $1 to $2. External devices such as head mounted displays can also be supported by external MDDI with as few as six signals, which can fit into a 3 mm diameter cable and provide video and audio data as well as power.
10.2.2 EMI Reduction

The low voltage differential serial interface design significantly reduces EMI. The spectrogram in Figure 10.6 illustrates the differences between a parallel interface and a serial MDDI interface running at 450 Mbps. The spectrogram shows visible noise reduction in the frequency domain for the MDDI interface. For example, notice the reduction in noise in the 470 MHz to 770 MHz range, which is used for digital TV broadcast in Japan.
Figure 10.7 Display buffer refresh rates.
10.2.3 Power Reduction

Significant power savings can also be achieved by using the high-bandwidth 'burst mode' feature of MDDI. Data can be sent to the display frame buffer at a rate faster than the frame update rate, after which the MDDI host circuit nodes can be switched to hibernation mode. Hibernation mode switches off the various clocks and driver-receiver circuit nodes in the host to reduce power consumption (later sections explain this in greater detail). Using simple clock recovery circuits built into the MDDI client eliminates the need for power-hungry phase locked loops (PLLs) in the client, saving even more power. The architecture block diagram in Figure 10.7 illustrates this scheme. The display is refreshed from a frame buffer (generally co-located within the display controller/driver IC) at the frame rate required for flicker-free viewing, typically 60–85 Hz. On most handsets, full-motion video is 30 frames per second (fps); this rate is lower for static images. Figure 10.8 illustrates the power savings for two cases: a display being updated at 15 fps, and another at 30 fps. The benefits of hibernation are clearly visible. In most cases, using burst mode transfers to the frame buffer (a faster MDDI host clock) reduces power consumption by half compared to the consumption needed to update the display continuously at 15 or 30 fps.
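The benefit of bursting and then hibernating can be estimated with a simple duty-cycle model: the link transfers one frame, then sleeps for the rest of the update period. In the sketch below the frame size and update rate follow the HVGA/30 fps discussion, but the active and hibernation power figures are illustrative placeholders, not MDDI measurements.

```python
# Duty-cycle model of burst-mode transfer followed by hibernation.
FRAME_BITS = 480 * 320 * 24          # one HVGA frame at 24 bpp
UPDATE_HZ = 30                       # content update rate (video)

P_ACTIVE_MW = 60.0                   # link power while transferring (placeholder)
P_HIBERNATE_MW = 1.0                 # link power while hibernating (placeholder)

def average_link_power(link_rate_mbps):
    period_s = 1.0 / UPDATE_HZ
    active_s = min(FRAME_BITS / (link_rate_mbps * 1e6), period_s)
    idle_s = period_s - active_s
    return (P_ACTIVE_MW * active_s + P_HIBERNATE_MW * idle_s) / period_s

for rate in (150, 450, 1000):        # Mbit/s
    print(f"{rate:5d} Mbit/s link: average ~{average_link_power(rate):.1f} mW")
```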
Figure 10.8 MDDI host power versus data rate.
10.2.4 Scalability

MDDI supports a wide range of display resolutions, frame update rates, and link data rates. Scalability provides support for data rates beyond high-definition television and multiple data types in both directions, such as video with 7.1 channel audio, control/status, keyboard and pointing device.
10.2.5 MDDI System Connections

With MDDI, many clamshell handset configurations are possible. Several examples are discussed below.
Table 10.1 MDDI 1.5 data rates (MDDI bandwidth capability).

| Type | Signals | B/W (Mbit/s) |
| 1 | 4 | 1000 |
| 2 | 6 | 2000 |
| 3 | 10 | 4000 |
| 4 | 18 | 8000 |

Signal count does not include ground and power; signal count includes differential pairs.
Figure 10.9 MDDI in a clamshell phone, discrete LCDC.
10.2.5.1 Integrated MDDI Client and LCDC, Separate Drivers

The configuration illustrated in Figure 10.9 allows the main and sub-display to exist in separate modules, with individual liquid crystal display (LCD) row and column drivers controlled by a single MDDI client and LCD controller (LCDC). This configuration is useful when the location of the main and sub-displays prohibits a tightly integrated module. Note that the camera module includes an MDDI host and, at the baseband processor end, an MDDI client. This design provides data transfer rates not achievable via the reverse link.
10.2.5.2 Integrated MDDI Client, LCDC and Driver

Figure 10.10 shows the most common configuration of a clamshell phone. The MDDI client, LCDC and display driver for both the main and sub-display are tightly integrated into one package and exist in a single module design. The MDDI client can provide the LCDC with content for both the main and sub-display, and the display driver controls the main and sub-display simultaneously. Separate vertical synchronization (VSYNC) control is required when reverse link capability is removed from the MDDI client.
10.2.5.3 Integrated MDDI Client, LCDC and Camera MDDI

Figure 10.11 shows an adaptation of the configuration of Figure 10.9, where the MDDI host, MDDI client, LCDC, and camera controller exist together in a single integrated circuit.
Figure 10.10 MDDI in a clamshell phone, integrated chip on glass.
Figure 10.11 Integrated MDDI camera and display controller.
10.2.5.4 External LCDC, Integrated MDDI Client and Driver

With display resolutions surpassing 640 by 480 pixels, the bulk of the display controller/driver chip consists of the memory required for the frame buffer. The frame buffer and the display controller can be integrated into the baseband processor. This enables thinner bezel displays and reduces BOM costs for display controller manufacturers. In Figure 10.12, the double data rate dynamic random access memory (DDR DRAM) (commonly part of the baseband processor/handset architecture) contains the display frame content, and the LCDC is integrated within the baseband processor. In this example, the MDDI link provides frame content to the MDDI client and display driver at the rate required for flicker-free viewing, typically 60–85 Hz.
Figure 10.12 External LCDC with integrated MDDI client and LCD driver.
10.2.5.5 Integrated MDDI Client, LCDC and Driver, and Touch Screen

With the bi-directional features (reverse link) of MDDI, touch screen data can now flow directly into the LCDC and back to the baseband processor. This configuration, shown in Figure 10.13, minimizes extra routing from the touch screen controller and allows touch screen processing algorithms to reside within the baseband processor, reducing the size and cost of the touch screen device.
10.3 Future Generations of MDDI

Key advantages of the MDDI architecture are its very low overhead rate and the ability for packets to carry more than just display data. This enables handset manufacturers to integrate multiple data streams over the MDDI link, not just display connections. Many applications have already been implemented over MDDI, based on industry trends that we will touch upon in this section.
Figure 10.13 MDDI in a clamshell phone with touch screen.
10.3.1 Audio Multiplexed with Video

With data rates approaching several gigabits per second, MDDI can support static display implementations and fast refresh rates, which enable efficient video transfer. Video playback in handsets is one application that is seeing a vast increase in usage. Application processors are increasing their capability to support video processing, and content providers are eager to distribute content to cell phone users. Handset makers have begun rolling out TV/cell-phone combinations for users who want to watch programming while they commute or to fill in downtime. The industry has recognized three main standards that are in various stages of roll-out in Europe, the United States, and Asia: the mobile video distribution standards are Digital Video Broadcasting for Handheld (DVB-H), MediaFLO™, and Digital Multimedia Broadcasting (DMB). The proliferation of infrastructure and business models for mobile media is driving the need for video over a serial display interface, such as MDDI, which enables easy connectivity to displays with higher resolutions, footprints, and refresh rates. In addition to video, downloads will include audio content. MDDI supports multiplexing of audio content interleaved with display data.
10.3.2 High Speed IrDA Concurrent on Reverse Link

Another advantage of the MDDI link is that it offers a reverse communication channel from the display back to the baseband device. Touch screen displays were the first applications to take advantage of this capability, as illustrated in Figure 10.13. Another application that can be passed over the reverse link is data ported from an IrDA (Infrared Data Association) or IrSimple controller. IrDA is typically used for short-range exchange of data over infrared light, for uses such as Personal Area Networks (PANs) or exchanging photos and media clips between mobile phones. Some exciting applications that
are being implemented in IrDA include IrFM (Infrared Financial Messaging), which can handle payment transactions by linking financial systems to a user’s cell network and transmitting information on infrared locally. This is known as ‘Point & Pay’.
10.4 MDDI Roadmap

MDDI was introduced into the VESA standards group in 2002. By 2003, products supporting data rates of up to 384 Mbps per lane were in production. Since the final ratification of the VESA MDDI Standard [1,2] in July 2004, the implementation of MDDI has progressed from 130 nm to 90 nm to 65 nm and 45 nm silicon processes, and beyond. While the lithography continues to shrink, the speed of MDDI continues to improve. By taking advantage of the reduced capacitance and better lithography control of the new generation of silicon processes, MDDI Gen 1.1 has improved its performance by well over 400 Mbps. Design architecture improvements and new, high-speed architectures being developed in VESA increase performance even more.
10.4.1 MDDI Gen 1.2

Architectural improvements in MDDI Gen 1.2 further extend the performance of MDDI. Production products were introduced in 2007 that support data rates of up to 1 Gbps per lane. The eye diagram in Figure 10.14 illustrates the typical performance of an MDDI Gen 1.2 system.
Figure 10.14 MDDI Gen1.2 test chip eye pattern.
Next-generation MDDI technologies strive to preserve compatibility with previous generations while simultaneously increasing data rates and reducing power consumption. Most improvements were made to the physical, or 'PHY', layer and utilize innovations in high-speed complementary metal oxide semiconductor (CMOS) design and layout techniques. Architectural enhancements have been made as well, to take advantage of the higher data rates. Driving MDDI at higher data rates can eliminate the need for an external display bridge chip. Running at twice the rates of MDDI 1.1 enables MDDI Gen 1.2 to support 'RAM-less' displays. At an architectural level, handset manufacturers can use the system memory as a frame buffer and integrate the LCD controller into the baseband. Gen 1.2 adds packets to the link layer to simplify its use with RAM-less displays. The displays can directly interface
via MDDI Gen1.2 and utilize an integrated frame buffer in the host – saving the handset maker cost, part count and valuable area.
10.4.2 Next Generation MDDI

Future enhancements to MDDI's architecture may more than double existing data rates to 3 Gbps per link. This next generation architecture will feature lower transition rates and a lower power interface. By supporting higher data rates, applications that require greater than 1 Gbps transmission can be implemented using only a single link. Reducing the required pairs of serial links will simplify interconnect designs and handset signal routing while minimizing the EMI emissions that are problematic in today's handsets. Key trends anticipated by handset manufacturers require even higher display data rates. Larger mobile displays of ultra-mobile personal computers (PCs) are merging the portability of a cell phone with the processing power of a laptop. To meet the needs of this market, low power, high data rate display interconnection is vital. Next generation MDDI architectures are being developed to support these displays. Higher performance MDDI serial architectures will help reduce the cost of these larger portable devices. The multi-functional integration and flexibility discussed previously for features such as touch screen, audio, video or even IrDA will enable these future devices to support a multitude of applications over a single high-speed link.
10.5 MDDI Technical Overview

10.5.1 Overview and Terminology

The devices connected by an MDDI link are called the host and client. Data from the host to the client travels in the forward direction, and data from the client to the host travels in the reverse direction, as illustrated in Figure 10.15, below.
Figure 10.15 MDDI terminology (image courtesy of VESA).
A majority of the link data traffic is a result of the host sending pixel data to the client (forward traffic). Figure 10.16 illustrates how the host enables reverse link communication by sending a special packet that allows the client to take over the bus for a specified duration so it can send packets to the host (reverse traffic).
Figure 10.16 Bi-directional MDDI communication (image courtesy of VESA).
Figure 10.17 Physical connection of MDDI host and client (image courtesy of VESA).
10.5.2 Physical Connection
Figure 10.18 Current consumption (in mA) of the PHY and of the link controller core versus link data rate (in Mbps).
The total bandwidth of an MDDI link may be increased by utilizing additional data pairs. The MDDI Standard specifies the use of one, two, four, or eight data pairs (referred to as a Type 1, Type 2, Type 3, or Type 4 link, respectively) plus a strobe signal. The data pairs are bi-directional, but the strobe signal is always driven by the host and received by the client. Depending on system requirements, the forward link may operate in a different type number than the reverse link. The strobe is used together with one of the data signals to create a recovered clock in the client device. It is advantageous to operate the link as fast as possible while using as few data pairs as possible. This is because the current consumed by the physical interface, a.k.a. the PHY, is largely independent of the link data rate, while the current consumed by the link controller core logic is proportional to the link data rate. Given this characteristic, which is illustrated in Figure 10.18, the system power
consumption can be reduced by operating the link as fast as possible using as few signal pairs as possible while keeping the link shut down in a hibernation state as often as possible.
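The trade-off described above can be made concrete with a toy model. The sketch below is not taken from the MDDI specification, and all of the current values in it are invented placeholders; it simply assumes a roughly rate-independent PHY current and a core current proportional to the active link rate, and shows that, for a fixed amount of display traffic, a faster duty-cycled link yields a lower average current.

# Hedged sketch (not from the MDDI spec): a toy model of average link current
# when a fixed payload must be moved and the link hibernates the rest of the time.
# All numerical values below are made-up placeholders for illustration only.

def average_current_ma(payload_mbps, link_rate_mbps,
                       phy_ma=5.0, core_ma_per_mbps=0.02, hibernate_ma=0.05):
    """Average current for a duty-cycled MDDI-style link.

    phy_ma           -- PHY current while the link is active (roughly rate-independent)
    core_ma_per_mbps -- link-controller core current per Mbps of active link rate
    hibernate_ma     -- residual current while the link is in hibernation
    """
    duty = min(1.0, payload_mbps / link_rate_mbps)          # fraction of time link is awake
    active_ma = phy_ma + core_ma_per_mbps * link_rate_mbps  # current while transferring
    return duty * active_ma + (1.0 - duty) * hibernate_ma

# Moving 100 Mbps of display traffic: a faster link spends less time awake, so the
# rate-independent PHY current is paid for a smaller fraction of the time.
for rate in (200, 400, 800):
    print(rate, "Mbps link ->", round(average_current_ma(100, rate), 3), "mA average")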
10.5.3 MDDI Physical Layer
All signals sent over the MDDI link operate as differential pair signals, to minimize electromagnetic interference and prevent system malfunctions due to ground bounce between the host and client devices. MDDI_Data0 and all other MDDI data signals are carried over a bi-directional differential cable, while MDDI_Stb is carried over a unidirectional differential cable driven only by the host. All pairs are terminated at the client end only. In addition to carrying data in both directions, the MDDI_Data0 signal pair can be used by either the host or the client to send a link wake-up pulse to the opposite end. A special low-speed, low-power differential receiver is used to detect this wake-up pulse; this is described in more detail later in this chapter. Figure 10.19, below, illustrates the primary functions of the MDDI physical layer circuitry in both the host and client.

Figure 10.19 Example of MDDI PHY circuit.

Data sent over the MDDI forward link is encoded using a data-strobe format. Figure 10.20 illustrates the waveforms resulting from encoding the data sequence '1110001011'. At any bit boundary, if a change of state is occurring on data, strobe maintains its previous state; however, strobe toggles to the opposite state if data does not change. In other words, between data and strobe there is always one and only one transition at each bit boundary.

Figure 10.20 Example of data-strobe encoding.

At the receiving end of the link, a clock signal is produced by computing the exclusive-OR of data and strobe. Figure 10.21 shows an example of a simple data-strobe encoder in the host, a decoder in the client, and a latching circuit to capture the received data. A practical high-speed implementation of the encoder and decoder might be combined with part of the serializer and deserializer and be co-located with the host and client PHY circuitry.
Figure 10.21 Example of data-strobe encoder, decoder, and data-recovery circuit.
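A minimal software sketch of the encoding rule and the exclusive-OR clock recovery just described may make the behavior easier to follow. It is an illustration of the rule stated in the text, not an implementation of the VESA circuit, and it uses the '1110001011' example from Figure 10.20.

# Minimal sketch of the data-strobe encoding rule described above: at each bit
# boundary exactly one of data or strobe changes, and data XOR strobe recovers a
# clock that toggles every bit. Illustration only, not the VESA specification.

def encode_data_strobe(bits, data_init=0, strobe_init=0):
    """Return (data_levels, strobe_levels) for a list of 0/1 bits."""
    data, strobe = [], []
    d_prev, s_prev = data_init, strobe_init
    for b in bits:
        s = s_prev if b != d_prev else s_prev ^ 1   # only one line changes per bit
        data.append(b)
        strobe.append(s)
        d_prev, s_prev = b, s
    return data, strobe

def recover_clock_and_data(data, strobe):
    """Client side: clock = data XOR strobe; data is sampled directly."""
    clock = [d ^ s for d, s in zip(data, strobe)]
    return clock, list(data)

bits = [1, 1, 1, 0, 0, 0, 1, 0, 1, 1]               # the '1110001011' example in the text
d, s = encode_data_strobe(bits)
clk, _ = recover_clock_and_data(d, s)
print("data  :", d)
print("strobe:", s)
print("d^s   :", clk)                                # recovered clock toggles every bit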
The use of data-strobe encoding has advantages over other data encoding methods: data-strobe encoding has about twice the tolerance to skew between data and strobe compared to a simple data-clock method, and it causes fewer signal transitions, which reduces electromagnetic emissions and interference. Skew in the path of MDDI_Data0 and MDDI_Stb can be caused by differences in the path delays through the drivers and receivers in the host and client, as well as differences in the delays through the connections of MDDI_Data0 and MDDI_Stb. A quantitative discussion of the link timing budget affected by the skew and other imperfections in the host and client implementations is presented later in this chapter. For Type 2, Type 3, and Type 4 modes (which have two, four, and eight data pairs, respectively), MDDI_Data0 and MDDI_Stb still operate using the same data-strobe encoding rule as used for a Type 1 link. The other MDDI_Data signals pass data with identical rate and phase as MDDI_Data0. The recovered clock in the client device (created from the exclusive-OR of MDDI_Data0 and MDDI_Stb) is also used to sample MDDI_Data1 through MDDI_Data7.
The differential signals that carry the data and strobe signals may be implemented as twisted pairs in a cable, or as differential impedance-controlled traces on a printed wiring board or flex circuit. The characteristic impedance of the transmission line may vary by up to 20%, but the termination resistance must be accurate to 2%. The reason for this requirement is that it is relatively economical to implement an accurate termination resistance but more difficult to tightly control the impedance of the transmission line. On-chip terminations can reduce the number of external components in the display, but this is not always practical, particularly when the MDDI client resides inside an LCD controller/driver IC mounted on the LCD glass. The indium tin oxide (ITO) traces on the LCD glass typically have a resistance similar to the MDDI termination resistance. If an on-chip termination were
used, the high-resistance ITO traces in series with the on-chip termination would form a voltage divider that would significantly attenuate the signal seen by the differential receiver. Use of an external termination resistor on the interconnect flex circuit eliminates this signal attenuation issue while keeping the termination close enough to the differential receiver on the display driver IC to remain effective. An example of these two cases is shown in Figure 10.22, below.
10.5.4 Internal and External Modes
External Mode describes the use of an MDDI link to connect a host in one device to a client outside of that device that is located up to two meters away from the host. In this situation the host must send power to the external client so both devices can easily operate together in a mobile environment. Internal Mode is used to connect a host to a client contained within the same device, over a cable up to about 18 cm in length. The VESA MDDI Standard does not specify requirements for the power connection to the client for internal mode. The standard has slightly different electrical requirements for internal mode compared to external mode, mainly due to differences in the received signal quality through a shorter versus a longer cable. There are also some differences in the link layer requirements, because internal mode devices are permanently connected to each other, while external mode clients may become disconnected and be reconnected with a host. An external mode host must be able to detect both the connect and disconnect events and be able to discover the capabilities of a newly attached client device.
10.5.5 Multiple Stream Synchronization
An MDDI link can carry virtually any combination of display, audio, keyboard, pointing device, and control information. Both isochronous streams and asynchronous updates are supported. Any simultaneous combination of data types may occupy the link as long as the aggregate data rate is less than the maximum MDDI link rate, which is limited only by the maximum serial data rate and the number of data pairs. A concept called the Common Frame Rate (CFR) is used to synchronize simultaneous isochronous streams. This is accomplished by grouping together blocks of media stream data having a predetermined size within a frame structure that is established on the MDDI link. The client device uses the frame or sub-frame arrival rate as its time reference to generate video frame rates and audio sample rate clocks, keeping audio, video, and other stream data in sync. A low CFR increases channel efficiency by decreasing the overhead needed to transmit the framing information. On the other hand, a high CFR decreases latency and allows a smaller elastic data buffer to be used for some stream data, such as audio samples. The CFR is programmable and may be any size that is appropriate for the isochronous streams used in the end application. Table 10.2 shows the number of bytes required per sub-frame (programmable) for multiple isochronous streams used by one particular head-mounted display application. Fractional counts of bytes per sub-frame are easily obtained using a simple programmable M/N counter structure. For example, a count of 26-2/3 bytes per sub-frame can be implemented by sending two sub-frames containing 27 bytes followed by one sub-frame containing 26 bytes. A smaller CFR could be chosen to produce an integer number of bytes per sub-frame; however, the hardware implementation of the simple M/N counter is probably less costly than that of a larger audio sample first-in first-out (FIFO) buffer.
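As a rough illustration of the M/N counter idea, the sketch below spreads M bytes over N sub-frames with a simple accumulator. The schedule it produces (26, 27, 27) differs in order from the 27, 27, 26 example quoted above, but the long-term average is the same 26-2/3 bytes per sub-frame. It is an illustrative sketch, not the MDDI hardware design.

# Hedged sketch of an M/N accumulator: distribute M bytes over every N consecutive
# sub-frames so the long-term average is exactly M/N bytes per sub-frame.

def mn_byte_schedule(m, n, sub_frames):
    """Yield the number of bytes placed in each of `sub_frames` sub-frames so that
    every n consecutive sub-frames carry exactly m bytes in total."""
    acc = 0
    for _ in range(sub_frames):
        acc += m
        count = acc // n          # whole bytes that fit in this sub-frame
        acc -= count * n          # carry the fractional remainder forward
        yield count

# Voice stream from Table 10.2: 26-2/3 bytes per sub-frame = 80 bytes per 3 sub-frames.
print(list(mn_byte_schedule(80, 3, 6)))   # -> [26, 27, 27, 26, 27, 27]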
Figure 10.22 Differential signal termination with chip on glass.

Table 10.2 Example ideal stream rates for a common frame rate (CFR) of 300 Hz.

Stream              X     Y     Bits/Sample   Frame/Sample Rate   # of Channels   Rate (Mbps)   Bytes per Sub-frame
Computer Game       720   480   24            30                  1               248.832       103680
Computer Graphics   800   600   24            10                  1               115.200       48000
Video               640   480   24            29.97 or 30         1               221.184       92160
CD Audio            1     1     16            44100               2               1.4112        588
Voice               1     1     8             8000                1               0.064         26-2/3

10.5.6 Overview of the Link Layer
The basic MDDI frame structure is illustrated in Figure 10.23.

Figure 10.23 MDDI frame structure (image courtesy of VESA).

The highest-level timed interval in the MDDI system is the media frame. The media frame contains all of the media content necessary for a defined period of time. In a system sending both audio and video streams over the MDDI link, the media frame contains all of the video content to update the screen, as well as all of the audio content for the duration of one video frame update. For example, if a system has set the media frame size to update the displayed image at 30 frames per second, the MDDI audio data sent in the same media frame contains a sufficient number of audio samples to be presented to the audio output for a duration of 33.33 ms. The video and audio content in one media frame can be transmitted over several sub-frames. Media frames are divided into an arbitrary number of sub-frames as needed for each system.
The beginning of each sub-frame is identified by a sub-frame header packet. This packet provides much of the information about the operation of the link. Contained within the sub-frame header packet is a sub-frame count that identifies a particular sub-frame's location within a media frame. When this sub-frame count rolls over to zero, the client can identify the media frame boundary. The sub-frame header packet also contains a field identifying the length of a sub-frame: the client can expect to receive the next sub-frame header packet exactly after the number of bytes specified by the sub-frame length field. Another key feature of the sub-frame header packet is the Unique Word field. This is examined by the client in combination with the packet-type field to form a four-byte pattern that the client uses for synchronization. The combined four-byte pattern is chosen specifically for its auto-correlation properties, which makes it highly likely that the client will be properly aligned to the incoming data stream.
The majority of packets are structured using a very similar format. They contain a packet-length field that identifies the number of bytes until the next packet, a packet-type field to identify the contents of the packet, a client identifier (ID) field to identify the client to which the packet is addressed or which client has sent the packet, and a final cyclic redundancy check (CRC) field which is used to verify that the packet has been received reliably. Packets are transmitted one right after another without any breaks. The CRC field of the last packet of a sub-frame will be the last two
bytes before the packet-length field of the sub-frame header packet that marks the beginning of the next sub-frame.
A special packet that allows the system to maintain a continuous, uninterrupted stream of packets is the filler packet. It is used to keep the MDDI stream sending packets even when no media packets or control packets are ready to be transmitted. This allows a constant stream of data, and a recoverable clock, to be available to the client core for processing. The filler packet is flexible in length and can be sent at the end of a sub-frame so that the last bytes of a packet are transmitted immediately prior to the beginning of the next sub-frame header packet.
Some packets deviate slightly from the basic format. The video stream packet contains an additional CRC field that covers the packet header contents but not the pixel data contents. The advantage of this is that any corruption of the header can be detected before pixels are improperly written to a display buffer. The same approach is taken with register access packets, to verify the packet header information separately from the data to be written to registers and ensure no invalid writes take place.
Another set of unique packets comprises those used to receive data or timing back from the client, and those used to calibrate the client for skew in the link. These packets are the round-trip delay measurement packet, the reverse link encapsulation packet, and the forward link skew calibration packet. They contain a header portion consisting of a packet length, packet type, client ID, additional packet fields, and a CRC. However, their structure is different from the basic packet structure because, after the CRC, the host may send many more bytes of data to perform the function of the packet. The packet length does not identify the number of bytes until the CRC field; instead, it identifies the number of bytes until the start of the next packet, which will be many bytes after the CRC field. The CRC itself still helps to validate the contents of the packet header, so it serves a valuable purpose. The remaining bytes that follow the CRC either contain no information, or it is not practical or necessary to verify them using a second CRC field.
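Because packets are sent back to back and each one declares its own length, a receiver can walk a sub-frame packet by packet. The sketch below illustrates that idea only; the field widths and ordering used here (a 2-byte little-endian length counting the bytes that follow it, then a 2-byte type) are illustrative assumptions, and the authoritative field definitions are those in the VESA MDDI Standard.

# Hedged sketch of stepping through a stream of back-to-back packets using a length
# field, as described above. Field sizes and semantics are illustrative assumptions;
# consult the VESA MDDI Standard for the real packet layouts.

def split_packets(stream: bytes):
    """Split a byte stream into (packet_type, packet_bytes) tuples."""
    packets, offset = [], 0
    while offset + 4 <= len(stream):
        length = int.from_bytes(stream[offset:offset + 2], "little")
        ptype = int.from_bytes(stream[offset + 2:offset + 4], "little")
        end = offset + 2 + length            # length counts the bytes after the length field
        packets.append((ptype, stream[offset:end]))
        offset = end                         # next packet starts immediately (no gaps)
    return packets

# Two toy packets packed back to back: a "type 1" packet with four payload bytes
# followed by a short "type 9" filler-style packet.
stream = bytes([6, 0, 1, 0, 0xAA, 0xBB, 0xCC, 0xDD]) + bytes([4, 0, 9, 0, 0, 0])
for ptype, raw in split_packets(stream):
    print("type", ptype, "bytes", raw.hex())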
10.5.7 Link Hibernation
The ability of the MDDI link to enter the hibernation state and wake up quickly allows the system to place the link into hibernation frequently to minimize power consumption. A high-level view of the process of entering hibernation, remaining in the hibernation state, and waking up from hibernation is shown in Figure 10.24.
Figure 10.24 MDDI hibernation process.
The link enters the hibernation state in response to the host sending a link shutdown packet to the client. This packet acts as a signal to both the client and host link controllers to enter a special mode where they prepare to halt packet transfers and place the PHY circuits into a low-power hibernation state. The host continues to send 64 pulses on the MDDI_Stb signal pair following the CRC of the link shutdown packet to give the client an opportunity to empty any pipelined data prior to the event that stops all link activity.
During the hibernation state, the MDDI_Data and MDDI_Stb differential drivers are disabled (placed in the high-impedance state) and the differential voltage across all differential pairs is pulled to zero volts by the differential line termination. The high-speed differential receivers used to receive packet data are also disabled during the hibernation state. Disabling both the drivers and receivers significantly reduces system current consumption. The host and client each have a low-speed, low-current differential line receiver, sometimes called the hibernation receiver, which has an intentional offset built into its threshold voltage. These hibernation receivers are only used during the wake-up process to detect the wake-up pulses generated by the host or client. In these receivers, the differential voltage threshold between a logic-one and a logic-zero level is approximately +125 mV. This causes an un-driven differential pair to be seen as a logic-zero level during the link wake-up sequence. Of course, a normal 250 mV to 450 mV logic-one level driven by the differential driver is still interpreted as a logic-one level by the hibernation receivers having the +125 mV offset. An example of the behavior of the special low-power hibernation differential receiver with the +125 mV offset, compared to a standard differential receiver with zero input offset voltage, is shown in Figure 10.25. The waveform is not representative of a typical MDDI signal, but it is helpful for the purpose of illustrating the difference between the two types of differential receivers.
Figure 10.25 Example of thresholds of standard and hibernation receivers (image courtesy of VESA).
Either the host or client may wake up the link from the hibernation state by sending a pulse on the MDDI_Data0 pair. The pulse is detected using the low-speed differential hibernation receivers that consume only a tiny amount of current compared to the high-speed differential receivers that receive the high-speed data and strobe signals. Since either the host or client can wake up the link, the wake-up protocol must be able to handle possible contention that can occur if both the host and client attempt to wake up simultaneously. The host-initiated wake-up from the hibernation state, illustrated in Figure 10.26, is performed by the host enabling its MDDI_Data0 and MDDI_Stb differential driver outputs while driving MDDI_Data0 to a logic-one level and MDDI_Stb to a logic-zero level. The client detects the wake-up pulse with its low-power differential hibernation receiver, and enables its high-speed differential receivers and internal logic. Then the host continues the wake-up process by outputting a stream of pulses on MDDI_Stb. The host holds MDDI_Data0 in the logic-one level for a duration of 150 MDDI_Stb pulses, and then it drives MDDI_Data0 to a logic-zero level for 50 MDDI_Stb pulses.

Figure 10.26 Example of a host-initiated wake-up from hibernation (image courtesy of VESA).

Figure 10.27 Example of a client-initiated wake-up from hibernation (image courtesy of VESA).

During this
start-up period, lasting for 200 MDDI_Stb pulses, the host simply toggles MDDI_Stb as if it were a clock signal; it does not obey the data-strobe encoding rule during this time. Next, the host begins to transmit data on the forward link by sending a sub-frame header packet. The packet ID and unique word fields of the sub-frame header packet serve as a special time marker to the client so it can synchronize itself to the necessary timing boundaries it must have for proper packet reception. The host transmits normal forward link traffic following the sub-frame header packet. The client may also initiate the wake-up sequence in a similar manner by driving the MDDI_Data0 pair to a logic-one state. The client will also enable its MDDI_Stb receiver and enable a temporary voltage offset at the input of the MDDI_Stb receiver. This guarantees that the state of the received version of MDDI_Stb is interpreted as a logic-zero level in the client before the host can enable its MDDI_Stb driver. Within 1 msec, the host recognizes the wake-up pulse and, from the host’s perspective, the normal link restart sequence continues. The host enables the MDDI_Data0 and MDDI_Stb driver outputs, and drives MDDI_Data0 to a logic-one state for a duration of 150 MDDI_Stb pulses. When the client recognizes the presence of pulses on MDDI_Stb, it disables the offset in its MDDI_Stb receiver. The client continues to drive MDDI_Data0 to a logic-one level for 70 MDDI_Stb pulses, and then the client disables its MDDI_Data0 driver by placing the driver output into a high-impedance state. The host continues to drive MDDI_Data0 to a logic-one level for a duration of 80 additional MDDI_Stb pulses, after which the wake-up process proceeds in the same manner as for a host-initiated wake-up. If the host and client both initiate the wake-up from hibernation simultaneously, this will resolve itself in exactly the same manner as a client-initiated wake-up.
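For modeling purposes, the pulse counts in the wake-up sequences above can be captured as plain data. The sketch below simply restates the counts from the text (150 high plus 50 low strobe pulses for a host-initiated wake-up; 70 client-driven plus 80 host-driven high pulses, then 50 low, for a client-initiated one); it is an illustrative summary, not a timing-accurate model of the standard.

# The pulse counts from the wake-up descriptions above, captured as data so a simple
# link-controller model could step through them. These values restate the text; they
# are not a timing-accurate model of the VESA standard.

WAKEUP_STEPS = {
    "host_initiated": [
        ("host drives MDDI_Data0 high while toggling MDDI_Stb", 150),
        ("host drives MDDI_Data0 low while toggling MDDI_Stb", 50),
    ],
    "client_initiated": [
        ("client and host both drive MDDI_Data0 high once MDDI_Stb pulses appear", 70),
        ("host alone continues to drive MDDI_Data0 high", 80),
        ("host drives MDDI_Data0 low while toggling MDDI_Stb", 50),
    ],
}

def startup_pulses(kind):
    """Total MDDI_Stb pulses in the start-up period before normal packets resume."""
    return sum(pulses for _, pulses in WAKEUP_STEPS[kind])

for kind in WAKEUP_STEPS:
    print(kind, "->", startup_pulses(kind), "MDDI_Stb pulses")   # both total 200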
10.5.8 The Reverse Link
The sending of MDDI data in the forward or reverse direction is called the forward link or reverse link, respectively. On the reverse link, the client can pass a wide variety of information back to the host. This allows the host to process the data and potentially adjust the operational mode to accommodate any necessary changes. It also allows for connectivity within the phone when 'generators' of data are co-located with the client.
During reverse link transmission, the MDDI host drivers are disabled on the MDDI link and the client drivers are enabled. Upon completion of this transmission period, the client drivers are disabled and the host drivers are enabled. These transition periods overlap to ensure that the data lines always have a driver. This prevents unpredictable activity on the data pairs, and unpredictable outputs of the
differential receivers monitoring the pairs. While the link is changing direction, and while reverse data is being sent by the client, the MDDI host will continue to toggle the strobe signal as if the host data pairs were transmitting a constant logic zero. The client will use only the strobe signal as a clock source during these unique periods when it is driving the data pairs. In contrast, the client uses MDDI_Data0 XOR MDDI_Stb as a clock source to receive forward link data, based on the standard data-strobe encoding rules. Using only strobe as a clock allows the client to maintain a clock source derived from the link while it is driving the data pairs.
The MDDI reverse link is designed to operate at a slower rate than the forward link. The primary reason for this is to simplify the implementation of the reverse data capture circuit in the host. By running the reverse link at a slower rate, the host does not require a special high-speed clock source to over-sample the incoming data. Although reverse data coming from the client is generated with a clock source supplied by the client, it is ultimately clocked based on the strobe signal sent by the host, which travels through many delays of transmission circuitry in the host, client, and interconnecting cables. Therefore, the relationship of transitions in the reverse data received at the host will be unknown with respect to the host clock source that is used to sample the reverse link data. Due to this uncertain timing relationship between the reverse data and the host clock, a clock source faster than the reverse data is required.
The first step to enable reverse data transmission is to have the host transmit the round-trip delay measurement packet. This packet is used to measure the delay within the link, both in the physical transmission lines, drivers, and receivers, and in the processing logic within the client. By determining this delay, the host can know how many clock cycles will pass between the time when the host ideally expects a response and when one will actually arrive. Knowing this delay allows the host to predict when the first reverse data bit will arrive from the client, and allows the host to determine the ideal sampling point to capture subsequent reverse data bits.

Figure 10.28 Example of the round-trip delay measurement (image courtesy of VESA).

Figure 10.28 shows the data pairs as controlled by the host and client,
followed by the aggregate view of the data pair at the host. In a system with no round-trip delay, we would expect the pulse received from the client to arrive right at the beginning of the measurement period. The amount of time from the start of the measurement period at the host until the pulse is received by the host is the round trip delay measurement.
Figure 10.29 Reverse link encapsulation packet (image courtesy of VESA).

Reverse link data is transmitted in the middle of a packet called the reverse link encapsulation packet, shown in Figure 10.29. This packet can request specific status from the client, and provide an open pipe
for data transmission from the client back to the host. The header portion of the packet specifies the reverse rate divisor, used by the client to reduce the rate of the reverse link data so it can be reliably sampled based on the host architecture. The amount of space allocated for the reverse data transmission is determined by the packet length field and the reverse rate divisor. Specifically, the amount of reverse data the client can send is the number of forward link bytes of the reverse data portion divided by twice the reverse rate divisor. Additional all zero and turn-around fields allow drivers to be enabled and disabled, and provide an overlapping period to ensure that the data pairs are nearly always driven.
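The capacity relationship stated above can be captured in a small helper, shown below as a hedged sketch; the parameter names are illustrative and the calculation simply restates the text.

# Hedged helper applying the relationship stated above: the client can return
# (reverse-data field length in forward-link bytes) / (2 * reverse rate divisor)
# bytes during one reverse link encapsulation packet. Names are illustrative.

def reverse_bytes_per_encapsulation(reverse_field_bytes, reverse_rate_divisor):
    """Bytes the client can send in the reverse-data portion of one packet."""
    return reverse_field_bytes // (2 * reverse_rate_divisor)

# Example: a 512-byte reverse-data portion with a divisor of 4 yields 64 client bytes.
print(reverse_bytes_per_encapsulation(512, 4))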
Figure 10.30 Basic reverse data sampling example (image courtesy of VESA).

Figure 10.31 Advanced reverse data sampling example (image courtesy of VESA).
There are two possible methods to sample reverse data: basic and advanced. The basic sampling method calculates the reverse link divisor so that each bit time is greater than or equal to the round trip delay time. This allows one reverse link bit to be sampled each round trip delay period. Using this method, the host can reliably sample in the middle of a large reverse link data bit on the rising clock edge of a divided fraction of the forward link data rate. Additional processing delay in the client, or longer delay in the drivers, receivers and cable, will affect the maximum achievable reverse link data rate. The advanced reverse link sampling method establishes the offset via the round trip delay measurement packet. Once this delay is measured, which determines the offset to the first reverse data bit, each subsequent reverse link data bit is known to arrive within a specific number of forward link clock periods, depending on the value of the reverse rate divisor. For example, if a reverse link is a quarter of the rate of the forward link, the sample point for the second bit would be 4 bit times later.
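For the advanced method, the sample schedule follows directly from the measured round-trip offset and the reverse rate divisor. The sketch below illustrates that arithmetic with hypothetical numbers; the offset value is invented for the example, and the real procedure is defined by the standard.

# Hedged sketch of the advanced reverse sampling idea described above: the first
# reverse bit is sampled at the offset found by the round-trip delay measurement,
# and each later bit arrives a fixed number of forward-link bit times afterwards.

def reverse_sample_points(round_trip_offset_bits, reverse_rate_divisor, num_bits):
    """Forward-link bit times at which reverse bits 0..num_bits-1 are sampled."""
    return [round_trip_offset_bits + i * reverse_rate_divisor for i in range(num_bits)]

# Divisor of 4 (reverse link at a quarter of the forward rate): successive reverse
# bits land 4 forward-link bit times apart, starting at the (hypothetical) offset.
print(reverse_sample_points(round_trip_offset_bits=13, reverse_rate_divisor=4, num_bits=5))
# -> [13, 17, 21, 25, 29]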
10.5.9 Link Budget
The maximum speed at which an MDDI link can operate is determined by imperfections in the transmission of data and strobe from the host to the client. One of these factors is the unintentional delay skew between the data and strobe paths. The effect of this skew is illustrated in Figure 10.32, below.
Figure 10.32 Effects of skew between data and strobe (image courtesy of VESA).
When data is early and strobe is late, as viewed at the output of the differential receivers at the client, there is reduced data input set-up time at the data capture circuit in the client. Other factors that affect the link budget are:
- delay asymmetry, i.e., the one-to-zero and zero-to-one transitions do not occur at the same speed;
- clock jitter in the system clock at the host that is used to generate the transmitted MDDI waveform;
- data set-up time of the data capture flip-flops in the client;
- distortion of the signal waveform through the interconnecting cable between the host and client;
- slow rise and fall times of the data and strobe signals combined with the input offset voltage of the differential receiver circuitry.
All of these factors can shorten (or lengthen) the set-up time that is highlighted in Figure 10.32.
Figure 10.33 shows a simple transmitter and receiver circuit and an example of skew and asymmetry at each stage of an MDDI Type 1 forward link. To simplify the following explanation, each stage is named with an abbreviation that represents the function of the corresponding stage (e.g. TXFF, TXDRVR, CABLE, RXRCVR, RXXOR, and RXFF; the abbreviations are explained in the note following Equation 10.2).

Figure 10.33 Components of the flexible link budget equation.

Skew in the delay between MDDI_Stb and MDDI_Data0 causes the duty cycle of the output clock to be distorted, as illustrated in Figure 10.32. In some cases this causes the set-up time to the RXFF stage to be very small. Another consideration is that data at the D input of the RXFF stage must change after the clock edge so it can be sampled reliably. To accomplish this, a delay element is inserted in the data path to compensate for the delay in the clock path in the RXXOR stage. The maximum data rate (minimum bit period) of an MDDI Type 1 link is a function of the maximum skew and asymmetry encountered through all the drivers, cable, receivers, and data-recovery
logic, plus the host clock jitter and the total data setup time into the RXFF stage. In this situation the data path is as late as possible and the clock path is the earliest. Rather than specify the limits of each parameter independently, specific fractions of the total link timing budget have been allocated to the host, client, and interconnect subsystem between the host and client. This concept is known as a flexible link budget. Equations that are a function of the link data rate specify the sum of specific parameters that are allocated individually to the host and client. The requirements of the host and client budgets for internal mode are shown in Equation 10.1 and Equation 10.2, respectively.

t_Asymmetry(TXFF) + t_Asymmetry(TXDRVR) + t_Skew(TXFF) + t_Skew(TXDRVR) + t_jitter(host) ≤ 0.415 × (t_BIT − 150 ps)
Equation 10.1 Host internal mode flexible link budget equation.

t_Asymmetry(RXRCVR) + t_Asymmetry(RXXOR) + t_Skew(RXRCVR) + t_Skew(RXXOR) + t_setup(RXFF) ≤ 0.439 × (t_BIT − 150 ps)
Equation 10.2 Client internal mode flexible link budget equation.

Note: The abbreviations of the stages in Figure 10.33 have the following meanings: TXFF – transmit-side flip-flop in the host; TXDRVR – transmit-side differential driver in the host; CABLE – the cable, flexible printed circuit, or rigid printed circuit that connects the host to the client; RXRCVR – receive-side differential receiver in the client; RXXOR – receive-side exclusive-OR gate in the client; and RXFF – receive-side data sampling flip-flop in the client.
The remainder of the budget is allocated to the interconnect subsystem, which includes the skew and distortion in the interconnecting cable, and the rise and fall time of the data and strobe signals combined with the input offset voltage of the differential receiver circuitry. The considerations for implementing the data-recovery circuit illustrated in Figure 10.33 are different than those required to implement a synchronous data capture circuit, commonly used for a conventional clock-data communication link. The benefits of these additional considerations are easily justified by the two-fold skew tolerance increase realized by data-strobe encoding, compared to a data-clock approach. Implementing logic to define the critical paths through certain elements of the data-recovery circuit is easy to achieve because these path delays are relative to other delays in the same circuit. It is common for this data recovery circuit to be implemented using custom logic that is co-located with the differential receivers in the pad ring.
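A small helper can evaluate the right-hand sides of Equations 10.1 and 10.2 for a given data rate and check a set of measured host parameters against the host allocation. The sketch below is illustrative only, and the example parameter values are invented.

# Hedged helper that evaluates the internal-mode flexible link budget allocations
# (Equations 10.1 and 10.2 above) for a given link rate. Example values are made up.

def internal_mode_budgets_ps(data_rate_mbps):
    """Return (host_budget_ps, client_budget_ps) for an internal-mode link."""
    t_bit_ps = 1.0e6 / data_rate_mbps           # bit period in picoseconds
    host_budget = 0.415 * (t_bit_ps - 150.0)    # Equation 10.1 right-hand side
    client_budget = 0.439 * (t_bit_ps - 150.0)  # Equation 10.2 right-hand side
    return host_budget, client_budget

def host_meets_budget(asym_txff, asym_txdrvr, skew_txff, skew_txdrvr, jitter_host,
                      data_rate_mbps):
    """True if the sum of host imperfections (all in ps) fits within Equation 10.1."""
    host_budget, _ = internal_mode_budgets_ps(data_rate_mbps)
    return (asym_txff + asym_txdrvr + skew_txff + skew_txdrvr + jitter_host) <= host_budget

# At 400 Mbps, t_BIT = 2500 ps, so the host allocation is 0.415 * 2350 ps ~= 975 ps.
print(internal_mode_budgets_ps(400))
print(host_meets_budget(200, 250, 150, 200, 100, data_rate_mbps=400))   # example values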
Figure 10.34 Forward link skew calibration packet (image courtesy of VESA).
10.5.10 Link Skew Calibration
The greatest limitation to the MDDI link speed is skew introduced between the data and strobe signals. As we saw in Figure 10.32, skew introduced into the MDDI link affects the recovered clock source and, in turn, affects the ability to reliably sample the received data in the data recovery circuit. Under extreme skew conditions, setup and hold time violations may occur in the registers used to sample forward link data in the client. As described in section 10.5.9, the data-late, strobe-early condition illustrated in Figure 10.32 presents the most challenging timing margin to the data capture circuit. In this situation, an edge of the recovered clock can occur very shortly after a data transition, which can result in a failed capture of an MDDI data bit in the sample register.
The data-strobe encoding technique used in an MDDI link provides a relatively wide tolerance for skew between the data and strobe signals because only one of the two signals may change at any bit boundary. Data sampling triggered by a transition on the data pair is not at risk because it is self-clocked; the only risk is a clock edge generated by a transition on the strobe pair that follows a previous transition on the data pair. The margin for the skew is therefore the width of a bit time minus the setup time of the register itself. This skew tolerance is nearly the width of a bit time, depending on the MDDI data rate. The maximum configurable data rate is based on the maximum possible skew in the link that still meets the setup and hold times of the capturing register.
This relaxed constraint is valid only for MDDI Type 1. For MDDI Type 2 systems, the transitions on the second data pair can occur on a clock edge generated by either data or strobe, so it is critical that the second pair is aligned to both. To achieve this, the data and strobe lines must first be aligned with each other, and then the second data pair can be aligned to them. After finding the alignment, the second, or any additional, data pairs must be shifted away from a perfect alignment. This shift ensures that transitions on the additional data pairs occur far enough away from the recovered clock edge to allow the data to be sampled and meet register setup and hold times. To facilitate this ideal alignment, variable delays are co-located with the client differential receiver circuitry, before the clock and data recovery circuit, so any skew introduced prior to receipt in the client can be removed. After this adjustment, the outputs of the compensated signals are fed to the clock and data recovery circuit.
The MDDI host sends a special packet called the forward link skew calibration packet to the client so it can adjust the variable delays and find the perfect alignment. During the calibration data sequence field, the MDDI host sends an identical alternating one-zero pattern on the data and strobe pairs. When this field is being received, the client uses only strobe as the clock source. The MDDI client adjusts the variable delay on the data and/or strobe signals to find a
point where the sampled data changes. This means that the transitions on the data signals have passed over a sample point. This transition location is the ideal alignment for the strobe and MDDI_Data0 signal. Next, the delays of the remaining data signals are adjusted relative to the strobe signal to find their appropriate offset. Extra delay is added to this offset to accommodate the hold time required for the data sampling register. At the end of the calibration sequence, the host resumes the data-strobe encoding and proceeds with the next packet.

Figure 10.35 Example waveforms of the forward link skew calibration packet (image courtesy of VESA).
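The search performed by the client during the calibration data sequence can be pictured as a simple sweep of the programmable delay until the sampled value flips. The sketch below is a deliberately simplified model of that search – the 'sampler' is a stand-in function and the step at which the edge sits is invented – not the calibration hardware defined by the standard.

# Hedged sketch of the calibration search described above: step the variable delay
# until the value sampled from the data line changes, which marks the data edge,
# then add a margin for the sampling register's hold time.

def find_transition_delay(sample_data_at, delay_steps):
    """Step the programmable delay and return the first step where the sampled
    data value changes; None if no transition is observed."""
    previous = sample_data_at(0)
    for step in range(1, delay_steps):
        current = sample_data_at(step)
        if current != previous:
            return step                      # the data edge has crossed the sample point
        previous = current
    return None

# Toy stand-in for the hardware sampler: pretend the data edge sits at delay step 17.
edge_step = 17
sampled = lambda step: 0 if step < edge_step else 1

alignment = find_transition_delay(sampled, delay_steps=64)
hold_margin = 3                              # extra delay for register hold time (illustrative)
print("transition at step", alignment, "-> use delay", alignment + hold_margin)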
10.5.11 Display Synchronization
Synchronization between the transmission of the video data stream and the active refresh operation of a display controller allows systems to be built that require only a single frame buffer. When video data transmission is not properly synchronized with the display controller's display refresh, a phenomenon known as 'tearing' occurs. Tearing occurs when a single frame refresh of a display device contains portions of two or more images. For example, in Figure 10.36 the first image displayed on the screen is a star, and the next image to be displayed is a square. If tearing occurred, the screen would display the top of the square and the bottom of the star. This image would appear only for the duration of a single frame refresh time, but the human eye is capable of detecting the corrupted image, which detracts from the quality of the playback.
To prevent image tearing, the MDDI host must be aware of the current location of the read pointer inside the display controller. This read pointer advances sequentially through the display frame buffer, reading pixels and updating them on the screen. By sending the read pointer value back to the host, the host can synchronize the transfer of video data to the client without causing a tearing condition. Figure 10.37 shows this in more detail: it depicts a frame buffer at three points in time. The display controller is sequentially reading the frame buffer from top to bottom and refreshing the display with the contents of the buffer.
Figure 10.36 Example of image tearing.
In the first diagram, the read pointer has advanced one third of the way through a frame refresh operation. The display area in the buffer that is behind the path of the read pointer (line 1 to line m-1) can be updated safely; the area in front of the read pointer is not safe to update. In the second diagram, the read pointer has refreshed almost the entire frame. The portion of the buffer immediately behind the read pointer is safe to update, and the portion at the beginning of the frame buffer has already been updated; the read pointer is about to wrap around to the beginning of the buffer. In the third diagram, the read pointer has wrapped around from the end to the beginning of the buffer. The middle of the frame buffer has already been updated, and the area at the end is safe to update.
Figure 10.37 Display frame buffer pointer management.
Figure 10.38 illustrates the progression of the read pointer superimposed with the movement of the write pointer from the beginning to the end of the frame buffer through time. The first two examples result in no image corruption, and the last one causes tearing on the screen. This example illustrates some important behaviors regarding the relative movement of buffer pointers. The first is that tearing occurs when the write pointer crosses the read pointer. The second is that the buffer can be updated at half the rate required to refresh the image on the display panel. For example, when using a panel that is refreshing at 60 Hz, the MDDI link can transmit data at 30 frames per second to update the buffer without introducing tearing on the display.
To avoid image tearing, the location of the read pointer must be communicated back to the MDDI host. This is accomplished over the MDDI link by using VSYNC-based wake from hibernation. In this method, the MDDI host programs the MDDI client to wake up the MDDI link from hibernation at the next VSYNC boundary. The host then places the link into the hibernation state by sending a link shutdown packet. At the next VSYNC boundary inside the display controller, when the read pointer rolls over from the end to the beginning of the frame buffer, the MDDI client co-located with the display controller drives the data lines to wake the MDDI host from the hibernation state. The MDDI host then re-initializes its internal timing controls to align itself to the new display controller timing, and subsequently transmits pixel data over the MDDI link to the display buffer behind the read pointer.
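The safe-update rule described above – the write pointer must never cross the read pointer – can be expressed as a simple check on a circular buffer of display lines. The sketch below is a static simplification (it ignores the fact that both pointers keep advancing during the update) offered purely for illustration.

# Hedged sketch of the pointer rule stated above: an update is safe as long as the
# write pointer never crosses the refresh read pointer. The frame buffer is treated
# as a ring of display lines; this check is deliberately simplified.

def update_is_safe(read_line, write_start, num_lines, total_lines):
    """True if writing `num_lines` lines starting at `write_start` never crosses
    the read pointer at `read_line` in a circular frame buffer."""
    for i in range(num_lines):
        line = (write_start + i) % total_lines
        if line == read_line:
            return False                     # write pointer would cross the read pointer
    return True

TOTAL = 480                                  # e.g. a 480-line panel
# Read pointer one third of the way down the frame (as in the first diagram above):
print(update_is_safe(read_line=160, write_start=0, num_lines=150, total_lines=TOTAL))   # True
print(update_is_safe(read_line=160, write_start=100, num_lines=100, total_lines=TOTAL)) # False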
Figure 10.38 Progression of read and write pointers to avoid image tearing.
10.6 Conclusion
To meet the demands of today's sophisticated mobile device capabilities, MDDI offers significant advantages compared to alternative solutions, and it is an established, VESA-standard, high-speed mobile display interface. Handset providers looking to reduce cost and power consumption to better serve the needs of their customers have successfully applied MDDI in released production handsets. MDDI is an unparalleled solution to the baseband-processor-to-display interconnect problem, minimizing the number of connections and efficiently addressing the requirements of all the key functions related to the display and camera sub-systems in mobile devices.
References
[1] 'VESA Mobile Display Digital Interface Standard', Version 1.0, editor: Wiley, G., pp 1–172, published by Video Electronics Standards Association (VESA), Milpitas, CA (July 2004).
[2] 'VESA Mobile Display Digital Interface Standard', Version 1.1, editor: Wiley, G., pp 1–171, published by Video Electronics Standards Association (VESA), Milpitas, CA (May 2007).
[3] Wiley, G. and Steele, B. (2003) 'MDDI, A Low-Power Display Interface for Portables', Proc. Display Interfaces, Session 5 – New Interfaces and Applications, Video Electronics Standards Association (VESA), Milpitas, CA.
[4] Wiley, G. (2004) 'PC/TV Convergence: Devices That Fit in Your Pocket', Proc. Display Interfaces 2004 Symposium, Session 7, Video Electronics Standards Association (VESA), Milpitas, CA.
11 MIPI High-Speed Serial Interface Standard for Mobile Displays
Richard Lawrence
Intel Corporation, Santa Clara, California, USA (Retired)
11.1 Introduction
In the course of the last decade, interfaces in desktop computers have undergone a revolution. Connections to hard drives, add-in cards, and external peripherals have evolved from wide parallel buses, with many slow signals, to high-speed serial buses – as demonstrated by SATA, PCI-Express, and USB 2.0, respectively. Similarly, internal interfaces in handheld products like PDAs and cell phones began their own evolution several years later. The reasons for migrating interfaces from slow parallel buses to high-speed serial buses are the same: reduced pin count on processors and peripherals, lower power consumption, lower product cost, and better EMI (electromagnetic interference) characteristics. Advances in circuit design, chip fabrication techniques, and interface protocol standards are now bringing the same benefits to handheld systems that revolutionized interfaces for desktop and laptop systems.
11.1.1 Motivation for New Standards
A variety of proposals for new interfaces have surfaced in the consumer electronics industry, with many focused on the processor-to-display connection. Some were developed by small consortia of companies attempting to become industry standards; others were proprietary and covered by patents and similar IP protection. The industry was threatened with fragmentation, which could prevent any proposed new standard from gaining acceptance or 'critical mass' – negating one of the primary goals of standardization. In late 2003, a group of four companies founded MIPI – the Mobile Industry Processor Interface Alliance – to develop new interface standards that would be widely accepted across the industry, accommodate today's requirements, and lay a foundation for future improvements. Since then, MIPI has grown to around 150 companies representing all aspects of handheld product design, development, and manufacturing. Their representatives have organized active working groups developing an integrated set of standards for the mobile-terminal industry.
11.1.2 Display Architectures and DSI Goals
Working groups within MIPI focus on particular interfaces in a handheld platform. In early 2004, the Display Working Group (DWG) was formed to understand and evaluate requirements for processor-to-display interfaces and to develop new standards for the industry. The main goal of the DWG was to develop a new high-speed serial interface standard which could:
1. functionally replace traditional parallel and slower serial Complementary Metal Oxide Semiconductor (CMOS) display interfaces;
2. take advantage of new physical-layer developments which minimize pin count and power consumption, and provide robust protection against transmission errors;
3. meet mass-production goals so that products using the new interface would cost no more to manufacture than equivalent earlier products;
4. incorporate forward-looking technology to provide (with upgrades) a useful lifetime of 5 to 10 years.
For the last requirement, this meant the high-speed serial display standard must support LCD panels of at least XGA (1024 × 768 pixels) resolution, with 16, 18, or 24 bits per pixel. The standard was also required to support two different architectures for display panels – video-mode and command-mode architectures.
In video-mode architecture, there is no frame buffer on the display panel, and the host processor must transmit pixel data to the LCD panel, through the interface, at full display-refresh bandwidth – 60 frames per second or faster – to maintain a viewable image on the display. This requirement puts maximum stress on interface bandwidth capabilities. The MIPI specification DPI-2 (Display Pixel Interface, version 2) is a good example of a traditional parallel interface for video-mode display panels. DSI must have all the features and performance required to support this type of display interface functionality. Figure 11.1, below, shows a typical video-mode configuration.

Figure 11.1 A typical video-mode configuration: the host processor holds the color frame buffer and streams display-refresh data (60 fps) over the bus interface to the display driver on the video-mode display panel.

In command-mode architecture, there is a local display controller with its own frame buffer on the display panel. This controller manages display refresh from its local frame buffer, without help from the host processor. With this architecture, the host processor sends commands to the display, or sends new pixels to the display, only when it wants to change the viewable image. The local display controller executes the received command to change the image, or transfers new pixels from the interface to its local frame buffer. The MIPI standard DBI-2 (Display Bus Interface, version 2) is a good example of a specification for command-mode display architectures, using a traditional parallel
bus between host processor and display panel. DSI must include all the features and performance required to support command-mode display architectures, including real-time (30 fps) video update. In parallel to development of a high-speed serial standard for displays, MIPI DWG also created a specification for command sets used for controlling operation of systems having command-mode display architectures – the MIPI Display Command Set (DCS). Figure 11.2, below, shows a typical command-mode display configuration.
Figure 11.2 A typical command-mode display configuration: the host processor sends commands and image update data over the bus interface to the display controller on the command-mode display panel, which refreshes the screen from its own color frame buffer.
Some display modules that use video mode in normal operation also make use of a simplified form of command mode, to save power in low-power state. These display modules can shut down the streaming-video interface and continue to refresh the screen from a small local frame buffer, at reduced resolution and pixel depth. The small local frame buffer is pre-loaded with image information which is then shown on the screen while the system is in low-power mode, and the interface may be shut down to save power. These displays can switch between power modes in response to power-control commands.
11.2 Scope of MIPI DSI Specification
MIPI standards are deliberately constrained in scope. At the physical layer, they specify electrical requirements and timing in complete detail. There are some mechanical constraints, such as the
maximum length of the bus. However, MIPI standards do not mandate mechanical detail, such as the size or position of connectors, or the physical order of signals at the interface. Such detail is left for implementers to decide. Similarly, at a higher level, MIPI standards like DSI (Display Serial Interface) do not specify requirements that must be managed at a software level, such as register locations and bit field descriptions, memory or buffer sizes, or memory addresses in the host processor’s memory map. MIPI standards build in sufficient flexibility to accommodate a variety of system architectures, while not constraining them to a particular implementation.
11.3 DSI Layers
The MIPI specification for a serial high-speed interface to displays is called DSI (Display Serial Interface). DSI itself is primarily a protocol-level specification; it builds on several other MIPI documents to collectively form a 3-layer or 4-layer specification stack.
Figure 11.3 Stack relationship of MIPI DSI: the application layer (the DCS command set and DSI pixel formats) sits on the DSI protocol layer, which passes 8-bit data and control through the optional lane management layer to the D-PHY physical layer driving N serial lanes [1].
Figure 11.3 shows the relationship of DSI to its lowest-layer specification, MIPI D-PHY. At the top (application) layer, DSI specifies a standardized set of pixel formats for video-mode displays. For command-mode displays with on-panel controllers, a related specification – DCS (Display Command Set) – documents a standardized group of commands which are recognized and interpreted by the controllers. The Lane Management layer is optional, only applying to systems having multiple data
paths (lanes). For single-lane designs, lane management ‘collapses’ to a null-layer, and is not part of the DSI protocol stack.
11.3.1 Physical Layer Specification
The foundation of a complete DSI implementation is D-PHY, a physical layer design intended to support display interfaces as well as promote reuse with other MIPI protocol specifications. MIPI's PHY Working Group developed the D-PHY specification in parallel with DSI. D-PHY documents electrical, low-level timing, and low-level protocol requirements [5]. D-PHY uses a source-synchronous protocol and specifies, at minimum, a pair of signals – one clock, and one serial data – each implemented as a high-speed differential signal with low swing, terminated at the receiver. When no data is being transmitted, both transmitter and receiver may be shut off to conserve power. High-speed (HS) operation of D-PHY is based on scalable low voltage signaling (SLVS), a voltage-mode differential technology. Additionally, a low-power (LP) mode of signaling is available in which the interface pins function as traditional LV-CMOS signals with 1.2 V swing. These features enable DSI to operate using a fraction of the power of traditional parallel display interfaces. The MIPI D-PHY specification documents electrical and timing requirements for both modes of operation.
D-PHY enables serial data transmission at rates of 80–1000 Mbits/s per lane, depending on transmitter and receiver capability, length and matching of conductors, and signal loading. Within the PHY layer, serial data is transmitted or received, and converted between parallel 8-bit data format and serial data format. D-PHY specifies the fundamental unit of data as one byte; only complete bytes traverse the internal interface between the PHY and protocol layers. All forms of information which traditionally pass from host processor to display over wide parallel buses are serialized and converted to byte format. From a system point of view, serialization and deserialization should be transparent. However, one unavoidable consequence is an increase in latency for transactions requiring a response from the display, such as a Memory Read operation to on-display memory. Another significant difference is the lack of flow control – for example, the host processor cannot throttle the rate or size of returning data during a read transaction. Other mechanisms must be substituted to manage returning data. D-PHY also specifies low-level protocols and signaling behaviors for starting and ending high-speed (HS) transmissions, for Bus Turn-Around (BTA), and for sending messages such as Reset.
11.3.2 Multi-Lane Operation
For higher bandwidth, DSI permits lane scalability (each differential data signal is a 'lane'), expanding the data path to 2, 3, or 4 lanes wide, which share a common clock signal. A multi-lane implementation simply incorporates multiple D-PHY functional blocks operating in parallel. The lane management layer has a 'distributor' function in the transmitting unit, handing out bytes to the various paralleled PHY blocks; on the receiving end, it collects bytes in parallel from the receiving PHY units, orders them correctly into a single byte stream, and passes the bytes to the DSI protocol layer.
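A minimal sketch of the distributor/collector idea is shown below. The round-robin byte-to-lane ordering used here is an assumption made purely for illustration; the DSI specification defines the actual lane distribution rule.

# Hedged sketch of lane management: deal bytes out across N lanes on transmit and
# re-interleave the per-lane streams into one byte stream on receive. The round-robin
# ordering is an illustrative assumption, not the distribution rule from the spec.

def distribute(payload: bytes, num_lanes: int):
    """Deal bytes out to num_lanes per-lane byte lists."""
    lanes = [bytearray() for _ in range(num_lanes)]
    for i, b in enumerate(payload):
        lanes[i % num_lanes].append(b)
    return lanes

def merge(lanes):
    """Re-interleave per-lane byte streams back into a single byte stream."""
    out = bytearray()
    longest = max(len(lane) for lane in lanes)
    for i in range(longest):
        for lane in lanes:
            if i < len(lane):
                out.append(lane[i])
    return bytes(out)

packet = bytes(range(10))
lanes = distribute(packet, 4)
assert merge(lanes) == packet                # the collector restores the original order
print([lane.hex() for lane in lanes])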
11.3.3 Bidirectional Operation with DSI
DSI and its underlying D-PHY specification enable bidirectional transmission. Bidirectionality is optional for video-mode displays – where normally all data is transmitted in the direction from host processor to display. Command-mode displays require bidirectionality, so the host processor can read memory or status information from the local controller on the display panel.
Reverse-direction transmissions (from display to host processor) have low bandwidth requirements. DSI therefore requires that all reverse-direction transmissions only use a single lane, and that they only use low-power transmission mode. This simplifies both host-processor and display design, and ensures that such transmissions consume a minimum of power. Bandwidth is limited to a maximum of 10 Mbits/s in low-power mode. Changing bus direction is effectively managed at the PHY level. Bus control must be given by the existing transmitter to the receiver. A unique code (bus turn-around) is sent in low-power mode from transmitter to receiver first, followed by the two controllers executing a sequence of states, and ending with the original receiver driving the bus, and the original transmitter returning to ‘receive’ mode.
11.4 DSI Protocol
11.4.1 Packet Transmission
DSI is a packet-based protocol. A packet is composed of multiple bytes with a fixed organization. During high-speed transmission, one or more packets are sent from processor to display. Between transmissions, the bus reverts to the low-power state to reduce power consumption. The transmitter may send one packet per transmission, or it may concatenate multiple packets into one transmission. Concatenating multiple packets into one transmission improves bandwidth by spreading the overhead of mode transitions (from low-power to high-speed and vice versa) across more data. Note that no special codes are required to indicate the beginning or end of a packet, so packet payloads may contain arbitrary data. One consequence of this freedom is the requirement for each packet to indicate its length, so the receiver knows where each packet ends and the next one begins. Figure 11.4, below, illustrates high-speed transmission of multiple packets.

Figure 11.4 Transmission of packets, showing short packets (SP) and a long packet (LgP) sent either as separate transmissions or concatenated into a single transmission; each transmission begins with a Start of Transmission (SoT) sequence, ends with an End of Transmission (EoT) sequence, and the link returns to the Low Power State (LPS) in between. Reproduced from [1] by permission of MIPI.

Before an HS transmission begins, the transmitter PHY issues an SoT (Start of Transmission) sequence to the receiver. After that, data or command packets are transmitted in HS mode. Multiple packets may be sent within a single HS transmission, and the end of transmission is finally signaled by the PHY layer using a dedicated EoT (End of Transmission) sequence. In order to enhance the overall robustness of the system, DSI also defines a dedicated EoT packet at the protocol layer for signaling the end of HS transmission. For backwards compatibility with earlier DSI systems, the capability of generating and
interpreting this EoT packet can be enabled or disabled. In Figure 11.4, the EoT short packets are labeled 'EoT packet'. The top diagram illustrates a case where the host sends two short packets followed by a long packet using three separate transmissions. In this case, an additional EoT short packet is generated before the final transmission ends. This mechanism provides a more robust environment, at the expense of increased overhead, compared to cases where EoT packet generation is disabled, i.e. the system relies only on the PHY-layer EoT sequence for signaling the end of HS transmission. The overhead imposed by enabling EoT packet transmission can be minimized by sending multiple packets within a single transmission, as illustrated by the bottom diagram in Figure 11.4. D-PHY uses a unique sequence of bus states at the start of each transmission (SoT) to tell the receiver that a new transmission is beginning. The receiver responds by turning on its HS receiving circuits and connecting its termination resistor. At the end of a transmission, the transmitter follows the last packet, i.e. the EoT packet, with a final transition, holds the signal at that value for a guaranteed minimum time period, and finally switches to the LP state. After receiving the EoT packet, the receiver knows the transmission is over; however, it waits until detecting the LP state, while ignoring all bits in HS mode following the EoT packet, before disconnecting its termination resistor and switching to the LP-receive state.
11.4.2 Packet Formats DSI specifies two different packet formats: short packet and long packet. The short packet, as shown in Figure 11.5, has just 4 bytes. Short packets are used primarily for sending synchronization information or commands from host processor to display. They may also be used for returning register and status information from display to host processor, in response to a read request from the processor.
Figure 11.5 Short packets [1]. (Fields, in transmission order: Data Identifier (DI) = Virtual Channel ID + Data Type; two packet data bytes; 8-bit Error Correction Code (ECC); followed by LPS or the next packet.)
Long packets are variable length. They are used to send larger blocks of data, such as a scanline of RGB pixels sent to a display to refresh the image. Long packets include a 2-byte word count field which indicates the size of the payload following the packet header. Figure 11.6 below illustrates long packet format for sending 18 bpp (bits per pixel) packed pixels from host to display. The least significant bit (LSB) of every byte is transmitted first across the lane to the display. In addition, pixel values comprised of multiple bytes transmit the least-significant byte first.
Figure 11.6 Long packet format for 18 bpp (packed) video packets. Reproduced from [1] by permission of MIPI.
Although DSI does not specify any particular error policy, it defines a set of detectable error conditions and corresponding error bits with a reporting mechanism, and enables a variety of responses to errors and protocol violations for robustness and flexibility. All packets sent by the host processor are protected by ECC (the short packet, and the header of the long packet). ECC enables correction of single-bit errors and detection of multiple-bit errors. Payload bytes in long packets are additionally protected by a Checksum, enabling detection of transmission errors in long blocks of data [6]. Host processor transmissions must send calculated ECC and Checksum bytes. Since command-mode systems interpreting commands and addresses have a bigger stake in ensuring data is received correctly, ECC protection of transmitted commands in short packets, and for long-packet headers, is mandatory. For video-mode operation, ECC protects Hsync/Vsync information sent by the host from potential single-bit errors – important for real-time operation. Implementation of Checksum capability is optional for DSI peripherals.
11.4.3 Virtual Channels and Data Types Both long and short packets begin with the Data ID byte. This byte has two fields, as shown in Figure 11.7. The Virtual Channel Identifier (VCID) field enables a processor to control multiple displays, by ‘tagging’ each packet with the ID of the display it wants to address (see section 11.4.6 Dual-Display Operation, below). The other field is the data type field. This identifies the packet type (long or short), indicates what kind of information is in the packet’s payload, and how many valid data bytes are in the packet payload if it’s a short packet (the packet is always the same length, but there may be 0, 1, or 2 valid bytes following the data ID byte).
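A minimal sketch of assembling a short packet with this Data ID layout is shown below. The bit positions follow Figure 11.7 (VC in bits 7–6, Data Type in bits 5–0); the ECC computation is left as a placeholder because the exact code is not reproduced in this chapter, and the data-type value used in the example is hypothetical.

```python
# Sketch of a 4-byte DSI short packet: Data ID, two data bytes, ECC.
def data_id(virtual_channel: int, data_type: int) -> int:
    """Pack the Data ID byte: VC in bits 7-6, Data Type in bits 5-0 (Figure 11.7)."""
    assert 0 <= virtual_channel <= 3 and 0 <= data_type <= 0x3F
    return (virtual_channel << 6) | data_type

def ecc_placeholder(header3: bytes) -> int:
    # Stand-in only: DSI specifies a particular ECC over the first three header
    # bytes (single-bit correct, multi-bit detect); the exact code is not shown here.
    return 0x00

def short_packet(vc: int, data_type: int, d0: int = 0, d1: int = 0) -> bytes:
    header = bytes([data_id(vc, data_type), d0, d1])
    return header + bytes([ecc_placeholder(header)])

# Example: a short packet on Virtual Channel 1 with a hypothetical data type 0x15.
print(short_packet(vc=1, data_type=0x15, d0=0x2C).hex())
```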
11.4.4 Video-Mode Transmission and Burst Operation Functionally, DSI must provide the same capability available from traditional parallel display interfaces. For a video-mode display, this means the same pixel and timing information – sync,
Figure 11.7 Data ID byte [1]. Bits B7–B6 are the Virtual Channel Identifier (VCID); bits B5–B0 specify the Data Type (DT). Reproduced from [1] by permission of MIPI.
blanking, and active pixels for display – must be encoded, converted to serial bits, and transmitted to the display, where they must be transformed back into correct timing and pixel information. DSI’s data types support this requirement by including sync event (both horizontal, and vertical), blanking, and RGB pixels. DSI also has four ‘native’ pixel formats specified by different data types: 16-bit, 18-bit packed, 18-bit loosely-packed, and 24-bits per pixel (‘loosely-packed’ uses one byte per 6-bit color component; the extra 2 bits in each byte are ignored by the receiver). The transmitter can send these data types in back-to-back packets, preserving the timing relationship between sync pulses and scanline pixels. (Between sync events and pixel blocks, the bus may send blanking packets, or switch to LP [Low Power] state to save energy.) Accurate timing may then be reconstructed in the display panel to ensure proper display. DSI in HS (High Speed) mode does not present the traditional CMOS linear power relationship between bandwidth (transitions per second) and power dissipation at the interface. Because of the termination resistors and the nature of D-PHY drivers and receivers, some power is always dissipated regardless of signal switching frequency. The graph of power vs. frequency, unlike traditional CMOS, does not pass through the 0, 0 origin point of the chart. For this reason, a strategy for maximizing power efficiency should in fact try to minimize energy per bit transmitted, rather than absolute power (energy per unit time) dissipated by the link. One conclusion for DSI links is that the optimum strategy for energy efficiency is to maximize the data transmission rate of the link, regardless of the actual bandwidth required by the display. Between transmissions, the link can return to LP state. Instantaneous power may be slightly higher during the burst transmission, but then power drops to near-zero levels during the longer ‘idle’ time between bursts, so average power is lower. (Note that in some cases the power savings for this display type is only on the data path, as the HS serial clock may be required to run continuously for timing purposes on the display panel.) One complication of such a time-compression strategy is that the relationship between HS serial clock and timing circuits on the display side is no longer simple. Normally, the HS serial clock is divided down on the display side to provide scanline and frame timing, and possibly other functions, for timing controllers and other logic on the display panel. With scanline-data time compression, incoming pixels are loaded into a line buffer on the display at high speed, but the HS serial clock will probably have a more complex relationship to pixel and scanline timing than in the simple ‘real-time’ (not time-compressed) display. (It should always be a synchronous relationship, however, to avoid possible jitter problems on the display.) Whatever the timing relationship between HS serial clock and scanline/frame timing, the host processor’s display controller and the display panel must agree on that timing relationship and duplicate it on both sides of the link. For this reason, display manufacturers must clearly specify the relationship between HS serial clock used for burst mode and scanline/frame timing. They may be able to operate at any of several specified timing relationships, permitting some flexibility on the part of the host processor’s display controller.
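The energy-per-bit argument above can be made concrete with a toy power model. The sketch below is only illustrative: the static, per-rate, and LP-idle power coefficients are invented numbers standing in for the D-PHY behaviour described in the text, in which some power is dissipated whenever the HS link is active regardless of switching frequency.

```python
# Why burst (time-compressed) transmission lowers energy per bit on a link whose
# HS power does not scale to zero with activity. All coefficients are assumptions.
P_HS_STATIC = 20e-3    # assumed W dissipated whenever the HS link is active
P_HS_PER_GBPS = 5e-3   # assumed W per Gbit/s of switching activity
P_LP_IDLE = 0.1e-3     # assumed W while parked in LP state between bursts

def energy_per_bit(required_bps: float, link_bps: float) -> float:
    """Average energy per delivered bit when bursting at link_bps and idling in LP."""
    duty = required_bps / link_bps                        # fraction of time in HS mode
    p_avg = (duty * (P_HS_STATIC + P_HS_PER_GBPS * link_bps / 1e9)
             + (1 - duty) * P_LP_IDLE)
    return p_avg / required_bps

need = 200e6   # display needs 200 Mbit/s of refresh data
for rate in (200e6, 400e6, 800e6):
    print(f"link at {rate/1e6:.0f} Mbit/s: "
          f"{energy_per_bit(need, rate) * 1e12:.0f} pJ/bit")
```

In this model the faster the link runs during the burst, the lower the average energy per bit, because the always-present HS power is paid for a smaller fraction of the time.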
Figure 11.8 Bursting data (time compression) to reduce energy-per-bit: Hsync, DSI serial clock and DSI data from host to display driver (RGB video stream data bursts in high-speed mode, with low-power mode between bursts), line write/read buffers inside the display driver, and pixel clock/DE with valid pixel data from display driver to display. Reproduced from [1] by permission of MIPI.
11.4.5 Command-Mode Operation Command-mode systems have a display controller on the LCD panel with its own clock source, for generating display timing, and a local frame buffer, from which the display is refreshed. DSI makes reference to another MIPI standard, DCS (Display Command Set), a collection of commands used for setup and simple functions that can be controlled locally (such as scrolling text, image rotation, and setting power levels). DCS represents an application-level specification, which in the protocol stack sits above DSI. A group of packet data types is reserved for DCS commands: DCS Read Request, DCS Short Write, and DCS Long Write. Like all DSI packets, these begin with a data ID byte. The next byte is the actual DCS command byte, which passes directly to the DCS controller on the display, for interpretation and command execution. The remaining byte in the short packet can convey a 1-byte parameter; if more parameters must be attached to the command, the transmitter must use long packet format. Note that DCS commands may contain from zero to six parameters, depending on the command.
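A hedged sketch of forming a DCS Short Write packet follows. The data-type codes and the command byte used here are hypothetical placeholders, not values from the DSI or DCS specifications; only the structure (Data ID, DCS command byte, optional single parameter, long packet for more parameters) follows the description above.

```python
# Sketch of a DCS Short Write: Data ID byte, DCS command byte, optional parameter.
DCS_SHORT_WRITE_0P = 0x01   # hypothetical data-type code: DCS short write, no parameter
DCS_SHORT_WRITE_1P = 0x02   # hypothetical data-type code: DCS short write, one parameter

def dcs_short_write(vc, command, parameter=None):
    """Build a DCS Short Write packet on virtual channel vc."""
    dtype = DCS_SHORT_WRITE_0P if parameter is None else DCS_SHORT_WRITE_1P
    d0 = command
    d1 = 0x00 if parameter is None else parameter
    header = bytes([(vc << 6) | dtype, d0, d1])
    return header + b"\x00"      # ECC byte left as a placeholder (see earlier sketch)

# A DCS command with two to six parameters would use the long packet format instead.
print(dcs_short_write(vc=1, command=0x42, parameter=0x7F).hex())
```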
11.5 Dual-Display Operation
DSI protocol permits 'tagging' packets to target them at specific displays in a multiple-display configuration (common in cell phones). The transmitter may then send transmissions alternately addressing different displays, or may even send back-to-back packets to different displays within the same transmission. Figure 11.9, below, shows example transmissions, with EoT packet generation disabled, addressing multiple displays [1].
Figure 11.9 Example transmissions, with EoT packet generation disabled, addressing multiple displays (Virtual Channels 0 and 1 interleaved within and across transmissions) [1]. KEY: LPS – Low Power State; SoT – Start of Transmission; EoT – End of Transmission; PH – Packet Header; PF – Packet Footer; BTA – Bus Turn-Around.
In this example, RGB display-refresh packets are sent over the link targeting the main display, which is physically connected to the DSI signals. These packets have packet headers (PH in Figure 11.9) with tag 00, employing Virtual Channel 0. Interleaved with the RGB packets are DCS command packets, intended for the second display (sub-display). The DCS commands are tagged 01, so they use Virtual Channel 1.
Figure 11.10 Multiple-Display Configuration [1]. Channel Select box implements a hub-like function that directs display-bound traffic to the correct display, as indicated by Virtual Channel ID.
DSI signaling uses point-to-point connections to control loading and ensure robust signal quality and high performance. The protocol, however, specifies support for multiple displays using Virtual Channel ID bits. This contradiction is resolved by implementing a ‘hub’ function on the main display, as shown in Figure 11.10, with the Channel Select block. The hub function (Channel Select) receives all packets going to the displays, checks the VC ID of each packet, and then forwards packet information to the correct display. The EoT packet is an exception since it relates to the operation of the physical transmission channel. Therefore it uses a fixed
Virtual Channel ID = 00, indicating the end of a high-speed transmission, regardless of the Virtual Channel IDs used in previous packets of that transmission.
Note the format and signal definition for the interface between the main display and the sub-display is outside the scope of the DSI specification. It can use an existing industry standard, or a proprietary format. It is entirely an implementation decision based on sub-display requirements, physical signal path, cost constraints, and similar considerations. Of course, the shared interface must first have adequate bandwidth to meet the requirements for both displays.
Designers considering dual-display systems must also pay close attention to meeting display timing requirements, particularly if one of the displays is a video-mode architecture requiring display-refresh traffic. That type of display has low tolerance for latency in the pixel stream refreshing the screen. Commands and data targeting the other display must be 'squeezed' into the available time remaining, between packets to the video-mode display. If the second display has a low bandwidth requirement, its data may be sent during the Vertical Blanking period of the video-mode display. If that imposes too much latency on traffic to the second display, it's possible to shorten video-mode packets by time-compressing them ('burst mode'), which frees up enough time during every scan line to send small packets to the second display. This same strategy also improves bandwidth and simplifies design for dual-display systems. When the time required to transmit a scanline of RGB display-refresh pixels is compressed, there is more time available for commands and update pixels to the second display. Less buffering may be required in the design, reducing cost. Commands to the sub-display could even be sent in LP mode, if there is enough time remaining between HS bursts to the main display. This would minimize the power required to operate the second display over the link.
Time-compressing RGB packets to the main display is recommended for dual-display systems where the main display requires video-mode operation. This strategy – which opens up a larger time slot for packets to the sub-display – permits larger packets to be sent to the sub-display, and may enable short-packet DCS_read transmissions from sub-display to host processor between scanlines, with enough time-compression of RGB data. Note that long transmissions, such as blocks of video update pixels or a read of multiple bytes back from memory on the sub-display, will probably need to be deferred to the main display's Vertical Blanking period. The sub-display controller must tolerate such postponement, and performance may benefit from command re-ordering in certain cases.
The host processor's controller in this example probably integrates two display controllers, as shown in Figure 11.11 – one with a full LCD timing controller and frame buffer, generating the RGB video stream for refreshing the main display. The second display controller is a simple design which employs the DCS 'instruction set' for controlling the sub-display. This controller might be as simple as a command buffer, taking commands generated by software and passing them to the DSI arbiter and packet generator. This enables it to do simple things like scrolling a region of the screen, or flipping or rotating the image, using standard DCS commands.
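The hub-like Channel Select behaviour described above can be summarized in a short sketch. This is an illustrative model only: packet parsing is simplified, the data-type test for the EoT packet is a placeholder, and consuming the EoT packet at the hub rather than forwarding it is a simplifying assumption, not a requirement taken from the specification.

```python
# Sketch of the Channel Select ("hub") function: forward each packet to the
# display selected by its Virtual Channel ID (bits 7-6 of the Data ID byte).
def is_eot_packet(pkt):
    # Placeholder test only: the real EoT packet has its own data type, not shown here.
    return (pkt[0] & 0x3F) == 0x3F

def route_packets(packets):
    """Returns (main_display_packets, sub_display_packets)."""
    main, sub = [], []
    for pkt in packets:
        if is_eot_packet(pkt):
            continue            # EoT concerns the HS transmission as a whole; assumed
                                # here to be consumed at the hub rather than forwarded
        vc = pkt[0] >> 6        # Virtual Channel ID
        (main if vc == 0 else sub).append(pkt)
    return main, sub
```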
In the host processor, each display processor controller generates a block of bytes (DSI ‘payload’) representing the data content of a packet targeted at the corresponding display on the other end of the DSI link. The outputs of the two display controllers are functionally multiplexed into a common DSI packet formatter, which appends packet header, ECC, and Checksum information onto each packet. Packets may then be sent as individual transmissions, or as back-to-back packets in a shared transmission with no LP interval between packets. Packets are only sent back-to-back if they can be transmitted with no pause or gap between packets; the host processor must further guarantee that complete packets will be sent, once started, with no pauses, since DSI has no handshake mechanism for throttling or pausing transmissions. (Note that DSI protocol does permit the functional equivalent of a gap between packets for displays: the processor can send a null packet between packets with real data. The null packet could theoretically be used to fill a gap between packets of actual data, permitting the bus to stay in HS mode and avoiding the overhead of changing twice between HS mode and LP mode). The host processor’s packet multiplexing mechanism must prioritize packets going to the main (video-mode) display, as that display may have low tolerance for ‘jitter’ or latency on arriving RGB
Figure 11.11 Dual controllers in Host Processor (video-mode display controller with video frame buffer, command-mode display controller, prioritization, DSI interface with protocol and packet assembly, Channel Select, video-mode main panel, and command-mode sub-display). Reproduced from [1] by permission of MIPI.
packets relative to horizontal time events (H Sync) for that display. This requires a prioritization mechanism and a 'smart' transmission controller that can look ahead in time, when it receives a request for a command transmission to the sub-display, to ensure that this transmission will not overflow into the time slot reserved for the RGB video-mode display. Typically, a command buffer between the sub-display controller and the multiplexer functional block in the host processor will permit the sub-display controller to operate with few restrictions or stalls. If the DSI link is busy feeding a line of RGB pixel data into the packet generator when the sub-display controller outputs a command, that command may be stored temporarily in the buffer until the multiplexer/packet generator is available to generate the command-mode packet. Latency in this scheme will normally be less than one scanline period, unless the sub-display controller needs to send/receive a long packet to its display. Note, however, that there normally must be some handshake mechanism between this buffer, the multiplexer, and the sub-display controller to prevent command buffer overflow, by stalling the controller if the buffer is full. A deeper buffer, storing multiple commands, will permit operation with fewer stalls as long as overall DSI bandwidth is sufficient for both displays.
The sub-display typically has its own clock source, so there is a timing-domain boundary between it and the main display. The Command Formatter block shown in Figure 11.11 normally must handle the synchronization task between the two displays.
Consider a new 'smart phone' design with a large (3.5 inch) main display panel on the inside of a flip-up lid. It can function as a PDA, receive and display email and still images, and even play back video clips and show TV broadcasts. When the lid is closed, a small color display is visible on the top outside surface of the lid. This display functions as a status indicator and clock, and shows the caller ID when there is an incoming phone call. It can also function as a viewfinder for the unit's camera. Because both displays are built into the flip-lid of the product, there are fewer restrictions on the signal path width between displays; they may be on opposite sides of the same PCB substrate and require no flex cable between them. Therefore, the interface from main display to sub-display need not be serialized, but instead can be a 'conventional' parallel interface as in the MIPI DBI-2 specification [2]. In most cases a conventional 8-bit parallel datapath with several control signals will be adequate. When streaming video is sent to the sub-display, it uses the format shown for 12-bit pixels specified in the MIPI DCS specification [4]. The motivation for connecting both displays with a single DSI (4 signal wires) is to minimize the number of conductors in the flex cable going through the hinge to the displays. Multiple-display
capability built into MIPI DSI protocol enables cost and size reductions using a narrow flex cable, while imposing few restrictions on the performance and resolution of the displays.
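As a summary of the prioritization and command-buffering behaviour described above, the following minimal sketch models an arbiter that always services video-mode packets first and keeps sub-display commands in a bounded buffer, stalling the caller when that buffer is full. The class name, buffer depth, and interfaces are invented for illustration; they do not correspond to any block in the DSI specification.

```python
# Minimal sketch of the packet prioritization described above: video-mode (RGB)
# packets win immediately; sub-display commands wait in a bounded buffer and are
# drained only when no video packet is pending.
from collections import deque

class DsiArbiter:
    def __init__(self, command_buffer_depth=4):
        self.video_queue = deque()
        self.command_buffer = deque()
        self.depth = command_buffer_depth

    def submit_video(self, pkt):
        self.video_queue.append(pkt)        # always accepted: highest priority

    def submit_command(self, pkt):
        if len(self.command_buffer) >= self.depth:
            return False                    # caller must stall; DSI has no handshake
        self.command_buffer.append(pkt)
        return True

    def next_packet(self):
        if self.video_queue:                # video traffic always goes first
            return self.video_queue.popleft()
        if self.command_buffer:
            return self.command_buffer.popleft()
        return None                         # nothing pending: link may drop to LP
```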
11.6 Conclusion
As computing power grows in the handheld platform, its other aspects – memory, I/O bandwidth, and display capabilities – must also grow, to prevent system bottlenecks and to progress toward the goal of true handheld computing. High-performance serial interfaces are key features that provide needed bandwidth and flexibility while maintaining low cost, low power consumption, and small form factors. A new generation of standards brings the benefits of high-performance serial interfaces to handheld computing: bandwidth with headroom for future growth, low pin count, reduced power, and costs comparable to or lower than legacy parallel buses. These advantages make high-performance serial interfaces a competitive choice for future generations of handheld systems like PDAs and advanced cell phones.
Notes and Acknowledgements The author gratefully acknowledges the invaluable assistance of Fariborz Pourbigharaz, Chairman of MIPI Display Working Group, for his help in final editing and checking of this chapter.
About The MIPI Alliance The Mobile Industry Processor Interface (MIPI) Alliance is an open membership organization that includes leading companies in the mobile industry that share the objective of defining and promoting open specifications for interfaces in mobile terminals. MIPI Specifications establish standards for hardware and software interfaces between the processors and peripherals typically found in mobile terminal systems. By defining such standards and encouraging their adoption throughout the industry value chain, the MIPI Alliance intends to reduce fragmentation and improve interoperability among system components, benefiting the entire mobile industry.
About MIPI Specifications MIPI Specifications are currently available only to MIPI Alliance member companies. Interested companies are welcome to join the MIPI Alliance. Membership includes limited royalty-free intellectual property rights and obligations. For more information on membership in the MIPI Alliance, visit www.mipi.org.
References
[1] Draft MIPI Alliance Standard for Display Serial Interface (DSI), MIPI Alliance, September 2007.
[2] MIPI Alliance Standard for Display Bus Interface (DBI-2), MIPI Alliance, November 2005.
[3] MIPI Alliance Standard for Display Pixel Interface (DPI-2), MIPI Alliance, September 2005.
[4] MIPI Alliance Standard for Display Command Set (DCS), MIPI Alliance, June 2006.
[5] Draft MIPI Alliance Standard for D-PHY, MIPI Alliance, June 2007.
[6] Johnson, B.W. (1989) The Design and Analysis of Fault Tolerant Digital Systems, Addison-Wesley.
12 Image Reconstruction on Color Sub-pixelated Displays
Candice H. Brown Elliott, Clairvoyante, Inc., Sebastopol, California, USA
12.1 The Opportunity of Biomimetic Imaging Systems
Electronic Information Displays, such as Cathode Ray Tubes (CRT) and Flat Panel Displays (FPD), are interface devices between two computers, one hardware – based on the solid state physics of silicon; and one wetware – based on the electrochemistry of neurons in the human brain. One would not begin a design project for an interface device or protocol between two hardware computers without thoroughly understanding the specifications of both systems. The same tenet holds for interfacing to the Human Vision System (HVS). The rough outline of the HVS is intuitively known to most engineers, as they operate their own Human Vision System daily. However, it is in the details that we find the needed information to design our visual electronic interfaces and image processing systems. The simplistic view of the requirements for full color cameras, displays, and image processing systems calls for three color primaries, typically red, green, and blue, in equal proportion with a uniform resolution across the field of view. Fortunately for us, the human eye does not work like a movie camera. At its maximum foveal resolution of 60 cycles/degree, the human eye has access to 8,000 × 10,000 pixels, equivalent to an 80 MegaPixel digital camera. With a dynamic range of over 10,000:1, a contrast sensitivity of 0.1%, and a maximum frame rate of 80 frames/second, each eye could be sending data to the brain at a 460 GHz data rate! Fortunately, the eye doesn't do this. To avoid giving us this headache, the eye's neural process prefilters the information to reduce the actual bandwidth in the optic nerve down to about 6 MHz, thereby
compressing the data stream by a ratio of over 80,000:1. The eye's pre-filtering includes logarithmic representation of luminance grayscale values, variable resolution between the center and edge of vision, separation and compression of color into luminance and color differences, etc. In the future, imagers and displays will become smart enough to show people only what they can actually see, avoiding wasting power and money to display information that the eye is designed to just throw away.
Already, since images are processed by the Human Vision System into another color space – luminance, red/green, and yellow/blue image channels – imaging system designers have adopted several roughly biomimetic color spaces, such as YIQ, component video (YPbPr), and YCrCb. The designers of NTSC color television utilized YIQ in part because it was possible to modulate the phase and amplitude of the color subcarrier at a significantly reduced bandwidth, matching the inherently lower bandwidths of the Human Vision System's color space, while simultaneously making it backward compatible with previous black and white television sets that only displayed the luminance channel. But then again, that black and white television set was based in large part on that very same characteristic of the Human Vision System having separated the color into the three channels, black and white television sets having been designed to stimulate just one HVS channel: the luminance channel. Furthermore, through a fortuitous similarity of a logarithmic response of electron emission from the CRT cathode to applied voltage, the television luminance signal 'gamma', the brightness to signal-level transfer curve, naturally aligned with, and complemented, the HVS characteristic of Weber's law of logarithmically decreasing response to an increase in stimulus. This – when the video signal was amplitude-modulated with white being less signal and black being more – meant that noise that crept onto the signal was perceptually equalized in both dark and bright portions of an image.
When images are digitized, they are 'perceptually encoded' through the use of non-linear gamma quantization, which has smaller quantization intervals in low luminance than in higher luminance. This approximates Weber's Law, and reduces the visibility of the quantization error by ensuring that the error 'noise' is perceptually equalized in both dark and bright portions of an image. Had the luminance been quantized in a linear fashion, the quantization error would have been visible in the dark regions, while the bright regions would have more grayscale steps than can be discriminated by the HVS. Digital display systems, such as Liquid Crystal Displays (LCDs), thus use a gamma quantization that matches the digital image quantization for the very same reason.
The designers of the JPEG and MPEG image compression standards also took advantage of biomimicry of the HVS in using YCrCb and allowing sub-sampling of the chroma channels before the Discrete Cosine Transform (DCT). Furthermore, the transformed image may be perceptually quantized in the spatial frequency domain, in which high spatial frequencies are more coarsely quantized than those at the peak bandwidth of the HVS luminance channel. When done properly, these operations may be described as 'perceptually lossless', yet mathematically lossy, image compression.
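The effect of gamma quantization on perceived quantization error can be illustrated with a minimal sketch. It assumes an 8-bit code and a typical gamma of 2.2, and simply compares the size of one quantization step, relative to the local luminance, for linear and gamma encoding.

```python
# At the same luminance, one 8-bit gamma-quantized step is a much smaller relative
# (Weber) error in the dark region than one linear step, and only slightly larger
# in the bright region: the error is perceptually more equalized. Gamma 2.2 assumed.
GAMMA = 2.2
LEVELS = 256

def rel_step_linear(L):
    """Relative size of one linear 8-bit quantization step at luminance L (0..1)."""
    return (1.0 / (LEVELS - 1)) / L

def rel_step_gamma(L, gamma=GAMMA):
    """Relative size of one gamma-quantized 8-bit step at luminance L (0..1)."""
    code = round((LEVELS - 1) * L ** (1.0 / gamma))
    lo = (code / (LEVELS - 1)) ** gamma
    hi = ((code + 1) / (LEVELS - 1)) ** gamma
    return (hi - lo) / L

for L in (0.01, 0.1, 0.5, 0.9):
    print(f"L = {L:4.2f}: linear step {rel_step_linear(L):6.1%}   "
          f"gamma step {rel_step_gamma(L):6.1%}")
```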
In solid-state digital color image-capture sensors, the most popular arrangement of the Color Filter Array (CFA) is the classic Bayer (pronounced ‘buyer’) pattern which is shown in Figure 12.1 (refer to the
Figure 12.1 Bayer pattern.
Figure 12.2 Color key.
color key presented in Figure 12.2). As originally envisioned by Bayer, the inventor, the green colored filter is a broad pass band filter that is similar to the HVS luminance spectral sensitivity curve, not the narrow bandwidth green filter of a display. The red and blue are narrower band filters that pass the long and short wavelengths respectively. This device may directly sample the color image focused upon it in a color space analogous to, and easily converted to, YCrCb color space. In this arrangement the 'green' photosites serve to sample the luminance of the image, while the red and blue sample analogously to the Cr and Cb respectively. The luminance is sampled at twice the spatial frequency of each of the chroma components, placing greater resolution in the luminance channel, mimicking the HVS for which it is capturing images to be presented later. This matches the color sub-sampling of the chroma channels for JPEG and MPEG, being essentially 4:2:2 sampling. In modern digital cameras, each of the photosites, regardless of the actual color of the color filter array above it, is mapped to a conventional full color RGB or YCrCb pixel using a 'de-mosaic-izing' algorithm which interpolates the color fields to fill in the missing samples, assuming that the focused image is band-limited below the sample frequency.
If the Human Vision System samples images with different sampling rates for different wavelengths of light and then processes them into an opponent color space with different bandwidths, why then aren't our electronic information displays, especially flat panel displays, manufactured with similar biomimetic designs? Why do they have pixels with equal numbers of red, green, and blue sub-pixels? Why do they have the same resolution in the chroma channels as in the luminance channel? The answer can only be understood in historical perspective.
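The 'de-mosaic-izing' step can be illustrated with a very crude neighborhood-average interpolation, sketched below. The layout function and flat-field test are assumptions made for illustration; real camera pipelines use far more sophisticated, edge-aware algorithms.

```python
# Toy demosaic of a Bayer mosaic: each output channel at each site is a simple
# average of the nearest same-color samples. Illustrative only.
def bayer_color(row, col):
    """Classic Bayer layout assumed here: G R / B G repeating 2x2 tile."""
    if row % 2 == 0:
        return 'G' if col % 2 == 0 else 'R'
    return 'B' if col % 2 == 0 else 'G'

def demosaic(mosaic):
    """mosaic: 2-D list of single samples -> 2-D list of (R, G, B) pixels."""
    h, w = len(mosaic), len(mosaic[0])

    def average(row, col, color):
        vals = [mosaic[r][c]
                for r in range(max(0, row - 1), min(h, row + 2))
                for c in range(max(0, col - 1), min(w, col + 2))
                if bayer_color(r, c) == color]
        return sum(vals) / len(vals)

    return [[tuple(average(r, c, ch) for ch in 'RGB') for c in range(w)]
            for r in range(h)]

# A flat mid-gray scene should come back roughly gray everywhere.
flat = [[128] * 6 for _ in range(6)]
print(demosaic(flat)[2][3])
```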
12.1.1 History
The primary issue delaying the use of biomimetic displays has been the resolution of the graphics systems and the size of the screens. For a laptop computer with only VGA resolution, the spatial frequencies were low enough that, with equal numbers of color sub-pixels, the color barely blended together, as the chromatic modulation from one red sub-pixel, through green, to the next red sub-pixel barely met the minimum requirement of being above eight cycles per degree. Had the red sub-pixel density been reduced, it would have been visible as chrominance artifacts, such as color fringes on high spatial frequency image components. Furthermore, had the blue sub-pixel density been reduced [15], the dark luminance well that a blue color presents to the HVS luminance channel would have been visible as an objectionable luminance artifact (dark stripes or dots). As resolutions of the panels and graphic systems have increased, it became possible to design biomimetic displays. Today, the resolution requirements for some products, such as mobile phones, ultra-mobile personal computers (UMPCs), and personal media
players, are at the point where it is becoming a necessity to use biomimetic displays to obtain desired brightness, battery life, and color gamut. A second issue had been the lack of suitably performing, and cost effective, Digital Signal Processing (DSP) algorithms that enable the luminance information to be rendered at the sub-pixel level, while maintaining color accuracy at a lower spatial resolution, a requirement for biomimetic displays. This too has changed.
Historically, the first sub-pixel rendering (SPR) algorithm was simply a decimation filter, a subsampling of each color plane of an image at each sub-pixel location, throwing away the other color samples not mapped to a sub-pixel. This works moderately well when the image to be sub-sampled in this manner is band-limited to half of this new sample rate. Examples of such band-limited images include analog video that has been passed through a low pass filter. However, if the image is not band-limited to the new sub-sampled rate, such as attempting to display NTSC TV resolution on a 320 × 240 color mosaic, chromatic aliasing may be quite evident. Decimation is still in use in viewfinders for video camcorders and digital still cameras, where the presence of such artifacts is tolerated. The next step in the development of sub-pixel rendering was taken in 1988 by Benzschawel and Howard, display engineers at IBM, who used simple displaced box filters (equal weighted average of a group of pixel values) to reduce chromatic aliasing when sub-sampling a higher resolution image for conventional RGB flat panel displays [1]. Ten years later in 1998, Microsoft publicly announced ClearType™, an improved font rendering technology based on displaced box filtering for black text on RGB Stripe displays [2]. Platt at Microsoft also extended the theory to colored text using one dimensional color vector matrix convolutions. The author began research on combining biomimetic display designs with biomimetic digital signal processing sub-pixel rendering in 1993, first publishing some of her work in 1999, and along with her colleagues, regularly thereafter [3–12]. Subsequently, in 2000 she founded Clairvoyante, Inc. to commercialize this work under the registered trademark of PenTile Matrix® technology.
Some of the figures in this chapter use hatching to represent the color of sub-pixels as shown in the color key in Figure 12.2. When needed, the background will be shaded to represent the brightness of a given sub-pixel to illustrate image reconstruction.
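The decimation approach described above can be sketched in a few lines for an RGB Stripe panel. This is an illustrative toy, not any product's algorithm; it also shows, for a non-band-limited black-and-white line pattern, the kind of chromatic aliasing the text warns about.

```python
# Decimation sub-pixel rendering on an RGB Stripe: each sub-pixel keeps only its
# own color's sample at its location; the other two samples there are discarded.
def decimate_row(source_row):
    """source_row: list of (R, G, B) samples, one per sub-pixel position."""
    drive = []
    for i, (r, g, b) in enumerate(source_row):
        drive.append((r, g, b)[i % 3])   # 0 = red, 1 = green, 2 = blue stripe
    return drive

# Black/white vertical lines at four sub-pixels per cycle (two on, two off):
pattern = [(255, 255, 255) if (i // 2) % 2 == 0 else (0, 0, 0) for i in range(12)]
print(decimate_row(pattern))   # lit sub-pixel pairs come out R+G, G+B, B+R:
                               # yellow, cyan, magenta color fringes, not white lines
```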
12.2 Sub-pixel Image Reconstruction
There are three dimensions to image quality for a captured or computer generated image: gray level count, sample point count, and Modulation Transfer Function, as shown in Figure 12.3, below.
Figure 12.3 Whole pixel image sampling (gray levels versus spatial sample points).
If the optical MTF of the camera is low, the image quality is limited by the OMTF, and the image is overly smoothed. The spatial sample points and the gray levels at those points map out an approximation of a continuous topology. If the sample points or the gray levels (quantization) are too spread apart, the high spatial frequencies or the fine details are lost, as may be seen in the figure here. Similarly, during image reconstruction on a display, the three dimensions of displayed image quality are gray level count, addressability, and display MTF. Note that these are exact counterparts. Each display technology has intrinsic characteristics. For example, analog Cathode Ray Tube (CRT) displays have nearly infinite addressability in the horizontal axis. Addressability in the vertical axis is limited to the number of scan lines used. The CRT has infinite gray levels. But the image quality is limited by the MTF in the horizontal axis. In digital LCDs, Organic Light Emitting Diode Displays (OLEDs), and plasma displays (PDPs), on the other hand, the image quality is limited by the addressability, gray levels, and MTF simultaneously. Sub-pixel rendering uses the color sub-pixels to increase the addressability, as shown in Figure 12.4. The increased addressability reduces moiré artifacts, giving a smoother, more
Figure 12.4 Sub-pixel image sampling.
accurate reconstruction, as can be seen in the figure. Compare the image reconstruction using both red and green dots in Figure 12.4 to only using green dots, representing the center of an RGB whole pixel, in Figure 12.3. On displays that have been optimized to use sub-pixel rendering, the MTF limit is increased. With the increased MTF, higher spatial frequencies can be reconstructed.
12.3 Defining the Limits of Performance: Nyquist, MTF and Moiré Limits
A common and important metric for display systems is the Modulation Transfer Function (MTF), defined as the ratio of the output modulation to the input modulation:

MTF = (MAXout − MINout) / (MAXin − MINin)    (1)
Another useful metric often confused with MTF is the Modulation Transfer Function Ratio (MTFR), also known as the Michelson Contrast. This measures the contrast found within a given image. Note
that it depends only on the displayed image, having no terms that refer to the input modulation of the display system:

MTFR = (MAXout − MINout) / (MAXout + MINout)    (2)
The MTF of display systems usually varies with the spatial frequency. Because of a combination of video amplifier bandwidth, beam spot size and overlap, CRTs exhibit decreasing MTF with higher spatial frequencies. Traditionally the bandwidth (resolution) of CRTs is specified as the spatial frequency at which the MTF has dropped to 50%. However, the human eye can perceive contrasts much below this level depending on subtended spatial frequency (cycles per degree) and display brightness, responding more to the MTFR than the raw MTF values. Figure 12.5 shows a cosine wave being sampled at exactly twice the frequency of the signal. Note that the samples exactly catch the peaks of the signal. This is an example of a band-limited signal being sampled at exactly the Nyquist Limit. Although it appears from this graphic that one may sample a
Figure 12.5 In-phase sampling at Nyquist Limit.
band-limited signal that has significant Fourier energy at the Nyquist Limit, this is not correct, as the next graphic demonstrates. Figure 12.6 shows a sine wave at the very same frequency being sampled at the same Nyquist Limit. Note that the samples now catch the average value, rather than the peak values. A sine wave is exactly the same as a cosine wave, only it is shifted by 90°. Other phase shifts will result in sampling the wave at intermediate points. Since one cannot reconstruct, without ambiguity, the phase and amplitude of a
Figure 12.6 Out-of-phase sampling at Nyquist Limit.
Figure 12.7 Whole pixel sampling below the Nyquist Limit.
signal at or above the Nyquist Limit, the signal must be band-limited below the Nyquist Limit if it is to be accurately reconstructed. Figure 12.7 shows a lower frequency sine wave being sampled well below the Nyquist Limit. Note that some of the peaks are sampled, but at other points, the sample is taken on the shoulders. If this sampled image is displayed one-sample-to-one-reconstruction point, a reconstruction error will be present. Since this is a band-limited image, an optical low pass filter may be used to reconstruct the original sine wave. The overlapping Gaussian spots will reconstruct a bright or dark peak between the shoulder samples, at the expense of reducing the MTF of the display at high spatial frequencies. However, for displays that do not have an optical reconstruction filter because they must also display non-band-limited images, the lack of an optical reconstruction filter means that the peaks are not reconstructed, leading to reconstruction error distortions of the image called moiré. Severe moiré distortion occurs when reconstructing images that have the same number of image reconstruction points as sample points just below the Nyquist Limit. As the signal frequency increases and approaches the Nyquist Limit, the moiré amplitude and wavelength increase. The result is a signal that looks similar to an amplitude modulated (AM) signal, the carrier frequency being the Nyquist Limit and the moiré spatial frequency being the difference between the Nyquist Limit frequency and the signal being sampled. Conversely, as the frequency is reduced below the Nyquist Limit, the amplitude and wavelength decrease until the moiré spatial frequency equals the signal frequency. At that point, if the signal is in-phase with the sample points, the moiré distortion amplitude modulation disappears. Below this point, some moiré amplitude modulation may reappear, but the amplitude will be small and the base signal wavelength long, where the contrast sensitivity function of the Human Vision System is low. The point below the Nyquist Limit at which the moiré amplitude modulation first disappears for in-phase signals is defined here as the Moiré Limit, important for non-band-limited images, such as text and icons. It is at one half the Nyquist Limit, one fourth the sample rate. The maximum moiré distortion occurs for out-of-phase signals and the minimum is for in-phase signals, which may be calculated by:

MoiréMax = [1 − cos(πK/2)] / [1 + cos(πK/2)]    (3)

MoiréMin = |sin(πK⁻¹)| · [1 − cos(πK/2)] / [1 + cos(πK/2)]    (4)

where K is the ratio of the spatial frequency to the Nyquist Limit frequency.
Figure 12.8 Moiré amplitude as a function of spatial frequency and phase (the maximum and minimum distortion of Equations (3) and (4) plotted against K from 0 to 1).
The graph in Figure 12.8 shows the maximum and minimum distortion, as a ratio of the signal amplitude, that the moiré distortion may take for a given spatial frequency. The amplitude of the moiré pattern is the difference between the maximum and the minimum distortion:

MoiréAmplitude = MoiréMax − MoiréMin    (5)
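As a quick numerical check of Equations (3)–(5), the short sketch below evaluates the maximum and minimum distortion and their difference at a few values of K. The values of K chosen are arbitrary examples.

```python
# Evaluate Equations (3)-(5): moiré max, min, and amplitude versus K, where K is
# the spatial frequency as a fraction of the Nyquist Limit frequency.
import math

def moire_max(K):
    return (1 - math.cos(math.pi * K / 2)) / (1 + math.cos(math.pi * K / 2))

def moire_min(K):
    return abs(math.sin(math.pi / K)) * moire_max(K)

def moire_amplitude(K):
    return moire_max(K) - moire_min(K)

for K in (0.25, 0.5, 2 / 3, 0.9):
    print(f"K = {K:5.3f}: max {moire_max(K):.3f}  min {moire_min(K):.3f}  "
          f"amplitude {moire_amplitude(K):.3f}")
```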
Note that the amplitude of the moiré pattern goes to zero at K = 2/3. Just above this point may be taken as the maximum useful resolution for band-limited images, akin to the Kerr Factor for CRTs [17], as above this limit the moiré distortion and amplitude is excessive. With sub-pixel rendering, the number of points that may be independently addressed to reconstruct the image is increased. This increases the spatial frequency of the Kerr Factor and Moiré Limit for a given number of sub-pixels as shown in Figure 12.9, below. When the green sub-pixels are reconstructing the shoulders, the red sub-pixels are reconstructing near the peaks and vice versa. An optical reconstruction filter is still desirable for band-limited images,
Figure 12.9 Sub-pixel sampling below the Nyquist Limit.
but should be tuned for sub-pixel rendering using less blur than for whole pixel rendering. This allows for near-ideal image reconstruction with less loss of MTF at higher spatial frequencies. For non-band-limited text fonts, increasing the Moiré Limit allows the font designer to use spatial frequencies and phases that would have created horrendous moiré distortions had they been whole pixel rendered. The improvement is most noted on italic fonts which exhibit different phases on each row. This reduction in moiré distortion is the primary benefit of sub-pixel rendered fonts on the conventional RGB stripe panel. Although sub-pixel rendering increases the number of reconstruction points on the display, increasing the Moiré Limit, this does not always mean that higher spatial frequencies may be displayed on a given arrangement of color sub-pixels. Another phenomenon occurs as the spatial frequency is increased past the whole pixel or single color Nyquist Limit; chromatic aliasing may appear with higher spatial frequencies in a given orientation on the color sub-pixel arrangement. Figure 12.10 shows an example of chromatic aliasing when the single color Nyquist Limit is exceeded. This case shows the result of attempting to place black and white lines at four sub-pixels per
Figure 12.10 RGB Stripe layout showing chromatic aliasing.
cycle on the RGB Stripe architecture. One can visually see that the lines, instead of being white, are colored. Starting from the left, the first line is red combined with green to produce a yellow-colored line. The second line is green combined with blue to produce a pastel cyan-colored line. The third line is blue combined with red to produce a magenta-colored line. The colors then repeat: yellow, cyan, and magenta. This demonstrates that a spatial frequency of one cycle per four sub-pixels is too high. Attempts to go to a yet higher spatial frequency, such as one cycle per three sub-pixels, would result in a single solid color. Figure 12.11 shows an example of how a simple change to the arrangement of color sub-pixels (as shown in Figure 12.14, below) may allow a higher limit in the horizontal direction. In this case, the red and green order are interchanged every row to create a red and green checkerboard pattern. The blue sub-pixels remain in stripes. Now, one may display black and white lines at up to one cycle per three sub-pixels without chromatic aliasing, twice that of the RGB Stripe architecture. This limit is also the Nyquist Limit for both the red and the green sub-pixels considered separately. Not all layouts are created equal. Each particular layout may have a different Modulation Transfer Function Limit (MTFL), defined as the highest number of black and white lines that may be simultaneously rendered without visible chromatic aliasing. Different color sub-pixel layouts have different MTF and Moiré Limits in different orientations. Mapping the limits aids in understanding and comparing layouts for their suitability for sub-pixel rendering in various applications. Figure 12.12 shows a graph mapping the MTF Limit and Moiré
Figure 12.11 Modified layout eliminates chromatic aliasing.
Figure 12.12 Modulation Transfer Function and Moiré Limits for the RGB Stripe layout (regions marked: whole pixel Moiré Limit, SPR Moiré Limit, MTF Limit).
Limits for the conventional RGB Stripe color sub-pixel layout. The center of the graph represents zero spatial frequency. Points away from the center represent higher spatial frequencies in a given orientation. The horizontal axis represents spatial frequencies in the horizontal direction. An example of horizontal spatial frequency would be vertical lines and spaces. The vertical axis represents vertical spatial frequencies. An example of vertical spatial frequency would be horizontal lines and spaces. Points in between the axes represent diagonally-oriented spatial frequencies. For example, lines and spaces on the display running from the upper left to the lower right would be represented as a point in the upper right quadrant of the graph. In this graph, the MTF Limit is shown to be the same square for both whole pixel rendering and sub-pixel rendering. In this RGB Stripe layout, the MTF Limit is unaffected by the method of rendering since the wavelength of the chromatic aliased signal increases at a very rapid rate just above the Nyquist Limit. However, the Moiré Limit is substantially expanded, especially in the horizontal axis, by sub-pixel rendering. Note that it is the increased Moiré Limit with sub-pixel rendering that provides the increased image quality for sub-pixel rendered text, but with no increase in the MTF Limit, no real increase in resolution occurs.
If the color sub-pixels of the conventional RGB Stripe layout were rearranged to form a red and green checkerboard, as shown in Figure 12.14, the sub-pixel rendered MTF Limit would be doubled in the horizontal direction as shown in Figure 12.13, below.
Figure 12.13 Comparison of Modulation Transfer Function Limits of the RGB Stripe and modified layouts.
Figure 12.14 Layout modified to swap the red and green sub-pixels every other row.
The graph for the conventional RGB Stripe layout is also shown superimposed for comparison. Note that the sub-pixel rendered Moiré Limit is the same for the rearranged layout and the original RGB Stripe layout. The MTF Limit is increased because both red and green sub-pixels may be found in a single column, just as both red and green may be found in a single row for both layouts. While it took three columns on an RGB Stripe to make a white line, with this modification, it takes only one and a half, the blue being shared by two columns of red and green, to make a single white line. Thus, the exact layout of colors significantly impacts the sub-pixel rendered MTF and Moiré Limits. It is possible to achieve higher MTF performance than the single color Nyquist Limit in the luminance component of the image, up to the combined red and green sub-pixel luminance Nyquist Limit, but unless the MTF is rolled-off, the viewer with normal trichromatic vision will notice and object to the chromatic aliasing, often referred to as 'color fringes'. As the spatial frequency exceeds the single color Nyquist limit, the chromatic aliasing starts with narrow and shallow color bands, deepening and widening as it approaches the combined sub-pixel pattern luminance Nyquist Limit. At some point near the combined sub-pixel Nyquist limit, a solid and deep color error appears. However, the eye can tolerate a small color error, thus if the MTF rolls off, it counters the deepening color error, keeping it within acceptable limits [7]. The eye continues to detect the presence of this contrast up to the limit of the Contrast Sensitivity
Figure 12.15 PenTile Matrix L2 showing high spatial frequency diagonal lines.
Function (CSF). This contrast is measured by the MTFR. Thus, in some circumstances, it is possible to reconstruct a luminance signal above the single color Nyquist Limit. For example, examine the PenTile Matrix® L2 in Figure 12.15. Note that the diagonally oriented luminance signal (black and white lines) being reconstructed here is slightly higher than the Nyquist Limit for a single color. But the luminance channel of the human vision system is 'color blind' (if one will pardon the truism). The red/green chroma channel of the human vision system will blend the colors together, and not see the red and green stripes if the spatial frequency is above 8 cycles/degree from a given color peak to the next. Given that the useful limit on the luminance channel is approximately 30 to 60 cycles/degree, if the luminance modulation is put at 30 cycles/degree, then the chromatic modulation shown here would be 15 cycles/degree, above the limit of 8 cycles/degree and thus not visible. At higher resolutions, the chromatic modulation would be even higher, eliminating the possibility of seeing this short wavelength chromatic aliasing. Lower resolutions are possible as long as the chromatic aliasing component remains above the red/green chroma channel limit; thus, maximum luminance resolutions, based on sub-pixel density, as low as 20 cycles/degree may be used. Any substantially higher displayed spatial frequency luminance modulation than shown in Figure 12.15 would soon cause the color modulation to lengthen the period of the chromatic aliasing signal. As it approached 8 cycles/degree it would become visible. Hence the new and useful metric, the Modulation Transfer Function Limit (MTFL), defined as the highest luminance spatial frequency modulation that may be shown on a display without visible chromatic aliasing. Although slightly physical-resolution dependent, the rapidity with which the wavelength of the chromatic aliasing signal increases means that for practical purposes, it remains the same for all resolutions above a critical threshold.
An important implication of the above example is the limit on how sharp a diagonal stroke (line) of a given font may be rendered. A half wave of the above pattern would involve one color, centered on the line to be rendered, having a large change from the background (dark, if on a white background), while sub-pixels on either side having the opposite color (red vs. green) would be at half that value to maintain the same color and luminance weight. This is precisely the result of a sub-pixel rendering algorithm when rendering diagonal text strokes or lines. Note, however, that the addressability, the placement of the centerline, may be on any diagonal color, maintaining full diagonal addressable resolution, reducing moiré distortion.
The sub-pixel rendering algorithm must include a filter to limit the Fourier energy of spatial frequencies above the MTF Limit to eliminate color fringes. Digital realizations of low pass filters typically demonstrate varying roll-off characteristics that will reduce the MTF and MTFR as the spatial frequency increases. The goal is to design a filter that eliminates detectability of color error, by reducing the MTF above the MTF Limit, while simultaneously keeping the MTFR high below the MTF Limit.
Although one could use the Bayer pattern as a biomimetic display (as it was used for avionics displays in the late 1980s), it does not map well from conventional image data sets which are comprised of samples on a square grid. The luminance reconstruction on the Bayer is best performed from a diamond grid, that is, a square grid that is rotated 45°. The Bayer pattern also suffers from color imbalance between the dominant green area compared to the red and blue area, unless either the color filters and/or the backlight color is adjusted. A family of display layouts that does map well and is color balanced is, however, very similar, as shown in Figure 12.16 at the same effective resolution (MTFL):
Figure 12.16 PenTile Matrix™ layouts.
In the L6, also known as the Takahashi pattern, the green color plane may reconstruct the luminance channel on a one-to-one basis, while the red and blue may reconstruct the chroma channels at the same resolution ratio as the Bayer pattern. Similarly, the L6W may use the green and white sub-pixel to map the luminance signal. Thus for band-limited images, especially images that had been compressed with 4:2:2 sampling, these layouts perform quite well. These layouts may also reconstruct non-band-limited images such as black and white text and icons with virtually the same performance as the more familiar RGB Stripe display, with one third fewer sub-pixels. That is to say, they use two sub-pixels per pixel, on average, instead of three. A more information-efficient biomimetic layout, the L2, uses both the red and green sub-pixels to reconstruct the luminance channel, just as the HVS uses both the L and M cones to sample it [3, 4, 8]. Also like the HVS, the blue reconstruction points are the lowest resolution, just as the S cones have the lowest sampling resolution [15]. As in the HVS, the red/green chroma channel resolution is half that of the luminance channel, while the yellow/blue chroma channel is half again. This layout can reconstruct images using less than half the number of sub-pixels as the RGB Stripe layout, at only one and a quarter (1.25) sub-pixels per pixel, on average. This is approaching the absolute limit of one sub-pixel per pixel. Although the L2 layout has very high efficiency, it has unusual sub-pixel shapes and drive line layouts, challenging for LCD and PDP manufacturing. Two layouts that are topologically very similar to the L2 but find greater favor from that perspective are the L1 and L1W. These also use the red and green sub-pixels to reconstruct the luminance channel. The efficiency is only slightly lower than the L2 at one and a half (1.5) sub-pixels per pixel.
12.4 Sub-pixel Rendering Algorithm
Full color images consist of an array of three vectors. To render these vectors onto the non-coincident reconstruction points that each color sub-pixel provides, a vector matrix convolution is required to convert the RGB vector image to the R′G′B′ sub-pixel rendered vector image as shown in Figure 12.17, below.
Figure 12.17 RGB vector convolution sub-pixel rendering.
The algorithm is a matrix multiplication of vector arrays as shown in the equation above. To explicate the algorithm, each color primary image array, such as [R], is convolved with a filter kernel whose integral sum is one (1). To this is added the resulting value of two other convolutions of a second and third filter kernel whose integral sums are zero (0) with the other two color primary image arrays, for example [G] and [B]. The sum of these three operations is the sub-pixel rendered image for that color plane, in this example [R′]. The first, unity sum, filter is a low pass filter designed to remove high spatial frequencies above the single color Nyquist Limit for that color primary sub-pixel array to reduce or eliminate the possibility of chromatic aliasing. The second and third, zero sum, filters are high pass filters that serve to impress upon a given color primary image the high spatial frequency modulation of the other two color primary images. Since these filters are zero sum, they do not change the average brightness or color of the combined image. Instead, they shift energy from neighboring sub-pixels to allow high spatial frequency luminance modulation in the other color channels to be expressed by each sub-pixel, regardless of color. These zero sum filters serve to sharpen the image back to the original image sharpness. Thus the high spatial frequency luminance information is represented at the individual sub-pixel level, while the chromatic signal is low pass filtered to be expressed over a neighborhood of sub-pixels of all the color primaries. Using the PenTile Matrix™ L2 as an example, the process of determining a simple set of sub-pixel rendering filter kernels, optimized for non-band-limited images, is explicated below.
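Before walking through the kernels, the general form of this vector convolution can be sketched as follows. The 3×3 kernels here are illustrative placeholders only, not Clairvoyante's actual PenTile filter kernels: the unity-sum kernel simply echoes the 50%/12.5% logical-pixel weights discussed later in this section, and the zero-sum cross-color kernel is an arbitrary example.

```python
# Generic sketch of sub-pixel rendering as a vector convolution: each output color
# plane is one unity-sum convolution of its own plane plus two zero-sum convolutions
# of the other planes. Kernel values are illustrative placeholders only.
def convolve2d(plane, kernel):
    h, w, k = len(plane), len(plane[0]), len(kernel) // 2
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            acc = 0.0
            for dy in range(-k, k + 1):
                for dx in range(-k, k + 1):
                    yy = min(max(y + dy, 0), h - 1)   # clamp at the image borders
                    xx = min(max(x + dx, 0), w - 1)
                    acc += plane[yy][xx] * kernel[dy + k][dx + k]
            out[y][x] = acc
    return out

LOW_PASS = [[0.0, 0.125, 0.0],        # unity-sum, anti-chromatic-aliasing kernel
            [0.125, 0.5, 0.125],
            [0.0, 0.125, 0.0]]
SHARPEN = [[0.0, -0.0625, 0.0],       # zero-sum, cross-color sharpening kernel
           [-0.0625, 0.25, -0.0625],
           [0.0, -0.0625, 0.0]]

def render_red_plane(R, G, B):
    """[R'] = [R] * unity-sum kernel + [G] * zero-sum kernel + [B] * zero-sum kernel."""
    low = convolve2d(R, LOW_PASS)
    sg, sb = convolve2d(G, SHARPEN), convolve2d(B, SHARPEN)
    return [[low[y][x] + sg[y][x] + sb[y][x] for x in range(len(R[0]))]
            for y in range(len(R))]
```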
Figure 12.18 PenTile Matrix L2.
Figure 12.19 Reconstruction points for the PenTile Matrix L2.
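To make the vector convolution of Figure 12.17 concrete before turning to the conventional data set, here is a minimal sketch in Python. It assumes numpy and scipy are available; the kernels are illustrative placeholders with the unity-sum/zero-sum property described above, not the tuned PenTile production filters.

```python
import numpy as np
from scipy.signal import convolve2d

# Illustrative kernels only: k_self sums to one (low pass, anti-chromatic-aliasing),
# k_cross sums to zero (high pass, re-impresses luminance detail from the other planes).
k_self = np.array([[0.0, 0.125, 0.0],
                   [0.125, 0.5, 0.125],
                   [0.0, 0.125, 0.0]])
k_cross = np.array([[-0.03125, 0.0, -0.03125],
                    [0.0, 0.125, 0.0],
                    [-0.03125, 0.0, -0.03125]])

def render_plane(self_plane, other1, other2):
    """[R'] = [R]*k_self + [G]*k_cross + [B]*k_cross, and cyclically for G' and B'."""
    out = convolve2d(self_plane, k_self, mode="same", boundary="symm")
    out += convolve2d(other1, k_cross, mode="same", boundary="symm")
    out += convolve2d(other2, k_cross, mode="same", boundary="symm")
    return np.clip(out, 0.0, 1.0)

rgb = np.random.rand(64, 64, 3)  # stand-in full-color image, values in [0, 1]
r_p = render_plane(rgb[..., 0], rgb[..., 1], rgb[..., 2])
g_p = render_plane(rgb[..., 1], rgb[..., 0], rgb[..., 2])
b_p = render_plane(rgb[..., 2], rgb[..., 0], rgb[..., 1])
```

In a real renderer each pair of planes would get its own cross-color kernel; a single shared k_cross is used here only to keep the sketch short.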
The traditional, conventional full color data set is an orthogonal array of equidistant, coincident RGB sample points. Figure 12.20 shows a graphical representation of the conventional color data set. Each point represents the pixel sampling point for red, green, and blue image data. Each square represents the effective pixel sample area for each data point. One way to think of it is to picture the representation as being like a camera sensor chip; any information that falls within a square is integrated, or averaged, with other information on the square.
Figure 12.20 Conventional image data set.
This interpretation of the conventional image data set is different from the traditional signal sampling theory of Shannon and Nyquist, in which only a point sample is used, a mathematical abstraction called a Dirac delta function. Cameras and computer generated text operate much closer to the manner described above than to the mathematical abstraction of traditional sampling theory. Further, the image may be non-band-limited, in violation of classical Shannon-Nyquist sampling theory. For example, an image may have a single black dot in a bright white field, which exhibits Fourier energy components in all directions up to infinite spatial frequency. The sub-pixel rendering algorithm must therefore handle both classical band-limited and non-band-limited images. To understand how sub-pixel rendering can map information from the conventional data set to the PenTile Matrix, we overlay the conventional data set on top of the reconstruction points of the Layout 2 architecture as shown in Figure 12.21, below. Note that the conventional data pixel points map one-to-one with the red and green checkerboard. For conceptual convenience one may consider that with sub-pixel
Figure 12.21 Mapping reconstruction points and conventional image data showing a logical pixel.
rendering, each red and green sub-pixel becomes the center of a logical pixel. Each red and green sub-pixel is shared with four neighboring logical pixels, and each blue sub-pixel is likewise shared with four logical pixels. When a single input pixel has a non-zero value in each of its chroma components, the values are mapped to the sub-pixels of the PenTile panel according to the filter kernel coefficients. Since each
Figure 12.22 White dot at logical pixel with green center.
Figure 12.23 White dot at logical pixel with red center. Red at 50% energy; four greens each at 12.5%; blue at 25%. Note that the blue sub-pixel area or brightness is twice that of the red and green sub-pixels for the L2 architecture. The energy is multiplied by two to determine the color balance.
sub-pixel gets values from several input pixels, the reverse is also true; several sub-pixels are used to express the value of each input pixel. This forms the basis of the 'logical pixel'. If, on a black surround, a single white input pixel is turned on, then six sub-pixels will turn on to various degrees. Below are examples of the 'one pixel to one sub-pixel' mapping. Note that the values of the sub-pixels are the same as the values of the self-color filter kernels, ignoring for the moment the cross-color filter contributions. These logical pixels are the result of the 'impulse response' of only the low pass anti-chromatic-aliasing filter kernels, ignoring the sharpening filters. If one adds up the total energy of each of the colors, one notes that they are equal. The center sub-pixel of the logical pixel is set to 50%; the surrounding four sub-pixels of the opposite red/green color are set to 12.5%, which when added together also equals 50%. The nearby blue sub-pixel is set to 25%, which when multiplied by its larger area is also 50%. At first glance it may be thought that the logical pixel is dimmer than a conventional three sub-pixel RGB pixel. Not so, as the logical pixel involves twice as many sub-pixels and thus has the same brightness. Furthermore, as neighboring logical pixels overlap each other, turning on multiple neighboring pixels makes the brightness of the sub-pixels involved equal to the linear addition of the overlapping logical pixels. A Difference of Gaussians function, illustrated in Figure 12.24, may be used to create DOG logical pixels which sharpen images, especially text. For the one-pixel-to-one-sub-pixel mapping the DOG sharpening filter might look like the logical pixel illustrated in Figure 12.25. The filter 'borrows'
Figure 12.24 Difference of Gaussians (DOG) function.
Figure 12.25 Difference of Gaussians (DOG) logical pixel.
energy of the same color from the corners, placing it in the center. The color is kept balanced but the luminance signal modulation is amplified. The Difference of Gaussians logical pixel is convolved with the eye’s retinal DOG filter to become a small focused dot. The brain sees an even sharper white dot with the energy in the center. When diagonally neighboring DOG logical pixels overlap, the DOG filter amplifies the difference between their values, increasing contrast.
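The energy bookkeeping described above can be checked in a couple of lines of Python, using the percentages quoted in the text (the factor of two for blue reflects its doubled sub-pixel area in the L2):

```python
centre = 0.50                # red or green centre of the logical pixel
opposite = 4 * 0.125         # the four surrounding sub-pixels of the other colour
blue = 0.25 * 2              # blue at 25%, weighted by its doubled area

assert centre == opposite == blue == 0.5   # the three colours stay in balance
```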
12.5 Area Resample Filter Generation
Area Resample functions may be used as a reasonable approximation of Gaussian functions, with the added feature of ensuring proper color balance. In Figures 12.26 and 12.27, each green point represents
Figure 12.26 Green reconstruction points with associated resample areas.
Figure 12.27 Red reconstruction points with associated resample areas.
the green reconstruction point, while each red point represents the red reconstruction point, of the PenTile Matrix L2. Each square or edge polygon represents the resample area for each reconstruction point. Note that the red and green areas are the same, but 180° out of phase. The resample areas are defined by the areas that are closest to a given reconstruction point. The boundaries are defined as the lines or line segments that are equidistant from two or more reconstruction points. To generate sub-pixel rendering filter kernels, the resample areas are overlaid with the sample areas of the input conventional RGB pixel array. For 'one-pixel-to-one-sub-pixel' mapping, each red and green sub-pixel resample area overlaps five input pixel sample areas, as shown in Figure 12.28. The center pixel covers 50% (1/2) of the resample area. Each neighboring pixel covers 12.5% (1/8) of the resample area. The total equals 100%. Each of the area percentages is used in the two dimensional anti-chromatic-aliasing filter kernel. Red and green low pass anti-chromatic-aliasing filter kernels:

    0      0.125  0
    0.125  0.5    0.125
    0      0.125  0
Figure 12.28 Resample areas mapped to conventional data set to determine filter kernel.
The blue sub-pixels are mapped to four input pixels; thus the filter kernel is a two by two filter with each coefficient equal to 25% (1/4):

    0.25  0.25
    0.25  0.25
The filter kernel represents coefficients in an equation; the positions of the coefficients in the table above represent the relative positions of the input pixels being resampled to the sub-pixel reconstructing them. The coefficients are multiplied by the values of the input pixels, then summed and used as the value of the reconstructing sub-pixel. To generate a simple DOG wavelet sharpening filter, a normalized sharpening area filter, defined by the centers of the four nearest neighbors of the same color, is subtracted from the area resample filter as shown in Figure 12.29 and mathematically below:

    0      0.125  0         0.0625  0.125  0.0625        -0.0625   0        -0.0625
    0.125  0.5    0.125  -  0.125   0.25   0.125      =   0        0.25      0
    0      0.125  0         0.0625  0.125  0.0625        -0.0625   0        -0.0625
    Area Resample           Sharpening Area                DOG Wavelet
Note that both the Area Resample and the Sharpening Area filters have an integral sum of one by definition. Therefore, the DOG Wavelet integral sum is zero, since one minus one equals zero. The sharpening filter may have almost any amplitude; the higher the absolute values of the coefficients in the filter, the greater the sharpening effect. However, one set of values is of particular value. If the DOG wavelet uses the coefficients 0.25 and -0.0625, as shown both above and below, then the horizontal and vertical MTF will equal one (1) for all spatial frequencies from zero up to and including the Nyquist Limit. Note that when added to the Area Resample kernel, the resulting filter kernel still has an integral sum of one. Also note that each outer column and row of the resulting kernel sums to zero. It is this property that gives rise to the MTF of one for all horizontal and vertical spatial frequency components, but still rolls off to zero MTF
Figure 12.29 Generating Difference of Gaussians (DOG) filter kernel.
for diagonal spatial frequencies that approach the diagonal Nyquist Limit. These DOG values may be spread over the three filter kernels for the red, green, and blue color input channels, proportionally to their contributions to white luminance. The blue input and even the output channels may or may not be sharpened, as the yellow/blue chroma channel in the HVS would not be able to perceive the difference.

    -0.0625   0      -0.0625       0      0.125  0          -0.0625   0.125  -0.0625
     0        0.25    0        +   0.125  0.5    0.125   =   0.125    0.75    0.125
    -0.0625   0      -0.0625       0      0.125  0          -0.0625   0.125  -0.0625
     DOG Wavelet                   Area Resample              Sharpening Filter Kernel
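A short numerical check of the kernel algebra above, assuming numpy is available; the values are the ones given in the text for the red/green planes of the L2, and the script is a verification sketch rather than rendering code.

```python
import numpy as np

area_resample = np.array([[0.0, 0.125, 0.0],
                          [0.125, 0.5, 0.125],
                          [0.0, 0.125, 0.0]])

sharpening_area = np.array([[0.0625, 0.125, 0.0625],
                            [0.125,  0.25,  0.125 ],
                            [0.0625, 0.125, 0.0625]])

dog_wavelet = area_resample - sharpening_area      # zero-sum Difference of Gaussians
sharpening_kernel = area_resample + dog_wavelet    # the combined filter kernel

assert abs(area_resample.sum() - 1.0) < 1e-12
assert abs(dog_wavelet.sum()) < 1e-12              # zero sum: colour balance preserved
assert abs(sharpening_kernel.sum() - 1.0) < 1e-12  # unity sum: brightness preserved

# Collapsing the combined kernel along rows or columns leaves [0, 1, 0],
# the property that yields an MTF of one for horizontal and vertical frequencies.
print(sharpening_kernel.sum(axis=0))   # -> [0. 1. 0.]
print(sharpening_kernel.sum(axis=1))   # -> [0. 1. 0.]
```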
These simple Area Resample approximations of Gaussian functions work quite well for one-pixel-to-one-sub-pixel mapping, especially for non-band-limited images. They also work very well for scaling up an image. However, for down sampling supersampled images, the flat Area Resample function is replaced with better approximations of true Gaussian functions for best results.
12.6 RGBW Color Theory
Liquid Crystal Displays are essentially light modulating devices. A bright backlight supplies light that is modulated by the sub-pixels to render an image. The color filters overlying the sub-pixels substantially reduce the brightness. Additional light is lost due to the drive electronics blocking light passage and to absorption by the polarization layers. In a high resolution display, as little as 4% of the light from the backlight reaches the viewer. One way to improve this has long been known but had proven impractical until now: removing the color filter from some of the sub-pixels, creating a 'white' sub-pixel. One reason that this was impractical was that adding a fourth color meant that a display needed four sub-pixels, RGBW, per pixel. Another reason was that adding white to an image tends to desaturate, or wash out, the colors. A biomimetic display with a digital signal processing gamut mapping algorithm plus sub-pixel rendering (GMA+SPR) allows fewer sub-pixels per pixel and keeps the color correct [10]. The 'real world' is a subtractive color system. Save for relatively rare emissive light sources such as LEDs and lasers, high brightness saturated colors are not found in the real world scenes that are viewed by the Human Vision System in the course of an individual going about his or her daily life. In daily experience, colors are formed by relatively bright white light falling onto pigmented objects that absorb some portion of the light and reflect the rest. Colors are formed by selectively absorbing part of the spectrum and reflecting another part of the spectrum. Non-color-saturated objects such as white or pastel colored objects may reflect most of the light, thus being radiometrically and visually brighter than saturated color objects. Conversely, objects that form saturated colors absorb most of the light and reflect only a narrow band (or bands, in the case of purple or magenta) of the full spectrum of light falling on them. This reduces the brightness of saturated colored objects compared to non-saturated
Figure 12.30 Real world luminance vs. color saturation relationship.
color objects, as shown in Figure 12.30. This is especially true for saturated colors that are near the corners of the color triangle of red, green, and blue, as to achieve these colors the light must be in very narrow wavelength bands. Furthermore, specular reflections, which do not substantially alter the spectrum of light falling on an object's surface, give reflection highlights that are non-color-saturated, even on objects that are observed to be highly color-saturated in lambertian reflection. These highlights are the brightest portions of many natural scenes (e.g. the mirror-like reflection of an overhead light on a brightly colored billiard ball is white, not colored). Thus, by their very nature, real world scenes may have bright non-color-saturated objects and darker color-saturated objects. Figure 12.32 shows a natural image with bright, highly saturated colors. This photograph is an extreme example, showing brightly colored objects; a typical photograph might not have such brightly colored objects. Shown in Figure 12.31 is a histogram representation of the colors present, showing the brightness and CIE color coordinates of the colors found in Figure 12.32. Note how the brightest red,
Figure 12.31 Histogram of color brightness for Figure 12.32.
Figure 12.32 Image with bright saturated colors. (See Plate 9).
Figure 12.33 Histogram of color brightness for Figure 12.34.
Figure 12.34 Image with bright unsaturated colors. (See Plate 10).
green, and blue colors are far darker than the white. The brightest of the strong colors is the yellow; note that even this color is only half as bright as the white. In Figure 12.33 is a representation of the brightness and CIE color coordinates of the colors found in Figure 12.34. This plot shows the typical distribution of colors vs. brightness found in large ensembles of natural images. When comparing the statistical occurrence of highly saturated colors vs. non-saturated colors, saturated colors are relatively rare in natural images. When saturated colors do occur, they are quite dark, as can be seen in the histograms. Further, given the subtractive nature of color formation in natural scenes, bright saturated colors are almost non-existent. For electronic displays to render natural scenes, it would be best for them to be able to create very bright non-color-saturated colors and darker highly saturated colors. The conventional three primary RGB system is a color additive system whose non-saturated color brightness is limited to the addition of partially-saturated colors. The brightness/saturation color gamut of the RGB system generally has brighter non-saturated colors, but fails to reproduce the very bright non-saturated colors of real scenes, as shown in Figure 12.35. There is a trade-off between the brightness of the non-saturated colors and the color saturation gamut of the display. The more saturated the color filters, the less these filtered colors may add to the non-saturated brightness. This creates a luminance/saturation compression in which the non-saturated colors are reduced in brightness and saturated colors are compressed, desaturated, to fit within the limitations of the compromise system. Another color formation system is required to better display natural images. An RGBW Liquid Crystal display provides an additional primary: White. These White sub-pixels are substantially brighter than the Red, Green, and Blue sub-pixels since the White is formed by using a transparent filter that allows substantially all of the light through, while the other three colors are
Figure 12.35 RGB Primary luminance vs. saturation.
formed by filtering out all but a narrow band of the spectrum. Since such filters are not ideal bandpass filters, the transmissivity is less than 100% even in the desired bandpass wavelengths, which further darkens the sub-pixel. The White sub-pixel has three or more times the brightness of the colored sub-pixels. Thus, the use of a white sub-pixel significantly increases the brightness of the panel when displaying non-saturated colors. If one-fourth of the area of the panel is used for the White sub-pixel, the brightnesses of the remaining RGB sub-pixels are reduced by one fourth, as shown in the diagram in Figure 12.36.
Figure 12.36 RGBW Primary luminance vs. saturation compared to RGB Primary.
However, non-saturated colors may be formed with a contribution from the bright White sub-pixel, giving significantly higher brightness even as the colored primaries are made to cover a wider, more saturated color gamut as shown in Figure 12.37. It is interesting to note that color transparency film, such as for theatre presentation, also shares this characteristic that the non-saturated colors, especially white, are several times brighter than the brightest saturated primary color. In this regard, an RGBW system is better able to reproduce the theatre film experience. RGBW displays have a brightness/color gamut envelope shape that is closer
Figure 12.37 Increasing the color gamut of the RGBW Primaries.
to that of the ‘real world’ as well, as shown in Figure 12.38. The loss of brightness of the saturated colors is an acceptable trade-off when considering the statistics of natural images. However, as the choice of conventional RGB primaries was a compromise between the desired saturation of the primaries and the brightness of non-saturated colors, the introduction of the White sub-pixel offers a new optimization point.
Figure 12.38 RGBW Primary system is closer to the real world color brightness gamut.
Since the White sub-pixel provides the majority of the brightness of non-saturated colors, the saturation of the RGB primaries may be increased with only a minor decrease in the brightness of the non-saturated colors, as shown in Figure 12.37. This decrease may be partially or even completely offset by the increase in aperture ratio afforded by the 1:2 sub-pixel aspect ratio in the L6W, especially for small panels, given their reduced horizontal sub-pixel density. This may result in an RGBW display that has both an increased brightness and saturation envelope at every point compared to the conventional RGB display.
Fully saturated colors may be darker relative to white on the optimized RGBW system than on the RGB system, but at the maximum-saturation color points of the reference RGB system, the RGBW brightness may be the same or higher. No real loss of color brightness occurs. The optimized RGBW system more closely approaches the 'real world' natural image envelope and statistics, providing higher brightness and deeper saturation as shown in Figure 12.38. But this raises the question, 'If RGBW systems are superior, why aren't they more common?' The answer is that prior to Clairvoyante's Gamut Mapping Algorithm (GMA), using RGBW meant that colors were washed out, desaturated, by the inadequate earlier GMAs. Further, before Clairvoyante's Sub-pixel Rendering (SPR), adding white meant adding sub-pixels, increasing the cost. Clairvoyante's RGBW system maintains color accuracy while lowering cost and increasing brightness. Early algorithms for RGBW usually treated the White sub-pixel as though it represented the luminance value 'Y' of the image, converting from YUV to RGBW color format:

R = Y + 1.371 V;   G = Y - 0.698 V - 0.336 U;   B = Y + 1.732 U;   W = Y     (6)
This severely desaturated all of the colors, especially the greens, as the more saturated green colors were more heavily weighted in the Y component by the conversion from RGB to YUV. To overcome this shortcoming, more recent algorithms used:

R = R;   G = G;   B = B;   W = min(R, G, B)     (7)
This algorithm was an improvement in that maximally saturated colors retained their proper saturation. But moderately saturated colors were still severely desaturated, as any addition of white, without adjusting the values of the remaining primaries, will always desaturate the color. Recent work from other investigators has focused on 'color correction' of the above algorithm using 'gamma correction', but this is both complicated and unsatisfactory; and the 'min(RGB)' algorithm does not utilize the full brightness/saturation gamut envelope. A different approach is required. Modern algorithms, including Clairvoyante's proprietary algorithm, treat the White sub-pixel as it should be treated: as another color primary. Starting from color theory basics, the algorithm transforms the values of the input RGB using linear matrix multiplication. This results in a transform that maintains the hue and saturation of all colors within the gamut hull, even correcting for non-standard color primaries if required. Consider the diagram in Figure 12.39. It shows a color space diagram consisting of three vector scales, red, green, and white, originating at black. This is a projection of a three primary display consisting of red, green, and blue primary pixels or sub-pixels, in which the color space is projected onto the red/green color plane, the blue color vector projection lying coincident with the white color vector projection. Or it may be viewed as the result of a four primary display consisting of red, green, blue, and white primary pixels or sub-pixels, projected onto the red/green color plane in the same way. The diagram in Figure 12.39 also shows how two primary vectors, red and green, may by vector addition result in a unique color point. It will be understood that a three color vector addition of red, green, and blue will also result in a unique color point in a three dimensional color space that may be projected onto the red/green color plane of this diagram. Conversely, to reach a given point there will be one and only one set of red and green vectors that may by vector addition reach the point, since the red and green vectors are orthogonal to each other. In the example given here the red vector is three units of red energy along the red axis, while the green vector is four units of green energy along the green axis. Thus the resulting color point can be said to have a red/green color space coordinate of '(3,4)'.
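The two simple conversions of equations (6) and (7) can be written down directly; the third function below is an illustrative metamer-style decomposition that rescales the RGB residual after the white component is extracted. It is a sketch of the general idea only, not Clairvoyante's proprietary GMA, and the function names and the w_gain parameter are invented for this example.

```python
def yuv_to_rgbw(y, u, v):
    """The early 'W = Y' mapping of equation (6); it desaturates colours."""
    r = y + 1.371 * v
    g = y - 0.698 * v - 0.336 * u
    b = y + 1.732 * u
    return r, g, b, y

def rgb_to_rgbw_min(r, g, b):
    """The 'W = min(RGB)' mapping of equation (7); better, but still desaturating
    because the RGB values are not rescaled after white is added."""
    return r, g, b, min(r, g, b)

def rgb_to_rgbw_rescaled(r, g, b, w_gain=1.0):
    """Illustrative metamer-style decomposition: move the common component into W
    and keep the residual chroma in RGB, so R + W, G + W, B + W reproduce the
    input colour exactly (in matched luminance units)."""
    w = w_gain * min(r, g, b)
    return r - w, g - w, b - w, w

print(rgb_to_rgbw_min(0.9, 0.5, 0.5))       # white added on top -> washed out
print(rgb_to_rgbw_rescaled(0.9, 0.5, 0.5))  # (0.4, 0.0, 0.0, 0.5): same colour, W carries the grey
```

Running the last two lines shows the point made above: adding W = min(RGB) on top of unchanged RGB values brightens and desaturates the colour, whereas moving the common component into W leaves the reproduced colour unchanged.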
Figure 12.39 Color Vector Space with red and green primary vectors.
The diagram in Figure 12.40 shows how three primary vectors, red, green, and white, may by vector addition result in the same unique color point. In the example given here, the red vector is two units of red energy along the red axis, the green vector is three units of green energy along the green axis, while the white vector is one unit of white along the white axis. However, the white vector may be decomposed into red and green vector components of one unit of energy each. The resulting color point can be said to have a red/green color space coordinate of '(3,4)'. Note that to reach a given color point, many possible combinations of red, green, and white vectors may be used. The combination of RGW to reach the '(3,4)' color point may be 2,3,1 as in this example; or it may be 1,2,2 or 0,1,3. Each of these combinations of color vectors is called a metamer for that given color point. These different metamers are useful in improving sub-pixel rendering, as will be explained below. Historically, using a fixed brightness white backlight, the brightest white input color in the RGB gamut is mapped to the highest white in the RGBW gamut. This creates a 'Virtual RGB' color space as shown in Figure 12.41. The brightness increase from RGB to RGBW may be a linear and constant multiplier for all colors inside of the RGBW brightness/saturation gamut hull, mapping a color A to A′. This is where most of the colors of natural images lie. In the less likely event that a bright, saturated color C in the image exceeds the RGBW brightness/saturation gamut hull when mapped to C′, the GMA maps the color to the surface of the color gamut hull by one of several possible methods, selectable by setting registers in the GMA. One method, 'clamp to luminance', is to map the color to the most saturated color of the same hue and brightness that the display may render, allowing the saturation to decrease to C″. This has the advantage of maintaining simultaneous luminance contrast, an important feature for the Human Vision System. Another method, 'clamp to black', is to map to the highest brightness possible while maintaining hue and saturation. Another choice is to use a system that maps all colors in a line
Figure 12.40 Color Vector Space with red, green and white primary vectors.
Figure 12.41 Mapping colors from RGB to RGBW color space when using a fixed level backlight.
between black and the brightest possible input color of a given saturation, linearly or non-linearly, onto the line between black and the brightest output color of that same saturation. This last algorithm darkens more highly saturated colors that would otherwise map to colors inside of the gamut hull, to make room for the colors that would otherwise map outside of the gamut hull [16]. These algorithms may be called 'compress to black'. Similarly, one may also 'compress to luminance'. Unlike a fixed brightness backlight, a dynamic backlight whose brightness is varied in response to the image content is able to reproduce the full range of colors. The diagram in Figure 12.42 shows the resulting color/brightness gamut of an RGB primary color display using the vector space diagram introduced in Figure 12.39, above. The maximum saturated red
Figure 12.42 RGB color gamut.
color forms one corner, while the maximum green saturated color forms another corner of the color gamut. When all of the colored primaries, the red, green, and blue (not shown for clarity) primaries, are turned on to their maximum value of five units, the result is the maximally desaturated color, white, with a value of five units. The three color primaries form a gamut hull in the shape of a cube. The choice of units is arbitrary; the use of five units here is only for explanatory convenience. The diagram in Figure 12.43 shows the resulting color/brightness gamut of an RGBW display with color primary vectors that may reach five units maximum. The maximum saturated red color forms one corner while the maximum green saturated color forms another corner of the color gamut. When all of the primaries, the red, green, blue (not shown for clarity) and white primaries, are turned on to their maximum value of five units, the result is the maximally desaturated color, white, with a value of ten units. The four primaries form a cube that has been stretched in the white direction. Each color point inside of the color gamut may be formed from a number of metameric vector combinations of red,
Figure 12.43 RGBW color gamut.
green, blue, and white primaries. The gamut hull surface can only be formed from one combination of the four primaries. For the same backlight power, the same brightness of saturated color may be reproduced in the PenTile RGBW™, since the color sub-pixels have a higher aperture ratio to offset the area surrendered to the white sub-pixel. That is to say, the maximum red in the RGB color gamut is the same brightness as the maximum red in the RGBW color gamut, and so on for green and blue. Note that the maximum value of the color gamut of the RGBW display, at ten units of white, is twice that of the RGB display in Figure 12.42 above, at only five units, for the same backlight power. Thus, for a given specified maximum white brightness required for a display, the backlight power of a PenTile RGBW panel may be reduced to half that of an RGB display. The vector diagram in Figure 12.44 shows the resulting color/brightness gamut of an RGBW display that has the backlight reduced by half, resulting in the primaries each having half their previous values. The maximum saturated red and green are two and a half (2.5) units. The additional corners that result from the vector addition of the red and white primaries, as well as the vector addition of green and white, are each reduced by half. The maximum white point value is reduced by half, to five units, to be equal to the maximum of the RGB display above. The vector diagram in Figure 12.45 shows the RGBW color/brightness gamut with the backlight set to 50% power fully enclosing a typical natural image color/brightness gamut. The brightest white of the image, which is as bright as the RGB color space will allow, is identical to the brightest white of the RGBW gamut with the backlight set to 50%. Since all of the colors used in the image fall within the color/brightness gamut of the RGBW display with its half power backlight, no out-of-gamut (OOG) mapping or backlight adjustment would be needed. Thus, black and white text and many natural images will use only 50% backlight power.
Figure 12.44 RGBW gamut at 50% backlight power.
In contrast, examining the vector diagram in Figure 12.46, while the brightest white of this image is within gamut, portions of the color/brightness gamut of this image, the bright saturated colors, exceed the RGBW saturated color/brightness gamut at 50% backlight brightness. However, when the brightness of the backlight is adjusted, increased to 95%, the RGBW color/brightness gamut is increased sufficiently to contain all of the image color gamut. This allows very bright saturated colors, such as might be found in television advertisements and cartoons, to be faithfully reproduced. Conversely, the diagram in Figure 12.47 shows another situation, where the input image is dark and has a gamut that lies completely inside the RGBW gamut at 50% backlight power. In this case the backlight brightness can be reduced to lower than half. This would result in a smaller gamut that still encloses all the colors in the image gamut. This can be used to reduce backlight power requirements when dark images are displayed. The GMA will remap the quantized driver output levels, allowing more of them to be used when showing darker images, reducing quantization noise, the 'banding' found in digital televisions. Statistically, given the usual range of color brightness and saturation in natural images, when viewing video or a selection of images the backlight power will average to around half of the peak value. Other modes of operation are available. For example, an ambient light sensor may be used to change the behavior of the system. For outdoor sunlight ambient conditions, a high brightness mode may be used that maps the brightest white of the RGB color space to the brightest white of the RGBW color space with the backlight at 100% power, doubling the brightness. In darkness, the backlight may operate at less than 50% power for black and white images.
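The backlight logic described above can be sketched as follows (Python/numpy). The 50%-backlight normalisation and the choose_backlight function are assumptions made for this illustration, not the register-level controller of an actual panel.

```python
import numpy as np

def choose_backlight(rgbw_frame, min_fraction=0.1):
    """rgbw_frame: array (..., 4) of linear drive values, where 1.0 is full scale
    with the backlight at 50% power.  Returns (backlight_fraction, rescaled_frame)."""
    peak = float(rgbw_frame.max())
    # 50% backlight reproduces peak = 1.0; brighter saturated content needs more,
    # darker frames need less.
    fraction = np.clip(0.5 * peak, min_fraction, 1.0)
    rescaled = rgbw_frame * (0.5 / fraction)      # keep displayed luminance constant
    return fraction, np.clip(rescaled, 0.0, 1.0)

dark_frame = 0.3 * np.random.rand(240, 320, 4)    # a dim stand-in image
fraction, drive = choose_backlight(dark_frame)
print(f"backlight at {100 * fraction:.0f}% of peak power")
```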
Figure 12.45 RGBW gamut hull enclosing typical natural image gamut.
Figure 12.46 RGBW gamut enclosing bright saturated image gamut.
Figure 12.47 RGBW gamut enclosing a dim image.
12.7 RGBW Sub-pixel Rendering
Historically, the first RGBW displays were modifications of the Bayer pattern. One of the green sub-pixels was converted to white to become the RGBW Quad pattern shown in Figure 12.48. This is biomimetic, since the white sub-pixel works with the green to provide the higher resolution luminance signal reconstruction. While this layout solves the color balance issue found in the original Bayer
Figure 12.48 RGBW Quad Layout.
Figure 12.49 PenTile RGBW™ (L6W).
pattern, it remained a poor map to conventional images. Because of the lack of a good sub-pixel rendering algorithm, this pattern was mapped as four sub-pixels per pixel, treating them as a single square pixel. Figure 12.49 shows a better arrangement, similar to the Quad pattern in being biomimetic, which uses green and white to reconstruct the luminance signal. This layout maps well to conventional data sets using sub-pixel rendering. This PenTile RGBW™ L6W is driven with a digital signal processing algorithm that converts the RGB or YCrCb signal to a colorimetrically equivalent RGBW signal, sub-pixel renders the image, and selects the best RGBW metamer for each sub-pixel based on the image content, from the many possible combinations of RGBW that provide the same color and brightness. The result is a bright, colorful display that averages only two sub-pixels per pixel [10].
12.8 RGBW Sub-pixel Rendering Algorithm
After conversion from RGB to RGBW color vectors for each input pixel, the image is sub-pixel rendered. The nature of sampling and filtering a three value vector onto a three color LCD results in nine filter values [12]. Thus, the expected number of filters for the RGBW sub-pixel rendering algorithm would be sixteen. A simplification may be made by extracting the luminance value of the original data and sub-pixel rendering from that. Thus, the RGBW sub-pixel rendering algorithm is reduced to only eight filters, as shown in Figure 12.50. This also allows the sub-pixel rendering algorithm to be independent of the assumptions of the GMA on how it handles metameric selection and gamut clamping or compression.
Figure 12.50 RGBW vector sub-pixel rendering.
The digital signal processing SPR algorithm must filter out spatial frequencies on each color plane which would alias with the colored sub-pixel arrangement due to sub-sampling. This necessarily has the effect of softening the image. However, a degree of freedom exists in that luminance energy may be transferred between same color sub-pixels, which the SPR algorithm does, using a biomimetic, small support approximation of a two dimensional Difference Of Gaussians (DOG or ‘Mexican Hat Wavelet’) to impress the luminance signal upon each color sub-pixel grid down to the sub-pixel level as shown in Figure 12.51. This increases the image sharpness and apparent contrast by directly matching and stimulating the DOG filters in the retina [9].
Figure 12.51 Sharpening by energy exchange between same color sub-pixels.
Selection from among RGBW metamers for a given color and brightness gives the digital signal processing system a degree of freedom to optimize the rendering of the image down to the sub-pixel level. The axis of freedom is found in the luminance energy transfer between the white sub-pixels and the colored sub-pixels: W ↔ RGB. The PenTile RGBW™ L6W layout may be analyzed as a checkerboard of white sub-pixels alternating with colored sub-pixels, as shown in Figure 12.52. Thus the SPR algorithm for the PenTile RGBW™ display uses two degrees of freedom, transferring energy between same color sub-pixels and between the white and the colored sub-pixels, again driven by a small support approximation of the Difference of Gaussians function.
Figure 12.52 Sharpening by energy exchange between W and RGB sub-pixels.
Figure 12.53 Generating Difference of Gaussians (DOG) Metamer Sharpening filter kernel.
The L6W layout uses the same area resample and DOG sharpening filters as the L1 and L2, which were explored above. It is treated as though it were a checkerboard of alternating white and colored sub-pixels. The red, green, and blue sub-pixels form a group, centered on the green sub-pixel, that is mapped to the same input pixels. On such a checkerboard, it is desirable to concentrate image contrast by selecting metamers based on the image content. For example, a white sub-pixel may be mapped to a pixel that has a darker value than the surrounding pixels. It is then desirable to darken that pixel by shifting energy to the surrounding color sub-pixels. This is accomplished by using a DOG Metamer Sharpening filter, generated by subtracting a normalized sharpening area, defined by the centers of the nearest surrounding sub-pixels of the opponent metameric pair, from the central metameric pair area, as shown in Figure 12.53 and mathematically below:

    0  0  0       0      0.125  0           0       -0.125    0
    0  1  0   -   0.125  0.5    0.125   =   -0.125   0.5      -0.125
    0  0  0       0      0.125  0           0       -0.125    0
    Area Resample     Sharpening Area        DOG Wavelet
This DOG Wavelet is used to sample the reference luminance image plane. Note that when added to the color plane area resample filter result, the DOG Wavelet exactly cancels the blurring effect of the area resample filtering:

    0      0.125  0            0       -0.125    0          0  0  0
    0.125  0.5    0.125    +   -0.125   0.5      -0.125  =  0  1  0
    0      0.125  0            0       -0.125    0          0  0  0
    Area Resample              DOG Wavelet                   Effective Filter
    (Color Plane)              (Luminance Plane)             (Luminance Image)
The area resample filtering removes any spatial frequencies in the chromatic components of the image that might alias with the sub-pixel layout pattern. The DOG wavelet ensures that the high spatial frequencies of the luminance component of the image are reconstructed with high fidelity.
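The cancellation claimed above is easy to verify numerically; a small sketch assuming numpy, with the kernels quoted in the text:

```python
import numpy as np

identity = np.array([[0, 0, 0], [0, 1, 0], [0, 0, 0]], dtype=float)  # area resample for the white sub-pixel
sharpening_area = np.array([[0.0, 0.125, 0.0],
                            [0.125, 0.5, 0.125],
                            [0.0, 0.125, 0.0]])

dog_wavelet = identity - sharpening_area       # zero-sum metamer-sharpening DOG

# Added to the colour-plane area-resample result, the wavelet cancels the blur exactly.
effective = sharpening_area + dog_wavelet
assert np.array_equal(effective, identity)
print(dog_wavelet)
```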
12.9 Gamma Correction and Quantization Error Reduction
The human eye and brain experience brightness change as a percentage change, not as an absolute radiant energy measurement. As the brightness increases, it takes a larger absolute increase in radiant energy to produce a given perceived increase in brightness. This leads to a requirement: for equal perceived increments in brightness, or Lightness, on the screen, each increment must be logarithmically higher than the last. This curve is given by the following equation:

L = E^(1/γ)     (8)
The curve is conventionally called the 'Gamma Curve', after the exponent γ. Displays are designed to output a gamma of approximately 2.2 to fit the logarithmic requirement of the human eye. Images that are quantized to expect this non-linear gamma are 'perceptually encoded'.
data →[g⁻¹, display gamma]→ g⁻¹(data) →[g, eye]→ data (brain)     (9)
The RGB to RGBW GMA and sub-pixel rendering algorithms are designed to work within a linear luminance space. When luminance energy is transferred between color primaries by the gamut mapping algorithm, or when a sub-pixel rendered image with very high spatial frequencies is displayed directly on a display with a non-unity gamma, color errors occur. As can be seen in the graph in Figure 12.54, the center of the white dot logical pixel on the PenTile L1 or L2 is set to an input of 50%, while the display gamma means that the display radiant energy is actually
Figure 12.54 Color error occurs without gamma correction.
25%. The problem occurs when the surrounding four pixels, set at 12.5%, are actually displayed at only 1.6%! The colors must balance in linear luminance energy to be the correct color. However, 25% ≠ 4 × 1.6% = 6.4%, and thus the center color dominates, producing a colored dot. On more complex images, color error induced by the non-linear display creates errors in portions that have high spatial frequencies in the diagonal directions. The solution is a gamma correction pipeline in the GMA + SPR data path. Starting from the display and working backwards may seem backwards, but it makes it easier to understand the rationale for each
component’s function. We want the brain to see the data that was in the original image ‘’, that has been perceptually encoded, but we want it to be RGBW sub-pixel rendered ‘p ’. The human eye has a response function, ‘g’, that is approximately g(x) ¼ x1= ¼ x1=2:2 . In order to match, or rather cancel this function, the graphics system, but not necessarily the display itself, has been designed to have a gamma curve, a mathematical function that is the inverse of the eye’s mathematical response function; call it ‘g1 ’. When convoluted by the eye, g1 p becomes p . As noted earlier, the display gamma interferes with the sub-pixel rendering. To achieve the linear display system, the function of the display ‘f’ is identified and a gamma correction (or rather a cancellation function) ‘f 1 ’, which is generated to be the inverse of f. This function is applied after sub-pixel rendering, but before the display. This insures that the sub-pixel rendered data, with the proper gamma curve that matches the Human Vision System reaches the eye undisturbed by the display. But where do we impress upon the image the proper gamma term g1 ? It cannot be after sub-pixel rendering; therefore, it must be right before, in a precondition step. Thus we have the full data pipeline shown below. g1 Sp f 1 f g ! g1 ! g1 p ! f 1 g1 p ! g1 p ! p precondition GMA þ sub þ pixel gamma display eye brain gamma rendering correction curve response
ð10Þ
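A linearised sketch of the pipeline in equation (10) is shown below (Python). GAMMA, the idealised display curve f, and the placeholder spr() stage are assumptions for illustration; a real panel's f would be measured and is quantized, which is exactly what equation (11), following, addresses by adding noise before the final quantization.

```python
import numpy as np

GAMMA = 2.2

def g(x):     return x ** (1.0 / GAMMA)   # eye / perceptual encoding response
def g_inv(x): return x ** GAMMA           # gamma precondition (to linear)
def f(x):     return x ** GAMMA           # assumed display transfer curve
def f_inv(x): return x ** (1.0 / GAMMA)   # gamma correction applied before the display

def spr(linear_rgb):
    """Placeholder for GMA + sub-pixel rendering; it must operate on linear data."""
    return linear_rgb                     # identity, for illustration only

encoded   = np.random.rand(8, 8, 3)       # perceptually encoded input image x
linear    = g_inv(encoded)                # precondition
rendered  = spr(linear)                   # GMA + SPR in linear space
driven    = f_inv(rendered)               # gamma correction
on_screen = f(driven)                     # what the display emits (linear light)
perceived = g(on_screen)                  # what the eye reports

assert np.allclose(perceived, encoded)    # the pipeline is transparent end to end
```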
With the gamma pipeline in place, the GMA þ sub-pixel rendering produces the correct color balance. This also introduces both a problem and an opportunity. The problem exists when the desired gamma function g1 does not match the quantized electro-optical transfer function f. The opportunity exists when the desired gamma function g1 does not match the electro-optical transfer function f. The problem and the opportunity are bound together because it is possible to use the gamma pipeline to obtain an ideal system gamma on a display with a non-ideal quantized gamma. However, the problem arises when the grayscale bit depth of the display is the same, or even less, than the input image. In these cases, mapping and quantization error can create objectionable artifacts. The solution is to add dithering noise before the final quantizing function Qf 1 . g1
!
g
precondition
1
g p
rendering
!
1
g þ noise1
f 1 1
1
g p noisy ! Qf g p noisy ! Qg p noisy X GMA þ sub þ pixel gamma quantizer display
ðremoveÞgamma
!
Qf 1
noise
Sp 1
ðgamma correctionÞ
levels
!
p
eye
brain
response ð11Þ
Now, examining the system going through the pipeline from beginning to end, following the image data: the incoming image has been perceptually quantized with a given gamma, perhaps the sRGB standard, approximating the inverse of the eye's non-linear response g. The gamma preconditioning step matches the incoming image quantization, converting the non-linearly quantized image data x to the linear format g⁻¹x. This point is important: although the g⁻¹x representation is used here for convenience, this data is actually linearly quantized data, and may be operated upon by the GMA and SPR functions that follow. Converting from the perceptually quantized data set to a linear data set requires an increase in bit depth to represent the linear values. For example, eight bit perceptually quantized data may be expanded to eleven bit linear data by a Look-Up-Table (LUT). After all linear color conversion, filtering, and mapping operations are performed, a scaled dithering noise signal is added to the data. The noisy image data is then quantized, reduced in bit depth to that of the display (for example, six bits). This quantization is performed with an LUT which stores the brightness values available on the display. Note that since this LUT matches the display, the display need not have
perfectly adjusted display levels, allowing for less expensive drivers. The noisy image on the screen is smoothed to the desired perceptual image by the filtering operations in the HVS before it is sent to the rest of the brain. A key enabler of the above pipeline is a source of noise that is designed to be beyond the Spatiotemporal Contrast Sensitivity Function of the HVS. Such noise may easily be constructed using high spatiotemporal frequencies of low amplitude in the chromatic and metameric channels of the RGBW sub-pixel rendered signal. Such noise is called 'UV Noise' or 'Invisible Noise'. The noise is stored in the rendering core in the form of a 'movie clip' that is spatiotemporally tiled across the entire display screen. When added to the signal before the final quantizing LUT, the noise causes the output of the LUT to dither between two or more values from frame to frame and from point to point, in proportion to the position of the desired brightness level relative to the available quantized display brightness levels. The HVS, being unable to see the high spatiotemporal frequency component of the dither, averages the dither to recover the higher bit depth grayscale value. This allows eight to twelve bit grayscale and gamma precision performance from low cost six bit LCD drivers.
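A rough sketch of this dither-and-quantize stage is given below (Python/numpy). The LUT construction, the 11-bit linear width, the random noise tile, and the nearest-level search are all assumptions made to keep the example self-contained; the text describes a purpose-designed 'invisible noise' movie clip and production LUTs, which are not reproduced here.

```python
import numpy as np

GAMMA = 2.2
# 8-bit perceptually encoded input -> 11-bit linear values (0..2047).
decode_lut = (np.linspace(0.0, 1.0, 256) ** GAMMA * 2047).round().astype(np.int32)
# The 64 linear brightness levels a 6-bit panel with a gamma-2.2 response can produce.
display_lut = np.linspace(0.0, 1.0, 64) ** GAMMA * 2047

def quantise(frame8, noise_clip, frame_index):
    """frame8: uint8 image; noise_clip: (frames, h, w) tile of small linear offsets."""
    linear = decode_lut[frame8].astype(np.float64)          # 11-bit linear data
    tile = noise_clip[frame_index % noise_clip.shape[0]]
    th, tw = tile.shape
    reps = (frame8.shape[0] // th + 1, frame8.shape[1] // tw + 1)
    noisy = linear + np.tile(tile, reps)[: frame8.shape[0], : frame8.shape[1]]
    # Final quantizing LUT: pick the nearest available display level.
    return np.abs(noisy[..., None] - display_lut).argmin(axis=-1).astype(np.uint8)

noise_clip = np.random.uniform(-16.0, 16.0, size=(4, 8, 8))  # small looping noise 'movie'
frame = np.random.randint(0, 256, size=(240, 320), dtype=np.uint8)
six_bit_frame = quantise(frame, noise_clip, frame_index=0)
```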
12.10 Conclusion
Clearly, using fewer sub-pixels to reconstruct an image allows lower-cost driver electronics and increases yields by reducing the total number of transistors in the active matrix backplane. An additional benefit of these biomimetic displays is that using fewer sub-pixels per pixel means that the aperture ratio, the area in each sub-pixel that is surrounded by a fixed width black border, is larger. In combination with the added white sub-pixels, this allows more of the backlight to shine through, increasing the brightness and contrast for a given backlight power. This may be used to increase brightness, to lower the backlight power and thus extend the battery life of mobile products, to increase the saturation of the color sub-pixels for more colorful displays, or some combination of these. Reduced backlight power may also mean reduced backlight cost. Thus, biomimetic displays coupled with biomimetic digital signal processing create a better display at lower cost. The future of biomimetic color sub-pixel rendered displays is bright indeed.
References
[1] Benzschawel, T. and Howard, W. (1988) 'Method of and apparatus for displaying a multicolor image', U.S. Patent No. 5,341,153 (filed June 13, 1988, issued Aug. 23, 1994, expired 3 Nov. 1998).
[2] Betrisey, C., Blinn, J.F., Dresivic, B. et al. (2000) Displaced Filtering for Patterned Displays, SID Symposium Digest, 296–299 (May).
[3] Elliott, C.H.B. (1999) Reducing Pixel Count without Reducing Image Quality, Information Display, 22–25 (December).
[4] Elliott, C.H.B. (2000) 'Active Matrix Display Layout Optimization for Sub-pixel Image Rendering', Proceedings of the 1st International Display Manufacturing Conference, 185–187 (September).
[5] Elliott, C.H.B. and Higgins, M.F. (2002) 'New Pixel Layout for PenTile Matrix™', Proceedings of the 2nd International Display Manufacturing Conference, 115–117 (February).
[6] Elliott, C.H.B. et al. (2002) Co-Optimization of Color AMLCD Sub-pixel Architecture and Rendering Algorithms, SID Symposium Digest, 172–175 (May).
[7] Credelle, T.L., Elliott, C.H.B. and Higgins, M.F. (2002) 'MTF of High Resolution PenTile Matrix™ Displays', EuroDisplay Conference Proceedings (October).
[8] Elliott, C.H.B. et al. (2003) Development of the PenTile Matrix™ color AMLCD sub-pixel architecture and rendering algorithms, Journal of the SID, 11 (1).
[9] Elliott, C.H.B. et al. (2004) 'PenTile Matrix Technology: Low Cost, High Resolution Displays Using Sub-pixel Rendering', AMLCD'04 (July).
[10] Elliott, C.H.B., Credelle, T.L. and Higgins, M.F. (2005) Adding a White Sub-pixel, Information Display, 26–31 (May).
[11] Credelle, T.L. and Elliott, C.H.B. (2005) High-Pixel-Density PenTile Matrix™ RGBW Displays for Mobile Applications, IMID 2005 Digest, II, 867 (July).
[12] Elliott, C.H.B. (2005) 'PenTile Matrix Displays and Drivers', 2nd ADEAC, Portland, OR (October).
[13] Messing, D.S. and Kerofsky, L. (2004) Sub-pixel Rendering on Color Matrix Displays with 2D Geometries, SID Symposium Digest, 1474–1477 (May).
[14] Yoon, H.J. et al. (2005) Development of the RGBW TFT-LCD with Data Rendering Innovation Matrix, SID Symposium Digest, 244–247 (May).
[15] Marten, R., Gille, J. and Larimer, J. (1993) Detectability of Reduced Blue-Pixel Count in Projection Displays, SID Symposium Digest, XX.
[16] Lee, B. et al. (2004) Implementation of RGBW Color System in TFT-LCDs, SID Symposium Digest (May).
[17] Klompenhouwer, M.A. et al. (2003) Sub-pixel image scaling for color matrix displays, Journal of the SID, 11 (1).
13 Recent SOG (System-on-Glass) Development Based on LTPS Technology
Tohru Nishibe and Hiroki Nakamura
Toshiba Matsushita Display Technology Co., Ltd., Fukaya-shi, Japan
13.1 Introduction
One of the greatest advantages of LTPS (Low Temperature Poly Silicon) is that a variety of circuits can be realized on a glass substrate using p-Si TFTs (see Figure 13.1) [1–4]. LTPS technology is based mainly on the epoch-making excimer laser annealing (ELA) method, which can produce good quality p-Si at low temperatures, far below the melting point of Si. Repeatedly irradiated with pulses around 10 ns wide, amorphous silicon (a-Si) is suddenly heated by absorbing the light in its very near surface, melts, and re-crystallizes without transmitting the heat to the substrate. Since the research and development of LTPS TFT LCD panels started in the mid-1980s, the technology has been applied successfully not only to small displays, but also to medium- and large-screen products. These developments will lead to value-added displays. To date, we have integrated the function of the external ICs driving the pixels directly onto the periphery of the glass substrate. This yields a light and thin display with a drastic reduction in connection pins. It improves reliability against mechanical shock and relaxes the limit on the pitch between connection terminals, making it suitable for high-resolution displays. Moreover, such integration contributes to short lead-times because the lengthy development time of ICs can be eliminated. The integration level has
Figure 13.1 Difference between a-Si, poly-Si and c-Si.
progressed from simple digital circuits to more sophisticated ones, such as digital-analog converters (DACs), timing controllers, memories, and so on [5–8]. Beyond this, SOG technology also has the potential to integrate input functions in addition to the display's output function, which will pave the way for future displays. One such input function has been made feasible by using a photo-sensor system. The high sensitivity of a-Si has been applied to fingerprint certification and image-scanning devices [9,10], and to simple touch sensors [11]. However, these devices have poor or no display function at present, because of the inadequate driving ability of a-Si. These various forms of integration are collectively expressed by the term SOG technology. In this chapter, recent SOG development is briefly described, including circuit integration [12,13] and function integration such as scanner devices [14–16] and touch-panel ones [17–19], utilizing a poly-Si photo-sensor system.
13.2 Added Value
Say 'added value' and various meanings spring to mind. We take it to mean a drastic enhancement of value obtained by combining a conventional function with a new one. A simple example is a color LCD, which combines a monochrome display with color filters to produce highly expressive colors. A p-Si TFT LCD has realized high reliability against mechanical shock by reducing connection pins, which is a combination of a conventional a-Si TFT display and a p-Si monolithic driver. The combination of a conventional LCD and a speaker has realized a value-added entertainment panel, made by attaching transducers to an acrylic protection panel, in which we feel as though the people on the display are actually talking [20]. Mobile phones now incorporate many functions in a limited space. Any integration will be favorable. Plural ICs might be downsized to one chip. If various functions were integrated in the LCD panel, it would be even better. Primary factors, such as light, electricity, heat, stress, and so on, are candidates for integration. Of all these factors, the interaction between light and electricity seems to be the most easily applicable to our lives. Of course, the added value of any 'feature' must be higher than the additional 'cost' of its integration. Figure 13.2 shows the relationship between approximate additional cost and integration level. Added value corresponds to additional cost, and the integration level expresses the technique level. There are three categories of integration level: the first is a small integration level with a high cost merit; the second is a moderate integration level with a reasonable cost merit; and the third is a very high integration level with a relatively high cost merit. By applying a photo-sensor system as a key device, almost all products within the target categories can be realized. Other than those categories, there are devices with low-cost, discrete parts such as speakers, microphones, thermistors and so on. Strategically, we aim for the first
Figure 13.2 Relationship between the added value (cost merit) and integration level. There are three categories.
and second categories. Recently, however, integration of discrete devices has been emerging, especially ambient light sensors. In a typical LTPS-TFT fabrication process, a-Si films formed on glass substrates are crystallized using the ELA method. The films are doped with ions to form P+ regions for PMOS, and N+ regions and lightly-doped regions for NMOS. Through these steps, lateral p-i-n diodes are also fabricated, as shown in Figure 13.3, below. A P+ region, a lightly-doped region, and an N+ region lie side by side laterally [17]. The
Figure 13.3 Fabrication process of LTPS TFTs and photo-sensor.
dimensions of both a TFT and a photo-sensor are optimized taking account of display characteristics, such as aperture ratio, and the sensor specification, on condition that the number of fabrication steps is not increased.
13.3 Requirements for TFT Characteristics and Design Rule
Realization of LTPS circuit integration on a glass substrate depends mainly on improvement of TFT characteristics and fine structure processing [21]. Low power consumption for mobile use requires low voltage operation at high speed. The operation voltage is generally determined by the threshold voltage (Vth) of the TFT, including its fluctuation and stability. In addition to this, high density circuit integration is essential, which requires fine processing for all the elements, such as TFTs, wirings, capacitors and so on. Table 13.1 shows the roadmap along which TFT features have to be improved, with typical elemental technologies.
Table 13.1 SOG roadmap.
Figure 13.4 CPU speed (CLK) and integration scale.
Although the integration level of LTPS is almost the same as that of crystalline Si 20 years ago, actual operation at 50 MHz with a 1 μm design rule will be realized in the near future (see Figure 13.4). For further progress in LTPS, we need to make a breakthrough in two areas. One is the limit on the gate insulator thickness, which is uniquely determined by the highest voltage applied to the pixel TFT, around 15 V, if the gate insulator is to be common over the whole panel; it is estimated that the thickness needs to be at least 50 nm. The other is the difficulty of fine processing, at less than 1 μm, on a large and uneven glass substrate. It is also necessary to reduce the distribution of Vth in order to prevent an increase in leakage current and to obtain a wide margin for circuit operation.
13.4 Display with Fully-integrated Circuit
Figure 13.5, below, shows a layout of integrated circuits, all of which are implemented in LTPS. This design has a 6-bit digital data driver which supports a display of up to 262,144 colors with a reasonable periphery size. Data drivers, which are connected to the signal lines, are arranged longitudinally on the substrate and controlled by a timing controller. A gate driver scans the gate lines and drives the Cs (storage capacitor) lines with multiple level pulses. All dimensions of wiring and TFTs are shrunk to a minimum of 1.5 μm. Several power-consumption reduction approaches are arranged in the design, such as multi-driving,
Figure 13.5 Block diagram of p-Si TFT-LCD (the 320 (×RGB) × 240 display area is surrounded by a divider, counter, timing controller, reference resistor, shift register and logic circuit, level shifter and gate buffer, Cs polarity circuit and level shifter, Cs control switches (analog switches), sampling and load latches, 6-bit DAC + amplifier, selector analog switches, and OLB pads).
capacitive coupling (CC) driving [24], and power supply control for the buffer amplifiers, which together have brought the power consumption down to less than 60%.
First, the multi-driving method was applied. The buffer amplifier is of the original type: it is initialized to compensate for threshold-voltage variation in synchronization with the amplifier sampling operations, and the output signal from the buffer amplifiers is then supplied to the selector switches. In order to use the amplifier efficiently, since it has sufficient drive capability, multi-selection drive can be applied for mobile-size displays. Owing to the high TFT mobility of over 100 cm2/Vs, at least six selection drives are possible. Selective analog switches are connected to each signal line, and the output signals are distributed to each signal line through these switches. The number of multi-driving lines depends on the output performance of the buffer amplifier and the DAC within one horizontal time.
Second, the CC driving method was applied, because it showed lower power consumption than the common-inversion method. The CC driving method requires only a small voltage swing, resulting from the coupling between the LC capacitor and the Cs (storage) capacitor, and the power loss due to the parasitic capacitance between the Cs lines and the signal lines is kept small.
Third, power supply switches were added in order to reduce power consumption during the time when the buffer amplifiers are not working. We studied two layouts for the control switch of the power supply line: one places a large analog switch at the root of the power supply line, the other places an analog switch near each amplifier. We adopted the latter layout because it is hard to design a large analog switch at the root of a power supply line, where many bus lines are concentrated.
By implementing the three power-reduction approaches described above, we have developed a 5.6 cm diagonal QVGA AMLCD panel. Figure 13.6, below, shows the transmittance curve of the panel at several grayscale levels.
Figure 13.6 Transmittance curve for several grayscale levels (transmittance, %, versus grayscale level, 0–63).
Figure 13.7 Display result of 5.6 cm panel (circuit integration panel (left) and COG panel (right)).
A good gamma correction curve (gamma = 2.2) has been obtained. The size of the peripheral circuits is almost the same as that of a conventional COG (Chip on Glass) type panel, owing to the fine line-and-space rule. Figure 13.7 shows a photograph of the circuit-integration panel and the COG panel.
13.5 ‘Input Display’ with Scanning Function
As an example of the integration of an input function, we introduce a novel display which can also scan printed material placed on it. The prototype is an 8.9 cm (3.5 inch) diagonal color LTPS TFT-LCD with QVGA (320 × 240) resolution format (Table 13.2).
Table 13.2 Specification of the prototype ‘Input Display’ with scanning function.
Feature                    Display                       Input function
Display size (diagonal)    8.9 cm (3.5 inch)             8.9 cm (3.5 inch)
Pixel resolution           320 (× RGB) × 240 (QVGA)      Up to 960 × 240
Color                      260k colors                   64-level gray scale (monochrome)
Type                       Transmissive                  2D photo-sensor
In addition to the ability to display color images, it includes a data input function that enables it to capture images such as photos or printed text. Its specification is shown in Table 13.2. The captured image is monochrome with a 64-level gray scale. The input function is achieved through photo-sensor devices embedded in the pixels. The input resolution is up to 960 × 240 because a photo-sensor is placed in each sub-pixel (RGB). The schematic principle of the ‘Input Display’ is shown in Figure 13.8. The backlight illumination passes through the cell, is reflected by the surface of the article placed face down on the display, and the reflected light is guided to the photo-sensors. The current through the sensor is modulated according to the reflectance, which corresponds to the contrast of the image. Figure 13.9, below, shows the proposed pixel circuit configuration, including not only
Figure 13.8 The principle of photo-sensing. Reflected backlight on the surface of the article is introduced to the photo-sensor.
Figure 13.9 Pixel circuit configuration (input, LTPS p-i-n diode, amplifier (AMP), output).
the conventional pixel circuits, such as the gate TFT and pixel electrode, but also a sensor circuit consisting of a p-i-n diode, a capacitor and an amplifier. A voltage pre-charged at the input of the amplifier leaks through the diode at a rate that depends on the intensity of the incident light. This analog signal is then transferred to the peripheral circuit through the amplifier in a short time. Generally, p-Si is estimated to be less photosensitive than a-Si because of its lower light absorption coefficient and the difficulty of obtaining a thick layer. This disadvantage has been overcome by optimizing the structure and process conditions of the photo-sensor and by adopting the p-Si amplifier circuit; as a result, the S/N ratio has been drastically improved. Each signal is transmitted to the periphery circuit through the vertical wiring. The array panel contains three kinds of periphery circuits, as shown in Figure 13.10: a source driver, a data output circuit and a gate driver. The data output circuit plays the specific role of processing the photo-sensor signals. After the parallel output signals have been combined into a serial one, the data is taken out of the panel and re-arranged into an order suitable for display purposes.
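To make the readout principle concrete, here is a minimal sketch (not from the chapter; the capacitance, photocurrent and timing values are illustrative assumptions) of the pre-charged sensor node being discharged by a photocurrent proportional to the reflected backlight, which is how the analog output comes to encode the contrast of the scanned article.

```python
# Hypothetical first-order model of the in-pixel photo-sensor readout:
# a storage capacitor is pre-charged and then discharged by the reverse-bias
# photocurrent of the p-i-n diode, which is taken to be proportional to the
# reflected backlight reaching the sensor. All numbers are illustrative.

def sensor_voltage(reflectance, v_pre=3.0, c_store=50e-15,
                   i_photo_full=2e-12, t_int=10e-3):
    """Sensor node voltage after an integration time t_int (seconds).

    reflectance    -- fraction of backlight reflected by the article (0..1)
    v_pre          -- pre-charge voltage (V)
    c_store        -- storage capacitance (F)
    i_photo_full   -- assumed photocurrent at reflectance = 1 (A)
    t_int          -- integration (exposure) time (s)
    """
    i_photo = i_photo_full * reflectance           # leakage grows with light
    v = v_pre - (i_photo / c_store) * t_int        # linear discharge model
    return max(v, 0.0)                             # node cannot go below 0 V

# White paper reflects strongly and discharges the node further than dark ink,
# so the remaining voltage (read out via the in-pixel amplifier) maps to contrast.
for r in (0.05, 0.5, 0.9):
    print(f"reflectance {r:.2f} -> sensor node {sensor_voltage(r):.2f} V")
```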
Figure 13.10 Structure of ‘Input Display’.
Of course, it was necessary to carry out a left–right mirror conversion of the signals. The re-arranged signals were applied to the source driver and, finally, the captured image was re-displayed as it was. The gate driver controls both the pixel TFTs for displaying and the sensor circuits. Once the original picture had been placed face down on the display surface, the image was captured and re-displayed, as shown in Figure 13.11, below. These LTPS circuit integrations enable a reduction in the number of connecting pins. It will soon be possible to carry out even complicated signal arrangements in the LTPS circuits on the glass.
Figure 13.11 64-level gray scale capturing (monochrome).
The application of this technique to color image capturing has also been considered. After capturing in three steps through the R, G and B color filters, the three images are synthesized. Figure 13.12 shows that color image capturing of a photograph has been realized, although it is reproduced here as a monochrome picture.
Figure 13.12 Color image capturing.
13.6 ‘Input Display’ with Touch-panel Function
LTPS technology has also made it possible to form a photo-sensor in each pixel, which enables new functional integrations, such as a touch-panel function and a digitizer function, with no additional device on the display, as shown in Figure 13.13 [17–19,22,23]. For the touch-panel function, detecting the finger position under ambient illumination is very difficult, so detection algorithms are indispensable. Moreover, detection and display must be performed at the same time, which makes it important to judge whether the time-dependent signals of the sensor capacitors are high or low. A novel configuration of display/sensor circuits with an LSI is presented, as shown in Figure 13.15, in contrast to a conventional one, as shown in Figure 13.14.
Figure 13.13 Cross sectional views of new system and conventional system.
Figure 13.14 Block diagram of TFT-LCD having conventional configuration including 8-bit A/D converter LSI.
Figure 13.15 Block diagram of the new TFT-LCD having the photo-sensor circuit and A/D converter integrated on the glass substrate (320 × 240 sensor/amplifier array with Cs capacitors, source driver, gate driver and sensor controller, data output circuit (A/D converter), and an external image processing LSI exchanging display data, captured data, input data and control signals).
A new image processing procedure is realized to obtain gray-scale image data suitable for finger-touch location detection. These techniques eliminate the need for a readout LSI, so that display devices with input functions can be thinner and more compact overall. Figure 13.16 shows the time sequence of the developed LCD. In this sequence, sensor operation is carried out within the horizontal blanking time, owing to the higher mobility of LTPS TFTs.
Figure 13.16 Time sequence of the developed LCD (the frame period is divided into horizontal periods for lines N, N+1, N+2, …, each comprising a display time (a) and a horizontal blanking time (b); the exposure time spans the interval between successive blanking times).
In the display (data writing) time, display data is sampled onto the storage capacitor in each pixel, the same as in conventional LCDs. At the beginning of the horizontal blanking time, on the other hand, the sensor capacitors are pre-charged to a given voltage. By just before the next blanking time, the pre-charged voltage has leaked through the p-i-n diode, depending on the incident light intensity, which is closely related to the touching finger image (this interval is the exposure time), and the amplifier outputs an analog signal to the peripheral circuit in a short time. Without the amplifier in each pixel, more time would be needed to read out the signals from each pixel, which would result in a lower refresh rate. In this manner, just one bit of data per pixel is transferred to the external image processing LSI. The operation that generates higher-gradation data is performed in the image processing LSI. Figure 13.17 shows the internal circuit configuration of the new integrated A/D converter, which includes an inverter, a capacitor, a reference voltage Va and switches controlled by the signals S and /S
Figure 13.17 Internal circuit configuration and time sequence of integrated A/D converter.
(the inversion of S). During the charging period, S is set high and the capacitor holds the voltage (Vt − Va), where Vt is the threshold voltage of the inverter. The signal S is then set low and the voltage (Vin + Vt − Va) is applied to the input of the inverter. Hence the output voltage Vout is determined by a comparison between Vin and Va. Although a multiple-bit A/D conversion can be accomplished by repeating the operation with different reference voltages, the display refresh rate should be kept above 60 Hz for comfortable viewing. High accuracy and high operation speed are achieved even when there is a local Vth deviation, because the threshold voltage is compensated by the inverter itself. The specifications of the prototype LCD with an incorporated touch-panel function are summarized in Table 13.3; a touch-panel response rate of 60 Hz is achieved with a refresh rate of 60 Hz for a 3.5″ diagonal LTPS-LCD with QVGA resolution format.
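The self-compensation can be spelled out in a short derivation (a sketch based on the description above; the node naming is ours, not the chapter's):

```latex
% Charging period (S high): the inverter input sits at its trip point V_t while the
% other capacitor plate is held at the reference V_a, so the capacitor stores
%     V_C = V_t - V_a .
% Comparison period (S low): the free plate is driven to the sampled signal V_in,
% so the inverter input becomes V_in + V_C, and the output toggles when it crosses V_t:
\[
  V_{in} + (V_t - V_a) \;\gtrless\; V_t
  \quad\Longleftrightarrow\quad
  V_{in} \;\gtrless\; V_a ,
\]
% i.e. V_t cancels, which is why a local V_th deviation does not affect the decision.
```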
Table 13.3 Specification of the prototype ‘Input Display’ with touch-panel function.
Display size               3.5 in. diagonal
Number of pixels           320 × RGB × 240
Display refresh rate       60 Hz
Touch panel response rate  60 Hz
Surface brightness         200 cd/m2
Aperture ratio             70%
Figure 13.18 shows an illustration, together with a captured and calculated grayscale bitmap image, of a finger approaching the LCD under 500 lx ambient light. In the image processing LSI, the sensor data of the pixels within a particular area is averaged into a single data value for grayscale bitmapping. In this method, the grayscale depth of the new bitmap increases in proportion to the number of pixels involved in the area. It was found that, as the pixel count increases, the signal-to-noise (S/N) ratio of the captured image tends to improve, mainly because the noise is smoothed out.
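As a rough numerical illustration (not the LSI's actual algorithm; the block size and bit-flip rate are assumed), the sketch below averages one-bit sensor decisions over a block: an n-pixel block yields n + 1 possible gray levels, and the random bit noise shrinks roughly as 1/√n.

```python
import random

# Illustrative model: each pixel delivers a one-bit sensor decision, corrupted
# by random bit flips; averaging a block of pixels produces a gray value whose
# depth grows with the block size and whose noise shrinks as the block grows.

def block_average(true_level, block_pixels, flip_prob=0.1, trials=2000):
    """Return the mean and spread of the averaged gray value.

    true_level   -- underlying brightness (0..1) seen by every pixel in the block
    block_pixels -- number of one-bit sensors averaged into one gray value
    flip_prob    -- assumed probability that a pixel's bit is flipped by noise
    """
    values = []
    for _ in range(trials):
        bits = []
        for _ in range(block_pixels):
            bit = 1 if random.random() < true_level else 0
            if random.random() < flip_prob:      # random noise flips the bit
                bit ^= 1
            bits.append(bit)
        values.append(sum(bits) / block_pixels)  # (block_pixels + 1) gray levels
    mean = sum(values) / trials
    spread = (sum((v - mean) ** 2 for v in values) / trials) ** 0.5
    return mean, spread

for n in (4, 16, 64):
    m, s = block_average(0.7, n)
    print(f"{n:3d} pixels per block: mean {m:.2f}, std {s:.3f}")  # std falls ~1/sqrt(n)
```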
Figure 13.18 Schematic of the captured original image, left, and the actually calculated grayscale bitmap of the touched finger.
Figure 13.19, below, shows a 3-D graph of the calculated 256-grayscale bitmap, where the x- and y-axes represent the x–y coordinates in the bitmap and the z-axis represents the grayscale. Lower (darker) grayscale values are observed around the touched location, gradating to higher (brighter) grayscale values in the background (finger approaching under 500 lx ambient light). It was confirmed that a grayscale finger image was obtained successfully. Figure 13.20 shows a calculated grayscale bitmap and an edge-enhanced picture. The S/N ratio of the edge-enhanced picture depends on the number of photo-sensors in the averaging area. The touched location and the touch timing were calculated successfully through image processing such as edge detection, motion search and calculation of the center of gravity using the grayscale bitmap. These results demonstrate that the proposed configuration, having an amplifier in each pixel and an A/D converter in the periphery, is suitable for high-resolution sensor-integrated displays using the LTPS TFT fabrication process.
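The center-of-gravity calculation mentioned above can be sketched as follows (the bitmap values are made up for illustration; the finger is assumed to appear as the darker region):

```python
# Minimal center-of-gravity (centroid) estimate of the touched location from a
# grayscale bitmap in which the finger shows up darker than the background.
# The bitmap below is hypothetical (0 = black, 255 = white).

bitmap = [
    [230, 228, 225, 226, 229],
    [227, 180, 120, 185, 228],
    [226, 115,  60, 118, 225],
    [228, 178, 112, 182, 227],
    [231, 229, 226, 227, 230],
]

background = 230            # assumed background (bright) level
weights_sum = 0.0
x_acc = y_acc = 0.0
for y, row in enumerate(bitmap):
    for x, g in enumerate(row):
        w = max(background - g, 0)   # darker pixels get larger weight
        weights_sum += w
        x_acc += w * x
        y_acc += w * y

if weights_sum > 0:
    cx, cy = x_acc / weights_sum, y_acc / weights_sum
    print(f"estimated touch location: ({cx:.2f}, {cy:.2f})")  # near (2, 2) here
else:
    print("no touch detected")
```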
Figure 13.19 3-D graph of calculated 256-grayscale bitmap, where x/y-axis means the x-y coordinates in the bitmap and z-axis means the grayscale.
Figure 13.20 Calculated grayscale bitmap, left, and edge enhanced picture in image processing LSI.
13.7 Future Applications of ‘Input Display’
The input display technology brings opportunities for new applications for personal and business use. The technology is scalable up and down, and can be applied to a diverse range of products, from cellular phones to personal computers. We have given full scope to our imagination concerning future uses of the ‘Input Display’ in Figure 13.21. Its wide range of uses will include recording text or images for online shopping and the like without a separate scanner device, saving personal data and images to a computer, and personal identification.
Figure 13.21 Future applications of ‘Input Display.’
For example, it could be used to capture data from a catalog, read barcodes, recognize and authenticate fingerprints for security applications, or import a private route map into a PDA from a navigation system. For sales people, business card reading will be useful. Because no complex optical system of lenses and mirrors is required, the device can be thin and easily carried. Automatic power control using the photo-sensor will be suitable for extremely low-power cellular phones. Moreover, the photo-sensor can play a role in detecting the position of a finger or pen touching the panel, so application to touch-panel or pen-input devices will be feasible. Future refinement of the ‘Input Display’ will support higher resolutions and personal identification based on immediate fingerprint recognition in support of e-commerce and on-line transactions. Our dream of realizing the one-sheet SOG lives on, in which only a limited number of chips are hybridized onto the glass, as illustrated in Figure 13.22. It must not break or chip. In order to make it small and thin, it is desirable that almost all functions are incorporated into the display, such as a direct pen-input function without parallax, a scanner function for easily reading documents, image processing, and so on.
Figure 13.22 Future display of sheet-like SOG.
Without a doubt, the development of ultimately light and thin SOG devices will be accelerated by the realization of the ‘Input Display’, which integrates added value such as image capturing, pen input, and touch sensing.
13.8 Summary
Recent SOG development has been described from the point of view of added-value displays. In this chapter, we have briefly introduced examples of circuit integration and function integration, such as scanning and touch-panel functions. These technologies are almost complete and are awaiting market release. However, being the very first of their kind they are prone to skepticism, and there remains room for improvement, taking into account the usage demands of different lifestyles. We look forward to active and challenging proposals for these technologies, which are as yet still at an embryonic stage. SOG will surely lead to a wide variety of applications.
References
[1] Ibaraki, N. (1999) SID International Symposium Digest of Technical Papers, 172.
[2] Aoki, Y., Iizuka, T., Sagi, S. et al. (1999) SID International Symposium Digest of Technical Papers, 176.
[3] Higuchi, T., Hanari, J., Nakamura, N. et al. (2000) SID International Symposium Digest of Technical Papers, 1121.
[4] Hanazawa, Y. et al. (1999) The 19th International Display Research Conference, Conference Proceedings, 369.
[5] Kimura, H., Maeda, T., Tsunashima, T. et al. (2001) SID International Symposium Digest of Technical Papers, 268.
[6] Nakamura, T., Karube, M., Hayashi, H. et al. (2001) Proceedings of The 8th International Display Workshops (IDW'01), 1603.
[7] Goto, Y., Yoshihashi, H., Tada, N. et al. (2001) International Workshops on AM-LCD Digest of Technical Papers, 21.
[8] Kida, Y., Nakajima, Y., Takatoku, M. et al. (2002) The 22nd International Display Research Conference, Conference Proceedings, 831.
[9] Kim, J. H., Lee, J. K., Chang, Y. G. et al. (2000) SID International Symposium Digest of Technical Papers, 353.
[10] Izumi, Y., Teranuma, O., Takahashi, M. et al. (2003) Proceedings of The 10th International Display Workshops (IDW'03), 363.
[11] den Boer, W., Abileah, A., Green, P., Robinson, S. et al. (2003) SID International Symposium Digest of Technical Papers, 1494.
[12] Motai, T. (2004) Asia Display '04 Digest, 75.
[13] Karube, M., Tsunashima, T., Nishimura, H. et al. (2005) Proceedings of The 12th International Display Workshops (IDW'05), 1229.
[14] Nishibe, T. and Ibaraki, N. (2003) Proceedings of The 10th International Display Workshops (IDW'03), 359.
[15] Nakamura, T., Hayashi, H., Yoshida, M. et al. (2003) Proceedings of The 10th International Display Workshops (IDW'03), 1661.
[16] Nishibe, T., Nakamura, H. et al. (2004) International Workshops on AM-LCD Digest of Technical Papers, 85.
[17] Tada, N., Hayashi, H., Yoshida, M. et al. (2004) Proceedings of The 11th International Display Workshops (IDW'04), 349.
[18] Nakamura, T., Hayashi, H., Yoshida, M. et al. (2005) SID International Symposium Digest of Technical Papers, 1054.
[19] Nakamura, H., Nakamura, T., Hayashi, H. et al. (2005) Proceedings of The 12th International Display Workshops (IDW'05), 1003.
[20] Tashiro, M. and Nishimura, T. (2003) Proceedings of The 10th International Display Workshops (IDW'03), 569.
[21] Nishibe, T. (2002) The 22nd International Display Research Conference, Conference Proceedings, 269.
[22] Sakamoto, T. et al. (1999) The 19th International Display Research Conference, Conference Proceedings, 37.
[23] Takeda, E. et al. (1989) Proceedings of Japan Display '89, 580.
14 Advances in AMOLED Technologies Y.-M. Alan Tsai, James Chang, D.Z. Peng, Vincent Tseng, Alex Lin, L.J. Chen, and Poyen Lu TPO Displays Corp., Chunan, Taiwan
14.1 Introduction
In all electronic displays, the front-of-screen (FOS) performance and the physical dimensions (e.g. thickness and weight) are the most straightforward features that users can feel and appreciate. Flat-panel displays such as the liquid crystal display (LCD) and the plasma display have progressed well technically, and have moved in many applications to replace bulky and energy-consuming cathode ray tubes (CRTs). Nowadays, flat-panel displays have become the dominant display choice for large-size PC monitors and TVs. In recent years the multimedia and digital era has emerged, with more and more functions built into mobile devices, such as cameras and TV viewing on mobile/cell phones. As a result, FOS performance attributes such as brightness, contrast ratio, viewing angle, and response time all need to be substantially enhanced. An emissive display is the natural choice to provide superior FOS performance. One emissive technology that shows promise is the organic electroluminescence device (or organic light-emitting diode, OLED); it has been studied extensively, and commercial products have gradually started to come onto the market, appearing at just the right time to fulfill this massive need. In this chapter, we will give an overview of OLED technology, including the OLED electroluminescence mechanism, materials, and device structure, as well as the backplane that is needed to drive the OLED, namely the thin-film transistor (TFT). Advances in active-matrix organic electroluminescence displays (AMOLEDs) will be reviewed and discussed.
14.2 OLED Technology
14.2.1 Introduction
OLED displays have attracted a lot of attention due to their many advantages, such as fast response time, wide viewing angle, high contrast ratio, and very thin structure. In 1987, Tang and Van Slyke of Kodak described an efficient green OLED made by thin-film evaporation of the organic compounds triphenylamine and aluminum tris-8-hydroxyquinoline (Alq) [1, 2]. This important discovery enhanced the prospects for producing large, inexpensive displays that could replace the CRT and LCD. In 1990, another new electroluminescence device, based on a conjugated polymer (para-phenylenevinylene, PPV) and emitting yellow-green light, was produced by Burroughes's group at Cambridge University [3]. Since then, the OLED has become a popular topic in academia and industry, and has the potential to be the great display of the future. In addition to the features that the OLED can offer in FOS performance and physical dimensions, the active-matrix OLED (AMOLED) has generated much attention due to its capability to deliver higher resolution, larger panels, and better display quality. Many OLED development activities have been based on the AMOLED to further strengthen the OLED's advantages. Significant progress has been made in AMOLED materials, devices, and production technologies in recent years. In this section, we will give an overview of OLED technology, including its electroluminescence mechanism, materials, device structure, advanced processes, and application in AMOLEDs.
14.2.2 Electroluminescence Mechanism
(1) Physics of OLED Operation
The basic structure of an OLED is illustrated in Figure 14.1. The simplest OLED structure consists of an anode (ITO), a hole transporting layer (triphenylamine), an electron transporting layer with an emitting function (Alq), and a cathode [1]. In order to reduce the driving voltage and improve the efficiency of the OLED, a hole injection layer and an electron injection layer have been introduced. When a voltage is applied across the OLED, the charge carriers (holes and electrons) are injected from the anode and cathode into the adjacent organic layers, respectively. Traveling through the
Figure 14.1 OLED structure.
injection materials and transporting materials, the carriers of opposite polarity recombine in the emitting layer and generate an ‘exciton’. Relaxation of the exciton leads to photon emission.
(2) Relaxation and Luminescence
When carriers of opposite polarity recombine they produce excitons. An exciton is essentially a molecule in an excited state. In the electroluminescent mechanism there are four excited microstates of the exciton: one is the antisymmetric spin combination (s = 0, the singlet), and the other three are symmetric spin combinations (s = 1, the triplets). According to the selection rule, only relaxation from the singlet excited state to the ground state is in general allowed. The allowed relaxation that produces a photon is called fluorescence (see Figure 14.2).
Figure 14.2 Hole–electron recombination and energy distribution.
Relaxation from the triplet excited states to the ground state is forbidden by the selection rule. However, when spin–orbit coupling occurs, this kind of relaxation can still take place, and is called phosphorescence [4].
(3) Efficiency
Since the mechanisms of the OLED involve carrier injection, carrier recombination, and relaxation of the excited state, the internal quantum efficiency (η_int) is defined as

\[
  \eta_{\mathrm{int}} = \gamma \, \eta_{\mathrm{ex}} \, \eta_{\mathrm{p}} \qquad (1)
\]

γ: the ratio of electrons and holes injected from the opposite contacts (the electron–hole charge balance factor).
η_ex: the fraction of the total excitons formed which result in a radiative transition.
η_p: the intrinsic quantum efficiency for radiative decay.

The γ factor is the recombination ratio. The effective collision of an electron and a hole takes place on the same emitting molecule, and the molecule is then excited and forms the exciton. The injection ratio of the carriers and the recombination zone should be well controlled to improve the efficiency.
The η_ex factor is the energy transfer ratio during relaxation of the exciton; it refers to the photoluminescence efficiency of the emitting material. Introducing an emitting material with high photoluminescence efficiency will give rise to high quantum efficiency. The η_p factor is the intrinsic quantum efficiency based on the selection rule for the relaxation; for a fluorescent emitting material, the maximum value of η_p is 1/4. The internal quantum efficiency is the energy conversion ratio between electrical energy and photon energy. It applies only inside the emitting layer and is not easy to measure. We usually detect the photons through the organic layers, electrode, and substrate, and the detected efficiency is the external quantum efficiency (η_ext), defined by

\[
  \eta_{\mathrm{ext}} = \eta_{c} \, \eta_{\mathrm{int}} = \eta_{c} \, \gamma \, \eta_{\mathrm{ex}} \, \eta_{\mathrm{p}} \qquad (2)
\]

η_int: internal quantum efficiency.
η_c: light out-coupling efficiency.
γ, η_ex, and η_p: as defined above.

There is optical interference between the organic layers, the organic–electrode interfaces, the electrode–substrate interfaces, and the substrate–air interface. The η_c factor is the total effect of this interference. A good optical design should reduce the interference and improve the external quantum efficiency.
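As a worked example under common textbook assumptions (the out-coupling value is not given in this chapter), a fluorescent emitter with perfect charge balance and unity photoluminescence yield is limited by the singlet fraction, and a planar glass substrate typically couples out only about 20% of the generated photons:

```latex
% Illustrative numbers: gamma = 1, eta_ex = 1, eta_p = 1/4 (fluorescent limit),
% and an assumed out-coupling efficiency eta_c of about 0.2 for a planar glass stack.
\[
  \eta_{\mathrm{int}} = \gamma\,\eta_{\mathrm{ex}}\,\eta_{\mathrm{p}}
                      = 1 \times 1 \times 0.25 = 25\%,
  \qquad
  \eta_{\mathrm{ext}} = \eta_{c}\,\eta_{\mathrm{int}}
                      \approx 0.2 \times 0.25 = 5\% .
\]
% A phosphorescent emitter that also harvests triplets can raise eta_int toward 100%,
% which is the motivation for the phosphorescence route mentioned above.
```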
14.2.3 OLED Materials
Organic materials can be classified, according to their function, as hole injection materials (HIM), hole transport materials (HTM), emitters (host and dopant), electron transport materials (ETM) and electron injection materials (EIM).
14.2.3.1 Hole Injection Material
The function of the hole injection material is to help hole injection into the organic layer from the anode (ITO). Thus, there are some requirements for these materials, such as: (a) the work function needs to be as close to the ITO work function (4.8 eV) as possible; (b) good adhesion to ITO. Popular materials for hole injection include: CuPc, CFx, mTDTA, and TNATA (see Figure 14.3).
14.2.3.2 Hole Transport Material
The function of the hole transport material is to transport holes to the emitting layer. Required properties for these materials are: (a) high hole mobility; (b) a high Tg (glass transition temperature); (c) good electrochemical stability; (d) good thin-film quality from vapor deposition. Popular materials include: NPB, TPD, and Spiro-NPB (see Figure 14.3).
14.2.3.3 Emitter
The function of the emitter is to control the light output and color. Two types of material can be used for an emitter: one is the host, and the other is the dopant. Required properties for an emitter include: (a) high quantum yield; (b) good electrochemical stability; (c) good thin-film quality from vapor deposition. Popular materials include: DPVBi, C-545T, DCJTB, Rubrene, ADN, Ir(ppy)3, UGH1, UGH2, UGH3, UGH4, FIr6, Ir(pmb)3, and Ir(btp)2(acac) [5–11] (see Figure 14.3).
14.2.3.4 Electron Transport Material
The function of the electron transport material is to transport electrons to the emitting layer. Hence, there are some requirements for these materials, such as: (a) high electron mobility; (b) good electrochemical stability; (c) good thin-film quality from vapor deposition. Popular materials include: Alq, BeBq, TPBI, and TAZ (see Figure 14.3).
Figure 14.3 Popular OLED materials.
Figure 14.3 (Continued)
14.2.4 Advanced OLED Devices
Compared to advanced TFT-LCD devices, low power consumption is one of the key properties that AMOLEDs need in order to be competitive, especially in mobile devices. High-efficiency inorganic LEDs are used in current TFT-LCD products as the backlight, so efficiency improvement is a crucial topic in OLED technology development. Recently, some new designs of OLED device structure have been published that give rise to better performance: one is the p–i–n structure; the other is the tandem OLED structure.
14.2.4.1 The p–i–n Structure
To obtain high power efficiency and a low driving voltage for OLEDs, efficient charge injection at the interfaces and low-ohmic transport layers are two key factors. A commonly used method is to insert a buffer layer between the anode and the HTL, and to insert a thin layer between the ETL and the EIL, to improve hole and electron injection respectively. A method that increases the conductivity of the organic semiconductor layers by doping p-type acceptors into the HTL or n-type donors into the ETL can theoretically reduce the driving voltage of OLEDs, as reported by Leo's group in 1998 [12, 13]. For instance, one reported OLED structure is ITO / mTDATA:F4-TCNQ / TPD / Alq / BPhen / BPhen:Li / LiF / Al (Figure 14.4), which resulted in a good OLED device with a performance of 5.4 cd/A and 1000 cd/m2 at 2.65 V. Moreover, Forrest's group [14–16] used the p–i–n structure in a phosphorescent OLED. The structure is ITO / mTDATA:F4-TCNQ / Ir(ppy)3:CBP / BPhen / BPhen:Li / Al, which produced a highly efficient OLED with a performance of 29 lm/W and 200 cd/m2 at 1 mA/cm2.
Figure 14.4 The p–i–n structure of the OLED.
14.2.4.2 Tandem Structure of the OLED
An OLED device having multiple emitting units stacked vertically in series, i.e. a tandem OLED, can provide high luminance, enhanced current efficiency, and convenient tuning of the emission spectrum. The spectral tuning of devices, through stacking units emitting different colors, is particularly useful. The major challenge in the tandem OLED is to prepare an effective connecting structure between the emitting units so that the current can flow smoothly without encountering substantial barriers. Therefore, some researchers
have proposed using the tandem structure to provide high luminance from the OLED, with connecting electrodes such as: Mg:Ag/IZO [17], ITO [18], BPhen:Cs/V2O5 (or ITO) [19], Alq:Li/NPB:FeCl3 or TPBI:Li/NPB:FeCl3 [20], Alq:Mg/V2O5 [21], Alq:Mg/WO3 [22], and LiF/Ca/Ag or LiF/Al/Au [23]. In our study, the connecting structure consists of a thin metal layer (Al) as the common electrode, a hole injection layer (MoO3) providing hole injection into the upper unit, and an electron injection layer (Alq3:Cs2CO3) providing electron injection into the lower unit. The white-emitting two-unit tandem devices were fabricated with the structure ITO / HI-01 (60 nm) / HT-01 (20 nm) / BH-01:BD-04 (10 nm) / BH-01:RD-01 (25 nm) / Alq3 (10 nm) / Alq3:Cs2CO3 (20 nm) / Al (1 nm) / MoO3 (5 nm) / HI-01 (device A: 50 nm, device B: 55 nm, device C: 60 nm) / HT-01 (20 nm) / BH-01:BD-04 (10 nm) / BH-01:RD-01 (25 nm) / Alq3 (25 nm) / Cs2CO3 (1 nm) / Al (Figure 14.5). The I–V–L characteristics and efficiency of the tandem devices are shown in Figure 14.6, below. Devices A–C exhibit a driving voltage roughly double the single-unit device voltage, and the current efficiency of the tandem devices (A: 16.9 cd/A; B: 16.6 cd/A; C: 17.5 cd/A at 20 mA/cm2) is more than double that of a device with a single unit (8.3 cd/A at 20 mA/cm2) [24].
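A quick estimate using these numbers (and assuming the two-unit driving voltage is almost exactly twice the single-unit value, as the chapter indicates) shows what the tandem architecture buys: roughly unchanged luminous efficacy, but the same luminance delivered at about half the current density, which is favorable for lifetime.

```latex
% Luminous efficacy scales as (current efficiency)/(driving voltage).
% Single unit: 8.3 cd/A at voltage V; tandem device C: 17.5 cd/A at roughly 2V.
\[
  \frac{\eta_{P,\mathrm{tandem}}}{\eta_{P,\mathrm{single}}}
  \approx \frac{17.5/(2V)}{8.3/V} \approx 1.05 ,
  \qquad
  \frac{J_{\mathrm{tandem}}}{J_{\mathrm{single}}}\bigg|_{\text{same luminance}}
  \approx \frac{8.3}{17.5} \approx 0.47 .
\]
```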
14.2.5 Advanced OLED Process
OLED technology for full color display can be achieved with various approaches. The most conventional OLED process is the RGB side-by-side approach. A fine metal mask (FMM) is applied during RGB deposition for the color patterning.
Figure 14.5 The structure of the tandem white OLED (Unit 1: ITO/glass / HI-01 60 nm / HT-01 20 nm / BH-01:BD-04 10 nm / BH-01:RD-01 25 nm / Alq3 10 nm; connecting unit: Alq3:Cs2CO3 20 nm / Al 1 nm / MoO3 5 nm; Unit 2: HI-01 50–60 nm / HT-01 20 nm / BH-01:BD-04 10 nm / BH-01:RD-01 25 nm / Alq3 25 nm / Cs2CO3 1 nm / Al 150 nm).
Figure 14.6 (a) The I–V curve of single unit, A, B, and C devices. (b) The L–V curve of single unit, A, B, and C devices. (c) The efficiency–current curve of single unit, A, B, and C devices.
In 2003, Samsung SDI developed a high-resolution AMOLED full color display by utilizing a fine metal mask [25]. The prototype display has a 5" diagonal size with WVGA (800 × 480) resolution and a pixel pitch equal to 0.1365 mm (186 ppi), as shown in Figure 14.7, below. The specification of the prototype is also listed in Figure 14.7. The white CIE coordinates are x = 0.31 and y = 0.32. The
Figure 14.7 The specification and picture of the 5" WVGA AMOLED demonstrated by Samsung SDI.
performance exhibits an average luminance efficiency of 11 cd/A, and the peak luminance of the panel is over 300 cd/m2 with a contrast ratio of 200:1 under ambient light of 500 lx. A high NTSC ratio and high luminance efficiency can easily be achieved with the RGB side-by-side technology. However, as the resolution of the panel increases (>200 ppi), fabrication of the fine metal mask and alignment control become tremendously difficult. The merging of several multimedia applications such as DSC, cellular phone, and DMB functions into one mobile device will become a trend in the near future, which makes the development of high resolution a critical factor for OLED displays. However, the limitations of the FMM at high resolution will become a key issue when it comes to considering mass production. In addition to the FMM color patterning method, the use of a white OLED with a color filter is an alternative approach to realizing a full color OLED display, avoiding the limitations of the FMM process. For an active-matrix OLED display, color filter on array (COA) technology should be introduced in the array substrate process. Figure 14.8 illustrates a cross-sectional view of a white OLED with the COA structure. Color filters R, G, and B are patterned underneath the emitting areas sequentially for the primary color sub-pixels. Successive planarization layers on top of the color filters are required to flatten the ragged surface profile of the array substrate. Without the use of the FMM for color patterning, the resolution of the bottom-emission AMOLED display can be greatly increased. One drawback of the white OLED with a color filter for AMOLED color patterning is, however, the color saturation issue. Generally, the transmittances of the R, G, and B color filters overlap each other, which reduces the color saturation of the AMOLED display.
Figure 14.8 Cross-section of white OLED with COA structure.
Figure 14.9 (a) White OLED spectrum, white OLED spectrum through a color filter, and the R/G/B color filter transmittances over the 400–700 nm wavelength range, and (b) x–y color coordinates of the white OLED with 1.00 µm and 1.75 µm color filter thicknesses.
Increasing the color filter thickness improves the color saturation of the AMOLED display [26]. Figure 14.9(a) shows the white OLED spectrum before and after the use of color filters, as well as the transmittances of the R, G, and B color filters. The color coordinates of the white OLED after the color filters are shown in Figure 14.9(b) for color filter thicknesses of 1 µm and 1.75 µm. The improvement in color saturation is noticeable when the color filter thickness increases from 1 µm to 1.75 µm. Table 14.1 lists the color information for the white OLED with the two color filter thicknesses for each color. The color saturation increases up to 60% when using a color filter thickness of 1.75 µm.
Table 14.1 Color information of the white OLED with color filter thicknesses of 1 µm and 1.75 µm.
Color filter thickness   1 µm                                         1.75 µm
                         R             G             B                R             G             B
Color coordinates        (0.56, 0.35)  (0.28, 0.51)  (0.31, 0.19)     (0.66, 0.34)  (0.25, 0.59)  (0.12, 0.12)
NTSC                     37%                                          60%
Sanyo first demonstrated a 14.7″ full color AMOLED utilizing a white OLED with a COA substrate in 2002. Without the FMM, a high-resolution display with white OLED + COA technology can be realized. Toppoly has succeeded in making a 7″ full color AMOLED by utilizing COA technology with a pixel compensation circuit [27]. The specification, as well as the developed AMOLED display, is shown in Figure 14.10, below. One drawback of the white OLED + COA technology may be the higher power consumption for display applications: much of the white light is absorbed by the color filters, which gives a low luminance efficiency. Van Slyke et al., from Eastman Kodak Company, introduced a white-emitter-based AMOLED with an RGBW pixel format in 2005 [28]. It was shown that a simple approach for the RGBW format to achieve low power consumption is to select a white OLED material whose CIE coordinates are equivalent or close to the required white point in the specification. The power consumptions of white OLEDs with the RGB and RGBW formats and various white materials are compared in Table 14.2 for 2.2″ AMOLED displays.
Figure 14.10 The specification and picture of the 7.0″ AMOLED demonstrated by Toppoly. The panel utilized COA technology and a pixel compensation circuit.
According to Table 14.2, the smallest power consumption for the RGBW panel (White 3: 137 mW versus 328 mW) is only 42% of that for the corresponding RGB display. Moreover, the table also indicates that the power consumption of the RGBW display is determined by the color and efficiency of the white emitter.
Recently, laser-induced thermal imaging (LITI) technology was introduced to realize high-resolution OLED displays [29, 30]. The LITI process utilizes a donor film, a highly accurate laser-exposure system, and a substrate. The LITI process can be described as follows (as shown in Figure 14.11, below):
(1) The thermal transfer donor is first laminated to a substrate. The donor and receptor surfaces must be in intimate contact.
(2) The donor is then exposed in a pattern with the laser beam. The result is a release of the transfer layer (the light-emitting materials) from the donor interface and adhesion of the transfer layer to the receptor interface.
(3) The used donor is peeled away and discarded. The film in the exposed regions is transferred as high-resolution stripes, and the performance of the device is as good as that of an evaporated small-molecule device. Three donor films (red, green, and blue) are used sequentially to create a full color display.
LITI transfer is a laser-addressed imaging process and has unique advantages such as high-resolution patterning, excellent film thickness uniformity, multi-layer stacking ability, and scalability to large-size mother glass. Samsung SDI introduced a high-resolution AMOLED panel by utilizing LITI technology; the specification as well as an image sample of the panel are shown in Figure 14.12, below [31].
Table 14.2 Power consumption comparison between RGB and RGBW formats. In all cases, the target color temperature is 6500 K, the luminance is 100 cd/m2, and the calculations include a circular polarizer with 44% transmittance.
WOLED structure   Emitter combination   Efficiency (cd/A)   CIE              RGBW avg. power (mW)   RGB avg. power (mW)
White 1           BlueX + RedX          11.3                (0.319, 0.326)   137                    280
White 2           BlueX + YD3           15                  (0.318, 0.434)   318                    387
White 3           BD2 + YD3             11.2                (0.314, 0.327)   137                    328
White 4           BD3 + RedX            6.2                 (0.329, 0.217)   466                    566
White 5           BD3 + YD3             10.4                (0.313, 0.281)   191                    305
Figure 14.11 Illustration of LITI process.
The developed display features a 2.6″ diagonal size and a 28 µm (302 ppi) sub-pixel pitch with a 40% emission aperture, which makes it the highest-resolution AMOLED display to date. The white CIE coordinates of the panel are x = 0.31, y = 0.31, and the peak luminance is 200 cd/m2 with 74.1% NTSC color saturation.
14.3 Backplane for AMOLED Display
An OLED display, like a liquid crystal display (LCD), is driven by thin-film transistors (TFTs) to achieve high resolution and high display quality. However, the basic driving principles of OLEDs and LCDs are quite different: the latter is driven by a voltage while the former is operated by a current supply.
Figure 14.12 Specification of, and picture on, an AMOLED display using LITI technology.
a-Si TFT has been used extensively in LCD applications due to its simple process and scalability. It is therefore natural to consider evaluating a-Si TFTs for AMOLED applications. In voltage-driven LCD applications, the TFT is only used as a switch, and a-Si performance is capable of delivering this function. However, for current-driven OLED applications, the TFT needs to act as a current source, delivering the required current level accurately for different gray levels or luminances. Many technical difficulties and challenges have been experienced and studied with a-Si TFTs [32–34]; the most challenging issue for a-Si is to maintain TFT performance under continuous current flow conditions. Until now, a-Si has not seemed to offer any reliable answer. We will review and discuss TFT performance in the following sections. As its name suggests, low-temperature polysilicon (LTPS) is polycrystalline silicon that is fabricated by a low-temperature process. The so-called 'low temperature' is only a relative term when compared to high-temperature polysilicon (HTPS), which is typically formed at temperatures of 800–1000 °C. Because of the high-temperature process, HTPS needs to utilize expensive quartz as a substrate, and therefore HTPS is limited to small panel applications ( 0, then Va > Vc and no voltage will appear across the OLED (dark). For one pixel, Vdata determines the emission time and hence the grayscale. Information on the threshold voltage and mobility is contained in Vc, and the emission time does not depend on Vc. Although this pixel circuit can achieve AMOLED displays with better uniformity, another issue arises during operation: the power consumption may be higher than expected. During the data-writing period each pixel is biased at Vc, and both the PTFTs and NTFTs turn on, which consumes extra power. In the emission period, the sweeping signal changes alternately, and when Va approaches Vc extra power is consumed.
14.4.1.6 Time-Ratio Grayscale
Time-ratio grayscale has conventionally been used for plasma display panels (PDPs), and any time-ratio driving method for PDPs may also be applicable to AMOLEDs. Mizukami et al. [59], from Semiconductor Energy Laboratory (SEL), applied the digital driving approach to a VGA AMOLED
display. The pixel circuit can be as simple as the 2T1C structure and is redrawn in Figure 14.43. Figure 14.44 depicts the OLED current versus the input data voltage for different characteristics of the driving TFT. It can be seen that for any fixed Vdata between Vsl and Vsh, IOLED will vary due to the different TFT characteristics. However, for Vdata = Vsl, which drives the TFT into the linear operating region, excellent image uniformity can be obtained, since the current of the driving TFT in this region is expected to be similar all over the panel. It is therefore better to have the driving TFT operate in the two states (Vsl and Vsh) only. The grayscale of each pixel is determined as shown in Figure 14.45 below, in which 6 bits are used. For 6-bit operation, one frame is divided into six sub-frames (SF1–SF6). The initial time TA of each sub-frame is reserved for data writing over the whole panel, and during each TA period the cathode voltage is pulled high (Vch) to make sure there is no emission during data writing. The rest of the time in each sub-frame is the emission period (TL1–TL6). The emission periods TL1–TL6 are ratioed according to the time weights of the 1st, 2nd, 3rd, 4th, 5th, and 6th bits. For example, the data word 101100 should store Vsl, Vsh, Vsl, Vsl, Vsh, Vsh at the gate of the driving TFT for TL1 to TL6, respectively.
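As a rough illustration (not taken from ref. [59]), the following sketch maps a 6-bit code onto the per-sub-frame gate states and the resulting total emission-time weight; assigning TL1 to the most significant bit and using 32:16:8:4:2:1 time weights are assumptions chosen to be consistent with the '101100' example above.

```python
# Minimal sketch of 6-bit time-ratio (digital) grayscale: each bit of the data
# word selects whether the pixel emits during the corresponding sub-frame, whose
# emission period carries a binary time weight.

def subframe_pattern(gray6: int):
    """Return per-sub-frame gate states (Vsl = emit, Vsh = dark) and the
    total emission-time weight for a 6-bit value (MSB assumed to map to SF1)."""
    assert 0 <= gray6 < 64
    bits = [(gray6 >> (5 - i)) & 1 for i in range(6)]        # MSB first: SF1..SF6
    weights = [32, 16, 8, 4, 2, 1]                           # assumed weights TL1..TL6
    states = ["Vsl" if b else "Vsh" for b in bits]
    luminance = sum(w for b, w in zip(bits, weights) if b)   # emission time units
    return states, luminance

states, lum = subframe_pattern(0b101100)
print(states)   # ['Vsl', 'Vsh', 'Vsl', 'Vsl', 'Vsh', 'Vsh'], as in the text's example
print(lum)      # 44 of 63 time units -> luminance proportional to the grayscale value
```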
Figure 14.43 Simple 2T1C circuit for digital-driven pixel.
Figure 14.44 The output Ioled versus the input data Vsignal for different devices.
Figure 14.45 Conceptual drawing of the operation of digital driving method within one frame.
Although this approach gives excellent luminance uniformity, one drawback is that the pixel requires very rapid data addressing for each sub-frame or the emission time will be shortened, which influences the EL life time if total front-of-screen luminance is kept constant. Eight bit operation will become even more difficult from the design point of view. Another drawback is that the luminance is linearly proportional to grayscale, which means that this method can only be applied for a linear gamma panel. In this circuit, the cathode has to be turned on and off six times within one frame, and this may lead to spike noise on the panel and extra power consumption. To avoid this, the same group (SEL) proposed a modified approach [60], and the pixel circuit is drawn in Figure 14.46. The driving approach within one frame is also illustrated in Figure 14.47 below. In Figure 14.46, the pixel contains one extra transistor (SW2) and one extra signal line (ES line). When the ES line turns on the transistor SW2, the charge on the storage capacitor will vanish and no current flows into the OLED. The pixel operation is different from the previous one in that the pixel emits at the time when the data is written into the pixel. When one sub-frame starts, the data is written row by row into the pixel and hence the emission starts row by row. After a time, which depends on the corresponding emission time for the sub-frame, the OLED is turned off by turning on switch SW2 row by row. One sub-frame is then finished and is followed by the data writing and emission for the next sub-frame. Notice that in this modified approach, the total emission time in one frame is longer than that in the previous approach and the cathode voltage is always kept constant. To further increase the total emission time and decrease the data addressing frequency, a continuous sub-field with a multiplex scanning system and a multiple addressing method were proposed by Ouchi et al. from Hitachi Research Laboratory [61] and Tagawa et al. from Sharp Corporation [62], respectively. These approaches, however, required more complicated peripheral driving schemes.
Figure 14.46 Another digital-driven pixel circuit with one extra transistor and one signal line.
Figure 14.47 Improved operation of digital driving method to increase the emitting time in one frame.
14.4.1.7 Area-Ratio Grayscale
In the area-ratio grayscale method, the emitting area is modulated and divided into several sub-areas, depending on the grayscale. The concept of this approach can be seen in Figure 14.48, in which one pixel is shown for 3-bit grayscale. The ratio of the emitting areas is 4:2:1, while the remaining area is left for the addressing and driving TFTs and the storage capacitors. One advantage of this approach is that if the driving TFT is biased at Vsl or Vsh (see Figure 14.44) during emission, uniformity can be obtained, since the grayscale is determined by the ratio of the emitting areas. The disadvantage is, however, the limited grayscale due to the limited area available for subdivision within one pixel. Kimura et al. [63], from Ryukoku University, applied this approach to a TFT-driven polymer display, and a picture of the emitting area as well as the driving TFT in the pixel can be found in Figure 14.49 below. The details can be found in ref. [63].
Figure 14.48 Idea of the area-ratio grayscale approach within one pixel.
Figure 14.49 A picture shows the emitting area as well as the driving TFT for the area-ratio grayscale approach.
14.5 Summary and Outlook
The invention of the OLED has produced an exciting emissive display that can naturally deliver vivid front-of-screen visual experiences. It shows great potential to become a disruptive technology to the existing dominant display technology, the TFT-LCD. The OLED potentially offers a lower bill of materials (BOM) cost, a simpler structure and process, and far better performance in contrast ratio, response time, viewing angle, and color saturation.
Looking back at its history, the first LCD panel was produced in 1960, but a-Si TFT-LCD mass production started in the late 1980s and early 1990s, almost 30 years from invention to commercialization. The first efficient OLED device was described in 1987, and the first AMOLED product appeared on the market in 2003; the AMOLED has shown a much faster pace to commercialization. The maturity of TFTs in the LCD industry played an important role in accelerating AMOLED commercialization. However, commercialization activities slowed down considerably after 2003. The production yield did not improve quickly, with issues relating to the fine shadow mask and TFT non-uniformity being the two major bottlenecks; these have been discussed extensively in this chapter. Alternative device architectures and compensation schemes have been widely studied and implemented. The slow rate of commercialization is due to technical issues, but none of them is critical enough to seriously stall the whole technology.
There is also the market situation to bear in mind. With the expansion of Gen. 5, 6, 7, and 8 fabs continuously driving LCD sizes larger and larger, notebook PC, monitor, and TV panel prices have all dropped significantly. All of a sudden, Gen. 3 or even Gen. 4 fabs have become less economic for large panel production. Meanwhile, with the Internet boom in the late 1990s, the multimedia and digital era emerged. With more and more functions built into mobile devices, such as camera phones and mobile TVs, small-to-medium displays suddenly became a 'sweet' market, with per-glass revenues several times those of large panels. As a result, more and more Gen. 3 and Gen. 4 fabs started to make small-to-medium panels, and over-supply then became a serious problem, leading to a price drop of 40–50% in just a few years. This dramatic market change and a serious price war have made it hard for the newcomer, the AMOLED, to enter the market.
Having been directly involved in LCD and OLED development and production for many years, we feel the performance of the AMOLED is as good as it can be, and as beautiful as advertised. It is ready to enter the small-to-medium display market, and significant increases in AMOLED production volumes in
2007 seem to indicate a new dawn. We believe that in just a few years from now, we will see AMOLED panels in many of our mobile devices, and people will continuously ask for OLEDs.
References
[1] Tang, C. W. and Van Slyke, S. A. (1987) 'Organic electroluminescent diodes', Applied Physics Letters, 51, 913.
[2] Tang, C. W., Van Slyke, S. A. and Chen, C. H. (1989) 'Electroluminescence of doped organic thin films', Journal of Applied Physics, 65, 3610.
[3] Burroughes, J. H., Bradley, D. D. C., Brown, A. R. et al. (1990) 'Light-emitting diodes based on conjugated polymers', Nature, 347, 539.
[4] Baldo, M. A., Segal, M., Holmes, R. J., Forrest, S. R. and Soos, Z. G. (2003) 'Excitonic singlet-triplet ratio in molecular and polymeric organic materials', Physical Review B, 68, 075211.
[5] Lee, M.-T., Chen, H.-H., Liao, C.-H. et al. (2004) 'Stable styrylamine-doped blue organic electroluminescent device based on 2-methyl-9,10-di(2-naphthyl)anthracene', Applied Physics Letters, 85, 3301.
[6] Ren, X., Li, J., Holmes, R. J. et al. (2004) 'Ultrahigh energy gap host in deep blue organic electrophosphorescent devices', Chemistry of Materials, 16, 4743.
[7] Adachi, C., Baldo, M. A., Forrest, S. R. et al. (2001) 'High-efficiency red electrophosphorescence devices', Applied Physics Letters, 78, 1622.
[8] Holmes, R. J., D'Andrade, B. W., Forrest, S. R. et al. (2003) 'Efficient deep-blue organic electrophosphorescence by guest charge trapping', Applied Physics Letters, 83, 3818.
[9] Yeh, S.-J., Wu, M.-F., Chen, C.-T. et al. (2005) 'New dopant and host materials for blue-light-emitting phosphorescent organic electroluminescent devices', Advanced Materials, 17, 285.
[10] Lamansky, S., Thompson, M. E., Djurovich, P. I. et al. (2001) 'Highly phosphorescent bis-cyclometalated iridium complexes: synthesis, photophysical characterization and use in organic light emitting diodes', Journal of the American Chemical Society, 123, 4306.
[11] Holmes, R. J., Forrest, S. R., Sajoto, T., Tamayo, A., Djurovich, P. I., Thompson, M. E., Brooks, J., Tung, Y.-J., D'Andrade, B. W., Weaver, M. S., Kwong, R. C. and Brown, J. J. (2005) 'Saturated deep blue organic electrophosphorescence using a fluorine-free emitter', Applied Physics Letters, 87, 243507.
[12] Blochwitz, J., Pfeiffer, M., Fritz, T. and Leo, K. (1998) 'Low voltage organic light emitting diodes featuring doped phthalocyanine as hole transport material', Applied Physics Letters, 73, 729.
[13] Blochwitz, J., Pfeiffer, M., Huang, J. et al. (2002) 'Low-voltage organic electroluminescent devices using PIN structures', Applied Physics Letters, 80, 139.
[14] Pfeiffer, M., Forrest, S. R., Leo, K. and Thompson, M. E. (2002) 'Electrophosphorescent p-i-n organic light-emitting devices for very-high-efficiency flat-panel display', Advanced Materials, 22, 1633.
[15] D'Andrade, B. W., Forrest, S. R. and Chwang, A. B. (2003) 'Operational stability of electrophosphorescent devices containing p and n doped transport layers', Applied Physics Letters, 83, 3858.
[16] He, G., Schneider, O., Qin, D. et al. (2004) 'Very high-efficiency and low voltage phosphorescent organic light-emitting diodes based on a p-i-n junction', Journal of Applied Physics, 95, 5773.
[17] Tanaka, S. and Hosakawa, C. (2000) 'Organic EL light emitting element with light emitting layers and intermediate conductive layer', U.S. Patent No. 6107734.
[18] Gu, G., Parthasarathy, G. and Forrest, S. R. (1999) 'A metal-free, full-color stacked organic light-emitting device', Applied Physics Letters, 74, 305.
[19] Matsumoto, T., Nakada, T., Endo, J. et al. (2003) 'High efficiency organic EL devices having charge generation layers', SID Symposium Digest, 34, 979.
[20] Liao, L. S., Klubek, K. P. and Tang, C. W. (2004) 'High-efficiency tandem organic light-emitting diodes', Applied Physics Letters, 84, 167.
[21] Terai, M., Kumaki, D., Yasuda, T. et al. (2005) 'Organic thin-film diodes with internal charge separation zone', Current Applied Physics, 5, 341.
[22] Chang, C. C., Hwang, S.-W., Chen, C. H. and Chen, J.-F. (2004) 'High-efficiency electroluminescent device with multiple emitting units', Japanese Journal of Applied Physics, 43, 6418.
[23] Sun, J. X., Zhu, L. X., Peng, H. J., Wong, M. and Kwok, H. S. (2005) 'Effective intermediate layers for highly efficient stacked organic light-emitting devices', Applied Physics Letters, 87, 093504.
[24] Lu, Y.-J., Chen, C.-W., Wu, C.-C. et al. (2006) 'White-emitting tandem organic light-emitting devices with an effective connecting architecture', SID Symposium Digest, 954.
[25] Kwak, W. K., Lee, K. H., Oh, C. Y. et al. (2003) 'A 5-in WVGA AMOLED display for PDAs', SID Symposium Digest, 100.
[26] Tsai, Y. M., Peng, D.-Z., Lin, C.-W., Tseng, C.-H. et al. (2006) 'LTPS and AMOLED technologies for mobile displays', SID Symposium Digest, 1451.
[27] Peng, D.-Z., Tseng, C.-H., Hsu, H.-L. et al. (2005) '7-inch WVGA AM-OLED display with color filter on array (COA) and pixel compensation technology', International Display Workshops, 629.
[28] Spindler, J. P., Hatwar, T. K., Miller, M. E. et al. (2005) 'Lifetime- and power-enhanced RGBW displays based on white OLEDs', SID Symposium Digest, 36.
[29] Lee, S. T., Lee, J. Y., Kim, M. H. et al. (2002) 'A new patterning method for full-color polymer light-emitting devices: laser induced thermal imaging (LITI)', SID Symposium Digest, 784.
[30] Lee, S. T., Lee, J. Y., Kim, M. H. et al. (2004) 'A novel patterning method for full-color organic light-emitting devices: laser induced thermal imaging (LITI)', SID Symposium Digest, 1008.
[31] Yoo, K.-J., Lee, S.-H., Lee, A.-S. et al. (2005) '302-ppi high-resolution AMOLED using laser induced thermal imaging', SID Symposium Digest, 1344.
[32] Jung, J. H., Kim, H., Lee, S. P. et al. (2005) 'A 14.1 inch full color AMOLED display with top emission structure and a-Si TFT backplane', SID Symposium Digest, 1538.
[33] Saafir, A. K., Chung, J., Joo, I. et al. (2005) 'A 14.1″ WXGA solution processed OLED display with a-Si TFT', SID Symposium Digest, 968.
[34] Tsujimura, T., Kobayashi, Y., Murayama, K. et al. (2003) 'A 20-inch OLED display driven by super-amorphous-silicon technology', SID Symposium Digest, 6.
[35] Lin, A. C.-W., Chang, T.-K., Jan, C.-K. et al. (2006) 'LTPS circuit integration for system-on-glass LCDs', Journal of the Society for Information Display, 14 (4), 353–362.
[36] Matsueda, Y., Park, Y. S., Choi, S. M. et al. (2005) '6-bit AMOLED with RGB adjustable gamma compensation LTPS TFT circuit', SID Symposium Digest, 1352.
[37] Dawson, R. M. A., Shen, Z., Furst, D. A. et al. (1998) 'Design of an improved pixel for a polysilicon active-matrix organic LED display', SID Symposium Digest, 11.
[38] Dawson, R. M. A., Shen, Z., Furst, D. A. et al. (1998) 'The impact of the transient response of organic light emitting diodes on the design of active matrix OLED displays', International Electron Devices Meeting, 875.
[39] Sasaoka, T., Sekiya, M., Yumoto, A. et al. (2001) 'A 13.0-inch AM-OLED display with top emitting structure and adaptive current mode programmed pixel circuit (TAC)', SID Symposium Digest, 384.
[40] Fang, C. H., Deng, D. H., Chang, S. C. and Tsai, Y. M. (2003) 'Fully self-aligned low temperature poly-silicon TFT process with symmetric LDD structure', SID International Symposium Digest of Technical Papers, 34, 1318–1321.
[41] Fang, C. H., Deng, D. H., Lin, C. W. et al. (2003) 'High performance fully self-aligned symmetric (FASt) LDD LTPS TFTs', International Display Workshops, 403–406.
[42] Toyota, Y., Shiba, T. and Ohkura, M. (2002) 'Mechanism of device degradation under AC stress in low-temperature polycrystalline silicon TFTs', International Reliability Physics Symposium Technical Digest, 278–282.
[43] Shiba, T., Itoga, T. and Toyota, Y. (2002) 'A design robust against hot-carrier stress for low-temperature poly-Si TFT LCDs fabricated using a 450 °C process', SID International Symposium Digest of Technical Papers, 33, 220–223.
[44] Rajeswaran, G., Itoh, M., Barry, S. et al. (2001) 'Active-matrix low-temperature poly-Si TFT/OLED full-color display: status of development and commercialization', SID Symposium Digest, 974.
[45] Hack, M., Kwong, R., Weaver, M. S. et al. (2002) 'Active-matrix technology for high efficiency OLED displays', Proceedings of the 2nd International Display Manufacturing Conference, 57.
[46] Libsch, F. R. and Kanicki, J. (1993) 'TFT lifetime in LCD operation', SID Symposium Digest, 455.
[47] Lu, M. H., Ma, E., Sturm, J. C. and Wagner, S. (1998) 'Amorphous silicon TFT active-matrix OLED pixel', Proceedings of Lasers and Electro-Optics Society, 130.
[48] Sasaoka, T., Sekiya, M., Yumoto, A. et al. (2001) 'A 13.0-inch AM-OLED display with top emitting structure and adaptive current mode programmed pixel circuit (TAC)', SID Symposium Digest, 384.
[49] Huang, H.-Y., Sun, W.-T., Chen, C.-C. et al. (2004) 'A simple data driver architecture to improve uniformity of current-driven AMOLED', International Display Workshops, 287.
[50] Dawson, R. M. A., Shen, Z., Furst, D. A. et al. (1998) 'Design of an improved pixel for a polysilicon active-matrix organic LED display', SID Symposium Digest, 11.
[51] Ono, S., Kobayashi, Y., Miwa, K. and Tsujimura, T. (2003) 'Pixel circuit for a-Si AM-OLED', International Display Workshops, 255.
426
MOBILE DISPLAYS: TECHNOLOGY AND APPLICATIONS
[52] Sanford, J. L. and Libsch, F. R. (2003) ‘TFT AMOLED pixel circuits and driving methods’, SID Symposium Digest, 10. [53] Peng, D.-Z., Chang, S.-C. and Tsai, Y.-M. (2005) ‘Novel pixel compensation circuit for AMOLED display’, SID Symposium Digest, 814. [54] Peng, D.-Z., Tseng, C.-H., Hsu, H.-L. et al. (2005) ‘7-inch WVGA AM-OLED display with color filter on array (COA) and pixel compensation technology’, International Display Workshops / Asia Display, 629. [55] Dawson, R. M. A., Shen, Z., Furst, D. A. et al. (1998) ‘The impact of the transient response of organic light emitting diodes on the design of active matrix OLED displays’, International Electron Devices Meeting, 875. [56] Sasaoka, T., Sekiya, M., Yumoto, A. et al. (2001) ‘A 13.0-inch AM-OLED display with top emitting structure and adptive current mode programmed pixel circuit (TAC)’, SID Symposium Digest, 384. [57] Akimoto, H., Kageyama, H. and Shimizu, Y. (2002) ‘An innovative pixel-driving scheme for 64-level grayscale full-color active matrix OLED displays’, SID Symposium Digest, 972. [58] Kageyama, H., Akimoto, H. and Shimizu, Y. (2004) ‘A 2.5-inch OLED display with a three-TFT pixel circuit for clamped inverter driving’, SID Symposium Digest, 1394. [59] Mizukami, M., Inukai, K., Yamagata, H. et al. (2000) ‘6-Bit Digital VGA OLED’, SID Symposium Digest, 912. [60] Inukai, K., Kimura, H., Mizukami, M. et al. (2000) ‘4.0-in. TFT-OLED displays and a novel digital driving method’, SID Symposium Digest, 924. [61] Ouchi, T., Mikami, Y., Satou, T. and Kasai, N. (2002) ‘A 2.200 color TFT-OLED display using a continuous subfield with a multiplex scanning system’, International Display Workshops, 247. [62] Tagawa, A., Numao, T. and Ohba, T. (2004) ‘A Novel Digital-Grayscale Driving Method with a Multiple Addressing Sequence for AM-OLED Displays’, International Display Workshops, 279. [63] Kimura, M., Maeda, H., Matsueda, Y. et al. (2000) ‘An area-ratio grayscale method to achieve image uniformity in TFT LEPDs’, Journal of Society of Information Display, 8, 93.
15 Electronic Paper Displays Robert Zehner E Ink Corporation, Cambridge, Massachusetts, USA
15.1 Introduction: The Case for Electronic Paper Paper, in one form or another, has been the chief medium for documenting human thoughts for the past millennium. The concept of using a thin, flat, flexible material to record written information dates back to the Egyptian papyrus (from which we derive the English word paper), but the modern technique for paper-making was first documented by the Chinese in 105 CE. From there, it has been adopted around the world and has become an integral part of our daily lives. Although today’s information displays can do things that we could never do with paper alone, display devices have not been able to replace paper in the home or workplace. Despite continued promises from technologists of a ‘paperless office,’ per-capita paper consumption in the United States has almost doubled over the past 50 years (Figure 15.1). The personal computer has become a tool not for saving paper, but instead for producing more and more printed documents. And, while we turn to the World Wide Web for information and entertainment, the top 100 U.S. newspapers continue to turn out more than 30 million paper copies every day [1].
15.2 What is Electronic Paper? Against this backdrop, it is easy to see why the display industry has coined the term ‘electronic paper’. Electronic paper does not refer to any display technology in particular. Rather, it refers to a collection of characteristics that should allow a display to serve as an effective stand-in for paper. Electronic paper
Figure 15.1 Total (solid line, left axis) and per-capita (dotted line, right axis) consumption of paper in the United States from 1965–1999. [31]
should be durable and flexible. It should not have to be plugged in or recharged very frequently, if at all. And, most importantly, it should be comfortable and enjoyable to read. A user should never feel that he or she needs to print a paper copy of a document when an e-paper display is available [2]. As an introduction to the field of electronic paper, let’s first consider the details of each of these important attributes.
15.2.1 Paper-like Look Unlike the majority of displays that we interact with on a daily basis, paper is a reflective medium. Ambient light hits the page, and is either reflected back to the eye of the viewer, or is absorbed by pigments or inks adsorbed to the paper’s surface. A reflective display has an inherent advantage in readability, because the brightness of the display naturally adapts to the ambient lighting conditions. Consider that it is equally easy to read a book by a single 60 W bedside lamp as it is to read outside on the beach under full sunlight. A typical sheet of office paper will reflect between 80 and 95 percent of the incoming light, while black toner or ink will absorb all but a few percent. Thus, a high-quality printed page will typically have a reflectance contrast ratio of 20:1 or more. Newsprint and mass-market paperback books use inks and paper of lower quality, typically exhibiting a reflectance of 65% and a 7:1 contrast ratio [3]. The angular properties of paper’s reflectance provide another key to its look. Paper is an excellent diffuse reflector, with a nearly perfect Lambertian profile. As a result, it appears equally bright at any viewing angle. More importantly, light from any illumination angle will be dispersed into all viewing angles. This means that paper has the same contrast ratio under both diffuse and directional illumination. A large number of studies have been done to determine the relative benefits of paper versus traditional computer displays (primarily CRTs or back-lit LCDs) for reading; the results to date are inconclusive [4]. In a review of the relevant results, Dillon and co-workers postulate that the variation
in outcome results from poor control of experimental variables, like screen position, lighting and font choice [5]. Nevertheless, users consistently express a strong preference for high-quality paper over computer monitors. We are all familiar with the durability of paper. Certainly one of man’s first engineered materials, paper can be rolled, crumpled, creased, folded, and soaked in water without suffering permanent damage. In many places, the morning newspaper is still hurled from a moving car or bicycle onto the front porch. A large part of the appeal of that same newspaper is that a small package unfolds to panoramic size, letting the reader skim through a large amount of content in short order. Traditional information displays cannot withstand this kind of incredibly rough treatment, nor can they be rolled or folded. For some applications, it may be enough to have a display constructed from unbreakable plastic films instead of sheets of glass. In other cases, it would be desirable to have a large display that rolls out from a small storage tube, or a display that could be folded to fit into a briefcase. Flexible displays are covered in more detail elsewhere in this book, but the topic is worth some mention here as a very important attribute of electronic paper.
15.2.2 Paper-like Form Factor 15.2.2.1 Thin and light Paper is not a particularly dense storage medium – a stack of books weighing hundreds of kilograms could easily be digitized and stored on a single DVD disc. However, a single sheet of paper is a self-contained input/output device weighing about a gram. A typical paperback book might weigh in at 300–500 grams, and, for a reading device to be comfortable to hold and read for long periods of time, it must come in at or beneath this target. However, a complete page-sized liquid crystal display module, including the backlight, weighing 200 grams or more, can easily account for half of that mass. Add in enough battery capacity to power the display and backlight for a day of reading, and there is not much room left for other components. Existing flat-panel displays, with their glass sheets, polarizers, touch screens and back- or frontlights, are also not very paper-like in thickness. A complete page-sized backlit AMLCD module is roughly 5 mm in thickness, which must be added to the depth of the processor, batteries, speakers, etc. used in the device. Reducing this to a paper-like 0.1 mm would make a dramatic difference for industrial designers.
15.2.2.2 Large area The front page of a broadsheet newspaper like the New York Times measures nearly 20″ in diagonal, the size of a large desktop monitor; a two-page spread spans almost 40″ – bigger than most televisions. A sheet of A4 office paper is roughly the same size (but not the same shape) as a 14″ laptop display. While 14″ backlit LCDs are easily available, they are heavy, fragile, and consume several watts of power when operating at full brightness. A 20″ display would not only be impossible to fit in a briefcase; the cost would be prohibitive. Producing a large-area display that is practical for portable use requires a mix of several of the other e-paper characteristics discussed in this section. A large display needs to be thin and light to allow it to be truly portable. It needs to be robust, so that large amounts of packaging are not required to keep it from breaking. A rollable or foldable display would allow users to fit a large-format page viewer into a compact package. Finally, conventional large displays use very large amounts of power; even a laptop-size battery can only power a device with a 14″ transmissive LCD for an hour or two at full brightness.
15.2.2.3 High Resolution Inkjet and laser printers are capable of producing between 600 and 2400 dots per inch (dpi) on the page, while computer monitors deliver around 125 pixels per inch. Many people therefore infer that paper has a much higher intrinsic resolution than an information display. However, it is important to realize that printers produce dots, not pixels. While each pixel on a computer screen can take on one of millions of different colors, each dot produced by a printer can be only on or off. Shades between these two extremes are produced by halftoning, a process in which the individual dots produced by the printer are organized into patterns to produce the effect of a continuous-tone image. For printing grayscale or color images, the effective resolution depends on the size of these patterns, also known as the line screen. For a newspaper, a line screen of 80–100 is typically used, corresponding to a resolution of 160–200 pixels per inch. Magazines use a higher-resolution 130–150 line screen, which can be approximated by a 260–300 pixel per inch continuous-tone display. On this basis, a display that is capable of showing multiple gray tones should be able to reproduce the appearance of a newspaper at between 150 and 200 pixels per inch. Slightly higher resolution might be necessary to match the quality of a magazine, but resolutions above 300 pixels per inch will not add appreciably to the quality of the display. Displays without grayscale capability will need to achieve significantly higher resolutions to give the same image quality; 300 binary pixels per inch is the minimum necessary to produce reasonable halftone images, and 600 ppi would be preferred. In the case of a color electronic paper display, the required sub-pixel pitch will necessarily be higher, since the sub-pixels must deliver both brightness and color information. Recent advances in rendering technology have demonstrated that high-quality color can be generated with roughly twice the number of pixels needed for a monochrome display [6]. This increases the requisite resolution to 200–300 pixels per linear inch for a full-color e-paper display.
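The rule of thumb in this discussion can be reduced to a short calculation. The following sketch (Python, purely illustrative; the function name and the factor-of-two approximations are assumptions drawn from the figures quoted above, not from a published model) estimates the display resolution needed to stand in for a halftoned print of a given line screen.

```python
def required_ppi(line_screen_lpi, grayscale=True):
    """Rule-of-thumb display resolution needed to match halftoned print.

    A halftone printed at a given line screen (lines per inch) reads
    roughly like a continuous-tone image at about twice that resolution.
    A binary (no-grayscale) display must generate its own halftone
    patterns, so it needs roughly another factor of two on top of that.
    """
    continuous_tone_ppi = 2 * line_screen_lpi
    return continuous_tone_ppi if grayscale else 2 * continuous_tone_ppi

# Mid-range line screens for the two print media discussed in the text
for name, lpi in [("newspaper (80-100 lpi)", 90), ("magazine (130-150 lpi)", 140)]:
    print(f"{name}: ~{required_ppi(lpi)} ppi grayscale, "
          f"~{required_ppi(lpi, grayscale=False)} ppi binary")
```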
15.2.3 Paper-like Power Consumption Paper is the ultimate low-power information display, albeit a limited one. Once an image is printed or written on a sheet of paper, it remains readable for as long as the paper is intact, without any additional energy required. However, it’s essentially impossible to change the information on that sheet of paper, short of grinding it into pulp and remanufacturing it. An information display, by definition, should be rewriteable many, many times, so it’s a little unfair to use paper as a benchmark for power consumption. Nonetheless, in today’s mobile devices, the display may account for over 50% of the total power budget when it is active. Reducing the power consumed by the display will have a substantial impact on the usable battery life of devices such as PDAs, laptops and e-readers, where the user interacts heavily with the display for long periods of time.
15.2.3.1 The Importance of Image Stability A typical information display consumes power whenever it is displaying an image; displaying a volatile image on the screen all the time is completely impractical for a battery-powered device, and environmentally irresponsible for a display connected to wall power. A display that could hold an image indefinitely would have a clear advantage in power consumption over today’s display technologies. For an image-stable display, the power consumption is roughly proportional to the duty cycle of the display; that is, the frequency of display updates multiplied by the time it takes to complete a single update. As an example, consider an e-book equipped with an electrophoretic display (Figure 15.2). If the user turns the page once per minute, and the display requires a half second (500 ms) to update the
Figure 15.2 Normalized power consumption of an image-stable display, as a function of the average time between updates.
image, then the duty cycle of the display is 1:120. Thus, the display requires less than 1% of the power that it would need if it were being continuously updated [7]. This simple equation is the key to all of the ultra-low-power displays in existence today. All of the fundamental imaging technologies discussed below consume as much power as non-image-stable reflective LCDs (if not more), when operated at 100% duty cycle. Only when the usage model of the display is taken into account do the power savings become clear. Beyond the power advantage, a display that maintains its image can give the illusion of an ‘always-on’ device. Since the screen retains the last image shown, the processor and other resources can enter a low-power standby state while the user is still reading the display. This would be useful on the many occasions when users want to save a screen image for later use, for example when taking directions or a map in the car for reference. Existing devices don’t work well for this task because they typically shut off the display after only a few minutes; the user is then left to fumble for the power switch while driving. In an always-on device, the screen can also provide context to the user. For example, in an electronic book, the display can remain on the last page read for days, weeks or months; when the user picks it up again, the action of the ‘next page’ button is immediately obvious to both the user and the device (assuming that the device has somehow saved the current state in memory).
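The duty-cycle argument above can be written out as a minimal calculation. The sketch below (Python, illustrative only) assumes the idealized linear power model of Figure 15.2, in which power is drawn only while an update is in progress; the function name is an assumption, and the example values are those of the e-book example in the text.

```python
def normalized_power(update_time_s, interval_s):
    """Average power of an image-stable display relative to continuous updating.

    Assumes power is drawn only while an update is in progress, so the
    average draw scales with the duty cycle (update time divided by the
    time between updates), as plotted in Figure 15.2.
    """
    return update_time_s / interval_s

# E-book example from the text: a 500 ms update, one page turn per minute
duty = normalized_power(0.5, 60.0)
print(f"duty cycle 1:{round(1 / duty)}, normalized power {duty:.3%}")
# -> duty cycle 1:120, normalized power 0.833%
```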
15.2.4 Addressing Means In order to understand the benefits and drawbacks of the various display technologies being proposed for electronic paper displays, it is worthwhile to take a moment to consider how these displays can be driven. There are three commonly-used display addressing methods: direct drive, passive (or simple) matrix, and active matrix. We might also consider how a display could be addressed using external means, like a print head.
15.2.4.1 Direct drive Direct-drive displays are the simplest to understand and construct. In a direct-drive display, each pixel is directly connected to an output of the display driver circuit. Typically, all pixels will share a single, unpatterned common electrode that covers the viewing surface of the display. Direct-drive displays
have the advantage that every pixel can be individually addressed at the same time. In addition, because the electrodes are not required to be in any kind of repeating pattern, pixels in direct-drive displays can take on arbitrary shapes. The direct-drive method is not effective for high pixel resolutions or large numbers of pixels; not only does each pixel require its own drive circuit, but a conductive line must be routed from each pixel to its drive circuit. For this reason, direct-drive is often used for simple displays like those found in clocks, calculators and thermometers.
15.2.4.2 Passive Matrix Passive matrix (also known as simple matrix) displays rely on a response threshold within the display material to update the display a row at a time. The pixels in a passive matrix display are typically defined by conductive lines in the top and bottom display electrodes that intersect at 90°. An activation signal is applied to each row in turn, while all of the other rows are held at zero volts. At the same time, each column electrode is driven with one of two signals that, when added to the activation signal, will cause the pixels in that row to be turned on or off. The column signals alone are not large enough to cause any effect in the non-activated rows. For image-stable display materials with perfect threshold behavior (meaning that any voltage beneath the threshold has no effect on the final state of the pixel), the number of rows that can be multiplexed is infinite. Since each pixel is formed by the intersection of one row line and one column line, the number of drive lines is proportional to the square root of the number of pixels. The cost and complexity advantage over the direct-drive display, with one drive line per pixel, is readily apparent. Production of a passive matrix display is not considerably more difficult than making an unpatterned display – each electrode must be segmented into a series of lines by etching, scribing or some other patterning technique. The extremely simple structure of the electrodes means that passive matrix displays can easily be constructed on flexible substrates; in some cases, they can even be assembled in a continuous roll-to-roll process. There is one important drawback of using a passive matrix instead of directly driven pixels, and that is speed. Only a single row can be addressed at a time, meaning that the update time of the entire display is equal to the update time for a single pixel, multiplied by the number of rows in the display. Consider an SVGA display with 600 rows, still a relatively small display. Even a display material with a 1.5 ms response time would require nearly one second (600 × 1.5 ms = 0.9 s) to update in a passive matrix configuration. A slower material with a 100 ms update time would take a full minute to update. Such performance is unlikely to be tolerable for an interactive display.
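For comparison with the active-matrix case discussed next, the following back-of-the-envelope sketch (Python, illustrative only) contrasts the two update-time estimates. The row count and material response times are the example values used in the text; the 33 µs line time assumed for the active-matrix case is a hypothetical figure chosen to be consistent with the roughly 120 ms total quoted in the next subsection.

```python
def passive_matrix_update_s(rows, pixel_response_s):
    # Rows are written one after another, so the whole frame takes
    # (number of rows) x (time to switch one row of pixels).
    return rows * pixel_response_s

def active_matrix_update_s(rows, pixel_response_s, line_time_s=33e-6):
    # Each row only needs to be charged for one line time; every pixel
    # then switches in parallel, so the frame takes scan time + response.
    return rows * line_time_s + pixel_response_s

rows = 600  # SVGA row count used in the example above
for response_s in (1.5e-3, 100e-3):
    print(f"material response {response_s * 1e3:.1f} ms: "
          f"passive {passive_matrix_update_s(rows, response_s):.2f} s, "
          f"active {active_matrix_update_s(rows, response_s):.3f} s")
```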
15.2.4.3 Active Matrix For those materials that lack a threshold, it is necessary to use active matrix multiplexing. As its name suggests, an active matrix display uses an array of active elements to control the voltage applied to each pixel. The most common kind of active matrix is constructed from thin-film transistors (TFTs). Each row of the display is activated by bringing the corresponding gate line high (in the case of n-type silicon transistors), at which point the pixel electrodes in that row are directly connected to the corresponding column data lines. Current flows through the transistor to charge or discharge the pixel capacitance, which may be augmented by a storage capacitor. When the row is deactivated, the transistor becomes non-conducting, and the pixels in that row are isolated from the data lines for the remainder of the frame. However, because charge is stored in the combined pixel and storage capacitors, the voltage across the pixel remains relatively constant for the remainder of the frame. This is the essence of TFT active matrix drive – electrically, each pixel
responds as if it is being direct-driven. In this way, an active matrix display built from a material with a 100 ms response time can be completely updated in about 120 ms. The construction of a thin-film transistor array is a highly technical process requiring equipment and conditions similar to those used to manufacture silicon ICs. As a result, only a handful of companies around the world are in the business of mass-producing TFT arrays. The cost of a TFT backplane is significantly greater than that of a scribed electrode used in a passive-matrix display, but this differential cost is falling as TFT-LCD displays capture a larger and larger percentage of the total market, including everything from cellular phone displays to large-screen televisions. TFT arrays are also not known for their mechanical flexibility. At the time of this writing, all commercially-produced TFTs are made on glass substrates. Several companies have demonstrated flexible active matrix arrays on substrates ranging from stainless steel to plastic film to ultra-thin glass; displays based on flexible TFTs are predicted to enter the market within the next one or two years, and will almost certainly use one of the imaging materials described in this chapter. More details on flexible displays can be found in Chapter 22.
15.2.4.4 External Addressing Means Addressing and control electronics are typically the most expensive part of a display by far. For this reason, several technologists have proposed creating electronic paper display systems in which the addressing electronics are separate from the imaging film. In this way, the cost of the imaging film can be reduced so that it becomes a semi-disposable component. At the same time, the complex electronics necessary to produce a high-resolution image can be consolidated into a large printing unit with a relatively high cost. If the imaging film possesses good image stability, yet can be erased completely and easily, such a system could act as a photocopier or printer stocked with re-useable paper. A brief survey of work done in this area yields several interesting results. Xerox Corporation’s Gyricon group produced prototype page printers, and also demonstrated an imaging wand that could produce an image by being dragged across a sheet of their bichromal display material [8]. Dai Nippon Printing of Japan has announced that it is developing a reusable paper comprising sheets of polymer-dispersed Smectic-A-phase liquid crystal doped with a dye. DNP’s material is erased by applying an electric field to bring all of the LC domains to the homeotropic state, then written by locally heating the film to induce a transition to a bipolar structure. The dye molecules contained within the liquid crystal do not absorb light when oriented normal to the viewing surface, but absorb strongly when they are positioned parallel to the viewing surface in the bipolar state [9]. Finally, E Ink Corporation has recently received a patent on a structure combining their electrophoretic film with a photoconductive layer. An image can be formed on the film by projecting a pattern of light and dark on the photoconductor while energizing the film [10]. Some may disagree about whether rewritable paper sheets can be considered an information display, but the environmental and cost benefits of such a system could be substantial.
15.2.5 Paper-like Interaction Paper is not just a display; it is also a medium for information storage. Although computers, PDAs and cell phones are readily available, we still turn to index cards and note pads to jot down grocery lists, phone numbers, and reminders. Even a seemingly non-interactive task like reading can be improved by adding the ability to highlight text and make margin notes. Arguably, any true electronic paper display must be capable of acting as an input device as well, if it is to capture the true paper experience. The most common way of adding interactivity to a display is by installing a resistive touch sensor, in which pressure from a fingertip or stylus deforms a flexible front film to bring it into contact with a rigid
second film. By measuring the resistance along different conductive traces on the two films, the location of the contact point between the two films can be triangulated. Resistive touch sensors work well, but they add thickness and weight to the display module, and the addition of two extra films and an air gap can decrease the effective brightness and contrast of the underlying display. In contrast, inductive touch sensors operate by sensing the interaction of an inductor (embedded in the stylus) with an electromagnetic field generated by a printed circuit board located behind the display. Since the sensor PCB is behind the display, there is no impact on the appearance or optical performance of the display. However, an inductive sensor can only be activated with the specialized stylus; fingertips have no effect. The iLiad electronic reader product, announced in early 2006 by iRex Technologies of the Netherlands, is the first electronic-paper-based product to incorporate touch input technology. Using an inductive sensor manufactured by Wacom, users are able to annotate documents and make free-hand notes and drawings [11] (see Figure 15.3, below).
15.3 Particle-based Electro-optic Materials for Electronic Paper 15.3.1 Bichromal Particle Displays The Gyricon™ display, pioneered by Nicholas Sheridon of Xerox, was one of the first serious attempts to create a display with truly paper-like characteristics. Gyricon is an example of a bichromal particle display, in which the image is formed in a disordered array of microscopic spherical particles that are dark on one side and light on the other [8,12]. As in all particle displays, this combination of light absorption and diffuse scattering gives the display a paper-like appearance. The Gyricon bichromal particles are formed by flowing black and white pigment-loaded polymer down opposite faces of a spinning disc. At the perimeter of the disc, the two streams combine into droplets that spin off and solidify before the components can mix, creating spherical particles that are half black and half white. The particles are cast into a sheet of silicone rubber, which is then swollen with oil to increase the size of the cavities containing the particles. The materials comprising the particles are designed so that the two hemispheres adopt opposite electrical charges when in the presence of the oil; in addition, the
Figure 15.3 iRex’s iLiad e-reader device features an 8″ XGA electrophoretic display and an inductive touch sensor. Image courtesy of the Society for Information Display.
relative charging strength of the two sides ensures that all of the particles will have a net charge of a particular polarity. In order to build the finished display, the particle-containing sheet is placed between two planar electrodes. An electric field applied between these electrodes exerts two forces on each particle: a force proportional to the net charge on the particle that pulls it across its cavity; and a torque that attempts to align the charge dipole counter to the applied field by rotating the particle. When there is no field applied, the particles become adhered to the cavity walls by a variety of forces. This particle sticking gives the Gyricon system image stability; the electrophoretic motion of the particles is critical to detaching the particles from the cavity walls so that they can be rotated. Each bichromal particle is approximately 100 microns in diameter, which is large enough to resolve by eye under close examination. In a high-resolution Gyricon display, each pixel comprises only two or three particles. Although half-tone images can theoretically be generated by partially rotating the particles, this results in a spatially non-uniform distribution of pure black and white regions which is too coarse to be seen as a smooth level of gray. Although the individual spheres do exhibit threshold behavior in their response to an electric field, the position of this threshold varies substantially within a display. This makes the Gyricon system unsuitable for passive matrix addressing. Although TFT active matrix addressing of this material has been demonstrated, the relatively high drive voltage required (approximately 80 V) is not compatible with standard amorphous silicon transistor designs. For this reason, the primary drive mechanism used with bichromal ball displays is direct driving of individual pixel electrodes, which is suited only to relatively low-density patterns. In combination with the resolution limit imposed by the size of the bichromal particles, this makes the Gyricon display best suited to low-resolution display applications, such as signage. An interesting twist on the bichromal ball display was described in 2002 by researchers at Oji Paper of Japan. Nicknamed the ‘peas in a pod’ display, it replaces the spherical particles of the Gyricon display with high-aspect-ratio cylinders. According to the inventors, cylinders are preferred to spheres because they have a lower ratio of particle surface area to optical cross section. The use of cylindrical particles is predicted to offer faster switching, fewer threshold effects and more uniform performance. However, no more recent publications on this system have appeared.
15.3.2 Electrophoretic Displays 15.3.2.1 Operating Principle Electrophoretic displays (EPDs) produce image contrast by using an applied electric field to manipulate the position of charged colloidal particles, causing them to migrate between two electrodes. In the simplest case, the charged colloids are suspended in an organic fluid between two planar electrodes, and a light-absorbing dye is dissolved in the fluid. These single-particle EPDs have been thoroughly studied and characterized [13]; a brief introduction will be given in this section to familiarize the reader with their basic operation. In the presence of the suspending fluid, the particles adopt a characteristic surface charge, known as the zeta potential. When an electrical potential is applied across the cell, the particles migrate to the electrode with the opposite charge; for example, if the rear electrode is negatively charged with respect to the front electrode, then a negatively charged particle would move towards the front of the cell. When the particles are at the front of the cell, they scatter light, making the display appear light. When the particles are pushed to the rear, the dye absorbs the incoming light, causing the display to appear dark [14]. The velocity, v, of a charged particle in an electric field is given by:

$$\vec{v} = \mu \vec{E}$$
where $\mu$ is the electrophoretic mobility and $E$ is the local electric field. In the general case, the velocity and electric field are vector quantities, but in the case where the electrodes are planar and semi-infinite, the applied electric field will always be perpendicular to the electrodes; thus, the particles can be caused to move back and forth between the two electrodes by applying an electric field. The transit time for a particle across a cell of thickness d can be derived from this equation:

$$t_{\mathrm{transit}} = \frac{d}{v} = \frac{d}{\mu E} = \frac{d}{\mu (V/d)} = \frac{d^2}{\mu V}$$
Here, we can see that the thickness of the cell affects not just the distance that the particles have to travel, but also the magnitude of the electric field for a given applied voltage. Therefore, decreasing the cell gap by half reduces the response time of the display by a factor of four, to first order. Alternately, a half-thickness cell can be driven at half the voltage, while maintaining the same response time. The other intrinsic display property that appears in the above equation is the electrophoretic mobility. The response time of the display is inversely proportional to the mobility, so increasing this quantity will lead to a faster display. It has previously been shown [15] that the electrophoretic mobility can be calculated by the formula:

$$\mu = \frac{\zeta \varepsilon}{6 \pi \eta}$$

where $\zeta$ is the zeta potential of the electrophoretic particles, $\varepsilon$ is the dielectric constant of the suspending fluid, and $\eta$ is the viscosity of that fluid. From this equation, one can easily see that the mobility, and thus the response speed of an EPD, can be improved by increasing the dielectric constant of the suspending fluid; by increasing the surface charge (zeta potential) of the electrophoretic particle; or alternately by decreasing the viscosity of the suspending fluid. As an extreme case, Section 15.3.2.5 of this chapter describes a particle-based display in which air is the carrier medium. While it is doubtful that the governing equations for electrophoresis would apply to such a display, consider for a moment that air has a viscosity roughly 1/50th that of the suspending fluid described by Comiskey and coworkers. Interestingly, the single-millisecond response times reported for these particle-in-air displays are roughly 50 to 100 times faster than the 100 ms response times commonly observed for EPDs.
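The cell-gap and voltage scaling implied by these equations can be illustrated numerically. In the sketch below (Python, illustrative only), the mobility, cell gap and drive voltage are hypothetical round numbers chosen to show the d²/(μV) dependence rather than measured parameters of any particular EPD.

```python
def transit_time_s(cell_gap_m, voltage_v, mobility_m2_per_vs):
    """Particle transit time t = d^2 / (mu * V) across a planar EPD cell."""
    return cell_gap_m ** 2 / (mobility_m2_per_vs * voltage_v)

mu = 1e-9   # hypothetical electrophoretic mobility, m^2/(V s)
V = 15.0    # hypothetical drive voltage, volts

for gap_um in (50, 25):  # full- and half-thickness cells
    gap_m = gap_um * 1e-6
    t_ms = transit_time_s(gap_m, V, mu) * 1e3
    print(f"cell gap {gap_um} um: transit time {t_ms:.0f} ms")
# Halving the gap at a fixed voltage cuts the transit time by a factor
# of four, as noted in the text.
```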
15.3.2.2 History The idea of building a rewriteable display using electrophoretic particles derives from xerographic printing, in which a toner made of charged particles is adhered to a photosensitive plate containing a latent image pattern, then transferred to a sheet of paper. The first record of an electrophoretic display (EPD) is from a patent filing by Xerox in 1969; Ota, working for Japan’s Matsushita, published his first results in 1973 [16]. Thus, the EPD, rather than being a newcomer to the display world, is actually a contemporary of the twisted nematic LCD – as a point of comparison, the TN field effect was discovered in 1969. Early EPDs suffered from several failure mechanisms related to their cell construction. The most significant problem is that the particles will tend to settle under the influence of gravity. If the display is lying flat, this is not much of an issue, but for a vertical display, all of the particles will end up at the lower edge of the display cell. While it is theoretically possible to solve this problem by matching the density of the particles and solvent, it is difficult to find a combination of a liquid-phase suspending fluid and a solid-phase particle with densities close enough to prevent settling, while fulfilling the other requirements of the display.
Even if the particles are perfectly density-matched, planar EPDs often exhibit non-uniformities due to convective fluid flow [17]. When the display is switched from one state to the other, the suspending fluid is entrained with the particles. However, it is not possible for all of the fluid to flow in the same direction as the particles – this would leave a vacuum behind. Instead, the fluid flows back in the opposite direction in a periodic pattern known as Rayleigh-Benard convection. Some of the particles are carried away with the return fluid flow, leaving a pattern on the viewing electrode. Finally, since the particles can come in direct contact with the display cell walls, some particles may become adsorbed on the viewing electrode of the display. Particle sticking can result in decreased display contrast, and in severe cases the stuck particles can be seen to clump together, giving a speckled appearance to the display. In light of these serious performance roadblocks, it is not surprising that such bi-pane EPDs never achieved commercial success.
15.3.2.3 Microencapsulated Electrophoretic Displays One approach to solving the performance problems of first-generation EPDs is to disperse the electrophoretic suspension into droplets, each one surrounded by a polymeric shell. These microencapsulated electrophoretic displays are not subject to particle settling or convective flow, because the particles are contained within the boundaries of a single capsule. In addition, the particles can no longer come into contact with the display electrodes, eliminating particle sticking. Microencapsulated electrophoretic displays have been demonstrated by E Ink Corporation of Cambridge, Massachusetts, and NOK Corporation of Japan [15]. Microencapsulating the electrophoretic material brings additional benefits in materials handling. By blending the capsules with a polymeric binder material, they can be coated in a roll-to-roll process onto a polymer substrate with a transparent conductive surface film. Finally, an adhesive film with a removable release liner is laminated to the open ink surface. This structure, which E Ink calls a frontplane laminate (FPL) can be laminated to a variety of backplanes to form a complete display cell. The microcapsules provide an internal means of cell gap control, allowing the finished display to be flexed or rolled without affecting its appearance or performance. Although the particles are contained within microcapsules, the achievable resolution of a microencapsulated EPD is determined by the size of the underlying electrodes and the contours of the electric field within the cell, not by the location or size of the microcapsules. Micrographs of switching microencapsulated EPDs clearly show capsules that are bisected by the edge of a pixel electrode. By constructing an electrophoretic display using an ultra-high-resolution backplane, Bouchard and coworkers have demonstrated that a resolution of 400 ppi is achievable using currently available microencapsulated EPD materials. Another performance issue stems from the use of a dye to produce the dark state. When the scattering particles are packed at the front of the display to produce the white state, a small – but nonnegligible – amount of dye remains in the space between the particles. As a result, some of the incoming light is absorbed by the dye before being scattered back to the viewer. This problem can be improved by decreasing the dye concentration; however, a certain amount of dye is required to provide a sufficiently colored dark state. As a further improvement, the absorbing dye can be removed and replaced with a second species of absorbing particle with opposite electrical charge from the colorless scattering particles. Figure 15.4, below, depicts a cross-section of such a dual-particle microencapsulated electrophoretic display. As shown, the particles move in opposite directions under an applied electric field. The colorless scattering particles are still responsible for creating the white state, but the dark state is now formed by a pack of black pigment particles. Unlike in the single particle – dye case, the black particles are not intermixed with the scattering particles in the white state, leading to an increase in the achievable display brightness.
Figure 15.4 Schematic cross-section view of E Ink’s dual-particle microencapsulated electrophoretic display film.
At first glance, one may wonder how it is possible that two oppositely-charged pigment species can be mixed together within the electrophoretic internal phase without immediately binding together into a gray mass. The answer comes from the steric stabilization provided by long-chain surfactant molecules that adsorb to the particle surfaces. Murau and Singer first described the importance of steric stabilization in single-particle electrophoretic displays in 1978 [17]; without some kind of stabilizing layer, the electrophoretic driving force would pack the particles so tightly at the wall that the van der Waals attraction would overcome the electrostatic repulsion and cause the particles to agglomerate irreversibly. In the dual-particle case, electrostatic repulsion becomes electrostatic attraction, but the fundamental principle remains the same: the particles are prevented from making hard-sphere physical contact. E Ink’s microencapsulated dual-particle electrophoretic display material has been reported to show substantial image stability [18]. Although some degradation in contrast and brightness occurs over a short time after switching, the display remains readable for months. Accordingly, a display built with this material only draws power when the image is changed. On the other hand, E Ink has not demonstrated a substantial voltage threshold which would allow for passive matrix addressing; therefore, an active matrix backplane is required for high-resolution matrix displays. The power consumption characteristics of an active matrix electrophoretic display were covered in detail by Pitt and co-workers, who reported that the power required to update an image depends strongly on the complexity of the image data. Further, the authors concluded that, while the power consumption of an active matrix EPD would be comparable to a reflective AM-LCD when updated continuously, applications like electronic books could realize a 90% or greater power savings by duty-cycling the display electronics. Electrophoretic displays are capable of generating continuous-tone grayscale by interrupting the migration of the particles from one electrode to the other; this can be achieved by modulating either the pulse length or the voltage applied to the display. The first commercial active-matrix EPD, produced by E Ink and Philips, achieves four distinct gray levels (white, black, light gray and dark gray) by using a pulse-width modulated driving scheme [19]. More recently, iRex, a Dutch electronic book developer, has announced a product based on E Ink’s EPD material that is capable of displaying 16 gray levels. One key concern for microencapsulated electrophoretic displays is the response time – current products have an image-to-image time of 1000 ms for grayscale images, and 500 ms for black/white transitions. This slow response, coupled with the requirement that each image transition be fully complete before the next transition starts, limits the adoption of EPDs in highly interactive devices like personal digital assistants (PDAs). Research samples of EPD materials with response times of less than 100 ms have been demonstrated which, when commercialized, should be suitable for more input-intensive applications [20].
15.3.2.4 Patterned Array Electrophoretic Displays Another approach to solving the problems with the planar electrophoretic display is to confine the particles in manufactured cavities, instead of in microcapsules. In the case of the Belgium-based display materials company Papyron, the electrophoretic suspension was dispersed into holes laser-drilled through a polymer film; the display cell consists of this film sandwiched between two planar electrodes. More recently, Sipix has developed a method for producing arrays of embossed microcups, which serve as man-made microcapsules [21]. Once these microcups are filled with the electrophoretic suspension, a sealant material is overcoated onto the top surface to enclose the open microcups. As with E Ink’s material, Sipix has demonstrated that a stable image can be maintained after the voltage is removed from the display, enabling low-power operation by disconnecting the power between drive events. Sipix’s electrophoretic films exhibit a threshold voltage, which makes passive matrix addressing possible; nonetheless, with a response time on the order of 100 ms, updating an XGA display would take over a minute; in order to create a high-performance, high-resolution matrix display, a TFT active matrix is required.
15.3.2.5 Gaseous Medium Electrophoretic Displays To a large degree, the switching speed of electrophoretic displays is dictated by the viscosity of the suspending fluid through which the particles travel. As one might imagine, replacing the liquid found in the microencapsulated and microcell displays discussed above with a gas results in a faster-switching display. In collaboration with Kyushu University, Japanese company Bridgestone has developed such a display, which they call a quick-response liquid powder display (QR-LPD) [22]. A QR-LPD display consists of two glass plates, sandwiching a mixture of black and white charged particles in ordinary air. Barrier ribs within the display cell define the cell gap, and also prevent particle settling. Upon the application of a driving voltage to the cell, the particles migrate to the electrode with the opposite potential. One aspect of the QR-LPD that the inventors have not elaborated upon is the charging mechanism for the particles. In this device, air is the suspending fluid, and therefore charging by solution-based charging agents is not possible. The closely-related field of dry toner electrophotography provides some insights into potential charging methods [23]. Dry toner particles are frequently charged by a high-voltage corona discharge, or by triboelectric interactions with other particles, or with the container itself. Although this display is similar in construction and performance to an electrophoretic display, the particle transport process is likely by ballistic motion through the gas contained within the display, rather than true electrophoresis. The QR-LPD cell exhibits a switching threshold, which makes passive matrix addressing possible. Coupled with the extremely fast switching speed of this system (response times of significantly less than 1 ms have been reported), passive matrix addressing of high-resolution displays would be feasible – an XGA display could be completely updated in less than a quarter of a second. In addition, these displays show good image stability, allowing for low-power operation. By substituting plastic films for the glass plates, flexible QR-LPD devices can be constructed; a recent publication reports that such a plastic-substrate display could be bent to a 2 cm radius without ill effects [24].
15.3.3 Color Particle-based Displays So far, all of the display technologies discussed in this chapter have focused on generating monochrome images, whether full black and white or grayscale. However, just as with traditional information displays, color is becoming increasingly important for electronic paper publishing. From textbooks to newspapers to web pages, a large fraction of printed material includes color illustrations and photographs that cannot be rendered effectively on a black-and-white display.
Particle-based display engineers have proposed three fundamentally different technological pathways to achieving color electronic paper. The first, and conceptually simplest, is to add a mosaic color filter atop a monochrome display, in the same manner that liquid crystal displays produce color. The second is to reproduce the arrangement of the color filter within the display cell by producing a patterned array of red-black, green-black and blue-black electrophoretic suspensions. The third method is to produce a display cell capable of producing any color within any area of the display, clearly the most challenging approach, but also potentially rewarding. The feasibility of constructing a color particle-based display using a color filter has been demonstrated effectively by both Bouchard and Duthaler of E Ink [25] and Sakurai of Bridgestone [26]. As all of the authors point out, the addition of a color filter has a negative impact on the brightness of the display, since roughly 2/3 of the light is absorbed by the filter. In collaboration with color filter supplier Toppan Printing, E Ink was able to improve on this figure somewhat by adopting a 4-unit pattern including a white sub-pixel, increasing the maximum brightness. In the second approach to color electronic paper, the color filter is discarded in favor of multicolored pigments, or pigment-dye systems. Bridgestone and Sipix have both demonstrated prototypes of large-area spot color formed by combining regions with different pigments within the same display [26]. In principle, this approach can also be adopted for high-resolution information displays to produce full color. The trick, however, is to deposit the desired pigments or dyes at high resolution, and in tight registration with the addressing electrodes. Sipix has proposed two solutions to this problem. In a 2005 patent, they describe a process for capping their microcells with a photodegradable material, then selectively opening, filling and re-sealing the color components by exposing the film through a photomask [27]. In a later publication, they propose ink-jetting a dye solution into the microcells, then removing the solvent to leave a dry film of the desired dye within each cell. The cells can then be blanket filled with a colorless suspension of particles in internal phase fluid; over time, the dye dissolves in the fluid, providing the background color [28]. The third pathway to a full-color display is to create a display cell that is capable of producing the full range of colors across the entire area of the display. Unlike the patterned mosaic approaches described above, such a display would theoretically have no brightness loss, because the entire display surface would be contributing to the color rendition. In addition, a full-color display of this kind could have the same sub-pixel count as a comparable monochrome display, because multiple sub-pixels would not be needed to produce color. While no practical demonstrations of this class of color displays have been shown to date, E Ink has obtained patents for two possible constructions, one that uses in-plane switching electrodes to shuttle pigment between different regions of a display cell [29], and another that uses an electric field to control the interactions between nano-sized metallic particles to adjust their optical absorption profile [30].
15.4 Particle-based Electronic Paper Products The past three years have seen explosive growth in the quantity and quality of devices equipped with particle-based displays in the market. A brief survey of some of the available e-paper products will serve to demonstrate the broad reach of this new display category. The first commercial high-resolution matrix e-paper display was developed by Philips and E Ink, and was first seen on the market in 2004 in the Sony LIBRIé, an electronic book reader for the Japanese home market. The LIBRIé featured a 6″ diagonal SVGA (800 × 600) TFT active matrix display capable of displaying four gray levels. Sony claimed 10,000 page views were possible before the four AAA-size batteries would need changing. In the fall of 2006, Sony launched their PRS-500 Reader, an updated version of the LIBRIé design for the United States market. The electrophoretic display, while maintaining its size and pixel count, had a nearly 50% faster image update time (one second vs. nearly two seconds for the LIBRIé), and was linked to the Sony Connect online content store, where users could purchase electronic copies of a variety of books and novels. A variety of other e-book readers using the same display module, now supplied by
Prime View International of Taiwan, have also been launched, including the Jinke Hanlin from China, the NUUT from Korea, and the French Cybook from Bookeen. Online bookseller Amazon recently entered the e-book hardware business with its Kindle reader device, also based on the 6″ SVGA module. The Kindle’s distinguishing feature is its built-in 3G wireless modem, which allows readers to purchase and download e-books from nearly any location in the United States. The iRex iLiad has already been mentioned for its inductive touch sensor, located behind its 8″ diagonal XGA (1024 × 768) electrophoretic screen. Originally intended as an electronic newspaper device capable of being synchronized to a central content delivery system, the iLiad has become popular among hobbyists for its open-source Linux operating system and powerful CPU. The iLiad has also been re-branded as the eFlybook, an electronic aeronautical chart reader. The electrophoretic display allows the device to be readable under the widely varying lighting conditions of an airplane cockpit. Another matrix electronic paper product is the Albirey tablet, developed by Hitachi using Bridgestone’s QR-LPD technology. The Albirey is designed as a stationary information display tablet, and includes all of the necessary hardware to download and update messages on the display wirelessly. Although the cost, approximately $3,000 per unit, is prohibitive for mass-market use, field trials of the Albirey as an information terminal have been conducted in Japan’s JR East stations and trains. The passive-matrix 1024 × 768 QR-LPD display in the Albirey is only capable of black/white display, no grayscale. Update time image-to-image is reported to be several seconds, significantly longer than for the active-matrix EPD devices used in the Reader and iLiad. Turning to low-resolution products, the first mass-production consumer device to use a segment-type electronic paper display was the JumpDrive Mercury from Lexar. Launched in 2006, the Mercury is a typical USB Flash memory drive, with the added feature of a 10-bar ‘gas gauge’ that shows how much of the drive’s memory is being used. The image stability of the electrophoretic display material (supplied by E Ink) allows the gauge to continue to display the state of the drive, even months after it is removed from a computer. In 2007, Taiwanese memory manufacturer A-Data produced a limited run of an SD Flash memory card with a display constructed from Sipix’s microcell EPD – instead of indicating space with a bar-style indicator, the A-Data card reported its capacity with a two-digit numeric display. Motorola provided further validation of the mainstream possibilities of e-paper technology when it adopted E Ink’s electrophoretic film for its Motofone F3. Launched in late 2006, the Motofone was designed to appeal to first-time phone buyers in emerging markets. The choice of an electrophoretic display for the Motofone allowed for excellent viewability under a variety of indoor and outdoor lighting conditions, including direct sun in sub-tropical locales. In addition, the ultra-low power consumption of the EPD made possible a typical battery life of one–two weeks with a slim, light-weight battery. In order to save cost, weight and thickness, the Motofone display was constructed using the keypad printed circuit board as the display backplane; this decision restricted the display to two rows of ‘starburst’ pattern alphanumeric characters for output, along with an array of fixed icons.
As a result, the phone is restricted to basic functions: placing and receiving calls, typing and receiving short text messages, and a simple alarm clock. In selecting a monochrome segmented EPD for the Motofone, Motorola deliberately went against the grain of the rest of the mobile phone market, which is rapidly standardizing on high-resolution color transflective AMLCD as the display of choice.
15.5 Conclusion
The strong market demand for more paper-like displays has given rise to a wide variety of new display technologies. While the preceding discussion has focused on the electronic-paper-specific characteristics of flexibility, readability, and ultra-low power, a new display effect must also be manufacturable, cost-effective, and reliable before it can be incorporated into commercial products. A wide array of products with electronic paper displays is now available in the mass market, demonstrating how this new technology can overcome the limitations of traditional information displays to enable new applications.
16 Reflective Cholesteric Liquid Crystal Displays
Deng-Ke Yang
Chemical Physics Interdisciplinary Program in the Liquid Crystal Institute, Kent State University, Kent, Ohio, USA
16.1 Introduction
Liquid crystal displays are dominant in the display market due to the availability of a variety of liquid crystal technologies. They are used in hand-held devices such as calculators, PDAs, cellular phones, and digital cameras, middle-size displays such as laptop and desktop computer monitors, and large-size displays such as flat panel TVs and projection TVs. They can also be operated in various modes such as transmissive mode, reflective mode, and transflective mode to satisfy the requirements of various display applications. Among the liquid crystal display technologies, reflective cholesteric (Ch) display technology is a relatively new technology and can be used for hand-held devices, info-signs, electronic books, and electronic paper [1–3]. Ch liquid crystals exhibit two stable states at zero field: the reflecting planar texture and the non-reflecting focal conic texture. Because of their bistability, Ch liquid crystals can be used to make multiplexed displays on passive matrices, which keeps the manufacturing cost low. Because of the reflection, they do not need power-hungry backlights and, therefore, are energy-saving. The bistability also means that they do not need to be addressed constantly to display static images, saving even more energy. Ch liquid crystals can also be encapsulated to make flexible displays with flexible plastic substrates, making the displays durable and lightweight. Furthermore, they can be fabricated in a roll-to-roll process. The merits of energy-saving, light weight, and durability are highly desirable features for mobile displays.
Figure 16.1 Chemical structure of a typical chiral liquid crystal molecule.
16.2 Basics of Ch Liquid Crystals
Liquid crystals used in displays are usually rod-like organic molecules [4–6]. The simplest liquid crystal is the nematic liquid crystal consisting of rod-like achiral molecules that have reflection symmetry. The long molecular axes of the liquid crystal molecules are statistically aligned along a common direction called the liquid crystal director and denoted by a unit vector n. If the constituent molecules are chiral – i.e., have no reflection symmetry, such as CB15, whose chemical structure is shown in Figure 16.1 – the liquid crystal director is no longer uniform but twists in space [4, 5]. The liquid crystal director n twists around a perpendicular axis known as the helical axis (the z axis in Figure 16.2). In a plane perpendicular to the helical axis (the x–y plane), the long molecular axes are aligned parallel to the director as in nematic liquid crystals. The liquid crystal directors on two neighboring planes, however, twist slightly with respect to each other. It should be noted that in the Ch liquid crystal there is no layered structure; Figure 16.2 is schematic. The distance along the helical axis for the
Figure 16.2 The helical structure of the cholesteric liquid crystal.
director to twist 2π is called the pitch and is denoted by P. The period of the liquid crystal is P/2 because n and −n are equivalent. The chirality q_o of the liquid crystal is defined by q_o = 2π/P. If the z axis of the coordinate is chosen parallel to the helical axis, the director n is given by

$$\vec{n} = n_x\hat{x} + n_y\hat{y} + n_z\hat{z} = \cos(qz + \phi_o)\hat{x} + \sin(qz + \phi_o)\hat{y} \qquad (1)$$

where $\phi_o$ is a constant depending on where the origin of the coordinate is chosen. When q > 0, the helical twisting is right-handed. When q < 0, the helical twisting is left-handed. When q = 0, the pitch is infinitely long, which corresponds to the nematic phase. The first material that was observed to exhibit a cholesteric phase was a cholesteryl derivative [7], and the liquid crystal phase was therefore named the cholesteric phase. Nowadays, most cholesteric liquid crystals used in display applications are actually mixtures of nematic hosts and chiral dopants, and therefore cholesteric liquid crystals are also called chiral nematic liquid crystals. When a nematic host is mixed with m chiral dopants at the concentrations $x_i$ ($i = 1, 2, \ldots, m$), where $x_i$ is the concentration of the ith chiral dopant, the pitch of the mixture is calculated by

$$P = \frac{1}{\sum_{i=1}^{m}(\mathrm{HTP})_i\, x_i} \qquad (2)$$
where $(\mathrm{HTP})_i$ is the helical twisting power of the ith chiral dopant. The HTP of a chiral dopant depends mainly on the chemical structure of the chiral dopant and only slightly on the nematic host. As an example, the HTP of CB15 is about 10 μm⁻¹. When a nematic host is mixed with CB15 at a concentration of 20%, the pitch of the mixture is P = 1/(0.2 × 10 μm⁻¹) = 0.5 μm. When the concentration of the chiral dopant is 0, the pitch is infinitely long. The highest HTP of commercially available chiral dopants is about 30 μm⁻¹ [8]. The pitch of a Ch liquid crystal may be temperature-dependent. There are two main factors affecting the temperature dependence of the pitch. One is the pretransitional effect, which occurs if the material (called a thermochromic Ch liquid crystal) exhibits a smectic-A phase at a temperature below the Ch phase [9]. The smectic-A phase has a layered structure and is incompatible with the Ch helical structure [4]. When the temperature is decreased toward the Ch–smectic-A transition, smectic-A order develops in the Ch phase on short time and length scales due to thermal fluctuations, which tends to dramatically increase the pitch [10–12]. The other factor is thermal contraction. When the temperature is decreased, the intermolecular distance decreases, which may slightly decrease the pitch. In display applications, it is highly desirable that the pitch of the Ch liquid crystal is temperature-independent; therefore, Ch liquid crystals exhibiting a smectic-A phase at lower temperatures should not be used. Experiments show that for Ch liquid crystals with more than one chiral dopant, the pitch is usually less temperature-dependent.
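A minimal numerical sketch of the pitch formula in Equation (2) follows; the single-dopant numbers echo the CB15 example above, while the two-dopant mixture (and its concentrations) is an assumed illustration, not a value from the text.

```python
# Pitch of a chiral-nematic mixture from Equation (2): P = 1 / sum_i (HTP)_i * x_i.
# HTP values are in um^-1, concentrations are weight fractions; the returned pitch is in um.

def cholesteric_pitch(dopants):
    """dopants: list of (HTP in 1/um, concentration as a fraction)."""
    total_twist = sum(htp * x for htp, x in dopants)
    return 1.0 / total_twist

if __name__ == "__main__":
    # Single dopant: CB15 with HTP ~ 10 um^-1 at 20%, as in the worked example
    print(cholesteric_pitch([(10.0, 0.20)]))                 # -> 0.5 um
    # Assumed mixture: a second dopant of opposite handedness partially cancels the twist
    print(cholesteric_pitch([(10.0, 0.20), (-30.0, 0.02)]))  # -> ~0.71 um
```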
16.3 Optics of Ch Liquid Crystals
Liquid crystals possess birefringence because of the elongated molecular shape and orientational order. For light with polarization parallel to the liquid crystal director, the material exhibits the extraordinary refractive index $n_e$, while for light with polarization perpendicular to the liquid crystal director, the material exhibits the ordinary refractive index $n_o$. In a Ch liquid crystal, the liquid crystal director twists in space, and the liquid crystal is a periodic optical medium with a periodicity of P/2. The liquid crystal exhibits Bragg reflection, which is made use of in reflective Ch displays [4, 5].
From Bragg’s law we know that the reflection band is located at the wavelength $\lambda_o = 2\bar{n}(P/2) = \bar{n}P$, where $\bar{n} = (n_e + n_o)/2$ is the average refractive index. Now we quantitatively discuss the optics of a right-handed Ch liquid crystal with pitch P and refractive indices $n_e$ and $n_o$. The liquid crystal is sandwiched between two parallel substrates with the helical axis perpendicular to the substrate. We choose the lab frame in such a way that the z axis is parallel to the helical axis and at the surface of the bottom substrate (z = 0) the liquid crystal director is parallel to the x axis. The components of the liquid crystal director are given by

$$n_x = \cos(qz),\quad n_y = \sin(qz),\quad n_z = 0 \qquad (3)$$

where $q = 2\pi/P > 0$. We consider light propagating in the z direction, whose electric field is in the x–y plane and is given by

$$\vec{E}(z,t) = \vec{A}(z)\,e^{i\omega t} = \begin{pmatrix} A_x(z) \\ A_y(z) \end{pmatrix} e^{i\omega t} \qquad (4)$$

where $\omega$ is the frequency of the light and is related to the wavelength $\lambda$ of light in a vacuum by $\omega = 2\pi c/\lambda$. The dielectric tensor at the optical frequency in the x–y plane of the lab frame is

$$\overset{\leftrightarrow}{\varepsilon}(z) = \varepsilon_\perp\,\overset{\leftrightarrow}{I} + 2\delta\,\vec{n}\vec{n}
= \begin{pmatrix} \varepsilon_\perp + 2\delta n_x^2 & 2\delta n_x n_y \\ 2\delta n_y n_x & \varepsilon_\perp + 2\delta n_y^2 \end{pmatrix}
= \begin{pmatrix} \bar{\varepsilon} + \delta\cos(2qz) & \delta\sin(2qz) \\ \delta\sin(2qz) & \bar{\varepsilon} - \delta\cos(2qz) \end{pmatrix} \qquad (5)$$

where $\delta = (\varepsilon_\parallel - \varepsilon_\perp)/2$, $\bar{\varepsilon} = (\varepsilon_\parallel + \varepsilon_\perp)/2$, $\varepsilon_\parallel = n_e^2$, and $\varepsilon_\perp = n_o^2$. The propagation of the light is governed by Maxwell’s equations:

$$\nabla\times\vec{E} = -\frac{\partial\vec{B}}{\partial t} = -\mu_o\frac{\partial\vec{H}}{\partial t} \qquad (6)$$

$$\nabla\times\vec{H} = \frac{\partial\vec{D}}{\partial t} = \varepsilon_o\,\overset{\leftrightarrow}{\varepsilon}(z)\cdot\frac{\partial\vec{E}}{\partial t} \qquad (7)$$

From Equations (4), (6), and (7) we derive [13–15]:

$$\frac{\partial^2\vec{A}(z)}{\partial z^2} = -k_o^2\,\overset{\leftrightarrow}{\varepsilon}(z)\cdot\vec{A}(z) \qquad (8)$$

where $k_o = \omega/c = 2\pi/\lambda$. There is no eigenmode whose polarization state is invariant in space in the lab frame; consequently, we employ the local frame whose x′ axis is parallel to the liquid crystal director, which twists in space. The angle $\beta$ between the x′ axis and the x axis is $\beta = qz$. The relation between the local frame and the lab frame is described by

$$\hat{x}' = \cos(qz)\hat{x} + \sin(qz)\hat{y} \qquad (9)$$
$$\hat{y}' = -\sin(qz)\hat{x} + \cos(qz)\hat{y} \qquad (10)$$
REFLECTIVE CHOLESTERIC LIQUID CRYSTAL DISPLAYS
447
In the x′–y′ frame, the electric field of the light is

$$\vec{A}' = \begin{pmatrix} A'_x \\ A'_y \end{pmatrix} = \begin{pmatrix} \cos\beta & \sin\beta \\ -\sin\beta & \cos\beta \end{pmatrix}\begin{pmatrix} A_x \\ A_y \end{pmatrix} = \overset{\leftrightarrow}{S}{}^{-1}(\beta)\cdot\vec{A} \qquad (11)$$

where $\overset{\leftrightarrow}{S}$ is the transformation matrix. The dielectric tensor in the local frame is

$$\overset{\leftrightarrow}{\varepsilon}{}' = \overset{\leftrightarrow}{S}{}^{-1}\cdot\overset{\leftrightarrow}{\varepsilon}\cdot\overset{\leftrightarrow}{S}
= \begin{pmatrix} \cos\beta & \sin\beta \\ -\sin\beta & \cos\beta \end{pmatrix}
\begin{pmatrix} \bar{\varepsilon} + \delta\cos(2\beta) & \delta\sin(2\beta) \\ \delta\sin(2\beta) & \bar{\varepsilon} - \delta\cos(2\beta) \end{pmatrix}
\begin{pmatrix} \cos\beta & -\sin\beta \\ \sin\beta & \cos\beta \end{pmatrix}
= \begin{pmatrix} \varepsilon_\parallel & 0 \\ 0 & \varepsilon_\perp \end{pmatrix} \qquad (12)$$

Because the dielectric tensor in the local frame is a constant tensor, we presume that the polarization of the eigenmodes is invariant in space in this frame and is given by

$$\vec{A}'(z) = \vec{A}'_o\,e^{ikz} = \begin{pmatrix} A'_{xo} \\ A'_{yo} \end{pmatrix} e^{ikz} \qquad (13)$$

where $A'_{xo}$ and $A'_{yo}$ are constants. In the lab frame, the electric field is

$$\vec{A}(z) = \overset{\leftrightarrow}{S}(qz)\cdot\vec{A}'(z)
= [A'_{xo}\cos(qz) - A'_{yo}\sin(qz)]e^{ikz}\hat{x} + [A'_{xo}\sin(qz) + A'_{yo}\cos(qz)]e^{ikz}\hat{y}$$
$$= (A'_{xo}\hat{x} + A'_{yo}\hat{y})\cos(qz)e^{ikz} + (-A'_{yo}\hat{x} + A'_{xo}\hat{y})\sin(qz)e^{ikz} \qquad (14)$$

$$\frac{\partial\vec{A}}{\partial z} = (ik)(A'_{xo}\hat{x} + A'_{yo}\hat{y})\cos(qz)e^{ikz} + (ik)(-A'_{yo}\hat{x} + A'_{xo}\hat{y})\sin(qz)e^{ikz}
- q(A'_{xo}\hat{x} + A'_{yo}\hat{y})\sin(qz)e^{ikz} + q(-A'_{yo}\hat{x} + A'_{xo}\hat{y})\cos(qz)e^{ikz} \qquad (15)$$

$$\frac{\partial^2\vec{A}}{\partial z^2} = -(k^2 + q^2)\vec{A} + (i2kq)\vec{B} \qquad (16)$$

where

$$\vec{B} = \{-[A'_{xo}\sin(qz) + A'_{yo}\cos(qz)]\hat{x} + [A'_{xo}\cos(qz) - A'_{yo}\sin(qz)]\hat{y}\}\,e^{ikz} \qquad (17)$$

Equation (8) becomes

$$-(k^2 + q^2)\vec{A}(z) + (i2kq)\vec{B} = -k_o^2\,\overset{\leftrightarrow}{\varepsilon}(z)\cdot\vec{A}(z) \qquad (18)$$
Multiplying both sides by the transformation matrix, we obtain

$$-(k^2 + q^2)\overset{\leftrightarrow}{S}{}^{-1}\cdot\vec{A}(z) + (i2kq)\overset{\leftrightarrow}{S}{}^{-1}\cdot\vec{B} = -k_o^2\,\overset{\leftrightarrow}{S}{}^{-1}\cdot\overset{\leftrightarrow}{\varepsilon}(z)\cdot\overset{\leftrightarrow}{S}\cdot\overset{\leftrightarrow}{S}{}^{-1}\cdot\vec{A}(z)$$
$$-(k^2 + q^2)\vec{A}'(z) + (i2kq)\overset{\leftrightarrow}{S}{}^{-1}\cdot\vec{B} = -k_o^2\,\overset{\leftrightarrow}{\varepsilon}{}'\cdot\vec{A}'(z) \qquad (19)$$

Because

$$\overset{\leftrightarrow}{S}{}^{-1}\cdot\vec{B} = \begin{pmatrix} \cos\beta & \sin\beta \\ -\sin\beta & \cos\beta \end{pmatrix}\begin{pmatrix} -A'_{xo}\sin\beta - A'_{yo}\cos\beta \\ A'_{xo}\cos\beta - A'_{yo}\sin\beta \end{pmatrix}e^{ikz} = \begin{pmatrix} -A'_{yo} \\ A'_{xo} \end{pmatrix}e^{ikz},$$

Equation (19) can be put into the form

$$\begin{pmatrix} n_e^2 k_o^2 - k^2 - q^2 & -i2qk \\ i2qk & n_o^2 k_o^2 - k^2 - q^2 \end{pmatrix}\begin{pmatrix} A'_{xo} \\ A'_{yo} \end{pmatrix} = 0 \qquad (20)$$

For non-zero solutions, it is required that

$$\begin{vmatrix} n_e^2 k_o^2 - k^2 - q^2 & -i2qk \\ i2qk & n_o^2 k_o^2 - k^2 - q^2 \end{vmatrix} = 0 \qquad (21)$$

Defining $k = nk_o$ and $\alpha = q/k_o = \lambda/P$, Equation (21) becomes

$$n^4 - (2\alpha^2 + n_e^2 + n_o^2)n^2 + (\alpha^2 - n_e^2)(\alpha^2 - n_o^2) = 0 \qquad (22)$$

The solutions are

$$n_1^2 = \alpha^2 + \bar{\varepsilon} + (4\alpha^2\bar{\varepsilon} + \delta^2)^{1/2} \qquad (23)$$
$$n_2^2 = \alpha^2 + \bar{\varepsilon} - (4\alpha^2\bar{\varepsilon} + \delta^2)^{1/2} \qquad (24)$$

$n_1^2$ is always positive. $n_2^2$ can be either positive or negative depending on the ratio between the wavelength and the pitch, as shown in Figure 16.3. In the wavelength region around $\bar{n}P$, $n_2^2$ is negative.
Figure 16.3 The curve of the refractive index $n_2^2$ as a function of the wavelength $\lambda/P$. The reflection band corresponds to the region where $n_2^2 < 0$. $n_e = 1.7$ and $n_o = 1.5$.
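A hedged numerical sketch of Equations (23) and (24) follows, using the refractive indices of Figure 16.3; it locates the reflection band as the region where $n_2^2 < 0$. This is illustrative code, not material from the text.

```python
# Eigen indices squared versus alpha = lambda/P for n_e = 1.7, n_o = 1.5 (Figure 16.3 values).
import numpy as np

n_e, n_o = 1.7, 1.5
eps_bar = (n_e**2 + n_o**2) / 2.0      # average dielectric constant
delta   = (n_e**2 - n_o**2) / 2.0      # half the dielectric anisotropy

alpha = np.linspace(1.2, 2.0, 801)      # alpha = q/k_o = lambda/P
root  = np.sqrt(4 * alpha**2 * eps_bar + delta**2)
n1_sq = alpha**2 + eps_bar + root       # Eq. (23): always positive
n2_sq = alpha**2 + eps_bar - root       # Eq. (24): negative inside the band

band = alpha[n2_sq < 0]                 # n2^2 < 0 -> evanescent -> reflected
print(band.min(), band.max())           # ~1.5 and ~1.7, i.e. the band from n_o*P to n_e*P
```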
The corresponding refractive index is imaginary; the light cannot propagate in the liquid crystal, and therefore is reflected. The eigen refractive indices are

$$n_1 = [\alpha^2 + \bar{\varepsilon} + (4\alpha^2\bar{\varepsilon} + \delta^2)^{1/2}]^{1/2} \qquad (25)$$
$$n_2 = [\alpha^2 + \bar{\varepsilon} - (4\alpha^2\bar{\varepsilon} + \delta^2)^{1/2}]^{1/2} \qquad (26)$$

For each eigenvalue, there is an eigenmode. Altogether there are four eigenmodes. Two of the eigenmodes propagate in the +z direction, and the other two eigenmodes propagate in the −z direction. When ‘+’ is chosen from the sign ‘±’, the corresponding eigenmode does not necessarily propagate in the +z direction. From Equation (20) we can calculate the polarization of the eigenmodes:

$$\vec{A}'_{o1} = a\begin{pmatrix} 1 \\ -i[n_e^2 - n_1^2 - \alpha^2]/2\alpha n_1 \end{pmatrix} \qquad (27)$$
$$\vec{A}'_{o2} = b\begin{pmatrix} 1 \\ -i[n_e^2 - n_2^2 - \alpha^2]/2\alpha n_2 \end{pmatrix} \qquad (28)$$

where a and b are the normalization constants. Generally the eigenmodes are elliptically polarized because of the $\pi/2$ phase difference between $A'_{xo}$ and $A'_{yo}$. $n_1$ is always real for any wavelength $\lambda$; $n_2$ can be real or imaginary depending on the wavelength. For reflective Ch displays, we are only interested in their optical behavior in the reflection band, which is located at $\lambda \approx \sqrt{\bar{\varepsilon}}\,P$, i.e., $\alpha = q/k_o = \lambda/P \approx \sqrt{\bar{\varepsilon}}$. Usually $\delta/\bar{\varepsilon} \ll 1$. From Equation (25) we have

$$n_1 = [\bar{\varepsilon} + \bar{\varepsilon} + 2\bar{\varepsilon}]^{1/2} = 2\sqrt{\bar{\varepsilon}} \qquad (29)$$

The corresponding eigenmodes have the polarization

$$\vec{A}'_{o1} = a\begin{pmatrix} 1 \\ -i[n_e^2 - 4\bar{\varepsilon} - \bar{\varepsilon}]/2\sqrt{\bar{\varepsilon}}(2\sqrt{\bar{\varepsilon}}) \end{pmatrix} \approx \begin{pmatrix} 1 \\ i \end{pmatrix} \qquad (30)$$

In the lab frame, the polarization is

$$A_{1x} = [\cos(qz) - i\sin(qz)]e^{ik_o n_1 z} = e^{-i(q - k_o n_1)z}$$
$$A_{1y} = [\sin(qz) + i\cos(qz)]e^{ik_o n_1 z} = i\,e^{-i(q - k_o n_1)z}$$

Because $q - k_o n_1 = k_o\sqrt{\bar{\varepsilon}} - 2k_o\sqrt{\bar{\varepsilon}} = -k_o\sqrt{\bar{\varepsilon}}$, we have

$$\vec{A}_1 = \begin{pmatrix} 1 \\ i \end{pmatrix} e^{ik_o\sqrt{\bar{\varepsilon}}\,z} \qquad (31)$$

Eigenmode 1 is left-handed circularly polarized and propagates in the +z direction with the speed of $c/\sqrt{\bar{\varepsilon}}$. Eigenmode 2 is also left-handed circularly polarized but propagates in the −z direction with the speed of $c/\sqrt{\bar{\varepsilon}}$. Because $\Delta n \ll \bar{n}$, from Equations (26) and (28), respectively, we have

$$n_2 = i[(\alpha - n_o)(n_e - \alpha)]^{1/2} \qquad (32)$$
$$\vec{A}'_{o2} = b\begin{pmatrix} 1 \\ -[(n_e - \alpha)/(\alpha - n_o)]^{1/2} \end{pmatrix} \qquad (33)$$
In the lab frame, the polarization is

$$\vec{A}_2 = b\begin{pmatrix} \cos(qz) & -\sin(qz) \\ \sin(qz) & \cos(qz) \end{pmatrix}\begin{pmatrix} 1 \\ -[(n_e - \alpha)/(\alpha - n_o)]^{1/2} \end{pmatrix} e^{-k_o[(\alpha - n_o)(n_e - \alpha)]^{1/2}z} \qquad (34)$$

When $n_oP < \lambda < n_eP$, i.e., $n_o < \alpha < n_e$, $n_2$ is imaginary. When the refractive index is imaginary, the eigenmode is a non-propagating wave. The light intensity decays as the eigenmode propagates into the liquid crystal. This means that the light is reflected by the Ch liquid crystal; therefore the reflection band extends from $\lambda_1 = n_oP$ to $\lambda_2 = n_eP$. The bandwidth is

$$\Delta\lambda = \lambda_2 - \lambda_1 = \Delta n\,P \qquad (35)$$

The central wavelength of the reflection band is

$$\lambda_o = \tfrac{1}{2}(n_e + n_o)P = \bar{n}P \qquad (36)$$

When $\lambda$ approaches $\lambda_1$, $\alpha = n_o$ and $n_2 = 0$; the polarization of the eigenmode is $\vec{A}'_2 = \binom{0}{1}$, which is linearly polarized perpendicular to the liquid crystal director in the local frame. In the lab frame, the instantaneous electric field pattern of the eigenmode varies in space in the same way as the cholesteric helix and remains perpendicular to the liquid crystal director. When $\lambda$ approaches $\lambda_2$, $\alpha = n_e$ and $n_2 = 0$; the polarization of the eigenmode is $\vec{A}'_2 = \binom{1}{0}$, which is linearly polarized parallel to the liquid crystal director in the local frame. In the lab frame, the instantaneous electric field pattern of the eigenmode also varies in space in the same way as the cholesteric helix and remains parallel to the liquid crystal director. If in the local frame the polarization of the light makes an angle $\theta$ with respect to the liquid crystal director, then the effective refractive index is

$$\alpha = n_e n_o/(n_e^2\sin^2\theta + n_o^2\cos^2\theta)^{1/2} \approx n_e - (n_e - n_o)\sin^2\theta$$

(because $\Delta n \ll \bar{n}$),

$$n_2 = i(n_e - n_o)\sin\theta\cos\theta \qquad (37)$$

and the eigenpolarization is

$$\vec{A}'_{o2} = \begin{pmatrix} \cos\theta \\ -\sin\theta \end{pmatrix} \qquad (38)$$

The angle between the polarization and the liquid crystal director remains at $\theta$ as the light propagates through the liquid crystal. When the wavelength is in the reflection band, in the Ch liquid crystal the eigenmode adjusts its polarization with respect to the liquid crystal in such a way that the instantaneous electric field matches, except for a phase shift, the helical structure of the liquid crystal. The reflection of the Ch liquid crystal can be calculated from the eigenmodes and boundary conditions. In the wavelength region from $\lambda_1 (= n_oP)$ to $\lambda_2 (= n_eP)$, for light which is circularly polarized with the same helical sense as the helix of the liquid crystal, the angle between the electric vector of the light and the liquid crystal director is fixed as the light propagates along the helical axis. The light reflected from different positions is always in phase with respect to other reflected light, and interferes constructively to result in a strong reflection. Next we calculate the reflection of normal incident light from a cholesteric film in the case when the media below and above the Ch film are
Figure 16.4 Schematic diagram showing the electric fields in the Ch liquid crystal.
isotropic and have the refractive index $\bar{n} = (n_e + n_o)/2$, as shown in Figure 16.4. The Ch film has m (integer) pitches. The helix of the liquid crystal is right-handed. The incident light is right-handed circularly polarized. Below the Ch film, there are incident light and reflected light. The amplitude of the incident light is u and the electric field is

$$\vec{E}_i = \frac{u}{\sqrt{2}}\begin{pmatrix} 1 \\ -i \end{pmatrix} e^{ik_o\bar{n}z} \qquad (39)$$

The reflected light is also right-handed circularly polarized and has amplitude r. The field is

$$\vec{E}_r = \frac{r}{\sqrt{2}}\begin{pmatrix} 1 \\ i \end{pmatrix} e^{-ik_o\bar{n}z} \qquad (40)$$

Above the Ch film, there is only the transmitted light, which is right-handed circularly polarized and has amplitude t. The field is

$$\vec{E}_t = \frac{t}{\sqrt{2}}\begin{pmatrix} 1 \\ -i \end{pmatrix} e^{ik_o\bar{n}z} \qquad (41)$$

Generally speaking, there are four eigenmodes inside the Ch film. Two of the eigenmodes (eigenmodes 1 and 2) are left-handed circularly polarized (one propagating in the +z direction and the other in the −z direction). The other two eigenmodes (eigenmodes 3 and 4) are linearly polarized (one propagating in the +z direction and the other in the −z direction). The amplitudes of the left-handed circularly polarized eigenmodes are zero. This can be shown by the argument that their amplitudes do not change when propagating in the Ch liquid crystal and therefore must be zero in order to satisfy the boundary conditions. For the eigenmodes inside the Ch film, the magnetic field scales as $|\vec{B}| = |(1/i\omega)\nabla\times\vec{E}| \approx (q/\omega)|\vec{E}|$, while for the incident, reflected, and transmitted circularly polarized light, $|\vec{B}| = (k_o\bar{n}/\omega)|\vec{E}|$. In and near the reflection band, $k_o\bar{n} \approx q$. If the boundary conditions for the electric field are satisfied, the boundary conditions for the magnetic field are also satisfied. Therefore we do not need to consider the
boundary conditions for the magnetic field. The amplitudes of the two remaining eigenmodes (eigenmodes 3 and 4) inside the Ch film are $v_1$ and $v_2$, respectively. The fields are

$$\vec{E}_{ch} = \frac{v_1}{\sqrt{2}}\begin{pmatrix} 1 \\ -w \end{pmatrix} e^{ik_o n_2 z} + \frac{v_2}{\sqrt{2}}\begin{pmatrix} 1 \\ w \end{pmatrix} e^{-ik_o n_2 z} \qquad (42)$$

where

$$w = [(n_e - \alpha)/(\alpha - n_o)]^{1/2} \qquad (43)$$

Note that the local frame is the same as the lab frame at the bottom and top surfaces of the Ch film because the film has m pitches, and thus the rotation matrix is omitted in Equation (42). The relations between u, r, $v_1$, $v_2$, and t can be found by using the boundary conditions at the surfaces of the Ch film. At the interfaces, the tangential components of the electric field are continuous. The boundary conditions at $z = h = mP$ are

$$t\,e^{ik_o\bar{n}h} = v_1 e^{ik_o n_2 h} + v_2 e^{-ik_o n_2 h} \qquad (44)$$
$$i\,t\,e^{ik_o\bar{n}h} = v_1 w\,e^{ik_o n_2 h} - v_2 w\,e^{-ik_o n_2 h} \qquad (45)$$

From the above two equations we can get

$$v_1 = \frac{t\,e^{ik_o\bar{n}h}}{2w}(w + i)\,e^{-ik_o n_2 h} \qquad (46)$$
$$v_2 = \frac{t\,e^{ik_o\bar{n}h}}{2w}(w - i)\,e^{+ik_o n_2 h} \qquad (47)$$

The boundary conditions at z = 0 are

$$u + r = v_1 + v_2 \qquad (48)$$
$$u - r = -iwv_1 + iwv_2 \qquad (49)$$

From these two equations we get

$$u = \tfrac{1}{2}(1 - iw)v_1 + \tfrac{1}{2}(1 + iw)v_2 \qquad (50)$$
$$r = \tfrac{1}{2}(1 + iw)v_1 + \tfrac{1}{2}(1 - iw)v_2 \qquad (51)$$

The reflectance is given by

$$R = \left|\frac{r}{u}\right|^2 = \left|\frac{(1 + iw) + (1 - iw)(v_2/v_1)}{(1 - iw) + (1 + iw)(v_2/v_1)}\right|^2 = \left|\frac{(w^2 + 1)(1 - e^{i2k_o n_2 h})}{2w(1 + e^{i2k_o n_2 h}) - i(w^2 - 1)(1 - e^{i2k_o n_2 h})}\right|^2 \qquad (52)$$
For a sufficiently thick Ch film, within the reflection band, $e^{i2k_o n_2 h} \approx 0$, and

$$R \approx \left|\frac{w^2 + 1}{2w - i(w^2 - 1)}\right|^2 = 1$$

The thickness dependence of the reflectance can be estimated in the following way. At the center of the reflection band, $\lambda = \lambda_o = \bar{n}P$, $w = 1$, and $n_2 = i\Delta n/2$. The reflectance is given by

$$R = \left[\frac{1 - \exp(-2\pi\Delta n\,h/\bar{n}P)}{1 + \exp(-2\pi\Delta n\,h/\bar{n}P)}\right]^2 \qquad (53)$$

The reflection spectra of the Ch liquid crystal for a few film thicknesses, calculated from Equation (52), are shown in Figure 16.5. For saturated reflection, the film thickness should be about 10 pitches.
Figure 16.5 Reflection spectra of Ch films with various film thicknesses (1P, 2P, 3P, 5P, and 10P) as a function of $\lambda/P$. Refractive indices used are 1.7 and 1.5.
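The sketch below evaluates the reconstructed reflectance expression of Equation (52), using the small-birefringence forms of $n_2$ and $w$ and the refractive indices of Figure 16.5. It is illustrative code under those assumptions, not code from the text.

```python
# Reflectance of a right-handed Ch film for right-handed circularly polarized light,
# versus lambda/P and film thickness in pitches (n_e = 1.7, n_o = 1.5, as in Figure 16.5).
import numpy as np

n_e, n_o = 1.7, 1.5

def reflectance(lam_over_P, pitches):
    a = lam_over_P.astype(complex)                       # alpha = lambda/P
    w = np.sqrt((n_e - a) / (a - n_o))                   # Eq. (43)
    n2 = 1j * np.sqrt((a - n_o) * (n_e - a))             # Eq. (32), imaginary inside the band
    # 2*k_o*n2*h = 2*(2*pi/lambda)*n2*h = 4*pi*n2*(h/P)/(lambda/P)
    phase = np.exp(2j * 2 * np.pi * n2 * pitches / a)    # e^{i 2 k_o n2 h}
    num = (w**2 + 1) * (1 - phase)
    den = 2 * w * (1 + phase) - 1j * (w**2 - 1) * (1 - phase)
    return np.abs(num / den)**2

lam = np.linspace(1.2, 2.0, 800)   # grid chosen to avoid the exact band edges (w diverges there)
for m in (1, 2, 3, 5, 10):
    print(m, reflectance(lam, m).max())   # peak approaches 1 at ~10 pitches, as stated in the text
```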
Because the reflection of Ch liquid crystals is a Bragg reflection, it is dependent on the incident angle of light. The reflection angle equals the incident angle, as shown by the inset in Figure 16.6 below. It is difficult to analytically calculate the reflection of obliquely incident light. In this case a good numerical method to calculate the reflection is the Berreman 4×4 method [16–20]. The numerically calculated reflection spectrum of a Ch liquid crystal for various incident angles is shown in Figure 16.6. The Ch liquid crystal is sandwiched between two glass plates with a refractive index of 1.5. The refractive indices of the liquid crystal are 1.7 and 1.5. The pitch is 400 nm and the cell thickness is 5 μm. The shown angles are the incident angles outside the glass plate. As the incident angle increases, the reflection band is shifted to shorter wavelengths according to $\lambda_o = \bar{n}P\cos\theta_{int}$, where $\theta_{int}$ is the angle between the propagation direction and the helical axis inside the Ch liquid crystal. The background reflectance increases because of the reflection from the glass–air interface. Ambient light is usually unpolarized and can be decomposed into left-handed light and right-handed light. If the Ch liquid crystal is right-handed, the right-handed component is reflected while the left-handed component is transmitted. The maximum reflection of a single layer Ch liquid crystal is 50% for unpolarized light. One layer of left-handed Ch liquid crystal and one layer of right-handed Ch liquid crystal can be stacked to achieve 100% reflection.
Figure 16.6 Reflection spectra of the Ch film for various incident angles.
16.4 Bistable Reflective Ch Display
16.4.1 Bistability
The optical properties of Ch liquid crystals sandwiched between two parallel substrates are dependent on the orientation of the helical axis. A Ch liquid crystal with a short pitch (in the visible light region) exhibits three main textures (also called states) in which the orientation of the helical axis is different [1, 21]. They are the planar (P) texture, focal conic (FC) texture, and homeotropic (H) texture, as shown in Figure 16.7. In the planar texture, the helical axis is more or less perpendicular to the substrate and
Figure 16.7 Schematic diagram of the cholesteric textures.
the material is reflecting. In the focal conic texture, the orientation of the helical axis is more or less random and the material is scattering. In both planar and focal conic textures, the helical structure is preserved. When a sufficiently high voltage is applied across the cell, the Ch liquid crystal (with a dielectric anisotropy Δε > 0) can be switched into the homeotropic texture where the helical structure
is unwound and the liquid crystal director is perpendicular to the substrate of the cell [22]. In this texture, the material is transparent. Because the helical structure is preserved in both the planar and focal conic textures, the free energies of the two states can be about the same and therefore both states can be stable at zero field. Bistable Ch displays make use of the bistability of the reflecting planar texture and the non-reflecting focal conic texture. In order to achieve bistability, the energy difference between the planar and focal conic textures must be small and the energy barrier between them must be high. The energy difference between these two states is mainly attributed to the surface interaction of the liquid crystal with the substrate and elastic energy of defects (liquid crystal director distortion). If there are homogeneous alignment layers coated on the substrates, the surface interaction favors the planar texture. In this case a small amount of polymer can be dispersed in the liquid crystal to create an energy barrier between the planar and focal conic textures such that both textures are stable at zero field; the display is called a polymer stabilized Ch display [23–26]. If there are homeotropic alignment layers or weak homogeneous alignment layers coated on the substrates, the surface energies of the two textures are about the same, and both textures are stable at zero field; the display is called a surface stabilized Ch display [27, 28].
16.4.2 Designs of Reflective Ch Displays
There are several designs of bistable reflective Ch displays. In the first design [26], a black absorption layer is coated on the back substrate as shown in Figure 16.8(a). Sandwiched between the substrates is a single layer of Ch liquid crystal reflecting monochromatic light, say, green light. When the liquid crystal in a pixel is in the planar texture, the pixel has a green appearance. When the liquid crystal in a pixel is in the focal conic texture, the pixel has a black appearance. The contrast between the two states is better if the black layer is on the inner surface of the substrate. In the second design [29], a color (say, blue) absorption layer is coated on the back substrate as shown in Figure 16.8(b). Sandwiched between the substrates is a single layer of Ch liquid crystal reflecting yellow light. When the liquid crystal in a pixel is in the planar texture, yellow light is reflected from the liquid crystal and blue light is reflected from the blue layer; the pixel has a white appearance. When the liquid crystal in a pixel is in the focal conic texture, only blue light is reflected from the blue layer, and the pixel has a blue appearance. In this way, the display can show blue words or pictures on a white background.
Figure 16.8 Schematic diagram of the designs of the monochromatic and color Ch displays.
Full color Ch displays are made from a stack of three cholesteric layers reflecting blue, green, and red light, respectively, as shown in Figure 16.8(c) [30–33]. The order of stacking is important because
of the scattering effect of the material. As will be discussed later, the Ch liquid crystal in the bistable display has multi-domain structures. The refractive index changes abruptly when crossing the domain boundaries. When light propagates down the Ch layers, some light is scattered. The scattering is dependent on the pitch of the Ch liquid crystal. In arranging the order of the Ch layers, another factor to be considered is the reflection bandwidth. Because Δλ = ΔnP and λ_o = n̄P, Δλ = (Δn/n̄)λ_o. The reflection bandwidth of the blue Ch layer is the narrowest, and therefore the blue layer should be on the top of the stack in order to have sufficient reflection. The middle Ch layer should have the opposite handedness so that in the overlapped wavelength region a higher reflection can be obtained.
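A quick numerical check of the bandwidth argument follows; the birefringence, average index, and center wavelengths are assumed illustrative values, not numbers given in the text.

```python
# Delta_lambda = (Delta_n / n_bar) * lambda_0: for equal birefringence, the
# blue-reflecting layer has the narrowest reflection band.
n_bar, delta_n = 1.6, 0.2                                    # assumed values
for name, lam0 in (("blue", 480.0), ("green", 550.0), ("red", 630.0)):
    print(name, round(delta_n / n_bar * lam0, 1), "nm")      # 60.0, 68.8, 78.8 nm
```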
16.4.3 Grayscale Reflection
As discussed in previous paragraphs, Ch liquid crystals exhibit bistability when both the planar and focal conic textures are stable at zero field. On the other hand, Ch displays also have grayscale reflectance, which is highly desired in display applications [25, 34–36]. There is no contradiction between these two statements because of the multi-domain structure of Ch liquid crystals, as schematically shown in Figure 16.9. There are multi-domains in Ch displays due to the dispersed
Figure 16.9 Schematic diagram of the multi-domain structure of the Ch display.
polymer in polymer stabilized Ch displays and the homeotropic anchoring in surface stabilized Ch displays. The helical axes of the domains are not exactly parallel to the cell normal but distributed around the cell normal. Each domain is bistable in the sense that it can be either in the planar texture where the helical axis is parallel to the cell normal, or in the focal conic texture where the helical axis is parallel to the cell substrate. Because the orientations of the helical axes of the domains are different, the voltages under which the domains are switched to the focal conic texture are different. Therefore, it is possible to switch a certain number of domains from the planar texture to the focal conic texture by applying an appropriate voltage. When all the domains are in the planar texture as shown in Figure 16.9(a), the reflectance is the highest. As more domains are switched to the focal conic texture, the reflectance decreases. When all the domains are switched to the focal conic texture, as shown in Figure 16.9(d), the reflectance is at a minimum. A microphotograph of a Ch display in grayscale
reflectance states is shown in Figure 16.10. The dark regions are the domains in the focal conic texture and the bright regions are the domains in the planar texture. The linear domain size is about 10 μm, which is invisible to the naked eye. The pixel size of Ch displays is usually 100 μm or larger and therefore a large number of grayscale levels are possible.
Figure 16.10 Microphotograph of the grayscale states of the Ch display.
16.4.4 Viewing Angle
For a collimated incident light beam, the wavelength of the reflection of a Ch liquid crystal in the perfect planar texture (where the helical axis is exactly parallel to the cell normal) depends on the incident angle. The observed colors of the reflection at different viewing angles are different, which is undesirable in display applications. The viewing angle can be improved by the poly-domain structure discussed in Section 16.4.3. In room light conditions, there is incident light from all directions. At one viewing angle, reflected light from different domains has different colors. In addition, the incident angle inside the Ch liquid crystal is less than 40° when the average refractive index is 1.6. Therefore the color of the reflection does not change much with the viewing angle.
16.4.5 Polymer Stabilized Black–White Ch Display
The effect where the reflection of Ch liquid crystals shifts with incident angle can be used to make black–white Ch displays. When a polymer is dispersed in a Ch liquid crystal, the helical axes of the domains have a distribution around the cell normal [37, 38]. At a particular viewing direction, one can see light reflected from different domains, which has different colors. When the concentration of the polymer is sufficiently high, it is possible for the light reflected from the domains to cover the visible region, and therefore the planar texture has a white appearance. The reflection spectra of a polymer stabilized black–white Ch display are shown in Figure 16.11 below. The planar texture has a high reflectance in the entire visible region and the focal conic texture has a low reflectance. The spectra were measured in the cell normal direction with isotropic incident light. The liquid crystal had a pitch of (620/n̄) nm, where n̄ is the average refractive index of the liquid crystal. For incident light with an incident angle θ_out of 90° outside the glass substrate of the display, the incident angle θ_int inside the liquid crystal is about 39°, and the reflected light has the shortest wavelength of n̄P cos(θ_int) ≈ 480 nm. The reflection bandwidth is about 100 nm, and therefore the reflection covers the entire visible region.
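A short worked check of the numbers quoted above, assuming the average refractive index of 1.6 mentioned in Section 16.4.4 and the (620/n̄) nm pitch; it is a sketch, not code from the text.

```python
# Grazing incidence refracts to ~39 deg inside the liquid crystal, and the
# Bragg-reflected wavelength shifts from 620 nm down to roughly 480 nm.
import math

n_bar = 1.6
P = 620.0 / n_bar                                   # pitch in nm, so n_bar * P = 620 nm
theta_int = math.asin(math.sin(math.radians(90.0)) / n_bar)
print(round(math.degrees(theta_int), 1))            # ~38.7 deg
print(round(n_bar * P * math.cos(theta_int)))       # ~484 nm, close to the ~480 nm quoted
```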
16.4.6 Encapsulated Ch Display
Reflective Ch displays do not need polarizers and therefore can be made with plastic substrates that may have non-uniform birefringence. However, the viscosity and adhesion to the plastic substrates of Ch
Figure 16.11 Reflection spectra of the polymer stabilized black–white Ch display.
liquid crystals are low. It is difficult to maintain a uniform cell gap with a pure Ch liquid crystal. This difficulty can be overcome by using encapsulated Ch liquid crystals. When a Ch liquid crystal is encapsulated in droplet form, the bistability and reflectivity are preserved when the droplet size is larger than approximately five pitches [2, 39–45]. High reflectance is obtained with droplet sizes larger than 10 pitches. Two methods are used to produce encapsulated Ch liquid crystals: phase separation and emulsification. In the phase separation method, the Ch liquid crystal is mixed with monomers or oligomers to make a homogeneous mixture. The mixture is coated on a plastic substrate and then another substrate is laminated on top of the mixture. The monomers or oligomers are then polymerized to induce phase separation. The liquid crystal phase-separates from the polymer to form droplets. In the emulsification method, the Ch liquid crystal, water, and a water-soluble polymer are placed in a container. Water dissolves the polymer to form a viscous solution, which does not dissolve the liquid crystal. When this system is stirred by a propeller blade at a sufficiently high speed, micron-sized liquid crystal droplets are formed. The emulsion is then coated on a substrate and the water is allowed to evaporate. After the evaporation, a second substrate is laminated on to form the Ch display. Encapsulated Ch liquid crystals can be used to make flexible displays with plastic substrates. Encapsulated Ch liquid crystals have a high viscosity and can be coated on flexible plastic substrates in a roll-to-roll process. The polymers used for encapsulation also provide adhesion to the plastic substrates to sustain the cell thickness. A photograph of a flexible display made from an encapsulated Ch liquid crystal is shown in Figure 16.17(b), below.
16.5 Drive Schemes of Ch Displays
16.5.1 Transition Between Ch Textures
Bistable Ch liquid crystals have two stable states at zero field. In order to use them in display applications, there must be a means to switch the liquid crystal between the two stable states. Here we discuss how to use voltage pulses to switch bistable Ch liquid crystals with positive dielectric anisotropy (Δε > 0). When the liquid crystal director is parallel to the externally applied electric field, the electric energy is low and therefore the liquid crystal tends to be aligned parallel to the field. In the Ch display, the inner surfaces of the substrates are coated with a layer of conducting film that
serves as the electrode. The voltage is applied across the two electrodes and the electric field produced is perpendicular to the substrates. Switching of the Ch liquid crystal between the two stable states is shown in Figure 16.12 [1, 25, 46, 47]. If the liquid crystal is initially in the planar texture, the helical
Figure 16.12 Schematic diagram showing method to switch the bistable Ch display.
axis is perpendicular to the substrates while the liquid crystal director is parallel to the substrates. When an electric field is applied across the display cell, the planar texture is unstable because the liquid crystal is perpendicular to the field. The liquid crystal tends to rotate parallel to the electric field. If the applied field is low, the liquid crystal is switched to the focal conic texture where it is no longer perpendicular to the applied field and has a low free energy. If the voltage is turned off when the liquid crystal is in the focal conic texture, it remains in the focal conic texture due to the bistability. To switch the liquid crystal back to the planar texture, it must be switched to the homeotropic texture by applying a high voltage. When the electric field is sufficiently high, the helical structure is unwound and the liquid crystal director is aligned parallel to the electric field everywhere, even though the elastic energy is increased because of the unwinding of the helical structure. The decrease of the electric energy can compensate for the increase of the elastic energy. When the applied high voltage is quickly turned off from the homeotropic texture, the liquid crystal relaxes back to the planar texture. The transition from the planar to the focal conic texture is achieved by applying a low voltage V_L. The electric field produced by the low voltage is not sufficiently high to unwind the helical structure. In the transition the helical axis is tilted away from the cell normal direction [5, 48–50], and therefore the reflection decreases. The energies involved in the transition are the surface energy, elastic energy, and electric energy. The surface energy f_surface is due to the interaction between the liquid crystal and the alignment layers on the substrates, and usually increases when the liquid crystal is transformed from the planar texture to the focal conic texture. The change Δf_surface of the surface energy is positive. The elastic energy f_elastic is due to the bend of the Ch layers, and increases in the transition. The change Δf_elastic of the elastic energy is positive. The electric energy f_electric is due to the interaction between the liquid crystal and the applied electric field E, and is given by f_electric = −(1/2)ε_oΔε(n·E)². In the planar texture, the liquid crystal director n is perpendicular to the field E, so f_electric,P = 0. In the focal
conic texture, the liquid crystal is tilted toward the direction of the electric field, so f_electric,FC < 0. In the transition the electric energy decreases and the change Δf_electric of the electric energy is negative. f_electric ∝ E² = (V/h)², where h is the cell thickness. The applied voltage must be higher than a threshold V_PF in order to induce the transition. At the threshold V_PF, the change of the free energy Δf = Δf_surface + Δf_elastic + Δf_electric = 0. When V > V_PF, Δf < 0, and therefore the transition can take place. It will be shown later that a high V_PF is desired in order to prevent cross-talk problems. The transition from the focal conic texture to the homeotropic texture is achieved by applying a high voltage V_H [4, 22]. The field produced by the high voltage must be higher than the critical field E_c = (π²/P)√(K₂₂/ε_oΔε) to unwind the helical structure. The main energies involved are the elastic energy and electric energy. In the homeotropic texture, the helical structure is unwound and the elastic energy density is f_elastic,H = (1/2)K₂₂(2π/P)². Although the elastic energy is high, the electric energy is very low and can compensate for the elastic energy. In the display application, it is desirable that V_c (= E_c h) is low, which can be achieved by using liquid crystals with a small twist elastic constant K₂₂ and a large dielectric anisotropy. When the field is turned off from the homeotropic texture, there are two relaxation modes. One is the homeotropic–focal conic (HF) mode in which the liquid crystal relaxes from the homeotropic texture into the focal conic texture [3, 51]. The liquid crystal director configuration on a plane parallel to the substrates in this relaxation mode is shown in Figure 16.13(b). Because there is an energy barrier
Figure 16.13 Schematic diagram of the HP and HF relaxation modes.
between the homeotropic texture and the focal conic texture, this mode is a nucleation process. The helical structure grows from nucleation seeds. The transition from the homeotropic texture to the focal conic texture is slow, with a transition time of the order of 10² ms. There is a hysteresis in the transition between the focal conic and homeotropic textures. The threshold voltage for the transition from the focal conic texture to the homeotropic texture is V_c, while the threshold voltage for the transition from the homeotropic texture to the focal conic texture is V_HF ≈ 0.9V_c. The second relaxation is the homeotropic–planar (HP) mode, in which the liquid crystal relaxes from the homeotropic texture to the planar texture [3, 52–54]. The liquid crystal director configuration in the plane perpendicular to the substrates in this mode is shown in Figure 16.13(a). This mode occurs when the applied voltage is reduced below the threshold V_HP = [(2/π)√(K₂₂/K₃₃)]V_c ≈ 0.4V_c. The transition from the homeotropic texture to the planar texture takes place in two steps. First the liquid crystal transforms from the homeotropic texture to a planar texture with a pitch P_t = (K₃₃/K₂₂)P ≈ 2P, which is known as the transient planar texture. The free energy of the transient planar texture is higher than that of the (stable) planar texture with pitch P; therefore the transient planar texture is unstable. In the second step the liquid crystal transforms from the transient planar texture to the stable planar texture. The transition from the homeotropic texture to the transient planar texture is a homogeneous transition with a transition time of about 1 ms, which is much faster than the transition time of the HF mode. The transition from the transient planar texture to the stable planar texture is also a nucleation process with a transition time of about 10² ms [55–57]. Once the liquid crystal is in the transient planar texture and the applied voltage is removed, the liquid crystal can only transform into the stable planar texture. In summary, if the applied voltage is reduced from the homeotropic texture to a value between V_HP ≈ 0.4V_c and V_HF ≈ 0.9V_c, the liquid crystal can only relax to the focal conic texture. If the applied voltage is reduced below V_HP, the HP mode dominates and the liquid crystal relaxes into the planar texture.
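The following sketch puts numbers on the unwinding field and the relaxation thresholds discussed above. The material parameters (K₂₂, K₃₃, Δε, pitch, and cell gap) are typical values assumed for illustration, not values given in this chapter.

```python
# Critical field E_c = (pi^2/P)*sqrt(K22/(eps0*Delta_eps)), V_c = E_c*h,
# and the relaxation thresholds V_HP = (2/pi)*sqrt(K22/K33)*V_c ~ 0.4*V_c.
import math

eps0 = 8.854e-12            # vacuum permittivity, F/m
K22, K33 = 7e-12, 17e-12    # assumed twist and bend elastic constants, N
delta_eps = 15.0            # assumed dielectric anisotropy
P = 0.35e-6                 # assumed pitch, m
h = 5e-6                    # assumed cell thickness, m

E_c = (math.pi**2 / P) * math.sqrt(K22 / (eps0 * delta_eps))
V_c = E_c * h
V_HP = (2.0 / math.pi) * math.sqrt(K22 / K33) * V_c

print(round(E_c * 1e-6, 1), "V/um")   # ~6.5 V/um unwinding field
print(round(V_c, 1), "V")             # ~32 V across the 5 um cell
print(round(V_HP / V_c, 2))           # ~0.41, consistent with V_HP ~ 0.4*V_c
print(round(K33 / K22, 1))            # ~2.4, so the transient planar pitch P_t ~ 2P
```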
16.5.2 Response of the Bistable Ch Liquid Crystal to Voltage Pulses
To design drive schemes for the bistable Ch display, its response to voltage pulses must be studied. In the measurement, a voltage pulse with a certain time interval is applied to the Ch material, and the reflectance of the material is measured at a time when the reflectance no longer changes after the removal of the pulse. Because of the memory effect of the Ch material, its response depends on its initial state, as shown in Figure 16.14 [25]. The dashed line shows the response of the Ch material initially in the planar texture, which is
Figure 16.14 Response of the Ch display to voltage pulses: reflectance after the pulse versus pulse voltage V, for material initially in the planar texture and initially in the focal conic texture; V1–V6 mark the threshold voltages.
obtained by applying a high voltage pulse with an amplitude higher than V6. When the voltage of the pulse is below V1, the Ch material remains in the planar texture during and after the pulse. When the voltage of the pulse is increased above V1, some domains are switched into the focal conic texture during the pulse and remain there after the pulse, and thus the reflectance decreases after the pulse. The higher the voltage of the pulse, the more domains are switched to the focal conic texture and the lower the reflectance becomes. When the voltage of the pulse reaches V2, all of the domains are switched to the focal conic texture and a minimum reflectance is reached. The region from V1 to V2 is the best region in which to achieve grayscale reflectance. When the voltage of the pulse is increased above V3, some domains are switched to the homeotropic texture and the remaining domains are switched to the focal conic texture during the pulse. The domains switched to the homeotropic texture relax to the planar texture after the pulse, and therefore the reflectance increases again. When the voltage of the pulse is increased above V5, all domains are switched to the homeotropic texture during the pulse and relax to the planar texture after the pulse, and the maximum reflectance is obtained. The solid line shows the response of the Ch material initially in the focal conic texture, which is obtained by applying a low voltage pulse with an amplitude between V2 and V3. When the voltage of the pulse is below V4, the Ch material remains in the focal conic texture during and after the pulse. When the voltage of the pulse is increased above V4, some domains are switched to the homeotropic texture and the other domains stay in the focal conic texture during the pulse. The domains switched to the homeotropic texture relax to the planar texture after removal of the pulse, and therefore the reflectance increases. When the voltage of the pulse is increased above V6, all the domains are switched to the homeotropic texture during the pulse and relax to the planar texture after the pulse, and the maximum reflectance is obtained.
16.5.3 Conventional Drive Scheme for Ch Displays
Ch liquid crystals exhibit bistability and may be used to make multiplexed displays on passive matrices. The simplest drive scheme for the bistable Ch display is the conventional drive scheme [3, 25]. In this drive scheme, a high voltage pulse with voltage V_H is used to select the planar texture. When the high voltage pulse is applied, the Ch material is switched into the homeotropic texture during the pulse and relaxes to the planar texture after the pulse, as shown in Figure 16.15(b). The high voltage V_H is chosen to be equal to or higher than V6 shown in Figure 16.14. The pulse width is usually a few tens of ms and can be reduced by applying higher voltages. A low voltage pulse with voltage V_L is used to select the focal conic texture. When the low voltage pulse is applied, the Ch material is switched into the focal conic texture during the pulse and remains in the focal conic texture after the pulse, as shown in Figure 16.15(a). The low voltage V_L is chosen to be equal to V3 shown in
Figure 16.15 Schematic diagram of the conventional drive scheme.
Figure 16.14. The pulse width is usually a few tens of ms and cannot be reduced by applying higher voltages, which would switch the liquid crystal into the homeotropic texture during the pulse and result in the planar texture afterwards. In the drive scheme, the display is addressed one line at a time. The row voltage for the row being addressed is chosen to be V_s = (V6 + V3)/2. The row voltage for the rows not being addressed is chosen to be V_ns = 0. The column voltage to select the planar texture is V_on = −(V6 − V3)/2. The voltage applied across the pixel is V_s − V_on = (V6 + V3)/2 − [−(V6 − V3)/2] = V6; therefore, the liquid crystal in the pixel is switched to the homeotropic texture during addressing and relaxes into the planar texture after addressing. The column voltage to select the focal conic texture is V_off = +(V6 − V3)/2. The voltage applied across the pixel is V_s − V_off = (V6 + V3)/2 − [+(V6 − V3)/2] = V3, and therefore the liquid crystal in the pixel is switched to the focal conic texture during addressing and remains there afterward. The voltage applied across the pixels not being addressed is either V_ns − V_on = 0 − [−(V6 − V3)/2] = (V6 − V3)/2 or V_ns − V_off = 0 − [+(V6 − V3)/2] = −(V6 − V3)/2. The absolute value is (V6 − V3)/2, which must be smaller than V1 so that the column voltage does not change the state of the pixels in the rows not being addressed and thus does not cause cross-talk problems. The time to address one line is a few tens of ms, which may be too slow for high information density displays.
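A small sketch of the voltage bookkeeping above; the numeric values of the thresholds V1, V3, and V6 are placeholders chosen only to make the arithmetic concrete, not values from the text.

```python
# Conventional drive scheme: row/column voltage arithmetic for addressed and
# non-addressed pixels on a passive matrix.
V1, V3, V6 = 12.0, 20.0, 40.0        # assumed thresholds of Figure 16.14, in volts

V_s   = (V6 + V3) / 2                # row voltage, addressed row
V_ns  = 0.0                          # row voltage, non-addressed rows
V_on  = -(V6 - V3) / 2               # column voltage selecting the planar texture
V_off = +(V6 - V3) / 2               # column voltage selecting the focal conic texture

print(V_s - V_on)                    # = V6: pixel driven to homeotropic -> relaxes to planar
print(V_s - V_off)                   # = V3: pixel driven to focal conic
print(abs(V_ns - V_on), abs(V_ns - V_off))   # = (V6 - V3)/2, which must stay below V1
```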
16.5.4 Dynamic Drive Scheme for Ch Displays
A fast drive scheme for the bistable Ch display is the dynamic drive scheme, which consists of three phases as shown in Figure 16.16 below [58–60]. This drive scheme makes use of the fast homeotropic–transient planar transition and the hysteresis in the transition between the focal conic texture and the homeotropic texture. Figure 16.16(a) shows how the planar texture is selected. In the preparation phase, a high voltage V_P (> V_c) is applied to switch the Ch liquid crystal into the homeotropic texture. The preparation phase must be long enough to switch the liquid crystal into the homeotropic texture and is usually about 50 ms. The selection phase is determined by the homeotropic–transient planar transition time, and is short, about 1 ms. In the selection phase a high selection voltage V_on (> V_HP) is applied. Because the voltage is higher than V_HP, the liquid crystal cannot relax into the transient planar texture. Even though the voltage may be lower than V_HF, the liquid crystal cannot relax to the focal conic texture either, because of the short time interval. At the end of the selection phase, the liquid crystal is still in the homeotropic texture. In the evolution phase, the applied voltage V_E is between V_HF and V_c. The liquid crystal remains in the homeotropic texture because the liquid crystal is in the homeotropic texture at the beginning of this phase and V_E > V_HF. After the evolution phase, the voltage is turned off and the liquid crystal relaxes from the homeotropic texture into the planar texture. Figure 16.16(b) shows how the focal conic texture is selected. In the preparation phase, the Ch liquid crystal is also switched into the homeotropic texture by the applied high voltage. In the selection phase, a low selection voltage V_off (< V_HP) is applied. Because the voltage is lower than V_HP, the liquid crystal relaxes into the transient planar texture. The selection phase must be long enough to allow the liquid crystal to relax into the transient planar texture and is usually about 1 ms. In the evolution phase, the transient planar texture is unstable because of the applied evolution voltage V_E. In this phase the liquid crystal is switched into the focal conic texture but not the homeotropic texture, because V_E < V_c. The evolution phase must be long enough to switch the liquid crystal from the transient planar texture to the focal conic texture and is usually about 50 ms. After the evolution phase, the liquid crystal remains in the focal conic texture. In selecting either the planar texture or the focal conic texture, the voltages in the preparation phase and evolution phase are respectively the same. The voltages are only different in the short selection phase. Although the preparation and evolution phases are long, the frame time (the time to address the display from the top to bottom) can be reduced by using a pipeline algorithm where multiple lines are simultaneously put into the preparation and evolution phases. Only one line is in the selection phase at a time. If the time intervals of the preparation, selection, and evolution phases are t_P, t_S, and t_E,
Figure 16.16 Schematic diagram of the dynamic drive scheme.
respectively, the frame time to address an n-line display is t_P + t_E + n·t_S. This frame time is about 1 s, which is fast enough for e-book and e-paper applications. Based on the dynamic behavior of Ch liquid crystals, several more drive schemes have been designed for bistable Ch displays [61–65].
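The frame-time estimate can be checked directly; the phase durations are the typical values quoted above, while the line count n = 800 is an assumption chosen for illustration.

```python
# Pipelined dynamic drive scheme: frame time = t_P + t_E + n * t_S.
t_P, t_S, t_E = 50e-3, 1e-3, 50e-3   # preparation, selection, evolution durations (s)
n_lines = 800                         # assumed number of display lines
frame_time = t_P + t_E + n_lines * t_S
print(round(frame_time, 3))           # 0.9 s, i.e. about 1 s per frame
```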
16.6 Conclusion
Bistable reflective Ch liquid crystals have many merits and are well suited to mobile display applications that do not require video rate. Because of the bistability, they can be used to make highly multiplexed displays on passive matrices and therefore the displays can be manufactured at low cost. When the displays show static images, they do not need to be constantly addressed and thus are energy-saving. Because of the relatively high reflectivity, Ch displays do not need power-hungry backlights, which reduces the power consumption further. They do not need active matrices or polarizers and can be encapsulated; therefore they can be used to make flexible displays with plastic substrates in a roll-to-roll process, which makes the displays light weight and durable (Figure 16.17).
Figure 16.17 (a) A hand-held device with a Ch display manufactured by Kent Displays Inc. (photo courtesy of Kent Displays Inc.).
Figure 16.17 (b) A flexible reflective Ch display manufactured by Kodak (photo courtesy of Kodak).
References
[1] Wu, S.-T. and Yang, D.-K. (2001) Reflective Liquid Crystal Displays, New York: John Wiley & Sons, Inc.
[2] Doane, J. W. and Khan, A. (2005) 'Cholesteric liquid crystals for flexible displays', in G.P. Crawford (Ed) Flexible Flat Panel Displays, Chichester: John Wiley & Sons, Ltd.
[3] Yang, D.-K., Huang, X.Y. and Zhu, Y.-M. (1996) Bistable cholesteric reflective displays: material and drive schemes, Annual Review of Materials Science, 27, 117.
[4] de Gennes, P. G. and Prost, J. (1993) The Physics of Liquid Crystals, New York: Oxford University Press.
[5] Chandrasekhar, S. (1997) Liquid Crystals (2nd edn), New York: Cambridge University Press.
[6] Hall, A. W., Hollingshurst, J. and Goodby, J.W. (1997) 'Chiral and achiral calamitic liquid crystals for display applications', in P. J. Collings and J.S. Patel (Eds) Handbook of Liquid Crystal Research, New York: Oxford University Press.
[7] Reinitzer, F. (1888) Monatsh. Chem., 9, 421.
[8] E. Merck catalogue.
[9] Sage, I. (1990) 'Thermochromic liquid crystal devices', in B. Bahadur (Ed) Liquid Crystals – Applications and Uses, Vol. 3, New Jersey: World Scientific Inc.
[10] Lim, K.-C. and Ho, J. T. (1981) Mol. Cryst. Liq. Cryst., 67, 199.
[11] Pindak, R. S., Huang, C. C. and Ho, J. T. (1974) Phys. Rev. Lett., 32, 43.
[12] Keating, P. N. (1969) Mol. Cryst. Liq. Cryst., 8, 315.
[13] Yeh, P. and Gu, C. (1999) Optics of Liquid Crystal Displays, New York: John Wiley & Sons, Ltd.
[14] Yariv, A. and Yeh, P. (1984) Optical Waves in Crystals, New York: John Wiley & Sons, Inc.
[15] Yang, D.-K. and Wu, S.-T. (2006) Fundamentals of Liquid Crystal Devices, New York: John Wiley & Sons, Inc.
[16] Berreman, D. W. (1972) J. Opt. Soc. Am., 62, 502.
[17] Berreman, D. W. and Scheffer, T. J. (1970) Phys. Rev. Lett., 25, 577.
[18] Berreman, D. W. and Scheffer, T. J. (1970) Mol. Cryst. Liq. Cryst., 11, 395.
[19] St. John, W.D., Fritz, W. J., Lu, Z. J. and Yang, D.-K. (1995) Phys. Rev. E, 51, 1191.
[20] Xu, M., Xu, F.D. and Yang, D.-K. (1998) J. Appl. Phys., 83, 1938.
[21] Blinov, L. M. and Chigrinov, V. G. (1994) Electrooptical Effects in Liquid Crystal Materials, New York: Springer-Verlag Inc.
[22] Meyer, R.B. (1969) Appl. Phys. Lett., 14, 208.
[23] Yang, D.-K., Doane, J.W., Yaniv, Z. and Glasser, J. (1994) Appl. Phys. Lett., 65, 1905.
[24] Yang, D.-K., Chien, L.-C. and Doane, J.W. (1991) Proc. Intl. Display Res. Conf., 49.
[25] Yang, D.-K. and Doane, J.W. (1992) SID Intl. Symp. Digest Tech. Papers, 23, 759.
[26] Doane, J. W., Yang, D.-K. and Yaniv, Z. (1992) Proc. Japan Display '92, 73.
[27] Lu, Z.-J., St. John, W.D., Huang, X.-Y. et al. (1995) SID Intl. Symp. Digest Tech. Papers, 26, 172.
[28] Yang, D.-K., West, J.L., Chien, L.C. and Doane, J.W. (1994) J. Appl. Phys., 76, 1331.
[29] Lu, M.H., Yuan, H. J. and Yaniv, Z. (1996) 'Color reflective liquid crystal display', US Patent 5,493,430.
[30] Hashimoto, K., Okada, M., Nishguchi, K. et al. (1998) SID Intl. Symp. Digest Tech. Papers, 29, 897.
[31] Davis, D., Kahn, A., Huang, X.-Y. and Doane, J. W. (1998) SID Intl. Symp. Digest Tech. Papers, 29, 901.
[32] West, J. L. and Bodnar, V. (1999) Proc. 5th Asian Symp. on Information Display, 29.
[33] Davis, D., Hoke, K., Khan, A. et al. (1997) Proc. Intl. Display Research Conf., 242.
[34] Huang, X.-Y., Miller, N., Khan, A. et al. (1998) SID Intl. Symp. Digest Tech. Papers, 29, 810.
[35] Gandhi, J., Yang, D.-K., Huang, X.-Y. and Miller, N. (1998) Proc. Asia Display '98, 127.
[36] Xu, M. and Yang, D.-K. (1999) SID Intl. Symp. Digest Tech. Papers, 30, 950.
[37] Ma, R.Q. and Yang, D.-K. (1997) SID Intl. Symp. Digest Tech. Papers, 28, 101.
[38] Ma, R.Q. and Yang, D.-K. (1999) J. SID, 7, 61.
[39] Yang, D.-K., Lu, Z.J., Chien, L.C. and Doane, J. W. (2003) SID Intl. Symp. Digest Tech. Papers, XXXIV, 959.
[40] Shiyanovskaya, Green, S., Magyar, G. and Doane, J. W. (2005) SID Intl. Symp. Digest Tech. Papers, XXXVI, 1556.
[41] Schneider, T., Nicholson, F., Kahn, A. and Doane, J. W. (2005) SID Intl. Symp. Digest Tech. Papers, XXXVI, 1568.
[42] Hiji, N., Kakinuma, T., Araki, M. and Hikichi, Y. (2005) SID Intl. Symp. Digest Tech. Papers, XXXVI, 1560.
[43] Stephenson, S. W., Johnson, D. M., Kilburn et al. (2004) SID Intl. Symp. Digest Tech. Papers, XXXV, 774.
[44] McCollough, G. T., Johnson, C. M. and Weiner, M. L. (2005) SID Intl. Symp. Digest Tech. Papers, XXXVI, 64.
[45] Yang, D.-K. (2006) Journal of Display Technology, 2, 32.
[46] Greubel, W., Wolf, U. and Kruger, H. (1973) Mol. Cryst. Liq. Cryst., 24, 103.
[47] Dir, G. A., Wysock, J. J., Adams, J. E. et al. (1972) Proc. of SID, 13 (2), 106.
[48] Lavrentovich, O.D. and Yang, D.-K. (1998) Phys. Rev. E, 57, Rapid Communications, R6269.
[49] Helfrich, W. (1970) Appl. Phys. Lett., 17, 531.
[50] Hurault, J. P. (1973) Journal of Chem. Phys., 59, 2068.
[51] Mi, X.-D. (2000) 'Dynamics of the transitions among cholesteric liquid crystal textures', Dissertation, Kent State University, Ohio.
[52] Yang, D.-K. and Lu, Z.-J. (1995) SID Intl. Symp. Digest Tech. Papers, 26, 351.
[53] Kawachi, M. and Kogure, O. (1977) Japanese Journal of Applied Physics, 16, 1673.
[54] Kawachi, M., Kogure, O., Yosji, S. and Kato, Y. (1975) Japanese Journal of Applied Physics, 14, 1063.
[55] Watson, P., Sergan, V., Anderson, J. E. et al. (1998) SID Intl. Symp. Digest Tech. Papers, 29, 905.
[56] Watson, P., Anderson, J. E., Sergan, V. and Bos, P.J. (1999) Liq. Cryst., 26, 1307.
[57] Lu, M.-H. (1997) Journal of Applied Physics, 81, 1063.
[58] Huang, X.-Y., Yang, D.-K., Bos, P. J. and Doane, J.W. (1995) SID Intl. Symp. Digest Tech. Papers, 26, 347.
[59] Huang, X.-Y., Yang, D.-K., Bos, P.J. and Doane, J.W. (1995) J. SID, 3, 165.
[60] Huang, X.-Y., Yang, D.-K., Stefanov, M. and Doane, J.W. (1996) SID Intl. Symp. Digest Tech. Papers, 27, 359.
[61] Zhu, Y.-M. and Yang, D.-K. (1997) SID Intl. Symp. Digest Tech. Papers, 28, 97.
[62] Ruth, J., Hewitt, R. and Bos, P.J. (1997) Proc. Flat Panel Display, '89.
[63] Yu, F. H. and Kwok, H. S. (1997) SID Intl. Symp. Digest Tech. Papers, 28, 659.
[64] Yip, W. C. and Kwok, H. S. (2000) SID Intl. Symp. Digest Tech. Papers, 31, 133.
[65] Kozachenko, A., Sorokin, V. and Oleksenko, P. (1997) Proc. Intl. Display Research Conf., 35.
17
BiNem1 Displays: From Principles to Applications
Jacques Angelé,1 Cécile Joubert,1 Ivan Dozov,1 Thierry Emeraud,1 Stéphane Joly,1 Philippe Martinot-Lagarde,1,2 Jean-Denis Laffitte,1 François Leblanc,1 Jesper Osterman,1 Terry Scheffer,3 and Daniel Stoenescu1
1 Nemoptic S.A., Magny-les-Hameaux, France
2 Paris-Sud University, Orsay, France
3 Motif Corp., Hilo, Hawaii, USA
17.1 Introduction
A great explosion of academic interest in liquid crystals started in the 1960s. Two major players were the team led by Jerry Ericksen and later Frank Leslie, and the team formed by Pierre-Gilles de Gennes and his colleagues of the Orsay Liquid Crystal Group. The invention of BiNem1 displays is the latest of a series of remarkable achievements that were born in the Orsay Liquid Crystal Group, founded in 1968 by Georges Durand in the Laboratoire de Physique des Solides, Paris-Sud University. This group made major contributions to the study of the bulk properties of liquid crystals: participation in the discovery of ferroelectricity, discovery of ordoelectricity, study of flexoelectricity, instabilities of nematics and smectics, elasticity and viscosity of smectic and columnar liquid crystals. In 1984, Durand's group began theoretical and experimental studies of surface effects in liquid crystals, primarily the anchoring of nematic liquid crystals on surfaces. Their discovery, in 1988, of a nematic bistable anchoring on SiOx gave the first idea that a nematic bistable
display might be possible using bistable surfaces, like the ferroelectric display, but which would be easier to achieve because nematics are easier to orient without defects than smectics are. If it is possible to change the orientation of the molecules on the surface from one position to another one, then why not try to simply reorient them on the surface tail to head? A rotation on the surface by 180° gives a twist in the cell bulk of 180°. It changes the optical properties of the cell, allows a good contrast ratio, and seems to be simple because the surface needs to orient the molecules only along one direction. This discovery was, and remains, a completely new effect, quite different from the volume-switched devices proposed by Berreman or the cholesteric displays developed by the Liquid Crystal Institute, Kent, USA. In a way, the BiNem1 display is closer to ferroelectric displays than to the other nematic-based LCDs. It shares with the smectic C* chiral device bistability, simple and efficient optics, and short switching times, while benefiting from the robustness and simple manufacturing processes of nematic liquid crystal devices. In 1999, a start-up company, Nemoptic S.A., was founded to develop and prepare for commercial production of BiNem1 displays. The French national research organization CNRS supplied Nemoptic with all the intellectual property, including the original patents and know-how. Soon thereafter, the first BiNem1 polymeric alignment layers were developed, paving the way to volume manufacturing. BiNem1 displays are liquid crystal-based electronic paper displays. They maintain static images at zero power and do not need a backlight or frontlight in most indoor environments. They provide, like physical paper, excellent legibility, a white background, high resolution and wide viewing angles. Their strong power-saving features make it possible to create new nomadic equipment. BiNem1 displays can be manufactured in STN manufacturing plants, re-using the existing LCD ecosystem and supply chain with only limited adjustments, material adaptations and fine tuning of a few key process steps.
17.2 Liquid Crystal Textures of BiNem1 Displays
17.2.1 Bulk Textures
The typical BiNem1 device uses a thin (d < 2 µm) sandwich-type nematic cell with simple monostable alignment on both substrates [1,2], with the directors lying in the same vertical plane (Figure 17.1). The resulting bistable bulk textures are the uniform planar texture U and the half-turn (180°) twisted texture T.
Figure 17.1 Bistable bulk textures: uniform planar texture U and half-turn (180°) twisted texture T.
The optimal architecture implies two different anchorings on the plates. On the upper ('master') plate the anchoring is slightly tilted (a few degrees) and has a strong zenithal (polar, out-of-plane) anchoring energy, Wz1 > 10⁻³ J/m². On the bottom ('slave') plate the anchoring is weaker (Wz2 ≅ 3·10⁻⁴ J/m²) and the pre-tilt is very small.
Figure 17.11 Texture transformations in the cell after the anchoring breaking by static/dynamic competition: (a) Initial unstable equilibrium after the anchoring breaking field removal; (b) Slow decrease of the field from E > Ec down to 0; (c) Instantaneous (fast) switch-off of the field followed by spontaneous relaxation from B to T.
This π-bend texture B is unstable (due to K33 > K22) and it relaxes by continuous bulk distortion to the π-twisted state T [Figure 17.11(d)].
17.4.2 Control of the Switching To control the switching at the bifurcation, we use the coupling of the director between the two plates of the cell. Two different couplings exist: the static coupling always produces the uniform state U at the end of the command pulse, while the dynamic coupling always ‘writes’ the T texture. In both cases the strongly anchored ‘master’ plate emits a ‘command’ signal, the broken anchoring ‘slave’ plate receives it and switches.
(a) Static Coupling
Let us suppose a continuous decrease of the field from E0 > Ec down to 0, avoiding any dynamical effects (Figure 17.12).
Figure 17.12 Static coupling induced switching toward the uniform state U.
Close to the 'unbroken' master plate the bulk director is distorted. It relaxes exponentially down to the bottom plate, with a small tilt of the order of θelast ≈ θ0·exp(−d/ξE) (Figure 17.12a). For thin cells this elastic coupling is sufficient to lift the bifurcation degeneracy in the absence of other effects. When the field decreases slowly below Ec, the tilt increases, keeping the same sign (Figure 17.12b), and the sample always relaxes toward the quasi-uniform U-state (Figure 17.12c).
(b) Dynamic Coupling
If we turn off the field instantaneously, starting from E0 > Ec, we will favor dynamic coupling. The master plate director rotates rapidly back toward its equilibrium orientation θ0 (Figure 17.13a). This creates a surface flow V, diffusing rapidly into the bulk down to the broken slave plate. The shear velocity gradient V/d (Figure 17.13b) applies a hydrodynamic torque on the slave plate, with a sign opposite to the elastic tilt. Then the cell relaxes rapidly (under the increasing anchoring torque) to the transient π-bent state B (Figure 17.13c) and then to the π-twisted texture T. We note that the hydrodynamic effect, proportional to ξE/d, usually dominates over the elastic coupling, proportional to exp(−d/ξE); with a fast field decrease the final T state is always obtained.
(c) Final State Selection
To select the final state we control the elastic and the hydrodynamic couplings by varying the electric pulse's trailing edge.
If we turn off the field instantaneously, we always obtain the B and T textures, i.e. we ‘write’ the pixel.
Figure 17.13 Hydrodynamically induced switching toward the π-bend state B (relaxing spontaneously to the π-twisted texture T).
To 'erase', i.e. to produce the U-texture, we can simply decrease the field gradually. The estimation of the field decrease time τ needed to erase the pixel gives τ ≈ 1 ms for d ≈ 1 µm. To shorten the erasing pulses, one can use a two-step decrease of the field, with an intermediate value close to the static breaking threshold. The slave surface relaxation time then becomes infinite and damps out most of the hydrodynamic effect[10]. It is important to note that the strengths of the elastic and hydrodynamic couplings are quite different: in order to obtain the 'erased' state U one needs to completely damp the backflow hydrodynamics, which is very efficient in thin cells.
17.4.3 Switching by 'First Order' Breaking of Slightly Tilted Anchoring
Up to now we have assumed that the pre-tilt of the slave plate is zero. However, the pre-tilt can be process-tuned to be small, but not equal to zero. This leads to another possible scenario for the BiNem1 switching, a 'first order' breaking. The 'first order' breaking of slightly tilted anchoring is characterized by the existence of two different thresholds, an 'erasing' threshold EU for the transition T-to-U, and a 'writing' threshold ET for the transition U-to-T.

The breaking of a strictly planar (zero pre-tilt) anchoring, described above, is a textural transition of 'second order'. By symmetry, this kind of transition is possible only when the electric field pulls the surface director toward the maximum of the anchoring energy function. With the usual 'vertical' field (along the cell normal), only strictly planar anchorings can be broken; for tilted anchorings the surface energy maximum is not parallel to the cell normal. A few years ago it was proposed[14] that even tilted anchorings could be broken in some special cell geometries, e.g. parallel tilt in a half-turn twisted texture (Figure 17.14a). This texture transition is closer to a first order transition, with an out-of-equilibrium irreversible jump of the surface director orientation θS during the anchoring breaking.

In Figure 17.14 we present schematically how the first order anchoring breaking can be used to 'erase' the BiNem1 device, making the elastic coupling much more efficient. We start from the twisted T texture realized with slightly tilted anchoring on the slave plate (Figure 17.14a). Increasing the field, the surface director on the slave plate rotates toward[14] the unstable orientation π/2 − θe, the maximum of the anchoring energy (Figure 17.14b, c). At E = Ec, the torque equilibrium is no longer possible[14,15] and a first order transition takes place: the director orientation jumps to the other side of the cell normal (Figure 17.14d). Upon field removal, the texture relaxes to the U texture [Figure 17.14(e), (f)] under the anchoring torques acting independently on both surfaces. To go back from U to T, i.e. to rewrite the pixel, we can use the hydrodynamic coupling as before, shutting off the field sharply (Figure 17.14d). It should be noted, however, that now the static coupling is very efficient: the tilt of the surface director away from the cell normal in Figure 17.14(d) is ψ, the pre-tilt angle of the slave plate. In practice, rewriting the pixel is possible only if the pre-tilt is very small (ψ ≤ 1°).

Figure 17.14 Erasing by first order anchoring breaking transition on a tilted slave plate.

The main advantage of the BiNem1 operation with first order anchoring breaking is the possibility of using strong enough couplings for both writing and erasing and of achieving the switching with extremely short pulses. The theoretical limit of the time to write and erase one display row is defined mainly by the bulk liquid crystal viscosities, and for usual nematics this can be as low as a few microseconds. Obviously, as in the case of planar anchoring breaking, in order to achieve this kind of behavior we need a very strict control of the anchoring on the slave plate, with low anchoring strength (Lz ≈ 30 nm) and small pre-tilt.
17.4.4 Grayscale An interesting property of the BiNem1 displays is their grayscale capability. Even though the display presents only two stable textures, one which appears black and the other one white, gray levels can be visually achieved when the black and white domains are too small to be resolved individually by the eye. Therefore two problems have to be solved: (1) How to obtain stable coexistence of the U and T textures within the pixels? (2) How to achieve partial switching of the pixels into U and T domains?
17.4.4.1 Stability of Gray Levels
U and T textures are topologically distinct and cannot transform one into another even when they coexist inside the same pixel. However, the U and T domain sizes can change if the disclination line that separates the domains moves. We have seen that the slave plate, because it has a weaker zenithal anchoring energy Wz than the master plate, attracts and transforms the mobile bulk disclination line into a surface disorientation wall (Figure 17.2b) that has orders of magnitude lower mobility. The wall mobility decreases when the BiNem1 azimuthal anchoring energy Waz increases, and reaches nearly zero when Waz > 3·10⁻⁵ J/m², a moderately strong azimuthal anchoring. Increasing Waz also improves the wall pinning on the weak alignment surface. An additional stabilization is provided by equalizing the bulk twist energies of the two textures by suitable chiral doping.
In practice, when the methods described above are combined, the wall that separates the U and T domains reaches a very stable configuration over a large temperature range, ensuring infinite bistability even in gray level states of the pixel with coexisting U and T textures.
17.4.4.2 Pixel Partial Switching The partial switching of the pixels can be achieved by two methods:
modulation of the zenithal anchoring energy, or use of existing spatial non-uniformities of the zenithal anchoring energy on the BiNem1 alignment layer (‘random microdomains’); or
modulation of the liquid crystal flow velocity inside the pixel (‘curtain effect’). The latter method is now preferred because it does not require any additional process step to obtain the suitable anchoring characteristics and is more reproducible.
17.4.4.3 Random Microdomains This method makes use of spatial non-uniformities of the anchoring energy on the slave surface to induce a variation of the T-texture threshold over the surface of the pixel. Figure 17.15 shows the transmission that results in one pixel, plotted versus the signal amplitude. Figure 17.16 shows the photos of the same pixel in the states designated L2 to L7 in Figure 17.15.
Figure 17.15 Optical response of a BiNem1 sample versus the driving voltage.

Anchoring energy variation exists on the slave surface at the microscopic scale. If the anchoring energy ranges from, say, Wz_min to Wz_max, it produces some variation of the threshold voltage, say from Vc_min to Vc_max. In the given example Vc_min is about 10.5 V and Vc_max about 13.5 V. There is no anchoring breaking at all below the lower limit, and the pure initial U texture remains unchanged. The whole pixel surface switches toward the T texture when the driving voltage exceeds the higher threshold. Any driving voltage between Vc_min and Vc_max produces partial switching (i.e. a mix of U and T textures), giving macroscopic gray levels.
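A minimal Python sketch of this mechanism is given below; it assumes, purely for illustration, that the local thresholds are uniformly spread between Vc_min and Vc_max (the actual distribution on a real slave surface is not specified in the text), and returns the fraction of the pixel area switched to the bright T texture.

# Sketch: T-domain area fraction of a pixel when the local anchoring breaking
# threshold is spread between Vc_min and Vc_max ('random microdomains').
# A uniform threshold distribution is assumed here for illustration only.
Vc_min, Vc_max = 10.5, 13.5   # threshold spread quoted in the text (V)

def t_fraction(V):
    if V <= Vc_min:
        return 0.0            # no anchoring breaking: pure U texture
    if V >= Vc_max:
        return 1.0            # whole pixel switches to T
    return (V - Vc_min) / (Vc_max - Vc_min)

for V in (10, 11, 12, 13, 14):
    print(f"V = {V:4.1f} V -> T-domain area fraction = {t_fraction(V):.2f}")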
Figure 17.16 Photos of the same pixel for six different grey levels. The states designated L2 to L7 are the same states as the ones referred to in Figure 17.15.
17.4.4.4 Grayscale with the 'Curtain Effect'
This method uses the curtain effect to control continuously the transmission of one pixel. Figure 17.17 shows an example of 'curtain effect' grayscale.
Figure 17.17 Example of gray levels achieved with the curtain effect. Magnified view (left) of a detail of the displayed image (right). The gray levels appear spatially uniform to the eye at the distance of observation.
In a pixel the T state is obtained by hydrodynamic coupling between the master plate and the slave plate. When the field is switched off, the nematic director, in going back to its equilibrium state, creates a flow. If the velocity of this flow is higher than a threshold value (Figure 17.18), the torque induced by the shear on the slave plate is higher than the torque created by the pre-tilt angle and the twisted state is obtained; otherwise the uniform state appears. If the flow velocity in the pixel is non-uniform, it is possible to separate the pixel into two domains, one in the T texture, the other one in the U texture. The separation line between the domains is located where the LC flow reaches the threshold speed (Figure 17.18). The separation line can be moved from one edge of the pixel to the opposite one by adding a uniform velocity to the velocity gradient. The velocity gradient is produced by flow propagating into the pixel from the neighboring rows, created by the column signal and/or by an additional signal applied to those rows. The uniform velocity is generated by the master plate when the row voltage switches from V1 to V2 and can be adjusted by the value of V2. The filling ratio of T and U domains inside every pixel is thus controlled by the suitable choice of the row V2 voltage and the amplitude of the column pulses. The magnification of the displayed image in Figure 17.17 shows the pixels separated into two parts, one black and the other white. These microscopic domains create the sensation of spatially uniform gray levels at the distance of observation.
Figure 17.18 LC flow velocity along the rubbing axis when row n is addressed. The signal applied to the pixels of row n is the two-level voltage waveform (left), with V1 the anchoring breaking voltage and V2 the selection voltage. A sketch of the velocity of the liquid crystal flow in the middle of the cell is presented when V1 is switched to V2. The column signals allow adjusting V2 and thus controlling the gray levels of the different pixels of the row n written at this time.
The ‘curtain effect’ grayscale method is compatible with simple industrial processes aimed at achieving a well defined zenithal anchoring energy on the BiNem1 substrates, as spatially uniform as possible.
17.5 Specific BiNem1 Materials
To obtain low switching thresholds, optimal optical performance and long-term bistability, the BiNem1 display requires custom-made materials[16] (nematic mixture and slave plate alignment layer) with well defined anchoring properties. The most important surface properties we need are:

Weak zenithal anchoring strength. In practice, we need a low enough anchoring breaking threshold, Uc = d·Ec = d·Wz/√(K·ε0·Δε) < 30 V, implying Wz < 5·10⁻⁴ J/m² (see the numerical sketch below).

Strong azimuthal anchoring strength, typically Wa > 5·10⁻⁵ J/m², is needed to stabilize the bistable textures against defect propagation. High Wa also improves the optical contrast by decreasing the twist of the U-state due to the chiral doping of the nematic.

Pre-tilt ψ on the weak anchoring plate. The best conditions for BiNem1 operation are obtained for ψ < 0.5°.

The temperature dependence of the surface properties should be as weak as possible, the BiNem1 temperature range being limited above by too weak Wa and below by too strong Wz.
The anchoring should also remain stable in time even under strong torques – easy axis gliding and other anchoring memory phenomena can seriously disturb the BiNem1 switching. The first demonstration[1] of the switching by anchoring breaking was reported with the pure nematic compound 5CB on an evaporated SiOx anchoring layer – the only known system at that time with weak enough anchoring. This model system has confirmed the feasibility of the BiNem1 display and its basic performance, but these materials are not appropriate for real devices produced on an industrial scale.
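The following short sketch evaluates the anchoring breaking voltage Uc = d·Wz/√(K·ε0·Δε) for representative numbers; the elastic constant K and the dielectric anisotropy Δε are not specified in the text, so the values used here are assumptions chosen to be typical of nematic mixtures.

import math

eps0  = 8.854e-12   # vacuum permittivity (F/m)
K     = 1.0e-11     # assumed typical nematic elastic constant (N)
d_eps = 25.0        # assumed dielectric anisotropy (dimensionless)
d     = 1.5e-6      # cell gap (m)
Wz    = 4e-4        # zenithal anchoring energy (J/m^2)

Ec = Wz / math.sqrt(K * eps0 * d_eps)   # anchoring breaking field (V/m)
Uc = d * Ec                             # anchoring breaking voltage (V)
print(f"Ec = {Ec/1e6:.1f} V/um, Uc = {Uc:.1f} V")

With these assumed values the breaking field is of the order of 10 V/µm and Uc stays comfortably below 30 V, in line with the requirement stated above.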
The extensive research program at Nemoptic has resulted in a breakthrough in the area of weak anchoring materials for industrial applications. Several weak anchoring polymer alignment materials have been developed and optimized. The processing of these materials is straightforward using standard industrial equipment (flex printer, rubbing machine, etc.), and is suitable for high volume production. Mixtures of classical nematic compounds have been shown to be compatible with these alignment layers, giving low anchoring breaking thresholds (below 10 V/µm at room temperature), low pre-tilts (controlled by the choice of the process) and a large temperature range of operation (typically from 0 °C to 50 °C).
17.5.1 Polymer Alignment Layers
For the 'master' plate the BiNem1 device needs strong anchoring, both zenithal and azimuthal, and a moderate pre-tilt (a few degrees). This anchoring is quite similar to that needed for traditional LCD technologies, e.g. STN, and commercial polyimide (PI) alignment materials can be used for its implementation without any further development of the material or its processing. Commercial alignment polymers, however, are not suitable for the 'slave' plate due to their strong zenithal anchoring energy (typically Wz ≈ 10⁻³ J/m² at room temperature). The first step in our R&D program was the selection of polymer materials showing the lowest possible zenithal anchoring strength with some typical nematic materials, e.g. 5CB. More than a hundred polymers and co-polymers were evaluated, both commercially available ones and ones that we specifically designed and synthesized. The study showed that the zenithal anchoring strength is defined mainly by the chemical structure of the polymer and by its swelling by the nematic (defining the organization of the nematic–polymer 'composite' at the interface), and it is influenced only in a minor way by the physical treatments during the layer processing (rubbing, curing, etc.). Several families of polymers with sufficiently weak zenithal anchoring have been identified, the most promising being some PVC-containing co-polymers [17] and several classes of PI with statistically grafted short side-chains[18]. Typically, the selected materials showing weak or moderate Wz have insufficient intrinsic azimuthal anchoring strength. We observed that Wa, and sometimes the pre-tilt angle, are very sensitive to the processing of the alignment layer, and can be controlled over a large range by the thickness of the deposited polymer film, the rubbing strength, the curing temperature, etc. This process adaptation enabled us to develop several specific BiNem1-compatible polymer alignment layers (BP11, BP16 and, recently, BP16B; Nemoptic's proprietary materials and processes). These 'slave'-plate anchoring layers can be processed on existing STN industrial lines without major modification and with good reproducibility of the anchoring properties over large area surfaces.
17.5.2 Weak Anchoring Nematic Mixtures Apart from the crucial surface anchoring properties, BiNem1 nematic mixtures should satisfy some bulk property requirements:
a large enough nematic temperature range, usually larger than −20 °C to +60 °C;

strong positive dielectric anisotropy, typically Δε > 20;

birefringence Δn adapted to the cell thickness to optimize the optical properties;

low rotational viscosity for fast switching and relaxation of the display;

the elastic anisotropy is not critical since it plays only a minor role in the BiNem1 technology;

high electrochemical stability under strong fields (up to 20 V/µm).
These bulk properties are similar to the usual LCD requirements and only a limited effort is needed to adapt them for the BiNem1 display. In contrast, weak anchoring mixtures have not been developed before. Dedicated research has been necessary[19,20] to guide the choice of the weak-anchoring components, to establish the mixing rules and to optimize the mixture in order to satisfy the target anchoring properties in combination with a given anchoring layer. The choice of the components is based on measurements of Wz in common families of nematics. In Figure 17.19 we present the temperature dependence of Wz for 5CB on three different weak anchoring substrates. The observed steep increase of Wz with the nematic order parameter S, approximately Wz ∝ S^b with b = 4–7, is confirmed for all weak anchoring nematics. As expected, the anchoring strength variation with the reduced temperature 1 − T/Tc is approximately the same within each homologous series (for the same substrate).
Figure 17.19 Temperature dependences of the anchoring breaking threshold Ec of the nematic 5CB on three different alignment layers (SiO, BP11 and BP16).
From one series to the other, however, the anchoring strength variation can be spectacular[19]; e.g. the replacement of one phenyl ring in the 5CB molecule by a saturated (cyclohexane) ring increases Wz by a factor of 3 (point 18 on Figure 17.20). In a large number of cases it was impossible to measure the anchoring strength of a pure compound, due to a very high value of Wz, a narrow nematic temperature range, or a low Δε. These compounds have been studied in binary mixtures with 5CB, and the results have been extrapolated to 100% concentration of the studied molecule. This approach, quite usual for most of the bulk properties, is not obvious for anchoring properties: the interaction of the nematic with the alignment polymer can be strongly non-linear due to a segregation of the mixture on the surface and other selective physico-chemical interactions. However, limiting our studies to diluted solutions, we observed experimentally that the Wz of the mixture agrees reasonably well with a weighted linear combination of the Wz of the components. For a small concentration c the anchoring breaking field of the mixture can be approximated as:

Ec = (1 − c)·E5CB + c·Ee    (15)

where Ee is the extrapolated threshold for the pure compound.
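In practice the extrapolation amounts to inverting equation (15) for Ee from a measured binary-mixture threshold. The short Python sketch below illustrates this; the numerical values are purely illustrative, not measured data from the figures.

# Sketch: extrapolating the anchoring breaking threshold of a pure compound
# from a dilute binary mixture with 5CB, by inverting equation (15).
E_5CB = 8.0      # illustrative breaking threshold of pure 5CB on the layer (V/um)
c     = 0.10     # concentration of the studied compound in the mixture
E_mix = 8.9      # illustrative measured threshold of the binary mixture (V/um)

E_e = (E_mix - (1.0 - c) * E_5CB) / c
print(f"extrapolated threshold of the pure compound: {E_e:.1f} V/um")
print(f"relative change (Ee - E5CB)/E5CB = {(E_e - E_5CB)/E_5CB:.2f}")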
Figure 17.20 Extrapolated anchoring breaking thresholds, plotted as (Ee − E5CB)/E5CB versus the clearing temperature difference Te − T5CB, of the studied nematic compounds on the BP11 alignment layer; anchoring 'poisons' lie in the upper part of the plot and useful compounds in the lower part.
In Figure 17.20 we plot the values of Ee/E5CB − 1 for different nematic compounds. We note that the large error bars are due to the extrapolation from low-concentration data, and that for some nematics the extrapolated threshold Ee is negative, due to the non-linearity of the mixing laws for the anchoring strength. The data presented in Figure 17.20 enable us to make the initial choice of the main mixture components. The most interesting compounds are those in the lower part of the figure, with anchoring strength comparable to or weaker than that of 5CB. To obtain a large temperature range and to adjust the mixture clearing temperature Tc we need components with dissimilar values of Tc, if possible distributed over the entire range of the abscissa axis. The compounds in the upper part of the figure can be considered as anchoring 'poisons': even a small amount of them seriously increases Wz. These components should be avoided even if they are expected to improve some secondary properties of the mixture; their price in anchoring energy is too high. Finally, the compounds lying between these two extreme groups, with moderate anchoring strength, are very useful to adjust the final mixture properties. Typically, we need up to 30–40% of these compounds in order to promote the miscibility of the low anchoring strength components, to increase the electrochemical stability and to decrease the viscosity. Several weak anchoring mixtures were designed [21], specially adapted to the BiNem1 technology and to our rubbed polymer weak anchoring alignment layers. In order to improve the electrochemical stability of the mixtures, we tested the components and their simple mixtures for long-term stability under strong fields (E = 20 V/µm). Two mixtures, N372 and N467, were optimized for BiNem1 switching. Their driving voltage is lower than 30 V over the operating temperature range. In Figure 17.21 we show the anchoring breaking voltage versus temperature for these liquid crystal mixtures on the BP16 alignment layer. The anchoring breaking voltage is renormalized to the optimal cell thickness given by d·Δn = λ/2 for the two mixtures, which combine components with high Δε and low anchoring strength and avoid anchoring poisons. The basic physical properties of the two optimized mixtures are presented in Table 17.3, below.
Figure 17.21 Anchoring breaking voltage of two weak anchoring mixtures on the BP16 alignment layer.
Table 17.3 Some basic properties of Nemoptic BiNem1 anchoring mixtures.

Mixture                  N372               N467
Nematic range            −20 to +59 °C      −24 to +62 °C
Δε at 25 °C              29                 26
Δn at 25 °C              0.198              0.166
Wz on BP16 at 25 °C      4.2·10⁻⁴ J/m²      3.6·10⁻⁴ J/m²
Wa on BP16 at 25 °C      4.0·10⁻⁵ J/m²      2.2·10⁻⁵ J/m²
The present work demonstrates the way to achieve BiNem1 nematic mixtures that are compatible with weak zenithal anchoring surfaces, low pre-tilt, high electrochemical stability and wide temperature range. Further improvement of these mixtures, by introducing new components and by matching the alignment layer process and the mixture composition, is underway.
17.6 BiNem1 Manufacturing Process
17.6.1 Structure of BiNem1 Displays
The structure of a reflective BiNem1 display is shown in Figure 17.22. It differs from usual LCDs by a thinner cell gap (typically 1.5 µm) and by the use of two specific materials: the BiNem1 anchoring layer and the liquid crystal mixture.
17.6.2 Manufacturing Process
The outline of the fabrication flow chart is summarized in Figure 17.23, below. The BiNem1 display manufacturing process flow is highly compatible with standard cell process equipment as used for STN-LCD or TFT-LCD manufacturing[22]. The key pieces of equipment are the flex-printing machines, post-process machines, rubbing machines, spacer deposition, assembly machines and cleaning machines.
Figure 17.22 Internal structure of a reflective BiNem1 cell. The texture U appears black and T white in the standard two-polarizer configuration.
Figure 17.23 BiNem1 displays manufacturing process flow (front-end, back-end, and module assembly).
The major processing factors are related to the BiNem1 anchoring layer process and to the thin cell gap. The design-dependent tooling includes two ITO masks, flexographic printing plates and a limited number of other items, making the display's customization simple, fast and cost effective.
17.6.2.1 BiNem1 Polymer Solution The polymer materials used for the BiNem1 anchoring layer are Nemoptic proprietary materials. We use commercially available standard solvents for preparing the polymer solutions in ready-for-use form. Solvent dilution, solution filtering, conditioning and quality control are carried out in order to allow mass-production of BiNem1 LCDs with the proper material supply conditions. The strong anchoring layer is deposited using commercially available polyimide solutions of which there is a broad range of selection. Alignment layers deposition and characterization Both PI and BiNem1 alignment layers are deposited using conventional flexographic printing techniques. In stable temperature and humidity conditions, we reach excellent uniformity and
repeatability levels better than 10% over the mother glass and over a complete lot. AFM and optical microscope observations of the cured layers demonstrate good flatness and a pinhole-free microstructure. We have set up, under pilot line conditions, a Statistical Process Control (SPC) by measuring the layer thickness using spectroscopic ellipsometry. We have developed proprietary measurement software and optical modeling.

Post-deposition process
The post-deposition process aims at stabilizing the layer properties in order to optimize the working and storage temperature ranges. After deposition of the weak anchoring layer, the layers are cured and then exposed to UV light. The UV exposure is followed by a solvent rinsing step. UV exposure and solvent rinsing can be carried out using upgraded LCD-based production tools.

Rubbing of the weak alignment layer
Optimization of the rubbing parameters for the weak anchoring layer was carried out on an industrial rubbing machine, which provided the desired anchoring properties as required for BiNem1 operation: weak zenithal anchoring energy, low pre-tilt angle, strong azimuthal anchoring energy. Commercial velvet rubbing cloths were used.

Thin cell gap structure
The critical particle size is mostly determined by the gap value (1.5 µm), which is significantly lower than in TFT-LCD (3 to 4 µm) or STN-LCD (5 to 6 µm) structures. Limitation of particle contamination is of paramount importance. By introducing a SiO2 top-coat process in the manufacturing flow as well as extensive cleaning processes (USC dry-cleaning or DIW wet-cleaning, both being compatible with the weak-anchoring film process), one can reduce the particle contamination to a level that is acceptable from a manufacturing yield point of view. We have reached yield values related to particle contamination above 95%, as measured on a 2.8-inch test vehicle in our pilot manufacturing operation. For larger LCDs, it may become necessary to upgrade the local environment at the assembly process to a cleanliness class up to ISO 5. Furthermore, in addition to the cleanliness, we monitor the spacer density by an SPC technique in order to ensure suitable cell gap repeatability within one lot and from one lot to another.

LC filling and end-sealing processes
The LC-filling cycle is carried out in conventional LC filling chambers. A few process parameters, such as the LC contact time, needed to be fine-tuned for the thinner cell gap. For the same reason, the end-seal parameters, such as the cell press time, needed fine-tuning as well.

Polarizer lamination and LCM assembly
Commercial polarizer films can be used to optimize the T and U texture optical performance. The lamination process conditions and robustness window are identical to standard LCD manufacturing. Finally, BiNem1 LCDs are compatible with any standard driver packaging (TAB, TCP, COG) and do not impose any specific constraints in terms of manufacturing processes.
17.7 Passive Matrix Addressing We will focus in this section on addressing schemes for black and white passive-matrix driven BiNem1 displays. Their structure is shown in Figure 17.24.
Figure 17.24 Geometry of the passive matrix BiNem1 cell. On the right-hand side, the direction of anchoring on both master and slave plates for the U and T textures.
The Alt and Pleshko[23] iron law of multiplexing does not hold for BiNem1 displays because they are not rms-responding (unlike STN-LCDs, which are limited by it to a maximum practical resolution of about
VGA). Passive matrix BiNem1 displays can achieve very high multiplexing ratios (>1500 rows have been demonstrated) with the same excellent optical performance. BiNem1 addressing schemes are typically divided into two parts: (1) a frame blanking, that settles the image to an initial T state; (2) a one row-at-a-time driving scheme, that updates the image content row by row. At the row level (and pixel level), addressing waveforms are divided into three phases: (1) anchoring breaking phase; (2) texture selection phase; (3) relaxation phase.
17.7.1 Switching Thresholds
BiNem1 displays usually have a slightly tilted anchoring and are characterized by the existence of two different thresholds: the 'erasing' threshold EU, close to Ec, for the transition T-to-U, and the 'writing' threshold ET, for the transition U-to-T (i.e. 0 < ET < EU).

17.7.2 Frame Blanking
The blanking signal is a pulse of amplitude VBlank > VT (Figure 17.10 and Figure 17.28) applied simultaneously to all rows of the display, then turned off instantaneously. Blanking signals greatly simplify addressing schemes. Without them, all four possible texture transitions U->T, U->U, T->T, T->U would have to be implemented, and some of them are more difficult than others to achieve with simple waveforms and high refreshing rates.
17.7.3 Anchoring Breaking Phase The anchoring breaking signal is a pulse of amplitude V1 > VT and duration t1 applied on the selected row electrode (Figure 17.25).
Figure 17.25 Possible two-step signals for switching to the final T-texture: (left) the first voltage drop is above a T-switching threshold; (right) the second voltage drop is above VT.
It reorients the texture close to homeotropic (Figure 17.14d) in a characteristic time[1] τ1 ≈ γ1/(Δε·E1²) ≈ 2 µs, where γ1 ≈ 0.1 Pa·s is the rotational viscosity coefficient of the liquid crystal mixture, Δε ≈ 30·ε0 is the dielectric anisotropy, E1 = V1/d, and d is the LC layer thickness. At the falling edge of the anchoring breaking signal, at t = t1 ≫ τ1, the anchoring has been broken and the initial T-texture has been transformed into a quasi-homeotropic one.
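The following small Python sketch checks the order of magnitude of τ1; the breaking amplitude V1 = 20 V is an assumption (not given explicitly in the text), while the other values are the ones quoted above.

# Sketch: order-of-magnitude check of the reorientation time during the
# anchoring breaking pulse, tau1 ~ gamma1 / (delta_eps * E1^2), with E1 = V1/d.
eps0      = 8.854e-12
gamma1    = 0.1              # rotational viscosity (Pa s)
delta_eps = 30 * eps0        # dielectric anisotropy (F/m)
d         = 1.5e-6           # cell gap (m)
V1        = 20.0             # assumed anchoring breaking amplitude (V)

E1   = V1 / d                        # field during the breaking pulse (V/m)
tau1 = gamma1 / (delta_eps * E1**2)  # characteristic reorientation time (s)
print(f"tau1 = {tau1*1e6:.1f} microseconds")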
17.7.4 Texture Selection Phase A texture selection pulse of amplitude V2 and duration t2 immediately follows the anchoring breaking pulse. There are two ways to induce a T-texture with this 2-step driving voltage; the V2 voltage has to satisfy one of the following conditions:
the first voltage drop VD1 is larger than a specific T-threshold voltage; or
the second voltage drop VD2 is larger than the 'writing' threshold VT, with VD1 = V1 − V2 and VD2 = V2. Figure 17.25 illustrates these two methods. Figure 17.26 shows the final texture versus the V2 voltage (the initial state of the pixel is U). The pixel voltage is the difference between the row voltage and the column voltage. We can expect that the selected texture switches from T-to-U and U-to-T for small variations of the column voltage if the optical response versus the V2 voltage in Figure 17.26 is step-like.
Figure 17.26 Final texture of a pixel driven with the two-step voltage defined in Figure 17.25. The initial texture has been settled to U. For V2 below V2-min = 6 V, the voltage drop VD1 exceeds a T-switching threshold and induces the T-texture. For V2 larger than V2-max = 15 V, the voltage drop VD2 exceeds the writing threshold and induces the T-texture. For V2 between 6 V and 15 V, none of these thresholds is reached and the pixel remains in the U-state.
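The selection logic can be summarized in a few lines of Python; this is a simplification that ignores timing and blanking and simply applies the two illustrative thresholds of Figure 17.26 to the effective pixel voltage.

# Sketch of the texture selection logic of the two-step driving voltage.
V2_MIN = 6.0    # below this, the drop VD1 = V1 - V2 is large enough to write T
V2_MAX = 15.0   # above this, the drop VD2 = V2 exceeds VT and writes T

def final_texture(v_row_2, v_col=0.0):
    # The pixel sees the difference between the row and column voltages.
    v_pixel = v_row_2 - v_col
    if v_pixel <= V2_MIN or v_pixel >= V2_MAX:
        return "T"
    return "U"

print(final_texture(6.0, v_col=0.0))    # T: zero column voltage keeps VD1 large enough
print(final_texture(6.0, v_col=-2.0))   # U: a small negative column voltage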
Figure 17.27 shows line and column driving waveforms that can induce either a T or a U-texture depending on the column voltage. When the column voltage is zero, the selected texture is T. When the column voltage is a small negative voltage, the selected texture is U. The column amplitude is usually |Vc| < 3 V. A usual choice for the row driving voltage is V2 ≈ V2-min.
Figure 17.27 Final texture versus the pixel voltage for a fixed V2 row voltage chosen equal to V2-min in Figure 17.26 (top). When the column voltage is zero, the selected texture is T (lower). When the column voltage is a small negative voltage, the selected texture is U. We suppose that t2 = tc.
When the one row-at-a-time driving scheme is in progress, the column electrodes see driving pulses of amplitude Vc and width tc. The column root-mean-square voltage is Ṽcol ≅ √(nU·tc/Tf)·Vc, where nU is the number of pixels in the U-texture in the given column and Tf the frame duration (we suppose we can take the integration time equal to Tf). The root-mean-square voltage therefore depends on the content of the displayed image. There are different approaches to lower this rms image dependence and possible flicker effects. For example, the column pulses can be made symmetrical (positive and negative pulses of the same amplitude to select T and U), the frame duration can be increased, or equalization voltages can be applied.
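A short Python sketch of this image dependence is given below; the panel size, pulse width and amplitude are assumed, illustrative values rather than figures from the text.

# Sketch: column rms voltage over a frame, V_rms ~ sqrt(nU * tc / Tf) * Vc,
# showing how it depends on the number of U pixels written in the column.
import math

Vc = 3.0       # column pulse amplitude (V)
tc = 400e-6    # column pulse width, taken equal to t2 (s)
N  = 240       # assumed number of rows
Tf = N * tc    # frame duration, blanking neglected for simplicity (s)

for nU in (0, 60, 120, 240):   # pixels in the U texture in this column
    v_rms = math.sqrt(nU * tc / Tf) * Vc
    print(f"nU = {nU:3d} -> column rms voltage = {v_rms:.2f} V")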
17.7.5 Final Phase and Multiplexing Scheme Figure 17.28 describes an example of a simple black and white BiNem1 addressing scheme. There are several differences compared to Figure 17.27:
Figure 17.28 Black and white BiNem1 multiplexing scheme. In this example, the pixel on row Rn and column Cj (respectively Cj+1) switches toward the T-texture (respectively U-texture).
Bipolar row pulses minimize the charge migration between the upper and lower ITO electrodes.
Slopes introduced at the blanking period (row waveforms) are aimed at lowering the transient peak current.
The polarities of V1, V2 and Vc have been reversed.
A small texture relaxation delay (t3 ≈ 50 µs) is introduced between two consecutive rows, but since it is repeated N times during an image refresh, it cannot be neglected in the estimation of the frame duration Tf:
Tf = tblank + N·(t1 + t2 + t3)    (10)
We have assumed that t2 = tc and that the falling end of the selection signal and the column signal are synchronized (see Figure 17.28). Many important details in the addressing schemes are related to the hydrodynamic coupling between the master and slave plates and to the LC flow relaxation. In the approach we have followed, the only LC flow considered is the one caused at the master plate level by the field switch-off. Advanced addressing schemes can be quite different from the basic ones we have described. The row addressing time Tr = t1 + t2 + t3 depends on the temperature among other parameters, such as the BiNem1 liquid crystal mixture and anchoring layer (see Figure 17.29). At room temperature, a row selection time Tr ≈ 400 µs is routinely achieved for black and white operation without grayscale.
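As a timing illustration of equation (10), the Python sketch below computes the frame duration for a few panel heights; the blanking duration and the split of the 400 µs row time into t1, t2 and t3 are assumptions, not values quoted in the text.

# Sketch: frame duration from equation (10), Tf = t_blank + N*(t1 + t2 + t3).
t_blank = 20e-3                       # assumed blanking duration (s)
t1, t2, t3 = 250e-6, 100e-6, 50e-6    # assumed split of a ~400 us row time (s)

def frame_duration(n_rows):
    return t_blank + n_rows * (t1 + t2 + t3)

for n in (240, 320, 600):
    print(f"{n} rows: Tf = {frame_duration(n)*1e3:.0f} ms")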
Figure 17.29 Row selection time Tr versus temperature for several LC mixtures (DNS-386, DNS-639, DNS-701, N856) on different polymer anchoring layers (BP16, BP162), for black and white driving.
17.7.6 Partial Refreshing Figure 17.30 shows an example of partial refreshing in a 320-row BiNem1 display. A block of consecutive rows can be updated without any image content change outside this block. The driving scheme of Figure 17.28 has been modified to implement partial image refreshing. In this example, a small window is opened in the middle of the panel [Figure 17.30 (c) and (d)]. It is displayed and replaces partially the previously stored background image without any discernible connection perturbation. Partial refreshing makes possible fast image update when only small parts of the image are modified. This important feature improves real time interaction at the system level, for example to enter or
Figure 17.30 Partial refreshing makes possible local image changes in selected blocks of rows without affecting the image previously stored outside the block that is partially refreshed.
acknowledge user commands, or to move a pointer. The interaction can be completed in the time it takes to refresh the small block of rows where the windows or the pointer is located, without having to wait for refreshing the whole display. For example, refreshing 20 physical rows in a BiNem1 QVGA display takes about 20 ms at ambient temperature, while refreshing the whole panel requires 240 ms, assuming an addressing time of 1 ms per row. Partial refreshing is important for applications requiring high resolution passive matrix displays with real time user visual interaction.
17.7.7 Implementation of the Driving Schemes 17.7.7.1 Interface with System Host Nemoptic has developed a specific electronics interface to facilitate integration of BiNem1 display modules into products. The additional electronics circuitry frees the system host from low level BiNem1 display task management, such as timing generation or temperature compensation. A communication protocol convenient to use with most system platforms is implemented to handle display data transfer and display commands.
Figure 17.31 Structure of a BiNem1 display module with its dedicated BVGP interface.
17.7.7.2 Black and White BiNem1 Displays The driving waveforms in Figure 17.28 require the simultaneous application of two different row voltages (three, if positive and negative signals are counted separately). The same applies for the column waveforms. The maximum row and column LCD voltages are respectively about 30 V (for VBlank) and 5 V (for Vcol) over the operating temperature range. These requirements are compatible with STN drivers. The implementation of the B&W driving scheme uses conventional STN drivers for both row and column electrodes.
17.7.7.3 Grayscale BiNem1 Displays
Grayscale 'curtain effect' driving schemes are derived from the black and white ones, with most changes being in the way columns are driven. Instead of selecting one of the two column voltages, one (Vm) to switch to the U texture and the other one (Vm + Vc) to switch to the T texture, the column voltage can be continuously modulated between Vm and Vm + Vc. For example, with a 6-bit source TFT driver (64 voltage levels), up to 32 distinct 'curtain effect' gray levels can be displayed (see Figure 17.17) with good gamma correction. The standard implementation of grayscale requires source TFT-LCD drivers as column drivers, and STN drivers as row drivers. No modification of the BiNem1 display's manufacturing process is required.
17.7.8 Power Consumption
Achieving a long battery life is a key requirement for mobile and ultra-mobile devices. However, weight and thickness are usually severely limited by the nomadic applications addressed. For example, e-books should be thin and light. For displays larger than a few inches (>3 inches), bistability provides an additional advantage. The power consumption of the BiNem1 display is proportional to the number of images displayed, while the power consumption of conventional displays such as TFT-LCDs or STN-LCDs is independent of image changes and depends only on the amount of time the display is turned on. This different behavior arises because BiNem1 displays are updated, and require power, only when the image content changes, while non-bistable displays have to be refreshed at a constant rate (typically 60 to 80 Hz), regardless of image change (Figure 17.32).
Figure 17.32 Relative power consumption of a BiNem1 reflective display compared with a STN reflective display of the same area (100% reference) versus the number of image updates per day.
When the image update rate is low, the power consumption of BiNem1 displays is much lower than that of conventional LCDs. Attaining a very low power consumption is especially important for high resolution, medium to large size displays such as those required for e-documents (A4 size displays), e-newspapers and e-books. These applications require large size and high resolution because the amount of information that is found in printed documents, books or newspapers cannot fit into tiny
screens while giving users the same pleasant reading experience. Alternatively, lower power consumption can be turned into a battery budget reduction, or into thinner and lighter devices. The power consumption to update an image, EUpdate, can be determined from the driving scheme in Figure 17.28. Near the anchoring breaking voltage, a BiNem1 cell is, to a first approximation, electrically equivalent to a capacitance C ≈ ε0·εr·S/d, with εr the parallel dielectric constant of the liquid crystal mixture (≈ 35), d the cell gap (≈ 1.5 µm), S the cell area and ε0 the permittivity of free space. The energy needed to apply a rectangular driving pulse of peak voltage V across C is C·V² (we assume no charge recycling). For example, the power consumption to blank the display is EBlank ≈ 3·C·VBlank² (the factor 3 appears because we use bipolar blanking waveforms of peak voltage VBlank, see Figure 17.28). Discarding some small cross-terms and taking the T switching as the worst case:
EImage ≈ 3·C·VBlank² + 3·C·V1² + N·C·Vc²    (17)
where N is the number of rows of the display and VBlank, V1 and Vc are the driving voltages (we assume for simplification V2 = V1). The three terms in (17) are respectively the energy consumption for display blanking, row scanning and data (column) driving. The last term is dominant for high resolution displays. Numerical evaluation of (17) results in about 0.5 mJ per cm² per update (BiNem1 cell only, without drivers, medium resolution). The power consumption at the display module level is typically several times higher because of the power losses in the LCD drivers, on-chip voltage boosters, timing generation and other electronic circuitry. As a concrete example, assume for a reflective TFT-LCD module a power consumption of about 0.1 mW per cm² of active area (about 8 joules per day per cm²), amounting to 800 J per day for a module with 100 cm² active area. This display would completely deplete an AAA battery (≈ 4300 joules) in less than six days, whereas a BiNem1 module with the same active area updated 50 times per day (100 mJ per update) would achieve a battery life greater than two years. When partial refreshing is enabled, the power consumption in (17) is reduced in proportion to P/N, where P is the number of consecutive rows to be refreshed.
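A rough Python evaluation of equation (17) and of the battery-life arithmetic is sketched below; the voltages, row count and the 100 mJ module-level figure are taken as representative assumptions, so the cell-only result is only of the same order as the value quoted in the text.

# Sketch: energy per image update from equation (17), and battery-life estimate.
eps0, eps_r = 8.854e-12, 35.0
d, area = 1.5e-6, 100e-4           # cell gap (m) and active area (m^2), i.e. 100 cm^2
C = eps0 * eps_r * area / d        # panel capacitance (F)

N = 600                            # assumed number of rows
V_blank, V1, Vc = 30.0, 30.0, 3.0  # assumed driving voltages (V)

E_image = 3*C*V_blank**2 + 3*C*V1**2 + N*C*Vc**2   # cell-only energy per update (J)
print(f"cell-only energy per update: {E_image*1e3:.1f} mJ")

# Battery-life arithmetic quoted in the text: ~100 mJ per update at module level,
# 50 updates per day, AAA battery of roughly 4300 J.
E_module, updates_per_day, battery = 0.100, 50, 4300.0
print(f"battery life: {battery / (E_module * updates_per_day) / 365:.1f} years")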
17.8 Performance of BiNem1 Displays
17.8.1 Optical Performance of Monochrome Reflective BiNem1 Displays
17.8.1.1 Contrast Ratio
The measured intrinsic contrast ratio (CR) of reflective BiNem1 displays (ratio of the bright state (T) luminance to the dark state (U) luminance) is typically >15:1 (Figure 17.33). In practice, the contrast is limited by the anti-reflection characteristics of the optical layers and coatings used to manufacture the display (for example, an anti-reflective front polarizer achieving 1% photopic reflection would limit the CR below 35 in any case); this is why the measured contrast ratios are below the values predicted by the simulations in Figure 17.6. The macroscopic contrast ratio measured over an area containing many pixels is usually lower than the intrinsic contrast ratio because the optical state of the area between the pixels (defined by the intersection of row and column ITO electrodes) has to be taken into account. The minimal space between ITO electrodes is typically 10 to 20 µm to satisfy the ITO photolithographic design rules of a passive matrix. As a consequence, the contrast ratio depends on the pixel resolution, because the spacing between pixels is filled with the bright T texture.
Figure 17.33 Left: contrast ratio versus dot-per-inch resolution for reflective BiNem® displays. The geometric model assumes either 16 μm or 10 μm spacing between electrodes; experimental CR values are measured under diffuse illumination on various BiNem® samples with pixel densities ranging from 72 dpi to 200 dpi. Right: magnified view of pixels in a reflective BiNem® display. The bright T texture fills the space between pixels in order to optimize brightness.
A simple geometric model gives the dependence of the CR on the pixel density (Figure 17.33, left). The resulting CR depends on the intrinsic CR and on the optical aperture ratio (the ratio of the active area to the total area, including the space between pixels). Reducing the isolation spacing between ITO electrodes from 16 μm to 10 μm significantly increases the aperture ratio, which translates directly into an improved contrast ratio (Figure 17.33). The CR ranges from 14:1 down to 6.5:1 over the resolution range considered. The geometric model assumes that the T texture fills the space between pixels.
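The sketch below shows one plausible way to code such a geometric model. It assumes square pixels, an inter-pixel gap along both pixel dimensions that always shows the bright T texture, and an intrinsic contrast ratio supplied as a free parameter (20:1 in the printout, purely as a placeholder), so it illustrates the trend rather than reproducing the exact curves of Figure 17.33.

```python
# Minimal sketch of a geometric contrast-ratio model (assumptions: square pixels,
# gap along both pixel dimensions, gap area always in the bright T texture).

def aperture_ratio(dpi, gap_um):
    """Fraction of the pixel pitch area actually covered by the addressed pixel."""
    pitch_um = 25.4e3 / dpi                    # pixel pitch in micrometres
    return ((pitch_um - gap_um) / pitch_um) ** 2

def macroscopic_cr(dpi, gap_um, cr_intrinsic):
    """White stays white everywhere; black is diluted by the bright inter-pixel area."""
    a = aperture_ratio(dpi, gap_um)
    return 1.0 / (a / cr_intrinsic + (1.0 - a))

if __name__ == "__main__":
    for gap_um in (16.0, 10.0):                # the two ITO spacings quoted above
        for dpi in (72, 100, 150, 200):
            cr = macroscopic_cr(dpi, gap_um, cr_intrinsic=20.0)   # 20:1 is a placeholder
            print(f"gap {gap_um:4.1f} um, {dpi:3d} dpi: CR ~ {cr:4.1f}:1")
```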
17.8.1.2 White Optical State

A nearly achromatic white state is achieved by BiNem® displays, as shown in Figure 17.34(a). The color of the bright optical state is nearly indistinguishable from ideal white: its color coordinates in the CIE 1976 colorimetric system lie inside the iso-perception area centered on the white point. The color shift when the viewing direction changes from +15° to +50° is very small (Figure 17.34(b)).
Figure 17.34 (a) The experimentally determined hue of the bright state of a BiNem® display in the CIE 1976 (u', v') colorimetric system lies inside the iso-perception circle centered at the ideal white point; (b) comparison of the color shift measured for various display technologies when the viewing direction changes from +15° to +50°.
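The color shift plotted in Figure 17.34(b) is commonly quantified as the Euclidean distance Δu'v' between the chromaticity coordinates measured at the two viewing angles; a minimal sketch with made-up example coordinates is given below.

```python
# Minimal sketch: viewing-angle color shift as the CIE 1976 (u', v') distance.
# The coordinate pairs below are made-up examples, not measured values.
from math import hypot

def delta_uv_prime(uv_a, uv_b):
    """Delta u'v' = sqrt((u'_a - u'_b)^2 + (v'_a - v'_b)^2)."""
    return hypot(uv_a[0] - uv_b[0], uv_a[1] - uv_b[1])

white_at_15deg = (0.197, 0.468)   # hypothetical (u', v') at +15 degrees
white_at_50deg = (0.201, 0.472)   # hypothetical (u', v') at +50 degrees
print(f"delta u'v' = {delta_uv_prime(white_at_15deg, white_at_50deg):.4f}")
```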
17.8.1.3 Reflectance (Brightness)

The typical white reflectance of BiNem® displays is 32–33% at normal incidence, in agreement with simulations (see Figure 17.7). The reflectance is calculated as the ratio of the bright state (T) luminance to the luminance of a Lambertian diffuser, measured under diffuse illumination (specular reflectance excluded). The white reflectance is nearly unaffected by the pixel resolution when the T texture fills the space between pixels (see Figure 17.33, right).
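Written out, this figure of merit is simply the ratio of two luminance readings taken under the same diffuse illumination with the specular component excluded; the luminance values in the short sketch below are purely illustrative.

```python
# Minimal sketch: diffuse reflectance against a Lambertian reference (illustrative numbers).
def diffuse_reflectance_percent(l_display, l_lambertian):
    """Reflectance (%) = 100 * L(bright T state) / L(Lambertian diffuser)."""
    return 100.0 * l_display / l_lambertian

print(f"{diffuse_reflectance_percent(82.0, 250.0):.1f} %")   # -> 32.8 %
```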
17.8.1.4 Pixel Density

A resolution of 200 dots per inch is easily achieved by BiNem® displays. The maximum resolution is limited only by the passive-matrix ITO patterning rules. At 200 dpi, the pixel pitch is 127 μm and the space between pixels is typically 10 μm or 16 μm. Because of the limited angular resolution of the eye, increasing the pixel resolution above 200 dpi is of little interest for black-and-white direct-view displays. However, >200 dpi can be useful for achieving high-resolution (>100 ppi) color displays.
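The pitch figure quoted above follows directly from the definition of dots per inch (1 inch = 25.4 mm), as the short check below shows.

```python
# Minimal sketch: pixel pitch from the dot-per-inch resolution.
def pixel_pitch_um(dpi):
    return 25.4e3 / dpi          # 1 inch = 25.4 mm = 25 400 um

for dpi in (100, 200):
    print(f"{dpi} dpi -> pitch {pixel_pitch_um(dpi):.0f} um")
```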
17.8.1.5 Addressing Time

In passive-matrix BiNem® displays, a one-row-at-a-time addressing scheme is used, and the frame refresh time depends directly on the number of rows of the display (Table 17.4). The row addressing time T_L depends on the temperature and on other parameters such as the BiNem® liquid crystal mixture and the anchoring layer. Nemoptic is continuously improving the switching speed of BiNem® electronic paper, for example by developing new liquid crystal mixtures with lower viscosity and advanced driving methods. A short illustrative sketch of the refresh-time scaling follows Table 17.4.
Table 17.4 Refresh time of reflective black-and-white BiNem® displays.

                          QVGA        SVGA
Display time (25 °C)      0.15 s      0.38 s
Display time (10 °C)      0.43 s
Row voltage
Column voltage
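Since the panel is written one row at a time, the refresh time is essentially the number of rows multiplied by the row addressing time T_L. The sketch below illustrates this scaling; the row counts and T_L values are assumptions chosen so that the output roughly matches the figures in Table 17.4, not datasheet values.

```python
# Minimal sketch: frame refresh time of a line-at-a-time passive-matrix drive.
# Row counts and row addressing times are illustrative assumptions.

def frame_time_s(n_rows, row_time_ms):
    """Refresh time = number of rows * row addressing time T_L."""
    return n_rows * row_time_ms * 1e-3

ROWS = {"QVGA": 240, "SVGA": 600}            # assuming the shorter dimension is driven as rows
T_L_MS = {"25 C": 0.63, "10 C": 1.8}         # assumed row addressing times, ms

for fmt, n in ROWS.items():
    for temp, t_l in T_L_MS.items():
        print(f"{fmt} at {temp}: ~{frame_time_s(n, t_l):.2f} s")
```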