IS Management Handbook, 8th Edition
OTHER AUERBACH PUBLICATIONS

The ABCs of IP Addressing, Gilbert Held, ISBN: 0-8493-1144-6
The ABCs of TCP/IP, Gilbert Held, ISBN: 0-8493-1463-1
Building an Information Security Awareness Program, Mark B. Desman, ISBN: 0-8493-0116-5
The Complete Book of Middleware, Judith Myerson, ISBN: 0-8493-1272-8
Computer Telephony Integration, 2nd Edition, William A. Yarberry, Jr., ISBN: 0-8493-1438-0
Global Information Warfare: How Businesses, Governments, and Others Achieve Objectives and Attain Competitive Advantages, Andy Jones, Gerald L. Kovacich, and Perry G. Luzwick, ISBN: 0-8493-1114-4
Information Security Architecture, Jan Killmeyer Tudor, ISBN: 0-8493-9988-2
Information Security Management Handbook, 4th Edition, Volume 1, Harold F. Tipton and Micki Krause, Editors, ISBN: 0-8493-9829-0
Information Security Management Handbook, 4th Edition, Volume 2, Harold F. Tipton and Micki Krause, Editors, ISBN: 0-8493-0800-3
Information Security Management Handbook, 4th Edition, Volume 3, Harold F. Tipton and Micki Krause, Editors, ISBN: 0-8493-1127-6
Information Security Management Handbook, 4th Edition, Volume 4, Harold F. Tipton and Micki Krause, Editors, ISBN: 0-8493-1518-2
Information Security Policies, Procedures, and Standards: Guidelines for Effective Information Security Management, Thomas R. Peltier, ISBN: 0-8493-1137-3
Information Security Risk Analysis, Thomas R. Peltier, ISBN: 0-8493-0880-1
A Practical Guide to Security Engineering and Information Assurance, Debra Herrmann, ISBN: 0-8493-1163-2
The Privacy Papers: Managing Technology and Consumers, Employee, and Legislative Action, Rebecca Herold, ISBN: 0-8493-1248-5
Securing and Controlling Cisco Routers, Peter T. Davis, ISBN: 0-8493-1290-6
Securing E-Business Applications and Communications, Jonathan S. Held and John R. Bowers, ISBN: 0-8493-0963-8
Securing Windows NT/2000: From Policies to Firewalls, Michael A. Simonyi, ISBN: 0-8493-1261-2
Six Sigma Software Development, Christine B. Tayntor, ISBN: 0-8493-1193-4
A Technical Guide to IPSec Virtual Private Networks, James S. Tiller, ISBN: 0-8493-0876-3
Telecommunications Cost Management, Brian DiMarsico, Thomas Phelps IV, and William A. Yarberry, Jr., ISBN: 0-8493-1101-2
AUERBACH PUBLICATIONS
www.auerbach-publications.com

To Order Call: 1-800-272-7737 • Fax: 1-800-374-3401
E-mail: [email protected]
IS Management Handbook, 8th Edition

Carol V. Brown
Heikki Topi
Editors
AUERBACH PUBLICATIONS A CRC Press Company Boca Raton London New York Washington, D.C.
This edition published in the Taylor & Francis e-Library, 2005. “To purchase your own copy of this or any of Taylor & Francis or Routledge’s collection of thousands of eBooks please go to www.eBookstore.tandf.co.uk.”
Library of Congress Cataloging-in-Publication Data

Information systems management handbook / editors, Carol V. Brown, Heikki Topi. — 8th ed.
p. cm.
ISBN 0-8493-1595-6
1. Information resources management — Handbooks, manuals, etc. I. Brown, Carol V. (Carol Vanderbilt), 1945– II. Topi, Heikki.
T58.64.I5338 2003
658.4′038—dc21
2003041798
This book contains information obtained from authentic and highly regarded sources. Reprinted material is quoted with permission, and sources are indicated. A wide variety of references are listed. Reasonable efforts have been made to publish reliable data and information, but the author and the publisher cannot assume responsibility for the validity of all materials or for the consequences of their use.

Neither this book nor any part may be reproduced or transmitted in any form or by any means, electronic or mechanical, including photocopying, microfilming, and recording, or by any information storage or retrieval system, without prior permission in writing from the publisher. All rights reserved.

Authorization to photocopy items for internal or personal use, or the personal or internal use of specific clients, may be granted by CRC Press LLC, provided that $1.50 per page photocopied is paid directly to Copyright Clearance Center, 222 Rosewood Drive, Danvers, MA 01923 USA. The fee code for users of the Transactional Reporting Service is ISBN 0-8493-1595-6/03/$0.00+$1.50. The fee is subject to change without notice. For organizations that have been granted a photocopy license by the CCC, a separate system of payment has been arranged.

The consent of CRC Press LLC does not extend to copying for general distribution, for promotion, for creating new works, or for resale. Specific permission must be obtained in writing from CRC Press LLC for such copying. Direct all inquiries to CRC Press LLC, 2000 N.W. Corporate Blvd., Boca Raton, Florida 33431.

Trademark Notice: Product or corporate names may be trademarks or registered trademarks, and are used only for identification and explanation, without intent to infringe.
Visit the CRC Press Web site at www.auerbach-publications.com © 2003 by CRC Press LLC Auerbach is an imprint of CRC Press LLC No claim to original U.S. Government works International Standard Book Number 0-8493-1595-6 Library of Congress Card Number 2003041798
ISBN 0-203-50427-5 Master e-book ISBN
ISBN 0-203-58711-1 (Adobe eReader Format)
Contributors

SANDRA D. ALLEN-SENFT, Corporate IS Audit Manager, Farmers Insurance, Alta Loma, California
BRIDGET ALLGOOD, Senior Lecturer, Information Systems, University College, North Hampton, England
BARTON S. BOLTON, Consultant, Lifetime Learning, Upton, Massachusetts
BIJOY BORDOLOI, Professor, School of Business, Southern Illinois University-Edwardsville, Edwardsville, Illinois
BRENT J. BOWMAN, Associate Professor, College of Business Administration, University of Nevada-Reno, Reno, Nevada
THOMAS J. BRAY, President and Principal Security Consultant, SecureImpact, Atlanta, Georgia
CAROL V. BROWN, Associate Professor, Kelley School of Business, Indiana University, Bloomington, Indiana
JANET BUTLER, Consultant, Rancho de Taos, New Mexico
DONALD R. CHAND, Professor, Bentley College, Waltham, Massachusetts
LEI-DA CHEN, Assistant Professor, College of Business Administration, Creighton University, Omaha, Nebraska
TIM CLARK, Senior Systems Engineer, Cylink Corporation, Santa Clara, California
HAMDAH DAVEY, Finance Manager, Tibbett Britten, United Kingdom
NICHOLAS ECONOMIDES, Professor, Stern School of Business, New York University, New York, New York
JOHN ERICKSON, Ph.D. Student, College of Business Administration, University of Nebraska-Lincoln, Lincoln, Nebraska
MARK N. FROLICK, Associate Professor, Fogelman College of Business, University of Memphis, Memphis, Tennessee
FREDERICK GALLEGOS, IS Audit Advisor and Faculty Member, College of Business Administration, California State Polytechnic University, Pomona, California
TIMOTHY GARCIA-JAY, Project Director, St. Mary’s Hospital, Reno, Nevada
JAMES E. GASKIN, Consultant, Mesquite, Texas
HAYWOOD M. GELMAN, Consulting Systems Engineer, Cisco Systems, Lexington, Massachusetts
ROBERT L. GLASS, President, Computing Trends, Bloomington, Indiana
FRITZ H. GRUPE, Professor, College of Business Administration, University of Nevada-Reno, Reno, Nevada
UMA G. GUPTA, Dean, College of Technology, University of Houston, Houston, Texas
GARY HACKBARTH, Assistant Professor, College of Business, Iowa State University, Ames, Iowa
LINDA G. HAYES, Chief Executive Officer, WorkSoft, Inc., Dallas, Texas
ROBERT L. HECKMAN, Assistant Professor, School of Information Studies, Syracuse University, Syracuse, New York
LUKE HOHMANN, Consultant, Luke Hohmann Consulting, Sunnyvale, California
RAY HOVING, Consultant, Ray Hoving and Associates, New Tripoli, Pennsylvania
ZHENYU HUANG, Ph.D. Student, Fogelman College of Business, University of Memphis, Memphis, Tennessee
CARL B. JACKSON, Vice President, Business Continuity Planning, QinetiQ Trusted Information Management Corporation, Worcester, Massachusetts
RON JEFFRIES, Consultant, Xprogramming.com
DIANA JOVIN, Market Development Manager, NetDynamics, Inc., Menlo Park, California
RICHARD M. KESNER, Director of Enterprise Operations, Northeastern University, Boston, Massachusetts
WILLIAM J. KETTINGER, Director, Center for Information Management and Technology Research, The Darla Moore School of Business, University of South Carolina, Columbia
WILLIAM R. KING, Professor, Katz Graduate School of Business, University of Pittsburgh, Pittsburgh, Pennsylvania
CHRISTOPHER KLAUS, Founder and Chief Technology Officer, Internet Security Systems, Atlanta, Georgia
RAVINDRA KROVI, Associate Professor, College of Business Administration, University of Akron, Akron, Ohio
WILLIAM KUECHLER, Assistant Professor, College of Business Administration, University of Nevada-Reno, Reno, Nevada
MIKE KWIATKOWSKI, Consultant, Dallas, Texas
RICHARD B. LANZA, Manager of Process, Business and Technology Integration Team, American Institute of Certified Public Accountants, Falls Church, Virginia
JOO-ENG LEE-PARTRIDGE, Associate Professor, National University of Singapore, Singapore
LISA M. LINDGREN, Consultant, Gilford, New Hampshire
LOWELL LINDSTROM, Vice President, Business Coach, Object Mentor, Vernon Hills, Illinois
ALDORA LOUW, Senior Associate, Global Risk Management Solutions Group, PricewaterhouseCoopers, Houston, Texas
JERRY LUFTMAN, Professor, Howe School of Technology Management, Stevens Institute of Technology, Hoboken, New Jersey
ANNE P. MASSEY, Professor, Kelley School of Business, Indiana University, Bloomington, Indiana
PETER MELL, Computer Security Division, National Institute of Standards and Technology, Gaithersburg, Maryland
N. DEAN MEYER, President, N. Dean Meyer Associates, Ridgefield, Connecticut
JOHN P. MURRAY, Consultant, Madison, Wisconsin
STEFAN M. NEIKES, Data Analyst, Tandy Corporation, Watuga, Texas
FRED NIEDERMAN, Associate Professor, School of Business and Administration, Saint Louis University, St. Louis, Missouri
STEVE NORMAN, Manager, Oracle Corporation; and Honorarium Instructor, University of Colorado at Colorado Springs, Colorado Springs, Colorado
POLLY PERRYMAN KUVER, Consultant, Boston, Massachusetts
MAHESH RAISINGHANI, Director of Research, Center for Applied Technology and Faculty Member, E-Commerce and Information Systems Department, Graduate School of Management, University of Dallas, Dallas, Texas
T.M. RAJKUMAR, Associate Professor, School of Business Administration, Miami University, Oxford, Ohio
V. RAMESH, Associate Professor, Kelley School of Business, Indiana University, Bloomington, Indiana
C. RANGANATHAN, Assistant Professor, College of Business Administration, University of Illinois-Chicago, Chicago, Illinois
VASANT RAVAL, Professor, College of Business Administration, Creighton University, Omaha, Nebraska
DREW ROBB, Freelance Writer and Consultant, Los Angeles, California
STUART ROBBINS, Founder and CEO, KMERA Corporation; and Executive Director, The CIO Collective, California
JOHN F. ROCKART, Senior Lecturer Emeritus, Sloan School of Management, Massachusetts Institute of Technology, Cambridge, Massachusetts
JEANNE W. ROSS, Principal Research Scientist, Center for Information Systems Research, Sloan School of Management, Massachusetts Institute of Technology, Cambridge, Massachusetts
HUGH W. RYAN, Partner, Accenture, Chicago, Illinois
SEAN SCANLON, E-Architect, FCG Doghouse, Huntington Beach, California
WILLIAM T. SCHIANO, Professor, Bentley College, Waltham, Massachusetts
S. YVONNE SCOTT, Assistant Director, Corporate Information Systems, GATX Corporation, Chicago, Illinois
ARIJIT SENGUPTA, Assistant Professor, Kelley School of Business, Indiana University, Bloomington, Indiana
NANCY SETTLE-MURPHY, President, Chrysalis International Inc., Boxborough, Massachusetts
NANCY C. SHAW, Assistant Professor, School of Management, George Mason University, Fairfax, Virginia
JAMES E. SHOWALTER, Consultant, Enterprise Computing, Automotive Industry Business Development, Sun Microsystems, Greenwood, Indiana
KENG SIAU, Associate Professor, College of Business Administration, University of Nebraska-Lincoln, Lincoln, Nebraska
JANICE C. SIPIOR, Associate Professor, College of Commerce and Finance, Villanova University, Villanova, Pennsylvania
SUMIT SIRCAR, Professor, Farmer School of Business Administration, Miami University, Miami, Ohio
SCOTT SWEENEY, Associate, CB Richard Ellis, Reno, Nevada
PETER TARASEWICH, Assistant Professor, Northeastern University, Boston, Massachusetts
HEIKKI TOPI, Associate Professor, Bentley College, Waltham, Massachusetts
JOHN VAN DEN HOVEN, Senior Technology Advisor, Noranda, Inc., Toronto, Ontario, Canada
ROBERT VANTOL, Senior E-Commerce Consultant, Web Front Communications, Toronto, Ontario, Canada
ROBERTO VINAJA, Assistant Professor, College of Business Administration, University of Texas Pan American, Edinburg, Texas
LES WAGUESPACK, Professor, Bentley College, Waltham, Massachusetts
BURKE T. WARD, Professor, College of Commerce and Finance, Villanova University, Villanova, Pennsylvania
MERRILL WARKENTIN, Associate Professor, College of Business and Industry, Mississippi State University, Mississippi State, Mississippi
JASON WEIR, Senior Researcher, HR.com, Aurora, Ontario, Canada
STEVEN M. WILLIFORD, President, Franklin Services Group, Inc., Pataskala, Ohio
SUSAN E. YAGER, Assistant Professor, Southern Illinois University-Edwardsville, Edwardsville, Illinois
WILLIAM A. YARBERRY, JR., Consultant and Technical Writer, Houston, Texas
ROBERT A. ZAWACKI, Professor Emeritus, University of Colorado and President, Zawacki and Associates, Boulder, Colorado
MICHAEL ZIMMER, Senior Coordinator, Ministry of Health Services and Ministry of Health Planning, Government of British Columbia, Victoria, British Columbia, Canada
Contents

SECTION 1  ACHIEVING STRATEGIC IT ALIGNMENT ..... 1

STRATEGIC IT CAPABILITIES
 1. Assessing IT–Business Alignment (Jerry Luftman) ..... 7
 2. IT Capabilities, Business Processes, and Impact on the Bottom Line (William R. King) ..... 21
 3. Facilitating Transformations in IT: Lessons Learned along the Journey (Steve Norman and Robert A. Zawacki) ..... 25
 4. Strategic Information Technology Planning and the Line Manager’s Role (Robert L. Heckman) ..... 37
 5. Running Information Services as a Business (Richard M. Kesner) ..... 47
 6. Managing the IT Procurement Process (Robert L. Heckman) ..... 73
 7. Performance Metrics for IT Human Resource Alignment (Carol V. Brown) ..... 89
 8. Is It Time for an IT Ethics Program? (Fritz H. Grupe, Timothy Garcia-Jay, and William Kuechler) ..... 101

IT LEADERSHIP ROLES
 9. The CIO Role in the Era of Dislocation (James E. Showalter) ..... 111
10. Leadership Development: The Role of the CIO (Barton S. Bolton) ..... 119
11. Designing a Process-Based IT Organization (Carol V. Brown and Jeanne W. Ross) ..... 125

SOURCING ALTERNATIVES
12. Preparing for the Outsourcing Challenge (N. Dean Meyer) ..... 135
13. Managing Information Systems Outsourcing (S. Yvonne Scott) ..... 145
14. Offshore Development: Building Relationships across International Boundaries (Hamdah Davey and Bridget Allgood) ..... 153
15. Application Service Providers (Mahesh Raisinghani and Mike Kwiatkowski) ..... 159

SECTION 2  DESIGNING AND OPERATING AN ENTERPRISE INFRASTRUCTURE ..... 169

MANAGING A DISTRIBUTED COMPUTING ENVIRONMENT
16. The New Enabling Role of the IT Infrastructure (Jeanne W. Ross and John F. Rockart) ..... 175
17. U.S. Telecommunications Today (Nicholas Economides) ..... 191
18. Information Everywhere (Peter Tarasewich and Merrill Warkentin) ..... 213

DEVELOPING AND MAINTAINING THE NETWORKING INFRASTRUCTURE
19. Designing and Provisioning an Enterprise Network (Haywood M. Gelman) ..... 223
20. The Promise of Mobile Internet: Personalized Services (Heikki Topi) ..... 241
21. Virtual Private Networks with Quality of Service (Tim Clark) ..... 257
22. Storage Area Networks Meet Enterprise Data Networks (Lisa M. Lindgren) ..... 269

DATA WAREHOUSING
23. Data Warehousing Concepts and Strategies (Bijoy Bordoloi, Stefan M. Neikes, Sumit Sircar, and Susan E. Yager) ..... 279
24. Data Marts: Plan Big, Build Small (John van den Hoven) ..... 301
25. Data Mining: Exploring the Corporate Asset (Jason Weir) ..... 307
26. Data Conversion Fundamentals (Michael Zimmer) ..... 315

QUALITY ASSURANCE AND CONTROL
27. Service Level Management Links IT to the Business (Janet Butler) ..... 331
28. Information Systems Audits: What’s in It for Executives? (Vasant Raval and Uma G. Gupta) ..... 341

SECURITY AND RISK MANAGEMENT
29. Cost-Effective IS Security via Dynamic Prevention and Protection (Christopher Klaus) ..... 349
30. Reengineering the Business Continuity Planning Process (Carl B. Jackson) ..... 361
31. Wireless Security: Here We Go Again (Aldora Louw and William A. Yarberry, Jr.) ..... 379
32. Understanding Intrusion Detection Systems (Peter Mell) ..... 389

SECTION 3  PROVIDING APPLICATION SOLUTIONS ..... 399

NEW TOOLS AND APPLICATIONS
33. Web Services: Extending Your Web (Robert VanTol) ..... 405
34. J2EE versus .NET: An Application Development Perspective (V. Ramesh and Arijit Sengupta) ..... 415
35. XML: Information Interchange (John van den Hoven) ..... 425
36. Software Agent Orientation: A New Paradigm (Roberto Vinaja and Sumit Sircar) ..... 435

SYSTEMS DEVELOPMENT APPROACHES
37. The Methodology Evolution: From None, to One-Size-Fits-All, to Eclectic (Robert L. Glass) ..... 457
38. Usability: Happier Users Mean Greater Profits (Luke Hohmann) ..... 465
39. UML: The Good, the Bad, and the Ugly (John Erickson and Keng Siau) ..... 483
40. Use Case Modeling (Donald R. Chand) ..... 499
41. Extreme Programming and Agile Software Development Methodologies (Lowell Lindstrom and Ron Jeffries) ..... 511
42. Component-Based IS Architecture (Les Waguespack and William T. Schiano) ..... 531

PROJECT MANAGEMENT
43. Does Your Project Risk Management System Do the Job? (Richard B. Lanza) ..... 545
44. Managing Development in the Era of Complex Systems (Hugh W. Ryan) ..... 555
45. Reducing IT Project Complexity (John P. Murray) ..... 561

SOFTWARE QUALITY ASSURANCE
46. Software Quality Assurance Activities (Polly Perryman Kuver) ..... 573
47. Six Myths about Managing Software Development (Linda G. Hayes) ..... 581
48. Ethical Responsibility for Software Development (Janice C. Sipior and Burke T. Ward) ..... 589

SECTION 4  LEVERAGING E-BUSINESS OPPORTUNITIES ..... 599

E-BUSINESS STRATEGY AND APPLICATIONS
49. Building an E-Business Strategy (Gary Hackbarth and William J. Kettinger) ..... 603
50. Surveying the E-Landscape: New Rules of Survival (Ravindra Krovi) ..... 625
51. E-Procurement: Business and Technical Issues (T.M. Rajkumar) ..... 637
52. Evaluating the Options for Business-to-Business E-Commerce (C. Ranganathan) ..... 651
53. The Role of Corporate Intranets (Diana Jovin) ..... 663
54. Integrating Web-Based Data into a Data Warehouse (Zhenyu Huang, Lei-da Chen, and Mark N. Frolick) ..... 671
55. At Your Service: .NET Redefines the Way Systems Interact (Drew Robb) ..... 691

SECURITY AND PRIVACY ISSUES
56. Dealing with Data Privacy Protection: An Issue for the 21st Century (Fritz H. Grupe, William Kuechler, and Scott Sweeney) ..... 697
57. A Strategic Response to the Broad Spectrum of Internet Abuse (Janice C. Sipior and Burke T. Ward) ..... 715
58. World Wide Web Application Security (Sean Scanlon) ..... 729

SECTION 5  FACILITATING KNOWLEDGE WORK ..... 749

PROVIDING SUPPORT AND CONTROLS
59. Improving Satisfaction with End-User Support (Nancy C. Shaw, Fred Niederman, and Joo-Eng Lee-Partridge) ..... 753
60. Internet Acceptable Usage Policies (James E. Gaskin) ..... 761
61. Managing Risks in User Computing (Sandra D. Allen-Senft and Frederick Gallegos) ..... 771
62. Reviewing User-Developed Applications (Steven M. Williford) ..... 781
63. Security Actions during Reduction in Workforce Efforts: What to Do When Downsizing (Thomas J. Bray) ..... 799

SUPPORTING REMOTE WORKERS
64. Supporting Telework: Obstacles and Solutions (Heikki Topi) ..... 807
65. Virtual Teams: The Cross-Cultural Dimension (Anne P. Massey and V. Ramesh) ..... 819
66. When Meeting Face-to-Face Is Not the Best Option (Nancy Settle-Murphy) ..... 827

KNOWLEDGE MANAGEMENT
67. Sustainable Knowledge: Success in an Information Economy (Stuart Robbins) ..... 835
68. Knowledge Management: Coming up the Learning Curve (Ray Hoving) ..... 843
69. Building Knowledge Management Systems (Brent J. Bowman) ..... 857
70. Preparing for Knowledge Management: Process Mapping (Richard M. Kesner) ..... 873

INDEX ..... 891
Introduction

The first few years of the new millennium have been a challenging time for the information technology (IT) manager. The initial economic euphoria that greeted the successful completion of Y2K projects worldwide was quickly followed by a dramatic shakeout within U.S.-based industries most closely tied to the growth of the Internet. Today, organizations are striving to find innovative ways to leverage in-place IT solutions to improve efficiency and effectiveness in a harsher economic climate. At the same time, technologies that hold the promise of globally ubiquitous access to distributed applications continue to be strong drivers of new business solutions and organizational change.

In this competitive environment, it is increasingly important for IT managers to be able to closely align IT investments with the organization’s strategic goals. Both a high-quality IT infrastructure and high-quality IT services are critical for any modern organization to compete. Yet it is also essential for IT leaders to continue to assess new technologies and understand the fundamental issues of how best to integrate modern information technologies — including packaged enterprise systems, Web services, wireless access technologies, peer-to-peer computing, and voice, video, and data communication technologies — to transform the ways that organizations compete and individuals work across geographically dispersed locations.

The 70 chapters in this 8th edition of the IS Management Handbook have been selected with the objective of helping our target audience, the practicing IT manager, successfully navigate today’s challenging environment. Guidelines, frameworks, checklists, and other tools are provided for a range of critical IT management topics.
In addition to providing readings for our target audience of senior IT leaders, other members of the IT management team, and those consulting on IT management issues, we encourage potential adopters of this Handbook to use it as a resource for IT professional development forums and more traditional academic curricula. The five section themes we have selected for this Handbook are briefly introduced below.
Section 1: Achieving Strategic IT Alignment

Achieving strategic alignment between the IT organization and the business has been a top IT management issue for more than a decade. Achieving alignment should be viewed as a goal to continually aspire to, rather than an end state. Today, the IT investments to be aligned include not only systems investments, but also investments in IT people and IT processes. The three major topics selected for this section are strategic IT capabilities, IT leadership roles, and sourcing alternatives.

Section 2: Designing and Operating an Enterprise Infrastructure

A reliable and robust IT infrastructure is a critical asset for virtually all organizations. Decisions as to the design, implementation, and ongoing management of the IT infrastructure directly affect the success and viability of modern organizations. Yet, IT infrastructure issues have also become more complex: distributed technologies in general, and the Internet in particular, have blurred the boundaries between organizational systems and the systems of business partners. The five topics covered in this section include managing a distributed computing environment, developing and maintaining the networking infrastructure, data warehousing, quality assurance and control, and security and risk management.

Section 3: Providing Application Solutions

The development of Web-based, globally distributed application solutions requires skills and methods that are significantly different from those that were required of IT professionals in earlier eras. As client environments have become very diverse and users have come to expect ubiquitous and uninterrupted application availability, the integration between various systems components has become increasingly important. At the same time, security requirements have become increasingly stringent.
The four topics covered in this section are new tools and applications, systems development approaches, project management, and software quality assurance.

Section 4: Leveraging E-Business Opportunities

The dramatic shakeout after the E-commerce boom of the late 1990s has not reduced the importance of the Internet, but may have reduced its speed of growth. For established traditional businesses, supporting E-business has become a vitally important new responsibility as organizations have pursued initiatives that include a mixture of online and offline approaches. E-business technologies have become an integral part of most organizations’ development portfolios and operational environments. The two topics covered in this section are E-business strategy and applications, and security and privacy issues.

Section 5: Facilitating Knowledge Work

Facilitating knowledge work continues to be a critical IS management role. Today’s typical knowledge worker is computer-savvy, has little tolerance for downtime, and is an increasingly demanding Web user. Today’s technologies also enable working remotely — as a telecommuter or a member of a virtual team. Work teams are also beginning to demand new technologies to support communications and collaboration across geographical boundaries. The three topics covered in this section are providing support and controls, supporting remote workers, and knowledge management.

How to Use This Handbook

The objective of this Handbook is to be a resource for practicing IT managers responsible for managing and guiding the planning and use of information technology within organizations. The chapters provide practical management tools and “food for thought” based on the management insights of more than 85 authors, who include former CIOs at Fortune 500 companies now in consulting, other practicing IT managers, consultants, and academics who focus on practice-oriented research and best practices. To help our readers find the sections and information nuggets most useful to them, the chapters in this Handbook have been organized under 17 topics that fit into the five section themes introduced above. For those of you interested in browsing readings in a specific IT management area, we suggest becoming familiar with our table of contents first and then beginning your readings with the relevant section introduction at the beginning of each new section.
For those interested in gleaning knowledge about a narrower topical area, we recommend a perusal of our alphabetical index at the end of the Handbook.
Acknowledgments

It has been a privilege for us to work together again as editors of the IS Management Handbook. We believe that our prior experiences working in the IT field, as educators seeking to respond to the curriculum needs of IT leaders, and as researchers tracking industry trends and best practices, position us well for this editorial challenge.

We want to extend our sincere thanks to the authors, who promptly responded to our requests for new material and worked together with us to develop new intellectual content that is relevant and timely. Each chapter has been reviewed multiple times in pursuit of currency, accuracy, consistency, and presentation clarity. We hope that all the authors are pleased with the final versions of their contributions to this Handbook.

We also wish to offer special thanks to our publisher, Richard O’Hanley, for his insightful direction, and to our production manager at CRC Press, Claire Miller, for her friendly communications and expertise. We are also grateful to our own institutions, Indiana University and Bentley College, for recognizing the importance of faculty endeavors that bridge the academic and practitioner communities. Finally, we appreciate the continued support of our family members, without whose understanding this project could not have come to fruition.

As we complete our editorial work for this Handbook, we can only marvel at the speed with which economic, technological, and professional fortunes have risen, and sometimes fallen, within the past decade. We encourage our readers to continue to invest in the professional development of their staffs and themselves, especially during the down cycles, in order to be even better positioned for a future in which IT innovation will continue to be an enabler, and catalyst, for business growth and change.

CAROL V. BROWN
Indiana University
[email protected]

HEIKKI TOPI
Bentley College
[email protected]
Section 1
Achieving Strategic IT Alignment
Achieving strategic alignment between the IS organization and the business has been a top IS management issue for more than a decade. In the past, achieving strategic IT alignment was expected to result primarily from a periodic IT planning process. Today, however, the emphasis is on a continuous assessment of the alignment of IT investments in not only systems, but also IT people and IT processes. Given the high rate of change in today’s hyper-competitive environments, achieving strategic IT alignment needs to also be viewed as a goal to continually aspire to, not necessarily an end state.

The 15 chapters in this first section of the Handbook address a large set of IT–business alignment issues, which are organized under three high-level topics:

• Strategic IT capabilities
• IT leadership roles
• Sourcing alternatives

STRATEGIC IT CAPABILITIES

Chapter 1, “Assessing IT–Business Alignment,” presents a tool for teams of business and IT managers to reach agreement on their organization’s current state of alignment and to provide a roadmap for improvement. Thirty-eight practices are categorized into six IT–business alignment criteria: communications, competency/value measurement, governance, partnership, technology scope, and skills. According to the author, most executives today rate their organizations between levels 2 and 3 on a five-level maturity curve.

Chapters 2 and 3 provide some high-level guidelines for achieving IT–business alignment via strategic IT capabilities. Chapter 2, “IT Capabilities, Business Processes, and Impact on the Bottom Line,” emphasizes the need to focus on IT investments that will result in “bundles” of internally consistent elements that fulfill a business or IT objective. A leading academician, the author argues that IT capabilities primarily impact a company’s bottom line through redesigned business processes.
Chapter 3, "Facilitating Transformations in IT: Lessons Learned along the Journey," focuses on how an IT organization can successfully transform itself into a more flexible consultative model. Based on a previously published model, the authors describe each component of their change model for an IT management context. The chapter concludes with lessons learned based on the authors' extensive field experiences.

The next four chapters are all concerned with developing specific IT capabilities by improving IT processes and metrics. Chapter 4, "Strategic Information Technology Planning and the Line Manager's Role," presents
an IT planning approach that takes into account two potentially conflicting needs: centralized IT coordination and entrepreneurial IT applications for business units. The author views the roles played by line managers in the IT planning process as critical to achieving IT–business alignment.

Chapter 5, "Running Information Services as a Business," presents a framework and a set of tools for managing the IS department's service commitments to the organization. For example, the authors provide a comprehensive mapping of all IS services to discrete stakeholder constituencies, as well as templates for capturing business value, project roles, and risks.

Chapter 6 presents a process model for IT procurement that was developed by a Society for Information Management (SIM) working group. The objective of the working group was to impose discipline on a cross-functional process, based on the experiences of a dozen senior IT executives from large North American companies. The model details sub-processes and key issues for three Deployment processes (requirements determination, acquisition, contract fulfillment) and three Management processes (supplier management, asset management, quality management).

Chapter 7, "Performance Metrics for IT Human Resource Alignment," focuses on designing metrics to motivate and reward IT managers and their personnel for the development of a strategic human resource capability within the IT organization. After presenting guidelines for both what to measure and how to measure it, the chapter uses a case example to demonstrate some best practices in people-related metrics in an organizational context that values IT–business goal alignment.

The final chapter under this topic, "Is It Time for an IT Ethics Program?," provides specific guidelines for developing an ethics program to help IT employees make better decisions. Given the recent publicity on corporate scandals due to unethical behavior within U.S.
organizations, the chapter's title deserves a resounding "Yes" response: an IT ethics program appears to be a relatively inexpensive and totally justifiable investment.

IT LEADERSHIP ROLES

The three chapters on IT leadership topics share ideas based on many years of personal experience by IT leaders. Chapter 9, "The CIO in the Era of Dislocation," is based on the author's insights as a former CIO and now a consultant with regular access to thought leaders in the field. Specifically, the author argues that entrepreneurial leadership is required in today's networked world. The new era of pervasive computing and dislocating technologies and its meaning for the CIO role are described. As a former IT manager and a facilitator of leadership development programs, the author
of Chapter 10, "Leadership Development: The Role of the CIO," argues that the departing CIO's legacy is not the IT infrastructure left behind, but rather the IT leadership capability left behind. This chapter was crafted with the objective of helping IS leaders understand their own leadership styles, which is the first step toward helping develop other leaders.

Chapter 11, "Designing a Process-Based IT Organization," summarizes the organization design innovations of a dozen highly regarded IT leaders striving to develop more process-based IT organizations. The authors synthesize these research findings into four IT processes and six IT disciplines that characterize the early 21st-century process-based IT organization. Common challenges faced by the IT leaders who were interviewed are also described, along with some of their initial solutions to address them.

SOURCING ALTERNATIVES

Since the landmark Kodak outsourcing contract, the trade-offs between internal and external sourcing for core and non-core IT functions have been widely studied. Chapter 12, "Preparing for the Outsourcing Challenge," provides useful guidelines for preventing a bad outsourcing decision. Detailed methods to facilitate fair service and cost comparisons between internal staff and outsourcing vendors are provided.

Chapter 13, "Managing Information Systems Outsourcing," discusses the key components of outsourcing agreements from a client perspective. As the author points out, good contractual agreements are the first step toward effective management and control of IS/IT outsourcing arrangements. The final two chapters on the alternative sourcing topic discuss IT management issues associated with two new outsourcing options: (1) "offshore" outsourcing (IT work managed by organizations in other countries) and (2) application service providers (ASPs).
Chapter 14, "Offshore Development: Building Relationships across International Boundaries," provides useful "food-for-thought" about how to build effective relationships with IT workers managed by an outsourcing firm located in a different country from the client firm. Using a case example of a client firm based in the United Kingdom and an outsourcer in India, the authors describe some of the challenges encountered due to socio-cultural differences and a lack of knowledge about the client firm's business context, as well as some suggestions for how these challenges can be successfully addressed.

ASPs provide Internet-based hosting of packaged or vendor-customized applications. Particularly if the ASP is a third-party provider, its value-added services may include software integration as well as hosting services. A composite of content from two related articles published by the same authors, Chapter 15, "Application Service Providers," presents the
potential benefits of ASP arrangements (including the support of virtual organizations) and some of the infrastructure challenges. The discussion of service level agreements emphasizes the key differences between SLAs with an external ASP versus SLAs with an internal IT organization.
Chapter 1
Assessing IT–Business Alignment
Jerry Luftman
Alignment is the perennial business chart-topper on top-ten lists of IT issues. Educating line management on technology's possibilities and limitations is difficult; so is setting IT priorities for projects, developing resources and skills, and integrating systems with corporate strategy. It is even tougher to keep business and IT aligned as business strategies and technology evolve. There is no silver-bullet solution, but achieving alignment is possible. A decade of research has found that the key is building the right relationships and processes, and providing the necessary training. What follows is a methodology developed by the author for assessing a company's alignment. Modeled after the Capability Maturity Model® developed by Carnegie Mellon's Software Engineering Institute, but focused on a more strategic set of business practices, this tool has been successfully tested at more than 50 Global 2000 companies and is currently the subject of a benchmarking study sponsored by the Society for Information Management and The Conference Board.1 The primary objective of the assessment is to identify specific recommendations for improving the alignment of IT and the business.

ALIGNMENT CATEGORIES

The tool has six IT–business alignment criteria, or maturity categories, that are included in each assessment:

1. Communications Maturity
2. Competency/Value Measurements Maturity
3. Governance Maturity
4. Partnership Maturity
5. Technology Scope Maturity
6. Skills Maturity
0-8493-1595-6/03/$0.00+$1.50 © 2003 by CRC Press LLC
Each maturity category is discussed below. A list of specific practices for each of the six alignment criteria can be found in Exhibit 1.

Communications Maturity

Effective exchange of ideas and a clear understanding of what it takes to ensure successful strategies are high on the list of enablers and inhibitors to alignment. Too often there is little business awareness on the part of IT or little IT appreciation on the part of the business. Given the dynamic environment in which most organizations find themselves, ensuring ongoing knowledge sharing across organizations is paramount. Many firms choose to draw on liaisons to facilitate this knowledge sharing. The keyword here is "facilitate." This author has often seen facilitators whose role becomes serving as the sole conduit for interaction among the different organizations. This approach tends to stifle, rather than foster, effective communications. Rigid protocols that impede discussions and the sharing of ideas should be avoided.

Competency/Value Measurements Maturity

Too many IT organizations cannot demonstrate their value to the business in terms that the business understands. Frequently, business and IT metrics of value differ. A balanced "dashboard" that demonstrates the value of IT in terms of contribution to the business is needed. Service levels that assess IT's commitments to the business often help. However, the service levels must be expressed in terms that the business understands and accepts. The service levels should be tied to criteria that clearly define the rewards and penalties for surpassing, or missing, the objectives. Frequently, organizations devote significant resources to measuring performance factors. However, they spend much less of their resources on taking action based on these measurements.
For example, requiring a return on investment (ROI) before a project begins, but not reviewing how well objectives were met after the project was deployed, provides little value to the organization. It is important to continuously assess the performance metrics criteria to understand (1) the factors that lead to missing the criteria and (2) what can be learned to improve the environment.

Governance Maturity

The considerations for IT governance include how the authority for resources, risk, conflict resolution, and responsibility for IT is shared among business partners, IT management, and service providers. Project selection and prioritization issues are included here. Ensuring that the appropriate business and IT participants formally discuss and review the priorities and allocation of IT resources is among the most important enablers (or inhibitors) of alignment. This decision-making authority needs to be clearly defined.

Exhibit 1. Alignment Criteria

Each practice is rated on five maturity levels: Level 1: Without Process (No Alignment); Level 2: Beginning Process; Level 3: Establishing Process; Level 4: Improved Process; and Level 5: Optimal Process (Complete Alignment). The descriptions below give each practice's Level 1 through Level 5 characteristics, numbered (1) through (5).

Alignment Criterion: Communications Maturity
• Understanding of Business by IT: (1) IT management lacks understanding; (2) limited understanding by IT management; (3) good understanding by IT management; (4) understanding encouraged among IT staff; (5) understanding required of all IT staff
• Understanding of IT by Business: (1) managers lack understanding; (2) limited understanding by managers; (3) good understanding by managers; (4) understanding encouraged among staff; (5) understanding required of staff
• Organizational Learning: (1) casual conversation and meetings; (2) newsletters, reports, group e-mail; (3) training, departmental meetings; (4) formal methods sponsored by senior management; (5) learning monitored for effectiveness
• Style and Ease of Access: (1) business to IT only; formal; (2) one-way, somewhat informal; (3) two-way, formal; (4) two-way, somewhat informal; (5) two-way, informal and flexible
• Leveraging Intellectual Assets: (1) ad hoc; (2) some structured sharing emerging; (3) structured around key processes; (4) formal sharing at all levels; (5) formal sharing with partners
• IT–Business Liaison Staff: (1) none, or use only as needed; (2) primary IT–business link; (3) facilitate knowledge transfer; (4) facilitate relationship building; (5) building relationships with partners

Alignment Criterion: Competency/Value Measurements Maturity
• IT Metrics: (1) technical only; (2) technical cost; metrics rarely reviewed; (3) review, act on technical, ROI metrics; (4) also measure effectiveness; (5) also measure business ops, HR, partners
• Business Metrics: (1) IT investments measured rarely, if ever; (2) cost/unit; rarely reviewed; (3) review, act on ROI, cost; (4) also measure customer value; (5) balanced scorecard, includes partners
• Link between IT and Business Metrics: (1) value of IT investments rarely measured; (2) business, IT metrics not linked; (3) business, IT metrics becoming linked; (4) formally linked; reviewed and acted upon; (5) balanced scorecard, includes partners
• Service Level Agreements: (1) use sporadically; (2) with units for technology performance; (3) with units; becoming enterprisewide; (4) enterprisewide; (5) includes partners
• Benchmarking: (1) seldom or never; (2) sometimes benchmark informally; (3) may benchmark formally, seldom act; (4) routinely benchmark, usually act; (5) routinely benchmark, act on, and measure results
• Formally Assess IT Investments: (1) do not assess; (2) only when there is a problem; (3) becoming a routine occurrence; (4) routinely assess and act on findings; (5) routinely assess, act on, and measure results
• Continuous Improvement Practices: (1) none; (2) few; effectiveness not measured; (3) few; starting to measure effectiveness; (4) many; frequently measure effectiveness; (5) practices and measures well-established

Alignment Criterion: Governance Maturity
• Formal Business Strategy Planning: (1) not done, or done as needed; (2) at unit functional level, slight IT input; (3) some IT input and cross-functional planning; (4) at unit and enterprise, with IT; (5) with IT and partners
• Formal IT Strategy Planning: (1) not done, or done as needed; (2) at unit functional level, light business input; (3) some business input and cross-functional planning; (4) at unit and enterprise, with business; (5) with partners
• Organizational Structure: (1) centralized or decentralized; (2) central/decentral; some collocation; (3) central/decentral or federal; (4) federal; (5) federal
• Reporting Relationships: (1) CIO reports to CFO; (2) CIO reports to CFO; (3) CIO reports to COO; (4) CIO reports to COO or CEO; (5) CIO reports to CEO
• How IT is Budgeted: (1) cost center, spending is unpredictable; (2) cost center by unit; (3) some projects treated as investments; (4) IT treated as investment; (5) profit center
• Rationale for IT Spending: (1) reduce costs; (2) productivity, efficiency; (3) also a process enabler; (4) process driver, strategy enabler; (5) competitive advantage, profit
• Senior-Level IT Steering Committee: (1) do not have; (2) meet informally as needed; (3) formal committees meet regularly; (4) proven to be effective; (5) also includes external partners
• How Projects Are Prioritized: (1) react to business or IT need; (2) determined by IT function; (3) determined by business function; (4) mutually determined; (5) partners' priorities are considered

Alignment Criterion: Partnership Maturity
• Business Perception of IT: (1) cost of doing business; (2) becoming an asset; (3) enables future business activity; (4) drives future business activity; (5) partner with business in creating value
• IT's Role in Strategic Business Planning: (1) not involved; (2) enables business processes; (3) drives business processes; (4) enables or drives business strategy; (5) IT, business adapt quickly to change
• Shared Risks and Rewards: (1) IT takes all the risks, receives no rewards; (2) IT takes most risks with little reward; (3) IT, business start sharing risks, rewards; (4) risks, rewards always shared; (5) managers incented to take risks
• Managing the IT–Business Relationship: (1) IT–business relationship is not managed; (2) managed on an ad hoc basis; (3) processes exist but not always followed; (4) processes exist and are complied with; (5) processes are continuously improved
• Relationship/Trust Style: (1) conflict and mistrust; (2) transactional relationship; (3) IT becoming a valued service provider; (4) long-term partnership; (5) partner, trusted vendor of IT services
• Business Sponsors/Champions: (1) usually none; (2) often have a senior IT sponsor or champion; (3) IT and business sponsor or champion at unit level; (4) business sponsor or champion at corporate level; (5) CEO is the business sponsor or champion

Alignment Criterion: Technology Scope Maturity
• Primary Systems
• Standards
• Architectural Integration
• How IT Infrastructure is Perceived

Alignment Criterion: Skills Maturity
• Innovative, Entrepreneurial Environment: (1) discouraged; (2) somewhat encouraged at unit level; (3) strongly encouraged at unit level; (4) also at corporate level; (5) also with partners
• Key IT HR Decisions Made by: (1) top business and IT management at corporate; (2) same, with emerging functional influence; (3) top business and unit management; IT advises; (4) top business and IT management across firm; (5) top management across firm and partners
• Change Readiness: (1) tend to resist change; (2) change readiness programs emerging; (3) programs in place at functional level; (4) programs in place at corporate level; (5) also proactive and anticipate change
• Career Crossover Opportunities: (1) job transfers rarely occur; (2) occasionally occur within unit; (3) regularly occur for unit management; (4) regularly occur at all unit levels; (5) also at corporate level
• Cross-Functional Training and Job Rotation: (1) no opportunities; (2) decided by units; (3) formal programs run by all units; (4) also across enterprise; (5) also with partners
• Social Interaction: (1) minimal IT–business interaction; (2) strictly a business-only relationship; (3) trust and confidence is starting; (4) trust and confidence achieved; (5) attained with customers and partners
• Attract and Retain Top Talent: (1) no retention program; poor recruiting; (2) IT hiring focused on technical skills; (3) technology and business focus; retention program; (4) formal program for hiring and retaining; (5) effective program for hiring and retaining

Partnership Maturity

The relationship between the business and IT organizations is another criterion that ranks high among the enablers and inhibitors of alignment. Giving the IT function the opportunity to have an equal role in defining business strategies is obviously important. However, how each organization perceives the contribution of the other, the trust that develops among the participants, ensuring appropriate business sponsors and champions of IT endeavors, and the sharing of risks and rewards are all major contributors to mature alignment. This partnership should evolve to a point where IT both enables and drives changes to both business processes and business strategies. Naturally, this demands having a clearly defined vision shared by the CIO and CEO.

Technology Scope Maturity

This set of criteria assesses the extent to which IT is able to:

• Go beyond the back office and the front office of the organization
• Assume a role supporting a flexible infrastructure that is transparent to all business partners and customers
• Evaluate and apply emerging technologies effectively
• Enable or drive business processes and strategies as a true standard
• Provide solutions customizable to customer needs

Skills Maturity

This category encompasses all IT human resource considerations, such as hiring, firing, motivating, training, and educating, as well as culture. Going beyond traditional considerations such as training, salary, performance feedback, and career opportunities, there are factors that include the organization's cultural and social environment. For example, is the organization ready for change in this dynamic environment? Do individuals feel personally responsible for business innovation?
Can individuals and organizations learn quickly from their experience? Does the organization leverage innovative ideas and the spirit of entrepreneurship? These are some of the important conditions of mature organizations.

LEVELS OF ALIGNMENT MATURITY

Each of the six criteria described above has a set of attributes that allow particular dimensions (or practices) to be assessed using a rating scheme
of five levels. For example, for the practice "Understanding of business by IT" under the Communications Maturity criterion, the five levels are:

Level 1: IT management lacks understanding
Level 2: Limited understanding by IT management
Level 3: Good understanding by IT management
Level 4: Understanding encouraged among IT staff
Level 5: Understanding required of all IT staff

It is important to have both business and IT executives evaluate each of the practices for the six maturity criteria. Typically, the initial review will produce divergent results, and this divergence itself points to the alignment problems and opportunities that need to be addressed. The objective is for the team of IT and business executives to converge on a maturity level. Further, the relative importance of each of the attributes for each maturity criterion may differ among organizations. For example, in some organizations, the use of SLAs (service level agreements), which is a practice under the Competency/Value Measurements Maturity criterion, may not be considered as important to alignment as the effectiveness of IT–business liaisons, which is a practice under the Communications Maturity criterion. In that case, assigning the SLA practice a low maturity assessment should not significantly impact the overall rating. However, it is still valuable for the assessment team to discuss why a particular attribute (in this example, SLAs) is less significant than another attribute (liaisons).

After each practice is assessed, an average score for the evaluation team is calculated for each practice, and then an average category score is determined for each of the six criteria (see Exhibit 2). The evaluation team then uses these scores for each criterion to converge on an overall assessment of the IT alignment maturity level for the firm (see below). The next higher level of maturity is then used as a roadmap to identify what the firm should do next.
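The roll-up just described (team-averaged practice scores combined into category scores, and category scores into an overall maturity score) can be sketched in a few lines of Python. This is purely illustrative: the practice scores below are hypothetical, and a real team would adjust the final number through discussion and weighting.

```python
from statistics import mean

# Team-averaged scores (1-5, in half-point steps) for each of the 38
# practices, grouped by the six alignment criteria. Hypothetical values.
practice_scores = {
    "Communications": [3, 2.5, 2, 3, 2, 2.5],
    "Competency/Value Measurements": [2, 2, 1.5, 3, 2, 2, 2.5],
    "Governance": [2.5, 2.5, 3, 2, 2, 2.5, 2, 3],
    "Partnership": [2, 2.5, 2, 2, 2.5, 3],
    "Technology Scope": [3, 2.5, 2.5, 2],
    "Skills": [2, 2.5, 2, 1.5, 2, 2.5, 2],
}

# Average category score for each criterion (the right-hand column of
# the Exhibit 2 tally sheet).
category_scores = {
    criterion: round(mean(scores), 2)
    for criterion, scores in practice_scores.items()
}

# A simple average of the category scores gives a starting point for the
# overall alignment score; teams typically adjust it in discussion.
overall = round(mean(category_scores.values()), 2)

for criterion, score in category_scores.items():
    print(f"{criterion}: {score}")
print(f"Overall alignment maturity (pre-discussion): {overall}")
```

With these hypothetical scores, the team would likely converge on Level 2 (Beginning Process), consistent with the average first-time assessment reported later in this chapter.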
A trained facilitator is typically needed for these sessions.

ASSESSING YOUR ORGANIZATION

This rating system will help you assess your company's level of alignment. You will ultimately decide which of the following definitions best describes your business practices. Each description corresponds to a level of alignment, of which there are five:
Level 1: Without Process (no alignment)
Level 2: Beginning Process
Level 3: Establishing Process
Level 4: Improved Process
Level 5: Optimal Process (complete alignment)
Exhibit 2. Tally Sheet

For each of the 38 practices, record the team's averaged score (1, 1.5, 2, 2.5, 3, 3.5, 4, 4.5, or 5); then compute an average category score for each of the six criteria.

Communications (practices 1–6): 1. Understanding of business by IT; 2. Understanding of IT by business; 3. Organizational learning; 4. Style and ease of access; 5. Leveraging intellectual assets; 6. IT–business liaison staff

Competency/Value Measurements (practices 7–13): 7. IT metrics; 8. Business metrics; 9. Link between IT and business metrics; 10. Service level agreements; 11. Benchmarking; 12. Formally assess IT investments; 13. Continuous improvement practices

Governance (practices 14–21): 14. Formal business strategy planning; 15. Formal IT strategy planning; 16. Organizational structure; 17. Reporting relationships; 18. How IT is budgeted; 19. Rationale for IT spending; 20. Senior-level IT steering committee; 21. How projects are prioritized

Partnership (practices 22–27): 22. Business perception of IT; 23. IT's role in strategic business planning; 24. Shared risks and rewards; 25. Managing the IT–business relationship; 26. Relationship/trust style; 27. Business sponsors/champions

Technology Scope (practices 28–31): 28. Primary systems; 29. Standards; 30. Architectural integration; 31. How IT infrastructure is perceived

Skills (practices 32–38): 32. Innovative, entrepreneurial environment; 33. Key IT HR decisions made by; 34. Change readiness; 35. Career crossover opportunities; 36. Cross-functional training and job rotation; 37. Social interaction; 38. Attract and retain top talent

Your Alignment Score:
Level 1 companies lack the processes and communication needed to attain alignment. In Level 5 companies, IT and other business functions (marketing, finance, R&D, etc.) adapt their strategies together, using fully developed processes that include external partners and customers. Organizations should seek to attain, and sustain, the fifth and highest level of alignment.

Conducting an assessment involves four steps:

1. Form the assessment team. Create a team of IT and business executives to perform the assessment. Ten to thirty executives typically participate, depending on whether a single business unit or the entire enterprise is being assessed.
2. Gather information. Team members should assess each of the 38 alignment practices and determine which level, from 1 to 5, best matches their organization (see Exhibit 1). This can be done in three ways: (1) in a facilitated group setting, (2) by having each member complete a survey and then meeting to discuss the results, or (3) by combining the two approaches (e.g., in situations where it is not possible for all group members to meet).
3. Decide on individual scores. The team agrees on a score for each practice. The most valuable part of the assessment is not the score, but understanding its implications for the entire company and what needs to be done to improve it. An average of the practice scores is used to determine a category score for each of the six criteria (see Exhibit 2).
4. Decide on an overall alignment score. The team reaches consensus on what overall level to assign the organization. Averaging the category scores accomplishes this, but dialogue among the participants is extremely valuable. For example, some companies adjust the alignment score because they give more weight to particular practices. The overall alignment score can be used as a benchmarking aid to compare with other organizations.

Global 1000 executives who have used this tool for the first time have rated their organizations, on average, at Level 2 (Beginning Process), although they typically score at Level 3 for a few alignment practices.

CONCLUSION

Achieving and sustaining IT–business alignment continues to be a major issue. Experience shows that no single activity will enable a firm to attain and sustain alignment. There are too many variables. The technology and business environments are too dynamic. The strategic alignment maturity assessment tool provides a vehicle to evaluate where an organization is, and where it needs to go, to attain and sustain business–IT alignment. The careful assessment of a firm's IT–business alignment maturity is an important step in identifying the specific actions necessary to ensure that IT is being used to appropriately enable or drive the business strategy.

Note

1. See also Jerry Luftman, editor, Competing in the Information Age: Align in the Sand, Oxford University Press, 2003; and Jerry Luftman, Managing the IT Resource, Prentice Hall, 2003.
Chapter 2
IT Capabilities, Business Processes, and Impact on the Bottom Line
William R. King
During the 1990s, a great deal of attention was paid to the "productivity paradox" — the phenomenon that, for the U.S. economy, no corresponding improvements in productivity were detectable while huge business investments were being made in IT. Since the early 1990s, the paradox has been debunked on technical grounds, including its reliance on government-provided productivity data and its failure to consider increased consumer benefits from IT. Attempts to trace IT investments to their impact on the bottom line have also generally not proved fruitful, presumably because so many factors affect overall profitability that it is impractical to isolate the effect of one of them — IT.

However little empirical evidence has existed for the impact of IT, U.S. firms have continued to invest heavily in it. Now, more than 50 percent of the total annual capital investment of U.S. firms is in IT, and IT "success stories" continue to proliferate. While business managers have obviously not been deterred by practitioners of the dismal science, they have also received little guidance. Now, some research results are beginning to appear that have the prospect of providing such guidance.

IT INVESTMENT VERSUS IT CAPABILITIES

One of the explanations for the productivity paradox's apparent failure is that IT investments have often been used as a surrogate measure for IT
capabilities. Clearly, a firm does not enhance the effectiveness of its IT merely by throwing money at IT. Rather, IT can enhance business performance only through the development of efficacious IT capabilities, which include the hardware and software, the shared services that are provided by the IT function, and the IT management and organizational capacities — such as IT planning, software development skills, etc. — that bind the hardware and software to the services.

IT capabilities are bundles of internally consistent elements that are focused toward the fulfillment of an IT or business objective. Without such focus on a capability, the organization may make IT expenditures in a fragmented manner. For example, until quite recently, many rather sophisticated commercial banks had computerized checking account systems, loan systems, and credit card systems that were not integrated to focus on fulfilling a variety of customer needs. These disparate systems were operationally effective, but their contribution to the bottom line was limited because they could not be used to fullest advantage to identify potential customers and to enable the development of closer relationships with customers.

If a firm just invests in IT rather than in IT capabilities, it is likely to merely be acquiring IT components — primarily hardware, software, and vendor-provided services — that it may not really understand and may not be capable of fully utilizing to achieve business goals. If, on the other hand, it develops IT capabilities — sophisticated packages of hardware, software, shared services, human skills, and organizational processes — that are focused toward specific business goals, it is far more likely to be able to effectively employ these resources to impact profitability.
For example, an investment in IT planning may enable better decisions concerning hardware and software and how they can best be used to fulfill organizational needs. The resulting package of hardware, software, planning capacity, and services may impact the bottom line rather directly, whereas isolated expenditures on new software or services may not, because the organization lacks an overall framework for deciding what is needed, what priority each “need” carries, and when the organization will be ready to use these expenditures effectively. Part of the explanation for the failure of the productivity paradox lies in the wide variability in the ability of businesses to create IT capabilities rather than merely spend money on IT. Some firms have created such capabilities and have therefore made wise IT expenditures. Others have continued the early-computer-era practice of buying the latest technology without a comprehensive plan for how that technology might be most effectively employed in achieving business goals, and as a result they have been less successful.

IT Capabilities, Business Processes, and Impact on the Bottom Line

IT AND BUSINESS PROCESSES

Research results, including a study that I conducted with Dr. Weidong Xia of the University of Minnesota, have begun to emerge demonstrating that the primary mechanism through which IT capabilities impact overall business performance is the business process. This result is consistent with the emphasis given in the past decade to the “balanced scorecard” as a measurement tool for assessing performance. The balanced scorecard deemphasizes overall financial measures and instead provides indices of progress toward business process goals such as improved quality, increased customer satisfaction, and reduced cycle time. These results in IT research demonstrate that the business impact of IT is felt most directly in the improvements IT makes in such measures of business process performance. This is a basic premise of business process reengineering (BPR), which is intuitively appealing and widely applied (even if the BPR terminology is now somewhat dated) but which has not been broadly studied and verified. In effect, proponents of BPR have argued that redesigned business processes are needed so that the inertia of the old way of doing things can be wrung out of the processes. The IT research results can be interpreted to say that old technologies may similarly be “wrung out” of existing processes — not merely by replacing old technologies with new ones, but by doing zero-based process redesign based on a fresh look at process goals, at alternative ways of performing the process, and at alternative new technologies and organizational forms (such as strategic alliances and outsourcing).
Companies that have successfully developed IT capabilities are, ironically, in the best position to use non-IT solutions, such as alliances, in business process improvements, because they are able to recognize the limits of new technologies and to focus on the best way of achieving the goals of the business processes. The emphasis on business processes as targets for IT investments should incorporate the notion of “real options” — the idea that in making an IT investment, one is not only purchasing the immediate benefit but also either acquiring or foreclosing future options. This idea is crucial both to the concept of IT capabilities and to the determination of the business process benefits that may be derived from an IT investment. The simplest illustration of real options is the scalability of IT resources: if resources are scalable, some future options are preserved; if not, they may be foreclosed.

Guidance to Practitioners

The focus on IT capabilities and the influence that they exert through business processes leads to a number of guidelines for managers:

• IT should avoid the image — and the reality — of always recommending that money be thrown at the latest technologies.
• Rather, the focus of IT investments should be on developing explicit IT capabilities that are “bundles” of hardware, software, shared services, management practices, and technical and managerial skills.
• The soft side of IT capabilities is as important as the hard side, so management should weigh investments in IT planning and in development methodologies as carefully as new hardware and software investments.
• The real option value of IT investments — that is, the future options that are either made possible or foreclosed — needs to be considered in developing an IT capability and in redesigning e-business processes.
• IT should focus on impacting the business through key business processes because, if the emphasis is on balanced scorecard measures, these impacts will eventually flow to the bottom line.
• Correspondingly, while an emphasis on IT’s impact on profitability can lead to IT never really being held accountable, business process measures are more readily assessed and more easily attributed to the success or failure of IT.
• In considering the redesign of business processes, IT should give attention to new organizational arrangements, such as alliances and outsourcing, rather than concentrating solely on IT solutions.
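The real-option reasoning above can be made concrete with a small sketch. The sketch below compares a non-scalable platform with a more expensive scalable one that preserves an expansion option; all figures, the growth probability, and the simple expected-value model are illustrative assumptions, not from this chapter.

```python
# Illustrative real-options comparison for an IT investment.
# All numbers below are hypothetical assumptions chosen for the example.
def expected_value(upfront_cost, base_benefit, p_growth, expansion_payoff,
                   expansion_cost, scalable):
    """Expected net value of an infrastructure choice.

    If demand grows (probability p_growth), a scalable platform can exercise
    the 'real option' to expand; a non-scalable one has foreclosed that option.
    """
    option_value = 0.0
    if scalable:
        # The option is exercised only when expansion pays off.
        option_value = p_growth * max(expansion_payoff - expansion_cost, 0)
    return base_benefit - upfront_cost + option_value

# Non-scalable platform: cheaper today, no future option.
rigid = expected_value(upfront_cost=100, base_benefit=130, p_growth=0.4,
                       expansion_payoff=200, expansion_cost=80, scalable=False)
# Scalable platform: costs more today, but preserves the expansion option.
flexible = expected_value(upfront_cost=120, base_benefit=130, p_growth=0.4,
                          expansion_payoff=200, expansion_cost=80, scalable=True)

print(rigid)     # 30.0
print(flexible)  # 58.0
```

Under these assumed numbers, the scalable choice dominates even though it costs more up front — the point being that the option value, not just the immediate benefit, belongs in the comparison.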
Chapter 3
Facilitating Transformations in IT: Lessons Learned along the Journey

Steve Norman
Robert A. Zawacki
The rate of change today is higher than it has ever been, and there is more pressure on information technology (IT) professionals than ever before. IT firms and departments must do more with less, and must do it faster than ever, given today’s competitive environments. It is therefore even more critical that IT organizations employ successful change strategies and processes: a company that cannot adapt to change quickly and successfully is destined to fail. The purpose of this chapter is to propose a model for successful change in IT firms. The model is reinforced by more than 30 years of research and personal experience, and it has proven successful in many of today’s top IT firms.

CONTINUING TURBULENCE

The turbulence in which companies operate today has reached peak levels. Since 1995, mergers and acquisitions have increased in both number and size. Those mergers permit economies of scale, which translate into more and larger workforce reductions (the projection was that the numbers for 2001 would be the highest ever). Further, organizations are
locked in a global struggle over time-based competition, cost-effectiveness, ever-better customer service, and the need to be innovative while remaining flexible. As a consequence, trust between management and individual contributors is at an all-time low.2 The outcome of this perceived random change is a strong need for IT leaders to transform their organizations from the old bureaucratic control model to a more flexible, consultative model. In the authors’ opinion — borne out by many companies — the organizational model that best enables people to respond to the above drivers of change is the “learning organization.” Not only is this model flexible, it is also scalable, so it can be implemented in organizations of many different sizes. This model is also called the STAR organization.3 It has many synonyms, such as the high-velocity environment, the ad hoc organization, and the shamrock organization; the expression “violent implementation” has even been used to describe a development department’s strategy in response to time-based competition. Regardless of what it is called, an outcome of this continuous/discontinuous change is an expressed need by IT leaders4 for strategic alignment with the business and for a renewed focus on strategy and tactics. Because of time-based competition and cost-effectiveness, IT leaders must do more with less, and even more quickly than before.

CREATING THE BURNING PLATFORM

“The need for a transformation stems from environmental turbulence that can render current organizational practices valueless. To respond, leaders must transform their organizations.”5 The transformation to the learning (STAR) organization is a new paradigm resulting in a new synergy that helps people respond to the drivers of change more quickly than their competitors. The four main drivers of change are:6

1. Even better customer service: both internal and external.
2. Cost-effectiveness: a firm cannot, in the long run, cut costs and sustain customer service and growth (we went through a period of cost reduction and confirmed this!).
3. Time-based competition: the firm that gets to the market first with a new product or service has a temporary monopoly, which is rewarded by the marketplace and by the stock market.
4. Innovation and flexibility: the organization of the future simply must learn faster and adapt faster than its competitors.
THE MODEL FOR ALIGNMENT AND FOCUS

In coaching various IT leaders through their transformations toward learning organizations, and through personal observation, we realized that organizations need a process that helps their people understand why they must launch a transformation, and a further process for understanding how to implement it. This circular process is the key to rebuilding trust: when people feel trusted and valued, they add value for the customer and to the bottom line of the income statement. The search for a model to capture this dual question of why and how led us to an article by Nutt and Backoff of Ohio State University, which Zawacki included in his Organization Development and Transformation book (5th edition, see Note 2). We then modified their model to better fit our research and the IT environment. Several clients have said that the model helped them understand their turbulent environment and gave them a roadmap through the transformation. The modified model appears in Exhibit 1. Moving up the ladder in the model from the bottom addresses the why questions for the various levels in an organization. For example, for programmers or software engineers, the organizational transformation should reduce their job stress, improve their quality of work life, and give them empowerment (autonomy) along with better processes and a clearer vision. Conversely, to implement the transformation (the how), the IT leadership team must begin with a clear vision and values, and then must examine all of its processes, people empowerment, and so on.
Exhibit 1. The model for alignment and focus. Elements, from top to bottom: Vision; Values; Strategic Objectives; Review Processes; New Behaviors; Empowerment of People (Trust); Trustworthiness (Integrity); Work Climate and Culture; Meaningful Work; Increased Commitment and Productivity.

VISION, VALUES, STRATEGIC OBJECTIVES, AND NEW BEHAVIORS

Establishing the organizational vision, values, strategic objectives, and behaviors should be a collaborative process, which increases the opportunity for organizational success.7 Making the process collaborative increases buy-in and, thus, ownership of the resulting vision, values, strategic objectives, and behaviors. When there is true organizational ownership of these, there is more commitment to the success of the organization, and this commitment greatly enhances the organization’s chances for success. Unfortunately, it is our experience that many IT executives want to skip this step because of its “fuzzy” nature and because of the high degree of difficulty in defining the future. After going through this process, one very effective executive vice-president told us, “If I hear that vision word again, you are out of here!” As stated, this new paradigm must begin with a clear vision, which must align with the company’s strategic objectives and which must then result in new values and behaviors. The importance of a clear vision and corresponding values cannot be overstated. A clear vision and values are the stakes in the ground that associates hang onto, and that pull them toward the future when “turf” conflicts begin to surface during a planned transformation. The vision must also be exciting, engaging, and inspiring; it must have all of these qualities to truly rally the energy of the organization around it. If it fails to exhibit these qualities, the organization simply will not sustain the effort required to succeed. After establishing such a vision, the IT leadership team must then clearly articulate the new supporting values and behaviors of the STAR organization to all levels of the organization. The entire organization must clearly understand the new vision, values, and behaviors so that everyone knows what is expected of them. In addition, the leadership team
must be sure to keep communicating the vision, values, and behaviors to reinforce the message. It is also critical that the leadership team look for early opportunities to positively reinforce the desired behaviors when they are exhibited. In addition, incentive and reward programs that support the new vision, values, and behaviors must quickly be put in place to capitalize on early victories and build momentum. Of course, new values must be differentiated from old (mature) values, and the focus must be on the new values. Examples of mature and new values, and the consequences (outcomes) of each, are described in Exhibit 2.

REVIEW PROCESSES

After establishing the vision, values, and behaviors, the IT leadership team must review all processes to be sure that they support and enhance the learning organization. Every process must map directly back to the established vision, values, and behaviors; otherwise, the process is merely overhead and should be reexamined (and possibly eliminated). For example, does the joining-up process include the interview, offer, in-processing, sponsorship, coaching, training, and the tactical goals and resources to do the job in the new organization? If so, great. If not, it should be immediately reexamined and revamped. Another key success factor is the priority-setting process for projects. It is our experience that if the prioritization process does not involve the business partners to the necessary degree, the CIO is set up for failure. Usually, the business partners negotiate their projects with the CIO, and the CIO is then put in the position of setting priorities among the various business units. Unless the business partners, as a team, set the priorities for the entire organization, the CIO is doomed to fail, given limited resources and the reduced cycle time of projects driven by time-based competition in the market.
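The joint priority-setting idea above — business partners scoring the whole portfolio together rather than negotiating unit by unit with the CIO — can be sketched as a simple combined-scoring exercise. The partners, projects, scores, and scoring scale below are hypothetical illustrations, not a method prescribed by this chapter.

```python
# A minimal sketch of joint priority setting by business partners.
# Each partner scores every project (1 = low value, 5 = high value), so the
# resulting ranking reflects the whole enterprise, not one unit's negotiation.
scores = {
    "Finance":    {"CRM upgrade": 3, "Data warehouse": 5, "Portal redesign": 2},
    "Sales":      {"CRM upgrade": 5, "Data warehouse": 3, "Portal redesign": 4},
    "Operations": {"CRM upgrade": 2, "Data warehouse": 4, "Portal redesign": 3},
}

def enterprise_ranking(scores):
    """Sum each project's scores across all partners and rank high to low."""
    totals = {}
    for partner_scores in scores.values():
        for project, score in partner_scores.items():
            totals[project] = totals.get(project, 0) + score
    return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)

for rank, (project, total) in enumerate(enterprise_ranking(scores), start=1):
    print(rank, project, total)
```

In practice the scoring criteria (strategic fit, cycle-time impact, cost) and weights would themselves be agreed by the partner team; the point of the sketch is only that one shared ranking replaces bilateral negotiation.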
Finally, metrics must be a key part of the process examination. The metrics used must be valid — they must measure what they are supposed to measure — and must be carefully examined to be sure they parallel the organizational direction. We use a metrics package titled “360 Degree Benchmarking” that includes measures for five critical areas: human resources/culture, software development, network platform services, data centers, and enterprise IT investments.8 Unfortunately, it has been our experience that many IT leaders resist metrics for fear of the unknown (or for fear of being held accountable to them!). However, metrics and baseline measures are critical because they permit the leadership team to demonstrate the value of the transformation process to the CEO.
Exhibit 2. Consequences of Mature and New Values

Mature Values → Manifestations or Outcomes:
• Little personal investment in IT vision, values, and objectives → They are the leader’s values, not mine
• People need a leader to direct them → Hierarchy of authority and control
• Keep the boss happy → Real issues do not surface at meetings
• If something goes wrong, blame someone else → Appeal procedures become over-formalized
• Do not make waves → Innovation is not widespread but in the hands of a few technologists
• Tomorrow will be just like today → People swallow their frustrations: “I can’t do anything — it’s leadership’s responsibility to save the ship.”
• People do not like change/they like security → Job descriptions, division of labor, and little empires

New Values → Manifestations or Outcomes:
• Vision, values, and objectives are shared and owned by all IT individual contributors and business units → These are my values
• People are capable of managing themselves within the vision, values, and objectives → Hierarchy is replaced with self-directed teams
• Keep the customer happy → Customer-driven performance
• The buck stops here → People address real problems and find synergistic solutions
• Make waves → Waves result in innovation
• Nobody knows what tomorrow will bring → Constant learning to prepare for the unknown future
• Although random change upsets our behavior patterns, we learn and adjust → Change is an opportunity to grow
Note: For a more detailed discussion of the change paradigm, see Steven W. Lyle and Robert A. Zawacki, “Centers of Excellence: Empowering People to Manage Change,” Information Systems Management, Winter 1997, pp. 26–29.
EMPOWERMENT OF PEOPLE

Empowerment was a buzzword of the 1990s. However, trying to define empowerment is much like trying to define pornography. The dilemma was best summed up by former Justice Potter Stewart’s now-famous statement: “I shall not today attempt to define [obscenity]; and perhaps I could never succeed in intelligibly doing so. But I know it when I see it.”
The American Heritage Dictionary defines empower as “to enable or permit.” In the business world, this is also referred to as autonomy: empowerment is essentially how much autonomy the employee has in accomplishing organizational goals. So, what does this mean? We can perhaps better define or measure empowerment by observing leadership behavior in IT organizations, and then going back to the values and behaviors in Exhibit 2 to determine how closely they match. Does the organization truly empower its people? Do its actions support the vision, and does the organization then enable its people to get to the vision in their own way? After 30 years of consulting with IT organizations, we do know that individual contributors want to be treated as adults and want to control their own destiny. This does not mean that they desire laissez-faire management. Rather, people want goals and deadlines, and, given that they have the needed ability, training, and resources, they want the autonomy to accomplish those goals. However, they do expect strong feedback on goal accomplishment. Another aspect of empowering people is bureaucracy-bashing. The basic objective of bureaucracy-bashing is to remove low-value work and create “headroom” for overstressed people while building trust. This process of reverse-engineering is similar to GE’s Work-Out sessions or Ford/Jaguar’s egg groups. Many of our STAR organizations also use quick hits and early wins as tactical moves to deliver quickly and reinforce the benefits of organizational transformations.9

WORK CLIMATE AND CULTURE

To improve the quality of work life, do not re-invent the wheel. Benchmark against the best and leverage what has already been done. There are many good processes and systems already designed and implemented.
For example, a networking group client recently decided that a strong technical career track was one of the keys to the future as organizations delayered. When they called, we referred them to another client that had an excellent technical career track. A team of development people visited the organization, liked what they saw, borrowed the hosting IT organization’s procedure, and implemented the process when they were “back at the ranch.” Very little cost and no consultants. The STAR organization is very flat, with few layers of management. Additionally, it is based on strong project management, and led by people who have a passion for the end product. After 30 years of interventions in IT organizations, however, we find very few organizations with strong project management — and this is an alarming trend. Most companies talk a good game; however, when you look more closely, strong project management is not there. We believe very strongly that this is a key competency of the
future in IT. Therefore, it must be examined and reexamined constantly and consistently.

MEANINGFUL WORK

Strong project managers motivate their IT people through meaningful work. Once salary and benefits are competitive, our research indicates that IT people want meaningful work, which consists of using a variety of their skills, identifying with the larger goals of the organization, having highly visible work, having autonomy, and receiving good feedback. A related metric that we use is the Job Diagnostic Survey — Information Technology,10 which measures the meaningfulness of work, compares the richness of jobs to global norms, and measures the match between the person and the job. Meaningful work, equitable pay, and benefits explain more of the productivity of an IT department than any other variables.

INCREASED COMMITMENT AND PRODUCTIVITY

When organizations look closely at the meaningfulness of the work itself, and then match that with the needs of the person doing the work, they greatly enhance their chances for success. Individuals with high growth need strength (GNS) should be matched with jobs that offer high motivating potential scores (MPS); that is, people with a high need for growth should be put in jobs that offer the growth they need. Conversely, individuals with low GNS can be placed in jobs with lower MPS. If this match is not examined closely, people on either end of the scale will quickly become dissatisfied, and their commitment and productivity will then decrease significantly. Our research indicates that 50 to 60 percent of an IT professional’s productivity stems from the match between the person and the job (GNS and MPS). Obviously, organizations want people who are committed and productive in order to increase their overall chances of adding value to the bottom line.

TRUSTWORTHINESS

Organizations also want people who are trustworthy.
People with high integrity are more apt to work smarter to make the organization successful. People who are not trustworthy are in a position to cause a great deal of damage to an organization, thus limiting the organization’s chances for success. Many individual contributors want to be trusted (empowered); however, they must also realize that they must be trustworthy. An individual’s trustworthiness is the sum total of his or her behavior, commitment, motivation, and follow-through. Although this variable is more difficult to measure than the others, it is also a key factor in an organization’s success and must be understood and examined.
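The GNS–MPS matching described under “Increased Commitment and Productivity” can be illustrated with the classic Hackman–Oldham Job Diagnostic Survey formula for the motivating potential score. The proprietary JDS — Information Technology instrument cited above may compute and norm this differently, and the cutoff values below are illustrative assumptions, so treat this only as a sketch of the matching logic.

```python
# Motivating Potential Score (MPS) from the classic Hackman-Oldham Job
# Diagnostic Survey. The JDS-IT instrument referenced in the chapter may
# differ; the cutoffs below are assumptions chosen for illustration.
def motivating_potential(skill_variety, task_identity, task_significance,
                         autonomy, feedback):
    """All five job dimensions rated 1-7; MPS ranges from 1 to 343."""
    meaningfulness = (skill_variety + task_identity + task_significance) / 3
    return meaningfulness * autonomy * feedback

def match_person_to_job(gns, mps, mps_cutoff=125, gns_cutoff=5):
    """Flag mismatches between growth need strength (GNS) and job richness."""
    rich_job = mps >= mps_cutoff
    high_gns = gns >= gns_cutoff
    return "good match" if rich_job == high_gns else "mismatch"

mps = motivating_potential(6, 5, 7, 6, 5)   # a rich, high-MPS job
print(round(mps, 1))                        # 180.0
print(match_person_to_job(gns=6, mps=mps))  # good match
print(match_person_to_job(gns=2, mps=mps))  # mismatch
```

The multiplication in the formula captures the chapter’s point: a job with zero autonomy or zero feedback has little motivating potential no matter how meaningful the work itself is.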
SUMMARY

More than 70 percent of U.S. families now have two or more wage earners. As IT organizations merge, are bought out, or downsize, the remaining people must still do all of the work of the people who left — and must now do it in less time. In many IT organizations, leadership’s response to the drivers of change is to have people work longer and harder. People can do this in the short run; however, in many IT organizations, the short run has become the long run. This trend must be altered quickly or there will be severe negative consequences for the company’s success. With a labor market that is becoming increasingly tight for IT workers, and with ever-increasing job stress, the key to a sustained competitive advantage through people is the learning organization. The transformation to the learning organization must begin with the “whys” and “hows.” The key for IT leadership is to tell and show people that there is light at the end of the tunnel. Thus, the transformation journey begins with a vision and ends with reduced job stress.

LESSONS LEARNED ALONG THE WAY

After 30 years of research, teaching, consulting, and coaching IT organizations through change and transformations, we submit the following lessons learned. While not all IT organizations exhibit all of these characteristics, the trend is so strong and clear that we feel compelled to make these statements. Some may shock the reader.

1. IT cultures eat change for lunch.
2. Empowerment of IT associates is a myth.
3. Mergers and turnover in IT leadership are killing many good change programs. New leaders feel a need for their own program or fad of the month.
4. Associates are hearing two conflicting messages: get the product out the door, and innovate. Of the two, getting product out the door wins every time.
5. IT leaders talk a good game on measurement but, in reality, many do not want to be measured.
6. Where IT is implementing effective change, there is always a good change champion at the executive level.
7. During the 1990s, the market for IT people shifted from a buyer’s market to a seller’s market (due to hot skills, the year 2000, the Internet, and a drop-off in college majors). Now, there is a shift back to a buyer’s market because of the huge failure of dot-coms and the general downturn in the U.S. economy. However, be alert for the economy to return to its previous high levels of gross domestic product (GDP) and, hence, for the market to again become a seller’s market.
8. IT leaders should concentrate on three main competencies to be successful in a period of random change: passion for the customer, passion for the product, and passion for the people.
9. Many IT change efforts are failing because they are trying to put programs that were really designed for the STAR organization in place in bureaucratic organizations designed for the 1960s.
10. Turnover at the CIO level and outsourcing will continue in the short term because the business units do not perceive that IT adds timely and cost-effective value to the bottom line.

CRITICAL SUCCESS FACTORS FOR WORKFORCE TRANSFORMATION

If only a portion of the above statements is true, what can IT leaders do to position their change programs for success? Our conclusions are as follows.

1. Create a vision of the new organization with clearly stated core values and behaviors.
2. Help associates understand the benefits of change (the “whys” and “hows”), because if we do not change, we are all dead.
3. Demonstrate radical change and stay the course.
4. Involve as many associates as possible and listen to them.
5. Realize that repetition never spoils the prayer! Communicate, communicate, and communicate some more.
6. Benchmark with the best.
7. Utilize IT leaders who demonstrate the new values and behaviors.
8. Commit resources to support the change program, in the realization that change is not free!
9. Select future leaders based on the new values and competencies (for example, through paneling, a process that uses a committee to evaluate people for future assignments).
10. Monitor progress. Use full-spectrum performance measures before, during, and after the change program.
11. Use multiple interventions and levers. Build on opportunities.
12. Change the performance appraisal and reward system.
13. Put a culture in place that thrives on change. Capitalize on chaos.

Notes

1. The Wall Street Journal, “Terror’s Toll on the Economy,” October 9, 2001.
2. Robert A. Zawacki, Carol A. Norman, Paul A. Zawacki, and Paul D. Applegate, Transforming the Mature IT Organization: Reenergizing and Motivating People, EagleStar Publishing, 1995. Also see Wendell L. French, Cecil H. Bell, Jr., and Robert A. Zawacki, Organization Development and Transformation: Managing Effective Change (5th ed.), McGraw-Hill Publishing, 1999.
3. Ibid., pp. 49–50.
4. The term “IT leaders” is an all-inclusive term that includes people such as CIOs, vice presidents of systems development, directors of networks and operations, and presidents and vice presidents of software organizations.
5. Paul C. Nutt and Robert W. Backoff, “Facilitating Transformational Change,” Journal of Applied Behavioral Science, 33(4), 491, December 1997.
6. For an example of this process, see Robert A. Zawacki and Howard Lackow, “Team Building as a Strategy for Time-Based Competition,” Information Systems Management, Summer 1998, pp. 36–39.
7. Zawacki et al., pp. 26–27.
8. 360 Degree Benchmarking is a trademark of Technology & Business Integrators (TBI) of Woodcliff Lake, New Jersey.
9. For complete guidelines to bureaucracy-bashing, see Figure 2-2 in Zawacki et al., p. 48.
10. The Job Diagnostic Survey — Information Technology and its global database are a copyrighted methodology of Zawacki and Associates of Colorado Springs.
Chapter 4
Strategic Information Technology Planning and the Line Manager’s Role

Robert Heckman
How can a company gain the benefits of entrepreneurial IT decision making by line managers without permitting the IT environment to become a high-cost, low-performance, disconnected collection of independent systems? This chapter proposes an approach to IT planning that includes a formal role and responsibility for line managers. When combined with centralized IT architecture planning, this planning technique creates an approach to information management that is simultaneously top-down and bottom-up.

The pendulum is swinging back. For more than a decade, the responsibility for managing and deploying information resources has ebbed away from the centralized information management (IM) department into line departments. The end-user computing revolution of the 1980s was followed by the client/server revolution of the 1990s. In both cases, the hoped-for outcome was the location of information resources closer to the customer and closer to marketplace decisions, which in turn would lead to better customer service, reduced cycle time, and greater empowerment of users. The reality, however, was often quite different. Costs for information technology spiraled out of control, as up to half the money a company spent on information technology was hidden in line managers’ budgets. In addition to higher costs, distributed architectures often resulted in information systems with poor performance and low reliability. Because the disciplines that had been developed for centralized mainframe systems were lacking, experienced technologists were not surprised when client/server
systems performed poorly. Many client/server systems lacked (and still lack) effective backup and recovery procedures, capacity planning procedures, and performance analysis metrics. With costs up and performance down, CEOs are once again calling for greater centralized control over information resources. The growing movement toward enterprise resource planning (ERP) systems, such as those offered by SAP, PeopleSoft, and Baan, has also increased awareness of the need for careful management of the IT infrastructure. The architectures of the client/server versions of these systems paradoxically create a need for stronger centralized control of the IT infrastructure. Large companies such as Kodak (SAP) and Corning (PeopleSoft) have created single infrastructure development teams with integrated responsibilities for technical architecture, database administration, site assessment, and planning. Finally, the diffusion of Internet and intranet resources has suggested to many that a more centralized approach to control of network resources is also desirable — in fact, even necessary. Recent discussions about the network computer, which obtains virtually all application and data resources from a central node, have reminded more than one observer of the IBM 3270 “dumb terminal” era.

THE IT MANAGEMENT CHALLENGE

Despite these drivers toward re-centralization, the forces that originally led to the diffusion of IT management responsibility still exist. The impact of information technology continues to grow and at the same time becomes more widely diffused throughout organizations. Likewise, the need to respond quickly to competitive thrusts continues to increase the value of independent IT decision making by line managers. As technologies become more user-friendly and the workforce becomes more IT literate, it is inevitable that line managers will face more and more technology-related decisions.
The challenge, then, is how to gain the benefits of entrepreneurial IT decision making by line managers without permitting the IT environment to become a high-cost, low-performance, fragmented, and disconnected collection of independent systems.

One solution to the IT management challenge is better IT planning. Information systems planning is an idea that has been with us for some time, and numerous systems planning methodologies have been developed and published. However, most IT planning methodologies are based on a top-down, centralized approach and are motivated more by technology issues than by business issues. They tend to be driven or facilitated by technologists within the centralized IM organization, or by outside consultants
engaged by IM. Ownership of the process and the responsibility for its success are vested in the IM analyst's role.

Top-down, centralized planning conducted by the IM department has an important, even critical role, especially in large organizations. The construction of a single, standardized IT architecture and infrastructure is a crucial step for the successful integration of systems throughout the organization. It provides the foundation upon which aligned business and technology strategies can be built. The development and management of the infrastructure is clearly a centralized IM responsibility. However, it solves only half of the IT management and planning problem. Top-down, centralized IT planning is unlikely to result in a portfolio of IT investments that effectively uses the infrastructure to achieve business objectives.

A DIALECTICAL APPROACH

A more comprehensive view of IT planning is needed to address the simultaneous needs for centralized coordination and diffused decision making. The first step is to recognize that such planning will necessarily be dialectical — that is, it will involve conflict. To say that a process is dialectical implies tension or opposition between two interacting forces. A dialectical planning process systematically juxtaposes contradictory ideas and seeks to resolve the conflict between them. This expanded view of planning is based on the idea that effective planning can be neither exclusively top-down nor exclusively bottom-up. It must be both. The key to success using this planning philosophy is the creation of a formal role for line managers in the IT planning process. The top-down/bottom-up IT planning approach shown in Exhibit 1 is built on three fundamental principles:

1. Push responsibility for IT planning down and out into the organization.
The ability to manage and plan for information resources must be a normal and expected skill for every line manager, equal in importance to the management of human and financial resources.

2. Integrate the IT planning activities of line managers through the role of a chief information officer (CIO). By emphasizing the benefits of entrepreneurial IT decision making by the line manager responsible for business strategy, organizations run the risk of the IT environment becoming fragmented and unresponsive. The CIO, as leader of the information management department, must be responsible for integration and control of IM throughout the organization.

3. View the IT environment as an information market economy. Line managers are free to acquire resources from the information market as they choose. However, just as the federal government regulates activities in the national economy through guidelines, policies, and
Exhibit 1. Responsibilities in a Dialectical Planning Process
standards, the CIO establishes the information infrastructure within which line managers make information market decisions.1

This emphasis on departmental strategy as opposed to corporate strategy is intentional. It does not deny the critical importance of unified corporate-level business and IT strategies. Rather, it acknowledges that there are often departmental strategies that are not identified in corporate strategy or that may, to some degree, conflict with corporate strategy. A top-down/bottom-up planning process recognizes the possibility that corporate-level business and IT strategies may be influenced over time by the strategic choices made in the sub-units.

THE LINE MANAGER'S ROLE

Since much attention both in literature and in practice has been given to the top-down component of IT planning, procedures for this kind of work are widely understood in the community of technologists. IT planning, however, is likely to be an unfamiliar job for many line managers. The following simplified planning process (shown in Exhibit 2) may provide a useful framework for line managers to follow when beginning departmental IT planning. Unlike many detailed processes, which are more suitable for project-level planning, this streamlined approach is valuable because it ensures that line managers focus their attention at the strategic and tactical levels
Exhibit 2. An IT Planning Process for Line Management
rather than at the detailed project level. The process is also highly flexible and adaptable. Within each of the three stages any number of techniques may be adopted and combined to create a customized process that is comfortable for each organizational culture.

Stage 1: Strategic Alignment

The overall objective of Stage 1 is to ensure alignment between business and technology strategies. It contains two basic tasks: developing an understanding of the current technology situation and creating a motivating vision statement describing a desired future state. In addition to understanding and documenting the current business and technology contexts, this stage has the goal of generating enthusiasm and support from senior management and generating commitment, buy-in, and appropriate expectation levels in all stakeholders. One technique for creating a rich description of the current business and technology situation is the BASEline analysis. The four steps in the BASEline analysis procedure are shown in Exhibit 3. Additional techniques that can be used in Stage 1 are scenario creation, stakeholder interviews, brainstorming, and nominal group techniques.
Exhibit 3. BASEline Analysis
Every planning process should begin with a clear understanding of the current situation. The purpose of a BASEline analysis is to define the current state in a systematic way. To ensure comprehensiveness, it draws on multiple sources of information. While it is true that the process of intelligence gathering should be ongoing, proactive, and systematic, the formal planning exercise provides an opportunity to review and reflect on information already compiled. In addition, gaps in the current knowledge base can be identified and filled.

The BASEline analysis procedure explores the current state in terms of four dimensions:

Business Strategy. If the IT strategy is to be in alignment with the business strategy, it is crucial that the business strategy be clearly articulated and understood by all members of the planning team.

Assets. If the IT strategy is to be implementable, it must be realistic. That is, it must be based on an objective assessment of the assets currently available or realistically obtainable by the organization. Included in this analysis are tangible assets such as hardware and software, databases, capital, and people. In addition, invisible assets such as management skills, technical skills, proprietary applications, core competencies, marketplace position, and customer loyalty also provide a foundation for future strategic moves.

System Strategy. To avoid fragmentation, duplication, and incompatibility, a departmental IT plan must recognize the opportunities and constraints provided by corporate IT policies and infrastructure. The departmental planning team must be knowledgeable about corporate standards and plans to effectively integrate its initiatives.

Environments. External environments often create important constraints and opportunities for IT planners. Relevant strategic planning assumptions about future technological, regulatory, economic, and social environments must be brought to the surface and agreed upon by the planning team.
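By way of illustration (the chapter prescribes no particular notation), the four BASEline dimensions can be captured in a simple structure that flags the knowledge-base gaps the analysis is meant to surface. All field contents below are hypothetical:

```python
from dataclasses import dataclass, field

# Sketch of a BASEline current-state record: Business strategy, Assets,
# System strategy, Environments. Findings are free-form notes gathered
# by the planning team; empty dimensions are gaps still to be filled.
@dataclass
class BaselineAnalysis:
    business_strategy: list = field(default_factory=list)  # articulated strategy statements
    assets: list = field(default_factory=list)             # tangible and invisible assets
    system_strategy: list = field(default_factory=list)    # corporate standards and infrastructure
    environments: list = field(default_factory=list)       # external planning assumptions

    def gaps(self):
        """Return the dimensions with no findings yet, i.e., gaps to identify and fill."""
        names = ["business_strategy", "assets", "system_strategy", "environments"]
        return [n for n in names if not getattr(self, n)]

base = BaselineAnalysis(
    business_strategy=["Grow share in the mid-market segment"],
    assets=["ERP licenses", "experienced DBA team", "customer loyalty"],
)
print(base.gaps())  # dimensions not yet documented
```

Reviewing the `gaps()` output at the start of the formal planning exercise mirrors the exhibit's point that the BASEline review both consolidates what is known and exposes what is missing.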
Stage 2: Create an IT Investment Portfolio

In this stage the objective is to identify a rich set of options for future information technology investments. In addition to generating potential investment options, in this stage it is also important to understand which options will have the greatest impact on the business, to assess the risk associated with each option, and to estimate the resources required to implement the high-impact options. Techniques that may be used in this stage are the strategic option generator,2 value chain analysis,3 critical success factor analysis,4 brainstorming, and nominal group techniques. It may also be useful in this stage to systematize the evaluation process through the use of some form of scoring model. A scoring model enables the planning team to integrate considerations such as financial benefit, strategic impact, and risk.

Stage 3: Tactical Bridge

In this stage the line manager takes the crucial actions necessary to ensure that the strategic IT investment portfolio is actually implemented. To overcome the greatest single threat to strategic technology planning — a plan
that is put on the shelf and never looked at again — it is important to ensure that resources are made available to implement the projects that comprise the investment portfolio. To do this, it is necessary to integrate the work accomplished in strategic planning with the ongoing, periodic tactical planning that occurs in most organizations. The most important tactical planning activities are often financial. It is imperative that money be allocated in the annual budgeting cycle to execute the strategic projects identified in the IT investment portfolio. While this may seem obvious, companies often fail to make this link. It is assumed that the operating budget will automatically take into account the strategic work done six months earlier, but in the political process of budget allocation, the ideas in the strategic plan can easily be forgotten. Human resources are also often taken for granted. However, careful tactical planning is usually necessary to ensure that the right blend of skills will be available to implement the projects called for in the strategic plan. Once appropriate resources (time, money, people) have been allocated, then intermediate milestones and criteria for evaluation should be developed. Finally, effective communication of the strategic and tactical work that has been done is a crucial step. Dissemination of the planning work through management and staff presentations and publications will ensure that organizational learning occurs. Thus, attention should be devoted in Stage 3 not only to ensuring that strategic plans are implementable, but also that they continue to affect the organization's strategic thinking in the future.

PLANNING PROCEDURES

When beginning the process of IT planning for the first time, a number of basic procedural issues will have to be addressed and resolved. Who is the line manager responsible for developing an IT plan?
Who should be involved in the IT planning process? How formal should the process be? What deliverables should be produced? What is an appropriate planning horizon? What is the right planning cycle? There is no one right answer to these questions. The culture and the leadership style of the company and the department will to a great degree influence how planning processes are executed. There are, however, several procedural guidelines that may be useful for line managers who are undertaking the task of IT planning.

Who Is the Line Manager?

The key role in this process is played by the department or business unit manager. In smaller companies, no more than two or three senior executives may play the line manager role as described here. Larger corporations may have as many as 20 or 30 business units with a scope that warrants independent IT planning. Regardless of who occupies the role of line manager, it is absolutely critical that this individual take an active interest in IT planning and be personally involved in the process. He or she is the only one who can ensure that directly reporting managers view planning for information resources as an integral part of their job accountability.

Who Should Be Involved?

Composition of a planning team is a delicate art. We may think of representation on the planning team both horizontally and vertically. Attention to horizontal representation ensures that all sub-units are represented in the planning process. Attention to vertical representation ensures that employees at all levels of the organization have the opportunity to provide input to the planning process. It is also critical that departmental IT planning has a link to corporate IT planning. Thus it is usually beneficial to include a member of the central IM staff on the departmental planning team. Other outside members may also be appropriate, especially those who can provide needed expertise in areas, such as emerging technologies, where the line management team may not have the necessary technical depth.

Process and Deliverables

As the business environment becomes more dynamic and volatile, the technology planning process must be more flexible and responsive. Thus the planning process should not be too rigid or too formal. It should provide the opportunity for numerous face-to-face encounters between the important participants. Structure for the process should provide well-defined forums for interaction rather than a rigidly specified set of planning documents. Perhaps a more effective mechanism for delivering the work of planning teams is for the line manager to periodically present the departmental IT plan to other senior managers, the CIO, and members of his or her own staff.
Planning Horizon

The planning horizon must also be determined with the dynamic nature of the information technology environment in mind. Although there are exceptions, it is usually unrealistic for a department manager to plan with any precision beyond two years. Corporate IT planning, on the other hand, must look further out when considering the corporate systems infrastructure and policies. This long-term IT direction must be well understood by departmental managers, for it is critical to line planning activities.
Planning Cycle: A Continuous Process

Plans must be monitored and updated frequently. It may be sufficient to go through a formal planning exercise annually, including all three stages mentioned earlier. Checkpoint sessions, however, may occur at various intervals throughout the year. Major evaluations — such as the purchase of a large software package or the choice of a service provider — are likely to occur at any time in the cycle and should be carefully integrated with planning assumptions. It is absolutely critical that the strategic IT plan be integrated into other strategic and tactical planning processes, such as strategic business planning and annual budgeting for the department. Unless this linkage is formally established, it is very unlikely that strategic IT planning will have much influence on subsequent activities.

CONCLUSION

Regardless of the procedures chosen, the goal is for all members of the organization to understand that strategic IT planning is a critical component of business success. Everyone should be aware of the decisions reached and the linkage between business strategy and technology strategy. In the future, when all members of line management recognize that strategic technology planning is an essential component of strategic business planning, an emphasis on strategic IT planning as a stand-alone activity may not be necessary. For now, however, as the pendulum swings back from decentralized to centralized control of information resources, there is a risk that line managers may not recognize the need for strategic IT planning. As we better understand the importance of centralized control of the IT infrastructure, we must not forget that the integration of IT into business strategy remains the province of every line manager who runs a business unit.

Notes

1. Boynton, A.C.
and Zmud, R., "Information Technology Planning in the 1990s: Directions for Practice and Research," MIS Quarterly, 11(1), 1987, pp. 59–71.
2. Wiseman, C., Strategic Information Systems, Irwin, Toronto, Ontario, Canada, 1988.
3. Porter, M. and Millar, V., "How Information Gives You Competitive Advantage," Harvard Business Review, July 1985.
4. Shank, E.M., Boynton, A.C., and Zmud, R., "Critical Success Factors as a Methodology for MIS Planning," MIS Quarterly, June 1985, pp. 121–129.
Chapter 5
Running Information Services as a Business

Richard M. Kesner
Most enterprises lack a comprehensive process that ensures the synchronization of IT project and service investments with overall business planning and delivery. Indeed, many enterprises fail to clarify and prioritize IT investments based upon a hierarchy of business needs and values. Some enterprises do not insist that their IT projects have a line-of-business sponsor who takes responsibility for the project's outcomes and who ensures sufficient line-of-business involvement in project delivery. To address these shortcomings, every enterprise should embrace a process where the business side of the house drives IT investment, where both business and IT management holistically view and oversee IT project deliverables and service delivery standards, and where ownership and responsibility for said IT projects and services are jointly shared by the business and the IT leadership.

The objective of this chapter is to present a framework and set of tools for viewing, communicating, managing, and reporting on the commitments of the IS organization to the greater enterprise.1 Throughout, the uncompromising focus is on the customer and hence on the enterprise's investment in information technology from the standpoint of customer value. A starting point in this discussion is a simple model of the "internal economy" for information services that in turn drives IS' allocation of resources between service delivery and project work. This model serves as the foundation for a more detailed consideration of two complementary IS business processes: service delivery management and project commitment management. The chapter then discusses a process for the more effective synchronization, communication, and oversight of IS service and project commitments, including the use of measurement and reporting tools. Finally, the
discussion will turn to the benefits of establishing an enterprisewide IS project office to support IS delivery and to better monitor and leverage the value of the enterprise's IT investment portfolio.

THE "INTERNAL ECONOMY" FOR INVESTING IN IT SERVICES AND PROJECTS

All organizations are resource constrained. Their leaders must choose where best to invest these limited resources. Although the IS share of the pie has been increasing with the growing use of IT across the enterprise, it too has its limits, requiring planning and prioritization in line with the needs of the greater enterprise. Effectively and efficiently managing IS resources requires an understanding of the full scope of the demands driving the prioritization of these IT investments. At the most fundamental level, organizations invest in technology in compliance with mandated legal and accounting requirements, such as those set forth by federal and state taxation authorities, government legislation, regulatory statutes, and the like. At the next level, an enterprise expends resources to maintain its existing base of information technology assets, including hardware and software maintenance; system licenses and upgrades; security services; and desktop, storage, and printer expansions and replacements. These investments are meant to "keep the lights on" and, therefore, are not discretionary; nor are these costs stagnant. They go up with inflation and as new workers are added or as the network and related IT infrastructures grow. Furthermore, as new IT services are introduced to the environment, they become, over time, part of the enterprise's embedded base of IT, expanding its nondiscretionary IT spending. Because none of these IT products and services run on their own or function flawlessly, IS must also provide significant and cost-effective end-user operations, production support, and troubleshooting.
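As a rough sketch of the internal-economy arithmetic this section goes on to model, the sketch below splits a budget into the chapter's two buckets plus an investment reserve. The figures, function name, and 5 percent reserve rate are invented for illustration; only the buckets, the reserve, and the rule of thumb that nondiscretionary costs typically consume at least 50 percent of the annual IT budget come from the text:

```python
def discretionary_funds(total_budget, nondiscretionary, reserve_rate=0.05):
    """Split an IT budget into the two buckets described in the text:
    nondiscretionary costs that keep existing services running, and
    discretionary funds left over for enhancements and new projects.
    An investment reserve is held back as a contingency for overruns.
    (Illustrative only; the chapter prescribes no formula.)"""
    reserve = total_budget * reserve_rate
    discretionary = total_budget - nondiscretionary - reserve
    # The text notes nondiscretionary spend typically runs at least 50 percent
    # of the budget and, unmanaged, crowds out strategic project work.
    crowded_out = nondiscretionary > 0.5 * total_budget
    return {
        "reserve": reserve,
        "discretionary": discretionary,
        "nondiscretionary_share": nondiscretionary / total_budget,
        "crowding_out_project_work": crowded_out,
    }

print(discretionary_funds(10_000_000, 6_200_000))
```

With the sample figures, 62 percent of the budget is committed before any new project is funded, which is exactly the crowding-out condition the chapter warns about.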
Lastly, because neither the requirements of IS customers nor the evolution of information technology itself is static, there is a constant need to enhance existing IT products and services and to invest strategically in new IT capabilities. Thus, the day-to-day delivery of an information services organization must balance ongoing services — typically running 24 hours a day/seven days a week (a.k.a. 24/7) — with a wide range of service and system enhancements and new project work. Many times, IS project-delivery resources overlap with those focused on service delivery, for the simple reason that the development team must understand the current state of the enterprise's business requirements and IT capabilities if they are to deliver requested improvements. Furthermore, those maintaining an IT service should have a hand in its creation or, at the very least, thoroughly understand its IT underpinnings. Thus, a balanced IS organization requires a workforce dedicated to 24/7 service delivery overlapping a core group focused on technological innovation, development, and systems integration.

Exhibit 1. Group IS Expenditures — Total Cost of IT Ownership
  Discretionary (governed by project plans): new projects, enhancements
  IT investment reserve (a contingency spanning both categories)
  Nondiscretionary (governed by SLAs): system maintenance, infrastructure maintenance, work required by external agencies

Taken together, these various layers of IT investment establish the boundaries of the IS organization's internal economy. As modeled in Exhibit 1, one can group IS expenditures into two large buckets: nondiscretionary costs that support existing IT investments and discretionary costs that fund new initiatives, including major system enhancements and new IT projects.2 Note that our model comprehends all of the enterprise's IT expenditures, including internal (IS staff) labor and external vendor, consulting, and contractor costs. The "IT investment reserve" represents an amount set aside each year as a contingency for both discretionary and nondiscretionary cost overruns. Driven by the number of users and the extent of services, nondiscretionary costs will typically consume at least 50 percent of the annual IT budget and, if not carefully managed, may preclude the opportunity for more strategic IT (project-based) investments. Put another way, the total sum devoted to IT expenditure by the enterprise is rarely elastic. If nondiscretionary costs run out of control, there will be little left for project work. If the business' leadership envisions major new IT investments, these may only come at the expense (if possible!) of existing IT services or through enlarging the overall IS allocation.3 Not surprisingly, the enterprise's leaders usually want it both ways; namely, they expect the IS organization to keep the total cost of IT down while taking on new initiatives. For this reason, it is incumbent upon the IS
leadership to manage their commitments with great care through a rigorous process of project prioritization, customer expectation management, and resource alignment. To succeed in this endeavor, IS must keep it simple and keep it collaborative. More specifically, IS should employ an investment-funding model along the lines mentioned above. It should separate out and manage recurring (nondiscretionary) activity through service level agreements (SLAs).4 Similarly, it should manage projects through a separate but connected commitment synchronization process.5 Throughout these labors, IS management should employ metrics that measure value to the business and not merely the activity of IS personnel. Last but not least, while IS management should take ownership of the actual technology solutions, they must also ensure that the proper business sponsors, typically line-of-business executive management, take ownership of and responsibility for project delivery and its associated business process changes in partnership with their IS counterparts. The next two sections of this chapter will, in turn, consider in greater detail best practices in the areas of service and project delivery management.

MANAGING SERVICE DELIVERY6

The services delivered by IS to its customers across the enterprise have evolved over time and are in a constant state of flux as both the business needs of the organization and its underlying enabling technologies evolve. Given this ever-changing environment, and given the general inadequacies of the typical lines of communication between IS teams and their customers, much of what is expected from IS is left unsaid and assumed. This is a dangerous place to be, inevitably leading to misunderstandings and strained relations all around.
The whole point of service level management is for IS to clearly and proactively identify customer requirements, define IS services in light of those requirements, and articulate performance metrics (a.k.a. service levels) governing service delivery. Then, IS should regularly measure and report on its performance, hence reinforcing the value proposition of IS to its customers. In taking these steps, IS management will provide its customers with a comprehensive understanding of the ongoing services delivered to them by the IS organization. Furthermore, service level management establishes a routine for the capture of new service requirements, for the measurement and assessment of current service delivery, and for alerting the customer to emerging IT-enabled business opportunities. In so doing, IS service delivery management will ensure both that IS resources are focused on delivering the highest value to the customer and that the customer
appreciates the benefits of the products and services so delivered. The guiding principles behind such a process may be summarized as follows:

• Comprehensive. The process must encompass all business relationships and all products and services delivered by IS on behalf of its customers.
• Rational. The process should follow widely accepted standards of business and professional best practice, including standard system development life-cycle methodologies.
• Easily understood. The process needs to be streamlined, uncomplicated, and simple, hence easily accessible to nontechnical participants in the process.
• Fair. Through this process the customers will understand that they pay for the actual product or service as delivered; cost and service level standards should be benchmarked and then measured against other best-in-class providers.
• Easily maintained. The process should be rationalized and largely paperless, modeled each year on prior-year actuals and subsequently adjusted to reflect changes in the business environment.
• Auditable. To win overall customer acceptance of the process, key measures must be in place and routinely employed to assess the quality of IS products, services, and processes.

The components of the IS service delivery management process include the comprehensive mapping of all IS services against the enterprise communities that consume those services. It also includes service standards and performance metrics (including an explicit process for problem resolution), the establishment and assignment of IS customer relationship executives (CREs) to manage individual customer group relations, a formal service level agreement for each constituency, and a process for measuring and reporting on service delivery. Let us consider each of these in turn.

As a first step in engineering an IS service-level management process, IS management must segment its customer base and conceptually align IS services by customer.
If the IS organization already works in a business environment where its services are billed out to recover costs, this task can be easily accomplished. Indeed, in all likelihood, such an organization already has SLAs in place for each of its customer constituencies. But for most enterprises, the IS organization has grown up along with the rest of the business and without any formal contractual structure between those providing services and those being served.7 To begin, employ your enterprise's organization chart and map IS delivery against that structure. As you do so, ask yourselves the following questions:
• What IS services apply to the entire enterprise, and who sponsors (i.e., pays for or owns the outcome of) these services?
• What IS services apply only to particular business units or departments, and who sponsors (i.e., pays for or owns the outcome of) these services?
• Who are the business unit liaisons with IS concerning these services, and who are their IS counterparts?
• How does the business unit or IS measure successful delivery of the services in question? How is customer satisfaction measured?
• How does IS report on its results to its customers?
• What IS services does IS itself sponsor on its own initiative, without any ownership by the business side of the house?

Obviously, the responses to these questions will vary greatly from one organization to another and may in fact vary within an organization, depending upon the nature and history of working relationships between IS and the constituencies it serves. Nevertheless, it should be possible to assign every service IS performs to a particular customer group or groups, even if that "group" is the enterprise as a whole. Identifying an appropriate sponsor may be more difficult, but in general, the most senior executive who funds the service or who is held accountable for the underlying business enabled by that service is its sponsor. If too many services are "owned" by your own senior IS executive rather than a business leader, IS may have a more fundamental alignment problem. Think broadly when making your categorizations. If a service has value to the customer, some customers must own it.

Service Level Agreements

In concluding this piece of analysis, the IS team will have identified and assigned all of its services (nondiscretionary work) to discrete stakeholder constituencies. This body of information may now serve as the basis for creating so-called service level agreements (SLAs) for each customer group.
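A hypothetical sketch of the mapping exercise just described: each IS service is assigned the customer group(s) that consume it and the sponsor who owns it, and services sponsored from inside IS are flagged as the potential alignment problem the text warns about. All service and sponsor names are invented:

```python
# Invented service catalog: every IS service mapped to its consuming
# customer group(s) and its sponsoring executive.
services = {
    "enterprise e-mail":        {"customers": ["All"],     "sponsor": "COO"},
    "order-entry system":       {"customers": ["Sales"],   "sponsor": "VP Sales"},
    "data warehouse reporting": {"customers": ["Finance"], "sponsor": "CFO"},
    "network monitoring":       {"customers": ["All"],     "sponsor": "CIO"},
}

# Services "owned" by an IS executive rather than a business leader may
# signal a more fundamental alignment problem.
is_sponsors = {"CIO"}
unaligned = [name for name, s in services.items() if s["sponsor"] in is_sponsors]
print(unaligned)
```

Grouping this catalog by customer is then the raw material for the per-constituency SLAs discussed next in the chapter.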
The purpose of the SLA is to identify, in terms that the customer will appreciate, the value that IS brings to that group. However, the purpose of the SLA goes well beyond a listing of services. First and foremost, it is a tool for communicating vital information to key constituents on how they can most effectively interact with the IS organization. Typically, the document includes contact names, phone numbers, and e-mail addresses. SLAs also help shape customer expectations in two different but important ways. On the one hand, they identify customer responsibilities in dealing with IS. For example, they may spell out the right way to call in a problem ticket or a request for a system enhancement. On the other hand, they define IS performance metrics for the resolution of problems and for responding to customer inquiries. Last but not least, a standard SLA compiles all the services and service levels that IS has committed to deliver to that particular customer.

Service level agreements can take on any number of forms.8 Whatever form you choose, ensure that it is as simple and brief a document as possible. Avoid technical jargon and legalese, and be sensitive to the standard business practices of the greater enterprise within which your IS organization operates.9 Most of all, write your SLAs from your customers' perspective, focusing on what is important to them. Tell them in plain English what services they receive from you, the performance metrics for which IS is accountable, and what to do when things break down or go wrong. Within these general guidelines, IS SLAs should include the following elements:

• A simple definition of the document's purpose, function, and scope
• The name(s) and contact information of the parties within IS who are responsible for this particular document and the associated business relationship (typically the assigned IS customer relationship executive and one or more IS business officers)
• A brief set of statements identifying the various units within IS, their roles and responsibilities, and how best to contact them for additional information, support, and problem resolution10
• A table listing the particular information technology assets and IS services addressed in the SLA, including hours of operation and support for listed systems and services
• Any exclusion statements, such as "significant system enhancements of over $10,000 in value and larger IS projects will be covered through separate agreements between the XYZ Department and Information Services"
• If appropriate, a breakdown of service costs and their formulas, if these costs are variable, as well as the projected total cost of the services delivered for the fiscal year of the SLA
• Business unit responsibilities11
• Service level response standards when problems arise
(see Exhibit 2 for an example) • Escalation procedures for the handoff of problems as need be (see Exhibit 3 for an example • A glossary of key terms, especially defining system maintenance and enhancement activities and the roles and responsibilities of service delivery process participants • Service metrics12 and reporting standards • A sign-off page for the executive sponsor13 and the working clients who are in receipt of the SLA prepared by IS Your next step is to assign a customer relationship executive (CRE) to each SLA “account.” The role of the CRE is to serve as a primary point of 53
ACHIEVING STRATEGIC IT ALIGNMENT

Exhibit 2. Service Level Response Standards

  Severity    Description                                                Response Time
  Critical    Application does not function for multiple customers       1 business day
  High        Application function does not work for a single customer   2 business days
  Low         Application questions                                      3 business days
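The response standards in Exhibit 2 map naturally to a small lookup. As a sketch only (the severity names come from the exhibit; the date handling is an assumption, since the SLA text does not define business-day arithmetic), a ticket's response deadline might be derived like this:

```python
from datetime import date, timedelta

# Response standards from Exhibit 2 (severity -> business days to respond).
RESPONSE_DAYS = {"Critical": 1, "High": 2, "Low": 3}

def add_business_days(start: date, days: int) -> date:
    """Advance `days` business days from `start`, skipping weekends."""
    current = start
    while days > 0:
        current += timedelta(days=1)
        if current.weekday() < 5:  # Monday through Friday
            days -= 1
    return current

def response_due(opened: date, severity: str) -> date:
    """Deadline for the first response, per the Exhibit 2 standard."""
    return add_business_days(opened, RESPONSE_DAYS[severity])

# A "Critical" ticket opened on a Friday is due the following Monday.
print(response_due(date(2024, 1, 5), "Critical"))  # 2024-01-08
```

Encoding the table this way keeps the published standard and the monitoring tooling from drifting apart.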
contact between customer executive management and the IS organization. In this role, the CRE will meet with his or her executive sponsor and working clients, first to review that business unit's SLA and thereafter, on a regular basis, to assess IS performance against the metrics identified in the SLA. Where IS delivers a body of services that apply across the enterprise, you might consider creating a single "community" SLA that applies to all, plus brief addenda listing the unique systems and services that pertain to particular customer groups. Whatever the formal structure of these documents, the real benefit of the process comes from the meetings themselves, where the CRE has the opportunity to reinforce the value of IS to the customer, listen to and help address IS delivery and performance problems, learn of emerging customer requirements, and share ideas concerning opportunities for further customer/IS collaboration. CREs will act within IS as the advocates and liaisons for, and as the accountable executive partners to, their assigned business units in strategic matters. Needless to say, CREs must be chosen with care. They must be good listeners and communicators. They must have a comprehensive understanding of what IS currently delivers and what information technology may afford the customer in question. While they need not be experts in the aspect of the business conducted by the customer, they must at the very least have a working knowledge of that business, its nomenclature(s), and the roles and responsibilities of those working in that operating unit. Among the many skills that a good CRE must possess is the ability to translate business problems into technical requirements, and technical solutions into easily understood narratives that the customer can appreciate.
The CRE also needs to be a negotiator, helping customers manage their portfolios of IS services and projects within the available pool of resources, choose among options, and at times defer work to a better time. The greatest value of the CRE is to act as a human link to a key customer constituency, managing expectations while keeping IS focused on the
Exhibit 3. Escalation Procedures for the Handoff of Problems

Priority 1
Definition: Application is unavailable to anyone in the enterprise.
Response time: Work will begin immediately and continue until resolved.
Responsibilities:
• IS service provider — resolves problem and communicates to all who are affected at least daily until resolved
• Working client(a) — works alongside CRE until the matter is resolved
• Partner-providers(b) — other IS teams and external vendors will provide technical assistance

Priority 2
Definition: Application is not available for individual users within a site.
Response time: A response will be provided within one business day. A recommended solution will be provided within three business days if there are no outstanding priority 1s. Finding a solution to a priority 2 problem will not begin until all priority 1 problems that impact the priority 2 issue's resolution have been resolved.
Responsibilities:
• IS service provider — sends acknowledgment of problem, resolves problem, and communicates status to all who are affected
• Working client — works alongside CRE until the matter is resolved
• Partner-providers — employed as need be

Priority 3
Definition: Application generates appropriate results but does not operate optimally.
Response time: Improvements will be addressed as part of the next scheduled release of the system.
Responsibilities:
• IS service provider — communicates needed changes
• Other process participants — as part of the regular system upgrade cycle

(a) The executive sponsor rarely gets involved in the day-to-day collaboration with IS. The working clients are those representatives of the business unit served who work with IS on a regular basis and who have the authority to speak for the business unit in identifying service requirements or changing project priorities.
(b) While the party directly responsible for a service (e.g., e-mail, help desk) should deal directly with the customer concerning problem resolution, most customer services entail a value chain of technology services. The IS owners of the latter services are partner-providers to the former (e.g., network and server services are partner-providers to a Web application owner).
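The gating rule in Exhibit 3 (work on a priority 2 solution does not begin while related priority 1 problems remain open) can be expressed as a small check. This is an illustrative sketch, not part of the SLA itself; the `Ticket` model and its field names are invented:

```python
from dataclasses import dataclass, field

@dataclass
class Ticket:
    """Hypothetical ticket record mirroring the Exhibit 3 priorities."""
    id: str
    priority: int  # 1 = enterprise outage, 2 = single-user, 3 = optimization
    open: bool = True
    blockers: list = field(default_factory=list)  # tickets that gate this one

def can_start_solution(ticket: Ticket) -> bool:
    """Per Exhibit 3: priority 1 work begins immediately; priority 2 work
    waits until all impacting priority 1 problems are resolved."""
    if ticket.priority == 1:
        return True
    return all(not b.open for b in ticket.blockers if b.priority == 1)

outage = Ticket("P1-17", priority=1)
single_user = Ticket("P2-42", priority=2, blockers=[outage])
assert not can_start_solution(single_user)  # blocked by the open outage
outage.open = False
assert can_start_solution(single_user)      # unblocked once P1 is resolved
```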
quality delivery of its commitments to that group. This effort is iterative (see Exhibit 4): collecting and processing data, meeting with customers and IS service providers, listening, communicating, and educating. In total, the service level management process ensures the proper alignment between customer needs and expectations on the one hand, and IS resources on the other. The process clearly defines roles and responsibilities, leaves little unsaid, and keeps the doors of communication and understanding open on both sides of IS service delivery. From the standpoint of the IS leadership, the SLA process offers the added benefit of maintaining a current listing of IS service commitments, thus filling in the nondiscretionary layers of IS' internal economy model.

Exhibit 4. SLA Management Process

1. Define the SLA
2. Assign the SLA Owner
3. Monitor SLA Compliance
4. Collect and Analyze Data
5. Improve the Service Provided
6. Refine the SLA

Whatever resources remain can be devoted to project work and applied research into new IT-enabled business opportunities. Bear in mind that this is a dynamic model. As the base of embedded IT services grows, a greater portion of IS resources will fall within the sphere of nondiscretionary activity, limiting project work. The only way to break free from this set of circumstances is either to curtail existing services or to broaden the overall base of IS resources. In any event, the service level management process will provide most of the information that business and IS leaders need to make informed decisions. With the service side of the house clarified, IS leaders can turn their attention to the discretionary side of delivery. Project work calls for a complementary commitment management process of its own.

MANAGING PROJECT COMMITMENTS

To put it simply, any IS activity that is not covered through a service level agreement is by definition a project that must be assigned IS discretionary resources. Enterprises employ a planning process of some type whereby IT projects are identified and prioritized. IS is then asked to proceed with this list in line with available resources. Unfortunately, IS organizations find it much easier to manage and deliver routine, ongoing (SLA) services than to execute projects. The underlying reasons for this state of affairs may not be obvious, but they are easily summarized. Services are predictable events, which are easily metered and with which IS personnel and their customers have considerable experience and a reasonably firm set of expectations. More often than not, a single IS team oversees day-to-day service delivery (e.g., network operations, Internet services, e-mail services, security administration, and so forth). Projects, on the other hand, typically explore new territory and require an IS team to work on an emerging, dynamic, and not necessarily well-articulated set of customer requirements. Furthermore, most projects are by definition cross-functional, calling on expertise from across the IS organization and that of its business unit customers. Where many hands are involved and the project definition remains unclear, the risk of error, scrap, and rework is sure to follow. These are the risks that the project commitment management process must mitigate. Note that, like the SLA process, the effort and rigor of managing project commitments will vary from one organization to another and from one project to the next. The pages that follow present a framework for informed decision making by the enterprise's business and IS leadership as they define, prioritize, shape, and deliver IT projects. Readers must appreciate the need to balance their desire to pursue best practices with the real-world needs of delivery within their own business environment. As a first step, the enterprise's business leadership will work with IS to identify appropriate project work. Any efforts that appropriately fall under existing SLAs should be addressed through the resources already allocated as part of nondiscretionary IS funding for that work. Next, CREs will work with their executive sponsor(s) to define and shape potential project assignments for the coming year.
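The internal economy model described above reduces to simple arithmetic: whatever is not committed to SLAs remains available for project work. A minimal illustration with invented figures:

```python
# Illustrative only: all figures below are invented for the sketch.
# As the embedded (SLA) service base grows, the pool left for
# discretionary project work shrinks.
total_is_budget = 10_000_000   # annual IS resources, in dollars
nondiscretionary = 7_200_000   # committed via SLAs (services, support)

discretionary = total_is_budget - nondiscretionary
print(f"Project capacity: ${discretionary:,}")  # Project capacity: $2,800,000

# Assume embedded services grow by $360,000 (5%) as new systems reach
# production; project capacity erodes by exactly that amount.
nondiscretionary += 360_000
print(f"After service growth: ${total_is_budget - nondiscretionary:,}")
```

The second figure comes out lower, which is the chapter's point: without curtailing services or broadening the resource base, the discretionary pool only shrinks.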
While the CREs will assist in formulating and prioritizing these project lists, they must make it clear that this data-gathering activity in no way commits IS. Instead, the CREs will bring these requests back to IS executive management, who will in turn rationalize them into an IT project portfolio for the review and approval of the enterprise's leadership.14 This portfolio presentation should indicate synergies and dependencies between projects, the relative merits and benefits of each proposal, and the approximate level of investment required. With this information in hand, and as part of the annual budgeting/planning process, the enterprise's business and IS leaderships will meet to prioritize the list and to commit, in principle, to those projects that have survived this initial review. Typically, all enterprise-level projects emerging from and funded by this process are, by definition, of the highest priority in terms of delivery and resource allocations. If additional resources are available, business-unit-specific projects may then be considered in terms of their relative value to the enterprise. At least in the for-profit sector, enterprises will define a return-on-investment (ROI) hurdle rate for this part of the process, balancing line-of-business IT needs against overall enterprise IT needs. In many instances, the business units may receive approval to proceed with their own IT projects as long as they can fund those projects and IS has the bandwidth to handle the additional work. Invariably, unforeseen circumstances and business opportunities will necessitate revisiting the priority list. Some projects may be deferred and others dropped in favor of more pressing or promising IT investments. Similarly, as the IS team and its business partners work through the development life cycle on particular projects, they will find that their original assumptions are no longer valid, requiring the resizing, rescheduling, redefinition, or elimination of these projects. The key to success here is an initial, rigorous project-scoping effort coupled with a comprehensive project life-cycle management process that ensures regular decision points early in the project's design, development, and implementation phases. Once a project is properly scoped and enters the pipeline, the IS project director,15 working in collaboration with the working client(s) and supported by an IS project manager,16 will create a commitment document and a project plan (both are discussed below) reflecting detailed project commitments and resource allocations.17 The IS CRE will then monitor the project team's overall compliance with the plan, reporting back to the customer on a regular basis. Initial project scoping is key to the subsequent steps in the project management process. Too often, projects are pursued without a clear understanding of the associated risks and resource commitments. Neither the project's working clients nor its IS participants may understand their respective roles and responsibilities. Operating assumptions are left undocumented, and the handoffs and dependencies among players remain unclear.
More often than not, IT efforts undertaken without sufficient information along these lines end in severe disappointment. To avoid such unhappy results, IS project teams should embrace a commitment process that ensures a well-informed basis for action.

The Commitment Management Process

A framework for commitment management follows. Like the other illustrations found in this chapter, this methodology's application should be balanced against the needs of the occasion. For example, if the project in question covers well-trodden ground, less rigor is required than if the envisioned project blazes hitherto unexplored trails. Here again, the commitment management process itself forces the project team to ask the very questions that will help them determine the best course of action. From the outset, no project should proceed without an executive (business) sponsor and the assignment of at least one working client. The executive sponsor's role is to ensure the financial and political support to see the project through. The sponsor owns the result and is therefore the project's most senior advocate. The sponsor's designated working clients are those people from the business side of the house who will work hand in hand with IS to ensure satisfactory delivery of the project. Without this level of commitment on the part of the business, no project should proceed. If the project in question happens to be sponsored by IS itself, then the chief IS executive will serve as sponsor, and the IS manager who will own the system or service once it is in production will serve as the working client. While it is assumed that the project is funded, the commitment document should indicate the project's recognized priority. For example, is this an enterprise project of the highest priority or a line-of-business project of only middling importance? Finally, the team must ask: at what phase in the scoping of the project are we? Do we know so little about the project at hand that we are only in a speculative phase of commitment at this time, or are we so confident in our understanding of the project's parameters that we are prepared to make a formal commitment to the customer and proceed?18 As a next step in framing the commitment, the project team should define the business problem or opportunity driving the proposed investment of IS resources. The reader may think this a trivial activity, but you would be surprised at how disparate the initial conversation on this subject can become. It is essential that the team start from a common base of understanding concerning the project's rationale and purpose. To that same end, project teams should be walked through a value template similar to Exhibit 5, so that everyone involved can appreciate the benefits of a positive project outcome.
With a common view of the overall project vision and value in place, the time has come to detail project deliverables, including those that are essential for customer acceptance, those that are highly desirable if time and resources allow, those that are optional (where the project may be acceptably delivered without these components), and those elements that are excluded from the scope of this project (but that may appear in future, separately funded phases). Given the project's now agreed-upon deliverables, the team should assign critical success factors for customer satisfaction based on the following vectors of measurement: scope, time, quality, and cost. These metrics must be defined in terms of the particular project. For example, if a project must be completed by a certain date (e.g., to comply with a new regulation), "time" rises to the top of the list, meaning that if time grows short, the enterprise will either adjust scope, sacrifice quality, or add to cost to meet the desired date. Similarly, if the scope of a project is paramount, perhaps its delivery date will be moved out to allow the team to meet that commitment. As with many other aspects of the commitment process framework, the importance of these elements is to ensure that a thoughtful discussion ensues and that issues are dealt with proactively rather than in a time of crisis. Obviously, the discussion of these critical success factors must take place with the working client(s), creating a golden opportunity to set and manage customer expectations.

Exhibit 5. Value Template

  Business Improvement                 Major   Minor   None   Business Value Statement
                                                              (in support of the improvement)
  1. Increase revenue
  2. Decrease cost
  3. Avoid cost
  4. Increase productivity
  5. Improve time-to-market
  6. Improve customer service/value
  7. Provide competitive advantage
  8. Reduce risk
  9. Improve quality
  10. Other (describe)

Because no major change to an IT environment is without implications, the commitment process must identify any major impacts to other systems and IS services that will result from the implementation of the envisioned project solution. For example, if a new application requires network infrastructure or desktop platform upgrades, these must be noted in the commitment document and their implications carried over more tangibly into the project plan. Similarly, if a new information system requires the recoding of older systems or data extracts from enterprise systems of record, these impacts must be documented and factored into the project plan. Notably, what often gets a project team in trouble is not what is documented but what goes unsaid. For this reason, the commitment process should require the team to explore project assumptions, constraints, and open issues. It falls to the project's director or manager to draw out from the team and make explicit the inferred operating principles of the project, including the roles and responsibilities of project participants (especially internal and external IT partner-providers), how project delivery processes should work, what tools and technologies are to be employed, and how key business and technical decisions governing project outcomes will be made. All projects operate under constraints, such as the availability of named technical specialists or the timely arrival of computer hardware and software, which may have a direct impact on outcomes but are outside the team's direct control. These, too, need to be made explicit so that the customer appreciates the risks to the project associated with these issues. Open issues differ from constraints in that they can and will be addressed by the team, but the fact that they are "open" may adversely impact delivery. The project team should maintain its list of assumptions, constraints, and open items so as to ensure that none of these diminishes project outcomes. At the very least, their status should be shared with the customer on a regular basis as part of expectation setting and subsequent project reporting. The two remaining components of the commitment process are (1) those elements that capture the exposure from project risks and (2) those elements that itemize the project's specific resource commitments. In terms of the former, it is perhaps useful to begin with an illustrative risk management matrix, as shown in Exhibit 6.

Exhibit 6. Risk Management Matrix

  Potential Risk            Description of Risk    Resolution
  Technology
  Financial
  Security
  Data integrity
  Continuity
  Regulatory
  Business requirements
  Operational readiness
  Other (explain)
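The risk management matrix of Exhibit 6 can travel with the commitment document as a lightweight data structure, making it easy to confirm that no documented risk lacks a mitigation. A hypothetical sketch (the category names follow the exhibit; everything else is invented):

```python
from dataclasses import dataclass

# Risk categories taken from Exhibit 6.
RISK_CATEGORIES = {
    "Technology", "Financial", "Security", "Data integrity", "Continuity",
    "Regulatory", "Business requirements", "Operational readiness", "Other",
}

@dataclass
class Risk:
    category: str
    description: str
    resolution: str  # the planned mitigation

    def __post_init__(self):
        if self.category not in RISK_CATEGORIES:
            raise ValueError(f"unknown risk category: {self.category}")

# Two illustrative entries drawn from the chapter's own examples.
register = [
    Risk("Technology",
         "New, untried traffic-management technology in the IT environment",
         "Engage the vendor as partner-provider for installation and support"),
    Risk("Data integrity",
         "Envisioned solution requires clean data to succeed",
         "Include a data cleanup process in the project plan"),
]

# Simple completeness check before the commitment document is signed off.
unmitigated = [r for r in register if not r.resolution]
assert not unmitigated  # every documented risk carries a mitigation
```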
In completing a commitment document, the project team should identify the major risks faced in pursuing the assignment. Exhibit 6 identifies risk categories and provides room for a more detailed description of each particular risk and its mitigation. For example, a project technology risk might entail introducing a new or untried technology into the enterprise's IT environment; a way to mitigate that risk would be to involve the vendor or some other experienced external partner-provider in the initial installation and support of the technology. If the envisioned project solution requires clean data to succeed, the project plan could include a data cleanup process. If business requirements are not documented, phase one of the project could call for business analysis and process engineering work to get at those requirements. The team needs to be honest with itself and its customer in defining project risks and in dealing with them. Keeping risks in the commitment document ensures that they are not forgotten. To conclude the commitment process, the team must define its resource needs in terms of people, time, and funding. From the standpoint of people, the commitment document needs to name names and define roles and responsibilities (including the skills required) explicitly. Exhibit 7 contains an illustrative list of project roles. The project director must ensure that a real person who understands and agrees to the assignment is assigned to each project role. However, these commitments cannot occur without a delineation of the other two resource elements, namely the skills and time commitment of each internal staff person, and the associated funding for hardware, software, contract labor, consulting, and so forth. These details will come from the project plan that accompanies the commitment document.
In the plan, which should adhere to an accepted project life-cycle management methodology, activities are appropriately detailed, along with the duration and performer of each task. The plan tells the partner-providers what is required of their teams. It is the responsibility of these managers to ensure that they do not overcommit their own personnel. If the IS organization operates some sort of resource management database or tracking system, this may be easily accomplished; otherwise, it rests with the individual manager to keep things straight. Thus, with this information in hand, when the IS partner-providers commit to a role and responsibilities within a given project, this commitment is not "in principle" but is based on detailed skill, date, and duration data. When viewed in its entirety, the commitment process leaves nothing to the imagination of the project team and those they serve. The commitment document makes explicit what is to be done, why the project merits resources, who is responsible for what, and what barriers lie in the path of success.

Exhibit 7. Project Roles

  Role                                          Name of Associate    Responsibility

  The Core Project Team:
    Executive sponsor
    Working client(s)
    Project director
    Project manager
    Business analyst
    Application lead
    Systems lead
    Data management lead
    Infrastructure lead
    Customer services lead

  Internal and External Partners:
    Vendor-based project management support
    Technical architect(s)
    Business process architect(s)
    Creative development/UI
    Development
    Training/documentation
    QA/testing
    Infrastructure
    Security
    Other

  Partner-Provider(s) (Hardware/Software):

The project plan details how the team will execute the assignment. Together, these documents form a contract that aligns resources and provides for a common understanding of next steps, roles, and responsibilities. The metrics for successful project delivery are few and simple. Did the project come in on time and within budget? Did it meet customer expectations? To answer these questions, all one needs to do is run actual project results against the project's commitment document and plan. In addition, the team may employ a post-implementation assessment process or survey tool such as the sample in Exhibit 8.
Exhibit 8. Post-Implementation Assessment Process

How satisfied are you with the following?                        A   B   C   D   E   F

Critical success factors — conditions of satisfaction:
  a. Scope: All agreed-upon business requirements are included
  b. Quality: The delivered product performs as expected
  c. Cost: The product was delivered within the agreed-upon cost
  d. Time: The product was delivered on the agreed-upon date

Overall satisfaction:
  a. What is your overall satisfaction with the system/service?
  b. What is your overall satisfaction with the service provided by IS?

Optional — Completion of the following is optional; however, your responses will help us in our quest for continuous product improvement. Thank you.

System data:
  a. Data accuracy
  b. Data completeness
  c. Availability of current data
  d. Availability of historical data

System processing:
  a. Accuracy of calculations
  b. Completeness of functionality
  c. System reliability
  d. Security controls

System use:
  a. Ease of use
  b. Screen design
  c. Report design
  d. Screen edits (validity and consistency checks)
  e. Response time
  f. Timeliness of reports

System documentation and training:
  a. Accuracy of documentation
  b. Completeness of documentation
  c. Usefulness of documentation
  d. Training
  e. Online help

Additional comments or suggestions? (Please use back of form.)

Legend: A = Very satisfied; B = Satisfied; C = Neither satisfied nor dissatisfied; D = Dissatisfied; E = Very dissatisfied; F = Not applicable.
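Responses on the exhibit's A-through-E scale can be rolled up numerically, with "F" (not applicable) answers excluded. A minimal sketch with invented responses:

```python
# Map the Exhibit 8 legend to scores; "F" (not applicable) is excluded.
SCALE = {"A": 5, "B": 4, "C": 3, "D": 2, "E": 1}

def mean_satisfaction(responses):
    """Average the applicable responses for one survey question."""
    scored = [SCALE[r] for r in responses if r != "F"]
    return sum(scored) / len(scored) if scored else None

# Invented responses to "Overall satisfaction with the system/service":
answers = ["A", "B", "B", "F", "D"]
print(round(mean_satisfaction(answers), 2))  # 3.75
```

Tracking the same roll-up for every project makes post-implementation results comparable across the portfolio.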
All in all, this framework makes for a good beginning, but it does not ensure the success of the project. By being true to the commitment process, however, many of the problems that might otherwise befall a project will have been addressed proactively.

IS METRICS AND REPORTING TOOLS

Given all of the work that a typical IS organization is expected to deliver each year, it is easy to see how even major commitments may be overlooked in the furious effort to get things done. To avoid this pitfall, it is incumbent upon IS to clarify its commitments to all concerned. The prior sections of this chapter have shown how this may be done for both ongoing service and project delivery. Next, IS management must ensure compliance with its commitments once made. Here again, the aforementioned processes keep the team appropriately focused. Service level management requires the integration of performance metrics into each SLA, while the ongoing project management process forces the team to relate actual accomplishments to plan. During the CREs' regular visits with their customers, service level and project delivery may be raised to assess current satisfaction with IS performance. While each of these activities is important in cementing and maintaining a good working relationship with individual customers, a more comprehensive view is required of how IS services and projects relate to one another. To this end, the author has relied on a single integrated reporting process, called the "Monthly Operations Report," to capture key IS accomplishments and performance metrics. As its name implies, the Monthly Operations Report is a regularly scheduled activity. The document is entirely customer focused and therefore aligns with both the service level and project commitment management processes. However, it is designed to serve the needs of IS management, keeping customer delivery at the forefront of everyone's attention and holding IS personnel accountable for their commitments. The report reflects qualitative information from each IS service delivery unit (e.g., help desk, training center, network operations, production services, and so forth) concerning the accomplishments of the month. Each accomplishment must be aligned with a customer value (as articulated in SLAs and project commitment documents) if it is to be listed as a deliverable. Next, the report lists quantitative performance data such as system availability, system response time, problem tickets closed, and training classes offered. Note that some of these data points measure activity rather than results and must be balanced with customer satisfaction metrics to be truly meaningful.
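The caution above, that activity counts mean little without satisfaction data beside them, suggests reporting the two together. A sketch with invented figures (the field names are assumptions, not the author's actual report format):

```python
# Invented monthly figures for one service delivery unit (help desk).
activity = {
    "tickets_closed": 412,            # measures activity, not results
    "system_availability_pct": 99.6,
    "avg_response_time_ms": 240,
}
satisfaction = {"surveys_completed": 30, "mean_score": 4.2}  # out of 5

# Pair each activity metric with its satisfaction context before reporting,
# so a high ticket count is never read as a result in itself.
report_line = (
    f"Help desk: {activity['tickets_closed']} tickets closed, "
    f"{activity['system_availability_pct']}% availability; "
    f"customer satisfaction {satisfaction['mean_score']}/5 "
    f"(n={satisfaction['surveys_completed']})"
)
print(report_line)
```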
A system for collecting customer feedback is also needed. This simple surveying process is guided by the following operating principles. First, the process must require no more than two minutes of an individual customer's time. Second, it must be conducted via the phone or face-to-face, never via paper forms or e-mail. Third, it must employ measures of customer satisfaction rather than of IS activity. Fourth, it must scientifically sample IS' customer population. And fifth, it must be carried out in a consistent manner on a regular basis. Guided by these principles, the author's team developed five-question survey tools for each IS service. The team then drew randomly from the help desk customer database, where both requests for service and problem tickets are recorded. A single staff member spends the first few days of each month calling customers, employing the appropriate survey scripts. Results are captured in a simple database and then consolidated for the report. These customer satisfaction scores are also tracked longitudinally. All the summary data appears in the Monthly Operations Report. Project delivery is a little more complicated to capture on a monthly basis because projects do not necessarily lend themselves to either quantitative measures or regular customer satisfaction surveying. Nevertheless, the report contains two sets of documents that IS management finds useful. The first is a project master schedule that groups projects by customer portfolio and then by inter-project dependencies. The schedule shows the duration of each project and its status (white for completed, green for on schedule, yellow for in trouble but under control, and red for in trouble). Thus, within a few pages, the IS leadership can see all of the discretionary work that the team has underway at any given time: what is in trouble, where the bottlenecks are, and who is overcommitted. The presentation is simple and visual.
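The sampling step described above, drawing randomly from the help desk customer database, might be sketched as follows (the record layout and names are invented):

```python
import random

# Invented help desk records: (customer, ticket type). In practice these
# would be drawn from the help desk database of requests and problems.
tickets = [
    ("alice", "problem"), ("bhavin", "request"), ("carla", "problem"),
    ("deng", "request"), ("erin", "problem"), ("farid", "request"),
]

def draw_survey_sample(records, k, seed=None):
    """Randomly sample distinct customers for the monthly phone survey."""
    customers = sorted({name for name, _ in records})  # de-duplicate first
    rng = random.Random(seed)  # seed only to make the sketch reproducible
    return rng.sample(customers, min(k, len(customers)))

sample = draw_survey_sample(tickets, k=3, seed=42)
assert len(sample) == 3  # three distinct customers to call this month
```

Sampling from the de-duplicated customer list, rather than from raw tickets, keeps heavy ticket filers from dominating the survey.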
Within the report, each project also has its own scorecard, a single-page representation of that project's status. Like everything else in the report, the scorecard is a monthly snapshot; it includes a brief description of the project and its value to the customer, a list of customer and project team participants, the month's accomplishments and issues, a schematic project plan, and a Gantt chart of the current project phase's activities. Like the master schedule, scorecards are scored white, green, yellow, or red as appropriate. (See Exhibit 9 for a sample scorecard.)

In the author's organization, the monthly operations report is reviewed in an open forum by the IS executive team. Other IS personnel are welcome to attend. Within a two- to three-hour block, the entire IS leadership team gains a clear picture of the status and health of all existing IS commitments. Follow-up items raised in the review meeting are recorded and appear in the next version of the report. The document itself is distributed to the entire IS organization via the unit's intranet site. As appropriate, sections from the report, as well as individual project scorecards, are shared by the unit's CREs with their respective customers. In brief, the process keeps accomplishments and problems visible and everyone on their toes. Bear in mind that the focus of this process is continuous improvement and the pursuit of excellence in customer delivery. Blame is never individually assessed because the entire IS team is held accountable for the success of the whole.

Exhibit 9. Project Scorecard (a sample single-page scorecard for an "Internet Upgrade" project, showing the overall project status and status key; the project's objectives and value proposition; the executive sponsor, customer team, and appointed IS team members, from project director through QA/testing and infrastructure leads; the current phase's deliverables with their individual white/green/yellow/red statuses; open issues with recommended resolutions and owners; monthly highlights; and a month-by-month Gantt view of the current phase's activities)

THE ROLE OF THE PROJECT OFFICE

Admittedly, the methods outlined above carry with them a certain level of overhead. On the other hand, the cost of developing and maintaining these processes will prove insignificant when balanced against the payback — high-quality customer relationships and the repeatable success of IS service and project delivery efforts. But the question remains: how does an IS organization achieve the level of self-management prescribed in this chapter? A project office may be needed to ensure the synchronization of service levels and project commitment management for a given organization. The project office should report to the chief operations officer of the IS organization and enjoy a broad mandate in support of all IS service delivery and project management activities.
The tasks assigned to the office might include:

• Ensuring alignment between IS commitments and the enterprise's business objectives
• Collecting, codifying, and disseminating best practices to service delivery and project teams
• Collecting, documenting, and disseminating reusable components such as project plans and budgets, commitment documents, and technical specification templates to project teams
• Managing the reporting requirements of IS

These assignments embrace both commitment and knowledge management as processes, leaving to IS operating units the responsibility for actual service and project delivery and for the underlying technical expertise to address customer needs. The project office could also provide project managers as staff support to project teams. The role of the project manager might include:

• Assisting in maintaining the project plan at the direction of the project director
• Maintaining the project issue lists at the direction of the project director
• Maintaining the project change-of-scope documentation at the direction of the project director
• Attending the weekly working team meetings and taking minutes
• Attending, as needed, the CRE and project manager meetings with working clients and business sponsors, and taking minutes
• Collecting project artifacts (e.g., plans, scripts, best practices, system components) for reuse
• Encouraging and promoting reuse within project teams

Because these tasks must be accomplished in the course of project delivery anyway, it is more efficient to establish a center of excellence in these skills, whose members can ensure broad-based IS compliance with best practices. Furthermore, nothing about the processes described herein is static. As IS and its customers learn from the use of the aforementioned processes, they will fine-tune and adapt them based upon practical experience. Project office personnel will become the keepers and chroniclers of this institutional knowledge. And because they operate independently of any particular IS service delivery or project team, the project office staff are in a position to objectively advocate for and monitor the success of commitment management processes. In short, it is this team that will help IS run like a business, facilitating and supporting the greater team's focus on successfully meeting customer requirements.

CONCLUSION

In a world of growing complexity, where time is of the essence and the resources required to effectively deploy IT remain constrained, the frameworks and tools outlined in this chapter should prove useful. The keys to success remain: a solid focus on customer value, persistence in the use of standards, rigorously defined and executed processes, quality and continual communication, and a true commitment to collaborative work.

Notes

1.
This chapter is based on nearly five years of process-development efforts and field testing at New England Financial Services, the Hurwitz Group, and Northeastern University. The author wishes to acknowledge the many fine contributions of his colleagues to the outcomes described herein. The author also wishes to thank in particular Bob Weir and Rick Mickool of Northeastern University, and Jerry Kanter and Jill Stoff of Babson College, for their aid and advice in preparing this chapter. Any errors of omission or commission are the sole responsibility of the author.
2. Note that some organizations wisely set aside an IT reserve each year in anticipation of unexpected costs such as project overruns, emerging technology investments, and changes in the enterprise's business plans.
3. One of the primary justifications for service level and commitment management is to address the all-too-familiar phenomenon whereby business leaders commit the enterprise to IT investments without fully appreciating the total cost of ownership associated with their choices. Without proper planning, such a course of action can tie the hands of the IS organization for years to come and expose the enterprise to technological obsolescence.
4. SLAs are created through an annual process to address work on existing IT assets, including all nondiscretionary (maintenance and support) IT costs, such as vendor-based software licensing and maintenance fees, as well as the discretionary costs associated with system enhancements below some threshold amount (e.g., $10,000 per enhancement). Typically, SLA work will at times entail the upgrade costs of system or Web site hardware and software as well as internal and external labor costs, license renewals, etc.
5. The project commitment process governs the system development life cycle for a particular project, encompassing all new IT asset project work, as well as those few system or Web site enhancements that are greater than the SLA threshold project value. Typically, project work will entail the purchase costs of new system/Web site hardware and software as well as internal and external labor costs, initial product licensing, etc. Once a project deliverable is in production, its ongoing cost is added to the appropriate SLA for the coming year of service delivery.
6. A particularly comprehensive consideration of this subject can be found in Rick Sturm, Wayne Morris, and Mary Jander's Foundations of Service Level Management (Indianapolis, IN: SAMS, 2000).
7. Unless the enterprise's leadership has chosen to operate IS as a separate entity with its own profit and loss statement, the author would advocate against the establishment of a charge-back or transfer pricing system between IS and its customers. Instead, the business and IS leaderships should jointly agree on the organization's overall IT funding level and agree that IS manage those funds in line with the service level and project commitment management processes outlined in this chapter.
8. See Sturm, Morris, and Jander, pp. 189–196.
9. For example, an SLA that resembles a formal business contract is appropriate and necessary for a multi-operating-unit enterprise where each line of business runs on its own P&L and must be charged back for the IS services that it consumes. On the other hand, such an SLA might only confuse and frighten the executives of an institution of higher education who are unaccustomed to formal and rigorous modes of business communication.
10. As is so often the case, if the IS help desk or call center is the customer's initial point of entry into IS support services, this bears repeating throughout the SLA.
11. It is particularly important that IS representatives communicate to business unit management what they need to do as part of their partnership with IS to ensure the success of the services reflected in the SLA. This is a customer communication and education process. It is not to be dictated but needs to be agreed to. The particulars of the responsibility list will vary from one organization to another. The following list of customer responsibilities is meant only for illustration purposes: (1) to operate within the information technology funding allocations and funding process as defined by enterprise management; (2) to work in close collaboration with the designated IS customer relationship executive to initially frame this SLA and to manage within its constraints once approved; (3) to collaborate throughout the life cycle of the project or process to ensure the ongoing clarity and delivery of business value in the outcomes of the IT effort, including direct participation in and ownership of the quality assurance acceptance process; (4) to review, understand, and contribute to systems documentation, including project plans and training materials, as well as any IS project or service team communications such as release memos; (5) throughout the life cycle of the process, to evaluate and ultimately authorize business applications to go into production; (6) to distribute pertinent information to all associates within the business unit who utilize the products and services addressed in this SLA; (7) to ensure that business unit hardware and associated operating software meet or exceed the business unit's system-complex minimum hardware and software requirements; (8) to report problems using the problem-reporting procedure detailed in this service level agreement, including a clear description of the problem; (9) to provide input on the quality and timeliness of service; (10) to prioritize work covered under this service agreement and to provide any ongoing prioritization needed as additional business requirements arise.
12. The key IS service metrics of concern to most customers include system availability, system response time, mean time to failure, mean time to service restoration, support services availability, and, as cited above, response time from support staff when problems occur or when hands need to be held. While typical customers would like an immediate response to their issues, they will recognize the resource constraints of IS as long as they understand in advance the standards of service under which IS operates. However, IS should take pains to ensure that "availability" and "response time" reflect the total customer experience and not some subset of service. For example, the network is not restored from a customer perspective merely because the servers are back up online. The customer's applications must be restored as well so that regular business may be transacted.
13. The executive sponsor is the senior executive to whom the SLA is addressed and whose general responsibility it is to approve the use of IS resources as described in the SLA.
14. It is absolutely essential that the business, and not IS, rule on project priorities. But this does not absolve the IS team of the responsibilities for both consolidating and leveraging the IT requests that in its view bring the greatest benefit to the enterprise, and identifying infrastructure and other IT-enabling investments that are a necessary foundation for business-enabling IT projects.
15. The project director is the IS party responsible for project delivery and the overall coordination of internal and external information technology resources. The director will work hand-in-hand with the working clients to ensure that project deliverables are in keeping with the customer's requirements.
16. The IS project manager is staff to the IS project director. This support person will develop and maintain project commitment documents and plans, facilitate and coordinate project activities, carry out business process analysis, prepare project status reports, manage project meetings, record and issue meeting minutes, and perform many other tasks as required to ensure successful project delivery.
17. Some organizations will allow working clients to serve as IS project directors and even project managers. In the author's view, this is a mistake. While the working client is essential to any IS project's success, contributing system requirements and business process expertise to the effort, very few working clients have experience in leading multi-tiered IT projects, especially those involving outside technical contractors and consultants. Leave this work to an appropriately skilled IS manager, allowing working clients to contribute where they add greatest value.
18. The levels of commitment run from "request," where the customer asks for IS help, to "speculation," where IS responds based on a series of suppositions, to "offer," where IS nails down its assumptions, to "commit," where both the customer and IS are in a position to formally commit resources to the project.
Chapter 6
Managing the IT Procurement Process Robert L. Heckman
An IT procurement process, formal or informal, exists in every organization that acquires information technology. As users of information systems increasingly find themselves in roles as customers of multiple technology vendors, this IT procurement process assumes greater management significance. In addition to hardware, operating system software, and telecommunications equipment and services — information resources traditionally acquired in the marketplace — organizations now turn to outside providers for many components of their application systems, application development and integration, and a broad variety of system management services. Yet despite this trend, there has to date been little, if any, research investigating the IT procurement process.

DEVELOPMENT OF THE FRAMEWORK

In January 1994, the Society for Information Management (SIM) Working Group on Information Technology Procurement was formed to exchange information on managing IT procurement and to foster collaboration among the different professions participating in the IT procurement process. This chapter presents a model of the IT procurement process that was developed by the SIM Working Group to provide a framework for studying IT procurement. Specifically, the IT Procurement Process Framework was developed by a 12-member subgroup composed of senior IT procurement executives from large North American companies. The task of developing the framework took place over the course of several meetings and lasted approximately one year. A modified nominal group process was used, in which individual members independently developed frameworks that described the IT procurement process as they understood it. In a series of work sessions, these individual models were synthesized and combined to produce the six-process framework presented below.
Once the six major procurement processes had been identified, a modified nominal group process was once again followed to elicit the sub-processes to be included under each major process. Finally, a nominal group process was once again used to elicit a set of key issues that the group felt presented managerial challenges in each of the six processes. The key issues were conceived of as the critical questions that must be successfully addressed to effectively manage each process. Thus, they represent the most important issues faced by those executives responsible for the management of the IT procurement function. The process framework and key issues were reviewed by the Working Group approximately one year later (summer 1996), and modifications to definitions, sub-processes, and key issues were made at that time. The key issue content analysis described below was conducted following a Working Group review in early 1997.

0-8493-1595-6/03/$0.00+$1.50 © 2003 by CRC Press LLC

THE IT PROCUREMENT FRAMEWORK: PROCESSES, SUB-PROCESSES, AND KEY ISSUES

The IT Procurement Process Framework provides a vehicle to systematically describe the processes and sub-processes involved in IT procurement. Exhibit 1 illustrates the six major processes in IT procurement activities: three deployment processes (D1, D2, and D3) and three management processes (M1, M2, and M3). Each of these major processes consists of a number of sub-processes. The Appendix at the end of this chapter lists the sub-processes included in each of the major processes, as well as the key issues identified by the Working Group.

Exhibit 1. Major Processes in IT Procurement

Deployment processes: D1 Requirements Determination; D2 Acquisition; D3 Contract Fulfillment.
Management processes: M1 Supplier Management; M2 Asset Management; M3 Quality Management.

Deployment Processes

Deployment processes consist of activities that are performed (to a greater or lesser extent) each time an IT product or service is acquired. Each individual procurement can be thought of in terms of a life cycle that begins with requirements determination, proceeds through activities involved in the actual acquisition of a product or service, and is completed as the
terms specified in the contract are fulfilled. Each IT product or service that is acquired has its own individual iteration of this deployment life cycle.

D1. Requirements determination is the process of determining the business justification, requirements, specifications, and approvals to proceed with the procurement process. It includes sub-processes such as organizing project teams, using cost–benefit or other analytic techniques to justify investments, defining alternatives, assessing relative risks and benefits, defining specifications, and obtaining necessary approvals to proceed with the procurement process.

D2. Acquisition is the process of evaluating and selecting appropriate suppliers and completing procurement arrangements for the required products and services. It includes identifying sourcing alternatives, generating communications (such as RFPs and RFQs) to suppliers, evaluating supplier proposals, and negotiating contracts with suppliers.

D3. Contract fulfillment is the process of managing and coordinating all activities involved in fulfilling contract requirements. It includes expediting of orders, acceptance of products or services, installation of systems, contract administration, management of post-installation services such as warranty and maintenance, and disposal of obsolete assets.

Management Processes

Management processes consist of those activities involved in the overall governance of IT procurement. These activities are not specific to any particular procurement event but rather are generalized across all such events. The three general classes of IT procurement management processes are supplier management, asset management, and quality management.

M1. Supplier management is the process of optimizing customer–supplier relationships to add value to the business.
It includes activities such as developing a supplier portfolio strategy, developing relationship strategies for key suppliers, assessing and influencing supplier performance, and managing communication with suppliers.

M2. Asset management is the process of optimizing the utilization of all IT assets throughout their entire life cycle to meet the needs of the business. It includes activities such as developing asset management strategies and policies, developing and maintaining asset management information systems, evaluating the life-cycle cost of IT asset ownership, and managing asset redeployment and disposal policies.

M3. Quality management is the process of assuring continuous improvement in the IT procurement process and in all products and services acquired for IT purposes in an organization. It includes activities
Exhibit 2. Eight Themes Identified from 76 Key Issues

Code  Theme                                                           No. Key Issues
P     Process management, design, and efficiency                      21
M     Measurement, assessment, evaluation (of vendor and self)        16
ER    External relationships (with supplier)                           9
IR    Internal relationships (internal teams, roles, communication)    9
S     Strategy and planning                                            7
L     Legal issues                                                     6
F     Financial, total cost of ownership (TCO) issues                  6
E     Executive support for procurement function                       2
such as product testing, statistical process control, acceptance testing, quality reviews with suppliers, and facility audits.

KEY IT PROCUREMENT MANAGEMENT ISSUES

The Appendix at the end of the chapter presents the 76 key IT procurement management issues, organized by process, that were identified by the members of the Working Group. These issues represent the beliefs of these domain experts concerning the most serious challenges facing managers of the IT procurement function. To better understand the key issues, a content analysis was performed to determine whether a few main themes underlay these questions. The content analysis identified eight themes. The eight themes, their codes, and their frequency of occurrence are shown in Exhibit 2; the theme codes also appear in the Appendix for each key issue. The four themes that were most important to the senior procurement managers in the SIM Working Group are described below.

Process Management, Design, and Efficiency [P]

Practicing IT procurement managers are most concerned with the issue of how to make the procurement process more efficient. The questions that reflect this theme address the use of automated tools such as EDI and procurement cards, reduction of cycle time in contracting processes, development and use of asset tracking systems and other reporting systems, and the integration of sub-processes at early and later stages of the procurement life cycle. The emergence of process efficiency as the leading issue may indicate that procurement managers are under pressure to demonstrate the economic value of their organizational contribution, and thus follow the last decade's broad management trend of rigorously managing costs.

Measurement, Assessment, Evaluation [M]

The second most important theme concerns the search for reliable and valid ways to evaluate and assess performance. This search for useful
assessment methods and measures is directed both at external suppliers and at the internal procurement process itself. The latter focus is consistent with the notion that procurement managers are looking for objective ways to assess and demonstrate their contribution. The focus on supplier assessment reflects an understanding that successful supplier relationships must be built on a foundation of high-quality supplier performance.

External Relationships [ER] and Internal Relationships [IR]

The third and fourth most frequently cited themes deal with the issue of creating effective working relationships. The importance of such relationships is an outgrowth of the cross-functional nature of the IT procurement process within organizations and the general transition from internal to external sources for information resource acquisition. Venkatraman and Loh (1994) characterize the information resource acquisition process as having evolved from managing a portfolio of technologies to managing a portfolio of relationships. The results of our analysis suggest that practicing managers agree.

A MANAGEMENT AGENDA FOR THE IT PROCUREMENT PROCESS

The process framework and key issues identified by the SIM IT Procurement Working Group suggest an agenda for future efforts to improve the management of the IT procurement process. The agenda contains five action items that may best be carried out through a collaboration between practicing IT procurement managers and academic researchers. The action items are:

1. Develop IT procurement performance metrics and use them to benchmark the IT procurement process.
2. Clarify roles in the procurement process to build effective internal and external relationships.
3. Use the procurement process framework as a tool to assist in reengineering the IT procurement process.
4. Use the framework as a guide for future research.
5. Use the framework to structure IT procurement training and education.
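Returning briefly to the content analysis summarized in Exhibit 2: the theme ranking reduces to a simple frequency count over theme-coded issues. The sketch below uses a stand-in issue list that merely reproduces the published counts; it is not the Working Group's raw data.

```python
from collections import Counter

# Stand-in list of theme codes, one per key issue, reproducing the
# counts reported in Exhibit 2 (illustrative, not the raw coding data).
issue_codes = (["P"] * 21 + ["M"] * 16 + ["ER"] * 9 + ["IR"] * 9
               + ["S"] * 7 + ["L"] * 6 + ["F"] * 6 + ["E"] * 2)

counts = Counter(issue_codes)
assert sum(counts.values()) == 76  # matches the 76 key issues in the Appendix

# Rank themes by frequency, as in Exhibit 2.
for code, n in counts.most_common():
    print(f"{code:>2} {n}")
```

Any coding scheme with one code per issue would tally the same way; only the code list changes.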
Develop IT Procurement Performance Metrics and Use Them to Benchmark the IT Procurement Process

Disciplined management of any process requires appropriate performance metrics, and members of the Working Group have noted that good metrics for the IT procurement processes are in short supply. The process framework is currently providing structure to an effort by the Working Group to collect a rich set of performance metrics that can be used to raise the level
of IT procurement management. In this effort, four classes of performance metrics have been identified:

1. Effectiveness metrics
2. Efficiency metrics
3. Quality metrics
4. Cycle time metrics
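As an illustration of the fourth class, a cycle-time metric can be computed directly from milestone dates captured across the deployment life cycle (D1 through D3). The milestone names and dates below are hypothetical:

```python
from datetime import date

def cycle_days(milestones, start="requirements_approved", end="contract_signed"):
    """Elapsed days between two procurement milestones."""
    return (milestones[end] - milestones[start]).days

# Hypothetical milestones for one procurement's deployment life cycle.
procurement = {
    "requirements_approved": date(2002, 3, 1),
    "rfp_issued": date(2002, 3, 15),
    "contract_signed": date(2002, 5, 10),
}
print(cycle_days(procurement))                      # 70
print(cycle_days(procurement, start="rfp_issued"))  # 56
```

Tracking the same two endpoints consistently across procurements is what makes the numbers comparable for benchmarking.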
Closely related to the metrics development issue is the need felt by many procurement professionals to benchmark critical procurement processes. The framework provides a guide to the process selection activity in the benchmarking planning stage. For example, the framework has been used by several companies to identify supplier management and asset management sub-processes for benchmarking.

Clarify Roles in the Procurement Process to Build Effective Internal and External Relationships

IT procurement will continue to be a cross-functional process that depends on the effective collaboration of many different organizational actors for success. Inside the customer organization, representatives of IS, legal, purchasing, finance, and user departments must work together to buy, install, and use IT products and services. Partnerships and alliances with supplier and other organizations outside the boundaries of one's own firm are more necessary than ever as long-term outsourcing and consortia arrangements become more common. The key question is how these multifaceted relationships should be structured and managed. Internally, organizational structures, roles, standards, policies, and procedures must be developed that facilitate effective cooperation. Externally, contracts must be crafted that clarify expectations and responsibilities between the parties. Recent research, however, suggests that formal mechanisms are not always the best means to stimulate collaboration. The most useful forms of collaboration are often discretionary; that is, they may be contributed or withheld without concern for formal reward or sanction (Heckman and Guskey, 1997). Formal job descriptions, procedures, and contracts will never cover all the eventualities that may arise in complex relationships. Therefore, managers must find the cultural and other mechanisms that create environments which elicit discretionary collaboration both internally and externally.
Use the Procurement Process Framework as a Tool to Assist in Reengineering the IT Procurement Process

Another exciting use for the framework is to serve as the foundation for efforts to reengineer procurement processes. One firm analyzed the sub-processes involved in the requirements analysis and acquisition stages of
the procurement life cycle to reduce procurement and contracting cycle time. Instead of looking at the deployment sub-processes as a linear sequence of activities, this innovative company used the framework to analyze and develop a compression strategy that reduced the cycle time in its IT contracting process by performing a number of sub-processes in parallel.

Use the Framework as a Guide for Future Research

The framework has been used by the SIM IT Procurement Working Group to identify topics of greatest interest for empirical research. For example, survey research investigating acquisition (software contracting practices and contracting efficiency), asset management (total life-cycle cost of ownership and asset tracking systems), and supplier management (supplier evaluation) has recently been completed. The key issues identified in the current chapter can likewise be used to frame a research agenda that will have practical relevance to practitioners.

Use the Framework to Structure IT Procurement Training and Education

The framework has been used to provide the underlying structure for a university course covering IT procurement. It also provides the basis for shorter practitioner workshops and can be used by companies developing in-house training in IT procurement for users, technologists, and procurement specialists.

This five-item agenda provides a foundation for the professionalization of the IT procurement discipline. As the acquisition of information resources becomes more market oriented and less a function of internal development, the role of the IT professional will necessarily change. The IT professional of the future will need fewer technology skills because these skills will be provided by external vendors that specialize in supplying them. The skills that will be critical to the IT organization of the future are the marketplace skills that will be found in IT procurement organizations.
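The compression strategy described under the reengineering item above (running independent sub-processes in parallel rather than in sequence) can be illustrated with invented task durations; the arithmetic is simply the difference between a sum and a maximum:

```python
# Invented durations (in days) for independent acquisition sub-processes.
tasks = {"draft_rfp": 10, "build_short_list": 7, "legal_review_of_template": 12}

sequential = sum(tasks.values())  # one task after another
parallel = max(tasks.values())    # independent tasks run side by side

print(sequential, parallel)  # 29 12
```

In practice only genuinely independent sub-processes can be overlapped this way; dependent ones still form a critical path.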
The management agenda described in this chapter provides a first step toward the effective leadership of such organizations.

References and Further Reading

Barki, H., Rivard, S., and Talbot, J. (1993), "A Keyword Classification Scheme for IS Research Literature: An Update," MIS Quarterly (17:2), June 1993, pp. 209–226.
Davenport, T. and Short, J. (1990), "The New Industrial Engineering: Information Technology and Business Process Redesign," Sloan Management Review, Summer, pp. 11–27.
Hammer, M. (1990), "Reengineering Work: Don't Automate, Obliterate," Harvard Business Review, July/August, pp. 104–112.
Heckman, R. and Guskey, A. (1997), "The Relationship Between University and Alumni: Toward a Theory of Discretionary Collaborative Behavior," Journal of Marketing Theory and Practice.
Heckman, R. and Sawyer, S. (1996), "A Model of Information Resource Acquisition," Proceedings of the Second Annual Americas Conference on Information Systems, Phoenix, AZ.
Lacity, M. C., Willcocks, L. P., and Feeny, D. F. (1995), "IT Outsourcing: Maximize Flexibility and Control," Harvard Business Review, May–June, pp. 84–93.
McFarlan, F. W. and Nolan, R. L. (1995), "How to Manage an IT Outsourcing Alliance," Sloan Management Review (36:2), Winter, pp. 9–23.
Reifer, D. (1994), Software Management, Los Alamitos, CA: IEEE Press.
Rook, P. (1986), "Controlling Software Projects," Software Engineering Journal, pp. 79–87.
Sampler, J. and Short, J. (1994), "An Examination of Information Technology's Impact on the Value of Information and Expertise: Implications for Organizational Change," Journal of Management Information Systems (11:2), Fall, pp. 59–73.
Teng, J., Grover, V., and Fiedler, K. (1994), "Business Process Reengineering: Charting a Strategic Path for the Information Age," California Management Review, Spring, pp. 9–31.
Thayer, R. (1988), "Software Engineering Project Management: A Top-Down View," in R. Thayer (ed.), IEEE Proceedings on Project Management, Los Alamitos, CA: IEEE Press, pp. 15–53.
Vaughn, M. and Parkinson, G. (1994), Development Effectiveness, New York: John Wiley & Sons.
Venkatraman, N. and Loh, L. (1994), "The Shifting Logic of the IS Organization: From Technical Portfolio to Relationship Portfolio," Information Strategy: The Executive's Journal, Winter, pp. 5–11.
Managing the IT Procurement Process
Appendix: Major Processes, Sub-Processes, and Key Issues
Deployment Process D1: Requirements Determination

Process Definition
The process of determining the business justification, requirements, specifications, and approvals to proceed with the procurement process.

Sub-processes
• Identify need.
• Put together a cross-functional team and identify roles and responsibilities.
• Continuously refine requirements and specifications in accordance with user needs.
• Gather information regarding alternative solutions.
• Perform cost-benefit analysis or other analytic technique to justify expenditure.
• Evaluate alternative solutions (including build/buy, in-house/outsource, etc.) and associated risks and benefits.
• Develop procurement plans that are integrated with project plans.
• Gain approval for the expenditure.
• Develop preliminary negotiation strategies.

Key Issues [Themes]
• What are the important components of an appropriate procurement plan? [S]
• How much planning (front-end loading) is appropriate or necessary for different types of acquisitions (e.g., commodity purchases versus complex, unique acquisitions)? [S]
• How should project teams be configured for different types of acquisitions (appropriate internal and external resources, project leader, etc.)? [IR]
• How should changes in scope and changes in orders be handled? [P]
• What are the important costs versus budget considerations? [F]
• What are the most effective methods of obtaining executive commitment? [E]
• Can requirements be separated from wants? [P]
• Should performance specifications and other outputs be captured for use in later phases, such as quality management? [P]
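The cost-benefit analysis called for in the sub-processes above is often a discounted cash flow comparison of the alternative solutions. The following is a minimal sketch in Python; the cash flows, hurdle rate, and alternative names are hypothetical illustrations, not figures or methods prescribed by the text.

```python
# Hypothetical cost-benefit sketch for comparing two acquisition
# alternatives by net present value (NPV). Year-0 outlays are negative;
# later entries are annual net benefits.

def npv(rate: float, cashflows: list[float]) -> float:
    """NPV of cashflows[t] occurring at end of year t (t=0 undiscounted)."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

build_in_house = [-500_000, 150_000, 200_000, 200_000, 200_000]
buy_package    = [-300_000, 120_000, 120_000, 120_000, 120_000]

rate = 0.10  # assumed corporate hurdle rate
for name, flows in [("build", build_in_house), ("buy", buy_package)]:
    print(f"{name}: NPV = {npv(rate, flows):,.0f}")
```

A fuller evaluation would also weigh the build/buy and in-house/outsource risks the sub-process list mentions, which do not reduce to a single NPV figure.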
Deployment Process D2: Acquisition

Process Definition
The process of evaluating and selecting appropriate suppliers and completing procurement arrangements for the required products and services.

Sub-processes
• Develop sourcing strategy, including the short list of suitable suppliers.
• Generate appropriate communication to suppliers (RFP, RFQ, etc.), including financing alternatives.
• Analyze and evaluate supplier responses and proposals.
• Plan formal negotiation strategy.
• Negotiate contract.
• Review contract terms and conditions.
• Award contract and execute documents.
• Identify value added from the negotiation using appropriate metrics.
Key Issues [Themes]
• Is there support for corporate purchasing programs, policies, and guidelines (which can be based on technology, financing, accounting, competitive impacts, social impacts, etc.)? [E]
• What tools optimize the procurement process? [P]
  – EDI
  – Autofax
  – Procurement cards
• What processes in the acquisition phase can be eliminated, automated, or minimized? [P]
• Is it wise to outsource all or part of the procurement process? [IR]
• What are the appropriate roles of users, legal, purchasing, and IS in the procurement process? [IR]
Deployment Process D3: Contract Fulfillment

Process Definition
The process of managing and coordinating all activities involved in fulfilling contract requirements.
Sub-processes
• Expedite orders and facilitate required changes.
• Receive material and supplies, update databases, and reconcile discrepancies.
• Accept hardware, software, or services.
• Deliver materials and services as required, either direct or to drop-off points.
• Handle returns.
• Install hardware, software, or services.
• Administer contract.
• Process invoices and issue payment to suppliers.
• Resolve payment problems.
• Manage post-installation services (e.g., warranty, maintenance, etc.).
• Resolve financial status and physical disposal of excess or obsolete assets.
• Maintain quality records.

Key Issues [Themes]
• What are some provisions for early termination and renewals? [L]
• What are the best methods for assessing vendor strategies for ongoing maintenance costs? [ER]
• What interaction between various internal departments aids the processes? [IR]
Management Process M1: Supplier Management

Process Definition
The process of optimizing customer-supplier relationships to add value to the business.

Sub-processes
• Categorize suppliers by value to the organization (e.g., volume, sole source, commodity, strategic alliance); allocate resources to the most important (key) suppliers.
• Develop and maintain a relationship strategy for each category of supplier.
• Establish and communicate performance expectations that are realistic and measurable.
• Monitor, measure, and assess vendor performance.
• Provide vendor feedback on performance metrics.
• Work with suppliers to improve performance continuously; know when to say when.
• Continuously assess supplier qualifications against requirements (existing and potential suppliers).
• Ensure relationship roles and responsibilities are well-defined.
• Participate in industry/technology information sharing with key suppliers.

Key Issues [Themes]
• How does anyone distinguish between transactional/tactical and strategic relationships? [ER]
• How can expectations on both sides be managed most effectively? Should relationships be based on people-to-people understandings, or solely upon the contractual agreement (get it in writing)? What is the right balance? [ER]
• How can discretionary collaborative behavior — cooperation above and beyond the letter of the contract — be encouraged? Are true partnerships with vendors possible, or does it take too long? What defines a partnership? [ER]
• How should multiple vendor relationships be managed? [ER]
• How should communication networks (both internal and external) be structured to optimize effective information exchange? Where are the most important roles and contact points? [IR]
• How formal should a measurement system be? What kind of report card is effective? What are appropriate metrics for delivery and quality? [M]
• What is the best way to continuously assess the ability of a vendor to go forward with new technologies? [M]
• What legal aspects of the relationship are of most concern (e.g., nondisclosure, affirmative action, intellectual property, etc.)? [L]
• What is the best way to keep current with IT vendor practices and trends? What role does maintaining market knowledge play in supplier management? [M]
• What is the optimal supplier-management strategy for a given environment? [S]
• How important is the development of master contract language? [L]
• In some sectors there is an increasing number of suppliers and technologies, although in others vendor consolidation is occurring. Under what circumstances should the number of relationships be expanded or reduced? [ER]
• What are the best ways to get suppliers to buy into master agreements? [L]
• What are the best ways to continuously judge vendor financial stability? [M]
• Where is the supplier positioned in the product life cycle? [M]
• How should suppliers be categorized (e.g., strategic, key, new, etc.) to allow for prioritization of efforts? [M]
• What are the opportunities and concerns to watch for when one IT supplier is acquired by another? [M]
Management Process M2: Asset Management

Process Definition
The process of optimizing the utilization of all IT assets throughout their entire life cycle to meet the needs of the business.
Sub-processes
• Develop and maintain asset management strategies and policies. Identify and determine which assets to track; they may include hardware, software licenses, and related services.
• Implement and maintain appropriate asset management databases, systems, and tools.
• Develop a disciplined process to track and control inventory to facilitate such things as budgeting, help desk, life-cycle management, software release distribution, capital accounting, compliance monitoring, configuration planning, procurement leverage, redeployment planning, change management, disaster recovery planning, software maintenance, warranty coverage, lease management, and agreement management.
• Identify the factors that make up the total life-cycle cost of ownership.
• Communicate a software license compliance policy throughout the organization.

Key Issues [Themes]
• What assets are included in IT asset management (e.g., human resources, consumables, courseware)? [F]
• How can legal department holdups be reduced? [P]
• What is the best way to communicate corporatewide agreements? [IR]
• How should small-ticket assets be handled? [P]
• How does a company move from reactive to proactive contracting? [S]
• Are there ways of dealing with licenses that require counts of users? [L]
• What are the best ways of managing concurrent software licensing? [L]
• Can contracting efficiency be achieved by using national contracts for purchase, servicing, and licensing? [P]
• How can software be managed and tracked as an asset? [F]
• How can the workload in software contracting be reduced? [P]
• Are there ways to encourage contract administration to be handled by the vendor? [P]
• Is it possible to manage all three life cycles simultaneously: technical, functional, and economical? [S]
• How does a company become proactive in risk management? [S]
• What is the appropriate assignment of internal responsibilities (e.g., compliance)? [IR]
• Do all items need to be tracked? [P]
• How much control (a) can the company afford? (b) does the company need? (c) does the company want? [F]
• What are the critical success factors for effective asset management? [S]
• What practices are most effective for the redeployment of assets? [P]
• Are there adequate systems available to track both hard and soft assets? Are there any integrated solutions (support, tracking, and contract management)? [P]
• What are the best ways to handle the rapid increase in volume and rapid changes in technology? [P]
• What is the appropriate reaction to dwindling centralized control of the desktop with nonconformance to guidelines and procedures? [IR]
• Is there a true business understanding of the total cost of ownership over the entire life cycle of an asset? [F]
• What are the impacts on organizational structure? [IR]
• What kind of reporting is most effective? [P]
• How can one manage tax issues — indemnification, payments, and insurance issues? [F]
• What issues should be considered in end-of-lease processes? [P]

Management Process M3: Quality Management

Process Definition
The process of assuring continuous improvement in all elements of the IT procurement framework Sub-processes
• Define and track meaningful process metrics on an ongoing basis.
• Conduct periodic quality reviews with suppliers:
  – Provide formal feedback to vendors on their performance.
  – Facilitate open and honest communication in the process.
• Collect and prioritize ideas for process improvement.
• Use formal quality improvement efforts involving the appropriate people:
  – Participants may include both internal resources and vendor personnel.
• Recognize and reward quality improvement results on an ongoing basis:
  – Recognize nonperformance/unsatisfactory results.
• Audit vendors' facilities and capabilities.
• Conduct ongoing performance tests against agreed-upon standards (e.g., acceptance test, stress test, regression test, etc.).
• Utilize appropriate industry standards (e.g., ISO 9000, SEI Capability Maturity Model).
• Periodically review vendors' statistical process control data.
Key Issues [Themes]
• What is the best way to drive supplier quality management systems? [ER]
• What is the appropriate mix of audits (supplier/site/regional, etc.) for quality and procedural conformance? [M]
• What is the importance of relating this process to the earliest stages of the requirements determination process? [P]
• What corrective actions are effective? [P]
• When and how is it appropriate to audit a supplier's financials? [M]
• What is an effective way to audit material or services received? [M]
• What is the best way to build quality assurance into the process, as opposed to inspecting for quality after the fact? [P]
• What metrics are the most meaningful quantitative measures? [M]
• How can one best measure qualitative information, such as client satisfaction? [M]
• When should one use surveys, and how can they be designed effectively? [M]
• How often should measurements be done? [M]
• How does one ensure that the data collected is valid, current, and relevant? [M]
• What is the best medium and format to deliver the data to those who need it? [P]
• What are used as performance and quality metrics for the IT procurement function? [M]
• How does one effectively recognize and reward quality improvement? [ER]
• When is it time to reengineer a process rather than just improve it? [P]
• How much communication between vendor and customer is needed to be effective? [ER]
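One concrete form the quantitative supplier metrics asked about above can take is a weighted "report card" score across delivery and quality criteria. The sketch below is illustrative only; the criteria names, weights, and 1-to-5 ratings are invented, not taken from the appendix.

```python
# Hypothetical weighted vendor report card: each criterion gets a rating
# on a 1-5 scale and a weight reflecting its importance.

def report_card(ratings: dict[str, float], weights: dict[str, float]) -> float:
    """Weighted average rating; weights are normalized, so any positive
    weights may be used without summing to 1."""
    total_weight = sum(weights[c] for c in ratings)
    return sum(ratings[c] * weights[c] for c in ratings) / total_weight

weights  = {"on_time_delivery": 0.4, "defect_rate": 0.3, "responsiveness": 0.3}
vendor_a = {"on_time_delivery": 4.0, "defect_rate": 3.5, "responsiveness": 5.0}

print(f"Vendor A score: {report_card(vendor_a, weights):.2f}")  # 4.15
```

Such a score supports the periodic quality reviews listed under the sub-processes, though qualitative feedback (e.g., client satisfaction interviews) remains necessary alongside it.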
Chapter 7
Performance Metrics for IT Human Resource Alignment Carol V. Brown
A key asset of any organization is its human resources. In the late 1990s, attracting, recruiting, and retaining IT workers became a major challenge for human resource managers, and many IT organizations established their own specialists to manage this asset. Today, the supply of IT professionals is more in balance with the demand, and managers need to turn their attention to proactively aligning their IT human resources with the organization's current and future needs.

The objective of this chapter is to present some of the issues involved in designing performance metrics to better align the IT organization with the business. The chapter begins with a high-level goal alignment framework. Then some guidelines for selecting what to measure, and how to measure, are presented. A case example is then used to demonstrate some practices in detail. The chapter concludes with a short discussion of best practices and ongoing challenges.

IT PERFORMANCE ALIGNMENT

A major assumption underlying the guidelines in this chapter is that organizations align their performance metrics with their goals. As shown in the framework in Exhibit 1, IT performance metrics are directly aligned with the performance goals for an IT organization. The IT performance goals are aligned with the goals of the organization via the goals for the IT function as well as the goals for the human resources (HR) function. In many organizations, alignment with the goals of the HR function is achieved by assigning an HR specialist to the IT organization.

0-8493-1595-6/03/$0.00+$1.50 © 2003 by CRC Press LLC

Recently,
Exhibit 1. Goal Alignment Framework
[Figure: Organizational Goals flow to HR Function Goals and IT Function Goals, which together shape IT Performance Goals, which in turn drive IT Performance Metrics.]
many IT organizations have also implemented a matrix reporting relationship for an HR specialist, which creates an accountability to both the IT functional head and the HR head. Another trend has been assigning traditional HR tasks to one or more IT managers. Both of these approaches have become more prevalent as the recruiting, rewarding, and retention of the IT workforce has become recognized as too critical to leave in the hands of HR specialists whose accountability is only to the HR function.

IT organizations typically have well-established metrics for IT delivery performance — both IT application project delivery metrics and IT service delivery metrics. For example, the traditional success metrics for a new IT systems project are on-time, within-budget delivery of a high-quality system with the agreed-upon functionality and scope. However, IT organizations that only track IT delivery metrics are not necessarily aligning their IT human resources with the organization's goals. Other IT performance metrics that should be captured are IT human capital effectiveness measures, such as (1) the desired inventory of internal IT skillsets, (2) the optimal number of external (contract) employees as a percentage of the total IT workforce, and (3) the ideal turnover rate for internal IT employees to ensure knowledge retention as well as an infusion of new skills.

These human capital metrics are context driven. For example, ideal turnover rates vary greatly, and even within a given organization the ideal turnover rate for the IT function may be significantly different than the ideal turnover rates for other functions. An average ideal turnover rate for IT workers just above 8 percent was recently reported, based on a sample of more than a hundred U.S.-based manufacturing and service companies.1 However, only about half of those surveyed (48 percent) were able to
Exhibit 2. What to Measure
[Figure: IT People, IT Processes, and the IT Work Environment determine effectiveness against the outcome goals of IT Delivery and Human Capital.]
attain their goal during a time in which there was perceived to be an acute shortage of IT professionals (mid-year 1998). Further, IT managers need to explicitly set guidelines for the ideal "balance" between IT delivery goals and IT human capital goals for their managers to achieve. This is because these two sets of IT goals are often in conflict. For example, repeatedly assigning the most knowledgeable technical resource to a maintenance project because of his or her in-depth knowledge may not help that person grow his or her technology skills: the project might be a success, but the IT organization's target inventory of new IT skills for future projects might be jeopardized.

WHAT TO MEASURE

A framework for thinking about key categories of metrics to assess an IT organization's performance is shown in Exhibit 2. The outcome goals toward the right of the exhibit include the above-mentioned goals of IT delivery and IT human capital, and the desired balance of these potentially conflicting goals. Metrics are also needed for three other IT organization factors that impact these IT effectiveness goals: characteristics of the IT people (IT workforce), the in-place IT processes (including IT HR processes), and the IT work environment. Each of these categories is described below.

IT People Metrics. Of importance here are metrics that capture progress
toward the development of skill proficiencies of the IT workforce and how well IT personnel are currently being utilized, in addition to customer satisfaction with IT resources.
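The human capital measures discussed earlier, such as the ideal turnover rate and the contractor percentage of the workforce, reduce to simple ratios. A minimal sketch, with invented headcounts (the chapter reports only the ~8 percent survey average, not these figures):

```python
# Hypothetical computation of two human-capital effectiveness measures:
# annual turnover rate and contractor share of the total IT workforce.

def turnover_rate(separations: int, avg_headcount: float) -> float:
    """Annual separations as a percentage of average internal headcount."""
    return 100.0 * separations / avg_headcount

def contractor_share(contractors: int, internal: int) -> float:
    """External (contract) staff as a percentage of the total IT workforce."""
    return 100.0 * contractors / (contractors + internal)

print(f"turnover:   {turnover_rate(10, 120):.1f}%")   # near the ~8% ideal reported
print(f"contractors: {contractor_share(30, 120):.1f}%")
```

The point of such metrics is comparison against a context-specific target, not the raw number itself; as the text notes, the ideal rate differs across functions and organizations.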
IT Process Metrics. These metrics assess the quality of the processes being used to accomplish the IT work, including the effectiveness of IT human resource processes in developing human capital. Typical examples of the latter are recruiting effectiveness metrics, the number of training hours it takes for an employee to achieve a certain level of proficiency with a given technology, and the time it takes to staff a project with the requisite skills.

IT Work Environment Metrics. Researchers have consistently found work environment variables to be highly valued by IT workers, including the extent to which they have opportunities to learn about new technologies and the extent to which they find their work to be challenging. More recent surveys of IT workers have also found two other workplace characteristics to be highly valued: opportunities to telecommute and flexible work hours.
Finally, achieving alignment between IT performance metrics and the multiple goals of the IT organization requires not only an investment in metrics design programs, but also an investment in periodic evaluation programs to ensure that the metrics being collected are also helping to achieve the desired behaviors and outcomes. That is, unintended behaviors can sometimes result due to deficiencies in a metrics program. We return to this important idea in the case example below.

HOW TO MEASURE

Determining how to measure an outcome or behavior requires careful consideration of several design characteristics and their trade-offs. Based on a synthesis of writings by thought leaders (e.g., Kaplan and Norton,2 S.A. and A.M. Mohrman3), these design characteristics can be grouped into four categories. Each category is described below, and examples are provided in Exhibit 3.

Criteria for Measurement. The best criterion with which to measure an IT performance variable depends on the intended purpose of the performance metric. For example, is the metric to be used as the basis for a merit award, or is it to be used for communications with business unit stakeholders? Is it a team-based or an individual worker metric?

Source(s) for Measurement. For each measurement criterion, the best source for the metric needs to be selected. First of all, this will depend on the appropriate level of measurement; for example, is it a project, skillset group, or individual level metric? Can the metric be collected automatically as a part of a regular IT work process — for example, as part of a project log, project document, or computer-based training system? Is only one source needed, or will multiple sources be asked to measure the same variable — and if multiple sources, how will they be selected? For example,
Exhibit 3. How to Measure

Criteria
• Aligned with strategy
• For communication, analysis, rewards
• To internal and external audiences
• Team based or individual
• Time based
• Ratios or absolute

Source(s)
• Single versus multiple sources
  – Multiple projects, matrix reports
  – Multiple levels (example: 360° — above, lateral, below)
• Employee reports (potential for rater errors or bias)
• Employee logs
• Project documentation
• Automated capture

Collection Method
• Quantitative items
  – Counts and ratios
  – Scaled (categorical, Likert-type, bimodal)
• Qualitative items
  – Open-ended questions
  – Anecdotal accounts (stories)
• Survey administration (paper versus electronic)
• In-person interviews (formal and informal)

Frequency
• Periodic, annual
• Continuous but aggregated (week, month, quarter, bi-annual)
• On-demand
in some organizations that have moved to a 360-degree individual performance appraisal system, IT workers help choose employees above them, below them, and peers who will be asked to provide formal evaluations of their work.

Collection Method. Even after the measurement criteria and source are determined, design decisions associated with the methods to collect the performance data may still need to be carefully assessed. Some of the design choices may differ based on whether a quantitative measure or a qualitative measure is more appropriate. Quantitative metrics include counts and ratios, as well as scaled items. Common scales for capturing responses to sets of items include bimodal scales (with labels provided for two endpoints of a continuum) and Likert-type scales (such as a scale from 1 to 5 with labels provided for multiple points). Qualitative metrics can be collected as responses to open-ended questions, or as anecdotal accounts (or stories), which could yield insights that otherwise would not have been
tapped into. The choice of data collection methods can also have significant cost implications. For example, collection of data from targeted individuals via a survey may be less costly (and less time-intensive) than interview-based methods, and it also allows for anonymous responses. However, data collected via a survey form is usually less rich and more difficult to interpret than data collected via telephone or in-person interview methods.

Frequency. Another design consideration with major cost implications is the frequency with which to collect a given metric. Some metrics can be collected on an annual basis as part of an annual performance review process or an annual financial reporting process. Other metrics can be collected much more frequently on a regular basis — perhaps even weekly. Weekly metrics may or may not also be aggregated on a monthly, quarterly, or biannual basis. The most effective programs capture metrics at various appropriate time periods, and do not rely solely on annual processes to evaluate individual and unit performance.
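The quantitative scaled items described above, such as 1-to-5 Likert-type responses, are typically summarized as a mean plus a response distribution so that a few extreme ratings are not hidden by the average. A minimal sketch with hypothetical survey items and scores:

```python
# Aggregating hypothetical Likert-type (1-5) survey responses:
# report the mean and the count at each scale point per item.

from collections import Counter
from statistics import mean

responses = {
    "communication_improved": [4, 5, 3, 4, 4, 2, 5],
    "estimates_accurate":     [3, 3, 4, 2, 3, 4, 3],
}

for item, scores in responses.items():
    dist = Counter(scores)
    counts = {k: dist.get(k, 0) for k in range(1, 6)}  # fill empty points with 0
    print(f"{item}: mean={mean(scores):.2f}, distribution={counts}")
```

Open-ended (qualitative) items would be collected and reviewed separately; they do not aggregate numerically in this way.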
The schedule for metric collection, as well as the mechanisms used for reporting results, must also be continuously reevaluated. Some IT organizations have adopted a "scorecard" approach, not unlike the "dashboard" templates that have been adopted for reporting critical business metrics.

CASE EXAMPLE: IT PERFORMANCE METRICS AT NATURAL

Natural is a large, Fortune 500-sized company competing in the energy industry that embarked on an IT metrics redesign initiative as part of an organizational restructuring. The IT workforce had been receiving poor ratings in IT delivery and customer satisfaction. The new IT function goals were to achieve a high perceived value for IT delivery, customer service, and workforce utilization, as well as to build an IT talent pool with IT, business, and leadership skills. Exhibit 4 shows how Natural's IT performance goals were aligned with the organization's overall goals via not only IT function goals but also the HR function goals. Top management wanted its HR leaders to foster the development and retention of organizational talent to enable the company to meet its aggressive targets for profitability and growth within the context of a rapidly changing world. One of the new core values was to motivate workers with relevant incentives and performance metrics.

Natural's primary IT performance goals were to improve its IT capabilities and to improve perceptions of the value provided by the IT organization. IT application development resources that had been working within IT groups at the business division level were re-centralized to a corporate IT group to focus on improving IT capabilities. Each IT professional was assigned to a skillset group (center of excellence). Each center had one or
Exhibit 4. Aligning Goals and Metrics at Natural
• Organizational Goal: Exceed aggressive profitability targets and grow.
• HR Function Goals: Instill core values (vision and decision rights; virtues and talents; incentives and measures).
• IT Function Goals: High perceived value for IT delivery, customer service, and utilization; build IT talent for IT, business, and leadership skills.
• IT Performance Goals: Focus on improving IT capabilities and perceived value; reduce the number of contract employees.
• IT Performance Metrics for the Systems Development Unit: Utilization, People Development, Technology Leverage, Financials, Project Execution, Customer Satisfaction.
more "coaches" who were responsible for training programs and personnel scheduling that would help their workers to hone their new IT skills. A related goal was to reduce the number of contract employees.

As shown in Exhibit 4, six categories of IT performance metrics were identified as critical for this systems development unit to achieve its new IT performance goals: utilization, people development, technology leverage, customer satisfaction, project execution, and financials. The utilization metrics made explicit Natural's new thrust on rebalancing its human capital and delivery goals: the specific metrics included time spent on personnel development and retooling. Metrics in two other categories specifically tracked performance gains for people development and technology leverage. Examples of specific metrics implemented for five of the categories (all but Financials) are provided in Exhibit 5.

Because of the clear linkage between building IT talent and the organization's new set of core values, the IT leaders gained approval from their business customers for pricing the group's IT services with a 10 percent markup, in order to have a funding mechanism for investments in IT workforce development, improved IT processes, and state-of-the-art technologies.

Two templates were completed for each of the specific metrics. The first template explicitly linked the IT unit's vision and strategy to the metric category. Behaviors that the IT managers expected to be impacted by these metrics were also identified in detail. For example, for the Utilization metric
Exhibit 5. Specific Metrics by Category at Natural

Utilization
• Total utilization percentage
• Total development percentage
• Number of retoolings
• Number of days to fill personnel requisitions
• Contractor numbers

People Development
• Employee perception of development
• Average company experience, time in role
• Dollars spent on people development

Technology Leverage
• Employee proficiency level by tool category
• Tool learning curve (time to be proficient)
• Technology demand level

Customer Satisfaction
• Improvement in communication
• Accurate time, cost, and scope estimates
• Management of change issues
• Ability to meet customer quality expectations
• Ability to meet customer value expectations

Project Execution (Delivery)
• Predictability (on-time, under budget)
• Quality (maintenance cost versus projected)
• Methodology compliance
• Resource optimization level
• Project health rating (risk management)
• Number of trained project managers
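Utilization measures of the kind listed in Exhibit 5 are commonly computed as ratios of scheduled or development hours to available hours. The chapter gives no formulas, so the following is an illustrative sketch with assumed hours:

```python
# Hypothetical utilization computation: billable (scheduled) versus
# development ("retooling") time as percentages of available hours.

def pct(part: float, whole: float) -> float:
    """Express part as a percentage of whole."""
    return 100.0 * part / whole

available   = 1800.0  # assumed annual available hours per employee
scheduled   = 1440.0  # hours on project assignments
development =  180.0  # hours in training / retooling

print(f"total utilization: {pct(scheduled, available):.1f}%")   # 80.0%
print(f"total development: {pct(development, available):.1f}%") # 10.0%
```

Note the tension the case itself highlights: driving the utilization percentage up squeezes the development percentage, since training tends to happen during unscheduled bench time.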
category, the anticipated behavioral impacts included improved planning of resource needs, improved estimating of project time and costs, and better decision making about external hires (contractors) by IT HR managers. In addition, however, potentially unintended behaviors were also identified; that is, behaviors that could be inadvertently reinforced by introducing this new metric category. For example, measuring the utilization of employees from a given center of excellence could lead to over-scheduling of employees in order to achieve a high performance rating, but at the cost of less individual progress toward skill-building — because training typically took place during unscheduled "bench time." Another potential downside was that the supervisor (coach) might assign an employee who was on the bench to a project when she or he was not the best resource for
the project assignment, in order to achieve a high resource utilization rating. In this situation, higher utilization of IT workers might be achieved, but at the expense of project delivery and customer satisfaction goals.

A second template was used to document characteristics of each specific metric, such as:
• General description of the measure
• What is measured (specific data elements and how they are related)
• Why this data was being measured (what it would be used for)
• The measurement mechanism (source and time period)
• How the measure is calculated (as relevant)
• The target performance level or score
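The second template lends itself to a structured record, one per specific metric. A minimal sketch; the field values below are invented for illustration and are not taken from the Natural case materials.

```python
# A hypothetical record mirroring the second template's documented
# characteristics for each specific metric.

from dataclasses import dataclass

@dataclass
class MetricDefinition:
    name: str
    description: str    # general description of the measure
    data_elements: str  # what is measured and how the elements relate
    purpose: str        # why the data is measured / what it is used for
    mechanism: str      # measurement source and time period
    calculation: str    # how the measure is calculated
    target: str         # target performance level or score

m = MetricDefinition(
    name="Total utilization percentage",
    description="Share of available hours spent on scheduled project work",
    data_elements="scheduled hours and available hours, per employee per month",
    purpose="Rebalance delivery and human-capital goals; plan resources",
    mechanism="Time-tracking system, collected monthly",
    calculation="100 * scheduled_hours / available_hours",
    target=">= 75% while preserving development time",
)
print(m.name, "->", m.target)
```

Keeping the template machine-readable like this makes it easier for each metric "owner" to review definitions and targets as goals and processes change.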
Of the six metric categories, two were measured monthly (utilization and financials); three quarterly (people development, customer satisfaction, and project execution); and one biannually (technology leverage). Four new people development measures were baselined first because people development was a new category of metrics considered key to demonstrating the success of the newly centralized IT unit. The technology leverage measures were baselined using a self-assessment survey of employees in conjunction with estimated demands for specific technologies and technology skillsets for both current and anticipated IT projects. Whenever possible, intranet-based Web forms were used for data collection. Although there was an emphasis on quantitative measures, qualitative measures were also collected. For example, a special form was available to internal customers to make it easy to collect "success stories."

Natural assigned one IT manager to be the "owner" of each metric category, not only during the initial design and implementation of the relevant specific metrics, but also on an ongoing basis. Given the potential for influencing unintended behaviors (as described above), each owner was responsible for monitoring the behavioral impacts of each specific metric in that category. The owner was also relied upon to provide insight into the root causes of missed targets. Over the long term, metric owners would also be held accountable for anticipating potential changes to the efficacy of a specific metric due to changes in goals and processes that occurred inside and outside the IT organization.

BEST PRACTICES

The metrics project at Natural is a successful case example of quickly developing new metrics to incent IT workers toward new behaviors that meet new IT performance goals within a systems development context. It also demonstrates several "best practices" that this author has identified from
ACHIEVING STRATEGIC IT ALIGNMENT more than a dozen case examples and readings on human resource management, as follows. Align IT Metrics with Organizational Goals and Processes. I f y o u d o n o t link metrics to an organization’s vision and goals, you only have facts. Natural’s IT managers explicitly linked each performance metric with the IT function’s goals and multiple metrics categories. In addition, the HR program to instill new core values was reinforced by the emphasis that Natural’s IT managers placed on their own metrics initiative. Finally, the company’s aggressive profitability and growth goals were communicated to the IT workforce so that each employee could see why not only IT delivery but also IT human capital development were IT organization goals that were aligned with the goals of the business. Focus on a Salient, Parsimonious Set of Metrics. B y f o c u s i n g o n t h e achievement of six categories of performance metrics, Natural’s IT managers could more easily communicate them to their IT workforce and business customers. Their templates helped them make decisions about which specific metrics in each category should be introduced first, taking into account the potential relationships across metrics categories. Recognize Motivators and Inhibitors. By explicitly stating the relation-
ships between each metric and brainstorming the intended, and unintended, behaviors from introducing a new metrics category, Natural’s IT managers had a head-start at recognizing potential inhibitors to achieving a given performance goal. Incorporate Data Collection into Work Practices. Performance measurements are not cost-free. By incorporating data collection into regular work processes, costs can be minimized and the monitoring of their collection can be minimized. Further, if customer satisfaction with a given project team is collected at regular points in the project, the data is likely more meaningful (and action-able) than if the project satisfaction data is only collected as part of an annual customer satisfaction survey process. Assign “Owners” and Hold Regular Reviews to Identify Unintended Behaviors and Inhibitors. By assigning ownership of each metric category to one
IT manager, the likelihood of early identification of unintended behaviors and of impacts due to other changes within the IT organization is considerably higher. Because the metrics initiative was new at Natural, regular post-hoc reviews were part of the original metrics project. However, the danger for all organizations is that after an initial implementation period is over, metric monitoring may be forgotten.
Performance Metrics for IT Human Resource Alignment

Remember: "You Get What You Reinforce."4 If on-time delivery is the only
metric that is visibly tracked by management, do not be surprised when project teams sacrifice system quality to finish the project on time.

ONGOING CHALLENGES

Although the performance demands for an IT organization, and therefore its performance metrics, need to continually evolve, several common challenges in designing metrics can also be anticipated. First, it is difficult to show progress when no baseline has been established. One of the early tasks in an IT metrics (re)design program is to establish a baseline for each metric. But, depending on the metric, an internal baseline may take three months, six months, or longer. Too often, IT organizations undertake major transformation efforts but neglect to take the time to identify and capture "before" metrics (or at least "early" metrics) so that progress can be quantified. In some cases, continuing to collect "old" metrics will help to show interim progress.

Another common challenge is paying enough attention to people and process metrics when the organization is faced with aggressive project delivery timelines. In most situations, IT human capital initiatives will take second place to IT delivery demands. This means that a more "balanced" approach, in which more weight is given to IT people issues, can be achieved only if this goal is clearly communicated from the top of the organization.

Although team-based metrics have become more common, the difficulties of moving from an employee appraisal process based on individual-level metrics to one based on team-level metrics should not be underestimated.
For example, it is not uncommon for people who are accustomed to individual rewards to feel inequitably treated when they are rewarded based on the performance of a team or workgroup.3 HR experts have suggested that an employee's perception of equity increases when the reward system is clearly understood and there are opportunities to participate in group-based efforts to improve group performance. A related challenge is how to develop a set of metrics that reinforces both excellent team-based outcomes and exceptional individual talent and innovation.

Finally, today's increasingly attractive technical options for telework and virtual teaming offer a new kind of flexibility in work arrangements that is likely to be valued by workers of multiple generations facing different work–home balance issues, not just Generation X workers who thrive on electronic communications. However, one of the key challenges to
implementing telework arrangements is a metrics issue: moving away from behavior metrics to outcome performance metrics.

CONCLUSION

Human resources are strategic assets that need to be aligned with the goals of the organization. Whether IT human resource skills are plentiful or scarce, developing metrics that reward desired people behaviors and performance is a strategic capability that deserves increased IT management attention.

References

1. Agarwal, R., Brown, C.V., Ferratt, T.W., and Moore, J.E., SIM Member Company Practices: Recruiting, Retaining, and Attracting IT Professionals, June 1999 (www.simnet.org).
2. Kaplan, R.S. and Norton, D.P., The Balanced Scorecard: Translating Strategy into Action, Harvard Business School Press, Boston, 1996.
3. Mohrman, S.A. and Mohrman, A.M., Jr., Designing and Leading Team-Based Organizations: A Workbook for Organizational Self-Design, Jossey-Bass, San Francisco, 1997.
4. Luthans, F. and Stajkovic, A.D., "Reinforce for Performance: The Need to Go Beyond Pay and Even Rewards," Academy of Management Executive, 13(2), 49–57, 1999.
Other Suggested Reading

Mathis, R.L. and Jackson, J.H., Human Resource Management, 9th edition, South-Western College Publishing, Cincinnati, OH, 2000.
ACKNOWLEDGMENTS

The author is grateful to the members of the ICEX Knowledge Exchange for IT Human Resources for sharing their insights on the topic, including the ICEX group leaders Sarah B. Kaull and Kelly Butt.
Chapter 8
Is It Time for an IT Ethics Program? Fritz H. Grupe Timothy Garcia-Jay William Kuechler
Technologists often think of themselves as involved in activities that have no ethical implications. They do not see their systems as being good or bad, or right or wrong, in and of themselves. Neither, in many instances, do they feel that these issues are part of their responsibility: let someone else decide whether to use the system, how it should be deployed, or how the data collected might be reused. But ethical questions intrude into IT operations whether anyone wants them to or not. The recent designation by the Pope of St. Isidore of Seville as the patron saint of the Internet does not eliminate the need for organizational as well as personal ethics in the area of information technology. Consider the ethical issues raised when discussions focus on questions such as:

• Is it permissible for client-related, personally identifiable data to be used, traded, and sold?
• Assuming that a company has the legal right to monitor electronic mail, can this mail be read by specific people (i.e., the immediate supervisor, the IT manager, the corporate lawyer)?
• Can employee data be shared with an insurance company?
• Are systems that store personal data vulnerable to computer hacking?
• Should multiple conversational language programs be introduced simultaneously or as they become ready?
• What responsibility do technicians have to report "suspicious," perhaps pornographic, files on corporate microcomputers?
• Should tracking software be used to monitor employee movements on the Internet?
• At what point do your e-mails to customers become unwelcome spam?

0-8493-1595-6/03/$0.00+$1.50 © 2003 by CRC Press LLC
To be sure, these ethical issues may also have legal and practical implications. Nonetheless, IT personnel should not approach these issues as though their actions are ethically neutral. They are not. Few IT workers know that professional organizations such as the Association for Computing Machinery and the Institute of Electrical and Electronics Engineers have promulgated codes of ethics. Of those who do, even fewer know how to apply the codes or have entered into serious conversations about the ethical trade-offs they may be required to consider. Most IT workers consider themselves ethical, but ethical decision making requires more than just believing that you are a good person. It also requires sensitivity to the ethical implications of decisions. Further, ethical discussions rarely receive the depth of analysis they deserve. Many of these questions demand an ability to evaluate issues with complex, ambiguous, and incomplete facts. What is right is not necessarily what is most profitable or cheapest for the company. Ethical decision making requires ethical commitment, ethical consciousness, and ethical competency.

Currently, the ethical framework of IT is based primarily on the tenets of individual ethics. This is problematic, however, because it suffers from sins of omission (i.e., forgetting to ask relevant questions) and sins of commission (i.e., being asked by a superior to undertake unethical actions without being able to invoke personal ethical standards). Many governmental IT agencies are implementing formal approaches to raising, discussing, and resolving ethical questions. The time may be ripe to discuss doing the same in business IT departments.

WHY AN ETHICS PROGRAM?

If management believes that it and its employees are basically ethical, why is a formal ethics program worth pursuing?
Perhaps the strongest among many motivations for this effort is the desire to make ethical behavior standard practice within the organization. Employees under pressure to economize, to reach more clients, and to produce more revenue may begin to feel that unethical practices are implied — if not even encouraged. An ethics program announces management's commitment to ethical behavior in all aspects of the IT effort. It encourages people to adopt and pursue high ethical standards of practice. Knowledge that ethical issues are being debated motivates them to identify these issues and make them visible. It shapes their behavior so that they act ethically and have confidence that management will back them whenever they take ethically correct actions. By considering ethical positions prior to the development of new computer systems, conscious decision making can be incorporated when change is least costly and before damage from an ethically indefensible system is incurred. The support of professional codes of ethics promotes the image of IT workers as professionals in pursuit of reputable goals. Moreover, it has often been observed that good ethics is good business. The customers of an ethical business soon
come to see that the company is committed to providing the best possible service. Ethical considerations should be openly and thoroughly discussed when systems that affect the company's workforce are being implemented. They also loom larger when systems affect vulnerable populations, the poor, or the under-educated. Although it should not be a special consideration, the prospect that spin-off effects of a system might bring the organization into public view encourages special attention to the ethical basis of a system and whether or not it can stand public scrutiny.

HOW TO ORGANIZE ETHICS AS A PROGRAM

IT management is complex, driven by many forces, and subject to issues with a growing number of ethical implications. Maintaining high personal ethics when conducting daily business activities as a manager is extremely important, but maintaining high organizational ethics must be every employee's responsibility as well. To that end, we suggest that IT organizations adopt an ethics program to help their staff become aware of and deal with these issues. Building and adopting an organizational ethics program cannot make people ethical, but it does help them make better decisions. The benefits accrue to employees who are treated ethically as much as they do to customers and clients. One ethicist suggests that an ethics program includes the need to:
• Establish organizational roles to manage ethical issues.
• Schedule ongoing assessment of ethics requirements.
• Establish required operating values and behaviors.
• Align organizational behaviors with operating values.
• Develop awareness and sensitivity to ethical issues.
• Integrate ethical guidelines into decision making.
• Structure mechanisms to resolve ethical dilemmas.
• Facilitate ongoing evaluation and updates to the program.
• Help convince employees that attention to ethics is not just a knee-jerk reaction done to get out of trouble or to improve one's corporate public image.
The number and magnitude of challenges facing IT organizations are unprecedented. Ethical issues that contribute to the anxiety of IT executives, managers, and staff are dealt with every day. Among the sources of this angst are pressure to reduce costs, mergers and acquisitions, financial and other resource constraints, and rapid advances in IT that complicate and often hide the need for ethical decision making during system design, development, and implementation. However, people cannot and should not make such decisions alone or without a decision-making framework. IT organizations should have vehicles, such as a code of ethics and an ethics program, to assist with the decision-making process. Perhaps the precise steps presented here are not as important as the initiation of some well-demarcated means by which to inaugurate a conscious, ethical decision-making process. What is important is not so much the need for an academically defined methodology as the need for IT to adopt a disciplined methodology for ethical decision making. Individuals in the organization need to reflect on the mission and values of IT and use them as a guide, either by themselves or in concert with a defined methodology.

PRINCIPLES OF ETHICS

Before identifying a few core ethical principles that should be taken into account in evaluating a given issue, it is necessary to distinguish ethical and moral assessments (questions of right and wrong) from ostensibly related principles. Legal principles, for example, impose sanctions for improper actions. One may find that what makes an action right and what makes it legal are different, perhaps even in conflict. It is also important to note that what is politically or technically desirable and what is ethical may not be the same. Guiding ethical principles set standards for the organization that go beyond the law in such areas as professional ethics, personal ethics, and general guiding principles. These principles will not always dictate a single, ethically acceptable course of action, but they help provide a structure for evaluating and resolving competing ethical claims. There are many tools and models for financial and logistic decision making, but few guides to indicate when situations might have an ethical implication. Yet this awareness is a crucial first step before decisions are made. Recognizing the moral context of a situation must precede any attempt to resolve it.
Exhibit 1 displays the most commonly asserted ethical principles — generic indicators to be used as compelling guides for an active sense of right and wrong. For each principle, an example is given of an ethical issue that might be raised by people applying that principle.

STRATEGIES FOR FOSTERING AN ETHICAL ORGANIZATION

Adopt the Goal of Implementing an Ethics Program

To implement a successful ethics program at any level, executive leadership on the part of the president of the organization is desirable. Within IT, an ethics program will need the equally public support of the IT director. Both executives must be committed to offering leadership in this arena. Public and unequivocal statements supporting the attainment of ethical goals should be promoted as a general goal of the company and of IT.
Exhibit 1. Selected Ethical Bases for IT Decision Making

Golden rule: Treat others as you wish to be treated.
• Do not implement systems that you would not wish to be subjected to yourself.
• Is your company using unlicensed software although your company itself sells software?

Kant's categorical imperative: If an action is not right for everyone, it is not right for anyone.
• Does management monitor call center employees' seat time, but not its own?

Descartes' rule of change (also called the slippery slope): If an action is not repeatable at all times, it is not right at any time.
• Should your Web site link to another site, "framing" the page so users think it was created by and belongs to you?

Utilitarian principle (also called universalism): Take the action that achieves the most good. Put a value on outcomes and strive to achieve the best results. This principle seeks to analyze and maximize the good of the covered population within acknowledged resource constraints.
• Should customers using your Web site be asked to opt in or opt out of the possible sale of their personal data to other companies?

Risk aversion principle: Incur the least harm or cost. Given alternatives that have varying degrees of harm and gain, choose the one that causes the least damage.
• If a manager reports that a subordinate criticized him in an e-mail to other employees, who would do the search and see the results of the search?

Avoid harm: Avoid malfeasance or "do no harm." This basis implies a proactive obligation of companies to protect their customers and clients from systems with known harm.
• Does your company have a privacy policy that protects, rather than exploits, customers?

No free lunch rule: Assume that all property and information belongs to someone. This principle is primarily applicable to intellectual property, which should not be taken without just compensation.
• Has your company used unlicensed software?
• Or hired a group of IT workers from a competitor?

Legalism: Is it against the law? Moral actions may not be legal, and vice versa.
• Might your Web advertising exaggerate the features and benefits of your products?
• Are you collecting information illegally on minors?

Professionalism: Is an action contrary to codes of ethics? Do the professional codes cover the case, and do they suggest the path to follow?
• When you present technological alternatives to managers who do not know the right questions to ask, do you tell them all they need to know to make informed choices?

Evidentiary guidance: Is there hard data to support or deny the value of taking an action? This is not a traditional "ethics" value, but it is a significant factor in IT's policy decisions about the impact of systems on individuals and groups. This value involves probabilistic reasoning, where outcomes can be predicted based on hard evidence from research.
• Do you assume that you know PC users are satisfied with IT's service, or has data been collected to determine what they really think?
Client/customer/patient choice: Let the people affected decide. In some circumstances, employees and customers have a right to self-determination through the informed consent process. This principle acknowledges a right to self-determination in deciding what is "harmful" or "beneficial" for their personal circumstances.
• Are your workers subjected to monitoring in places where they assume that they have privacy?

Equity: Will the costs and benefits be equitably distributed? Adherence to this principle obligates a company to provide similarly situated persons with the same access to data and systems. This can imply a proactive duty to inform and to make services, data, and systems available to all those who share a similar circumstance.
• Has IT made intentionally inaccurate projections as to project costs?

Competition: This principle derives from the marketplace, where consumers and institutions can select among competing companies based on considerations such as degree of privacy, cost, and quality. It recognizes that to be financially viable in the market, one must have data about what competitors are doing and must understand and acknowledge the competitive implications of IT decisions.
• When you present a build-or-buy proposition to management, is it fully aware of the risk involved?

Compassion/last chance: Religious and philosophical traditions promote the need to find ways to assist the most vulnerable parties. Refusing to take unfair advantage of users or others who do not have technical knowledge is recognized in several professional codes of ethics.
• Do all workers have an equal opportunity to benefit from the organization's investment in IT?

Impartiality/objectivity: Are decisions biased in favor of one group or another? Is there an even playing field? IT personnel should avoid potential or apparent conflicts of interest.
• Do you or any of your IT employees have a vested interest in the companies that you deal with?
Openness/full disclosure: Are persons affected by this system aware of its existence, aware of what data is being collected, and knowledgeable about how it will be used? Do they have access to the same information?
• Is it possible for a Web site visitor to determine what cookies are used and what is done with any information they might collect?

Confidentiality: IT is obligated to determine whether the data it collects on individuals can be adequately protected to avoid disclosure to parties whose need to know is not proven.
• Have you reduced security features to hold expenses to a minimum?

Trustworthiness and honesty: Does IT stand behind ethical principles to the point where it is accountable for the actions it takes?
• Has IT management ever posted or circulated a professional code of ethics with an expression of support for seeing that its employees act professionally?
Establish an Ethics Committee and Assign Operational Responsibility to an Ethics Officer

An ethics infrastructure links the processes and practices within an organization to the organization's core mission and values. It provides a means by which employees can raise ethical concerns without fear of retribution and demonstrates that the company is interested in fostering ethical conduct. It is a mechanism that reflects a desire to infuse ethics into decision making.

First, establish an IT Ethics Committee, the purpose of which is to provide a forum for the improvement of IT and organizational ethics practices. This group, which need not be limited to the IT staff, should include people who possess knowledge and skills in applied ethics. The members should have appropriate knowledge of systems development to assist developers as they create systems that are ethically valid. The members themselves, and especially the chief ethics officer, should be seen as having personal characteristics consistent with the functions of the committee. That is, they should be respected, personally honest, of high integrity and courage, ethical, and motivated and committed to creating an ethical organization.
The basic functions of the committee include:

• Educating IT staff as to the nature and presence of ethical issues and alerting them to methods of dealing with these issues
• Recommending and overseeing policies guiding the development of new computer systems and the reengineering of old computer systems
• Increasing staff, client, and customer satisfaction through the deployment of ethically defensible systems
• Identifying key system features that avoid institutional and individual liability
• Encouraging and supporting ethical standards of practice, including the creation of practices that remove ethical uncertainty and conflicts

Given that most ethical questions in IT are related to systems development and maintenance practices and to data privacy, adequate time to consider the issues at stake is not as significant an issue as it might be in other organizations. At a hospital, for example, ethical issues may take new forms every day. The committee must have the prestige and authority to effect changes in system development and to keep the affected employees free of reprisals from managers whose priorities and (un)ethical principles might otherwise hold sway. Means should be found to reward, rather than punish, people who identify ethical problems. This may enable them to focus on broader organizational issues as well as IT conflicts specifically. The committee needs to be proactive in identifying emerging ethical issues that not all IT personnel have come to anticipate. Initial tasks of the committee
and the Chief Policy Officer (CPO) are generally not difficult to determine. They should seek to clearly define the organization's privacy policy, its security policy, and its workplace monitoring policy.

Adopt a Code of Ethics

Examine the codes of ethics from the Institute of Electrical and Electronics Engineers and the Association for Computing Machinery. Other codes are also available. Adopt one of the codes as the standard for your IT group as a means of promoting the need for individuals to develop their concern for ethical behavior.

Make the Ethics Program Visible

Post the code of ethics prominently and refer to it as decisions are being made so that people can see that its precepts have value. Similarly, let IT workers know of decisions made and of issues being discussed so that they gain experience with the processes in place and understand that ethics are of compelling interest to the company. Let them know how ethical errors might have been made in the past but have since been removed or eliminated. Show gratitude to people who raise issues, rather than treating them as troublemakers. Provide occasional workshops on ethical questions as part of an ongoing in-service training effort, both to better inform people about how they should proceed if a question arises and to advertise your efforts more effectively.

Establish a Reporting Mechanism

For people to raise ethical concerns, they must feel comfortable doing so. This should be possible even if a supervisor does not wish to see the question raised. Let people know how they can raise an issue without fear of dismissal or retaliation.

Conducting Ethical Analysis

How does one analyze ethical questions and issues? There are both quantitative and qualitative approaches to this task. The ethics committee must first develop a clear set of mission statements and value statements.
Nash, writing in the Harvard Business Review, suggests that participants in a policy discussion of this nature consider the following questions:

• Have you defined the problem accurately?
• How would you define the problem if you stood on the other side of the fence?
• How did this situation occur in the first place?
• To whom and to what do you give your loyalty as a person and as a member of the corporation?
• What is your intention in making this decision?
• How does this intention compare with the probable result?
• Whom could your decision or action injure?
• Can you discuss the problem with the affected parties before you make your decision?
• Are you confident that your position will be as valid over a long period of time as it seems now?
• Could you disclose without qualm your decision or action to your boss, your CEO, the board of directors, your family, society as a whole?
• What is the symbolic potential of your action if understood? If misunderstood?
• Under what conditions would you allow exceptions to your stand?
Such questions are likely to generate many useful discussions, both formal and informal, as questions such as those noted earlier are reviewed or reevaluated.

Consider a Board Committee on Ethics

A large company might consider creating a subcommittee on ethics from within the board of directors. This committee would review ethical questions that affect other functional areas such as marketing and financial reporting.

Review and Evaluate

Periodically determine whether the structures and processes in place make sense. Are other safeguards needed? Were recommendations for ethical behavior carried out? Have structural changes elsewhere in the company caused a need to reassess how the program is working and how it can be improved?

CONCLUSION

Current business literature emphasizes that organizational ethics is not a passing fad or movement. Organizational ethics is a management discipline with a programmatic approach that includes several practical tools. As stated, it is not imperative that this discipline have a defined methodology. However, organizational ethics does need to consist of knowledge of ethical decision making; process skills that focus on resolving value uncertainty or conflict as it emerges in the organization; the ability to reflect, both professionally and personally, on the mission, vision, and values of IT units; and an ethical commitment from the board of trustees and executive leaders. An ethical organization is essential for quality IT and for successful organizations.
Based on an exhaustive literature review and comparison of industry standards, we believe it is important that IT develop an organizational ethics discipline that is communicated throughout the organization, from the top down, as an integral part of daily business operations. It is invaluable to have a process and a structure that guide decisions on questions such as the extent to which it is the company's responsibility to guard against identity theft, to prevent software piracy in all of its offices — no matter how widely distributed — to protect whistleblowers should the need arise, or to limit the causes of repetitive stress injuries.

An ethics program seeks to encourage all personnel to become attentive to the ethical implications of the work in which they are engaged. Once they are conscious of the potentially serious ethical implications of their systems, they begin to consider what they can do to attain ethically responsible goals using equally responsible means to achieve those ends. They incorporate into their thinking the implications other professionals bring to the profession's attention. Most importantly, ethical perspectives become infused into the operations of the IT unit and the corporation generally.

It is clear that ethical organizations do not emerge without leadership, institutional commitment, and a well-developed program. Further, ethical organizations that have clearly presented mission and values statements are capable of nurturing ethically grounded policies and procedures, competent ethics resources, and broader corporate support for ethical action. It is time for an ethics program in IT.

References and Further Information

Baase, Sara. A Gift of Fire: Social, Legal and Ethical Issues in Computing, Upper Saddle River, NJ: Prentice-Hall, 1997.
Bowyer, Kevin. Ethics and Computing: Living Responsibly in a Computerized World, New York: IEEE Press, 2001.
Business Ethics: Managing Ethics in the Workplace and Social Responsibility, http://www.mapnp.org/library/ethics/ethics.htm. Johnson, Deborah G. Computer Ethics, Upper Saddle River, NJ: Prentice-Hall, 1994. Nash, Laura L. “Ethics without the Sermon,” Harvard Business Review, 59, 1981. Spinello, Richard. Cyberethics: Morality and Law in Cyberspace, Sudbury, MA: Jones and Bartlett, 1999. Edgar, Stacey L. Morality and Machines: Perspectives on Computer Ethics, Sudbury, MA: Jones and Bartlett, 1999.
Chapter 9
The CIO Role in the Era of Dislocation James E. Showalter
0-8493-1595-6/03/$0.00+$1.50 © 2003 by CRC Press LLC
Peter Drucker has suggested that the role of the CIO has become obsolete. He argues that information technology has become so mission critical to reaching the company’s strategic goals that responsibility for it will ultimately be subsumed by the CEO or the CFO. After years of viewing information technology as an excessive but “necessary cost,” executive management has now awakened to the recognition that failing to embrace and manage “dislocating” information technologies can mean extinction. A dislocating technology is a technological event that enables the development of products and services whose impact creates completely different lifestyles or commerce. The Internet has been such a dislocating force, and others are on the horizon. Navigating these dislocations requires leadership and vision that must span the total executive staff, not just the CIO. This, I believe, is Drucker’s point: the management of dislocating technologies transcends any individual or organization and must become integral to the corporate fabric. However, I also believe there is still an important role, albeit a different one, for the CIO in the 21st-century enterprise. In his book The Innovator’s Dilemma: When New Technologies Cause Great Firms to Fail, Clayton Christensen provides a superb argument for corporate leadership that takes the company to new enhanced states enabled by technological dislocations. The Silicon Valley success stories have been entrepreneurs who recognize the market potential of dislocations created by technology. I believe the 21st-century CIO’s most important role is to provide entrepreneurial leadership for the company during these periods of dislocation. FROM PUNCTUATED EQUILIBRIUM TO PUNCTUATED CHAOS? Evolutionary biologist Stephen Jay Gould theorizes that the continuum of time is periodically “punctuated” with massive events or discoveries that
create dislocations of the existing state of equilibrium, leading to a new level of prolonged continuous improvement (i.e., punctuated equilibrium). The dinosaurs became painfully aware of this concept following the impact of the meteorite into the Yucatan peninsula. In an evolutionary sense, the environment has been formed and shaped between cataclysmic dislocations — meteorites, earthquakes, droughts, plagues, volcanoes, and so on. Although exact scenarios are debatable, the concept is plausible even from events occurring in our lifetime. There are many examples of analogous technological discoveries and innovations (the internal combustion engine, antibiotics, telephone service, the interstate highway system, etc.) that promoted whole new arrays of products and possibilities that forever changed commerce and lifestyles. In each of these examples, our quality of life improved through the conveniences these technologies enabled. The periods between dislocations are getting shorter. For example, the transitions between the horse, the internal combustion engine, and the fuel cell spanned a century, whereas the transformations between centralized computing, distributed computing, desktop computing, network computing, and ubiquitous computing have occurred in about 40 years. In the next century, technological dislocations in communications, genetics, biotechnology, energy, transportation, and other areas will occur in even shorter intervals. In fact, change is expected so frequently that Bill Gates has suggested that our environment is actually in constant change or upheaval marked by brief respites — “punctuated chaos” rather than punctuated equilibrium. We are currently in the vortex of a dislocation or transition period that many companies will not survive in the 21st century. With certainty, many new companies, yet unidentified, will surface and replace many of the companies currently familiar to us.
No company is exempt from this threat, even the largest and most profitable today. The successes will be those that best leverage the dislocating technologies. To protect their companies from extinction, CIOs must understand the economic potentials and consequences of dislocating technologies. THE ERA OF NETWORK COMPUTING We are currently experiencing a new technological dislocation that embodies potential equivalent to, or possibly greater than, that of any previous innovation. This new dislocation is network computing or, perhaps a better nomenclature, ubiquitous communications. Network computing involves the collaborative exchange of information between objects, both human and inanimate, through the use of electronic media and technologies. Although network computing could arguably be traced to early telecommunications applications in which unsophisticated display terminals were attached to mainframe computers through a highly proprietary communications network, the more realistic definition begins with the Internet. Moreover, thinking must now focus on anything-to-anything interchange and not be limited only to human interaction. Navigating this transition will challenge every company — a mission for the CIO. From today’s vantage, network computing includes (1) the Internet and Internet technologies and (2) pervasive computing and agent technologies. The Internet and Internet Technologies The compelling and seductive power of the Internet has motivated all major worldwide enterprises to adopt and apply Internet technologies within their internal networks under local auspices. These private networks, called intranets, are rapidly becoming the standard communications infrastructure spanning the total enterprise. Intranets are indigenous and restricted to the business units that comprise the enterprise. They are designed to be used exclusively by employees and authorized agents of the enterprise in such a way that the confidentiality of the enterprise’s data and operating procedures is protected. Ingress and egress to and from intranets are controlled and protected by special gateway computers called firewalls. Gateway services, called portal services, now enable the enterprise to create a single portal to its network of internal Web sites representing specific points of interest that the company allows for limited or public access. In general, the development and stewardship of intranets are under the auspices of the CIO. Whereas the Internet conceptually initiated the possibilities afforded by network computing to an enterprise, it is the intranets that have enabled the restructuring or reengineering of the enterprise. Essentially all major enterprises have launched intranet initiatives.
Due largely to ease of implementation and low investment requirements, enterprises are chartering their CIOs to implement intranets posthaste and without time-consuming cost justifications. In most cases, enterprises are initially implementing intranets to provide a plethora of “self-service” capabilities available to all or most employees. In addition to the classic collaboration services (e-mail, project management, document management, and calendaring), administrative services such as human resource management and financial services have been added that enable employees to manage their respective portfolios without the intervention of service staffs. This notion enables former administrative staffs to be transformed into internal consultants, process specialists, and other more useful positions for assisting in the successful implementation of major restructuring issues, staff retraining, and, most important, the development of a new corporate culture. Over time, all applications, including mission-critical applications, will become part of the intranet. Increasingly, these duties are being outsourced to trusted professional intranet specialists. Clearly, CIOs must provide the leadership in the creation and implementation of the company’s intranet. Companies in the 21st century will be networks of trusted partners. Each partner will offer specific expertise and capabilities unavailable and impractical to maintain within the host or nameplate company. Firms producing multiple products will become a federation of subsidiaries, each specific to the products or services within its market segment. Each company will likely require different network relationships with different expert providers. This fluidity is impossible within the classical organizational forms of the past. To meet these growing requirements and to remain profitable, companies are forced to reduce operating costs and develop innovative supply chain approaches and innovative sales channels. Further, in both the business-to-business (buy side) and the business-to-customer (sell side) supply chains, new “trusted” relationships are being formed to leverage supplier expertise such that finished products can be expedited to the customer. Initially, this requirement has motivated enterprises to “open” their intranets to trusted suppliers (buy side) and to dealers, brokers, and customers (sell side) to reduce cycle times and cost. These extended networks are called extranets. However, the cost of maintaining extranets is extreme and generally limited to large host companies. In addition, lower-tier suppliers and partners understandably resist being “hard wired” into multiple proprietary relationships with multiple host companies. This form of extranet is unlikely to persist and will be replaced by a more open approach.
Industry associations such as SITA (Société Internationale de Télécommunications Aéronautiques) for the aerospace industry and the Automotive Network Exchange (ANX) for the automotive industry have recognized the need for a shared environment in which companies within a specific industry could safely and efficiently conduct commerce. Specifically, an environment is needed in which multiple trusted “virtual networks” can simultaneously coexist. In addition, common services indigenous to the industry, such as baggage handling for airlines, could be offered at a savings to each subscribing member. These industry-specific services — “community-of-interest networks” (COINS) — are evolving in every major industry. COINS are analogous to the concept of an exchange. For example, the New York Stock Exchange is an environment in which participating companies subscribe to a set of services that enable their securities to be traded safely and efficiently.
For all the same reasons that intranets were created (manageability, availability, performance, and security), exchanges will evolve across entire industries and reshape the mode and means of interenterprise commerce. Supply and sales chain participants within the same industry are agreeing on infrastructure and, in some noncompetitive areas, on data and transaction standards. In theory, duplicate infrastructure investments are eliminated and competitiveness becomes based on product/customer relationships. The automotive industry, for example, has cooperatively developed and implemented the ANX for all major original equipment manufacturers and (eventually) all suppliers. In addition, ANX will potentially include other automotive-related market segments, such as financial institutions, worldwide dealers, product development and research centers, and similar participants. Industries such as aerospace, pharmaceuticals, retail merchandising, textiles, consumer electronics, etc. will also embrace industry-specific exchanges. Unlike the publicly accessible Internet, which is essentially free to users, exchanges are not free to participants. By agreement, subscription fees are required to support an infrastructure capable of providing the service levels required for safe, effective, and efficient commerce. The new “global internet” or “information highway” (or whatever name is ultimately attached) will become an archipelago of networks, one of which is free and open (the Internet) while the others are private industry and enterprise subscription networks. The resulting architecture is analogous to today’s television paradigm — free channels (the public Internet), cable channels (industry-specific exchanges), and pay-per-view channels (fee for service, such as a video teleconference).
Regardless of how this eventually occurs, intranets are predicted to forever change the internal operations of enterprises, and exchanges are predicted to change commerce among participants within an industry. Again, the CIO must provide the leadership for his or her firm to participate in this evolving environment. Pervasive Computing and Agent Technology The second dislocation is ubiquitous or pervasive computing. Andy Grove of Intel estimated that there would be 500 million computers by 2002. In most cases, today’s computers are physically located in fixed locations, in controlled environments, on desktops, and under airline seats. They are hardly “personal” in that they are usually away from where we are, similar to our automobiles. However, this is changing dramatically. There are already six billion pulsating noncomputer chips embedded in other objects throughout the world, such as our cars, thermostats, and hotel door locks. Called “jelly beans” by Kevin Kelly in his books Out of Control and New Rules for the New Economy, these will explode to over ten billion by 2005. Also known as “bots,” these simple chips will become so inexpensive that they can affordably be attached to everything we use and even discarded along with the item when we are finished using it, such as clothing and perishables. Once the items we use in daily life become “smart” and are capable of “participating” in our daily lives, the age of personal computing will have arrived. Programmable objects or agents are the next technological dislocation. Although admittedly sounding futuristic and even a bit alarming, there is little doubt that technology will enable the interaction of “real objects” containing embedded processors in the very near future. Java, Jini, the Java chip, and next-generation (real-time) operating systems are enabling information collection and processing to be embedded within “real-life” objects. For example, a contemporary automobile contains between 40 and 70 microprocessors performing a vast array of monitoring, control, guidance, and driver information services. Coupled with initiatives for intelligent transportation systems (ITS), the next-generation vehicles will become substantially safer, more convenient, more efficient, and more environmentally friendly than our current vehicles. This same scenario is also true of our homes, transportation systems, communications systems (cellular phones), and even our children and persons. Every physical object we encounter or employ within our lifestyles can be represented by a software entity embedded within the object or representing the object as its “agent.” Behavioral responses to recognizable stimuli can be “programmed” into these embedded processors to serve as our “agents” (e.g., light switches that sense the absence of people in a room and turn off to save energy, and automobiles that sense other automobiles or objects in our path and warn us or even take evasive action).
Many other types of agents perform a plethora of other routine tasks that are not specific to particular objects, such as searching databases for information of interest to the reader. The miniaturization of processors (jelly beans), network programming languages (Java), network connectivity (Jini), and appliance manufacturers’ commitment will propel this new era to heights yet unknown. Fixed process systems will be replaced by self-aligning systems enabled by agent technology. These phenomena will not occur naturally but, rather, must be directed as carefully as all other corporate resources. In my judgment, this is the role of the 21st-century CIO. SUMMARY In summary, the Internet has helped launch the information age and has become the harbinger for the concepts and structure that will enable international communication, collaboration, and knowledge access for commerce and personal growth. Although the Internet is not a universal solution to all commerce needs, it has, in an exemplary manner, established the
direction for the global information utility. It will remain an ever-expanding and vibrant source for information, personal communication, and individual consumer retailing. Intranets, developed by enterprises, are reshaping the manner in which all companies will structure themselves for the challenging and perilous journey in the 21st century. Complete industries will share a common exchange infrastructure for exchanging information among their supply, demand, product, and management support chains. Pervasive computing will emerge with thunder and lightning over the next few years and offer a dazzling array of products that will profoundly enrich our standard of living. Agent technology coupled with embedded intelligence in ten billion processors will enable self-aligning processes that adapt to existing environmental conditions. CIOs who focus on the business opportunities afforded by dislocating information technologies will be the ones who succeed. Even if the CIO title changes in the future, an officer of the company must provide leadership in navigating the company through technological transitions or dislocations. In this new millennium, however, there is a lot of work to be done to create the environment discussed in this chapter. As Kevin Kelly observes: “…wealth in this new regime flows directly from innovation, not optimization: that is, wealth is not gained by perfecting the known, but by imperfectly seizing the unknown.”
Successful CIOs will adopt this advice as their credo. Recommended Reading Christensen, C. 1997. The innovator’s dilemma: When new technologies cause great firms to fail. Boston: Harvard Business School Press. Drucker, P. 1994. Introduction. In Techno vision, edited by C. Wang. New York: McGraw-Hill. Gates, B. 1999. Business @ the Speed of Thought, New York: Warner Books. Kelly, K. 1997. The new rules for the new economy — twelve dependable principles for thriving in a turbulent world. Wired, September, 140. Schlender, B. 1999. E-business according to Gates. Fortune, April 12.
Chapter 10
Leadership Development: The Role of the CIO Barton S. Bolton
A successful CIO will always leave a legacy upon leaving an organization. What he or she will be remembered for will not be the applications portfolio, or the beloved infrastructure, or even the security plan. It will be the people and the organization left behind that will represent the real accomplishments of the CIO. It will be that legacy which will make the CIO a “Level 5” leader. Per Jim Collins (2001), a Level 5 leader is one whose organization continues to perform at an extraordinary level even after he or she has left. Put another way, you do not develop the organization; you develop the people and the people develop the organization. It is done successfully no other way. To get there, the CIO must serve as a role model by first understanding his or her “leadership style,” then understanding the difference between leadership and management, and when to apply each. Then, the CIO needs to develop leadership capability throughout all levels of the IT organization. Those capabilities are not just for the CIO’s direct reports and other IT managers, but also for the key individual contributors as they lead various projects, programs, and technical initiatives. WHAT IS A LEADER? So, what is this “thing” called a leader? In its simplest form, a leader is someone who has followers. A leader is found at all levels in an organization and operates very differently from a manager, although he or she may have the title of manager. A good leader also knows when and how to be a follower, depending, of course, on the given situation. Let us also dispel some myths about leadership. Leaders are not born but can be created or developed. Having charisma helps but is not a necessary requirement. In some ways, having the personal preference of an extrovert helps, but many leaders, be they CIOs or CEOs, are basic introverts who have learned to be outward facing when they need to step into their roles as leaders. Perhaps the following quotes from leadership experts will help clarify and distinguish leaders from managers: “A leader is best when people barely know he exists, not so good when people obey and acclaim him, worse when they despise him. But of a good leader, who talks little, when his work is done, his aim fulfilled, they will say: We did it ourselves.” —Lao Tzu, Tao Te Ching
“The first responsibility of a leader is to define reality. The last is to say thank you. In between the two, the leader must become a servant and a debtor…. A friend of mine characterized leaders simply like this: ‘Leaders don’t inflict pain; they bear pain.’” —Max DePree, Leadership Is an Art “People are led and things are managed.” —Stephen Covey, Principle-Centered Leadership “When leadership is defined not as a position you hold but as a way of ‘being,’ you discover that you can lead from wherever you are.” —Rosamund Stone Zander and Benjamin Zander, The Art of Possibility
What Is a Leadership Style? Once one accepts that there are differences between leadership and management, the next step is to discover one’s own leadership style. It varies from person to person, much like personality. Leadership style is not as structured as management style. And, of course, there is no “silver bullet” for becoming a leader. If there were, there would not be so many books published on the subject in the past several years. There are, however, seven essentials that serve as the foundation of everyone’s leadership style (see Exhibit 1). Every leader knows what he or she believes in, is good at, and what is most important to him or her. If you do not know who you are, how are you ever going to lead others? It is usually easier for someone who has had years of experience to understand what he or she is all about, as the patterns of life are more obvious than for those of a younger person. However, one needs to search seriously for and understand one’s mission in life, which is based on the self-awareness that one has. “He who knows others is learned; he who knows himself is wise.” —Lao Tzu, Tao Te Ching
Exhibit 1. Seven Essentials of Leadership Style
1. Self-awareness: who you are and what you are good at
2. Personal values: what you believe in and what is important to you
3. Integrity and character: how you operate
4. Care about people: genuine respect for others
5. Personal credibility: positive reputation and relationships
6. Holistic viewpoint: seeing the big picture
7. Continuous learning: constant personal growth
A leader’s ability to build and maintain good relationships is another major consideration. As Jim Kinney, retired CIO of Kraft Foods, has said: “Credibility is 80 percent relationships and 20 percent expertise.” Business relationships are based on such things as integrity and a real caring for other people. Good leaders, after all, depend on sound relationships for people to follow them. They do not demand respect — they earn it. The seven leadership essentials are augmented by various practices (see Exhibit 2). These may be the approach the leader takes in various situations or a personal viewpoint on a subject. Dealing with ambiguity, for example, is a tough challenge for many IT people, who are inclined to want everything answered with all “i’s” dotted and all “t’s” crossed. However, the business world is not that precise and not all decisions are made with total information being available. So, an effective leader learns to adapt to the situation and become comfortable in the so-called gray area.

Exhibit 2. Types of Leadership Practices
• Ambiguity
• Cultures
• Ethics
• Creativity/innovation
• Empowerment
• Use of power
• Getting results
• Life balance
Leaders need to be sensitive to the culture in which they operate, as culture is often defined as “how decisions get made around here.” A truly effective leader understands that the role of the top executive is to set the culture for the IT organization, but to do so within the cultural norms of the enterprise. Of course, when the CIO is new to the organization, there is likely an existing, known culture. If change is required, the CIO must have an effective leadership practice to bring about such a change. Ethics is a subject on everyone’s mind today, given some of the corporate scandals in the news. Ethical practices are based on a combination of
personal values and societal norms. They clearly vary from country to country. They represent the boundaries in which we operate and define what is good versus bad or acceptable versus unacceptable behavior. Ethics always involve choices. Without ethics one most likely damages, if not destroys, one’s integrity. There are cases of unethical leaders, but one has to question how effective they were when judged by their results…or lack thereof. Many leaders are viewed as people with new ideas; leaders tend to go where others have not thought to go. Most entrepreneurs operate this way; they are not afraid of being innovative, and they depend upon their personal creativity. Most of these leaders like to build things and are not content with just running day-to-day operations. It is all part of the visions they have and the courage to pursue them. Effective leaders practice innovation and creativity to make a difference in whatever group they lead. Given that a leader sets a direction, aligns people in that direction, and motivates them to get the results, he or she must find ways to empower people; the power of the people, those who are the followers, must be unleashed. Because the leader genuinely cares about people, he or she establishes a trust with them — a mutual respect. The followers need to know they have the authority to make decisions and that making a mistake is acceptable, as long as it is not repeated. Empowerment of others to perform on the leader’s behalf is a risk the leader must take. Another practice of a leader is the judicious use of power. There is personal power, which is often based on one’s personal credibility and track record of meeting commitments. There is also positional power, which is a function of where the leader sits in the hierarchy. A third form or base of power is that of the organization, and how it is positioned in its industry and society.
A good leader knows when to leverage any of these three forms of power. An effective leader probably depends most on his or her personal power and knows when, and when not, to use it. The overuse of power can diminish one’s credibility as it damages or destroys those vital relationships. The true test of leadership is the results achieved by the leader. All the visions and strategic thinking in the world will not mean anything if nothing is accomplished. Planning is good and necessary, but implementation is the key. The effective leader, using his or her leadership style and all its components, gets things done. It is getting results that really makes the difference. One of the most challenging practices for a leader is seeking and achieving balance in his or her life. It is not only a balance between work and family, but also the third dimension of self. It is a three-legged stool — work, family, you — that must be kept in balance. This kind of balance is not achieved by an equal amount of time (e.g., eight hours for each leg), but more from a personal set of priorities. There are times when work demands extra hours (e.g., a major systems implementation) and there are times when family gets the priority. We all know the phrase, “If I had one more day to live, it wouldn’t be in the office.” But there are times when one owes it to oneself to find that moment of silence and, of course, to maintain one’s health. Life balance is usually based on one’s personal values, along with understanding one’s priorities in life. Leadership style is further developed by adding skills to one’s toolkit. These are usually associated with training received at workshops or seminars. It is easy to list at least 20 such skills, but eight of the key ones are shown in Exhibit 3.

Exhibit 3. Key Leadership Skills
• Facilitating
• Team building
• Listening
• Project management
• Change management
• Communicating
• Giving feedback
• Mentoring/coaching

The aspects of leadership style that cannot be taught but are seemingly more part of the persona can be viewed as characteristics. They appear to be more of the adjectives used to describe the leader. They represent how the leader is perceived. Again, the list can exceed 20 in number, but some of the more representative ones can be found in Exhibit 4.

Exhibit 4. Characteristics of the Leader’s Persona
• Passionate
• Intelligent
• Persistent
• Consistent
• Energetic
• Incisive

In summary, leadership style is based on seven essentials, augmented by key practices, enhanced by various skills, and modified by many characteristics. When totaled, there are some 60 ingredients that determine the style of a leader. The combinations are seemingly endless, which is why there is no “silver bullet” to becoming a leader and why it is a continuous learning process. THE CIO AS ROLE MODEL To be a successful leader, a CIO needs to first discover his or her own leadership style and then nurture or grow it. Developing your own style is a
continuous learning process, and much of your learning will come from being a role model for others — including mentoring or coaching people in your own organization. At the same time, the CIO needs to build a leadership development strategy and a supporting set of programs for the IT organization. Depending on the size and resources of the organization, these programs will most likely be a combination of internal and external learning experiences. They will take the form of various curricula that recognize that leadership development is not done in a one-week seminar, but rather in a series of educational forums over an extended period of time. What is learned in the forum is then applied or practiced on the job. Rotating the person in and out of various job assignments helps set up situations for actual practice. The initial targets for the leadership development in the IT organization should be the key potential leaders, be they part of the management group or individual contributors. Given that leadership capabilities are needed throughout the IT organization and based on the premise that the organization needs to grow its future leaders, both early career and middle career employees should be targeted for leadership development. This then becomes the legacy of the CIO: the building of an organization that knows how to both manage effectively and lead. References and Suggested Reading Buckingham, Marcus and Coffman, Curt, First, Break All the Rules, Simon & Schuster, New York, 1999. Collins, Jim, Good to Great, HarperCollins Publishers, New York, 2001. Covey, Stephen R., Principle-Centered Leadership, Simon & Schuster, New York, 1991. DePree, Max, Leadership Is an Art, Dell Publishing, New York, 1989. Kotter, John P., A Force for Change: How Leadership Differs from Management, The Free Press, New York, 1990. Sun Tzu (translated by Thomas Cleary), The Art of War, Shambhala Publications, Boston and London, 1988. 
Zander, Rosamund Stone and Zander, Benjamin, The Art of Possibility, Harvard Business School Press, Boston, 2000.
Chapter 11
Designing a Process-Based IT Organization
Carol V. Brown and Jeanne W. Ross
As we entered the new millennium, most Fortune 500 and many smaller U.S.-based firms had already invested heavily in a new way of competing: a cross-functional process orientation. Information technology (IT) was a strategic enabler of this new process focus, beginning with the first wave of enterprise systems: enterprise resource planning (ERP) system packages. This new process orientation across internal functions, as well as a new focus on processes with customers and suppliers as part of E-commerce and customer relationship management (CRM) implementations, has added a new complexity to organizational structures. Various governance forms (e.g., matrix, global) and horizontal mechanisms (e.g., new liaison roles and cross-functional teams) have been put in place to help the organization respond more quickly to marketplace changes. These business trends also raise the IT–business alignment issue of how to design an IT organization to effectively support a process-based firm. This chapter provides some answers to this question. Based on the vision and insights of IT leaders in a dozen U.S.-based Fortune 500 firms (see Appendix at the end of this chapter), we first present three organizational catalysts for becoming more process based. Then we summarize four high-level IT processes and six IT disciplines that capture the key thrusts of the new process-based IT organization designs. The chapter ends with a discussion of some organization design challenges faced by IT leaders as they repositioned their IT organizations, along with some specific solutions they initiated to address them. 0-8493-1595-6/03/$0.00+$1.50 © 2003 by CRC Press LLC
Exhibit 1. Catalysts for a Process-Based IT Organization
[Figure: the Competitive Environment (globalization, mergers and acquisitions, E-commerce, demanding customers) drives the Organizational Imperatives (cross-functional process integration, globalization of processes, business restructurings), which in turn shape the IT Organization (core IT processes and IT disciplines).]
NEW ORGANIZATIONAL IMPERATIVES

The process-based companies in our study all faced highly dynamic and competitive business environments, characterized by increased globalization and merger activities, new E-commerce threats, and demanding customers. Their business mandate was to become more process oriented in order to increase responsiveness while simultaneously attaining cost efficiencies. This resulted in new organizational initiatives involving multiple business units to design and implement common, enterprise-level processes. Described below are three organizational imperatives that all of the IT executives we interviewed were proactively addressing. These imperatives were the key catalysts for a rethinking of the role of IT and a redesign of the IT organization (see Exhibit 1).

Cross-Functional Process Integration. Effective cross-functional process integration was believed to be key to achieving increased customer responsiveness and reduced cycle times. All 12 companies were focusing on a small set of cross-functional processes, including order fulfillment and material sourcing, for which IT was a critical integration enabler. Most companies were also de-emphasizing functional and geographic distinctions that were part of their structural history in order to focus on both a “single face to the customer” and “a single view of the customer.” Most participants viewed their own ERP implementations as critical to enabling this enterprise-level view and a new process orientation.

Globalization of Processes. Although all 12 firms already had a global presence, they were striving to become more global in their processes. In particular, order fulfillment was increasingly being viewed as a global process. However, the extent to which a firm was pursuing a global process model and the extent to which regional customization was tolerated varied across the companies.
For example, the consumer products firms were retaining a local flavor for their sales and marketing efforts.
Business Restructurings. Mergers, acquisitions, and divestitures can impose dramatic changes on a firm’s business model, and at least six of the twelve companies had recently experienced one or more of these strategic jolts. Many firms had aggressive growth goals that could only be achieved by merger and acquisition activities. In other firms, changing market conditions and new E-business opportunities had been the primary catalysts for restructuring. All 12 companies were looking for ways to quickly adapt to these types of changes, and common, standard processes were expected to enable faster business restructurings.
Exhibit 2. Core IT Processes
• Lead and enable strategic organizational change
• Creatively deliver IT solutions to strategic business needs
• Ensure totally reliable yet cost-effective infrastructure services
• Manage intellectual assets
CORE IT PROCESSES

Given these organizational imperatives for a new process orientation, what are the core IT processes of an IT organization in a process-based firm? Four core IT processes (Exhibit 2) are critical for a proactive IT leadership role. Each is described below.

Lead and Enable Strategic Organizational Change. New information technologies, particularly Web technologies and packaged software, have created new competitive challenges for organizations that lag in their abilities to implement IT-enabled business processes. The executives we interviewed felt that it was becoming increasingly important for the IT unit to be proactive in identifying how specific technologies could become strategically important to the business. Some noted that in the past, IT organizations had tended to wait for a business imperative to which new IT applications could be applied. In today’s more dynamic and competitive business environments, IT is viewed as a catalyst as well as a solution provider. As two CIOs described it:
“The CIO has to help the company recognize what’s coming and lead — become a visionary.”

“The IT organization is propelling our business…driving the business forward.”

Creatively Deliver IT Solutions to Strategic Business Needs. The responsibility for IT applications has shifted from a software development mind-set to an emphasis on delivering solutions — whether custom built, reused, insourced, or outsourced. This requires identifying alternative approaches to solving strategic business needs. The IT unit is relied upon to assess the trade-offs and obtain the best IT fit in the shortest possible amount of time and at the lowest cost. At one firm, internal personnel provided only 25 percent of the IT services, so the processes to manage the outsourcers who provisioned the remainder had become critically important.

Ensure Totally Reliable yet Cost-Effective Infrastructure Services. An increased dependence on centralized databases for integrated global operations has placed an entirely new level of importance on network and data center operations. The criticality of world-class IT operations now rivals that of strategic IT applications. Although highly reliable, low-cost, 24/7 support has been important for several years, what is different is that the impact of a failure has significantly increased. Firms have become so dependent on IT that there is zero tolerance for downtime. One CIO described the responsibility as “end-to-end management of the environment.”

Manage Intellectual Assets. As customers become more demanding and market conditions more dynamic, organizations need to leverage individual knowledge about how best to respond to these conditions. The participants expected to be increasingly relied upon to implement a knowledge management platform that both supports processes and provides user-friendly tools: 1) processes for sharing best practices and 2) tools to capture, store, retrieve, and link to knowledge about products, services, and customers. One CIO emphasized the need to understand the flow of ideas across functions and the set of processes about which information is shared across businesses.
Exhibit 3. Key IT Disciplines
• Architecture design
• Program management
• Sourcing and alliances management
• Process analysis and design
• Change management
• IT human resource development
KEY IT DISCIPLINES

Given the above core IT processes, what are the key disciplines, or capabilities, that an IT organization needs to develop? Six IT disciplines (Exhibit 3) are key to the effective performance of a process-based IT organization. Each is described below. (Note that this list is intended to identify critical high-level disciplines, not to be exhaustive.)
Architecture Design. An IT architecture specifies how the infrastructure will be built and maintained; it identifies where computing hardware, software, data, and expertise will be located. Addressing the complexities of highly distributed global firms requires a well-designed IT architecture that distinguishes global from local requirements, and enterprisewide from business unit and site requirements. Architectures model a firm’s current vision, structure, and core processes and define key system linkages. Architectures are a vehicle for helping the company “recognize what is coming” and leading the way.

Program Management. Program management includes not just the traditional responsibilities of priority-setting and project management, but also the management of increasingly synergistic and evolutionary application solutions. Program managers are responsible for the coordination and integration of insourced and outsourced IT solutions. Several firms had “systems integration” and “release management” capabilities as part of their IT organization structures. Increased reliance on enterprise system packages and business partner solutions also results in application solutions that are expected to evolve via frequent upgrades. The initial implementation of an ERP solution, for example, is expected to be followed by many waves of opportunities for continuous improvement.

Sourcing and Alliances Management. IT units are increasingly taking responsibility for negotiating and managing contracts with both internal business units and external alliances. Firms use service level agreements or other negotiated arrangements to ensure that business-unit priorities will be addressed cost-effectively. At the same time, corporate IT leaders are also managing outsourcers who provide global and local services.
Some CIOs spoke of outsourcing all “commodity-based services,” including data center operations, help desk support, and network management. The new emphasis on external provisioning and ongoing alliances has heightened the need for a sourcing and alliances management capability. Some participants noted that they increasingly required contracts that detail expectations for knowledge transfer from external to internal resources. One IT leader mentioned the special challenge of renegotiating external alliances following a merger. Process Analysis and Design. As firms become more process based, they
require mechanisms for identifying, analyzing, storing, and communicating business processes. They also need to be able to identify when new technologies offer new opportunities to improve existing processes. Several participants noted that analysis and design expertise for cross-functional processes was now an explicit IT organization skillset. Process mapping was being used, not only for business process redesign but also to ensure compliance with standard processes.

Change Management. Because of the ongoing emphasis on process improvements and the implementation of new releases of packaged solutions, change management has become a key IT discipline. For example, continuous improvement projects to take advantage of new versions of enterprise system packages typically also involve changes in organizational processes, making a competence in change management a significant competitive advantage. One participant noted:
“We need to put something in, get value out of it, and replace it more or less painlessly.” IT Human Resource Development. Ensuring a high-quality pool of IT professionals with the skills needed for the above five disciplines is a critical discipline in its own right. IT leaders need to consistently provide opportunities for their workforce to renew technical skills, expand business understanding, and develop interpersonal skills. Global teams require IT professionals who can collaborate cross-functionally as well as cross-culturally. Internal consulting relationships with business units and external alliance relationships with vendors, implementation partners, and contractors demand that they recruit and develop an IT staff with strong interpersonal relationship-building skills. The need for IT professionals to remain committed to honing their technical skills as well as their business skills is as acute as ever. Some participants even emphasized that technical skills were only useful to the extent that they solved business problems, and that “language barriers” between IT and business units can still exist.
ORGANIZATION DESIGN CHALLENGES

Summarized below are four major challenges faced by IT leaders as they forged a new kind of process-based IT organization, as well as some specific initial solutions to address them.

Working under Complex Structures

The evolution to a process-oriented organization complicates management decision making by adding a process dimension. It also can result in more complex organizational structures. All 12 companies had introduced a variety of structures and mechanisms to ensure that they “didn’t lose sight of” their new processes. Several firms had designated process executives to manage a newly consolidated, cross-functional process such as order fulfillment. In one firm in which the top management team wanted to leverage its processes across its strategic business units (SBUs), the leader of each major process was also the vice president of an SBU.
In some firms, functional business units were still “holding the money,” which can be a constraint to adopting more process-based IT solutions. One participant distinguished between firms in an early stage process-oriented organization versus a later stage, as follows:

• Process-focused firms are in an early stage in which process management is the responsibility of senior executives.
• Process-based firms are in a later stage in which process thinking has become more pervasive in the firm and the responsibilities for managing processes have been diffused throughout lower management levels.

Essentially all of our participants supported the notion that alignment with a process-oriented firm was possible under various IT governance structures (centralized, federal, or hybrid), although some level of centralization was needed to support globalization. Several firms with IT organizations in their business units had increased enterprise-level accountability by creating a dual reporting relationship for the divisional IT heads: a solid-line report to the corporate CIO had been added. Some corporate IT units were organized around cross-functional business processes (aligned with enterprise system modules or process owners). One firm had appointed process stewards within the product-oriented business units to work with IT personnel on global business processes. Other corporate IT units were organized around core IT processes. Newer structural solutions were also being experimented with; at one company, a “two-in-the-box” (co-leadership) approach was used to ensure information flows across critical functions. One recently recentralized corporate IT unit had created three major structures. First, global development teams were aligned with process executives (e.g., materials management, customer fulfillment) or process councils (e.g., a four-person council for the global manufacturing processes).
Second, global infrastructure service provider teams were aligned by technology (e.g., telecommunications). Third, IT capability leaders (e.g., systems integration, sourcing and alliances management) were responsible for developing the common IT processes and IT skills needed by the global development teams and infrastructure service providers.

Devising New Metrics

Most of the participants were still using traditional metrics focused on operational efficiencies but were also gradually introducing new metrics to measure IT value to the business. Overall, the predominant view was that metrics should be “pervasive and cohesive.”
For example, formal service level agreements were being used to assess the services provided to business units in support of their processes. To ensure that IT staff were focused on key organizational processes and that IT priorities were aligned with organizational priorities, metrics to measure business impacts had also been implemented in some firms. At one firm, an IT investment that was intended to reduce cost of goods sold was being assessed by the change in cost of goods sold — although other business factors clearly influenced the achievement of the metric. To help assess the IT unit’s unique contribution, several firms were also involving business unit managers in the performance reviews of IT managers. As stated by one IT leader: “Metrics help firms become more process-based.”

Making Coordination “Natural”

Another new challenge was to make cross-unit coordination a “natural” activity. Enterprise systems can enable new cross-functional processes, but “old ways of working” can create bottlenecks to achieving them. Although some cross-functional process integration can be achieved via formal lines of authority, other types of horizontal designs (e.g., liaison roles, cross-functional councils) were also relied upon to address coordination and communication deficiencies. For example, one participant described “problems with hand-offs” between application and infrastructure groups:

• Analysts who proposed projects and product dates did not always tap into infrastructure and capacity issues.
• Developers preparing new applications for desktop workstations did not alert infrastructure teams to the new desktop specs required to run them.
In some situations, teams or committees were used to promote coordination:

• Periodic, multi-day meetings of geographically dispersed IT management team members in order to share “what everyone is doing”
• Cross-functional, cross-process councils to set IT resource priorities
• Centers of Excellence approaches to build and leverage valued IT competencies

Building a New Mix of IT Expertise

Finding IT professionals with the desired mix of competencies and skillsets is still a tough challenge. The need for a range of technical, interpersonal, business consulting, and problem-solving skillsets is not new, but there is a new emphasis on finding people with combined skillsets. Among the skill shortages in IT organizations increasingly dependent on external vendor solutions are “business technologists” who have a combination of business organization knowledge and package application skills. A combination of contract management knowledge and technology expertise was also increasingly important to “see through vendor hype.” Sourcing solutions included not only importing new talent, but also growing it from within. One firm used standardized methodologies as training tools, in much the same way that they have been used by consulting organizations: the methodology guided behavior until it was internalized by the IT staff member.

CONCLUSION

IT organizations in process-based firms no longer merely support the business — they are an integral part of the business. IT leaders therefore need to develop new sets of IT processes and disciplines to align the IT organization with the business. Some of the challenges to be addressed when evolving to a process-based IT organization include working under complex structures, devising new metrics, making coordination “natural,” and building a new mix of internal and external IT expertise.

Related Readings

Brown, C.V., “Horizontal Mechanisms under Differing IS Contexts,” MIS Quarterly, 23(3), 421–454, September 1999.
Brown, C.V. and Sambamurthy, V., Repositioning the IT Organization to Enable Business Transformation, Pinnaflex, Cincinnati, OH, 1999.
Feeny, D.F. and Willcocks, L.P., “Core IS Capabilities for Exploiting Information Technology,” Sloan Management Review, 40, 9–21, Spring 1998.
Rockart, J.F., Earl, M.J., and Ross, J.W., “Eight Imperatives for the New IT Organization,” Sloan Management Review, 38(1), 43–56, Fall 1996.
Ross, J.W., Beath, C.M., and Goodhue, D.L., “Develop Long-Term Competitiveness through IT Assets,” Sloan Management Review, 38(1), 31–42, Fall 1996.
ACHIEVING STRATEGIC IT ALIGNMENT
Appendix We began our study by developing a short list of companies known to us in which top management was striving to develop a more process-based organization. All 12 companies were Fortune 500 global manufacturing firms headquartered in the United States, competing primarily in consumer products, healthcare, chemicals, and high-technology industries. All but one had implemented, or were in the process of implementing, an ERP system. Nine of the participants were corporate CIOs and three were direct reports to corporate CIOs. Two interviews were conducted on-site jointly by both authors, while the other ten interviews were conducted over the telephone by one of the two authors. Each interview lasted approximately one hour. To ensure a common framework and provide a consistent pattern of questions across the interviews, a one-page description of the study with the general questions of interest was provided to each participant in advance of the interview.
Chapter 12
Preparing for the Outsourcing Challenge
N. Dean Meyer
The difference between fruitful partnerships with vendors and outsourcing nightmares is not simply how well you select the vendor and negotiate the deal. It has more to do with how you decide when to use vendors (vs. internal staff) and how well someone manages those vendors once you have hired them. To use vendors effectively, executives must see through the hype of sales pitches and the confusion of budget numbers to understand the fundamental trade-offs between vendors and internal staff and the unique value that each delivers. Our research shows that it requires healthy internal organizations, in which same-profession staff decide “make-vs.-buy” in a fact-based manner, case by case, day after day, and in which staff use their specialized expertise to manage vendors to internal standards of excellence. In other words, successful management of vendors starts with the effective management of internal staff. This thesis may be counterintuitive, because outsourcing is generally viewed as an alternative to internal staff. It differs from much of the “common wisdom” about outsourcing for the following reason: it is a perspective on outsourcing from someone who is not in the outsourcing business and who has no vested interest in selling outsourcing. It is written from the vantage of someone who has spent decades helping executives solve the problems of poorly performing organizations, including enhancing their partnerships with vendors.

Excerpted and adapted from: Outsourcing: How to Make Vendors Work for Your Shareholders, copyright 1999 NDMA Publishing, Ridgefield, CT.
Recognizing that business executives’ interest in outsourcing often reflects frustration with internal IT operations, this chapter looks at the typical sources of dissatisfaction. Such a look leads to an understanding of what it takes to make internal service providers competitive alternatives to outsourcing, and how they can help a corporation get the best value from vendors. But, first, it examines vendors’ claims to put the alternative into perspective.

CLAIMS AND REALITY

Outsourcing vendors have promised dramatic cost savings, along with enhanced flexibility and the claim that line executives will have more time to focus on their core businesses. Although economies of scale can theoretically reduce costs, outsourcing vendors also introduce new costs, not the least of which is profits for their shareholders. Cost savings are typically real only when there are significant economies of scale that cross corporate boundaries. Similarly, the sought-after ability to shift fixed costs (such as people) to variable costs is diminished by vendors’ requirements for long-term contracts for basic services that provide them with stable revenues over time. Performance claims beyond costs are also suspect. For example, the improved client accountability for the use of services that comes from clear invoicing can usually be achieved at lower cost by improving internal accounting. Similarly, outsourcing vendors rarely have better access to new technologies as claimed. How often do you hear of technology vendors holding products back from the market simply to give an outsourcing customer an advantage? As Tom Peters and Robert Waterman said years ago, successful companies “stick to their knitting.”1 Vendors claim that outsourcing leaves business managers more time to focus on the company’s primary lines of business. But this is only true if the people who used to manage the outsourced function are transferred into other business units.
On the other hand, if these managers are fired or transferred to the outsourcing vendor, there will be no more managers focusing on the “knitting” than before outsourcing. Moreover, managing outsourcing vendors is no easier (in fact, it may be more difficult) than managing internal staff. Contracts and legal interpretations are involved, and it is challenging to try to guide people when you do not write their performance appraisals. Our research reveals that, contrary to conventional wisdom, many executives pursue outsourcing with or without fundamental economic benefits. Their real motivation is dissatisfaction with internal service functions.
THE REAL MOTIVATION

Our analysis shows that there are four main reasons why executives might be willing to pay more to replace internal service providers with external vendors:

1. Customer focus. Internal providers may not treat their clients as customers and may attempt to dictate corporate solutions or audit clients’ decisions. External providers, of course, recognize these clients as customers and work hard to please them.
2. Tailoring. Corporate staff may believe they only serve corporatewide objectives, as if “one size fits all.” Of course, every business unit has a unique mission and a unique role in strategy, and hence unique requirements. Outsourcers are quite pleased to tailor their products and services to what is unique about their customers (for a price).
3. Control over priorities. To get internal providers to do any work may require a convoluted project-approval process, sometimes even requiring justifications to an executive committee. In other cases, it requires begging the internal providers, who set their own priorities. With outsourcing, on the other hand, all it takes is money. You buy what you want, when you want, with no need for approvals other than that of your boss who gave you the money to spend on your business.
4. Response time. Sometimes, internal staff develop long backlogs, and acquiring their services requires waiting in line for an untenably long time. By contrast, outsourcers can be very responsive (as long as the customer pays for the needed resources).

When internal service providers address these four concerns, outsourcing must compete on its own merits — that is, on fundamental economics. If there is any good that comes from the threat of “losing the business” to an outsourcing company, it is that a complacent staff department is forced to respond to these legitimate concerns.
The following sections discuss, first, a practical approach to improving the performance of an internal service function; and, second, methods needed to make fair service and cost comparisons between internal staff and outsourcing vendors.

BUILDING COMPETITIVE INTERNAL SERVICE ORGANIZATIONS

To improve internal service performance to competitive levels, the starting point is data collection. Client interviews and staff feedback reveal problems that need to be addressed. These symptoms provide impetus to change and guidance on what needs to be changed.
Next, it is vital to create a practical vision of the ideal organization. A useful way to approach this is to brainstorm answers to the following question: “What should be expected of a world-class service provider?” Examples include the following:

• The provider is expected to designate an “account executive” for each business unit who is available to answer questions, participate in clients’ meetings, and facilitate their relationships with the function.
• The provider is expected to proactively approach key opinion leaders and help them identify breakthrough opportunities for the function’s products in an unbiased, strategy-driven manner.
• The provider is expected to proactively facilitate the formation and operation of consortia of clients with like needs.
• The provider is expected to help clients plan and defend budgets to buy the function’s products.
• In response to clients’ requests, the provider is expected to proactively offer a range of viable alternatives (as in Chevrolet, Cadillac, or Rolls-Royce) and supply all the information clients need to choose.
• Whenever possible, the provider is expected to design products using common components and standards to facilitate integration (without sacrificing its ability to tailor results to clients’ unique needs).
• The provider is expected to assign to each project the right mix of skills and utilize a diversity of vendors whenever others offer more cost-effective solutions.

Such a brainstorming exercise stretches leaders’ thinking about what is expected of them and builds a common vision of the organization they wish to build. These vision statements can also teach clients to demand more of their suppliers, internal and external. Clearly, when clients express interest in outsourcing, there is a good chance that they see it as a commodity rather than a core competence of the company.
On the other hand, when its strategic value is appreciated, a function may be kept internal even if its costs are a bit higher. The price premium is more than repaid by the incremental strategic value that internal staff can contribute (and outside vendors cannot). Next, leaders assess the current performance of the organization against their vision. This reaffirms the need for change and identifies additional concerns to be addressed. A plan is then developed by analyzing the root causes of the concerns identified in the self-assessment and by identifying the right sequence of changes needed to build a high-performance organization that delivers visible strategic value.2
RESPONDING TO AN OUTSOURCING CHALLENGE

It is generally difficult to compare an internal service provider’s budget to a vendor’s outsourcing proposal. This is probably the greatest problem faced by internal service providers when they attempt to respond to an outsourcing challenge. There are two primary causes for this confusion: First, internal budgets are customarily presented in a fashion that makes it difficult to match costs to individual deliverables. Second, internal staff are generally funded to do things that external vendors do not have to (and should not) do.

Budgeting by Deliverables

Most internal budgets are presented in a manner that does not give clients an understanding of what they are buying. To permit a fair comparison of costs, an internal service provider must change the way it presents its budget. Consider a budget spreadsheet, where the columns represent cost factors such as salaries, travel expenses, professional development, etc. The rows represent deliverables (i.e., specific projects and services).

             Salaries   Travel   Training
  Project 1     $          $         $
  Project 2     $          $         $
  Service 3     $          $         $
  Service 4     $          $         $
This sort of spreadsheet is a common, and sensible, way to develop a budget. The problem is, after filling in the cells in this spreadsheet, most organizations total the columns instead of the rows, presenting the budget in terms of cost factors. This, of course, invites the wrong kind of dialogue during the budget process. Executives debate the organization’s travel budget, micromanaging staff in a way that they never would an outsourcing vendor. Even worse, executives lose sight of the linkage between the organization’s budget and the deliverables they expect to receive during the year. They do not know what they are getting for their money, so the function seems expensive. At the same time, this approach leads clients to expect that they will get whatever they need within the given budget, making it the staff’s problem to figure out how to fulfill clients’ unlimited demands. Put simply, clients are led to expect infinite products and services for a fixed price!
Success in this situation is, of course, impossible. As hard as staff try, the internal service provider gets blamed for both high costs and unresponsiveness. Meanwhile, outsourcing vendors can offer bids that appear less costly simply by promising less. Executives have no way of knowing if the proposed level of service is comparable to what they are receiving internally. While vendors are generally quite clear about the deliverables within their proposed contracts, the internal organization’s deliverables remain undocumented. When comparing a short list of outsourced services to a long but undocumented list of internal services, the vendor may very well appear less expensive. Of course, comparing “apples to oranges” is quite misleading and unfair. The answer to this predicament is simply presenting the internal budget in a different way. The internal service provider should total the rows, not the columns. This is termed “budget by deliverables,” the opposite of budgeting by cost factors. With a budget presented in terms of deliverables, executives are often surprised to learn just how much an internal service provider is doing to earn its keep. Budget by deliverables permits a fair comparison of the cost of buying each product and service from internal staff vs. an outsourcing vendor. In many cases, clients learn that, although the vendor appears to be less expensive in total, it is offering fewer services and perhaps a lower quality of service than internal staff currently provide. It is unfortunate that it often takes an outsourcing challenge to motivate the consideration of a budget-by-deliverables approach, as it is broadly useful. One key benefit is that the debate during the budget process becomes much more constructive. Instead of demanding that staff do more with less, executives decide what products and services they will and won’t buy.
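The row-versus-column distinction can be sketched in a few lines of Python. All deliverable names and dollar figures here are hypothetical; the point is only that totaling rows yields per-deliverable costs that can be compared to vendor bids, while totaling columns yields cost-factor line items that invite micromanagement.

```python
# Rows are deliverables; columns are cost factors (hypothetical figures).
cost_factors = ["Salaries", "Travel", "Training"]
budget = {
    "Project 1": [400_000, 30_000, 20_000],
    "Project 2": [250_000, 10_000, 15_000],
    "Service 3": [180_000, 5_000, 10_000],
    "Service 4": [120_000, 2_000, 8_000],
}

# Budgeting by cost factors: total each column. This is what invites
# debates over line items such as the travel budget.
by_cost_factor = {
    factor: sum(row[i] for row in budget.values())
    for i, factor in enumerate(cost_factors)
}

# Budget by deliverables: total each row. Each figure is the full cost
# of one product or service, directly comparable to a vendor's bid.
by_deliverable = {name: sum(row) for name, row in budget.items()}

print(by_cost_factor)   # {'Salaries': 950000, 'Travel': 47000, 'Training': 53000}
print(by_deliverable)   # {'Project 1': 450000, 'Project 2': 275000, 'Service 3': 195000, 'Service 4': 130000}
```

Both presentations add up to the same total; only the grouping changes, which is why the conversion is purely a matter of how the spreadsheet is summed.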
Trimming the budget is driven by clients, not staff, and, as a result, is better linked to business priorities. Once a budget by deliverables is agreed on, another ongoing benefit is that clients understand exactly what they can expect from staff. Of course, if they want more, internal staff should willingly supply it — at an additional cost. This is one critical part of an “internal economy” that balances supply and demand.

Recognizing Subsidies

Staff do some activities for the common good (to benefit any and all clients). Because these deliverables are done on behalf of the entire firm, they are often “taken for granted” or not noticed at all by clients. Nonetheless,
these important “corporate good” activities must be funded. We call these “subsidies.” One example is the service of facilitating the development of corporate standards and policies. Another example of a subsidy activity is commodity-product research and advice (a “consumers’ report”). For example, in IS, staff may research the best configurations of personal computers for various uses. This research service helps clients make the right choices, whether they buy PCs through mail order or internal staff. Corporate-good activities should not be delegated to vendors who have different shareholders in mind. In a budget by deliverables, subsidies should be highlighted as separate rows. Their costs should not be buried within the price of other deliverables. If the costs of these services were spread across other internal products, they would inflate the price of the rest of staff’s product line and put them at an unfair disadvantage when compared to external competitors who do not do these things. In our IS example, if the costs of the PC research were buried in the price of PCs, then mail-order vendors would outcompete the internal IS department (even though staff’s bulk purchasing might negotiate an even better deal for the firm). As more clients bought directly from external vendors, the fixed costs of PC research would have to be spread across fewer units, and the price of a PC would rise further — chasing even more business away. Eventually, this drives the internal IS department out of the business of supplying PCs. This distortion is particularly critical during an outsourcing study. If subsidies are not separated from the cost of competitive products, the outsourcing vendor may win the business, even though its true unit costs may be higher. Later, the corporation will find that critical corporate-good activities do not get done.
Funding subsidies individually separates the outsourcing decision (which focuses on ongoing products and services) from the decision to invest in the subsidies. It permits a fair comparison with competitors of the prices of specific products and services. It also encourages a thoughtful decision process around each such activity, leading to an appropriate level of corporate-good efforts. It is worth noting that once we compare “apples to apples” in this way, many internal service providers are found to offer a very competitive deal. Making sure that clients are aware of this is a key to a permanent role as “supplier of choice.”
Activity-Based Costing Analysis

While the logic of budget by deliverables is straightforward and compelling, the mechanics are not so simple. Identifying an organization’s products and services — not tasks, but deliverables — is, in itself, a challenge. The level of detail must be carefully managed so that each row represents a meaningful client purchase decision, without inundating clients with more than they can comprehend. Once the products are identified, allocating myriad indirect costs to a specific set of deliverables is a challenge in “activity-based costing.” Many have found an activity-based costing analysis difficult for even one or two lines of business. To prepare a budget by deliverables requires a comprehensive analysis across all products and services. This adds unique problems, such as “circles,” where two groups within the organization serve one another, and hence each is part of the other’s cost structure and neither can determine its price until the other does. Fortunately, there is a step-by-step process that resolves such complications and leads to a clear result.3

The budget-by-deliverables process begins with the identification of lines of business and deliverables. For each deliverable, a unit of costing (such as hours or clients supported) is identified, and a forecast of the number of units required to produce the deliverable is made. Next, indirect costs are estimated and allocated to each row (each deliverable). Direct costs are added to each row as well. Overhead costs (initially their own rows) are “taxed” to the other deliverables. Then, all the groups within the organization combine their spreadsheets, and the total cost for each deliverable is summed. With minor modifications to the budget-by-deliverables process, the analysis can produce unit prices (fees for services) at the same time as the budget.
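The “circles” complication, where two groups are each part of the other’s cost structure, is the classic reciprocal cost allocation problem, and it can be resolved by solving the groups’ cost equations simultaneously. A minimal Python sketch, with hypothetical direct costs and cross-consumption fractions (not figures from the text):

```python
# Direct costs of two internal groups, and the fraction of each group's
# output consumed by the other group (all figures hypothetical).
direct_a, direct_b = 100_000.0, 60_000.0
a_to_b = 0.20   # 20% of group A's output goes to group B
b_to_a = 0.10   # 10% of group B's output goes to group A

# Full costs satisfy the pair of equations:
#   A = direct_a + b_to_a * B
#   B = direct_b + a_to_b * A
# Substituting one into the other gives a closed form for a two-group circle.
full_a = (direct_a + b_to_a * direct_b) / (1 - a_to_b * b_to_a)
full_b = direct_b + a_to_b * full_a

# The cost ultimately charged out to deliverables is the portion of each
# group's full cost NOT consumed by the other group.
charge_a = full_a * (1 - a_to_b)
charge_b = full_b * (1 - b_to_a)

# Sanity check: the charged-out total equals the total direct cost,
# so no money is created or lost by the allocation.
assert abs((charge_a + charge_b) - (direct_a + direct_b)) < 1e-6
```

With more than two groups the same idea generalizes to a small system of linear equations, which is one reason a step-by-step process (or a tool) is needed rather than ad hoc spreadsheet arithmetic.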
The result of the budget-by-deliverables process is a proposal that estimates the true cost to shareholders of each staff deliverable, making for fact-based budgeting decisions and fair (and, hopefully, favorable) comparisons with outsourcing vendors’ proposals.

VENDOR PRICING: LESSONS FROM THE PAST

When comparing a staff’s budget with an outsourcing proposal, some additional considerations are important to note. Even if internal costs are lower than outsourcing, comparisons may be distorted by some common vendor tactics. Outsourcing vendors sometimes “buy the business” by offering favorable rates for the first few years of a contract and making up for the loss of profits throughout the rest of the relationship.
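This “buy the business” tactic can be illustrated with a small discounted-present-value calculation. The prices, contract term, and discount rate below are hypothetical; the point is that a bid that looks cheaper in its early years can cost more over the full term, even after discounting:

```python
def present_value(cash_flows, rate):
    """Discount a list of annual costs (year 1 onward) back to year zero."""
    return sum(c / (1 + rate) ** t for t, c in enumerate(cash_flows, start=1))

rate = 0.08  # hypothetical discount rate

# Vendor underbids for the first three years, then recovers its margin.
vendor = [800, 800, 800] + [1_300] * 7
# Internal provider charges a steady rate for the same ten years.
internal = [1_000] * 10

pv_vendor = present_value(vendor, rate)
pv_internal = present_value(internal, rate)

# Over the teaser years the vendor looks cheaper; over the full ten
# years the internal provider wins even in present-value terms.
print(round(present_value(vendor[:3], rate)), round(present_value(internal[:3], rate)))
print(round(pv_vendor), round(pv_internal))
```

This is why the text recommends demanding a comparison of all costs over a longer period rather than over the initial contract alone.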
This tactic may be supported by excess capacity that the vendor can afford to sell (for a short while) below full costs. Or, the vendor may be generating sufficient profits from other companies to afford a loss for a certain period. Neither enabling factor lasts for long. In the long run, this leads to higher costs, even in discounted-present-value terms, because entrepreneurs will always find ways to be compensated for taking such risks. A similar technique is pricing basic services at or below costs, and then making up the profits on add-on business.

Tricks that make outsourcing appear less expensive are best countered by demanding a comparison of all costs over a longer period. Costs should include all activities of the function, itemized in a way that permits comparisons under different scenarios, forecasting increased and decreased demands. The term need not be limited to the initial proposed contract. A longer time frame is justifiable, because an outsourcing decision is difficult to reverse. It takes years to rebuild an internal capability and transition competencies from a vendor back to staff. Thus, it makes sense to ask vendors to commit to prices for ten or more years.

EXTENDED STAFFING

Too often, if a staff function is not working right, outsourcing has simply been a method of paying someone else to take the pain. It is a way to avoid expending the time and energy to build a high-performance internal service provider. But paying profits to other shareholders is short-sighted because it sacrifices a potentially valuable component of business strategy and drives up long-term costs. Shirking tough leadership duties in this manner is also mean-spirited. It destroys careers, with little appreciation for people’s efforts and the obstacles they faced, and it creates an environment of fear and destroys morale for those who remain.
Even partial outsourcing — sometimes buying from vendors and at other times from staff — is not constructive in the long term because it allows internal service groups to deteriorate. As they lose market share, internal organizations shrink, lose critical mass, and get worse and worse. There is no substitute for proper resolution of clients’ concerns by investing in building an internal service provider that earns clients’ business. This, of course, does not mean that outside vendors are avoided where they can contribute significant and unique value. In fact, one aspect of a healthy internal service provider is its proactive use of vendors. We call this approach to managing vendors extended staffing.
A healthy organization divides itself into clearly defined lines of business, each run by an entrepreneur. Each entrepreneur should know his or her competitors and continually benchmark price and performance against them. This is not tough to do. Those very competitors are also potential extensions to the internal staff. If demand goes up, every entrepreneur should have vendors and contractors lined up, ready to go. And whenever they bid a deal, staff should propose a “buy” alternative alongside their “make” option.

By treating vendors and contractors as extensions to internal staff — rather than replacements for them — extended staffing enhances, rather than undermines, internal service providers. And by bringing in vendors through (not around) internal staff, extended staffing gives employees a chance to learn and grow. When internal staff proactively use vendors, making educated decisions on when it makes sense to do so, the firm always gets the best deal. With confidence in their niche, ethical vendors are happy to compete for business on the merits of their products rather than attempt to replace internal staff with theirs. Furthermore, by using the people who best know the profession to manage vendors and contractors, extended staffing also ensures that external vendors live up to internal standards of excellence. Extended staffing automatically balances the many trade-offs between making and buying goods and services. It leads to the right decisions, in context, day after day.

Notes

1. Peters, T.J. and Waterman Jr., R.H. 1982. In Search of Excellence. New York: Harper & Row.
2. A tested, effective process of systemic change is discussed in detail in Meyer, N.D. 1998. Road Map: How to Understand, Diagnose, and Fix Your Organization. Ridgefield, CT: NDMA Publishing.
3. Meyer, N.D. 1998. The Internal Economy: A Market Approach. Ridgefield, CT: NDMA Publishing.
Chapter 13
Managing Information Systems Outsourcing S. Yvonne Scott
IS outsourcing is not a new trend. Today, it is a mature concept — a reality. Service bureaus, contract programmers, disaster recovery sites, data storage vendors, and value-added networks are all examples of outsourcing. However, outsourcing is not a transfer of responsibility. Tasks and duties can be delegated, but responsibility remains with the organization’s management. This chapter provides guidelines for effectively managing IS outsourcing arrangements.

OUTSOURCING AGREEMENTS

Although it is desirable to build a business partnership with the outsource vendor, it is incumbent on the organization to ensure that the outsourcer is legally bound to take care of the company’s needs. Standard contracts are generally written to protect the originator (i.e., the vendor). Therefore, it is important to critically review these agreements and ensure that they are modified to include provisions that adequately address the following issues.

Retention of Adequate Audit Rights

It is not sufficient to generically specify that the client has the right to audit the vendor. If the specific rights are not detailed in the contract, the scope of a review may be subject to debate. To avoid this confusion and the time delays that it may cause, it is suggested that, at a minimum, the following specific rights be detailed in the contract:
0-8493-1595-6/03/$0.00+$1.50 © 2003 by CRC Press LLC
• Who can audit the outsourcer (i.e., client internal auditors, outsourcer internal auditors, independent auditors, user-controlled audit authority)?
• What is subject to audit (e.g., vendor invoices, physical security, operating system security, communications costs, and disaster recovery tests)?
• When the outsourcer can or cannot be audited.
• Where the audit is to be conducted (e.g., at the outsourcer’s facility, remotely by communications).
• How the audit is conducted (i.e., what tools and facilities are available).
• Guaranteed access to the vendor’s records, including those that substantiate billing.
• Read-only access to all of the client company’s data.
• Assurance that audit software can be executed.
• Access to documentation.
• Long-term retention of vendor records to prevent destruction.

Continuity of Operations and Timely Recovery

The timeframes within which specified operations must be recovered, as well as each party’s responsibilities to facilitate the recovery, should be specified in the contract. In addition, the contract should specify the recourse that is available to the client, as well as who is responsible for the cost of carrying out any alternative action, should the outsourcer fail to comply with the contract requirements. Special consideration should be given to whether or not these requirements are reasonable and likely to be carried out successfully.

Cost and Billing Verification

Only those costs applicable to the client’s processing should be included in invoices. This issue is particularly important for those entering into outsourcing agreements that are not on a fixed-charge basis. Adequate documentation should be made available to allow the billed client to determine the appropriateness and accuracy of invoices. However, documentation is also important to those clients who enter into a fixed invoice arrangement.
In such cases, knowing the actual cost incurred by the outsourcer allows the client to effectively negotiate a fair price when prices are open for renegotiation. It should also be noted that, although long-term fixed costs are beneficial in those cases in which costs and use continue to increase, they are equally detrimental in those situations in which costs and use are declining. Therefore, it is beneficial to include contract clauses that allow rates to be reviewed at specified intervals throughout the life of the contract, or in the event of a business downturn (e.g., sale of a division).
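The fixed-versus-variable trade-off just described can be checked with simple arithmetic. The volumes, rates, and three-year horizon below are hypothetical, chosen only to show why rate-review clauses matter in both directions:

```python
fixed_annual = 1_000_000  # hypothetical long-term fixed annual charge
unit_rate = 100           # hypothetical per-unit cost under usage-based billing

def usage_cost(units):
    """Annual cost if billed per unit of actual usage."""
    return units * unit_rate

# The fixed price was set when usage was 10,000 units per year.
growing = [10_000, 12_000, 14_000]    # usage rises over three years
declining = [10_000, 8_000, 6_000]    # e.g., after the sale of a division

fixed_total = fixed_annual * 3

# With growing usage, the fixed rate protects the client...
print(fixed_total, sum(map(usage_cost, growing)))    # 3000000 3600000
# ...but with declining usage it overcharges, hence rate-review clauses.
print(fixed_total, sum(map(usage_cost, declining)))  # 3000000 2400000
```

The same contract clause is thus a hedge for one party or the other depending on which way demand moves, which is why periodic review rights benefit both sides.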
Security Administration

Outsourcing may be used as an agent for change and, therefore, may represent an opportunity to enhance the security environment. In any case, decisions must be made regarding whether the administration (i.e., granting access to data) and the monitoring (i.e., violation reporting and follow-up) should be retained internally or delegated to the outsourcer. In making this decision, it is imperative that the company have confidence that it can maintain control over the determination of who should be granted access and in what capacity (e.g., read, write, delete, execute) to both its data and that of its customers.

Confidentiality, Integrity, and Availability

Care must be taken to ensure that both data and programs are kept confidential, retain their integrity, and are available when needed. These requirements are complicated when the systems are no longer under the physical control of the owning entity. In addition, the concerns that this situation poses are further compounded when applications are stored and executed on systems that are shared with other customers of the outsourcer. Of particular concern is the possibility that proprietary data and programs may be resident on the same physical devices as those of a competitor. Fortunately, technology has provided us with the ability to logically control and separate these environments with virtual machines (e.g., IBM’s Processor Resource/Systems Manager). It should also be noted that the importance of confidentiality does not necessarily terminate with the vendor relationship. Therefore, it is important to obtain nondisclosure and noncompete agreements from the vendor as a means of protecting the company after the contract expires. Similarly, adequate data retention and destruction requirements must be specified.

Program Change Control and Testing

The policies and standards surrounding these functions should not be relaxed in the outsourced environment.
These controls determine whether or not confidence can be placed in the integrity of the organization’s computer applications.

Vendor Controls

The physical security of the data center should meet the requirements set by the American Society for Industrial Security. In addition, there should be close compatibility between the vendor and the customer with regard to control standards.
Network Controls

Because the network is only as secure as its weakest link, care must be taken to ensure that the network is adequately secured. It should be noted that dial-up capabilities and network monitors can be used to circumvent established controls. Therefore, even if the company’s operating data is not proprietary, measures should be taken to ensure that unauthorized users cannot gain access to the system. This should minimize the risks associated with unauthorized data, program modifications, and unauthorized use of company resources (e.g., computer time, phone lines).

Personnel

Measures should be taken to ensure that personnel standards are not relaxed after the function is turned over to a vendor. As was noted earlier, in many cases the same individuals who were employed by the company are hired by the vendor to service that contract. Provided these individuals are competent, this should not pose any concern. If, however, a reason cited for outsourcing is to improve the quality of personnel, this situation may not be acceptable. In addition, care should be taken to ensure that the client company is notified of any significant personnel changes, security awareness training is continued, and the client company is not held responsible should the vendor make promises (e.g., benefits, salary levels, job security) to the transitional employees that it does not subsequently keep.

Vendor Stability

To protect itself from the possibility that the vendor may withdraw from the business or the contract, it is imperative that the company maintain ownership of its programs and data. Otherwise, the client may experience an unexpected interruption in its ability to service its customers or the loss of proprietary information.

Strategic Planning

Because planning is integral to the success of any organization, this function should be performed by company employees.
Although it may be necessary to include vendor representatives in these discussions, it is important to ensure that the company retains control over the use of IS in achieving its objectives. Because many of these contracts are long term and business climates often change, this requires that some flexibility be built into the agreement to allow for the expansion or contraction of IS resources. In addition to these specific areas, the following areas should also be addressed in the contract language:
• Definition and assignment of responsibilities
• Performance requirements and the means by which compliance is measured
• Recourse for nonperformance
• Contract termination provisions and vendor support during any related migration to another vendor or in-house party
• Warranties and limitations of liability
• Vendor reporting requirements

PROTECTIVE MEASURES DURING TRANSITION

After it has been determined that the contractual agreement is in order, a third-party review should be performed to verify vendor representations. After the contract has been signed and as functions are being moved from internal departments to the vendor, an organization can enhance the process by performing the following:

• Meeting frequently with the vendor and employees
• Involving users in the implementation
• Developing transition teams and providing them with well-defined responsibilities, objectives, and target dates
• Increasing security awareness programs for both management and employees
• Considering a phased implementation that includes employee bonuses for phase completion
• Providing outplacement services and severance pay to displaced employees

CONTINUING PROTECTIVE MEASURES

As the outsourcing relationship continues, the client should continue to take proactive measures to protect its interests. These measures may include continued security administration involvement, budget reviews, ongoing reviews and testing of environment changes, periodic audits and security reviews, and letters of agreement and supplements to the contract. Each of these client rights should be specified in the contract.
In addition, a continuing review and control effort typically includes the following types of audit objectives:

• Establishing the validity of billings
• Evaluating system effectiveness and performance
• Reviewing the integrity, confidentiality, and availability of programs and data
• Verifying that adequate measures have been taken to ensure continuity of operations
• Reviewing the adequacy of the overall security environment
• Determining the accuracy of program functionality
AUDIT ALTERNATIVES

It should be noted that resource sharing (i.e., the sharing of common resources with other customers of the vendor) may lead to the vendor’s insistence that the audit rights of individual clients be limited. This may be reasonable. However, performance review by the internal audit group of the client is only one means of approaching the control requirement. The following alternative measures can be taken to ensure that adequate control can be maintained.

• Internal reviews by the vendor. In this case, the outsourcing vendor’s own internal audit staff would perform the reviews and report their results to the customer base. Auditing costs are included in the price, the auditor is familiar with the operations, and it is less disruptive to the outsourcer’s operations. However, auditors are employees of the audited entity; this may limit independence and objectivity, and clients may not be able to dictate audit areas, scope, or timing.
• External auditor or third-party review. These types of audits are normally performed by an independent accounting firm. This firm may or may not be the same firm that performs the annual audit of the vendor’s financial statements. In addition, the third-party reviewer may be hired by the client or the vendor. External auditors may be more independent than employees of the vendor. In addition, the client can negotiate for the ability to exercise some control over the selection of the third-party auditors and the audit areas, scope, and timing, and the cost can be shared among participating clients. The scope of external reviews, however, tends to be more general in nature than those performed by internal auditors. In addition, if the auditor is hired by the vendor, the perceived level of independence of the auditor may be impaired. If the auditor is hired by each individual client, the costs may be duplicated by each client and the duplicate effort may disrupt vendor operations.
• User-controlled audit authority. The audit authority typically consists of a supervisory board comprising representatives from each participating client company, the vendor, and the vendor’s independent accounting firm, and a staff comprising some permanent and temporary members who are assigned from each of the participating organizations. The staff then performs audits at the direction of the supervisory board. In addition, a charter, detailing the rights and responsibilities of the user-controlled audit authority, should be developed and accepted by the participants before commissioning the first review. This approach to auditing the outsourcing vendor appears to combine the advantages and minimize the disadvantages previously discussed. In addition, this approach can benefit the vendor by providing a marketing
advantage, supporting its internal audit needs, and minimizing operational disruptions.

CONCLUSION

Outsourcing arrangements are as unique as those companies seeking outsourcing services. Although outsourcing implies that some control must be turned over to the vendor, many measures can be taken to maintain an acceptable control environment and adequate review. The guidelines discussed in this chapter should be combined with the client’s own objectives to develop individualized and effective control.
Chapter 14
Offshore Development: Building Relationships across International Boundaries Hamdah Davey Bridget Allgood
Outsourcing information systems (IS) has grown in popularity in recent years for a wide variety of reasons — it is perceived to offer cost advantages, provide access to a skilled pool of labor, increase staffing flexibility, and allow the company to concentrate on core competencies. These factors have challenged managers to rethink the way in which IS has been delivered within their organizations and to consider looking overseas for technical skills. Outsourcing IS work is not a new phenomenon; many aspects of IS have historically been outsourced. Traditionally, service bureaus dealt with payroll processing for companies. In recent times, hardware maintenance or user support is frequently provided by an external company. IS activities that can be easily separated and tightly specified are seen as ideal candidates for outsourcing. Systems development work is by its very nature more difficult to outsource because it is rarely a neat, tightly specified project. The continually changing business environment means that requirements refuse to stand still; they cannot simply be drawn up and
thrown over the wall to software developers. An interactive development environment is needed, where business managers and users communicate with the development team, thus ensuring that systems that people want are being developed.

OFFSHORE IS OUTSOURCING

Offshore outsourcing has grown in popularity and is rapidly emerging in many countries such as India, Mexico, and Egypt. In particular, India is regarded as one of the major offshore outsourcing centers where private and government partnerships have proactively worked together to develop IT capability. To some, India may not seem to be the obvious country in which to locate technical computing expertise because of its struggles to provide basic sanitation, water, and electricity to many of its rural areas. Yet the reality is that the Indian government and the Indian software industry have worked together to create software technology parks (STPs) that provide high-speed satellite communication links, office space, computing resources, and government liaison. In this supportive environment, software houses have thrived and provide a variety of services that range from code maintenance and migration work, to designing and building new applications with the latest software technologies. The availability of a skilled pool of English-speaking IT developers with the latest technical knowledge, able to handle large projects and produce quality software, is attracting many major companies to outsource systems development projects to India. The cost of developer time in India is significantly less than market prices in the West, which makes offshore outsourcing particularly attractive. Below we describe the experiences of a U.K.-based company that outsourced system development work to India.
CASE STUDY: IS OUTSOURCING

This case example describes the company's experiences and answers key IS outsourcing questions such as:

• “What factors led you to make the decision to outsource systems development work to India?”
• “How was a relationship built and maintained across international borders?”
• “What cross-cultural issues were evident?”

Driving for a Change in Direction

LEX Vehicle Leasing is one of the largest vehicle contract hire companies in the United Kingdom, managing more than 98,000 vehicles. LEX specializes
in buying, running, and disposing of company vehicles for its customers. The concept of outsourcing was not new to LEX. Hardware maintenance, network maintenance, and desktop hardware support had been outsourced to U.K. suppliers for some time. In 1997, LEX signed a contract to outsource the systems development work of its contract leasing administration system to an India-based software house. This step was a new direction because at the time, LEX had no previous experience in outsourcing system development work offshore. The decision to outsource the work to India was based on considerations such as the need to develop a system quickly, the availability of skilled IT personnel in India, and substantial cost savings.

Managing Projects and Relationships across International Boundaries

Offshore outsourcing presents many challenges to business. Building effective client–supplier relationships and managing a project across national and cultural boundaries can be particularly difficult. From the beginning of the project, LEX was aware of the challenges it would face in this area and felt that it was crucial to feel comfortable with the outsourcing supplier. LEX staff visited the outsourcing vendor in India prior to signing a contract for the project so that they could gain insight into the organizational culture of the potential supplier to ensure that they would be able to work in partnership with the organization. LEX also wanted to maintain close control of the project and work closely with the supplier throughout the project.

Alignment of Personal Qualities and Cultural Fit. A team from LEX visited the office in Bombay prior to drawing up the contract. During this visit, the LEX team had the opportunity to liaise and meet with the outsourcing supplier staff. The LEX team was comfortable with the skills of the outsourcing staff and the key personnel.
They felt that the two organizations had shared values within their cultures, and the personalities of key personnel within the outsourcing company aligned well with the staff within LEX. “There is synergy between our companies. We both believe in developing our employees, keeping staff happy, and also delivering substantial profit.”
The Indian outsourcing supplier was enthusiastic and keen to take on the work. They were perceived to be a quality company with a positive customer service culture. One LEX manager commented: “To delight customers is an attractive trait sadly lacking in U.K. software houses.”

Project Management and Control. The systems development project outsourced to India was highly structured with fully defined systems requirements. Although interaction between users and designers was acknowledged as being needed, the project was felt to be fairly tightly specified.
LEX was keen to be closely involved with the day-to-day running of the project because one of the biggest worries in offshore outsourcing is managing the project from afar. The outsourcing supplier encouraged close involvement; this approach compared favorably with the company's experiences with some U.K.-based software houses that preferred LEX to take a hands-off approach. LEX was aware of the importance of effective communication between all staff involved with the project and took steps to encourage effective communication at all levels. Cultural differences had a significant impact on the project, and sensitive management was needed when handling situations. Working in close cooperation was seen as important to ensure the success of the project. However, building strong relationships takes time, and early on in the project both parties found it difficult to discuss concerns that arose. One manager noted a “them and us” attitude at the beginning of the project. With an emphasis on good communication, and building and maintaining close relationships at all levels, strong relationships did develop over time. Also, neither party resorted “to the contract” when there were disputes or problems.

Steps to Ensure Effective Communication

A number of initiatives were implemented to facilitate effective communication between LEX and the outsourcing supplier during this project:

• The importance of face-to-face meetings was recognized throughout the project. Meetings were arranged in the United Kingdom between the users and the Indian staff at the beginning of the project. After the initial meetings, some of the Indian designers remained in the United Kingdom, while others returned to India to lead teams. When coding was completed, users from LEX went to India to undertake acceptance testing.
• New communications technology, such as computer conferencing, e-mail, and a centralized store of documents relating to the project, was used.
E-mail was used extensively as a communication medium; the project manager established communication procedures to manage the high volume of e-mails between India and the United Kingdom. For example, if a change was made to a program, an automatic message would be sent to all parties concerned.
• To further improve communication, LEX formed a team of ten business users to act as a link with the Indian employees. The Indian developers had ready access to these users and channeled their problems to them. This enabled the Indian designers to resolve issues effectively and save time.
• LEX appointed a full-time project manager who was culturally sensitive and very aware of “people” issues. This manager had responsibility for managing the outsourcing arrangement. Whether they were offshore in Bombay or onshore in the United Kingdom, the Indian employees worked for the LEX project manager. This arrangement meant that the project was able to respond quickly to changing business needs. This was viewed as a major success factor in the project.

Steps to Address Cultural Differences

Recognizing the complexity of the human element of the project due to cultural differences is important. Unforeseen problems, misunderstandings, and incorrect assumptions occurred due to cultural differences, and sensitivity was required when dealing with such issues.

• The LEX management found that the Indian staff had a very positive attitude and were very committed to the project. The Indian staff were also flexible and generally quick to pick up ideas.
• At the beginning of the project, a team of Indian designers was brought to the United Kingdom for a period of time so that they could meet and work with the LEX users. Many of the Indian employees had never been outside India and suffered culture shock when they first arrived in the United Kingdom, needing time to adapt to the British culture.
• Although the Indian staff spoke English, they experienced difficulties in interacting with the LEX employees at the beginning of the project. This was because of variations in pronunciation and differences in the meanings of words. There was an expectation from the U.K. users that, when the Indian staff arrived, they could explain the features they wanted in the system and be understood. But this was not the case, partly because of the business language used (e.g., bank mandates and contract payment schedules). Also, the concepts of company cars and leasing vehicles did not exist in India.
The Indian staff, therefore, faced a steep learning curve in familiarizing themselves with both the business and the business terminology.
• The Indian culture places great importance on control. This led to some problems when working on the project because the Indian designers tended to defer to the project leader even for a minor decision.

RECOMMENDED ACTIONS FOR SUCCESSFUL OFFSHORE OUTSOURCING

Building and maintaining relationships across international boundaries can be very demanding, and good communication procedures and staff skills are needed to support effective offshore outsourcing. It needs to be appreciated that relationships take time to build and are particularly fragile during the introductory stages of a project. Cultural differences can
result in misunderstandings, frustrations, and incorrect assumptions being made. Both time and effort are required to develop and sustain effective, strong offshore outsourcing partnerships. The following actions are recommended for successful offshore outsourcing:

• Careful consideration of outsourcing partners is crucial to ensure that they are an organization with which you can build a relationship.
• It is important that the outsourcing organization takes actions to ensure that appropriate channels for good quality communications are used. Although e-mail technology can play an important role, giving staff the opportunity to get to know one another through face-to-face meetings is important if strong relationships are to be built.
• Close involvement in the management and day-to-day running of the project by the outsourcing organization is important because it ensures that problems that arise are dealt with effectively and that a partnership style of working together is achieved.
• Sensitivity is needed when handling the various language and cultural issues that may emerge. This means that particular consideration should be given to the personal skills of the project manager and of all those coming into contact with the outsourcing supplier.

THE FUTURE

Offshore outsourcing offers many benefits to companies. Fruitful future opportunities lie with sustaining a high-quality partnership over time. The 30 Indian employees who worked on the LEX project have developed relationships with LEX staff, gained an understanding of LEX’s business practices, and are ideally positioned to maintain the existing system and to develop future systems. By working together over a period of time, trust and understanding between the parties develop, allowing the full benefits of offshore outsourcing to be realized.
Chapter 15
Application Service Providers Mahesh Raisinghani Mike Kwiatkowski
0-8493-1595-6/03/$0.00+$1.50 © 2003 by CRC Press LLC

Application service providers (ASPs) have received a large amount of attention in the information technology (IT) service industry and the financial capital markets. A quote from Scott McNealy, CEO of Sun Microsystems, is typical of the enthusiasm industry leaders have for the ASP service model: “Five years from now, if you’re a CIO with a head for business, you won’t be buying computers anymore. You won’t buy software either. You’ll rent all your resources from a service provider.” Conceptually, ASPs have a great deal to offer customers. They can maintain applications such as e-mail, enterprise resource planning (ERP), and customer relationship management (CRM), while providing higher levels of service by utilizing economies of scale in order to provide a quality software product at a lower cost to the organization. The ASP value proposition is particularly attractive to small- and mid-sized enterprises (SMEs) that do not possess the IT infrastructure, staff, or capital to purchase high-end, corporatewide applications such as SAP, PeopleSoft, or Siebel. The goal of ASPs is to enable customers to use mission-critical enterprise applications in a better, faster, and more cost-efficient manner.

ROLE OF ASPS IN THE 21ST CENTURY ORGANIZATION

Many business scholars have attempted to define the shape and structure of effective organizations in the future. Overall, they unanimously predict technology will dramatically change the delivery methods of products and services. Organizational structures such as the “networked organization” and the “virtual organization” will prosper. Networked organizations will differ from traditional hierarchical organizations in a few major ways. First, the structure will be more informal, flatter,
and loosely structured. Second, employees will be more empowered, treated as an asset, and their contributions will be measured based on how they function as a team. Finally, information will be shared and available both internally and externally to the organization. The ASP industry model can facilitate each of the major areas in the networked organization. Structurally, ASPs are a perfect fit for organizations that desire to become flatter and loosely structured because all of the IT staff overhead and data center infrastructure required to support the business is outsourced to the ASP. The increased need for employee empowerment to make decisions requires additional knowledge. This knowledge must be provided to employees through advanced information systems such as intelligent systems and the Internet. ASPs have the potential to provide the expert and intelligent systems required for supporting “knowledge workers.” Historically, these systems have been cost prohibitive to install and maintain. The ASP model lowers the cost by sharing the system with many users and capitalizing on economies of scale. Information sharing is more efficient via the ASP’s delivery method than that of traditional private networks or value-added networks. These networks required expensive leased lines and specialized telecommunications equipment for organizations to pass data and share information. In the ASP model, a business partner can access your systems via the Internet simply by pointing a browser to your ASP’s host site. The virtual corporation (VC) is an organizational structure that is gaining momentum in today’s economy, especially in the “E-economy” or world of electronic commerce. It can be defined as an organization composed of several business partners sharing costs and resources for the purpose of producing a product or service.
Each partner brings strength to the organization such as creativity, market expertise, logistical knowledge, or low cost. The ASP model has a strong value proposition for this type of organization because it can provide the application expertise, technology, and knowledge at a lower cost. Exhibit 1 lists the major attributes of VCs and highlights how ASPs can fit the attribute and service the organization’s needs. The continued evolution of VCs presents ASPs with unique opportunities. The first is a greater need for messaging, collaboration software, and tools. Some ASPs are currently targeting the corporate e-mail application. Because partners of virtual corporations can be located anywhere, but will not relocate to join a VC, the need for interorganizational information systems will grow. Software vendors will need to address this need by developing systems that can be effectively utilized between organizations because currently most systems are primarily designed for single-firm use. Ease of integration and the use of Internet standards such as TCP/IP and
Exhibit 1. Strengths of ASPs in the Virtual Corporation

Excellence — Each partner brings its core competence and an all-star winning team is created. No single company can match what the virtual corporation can achieve.
Strength of ASPs: By providing in-depth application expertise, technology experience, and the ability to provide high levels of service, the ASP organization is suited to deliver the technology excellence sought by VCs.

Utilization — Resources of the business partner are frequently underutilized or utilized in a merely satisfactory manner. In the virtual corporation, resources can be put to use more profitably, thus providing a competitive advantage.
Strength of ASPs: The economies of scale that allow ASPs to provide low-cost service require a high degree of system utilization; therefore, they are incented to partner with VCs to ensure their resources are efficiently utilized.

Opportunism — The partnership is opportunistic. A VC is organized to meet a market opportunity.
Strength of ASPs: To capitalize on opportunities in the marketplace, VCs can utilize ASPs to implement required support systems quickly.

Lack of borders — It is difficult to identify the boundaries of a virtual corporation; it redefines the traditional boundaries. For example, more cooperation among competitors, suppliers, and customers makes it difficult to determine where one company ends and another begins in the VC partnership.
Strength of ASPs: The ASP business model is characterized by many business partnerships between software vendors, systems integrators, and infrastructure providers. VC partnerships can leverage a shared data center or shared application. Because costs are determined by the number of users, technology costs can be shared among partners and not owned by one firm in the VC.

Trust — Business partners in a VC must be far more reliant on each other and require more trust than ever before. They share a sense of destiny.
Strength of ASPs: As organizations evolve into more trusting environments, this will lower some of the barriers to ASP adoption. ASPs must focus on maintaining good service levels to ensure VC customers continue to trust an ASP with valuable data and mission-critical systems.

Adaptability to change — The VC can adapt quickly to environmental changes in a given industry or market.
Strength of ASPs: ASPs in today's marketplace are constantly evolving due to the uncertainty in the industry and the pace of technology changes. Successful ASP organizations will possess an innate ability to change and assist their customers in implementing technology rapidly.

Technology — Information technology makes the VC possible. A networked information system is a must.
Strength of ASPs: Because technology is a critical component and VCs do not want to build their own IT infrastructure, the ASP service delivery model (outsourcing) is the only alternative. Additionally, the ASP model is a networked delivery system and therefore a perfect match for the VC.
XML by software vendors will allow ASPs to offer system access to many organizations in a secure and integrated environment.

POTENTIAL IT MANAGEMENT ISSUES

According to the Gartner Group, “The ASP model has emerged as one of the foremost global IT trends driving phenomenal growth in the delivery of applications services. Long term, this model will have a significant impact on IT service delivery and management.” IT organizations will have to deal with a variety of changes in the culture of the organization, make infrastructure improvements, and manage the people, technology, and business processes.

Culture Changes

With the increased adoption of the ASP delivery model, IT organizations will need to adapt culturally to being less responsible for providing technology internally and begin embracing the concept that other organizations can provide a higher degree of value. Most IT professionals take a negative view of outsourcing because successful adoption of this principle means fewer projects to manage, fewer staff members to hire, and a perception of a diminishing role in the organization. Out of self-preservation, most IT managers will view this trend with negativity and skepticism. Therefore, increased trust of service providers and software companies will be required to effectively manage the ASP relationships of the future. For IT professionals to survive the future changes ASPs promise, they must evolve from programming and technical managers to vendor managers. Additionally, they should think strategically and help position their organization to embrace the competitive advantages an ASP can provide with packaged software and quick implementations. IT leaders will need to gain a better understanding of the business they support rather than focusing on implementing the latest and greatest technologies.
With an increased understanding of the business drivers and the need for improvements in efficiency and customer service, IT practitioners must focus increasingly on business processes and on understanding how the technology provided by an ASP can increase the firm’s competitive advantage. Reward and compensation programs should be modified so that IT compensation is based on achieving business objectives rather than the successful completion of programming efforts or systems integration projects. IT managers should also work to improve their communication skills because these skills are required to interact effectively within VCs and the many partnerships they represent.
Infrastructure Changes

There are five major components of the information infrastructure. These components consist of computer hardware, general-purpose software, networks and communication facilities (including the Internet and intranets), databases, and information management personnel. Adoption of the ASP delivery model will require changes in each of these areas. Typically, organizations deploy two types of computer hardware — desktop workstations and larger server devices — which run applications and support databases. The emerging ASP model will impact both types of hardware. First, desktop systems can become “thinner” because the processing and application logic is contained in the ASP’s data center. The desktop will also become increasingly standardized, with organizations taking more control of software loaded on the workstation to ensure interoperability with the applications provided via the Internet. Second, there will be a decreasing need to purchase servers or mainframe computing environments because the ASP will provide these services. Also, support staff and elaborate data center facilities will not be required by ASP adopters. General-purpose software such as transaction processing systems, departmental systems, and office automation systems will reside at the ASP and be accessed via the network. End-user knowledge of the system’s functionality will still be required, however, and users will gain that knowledge through training provided by the ASP rather than the in-house application support staff. Programming modifications and changes will be reduced because the low customization approach of ASPs will force organizations to change business processes and map themselves to the application. As expenditures on hardware and software dwindle, infrastructure investment will be focused on better communication networks.
Next-generation networks will be required to support many types of business applications such as voice, data, and imaging, as well as the convergence of data and voice over a single network. Network infrastructures must be flexible to accommodate new standards. People required to support these advanced networks will be in high demand and difficult to retain in-house. However, ASP providers are not likely to offer internal network support directly, but may partner with third-party firms to manage the internal networks of an organization. Traditional telephone and network equipment providers do offer turnkey solutions in an outsourced delivery model today. Perhaps the most critical element of an organization’s infrastructure whose role is not yet fully defined in the ASP model is data. How quickly and in what direction this issue is addressed will be critical to the ASP industry’s future growth. Typically, data architecture was a centralized function along with communications and business architecture. XML will
be a factor in the adoption of ASPs due to the improvements XML promises in easing the integration of different systems. Organizations face an integration issue in synthesizing two different data architectures, namely, Internet data in XML format and legacy data. Another large concern for many corporations is the security of their data. Because an organization’s data is a source of strategic advantage, vital to the “core business,” many firms will not want to outsource the care and management of their data to an outside firm.

Service Level Agreements

Simply stated, the purpose of a service level agreement (SLA) is to provide the user of the service with the information necessary to understand and use the contracted services. For SLAs to be effective, they must:

• Be measured and managed
• Be audited
• Be provided at an economic price
• Give maximum value to users of the services
Additionally, they should be structured to reward behavior instead of triggering penalties in the contract. By incorporating this philosophy, ASPs are able to generate additional revenue from providing superior service and are incentivized to remain customer focused. Given that SLAs must include factors such as measurability, manageability, and auditability, performance metrics must be defined or at least very well understood. Comprehensive SLAs in any outsourcing relationship should attempt to define service levels by utilizing the five metrics presented below.

1. Availability: the percentage of time the contracted services are actually accessible over a defined measure of time
2. Reliability: the frequency with which the scheduled services are withdrawn or fail over a defined measurement period
3. Serviceability: an extension of reliability that measures the duration of available time lost between the point of service failure and service reinstatement (e.g., 95 percent of network failures will be restored within 30 minutes of initial reporting)
4. Response: the time of delay between a demand for service and the subsequent reply; response time can be measured as turnaround time
5. User satisfaction: a measure of perceived performance relative to expectation; satisfaction is often measured using a repeatable survey process to track changes over time
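The first three of these metrics can be computed directly from an outage log. The sketch below is an illustrative calculation, not part of the original chapter; the 30-minute restoration target mirrors the serviceability example above, and a real SLA would parameterize it:

```python
from dataclasses import dataclass

@dataclass
class Outage:
    """One service failure: minutes from failure to reinstatement."""
    minutes_to_restore: float

def sla_metrics(outages, period_minutes):
    """Availability, reliability, and serviceability for one measurement period."""
    downtime = sum(o.minutes_to_restore for o in outages)
    availability = 100.0 * (period_minutes - downtime) / period_minutes
    reliability = len(outages)  # number of failures in the period
    # Serviceability: share of failures restored within the 30-minute target
    restored_in_target = sum(1 for o in outages if o.minutes_to_restore <= 30)
    serviceability = 100.0 * restored_in_target / len(outages) if outages else 100.0
    return availability, reliability, serviceability

# A 30-day period (43,200 minutes) with two failures
avail, rel, svc = sla_metrics([Outage(20), Outage(45)], period_minutes=30 * 24 * 60)
print(f"Availability: {avail:.3f}%")          # Availability: 99.850%
print(f"Failures: {rel}")                     # Failures: 2
print(f"Restored within 30 min: {svc:.0f}%")  # Restored within 30 min: 50%
```

Response and user satisfaction, by contrast, require request-level timing data and survey instruments, which is why SLAs treat them as separately measured terms.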
Exhibit 2. A Comparison of Internal and External SLAs

External SLAs                      Internal SLAs
Terminology defined                Terminology is “understood”
Legalized                          Not legalized
Responsibilities defined           Responsibilities defined
Service definition precise         Service definition not precise
Processes defined                  Processes understood
Price rather than cost             Cost rather than price, if at all
Exhibit 3. Proposed Service Level Agreement Characteristics: ASP “Best of” SLAs

• Terminology is “defined” when practical, and “understood” when not
• Not legalized
• Responsibilities defined
• Service definition precise; however, also measured by metrics such as user satisfaction and service response time
• Processes understood
• Price based on service levels attained rather than flat fees with penalties for nonperformance
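The last characteristic above, pricing tied to attained service levels rather than flat fees with penalty clauses, reduces to simple arithmetic. The function below is a hypothetical illustration; the 2-percent fee adjustment per 0.1 point of availability and the 99.5 percent target are invented for the sketch, not figures from the chapter:

```python
def performance_based_fee(base_fee, attained_availability, target=99.5):
    """Scale the monthly fee around a target service level: the ASP earns a
    premium for exceeding the target and absorbs a reduction for missing it,
    instead of collecting a flat fee and paying a fixed penalty."""
    # Each 0.1 point of availability above/below target moves the fee by 2%
    adjustment = (attained_availability - target) / 0.1 * 0.02
    return round(base_fee * (1.0 + adjustment), 2)

print(performance_based_fee(10_000, 99.8))  # exceeded target: 10600.0
print(performance_based_fee(10_000, 99.2))  # missed target: 9400.0
```

Under this structure both parties face the same incentive: every increment of service quality changes revenue, which is the “reward behavior” philosophy described in the text.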
As seen in Exhibit 2, external and internal SLAs have different characteristics. Internal SLAs are generally more flexible than external SLAs. Given that contract flexibility is a key concern of ASP adopters, provider organizations can differentiate themselves based on their ability to keep SLAs flexible while still providing the needed level of contract formality. By incorporating a combination of “best practices” from external and internal SLAs, the ASP can adopt a “best of” approach, as shown in Exhibit 3. Further, to create a partnership between the ASP and the adopting organization, the ASP must focus on both the direct and the indirect aspects of SLAs. In-house IT organizations are often successful service providers due to the additional value provided by a focus on the indirect aspects. These activities build and foster a relationship with the business. This business-focused approach is critical in the IT service delivery model. Larson1 further defines the two types of SLAs as direct and indirect. The factors that comprise the direct aspects of SLAs can be characterized as the IT functions that companies are seeking in their ASP relationship (see Exhibit 4). The ASP is either the primary provider of these services or the secondary provider based on the partnering relationships in place with infrastructure providers or service aggregators. Most ASPs are unique in that they must manage SLAs with both customers and suppliers.
Exhibit 4. Other Aspects of Service Level Agreements

Direct examples: processing services; processing environments; infrastructure services; infrastructure support; other support (i.e., help desk)

Indirect examples: periodic status reviews or meetings; attendance at meetings to provide expert advice; performance reporting; testing or disaster recovery; maintenance of equipment in asset management; consulting on strategy and standards; service billing

Source: Larson.1
In addition to the direct services an ASP provides in an outsourcing agreement, it will be expected to provide other services or value to the business. The amount of expertise and indirect support should be addressed on a case-by-case basis.

THE FUTURE OF THE ASP INDUSTRY

The future of the ASP industry is heavily dependent on software vendors to:

• Provide Internet-architected applications
• Develop formal ASP distribution channel programs
• Refrain from competing against these channels
• Implement new licensing programs for the ASP customer
The evolution of contracts from a cost-per-user to a cost-per-minute service agreement is also hypothesized; however, an ASP organization will require an exceptionally large customer base to offer this type of billing program. Additionally, software packages must be flexible enough to allow the mass customization required and provide all the functionality many different types of organizations require. A large opportunity exists in other specialized business applications not addressed by the ERP and CRM vendors. ASPs that can partner with “best-of-breed” solutions for an industry, possess the industry experience, and have an existing distribution channel will succeed.

CONCLUSION

The ability to manage service levels is an important factor in determining successful ASPs as well as successful adopting organizations. All market segments of the ASP model are required to contract for service level agreements. For businesses to effectively utilize the low costs offered by ASP services, they must fully understand their business requirements and what they are paying ASPs for.
Editor’s Note: This chapter is based on two published articles by the authors: “The Future of Application Service Providers,” Information Strategy: The Executive’s Journal, Summer 2001, and “ASPs versus Outsourcing: A Comparison,” Enterprise Operations Management, August–September 2001.
References and Further Reading

1. Larson, Kent D., The Role of Service Level Agreements in IT Service Delivery, Information Management and Computer Security, June 3, 1998, pp. 128–132.
2. Caldwell, Bruce, Outsourcing Deals with Competition, Information Week, (735): 140, May 24, 1999.
3. Caldwell, Bruce, Revamped Outsourcing, Information Week, (731): 36, May 24, 1999.
4. Dean, Gary, ASPs: The Net’s Next Killer App, J.C. Bradford & Company, March 2000.
5. Gerwig, Kate, Business: The 8th Layer, Apps on Tap: Outsourcing Hits the Web, NSW, September 1999.
6. Hurley, Margaret and Schaumann, Folker, KPMG Survey: The Outsourcing Decision, Information Management and Computer Security, May 4, 1997, pp. 126–132.
7. Internet Research Group, Infrastructure Application Service Providers, Los Altos, CA, 2000.
8. Johnson, G. and Scholes, K., Exploring Corporate Strategy: Text and Cases, Prentice-Hall, Hemel Hempstead, U.K.
9. Leong, Norvin, Applications Service Provider: A Market Overview, Internet Research Group, Los Altos, CA, 2000.
10. Lonsdale, C. and Cox, A., Outsourcing: Risks and Rewards, Supply Management, July 3, 1997, pp. 32–34.
11. Makris, Joanna, Hosting Services: Now Accepting Applications, Data Communications, March 21, 1999.
12. Mateyaschuk, Jennifer, Leave the Apps to Us, Information Week, October 11, 1999.
13. McIvor, Ronan, A Practical Framework for Understanding the Outsourcing Process, Supply Chain Management: An International Journal, 5(1), 2000, pp. 22–36.
14. PA Consulting Group, International Strategic Sourcing Survey 1996, London.
15. Porter, M.E., Competitive Advantage: Creating and Sustaining Superior Performance, Free Press, New York, 1985.
16. Prahalad, C.K. and Hamel, G., The Core Competence of the Corporation, Harvard Business Review, May–June 1990, pp. 79–91.
17. Teridan, R., ASP Trends: The ASP Model Moves Closer to Prime Time, Gartner Group Research Note, January 11, 2000.
18. Turban, E., McLean, E., and Wetherbe, J., Information Technology for Management: Making Connections for Strategic Advantage, John Wiley & Sons, New York, 1999.
19. Williamson, O.E., Markets and Hierarchies, Free Press, New York, 1975.
20. Williamson, O.E., The Economic Institutions of Capitalism: Firms, Markets, and Relational Contracting, Free Press, New York, 1985.
21. Yoon, K.P. and Naadimuthu, G., A Make-or-Buy Decision Analysis Involving Imprecise Data, International Journal of Operations and Production Management, 14(2), 1994, pp. 62–69.
Section 2
Designing and Operating an Enterprise Infrastructure
DESIGNING AND OPERATING AN ENTERPRISE INFRASTRUCTURE

Issues related to the design, implementation, and maintenance of the IT infrastructure are important for every modern company and particularly challenging for larger enterprises. Very rapid technical change and increasing demands for connectivity to external systems make these tasks even more difficult. The purpose of this section is to help IS managers broaden and deepen their understanding of the following core issues:

• Managing a distributed computing environment
• Developing and maintaining the networking infrastructure
• Data warehousing
• Quality assurance and control
• Security and risk management
MANAGING A DISTRIBUTED COMPUTING ENVIRONMENT IT solutions are increasingly being built around core Internet technologies that enable the distribution of processing and data storage resources via standardized protocols. Effective management of distributed environments requires both careful technical infrastructure analysis and design, as well as appropriate allocation and structuring of organizational resources to support the chosen model of technology distribution. This section begins with three chapters that discuss the broad organizational and technical context in which modern distributed systems are managed. Chapter 16, “The New Enabling Role of the IT Infrastructure,” is based on 15 in-depth case studies on IT infrastructure development. The authors identify four key elements in the development of the IT infrastructure and three partnership processes that link the key elements together. The chapter emphasizes the importance of developing a well-defined architecture based on corporate strategy and maintaining an IT infrastructure that effectively supports organizational processes. The telecommunications and networking links that connect the geographically distant parts of an organization are critical elements of any IT infrastructure. Therefore, the tumultuous state of the telecommunications industry and the serious difficulties facing various communications equipment manufacturers and operators cause high levels of uncertainty for network users. Chapter 17, “U.S. Telecommunications Today,” provides an updated analysis of the telecommunications industry. The author also shares insights that will help managers avoid the potential pitfalls caused by the high levels of uncertainty within this critically important industry. The concept of IT infrastructure has broadened significantly in recent years because of the wide variety of devices (both mobile and stationary) that continuously communicate with each other. 
Computing has become truly ubiquitous, and wireless connectivity between mobile devices has increased the flexibility of configurations significantly. Chapter 18, “Information Everywhere,” presents a model of the new “pervasive information environment” that helps us conceptualize, understand, and address the key challenges management faces when implementing effective utilization plans for this new environment. The chapter presents numerous examples to clarify the benefits that organizations and individuals can derive from the pervasive information environment.

DEVELOPING AND MAINTAINING THE NETWORKING INFRASTRUCTURE

The networking infrastructure of an organization is increasingly tightly integrated with the rest of the IT infrastructure. In addition, the telecommunications infrastructure for voice (and potentially video) communication is merging with the data networking infrastructure. A highly reliable networking infrastructure that provides adequate capacity without interruptions is, therefore, a vitally important element of any modern infrastructure.

Chapter 19, “Designing and Provisioning an Enterprise Network,” provides a comprehensive overview of the process of redesigning an organizational network infrastructure or building one from scratch. It strongly emphasizes the importance of choosing a vendor and a design that provide the best possible return on investment, and of managing vendor relationships.

Both internal and external users of an organization’s IT infrastructure are beginning to connect to it using wireless access technologies, and many of these users are truly mobile, requiring connectivity regardless of their physical location. Chapter 20, “The Promise of Mobile Internet: Personalized Services,” presents a framework for decision makers to identify new application opportunities offered by wireless access to the Internet. The chapter provides a comprehensive analysis of the differences between fixed-line and wireless access technologies from the perspective of their potential use in organizations.
Chapter 21, “Virtual Private Networks with Quality of Service,” focuses on two of today’s fundamentally important networking technologies: (1) virtual private networks (VPNs), which make it possible to use public networking infrastructures (particularly the Internet) to implement secure private networks, and (2) quality of service (QoS) in the context of VPNs — that is, the prioritization of network traffic types based on their immediacy requirements. The convergence of voice, video, and data networks requires effective utilization of QoS mechanisms; although these technologies are still in early stages of development, gaining the full benefits of VPNs requires paying close attention to these issues.

Data storage is one area where the integration needs between networking technologies and the general IT infrastructure have expanded. Increasingly often, storage capacity is separated from servers and implemented using either storage area networks (SANs) or network attached storage (NAS) devices. Chapter 22, “Storage Area Networks Meet Enterprise Data Networks,” provides the reader with an overview of SAN technologies and describes the factors that ensure effective integration of SANs with the rest of the organization’s networking and computing infrastructure.

DATA WAREHOUSING

Data warehousing has become an essential component of an enterprise information infrastructure for most large organizations, and many small and mid-sized companies have also found data warehousing to be an effective means to provide high-quality decision support data. At the same time, many organizations are struggling to find the best approach to implement data warehousing solutions.

Chapter 23, “Data Warehousing Concepts and Strategies,” provides an introduction to data warehousing and covers issues related to its fundamental characteristics, design and construction, and organizational utilization. The effects that the introduction of Web technologies has had on data warehousing, as a foundation for modern decision support systems, are emphasized. The authors remind us that successful implementation entails not only technical issues, but also attention to a variety of organizational and managerial issues. In particular, both financial and personnel resources must be committed to the project, and sufficiently strong attention must be paid to the quality of data at the conceptual level.

Data marts, which are scaled-down versions of data warehouses that have a narrower (often departmental) scope, can be an alternative to a full-scale data warehousing solution. Chapter 24, “Data Marts: Plan Big, Build Small,” discusses how data marts can be used by organizations as an initial step. This approach requires careful planning and a clear view of the eventual goal.
When implemented well, data marts can be both a cost-effective and an organizationally acceptable solution for providing organizational decision makers with high-quality decision support data. An excellent introduction to the differences between data marts and data warehouses, which can be used for educating business managers, is also provided.

Data mining applications utilize data stored in data warehouses and transactional databases to identify previously unknown patterns and relationships. The introduction to data mining techniques and applications provided in Chapter 25, “Data Mining: Exploring the Corporate Asset,” convincingly shows why traditional, verification-based mechanisms are insufficient when analyzing very large databases. This chapter provides an excellent rationale for investing in the tools, knowledge, and skills required for successful data mining initiatives.

Chapter 26, “Data Conversion Fundamentals,” is a practical introduction to the process of data conversion and the decisions a group responsible for the conversion may face when an organization moves from one data management platform to another, develops a new application that uses historical data from an existing, incompatible system, or starts to build a data warehouse and wants to preserve old data. In addition to guiding the reader through a ten-step process, the author emphasizes the importance of ensuring data quality throughout the process and the potentially serious consequences if such issues are ignored — especially when moving data from transaction processing systems to a data warehouse.

QUALITY ASSURANCE AND CONTROL

These two chapters focus on a vitally important topic: the mechanisms needed to manage the quality of an organization’s IT infrastructure and the services it provides. Chapter 27, “Service Level Management Links IT to the Business,” highlights the new set of challenges created when shifting focus from managing service levels for a technical infrastructure (e.g., servers and networks) to managing service levels for applications and the user experience. The author points out the need to use both technical and perceptual measures when evaluating service quality. Defining and negotiating service level agreements (SLAs) is not easy, but it is essential that service level management and SLAs are closely linked to the business objectives of the organization.

Information systems audits are an important management tool for maintaining high ethical and technical standards. Chapter 28, “Information Systems Audits: What’s in It for Executives?,” presents an overview of the IS audit function and its role in organizations. Traditionally, non-auditors have viewed an audit as a negative event that is “done to” an organizational unit.
This chapter presents a more contemporary, value-added approach in which auditing is done in cooperation with business unit personnel and in support of the business unit’s quest for excellence in IS quality. At the same time, it is important to maintain the necessary independence of the audit function. The authors demonstrate how regular, successfully implemented IS audits lead to significant benefits for the entire organization.

SECURITY AND RISK MANAGEMENT

Infrastructure security requires more and more of managers’ time and attention as awareness of the need for proactive security management has grown. Chapter 29, “Cost-Effective IS Security via Dynamic Prevention and Protection,” emphasizes that even the best technologies are not enough to guarantee security if the organization does not have mechanisms in place to dynamically identify new security threats and prevent them from being realized. Risk analysis, security policies, and security audits are all necessary components of dynamic prevention and protection. However, they have to be implemented in a context that enables the adaptation of policies, technologies, and structures based on changes in real and anticipated threats. The author suggests a method to help decision makers build approaches for their organizations that take into account the specific requirements of the environments in which they operate.

In addition to protecting the IT infrastructure against external and internal intruders, recent events have made the managers responsible for maintaining the IT infrastructure highly aware of the need for business continuity planning and implementation, that is, putting in place all the mechanisms needed to ensure continuous availability of an organization’s IT resources even in the event of a major catastrophe. Chapter 30, “Reengineering the Business Continuity Planning Process,” addresses the need to progress from a narrow, traditional disaster recovery approach to a much broader business continuity planning approach. The author points out the importance of measuring the success of continuity planning and suggests that the balanced scorecard method be used for success metrics. The chapter also discusses several special issues related to Web-based systems with 24/7 availability requirements.

This section ends with two chapters that focus on specific security topics. Chapter 31, “Wireless Security: Here We Go Again,” demonstrates how many of the issues related to securing various wireless network access methods are similar to the issues that organizations faced when transitioning from mainframe systems to distributed architectures. The chapter provides a thorough review of wireless access technologies, the specific security issues associated with each of them, and the technologies that can be used to mitigate these risks.
Finally, Chapter 32, “Understanding Intrusion Detection Systems,” is an introduction to a widely utilized set of technologies for detecting attacks against network resources and, in some cases, automatically responding to those attacks. The chapter categorizes the intrusion detection technologies, discusses them in the general context of organizational security technologies, and points out some of their most significant limitations.
Chapter 16
The New Enabling Role of the IT Infrastructure Jeanne W. Ross John F. Rockart
Recently, some large companies have made very large investments in their information technology (IT) infrastructures. For example:

• Citicorp invested over $750 million for a new global database system.
• Dow Corning and most other Fortune 500 companies invested tens of millions of dollars or more to purchase and install enterprisewide resource planning systems.
• Johnson & Johnson broke with tradition by committing corporate funds to help its individual operating companies acquire standard desktop equipment.
• Statoil presented all 15,000 of its employees with a high-end computer for home or office use.

At firms all over the world, senior executives in a broad cross-section of industries are investing their time and money to shore up corporate infrastructures. In the past, many of these same executives had, in effect, given their IT units a generous allowance and admonished them to spend it wisely. Now, in contrast, they are engaging in intense negotiations over network capabilities, data standards, IT architectures, and IT funding limits. The difficulty of assessing the value of an IT infrastructure, coupled with technical jargon and business uncertainties, has made these conversations uncomfortable for most executives, to say the least. But the recognition that global markets are creating enormous demands for increased information sharing within and across firms has led to the realization that a powerful, flexible IT infrastructure has become a prerequisite for doing business.

0-8493-1595-6/03/$0.00+$1.50 © 2003 by CRC Press LLC
The capabilities built into an infrastructure can either limit or enhance a firm’s ability to respond to market conditions (Davenport and Linder, 1993). To target a firm’s strategic priorities, senior executives must shepherd the development of the infrastructure (Broadbent and Weill, 1997). Sadly, most senior executives do not feel qualified to do so. As one CEO described it: “I’ve been reading on IT, but I’m terrified. It’s the one area where I don’t feel competent.”

New infrastructure technologies are enabling new organizational forms and, in the process, creating a competitive environment that increasingly demands both standardization for cost-effectiveness and customization for responsiveness. Most firms’ infrastructures are not capable of addressing these requirements. Accordingly, firms are ripping out their old infrastructures in an attempt to provide features such as fast networks, easily accessible data, integrated supply chain applications, and reliable desktop support. At the firms that appear to be weathering this transition most successfully, senior management is leading the charge.

Over the past years, we have done in-depth studies of the development of the IT infrastructure at 15 major firms. We have examined their changing market conditions and business imperatives, and we have observed how they have recreated their IT infrastructures to meet these demands. This chapter reports on our observations and develops a framework for thinking about IT infrastructure development. It first defines IT infrastructure and its role in organizations. It then describes how some major corporations are planning, building, and leveraging new infrastructures. Finally, it describes the roles of senior, IT, and line managers in ensuring the development of a value-adding IT infrastructure.

WHAT IS AN IT INFRASTRUCTURE?
Traditionally, the IT infrastructure consisted primarily of an organization’s data center, which supported mainframe transaction processing (see Exhibit 1.) Effectiveness was assessed in terms of reliability and efficiency in processing transactions and storing vast amounts of data. Running a data center was not very mysterious, and most large organizations became good at it. Consequently, although the data center was mission critical at most large organizations, it was not strategic. Some companies, such as Frito-Lay (Mead and Linder, 1987) and Otis Elevator (McFarlan and Stoddard, 1986), benefited from a particularly clear vision of the value of this infrastructure and converted transaction processing data into decision-making information. But even these exemplary infrastructures supported traditional organizational structures, consolidating data for hierarchical decision-making purposes.

Exhibit 1. The Role of IT Infrastructure in Traditional Firms

IT infrastructures
in the data center era tended to reinforce existing organizational forms rather than enable entirely new ones.

In the current distributed processing era, the IT infrastructure has become the set of IT services shared across business units (Broadbent and Weill, 1997). Typically, these services include mainframe processing, network management, messaging services, data management, and systems security. While still expected to deliver reliable, efficient transaction processing, the IT infrastructure must also deliver capabilities such as facilitating intraorganizational communications, providing ready access to data, integrating business processes, and establishing customer linkages. Delivering capabilities through IT infrastructure is much more difficult than managing a data center. Part of the challenge is technological because many of the individual components are immature, making them both unreliable and difficult to integrate. The bigger challenge, however, is organizational, because process integration requires that individuals change how they do their jobs and, in most cases, how they think about them.

CHANGING ORGANIZATIONAL FORMS AND THE ROLE OF INFRASTRUCTURE

Historically, most organizations could be characterized as either centralized or decentralized in their organizational structures. While centralization and decentralization were viewed as essentially opposite organizational structures, they were, in fact, different manifestations of hierarchical structures in which decisions made at the top of the organization were carried out at lower levels (see Exhibit 2.)

Exhibit 2. Traditional Organizational Models

Decentralized organizations differed from centralized ones in that more decision making was pushed down the hierarchy, but communication patterns were still vertical, and decisions involving two business units were usually made at a higher level, so that business units rarely recognized any interdependencies. Centralization and decentralization posed significant trade-offs in terms of their costs and benefits. Simply stated, centralization offered economies of scale while decentralization allowed firms to be more responsive to individual customers. Thus, the degree to which any firm was centralized or decentralized depended on which of these benefits offered the most value.

As global markets have forced firms to speed up decision making and to recognize both the global scope of their customers and their unique demands, firms have found it increasingly important to garner the benefits of both centralization and decentralization simultaneously. Johnson & Johnson and Schneider National demonstrate how firms are addressing this challenge.

Johnson & Johnson

For almost 100 years, Johnson & Johnson (J&J), a global consumer and healthcare company, achieved success as a decentralized firm (Ross, 1995a). Both J&J management and external analysts credited the autonomy of the firm’s approximately 160 operating companies with stimulating innovation and growth. In the late 1980s, however, top management
observed that a new breed of customer was emerging, and those customers had no patience for the multiple salespersons, invoices, and shipments characteristic of doing business with multiple J&J companies. For example, executives at Wal-Mart, the most powerful of the U.S. retailers, noted that J&J companies were sending as many as 17 different account representatives in a single month. In the future, Wal-Mart mandated, J&J should send just one.

In response, J&J created customer teams to service each of its largest multi-business accounts. The teams consolidated data on sales, distribution, accounts receivable, and customer service from the operating companies and presented a single face to the customer. Initially, much of the reconciliation among the businesses required manipulating spreadsheets populated with manually entered data. Ultimately, it meant that J&J would introduce complex structural changes that would link its independent operating companies through franchise management, regional organizations, and market-focused umbrella companies.

Schneider National

In contrast, Schneider National, following deregulation of the U.S. trucking industry in 1980, relied on a highly centralized organizational structure to become one of the country’s most successful trucking companies. Schneider leveraged its efficient mainframe environment, innovative operations models, centralized databases, and, later, satellite tracking capabilities to provide its customers with on-time service at competitive prices. By the early 1990s, however, truckload delivery had become a commodity. Intense price competition convinced Schneider management that it would be increasingly difficult to grow sales and profits. Schneider responded by moving aggressively into third-party logistics, taking on the transportation management function of large manufacturing companies (Ross, 1995b).
To succeed in this market, management recognized the need to organize around customer-focused teams where operating decisions were made at the customer interface. To make this work, Schneider installed some of its systems and people at customer sites, provided customer interface teams with powerful desktop machines to localize customer support, and increasingly bought services from competitors to meet the demands of its customers.

Pressures toward Federalist Forms

These two firms are rather dramatic examples of a phenomenon that most large firms are encountering. New customer demands and global competition require that business firms combine the cost efficiency and tight integration afforded by centralized structures with the creativity and customer intimacy afforded by decentralized structures. Consequently, many firms are adopting “federalist” structures (Handy, 1992) in which they push much decision making out to local sites. In federalist firms, individuals at the customer interface become accountable for meeting customer needs, while the corporate unit evolves to become the “core” rather than headquarters (see Exhibit 3.)

Exhibit 3. Federalist Organizational Model

The role of the core unit in these firms is to specify and develop the core competencies that enable the firm to foster a unique identity and generate economies of scale (Hamel and Prahalad, 1990; Stalk, Evans, and Shulman, 1992). Federalist firms require much more horizontal decision making to apply shared expertise to complex problems and to permit shared resources among interdependent business units (Quinn, 1992). Rather than relying on hierarchical processes to coordinate the interdependencies of teams, these firms utilize shared goals, dual reporting relationships, incentive systems that recognize competing objectives, and common processes (Handy, 1992). Management techniques such as these require greatly increased information sharing in organizations, and it is the IT infrastructure that is expected to enable the necessary information sharing. However, an edict to increase information sharing does not, in itself, enable effective horizontal processes. To ensure that investments in information technology generate the anticipated benefits, IT infrastructure must become a top management issue.
Exhibit 4. The IT Infrastructure Pyramid

ELEMENTS OF INFRASTRUCTURE MANAGEMENT

At the firms in our study, we observed four key elements in the design and implementation of the IT infrastructure: organizational systems and processes, infrastructure services, the IT architecture, and corporate strategy. These build on one another (as shown in Exhibit 4) such that corporate strategy provides the basis for establishment of the architecture, while the architecture guides decisions on the infrastructure, which in turn provides the foundation for the organizational systems and processes.

Corporate Strategy

The starting point for designing and implementing an effective infrastructure is the corporate strategy. This strategy defines the firm’s key competencies and how the firm will deliver them to customers. Many large, decentralized firms such as J&J have traditionally had general corporate strategies that defined a firm-wide mission and financial performance goals but allowed individual business units to define their own strategies for meeting customer needs. In the global economy, these firms are focusing on developing firm-wide strategies for addressing global customer demands and responding to global competition.
For purposes of developing the IT infrastructure, senior management must have an absolutely clear vision of how the organization will deliver on its core competencies. General statements of financial and marketing goals do not provide the necessary precision to develop a blueprint for the foundation that will enable new organizational processes. The necessary vision is a process vision in which senior management actually “roughs out” the steps involved in key decision-making and operational processes. Based on a clear vision of how it would service customers, Federal Express developed its Powership product, which allows any customer — be it an individual or a major corporation — to electronically place and track an order. Similarly, JC Penney’s internal management support system evolved from a clear vision of the process by which store managers would make decisions about inventory and sales strategies. This process included an understanding of how individual store managers could learn from one another’s experiences. Such a clear vision of how the firm will function provides clear prescriptions for the IT infrastructure.

A corporate strategy that articulates key processes is absolutely essential for designing an IT infrastructure because otherwise neither IT nor business management can define priorities. The vision peels back corporate complexities so that the infrastructure is built around simple, core processes. This peeling provides a solid foundation that can adapt to the dynamics of the business environment. Some firms have attempted to compensate for a lack of clarity in corporate goals by spending more money on their infrastructures.
Rather than determining what kinds of communications they most need to enable, they invest in state-of-the-art technologies that should allow them to communicate with “anyone, anytime, anywhere.” Rather than determining what data standards are most crucial for meeting immediate customer needs, they attempt to design all-encompassing data models. This approach to infrastructure building is expensive and generally not fruitful. Money is not a good substitute for direction.

IT Architecture

The development of an IT architecture involves converting the corporate strategy into a technology plan. It defines both the key capabilities required from the technology infrastructure and the places where the technologies, the management responsibility, and the support will be located. Drawing on the vision of the core operating and decision-making processes, the IT architecture identifies what data must be standardized corporatewide and what will be standardized at a regional level. It then specifies where data will be located and how they will be accessed. Similarly, the architecture differentiates between processes that must be standardized across locations and processes that must be integrated.

The architecture debate is a critical one for most companies because the natural tendency, where needed capabilities are unclear, is to assume that extensive technology and data standards and firm-wide implementation of common systems will prepare the firm for any eventuality. In other words, standard setting serves as a substitute for architecture. Standards and common systems support many kinds of cross-business integration and provide economies of scale by permitting central support of technologies. However, unnecessary standards and common systems limit business unit flexibility, create resistance and possibly ill will during implementation, prove difficult to sustain, and are expensive to implement. The elaboration of the architecture should help firms distinguish between capabilities that are competitive necessities and those that offer strategic advantage. It guides decisions on trade-offs between reliability and state of the art, between function and cost, and between buying and building. Capabilities recognized as strategic are those for which a firm can justify using state-of-the-art technologies, de-emphasizing standards in favor of function, and building rather than buying.

IT Infrastructure

Although firms’ architectures are orderly plans of the capabilities that their infrastructures should provide, infrastructures themselves tend to be in a constant state of upheaval. At many firms, key elements of the IT infrastructure have been in place for 20 to 30 years. Part of the infrastructure rebuilding process is recognizing that the fast pace of business change means that such enduring infrastructure components will be less common.
Architectures evolve slowly in response to major changes in business needs and technological capabilities, but infrastructures are implemented in pieces, with each change introducing the opportunity for more change. Moreover, because infrastructures are the base on which many individual systems are built, changes to the infrastructure often disrupt an uneasy equilibrium. For example, as firms implement enterprisewide systems, they often temporarily replace automated processes with manual processes (Ross, 1997a). They may need to construct temporary bridges between systems as they deliver individual pieces of large, integrated systems or foundation databases. Some organizations have tried to avoid the chaos created by temporary fixes by totally replacing big pieces of infrastructure at one time. But infrastructure implementations require time for organizational learning as the firm adapts to new capabilities. “Big bang” approaches to infrastructure implementations are extremely risky. Successful companies often rely on incremental changes to move them toward their defined architectures, minimizing the number of major changes that they must absorb.

For example, Travelers Property & Casualty grasped the value of incremental implementations while developing its object-oriented infrastructure. In attempting to reuse some early objects, developers sometimes had to reengineer existing objects because new applications clarified their conceptualizations. But developers at Travelers note that had they waited to develop objects until they had perfected the model, they never would have implemented anything (Ross, 1997c). Stopping, starting, and even backing up are part of the learning process inherent in building an infrastructure.

Organizational Systems and Processes

Traditionally, organizations viewed their key systems and processes from a functional perspective. Managers developed efficiencies and sought continuous improvement within the sales and marketing, manufacturing, and finance functions, and slack resources filled the gaps between the functions. New technological capabilities and global markets have emphasized three very different processes: (1) supply chain integration, (2) customer and supplier linkages, and (3) leveraging of organizational learning and experience.

For many manufacturing firms, supply chain integration is the initial concern. To be competitive, they must remove the excess cost and time between the placement of an order and the delivery of the product and receipt of payment. The widespread purchase of all-encompassing enterprisewide resource planning (ERP) systems is testament to both the perceived importance of supply chain integration to these firms and the conviction that their existing infrastructures are inadequate. Supply chain integration requires a tight marriage between organizational processes and information systems.
ERP provides the scaffolding for global integration, but a system cannot be implemented until management can describe the process apart from the technology. At the same time, firms are recognizing the emergence of new channels for doing business with both customers and suppliers. Where technology allows faster or better customer service, firms are innovating rapidly. Thus, being competitive means gaining enough organizational experience to be able to leverage such technologies as electronic data interchange and the World Wide Web, and sometimes even installing and supporting homegrown systems at customers’ sites. Finally, many firms are looking for ways to capture and leverage organizational learning. As distributed employees attempt to customize a firm’s core competencies for individual customers, they can increase their effectiveness if they can learn from the firm’s accumulated experiences.
The New Enabling Role of the IT Infrastructure
Exhibit 5. Partnership Processes in Infrastructure Development
The technologies for storing and retrieving these experiences are at hand, but the processes for making that happen are still elusive. Firms that adapt and improve on these three processes can be expected to outperform their competitors. It is clear that to do so will require a unique combination of a visionary senior management team, a proactive IT unit, and a resourceful workforce. Together they can iteratively build, evaluate, redesign, and enhance their processes and supporting systems.

IMPLEMENTING AND SUSTAINING THE INFRASTRUCTURE

It is clear that the top and bottom layers of the IT pyramid are primarily the responsibility of business managers, whereas the middle layers are the responsibility of IT managers. Three partnership processes provide the glue between the layers, as shown in Exhibit 5.

Communication and Education

The process of moving from a strategy to an IT architecture involves mutual education of senior business and IT managers. Traditional approaches to education, such as lectures, courses, conferences, and readings, are all useful. Most important, however, is that management schedules IT-business contact time in which the focus of the discussion is business strategy and IT capability. For example, at Schneider Logistics, senior business managers meet formally with IT managers for two hours each week. This allows IT management to identify opportunities while senior management specifies priorities and targets IT resources accordingly. Thus, the IT architecture debate is a discussion among senior managers with insights and advice from the IT unit. Senior management articulates evolving strategies for organizational processes, whereas IT clarifies capabilities of the technologies. A key role of IT becomes one of explaining the potential costs of new capabilities. Typical return-on-investment computations are often not meaningful in discussions of infrastructure development, but senior managers need to know the size of an investment and the accompanying annual support costs for new capabilities before they commit to large infrastructure investments. To avoid getting bogged down in arguments over who would pay for new capabilities, some firms have made “speed-bump” investments. Texas Instruments (TI), for example, traditionally funded infrastructure by attaching the cost of incremental infrastructure requirements to the application development project that initiated the need. But when the corporate network proved inadequate for a host of capabilities, senior management separately funded the investment (Ross, 1997b). In this way, TI avoided the inherent delays that result from investing in infrastructure only when the business units can see specific benefits that warrant their individual votes in favor of additional corporate taxes.

Technology Management

Moving from the architecture to the infrastructure involves making technology choices.
Senior managers need not be involved in discussions of the technologies themselves as long as they understand the approximate costs and risks of introducing new capabilities. Instead, core IT works with local IT or business liaisons who can discuss the implications of technology choices. Selecting specific technologies for the corporate infrastructure involves setting standards. Local IT staff must understand those choices so that they can, on the one hand, comply with standards and, on the other hand, communicate any negative impacts of those choices. Standards will necessarily limit the range of technologies that corporate IT will support. This enables the IT unit to develop expertise in key technologies and limits the costs of supporting the IT infrastructure. However, some business units have unique needs that corporate standards do not address. Negotiation between corporate and local IT managers should allow them to recognize when deviations from standards can enhance business unit operations without compromising corporatewide goals. IT units
that clearly understand their costs have an edge in managing technologies because they are able to discuss with business managers the value of adherence to standards and the trade-offs inherent in noncompliance (Ross, Vitale, and Beath, 1997).

Process Redesign

Although the infrastructure can enable new organizational forms and processes, the implementation of those new processes is dependent on the joint efforts of business unit and IT management. Successful process redesign demands that IT and business unit management share responsibility and accountability for such processes as implementing common systems, establishing appropriate customer linkages, defining requirements for knowledge management, and even supporting desktop technologies. The joint accountability is critical to successful implementation because the IT unit can only provide the tools. Business unit management needs to provide the vision and leadership for implementing the redesigned processes (Davenport, 1992). Many process changes are wrenching. In one firm we studied, autonomous general managers lost responsibility for manufacturing in order to enable global rationalization of production. Initially, these managers felt they had been demoted to sales managers. A fast-food firm closed the regional offices from which the firm had audited and supported local restaurants. Regional managers reorganized into cross-functional teams and, armed with portable computers, took to the road to spend their time visiting local sites. In these and other firms, changes rarely unfolded as expected. In most cases, major process changes take longer to implement, demand more resources, and encounter more resistance than management expects.

IMPLICATIONS OF INFRASTRUCTURE REBUILDING

We observed significant obstacles to organizations’ attempts to build IT infrastructures to enable new federalist structures.
Most of the changes these firms were implementing involved some power shifts, which led to political resistance. Even more difficult to overcome, however, was the challenge of clarifying the firm’s strategic vision and defining IT priorities. This process proved to be highly iterative. Senior management would articulate a vision and then IT management would work through the apparent technological priorities that the strategy implied. IT could then estimate time, cost, and both capabilities and limitations. This would normally lead to an awareness that the strategy was not clear enough to formulate an IT architecture. When the organization had the necessary fortitude, management would continue to iterate the strategy and architecture, but most abandoned the task midstream and the IT unit was left trying to establish priorities and implement an architecture that lacked clear management support. This would lead either to expensive efforts to install an infrastructure that met all possible needs or to limited investment in infrastructure that was not strategically aligned with the business (Henderson and Venkatraman, 1993). Although it is difficult to hammer out a clear architecture based on corporate strategy and then incrementally install an IT infrastructure that supports redesigned organizational processes, the benefits appear to be worth the effort. At Travelers, the early adoption of an object environment has helped it retain a high-quality IT staff and allowed it to anticipate and respond to changing market opportunities. Johnson & Johnson’s development of a corporatewide infrastructure has allowed it to address global cost pressures and to respond to the demands of global customers. Senior management sponsorship of global systems implementations at Dow Corning has enabled the firm to meet due dates for implementation and anticipate potential process redesign. As firms look for opportunities to develop competitive advantage, they find it is rarely possible to do so through technological innovations alone (Clark, 1989). However, the firms in this study were attempting to develop infrastructures that positioned them to implement new processes faster and more cost effectively than their competitors. This kind of capability is valuable, rare, and difficult for competitors to imitate. Thus, it offers the potential for long-term competitive advantage (Collis and Montgomery, 1995). Rebuilding an infrastructure is a slow process. Firms that wait to see how others fare in their efforts may reduce their chances for having the opportunity to do so.

Notes

1. Broadbent, M. and Weill, P. 1997. Management by maxim: How business and IT managers can create IT infrastructures. Sloan Management Review, 38(3): 77–92.
2. Clark, K.B. 1989. What strategy can do for technology.
Harvard Business Review (November–December): 94–98.
3. Collis, D.J. and Montgomery, C.A. 1995. Competing on resources: Strategy in the 1990s. Harvard Business Review, 73 (July–August): 118–129.
4. Davenport, T.H. 1992. Process Innovation: Reengineering Work Through Information Technology. Boston: Harvard Business School Press.
5. Davenport, T.H. and Linder, J. 1993. Information management infrastructure: The new competitive weapon? Ernst & Young Center for Business Innovation Working Paper CITA33.
6. Hamel, G. and Prahalad, C.K. 1990. The core competence of the corporation. Harvard Business Review, 68 (May–June).
7. Handy, C. 1992. Balancing corporate power: A new federalist paper. Harvard Business Review, 70 (November–December): 59–72.
8. Henderson, J.C. and Venkatraman, N. 1993. Strategic alignment: Leveraging information technology for transforming organizations. IBM Systems Journal, 32(1): 4–16.
9. McFarlan, F.W. and Stoddard, D.B. 1986. Otisline. Harvard Business School Case No. 9-186304.
10. Mead, M. and Linder, J. 1987. Frito-Lay, Inc.: A strategic transition. Harvard Business School Case No. 9-187-065.
11. Quinn, J.B. 1992. Intelligent Enterprise: A Knowledge and Service Paradigm for Industry. New York: Free Press.
12. Ross, J.W. 1995a. Johnson & Johnson: Building an infrastructure to support global operations. CISR Working Paper No. 283.
13. Ross, J.W. 1995b. Schneider National, Inc.: Building networks to add customer value. CISR Working Paper No. 285.
14. Ross, J.W. 1997a. Dow Corning: Business processes and information technology. CISR Working Paper No. 298.
15. Ross, J.W. 1997b. Texas Instruments: Service level agreements and cultural change. CISR Working Paper No. 299.
16. Ross, J.W. 1997c. The Travelers: Building an object environment. CISR Working Paper No. 301.
17. Ross, J.W., Vitale, M.R., and Beath, C.M. 1997. The untapped potential of IT chargeback. CISR Working Paper No. 300.
18. Stalk, G., Evans, P., and Schulman, L.E. 1992. Competing on capabilities: The new rules of corporate strategy. Harvard Business Review, 70 (March–April): 57–69.
Chapter 17
U.S. Telecommunications Today
Nicholas Economides
This chapter examines conditions in the U.S. telecommunications sector as of October 2002. It examines the impact of technological and regulatory change on market structure and business strategy. Among other topics, it discusses the emergence and decline of the telecom bubble, the impact of digitization on pricing, and the emergence of Internet telephony. The chapter briefly examines the impact of the 1996 Telecommunications Act on market structure and strategy in conjunction with the history of regulation and antitrust intervention in the telecommunications sector. After discussing the impact of wireless and cable technologies, the chapter concludes by venturing into some short-term predictions. There is concern about the derailment of the implementation of the 1996 Act by the aggressive legal tactics of the entrenched monopolists (the local exchange carriers), and we point to the real danger that the intent of the U.S. Congress in passing the 1996 Act, namely to promote competition in telecommunications, will not be realized. The chapter also discusses the wave of mergers in the telecommunications and cable industries.

INTRODUCTION

Presently, the U.S. telecommunications sector is going through a revolutionary change. There are four reasons for this. The first reason is the rapid technological change in key inputs of telecommunications services and in complementary goods, which has dramatically reduced the costs of traditional services and has made many new services available at reasonable prices. Cost reductions have made feasible the World Wide Web (WWW) and the various multimedia applications that “live” on it. The second reason for the revolutionary change has been the sweeping digitization of the telecommunications and related sectors. The underlying telecommunications technology has become digital. Moreover, the consumer and business telecommunications interfaces have become more versatile and closer to multifunction computers than to traditional telephones. Digitization and integration of telecommunications services with computers create significant business opportunities and impose significant pressure on traditional pricing structures, especially in voice telephony. The third reason for the current upheaval in the telecommunications sector was the passage of a major new law to govern telecommunications in the United States, the Telecommunications Act of 1996 (the 1996 Act). Telecommunications has traditionally been subject to a complicated federal and state regulatory structure. The 1996 Act attempted to adapt the regulatory structure to technological reality, but various legal challenges by the incumbents have thus far delayed, if not nullified, its impact. The fourth reason is the “bubble” in telecommunications investment and in the valuation of telecommunications companies in the years 1997 to 2000, and the deflation of that bubble since late 2000.

As one looks at the telecommunications sector in the fall of 2002, one observes:

• The collapse of prices in the long-distance (LD) sector, precipitating the bankruptcy of WorldCom, the collapse of the stock prices of long-distance companies, and the voluntary divestiture of AT&T. This comes naturally, given the tremendous excess capacity in long distance from new carriers’ investment and from the huge expansion of Internet backbones, which are very close substitutes (in production) for traditional long distance.
• The fast, but not fast enough, growth of the Internet. In terms of bits transferred, the Internet has been growing at 100 percent a year rather than the 400 percent a year that was earlier predicted. As a result, huge excess capacities in Internet backbone and in long-distance transmission were created.
The rush to invest in backbones created a huge expansion and then, once the final demand did not materialize, the collapse of the telecom equipment sector.
• The bankruptcy of many entrants in local telecommunications, such as Covad. The reason for this was the failure of the implementation of the Telecommunications Act of 1996.
• A wave of mergers and acquisitions.

Before going into a detailed analysis, it is important to point out the major, long-run driving forces in U.S. telecommunications today. These include:

• Dramatic reductions in the costs of transmission and switching
• Digitization
• Restructuring of the regulatory environment through the implementation of the 1996 Telecommunications Act, coming 12 years after the breakup of AT&T
• Movement of value from underlying services (such as transmission and switching) to the interface and content
• Movement toward multifunction programmable devices with programmable interfaces (such as computers) and away from single-function, nonprogrammable consumer devices (such as traditional telephone appliances)
• Reallocation of the electromagnetic spectrum, allowing for new types of wireless competition
• Interconnection and interoperability of interconnected networks; standardization of communications protocols
• Network externalities and critical mass

These forces have a number of consequences, including:

• Increasing pressure for cost-based pricing of telecommunications services
• Price arbitrage between services with the same time-immediacy requirement
• Increasing competition in long-distance services
• The possibility of competition in local services
• The emergence of Internet telephony as a major new telecommunications technology

This short chapter touches on technological change and its implications in the next section. It then discusses the Telecommunications Act of 1996 and its implications, followed by a review of the impact of wireless and cable technologies. The chapter concludes with some predictions and short-term forecasts for the U.S. telecommunications sector.

TECHNOLOGICAL CHANGE

The past two decades have witnessed (1) dramatic reductions in the costs of transmission through the use of technology; (2) reductions in the costs of switching and information processing because of large reductions in the costs of integrated circuits and computers; and (3) very significant improvements in software interfaces. Cost reductions and better interfaces have made feasible many data- and transmission-intensive services.
These include many applications on the World Wide Web, which were dreamed of many years ago but only recently became economically feasible. The general trend in cost reductions has allowed for the entry of more competitors in many components of the telecommunications network and an intensification of competition. Mandatory interconnection of public telecommunications networks and the use of common standards for interconnection and interoperability have created a “network of networks,” that is, a web of interconnected networks. The open architecture of the network of networks allowed for entry of new competitors in markets for particular components, as well as in markets for integrated end-to-end services. Competition intensified in many, but not all, markets.

Digital Convergence and “Bit Arbitrage”

Entry and competition were particularly helped by (1) the open architecture of the network and (2) its increasing digitization. Currently, all voice messages are digitized close to their origination and are carried in digital form over most of the network. Thus, the data and voice networks are one, with voice treated as data with specific time requirements. This has important implications for pricing and market structure. Digital bits (zeros or ones) traveling on the information highway can be parts of voice, still pictures, video, or of a database or other computer application, and they appear identical: “a bit is a bit is a bit.” However, because some demands are for real-time services while others are not, the saying that “a bit is a bit is a bit” is only correct among services that have the same index of time immediacy. Digitization implies arbitrage on the price of bit transmission among services that have the same time-immediacy requirements. For example, voice telephony and video conferencing require real-time transmission and interaction. Digitization implies that the cost of transmitting voice is hundreds of times smaller than the cost of transmitting video of the same duration. This implies that if regulation-imposed price discrimination is eliminated, arbitrage on the price of bits will occur, leading to extremely low prices for services, such as voice, that use relatively very few bits.
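The voice-versus-video comparison above can be made concrete with a back-of-the-envelope calculation. The bitrates below are illustrative assumptions, not figures from the chapter: 8 kbps is a typical compressed-voice rate (e.g., G.729), and 1,500 kbps is a plausible rate for a conference-video stream of the era.

```python
# Rough sketch of "bit arbitrage": if every bit is priced the same,
# per-minute prices should track per-minute bit counts.
VOICE_KBPS = 8         # assumed compressed telephone voice (e.g., G.729)
VIDEO_KBPS = 1_500     # assumed modest-quality video-conferencing stream

def bits_per_minute(kbps: float) -> int:
    """Total bits carried in one minute at a rate given in kilobits/second."""
    return int(kbps * 1_000 * 60)

ratio = VIDEO_KBPS / VOICE_KBPS
print(f"A video minute carries {ratio:.1f}x the bits of a voice minute.")
# Under uniform per-bit pricing, a voice minute would therefore cost only
# a small fraction of a video minute: the arbitrage pressure described above.
```

With these assumed rates the ratio is on the order of a couple of hundred, consistent with the chapter's claim that voice costs "hundreds of times" less to transmit than video of the same duration.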
Even if price discrimination remains imposed by regulation, arbitrage in the cost and pricing of bits will lead to pressures for a de facto elimination of discrimination. This creates significant profit opportunities for the firms that are able to identify the arbitrage opportunities and exploit them.

Internet Telephony

Digitization of telecommunications services imposes price arbitrage on the bits of information carried by the telecommunications network, thus leading to the elimination of price discrimination between voice and data services. This can lead to dramatic reductions in the price of voice calls, thereby precipitating significant changes in market structure. These changes were first evident in the emergence of the Internet, a ubiquitous network of applications based on the TCP/IP protocol suite. Started as a text-based network for scientific communication, the Internet grew dramatically in the late 1980s and 1990s once applications that went beyond plain text
became available.1 In 2001, the Internet reached 55 percent of U.S. households, while 60 percent of U.S. households had PCs. Of the U.S. households connected to the Internet, 90 percent used a dial-up connection and 10 percent reached the Internet through a broadband service, which provides at least eight times the bandwidth of a dial-up connection. Of those connecting to the Internet with broadband, 63 percent used a cable modem connection, 36 percent used DSL, and 1 percent used a wireless connection.

Internet-based telecommunications are based on packet switching. There are two modes of operation: (1) a time-delay mode, in which the system guarantees that it will do whatever it can to deliver all packets; and (2) a real-time mode, in which packets can be lost without possibility of recovery. Most telecommunications services do not have a real-time requirement, so applications that “live” on the Internet can easily accommodate them. For example, there are currently a number of companies that provide facsimile services on the Internet, where all or part of the transport of the fax takes place over the Internet. Although the Internet was not intended to be used for real-time telecommunications, telecommunications companies presently use the Internet to complete ordinary voice telephone calls despite the loss of packets. Voice telecommunications service started on the Internet as computer-to-computer calling. As long as Internet telephony was confined to calls from a PC to a PC, it failed to take advantage of the huge network externalities of the public switched telephone network (PSTN) and remained little more than a hobby. About seven years ago, Internet telecommunications companies started offering termination of calls on the public switched network, thus taking advantage of the immense externalities of reaching anyone on the PSTN.
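The real-time mode described above simply drops packets rather than recovering them, which is why carriers send redundant copies of voice frames. A minimal simulation sketches the effect; the 5 percent loss rate and the frame count are illustrative assumptions, not figures from the chapter.

```python
import random

def delivered_fraction(loss_rate: float, copies: int,
                       frames: int = 10_000, seed: int = 42) -> float:
    """Send each voice frame `copies` times over a link that independently
    drops packets with probability `loss_rate`; a frame is audible if at
    least one copy arrives. Returns the fraction of audible frames."""
    rng = random.Random(seed)
    ok = sum(
        any(rng.random() > loss_rate for _ in range(copies))
        for _ in range(frames)
    )
    return ok / frames

# With 5% loss, a single transmission loses about 5% of frames; sending
# every frame twice cuts the expected loss to 0.05**2 = 0.25%.
print(delivered_fraction(0.05, copies=1))
print(delivered_fraction(0.05, copies=2))
```

This is the intuition behind factor (2) in the quality discussion that follows: retransmitting a message n times drives the effective loss rate toward the per-packet loss rate raised to the power n, at the cost of extra bandwidth.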
In 1996, firms started offering Internet calling that originated and terminated on the public switched network, that is, from and to regular customers’ phone appliances. These two transitions became possible with the introduction of PSTN–Internet interfaces and switches by Lucent and others. In 1998, Qwest and others started using Internet Protocol (IP) switching to carry telephone calls from and to the PSTN, using their own networks for long-distance transport as an intranet.2

Traditional telephony keeps a channel of fixed bandwidth open for the duration of a call. Internet calls are packet based. Because transmission is based on packet transport, IP telephony can utilize bandwidth more efficiently by varying in real-time the amount used by a call. But, because IP telephony utilizes the real-time mode of the Internet, there is no guarantee that all the packets of a voice transmission will arrive at the destination. Internet telephony providers use sophisticated voice sampling methods to decompose and reconstitute voice so that packet losses do not make a significant audible difference. Because such methods are by their nature imperfect, the quality and fidelity of an Internet call depend crucially on the percentage of packets that are lost in transmission and transport. This, in turn, depends on, among other factors, (1) the allocation of Internet bandwidth (pipeline) to the phone call, and (2) the number of times the message is transmitted.3 Because of these considerations, one expects that two types of Internet telephony will survive: a low-end service, carried over the public Internet, with lost packets and low fidelity; and a service of quality comparable to traditional long distance, carried on a company’s intranet for the long-distance leg.

Internet-based telecommunications services pose a serious threat to traditional national and international long-distance service providers. In the traditional U.S. regulatory structure, a call originating from a computer to an Internet service provider (ISP) (or terminating from an ISP to a computer) is not charged an “access charge” by the local exchange carrier. This can lead to substantial savings for the consumer. The FCC, in its decision of February 25, 1999, muddied the waters by finding, on the one hand, that “Internet traffic is intrinsically mixed and appears to be largely interstate in nature,” while, on the other hand, validating the reciprocal compensation of ISPs, which was made under the assumption that customer calls to ISPs are treated as local calls. If Internet calls are not classified as local calls, the price that most consumers would have to pay to make Internet calls would include a significant per-minute charge.
Because it is difficult to distinguish between phone calls carried over the Internet and other Internet traffic, such pricing will either be infeasible or will have to apply to all other Internet traffic, thereby creating a threat to the fast growth of the Internet. In fact, one of the key reasons for Europe’s lag in Internet adoption is the fact that in most countries, unlike the United States, consumers are charged per minute for local calls. The increasing use of broadband connections is changing the model toward fixed monthly fees in Europe.

THE TELECOMMUNICATIONS ACT OF 1996 AND ITS IMPACT

Goals of the Act

The Telecommunications Act of 1996 (the 1996 Act) attempted a major restructuring of the U.S. telecommunications sector. The 1996 Act will be judged favorably to the extent that it allows and facilitates the acquisition by consumers of the benefits of technological advances. Such a function requires the promotion of competition in all markets. This does not mean immediate and complete deregulation. Consumers must be protected from monopolistic abuses in some markets as long as such abuses are feasible
under the current market structure. Moreover, the regulatory framework must safeguard against firms exporting their monopoly power into other markets. In passing the Telecommunications Act of 1996, the U.S. Congress took radical steps to restructure U.S. telecommunications markets. These steps may result in very significant benefits to consumers of telecommunications services, telecommunications carriers, and telecommunications equipment manufacturers. But the degree of success of the 1996 Act depends crucially on its implementation through decisions of the Federal Communications Commission (FCC) and state public utility commissions, as well as on the outcome of the various court challenges that these decisions face. The 1996 Act envisions a network of interconnected networks, composed of complementary components that generally provide both competing and complementary services. The 1996 Act uses both structural and behavioral instruments to accomplish its goals. The Act attempts to reduce regulatory barriers to entry and competition. It outlaws artificial barriers to entry in local exchange markets in its attempt to accomplish the maximum possible competition. Moreover, it mandates interconnection of telecommunications networks, unbundling, nondiscrimination, and cost-based pricing of leased parts of the network, so that competitors can enter easily and compete component by component and service by service. The 1996 Act imposes conditions to ensure that de facto monopoly power is not exported to vertically related markets. Thus, the 1996 Act requires that competition be established in local markets before the incumbent local exchange carriers are allowed into long distance. The 1996 Act preserves subsidized local service to achieve “Universal Service,” but imposes the requirement that subsidization be transparent and that subsidies be raised in a competitively neutral manner.
Thus, the 1996 Act leads the way to the elimination of the subsidization of Universal Service through the traditional method of high access charges. The 1996 Act crystallized changes that had become necessary because of technological progress. Rapid technological change has always been the original cause of regulatory change. The radical transformation of the regulatory environment and market conditions that is presently taking place as a result of the 1996 Act is no exception.

History

Telecommunications has traditionally been a regulated sector of the U.S. economy. Regulation was imposed in the early part of the twentieth century and remains today in various parts of the sector.4 The main idea behind regulation was that it was necessary because the market for telecommunications
services was a natural monopoly, and therefore a second competitor would not survive. As early as 1900, it was clear that not all telecommunications markets were natural monopolies, as evidenced by the existence of more than one competing firm in many regional markets prior to the absorption of most of them into the Bell System. Over time, it became clear that some markets that may have been natural monopolies in the past are not natural monopolies anymore, and that it is better to allow competition in those markets while keeping the rest regulated. The markets for telecommunications services and for telecommunications equipment went through various stages of competitiveness after the invention of the telephone by Alexander Graham Bell. After a period of expansion and consolidation, by the 1920s AT&T had an overwhelming majority of telephony exchanges and submitted to state regulation. Federal regulation was instituted by the Communications Act of 1934, which established the Federal Communications Commission. Regulation of the U.S. telecommunications market was marked by two important antitrust lawsuits that the U.S. Department of Justice brought against AT&T. In the first one, United States v. Western Electric, filed in 1949, the U.S. Department of Justice (DoJ) claimed that the Bell Operating Companies practiced illegal exclusion by buying only from Western Electric, a part of the Bell System. The government sought a divestiture of Western Electric, but the case was settled in 1956 with AT&T agreeing not to enter the computer market while retaining ownership of Western Electric. The second major antitrust suit, United States v. AT&T, started in 1974. The government alleged that (1) AT&T’s relationship with Western Electric was illegal, and (2) AT&T monopolized the long-distance market. The DoJ sought divestiture of both manufacturing and long distance from local service.
The case was settled by the Modification of Final Judgment (MFJ). This decree broke away from AT&T seven regional Bell operating companies (RBOCs), each comprising a collection of local telephone companies that had been part of the original AT&T. The RBOCs remained regulated monopolies, each with an exclusive franchise in its region.

Microwave transmission was a major breakthrough in long-distance transmission that created the possibility of competition in long distance. It was followed by technological breakthroughs in transmission through satellite and optical fiber. The breakup of AT&T crystallized the recognition that competition was possible in long distance, while the local market remained a natural monopoly.

The biggest benefits to consumers during the past 15 years have come from the long-distance market, which during this period was transformed from a monopoly to an effectively competitive market. However, consumers often do not reap the full benefits of cost reductions and competition because of an antiquated regulatory framework that, ironically, was supposed to protect consumers from monopolistic abuses but instead protects the monopolistic market structure.

Competition in long distance has been a great success. The market share (in minutes of use) of AT&T fell from almost 100 percent to 53 percent by the end of 1996, and is presently significantly below 50 percent. Since the MFJ, the number of competitors in the long-distance market has increased dramatically. In the period up to 1996, there were four large facilities-based competitors: AT&T, MCI-WorldCom, Sprint, and Frontier.5 In the period after 1996, a number of new large facilities-based competitors entered, including Qwest, Level 3, and Williams. There are also a large number of "resellers" that buy wholesale service from the facilities-based long-distance carriers and sell to consumers. For example, there are currently about 500 resellers competing in the California interexchange market, providing very strong evidence of the ease of entry into this market. At least 20 new firms have entered the California market each year since 1984, and the typical California consumer can choose from at least 150 long-distance companies. Exhibit 1 shows the dramatic decrease in the market share of AT&T in long distance up until 1998, after which the declining trend has continued.

Prices of long-distance phone calls have decreased dramatically. The average revenue per minute of AT&T's switched services fell by 62 percent between 1984 and 1996, and AT&T was declared "non-dominant" in the long-distance market by the FCC in 1995.6 Most economists agree that the long-distance market is presently effectively competitive.
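The cited 62 percent decline can be restated as an average annual rate of price decrease. This is a back-of-the-envelope check; only the 62 percent figure and the two endpoint years come from the text.

```python
# Average annual rate of change implied by a 62 percent drop in
# AT&T's average revenue per minute between 1984 and 1996.
# Only the 62 percent figure and the two years come from the text.

years = 1996 - 1984            # 12-year span
remaining_share = 1 - 0.62     # 38 percent of the 1984 price remains

# Solve (1 + r) ** years = remaining_share for the annual rate r
annual_rate = remaining_share ** (1 / years) - 1

print(f"{annual_rate:.1%} per year")   # roughly -7.7% per year
```

In other words, the 62 percent cumulative reduction corresponds to prices falling by roughly 7.7 percent in each year of the period, compounded.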
Exhibit 2 shows the declining average revenue per minute for AT&T and the average revenue per minute net of access charges.

The local telephone companies that came out of the Bell System (i.e., the RBOCs) actively petitioned the U.S. Congress to be allowed to enter the long-distance market, from which they were excluded by the MFJ. The MFJ barred RBOCs from participation in long distance because of the anticompetitive consequences this would have for competition in long distance. The anticompetitive effects would arise from the RBOCs' control of essential "bottleneck" inputs for long-distance services, such as terminating access of phone calls to customers who live in the local companies' service areas. The RBOCs enjoyed monopoly franchises. A long-distance phone call is carried by the local telephone companies of the place where it originates and the place where it terminates, and only in its long-distance part by a long-distance company. Thus, "originating access" and "terminating access" are provided by local exchange carriers to long-distance companies and are essential bottleneck inputs for long-distance service.

Exhibit 1. AT&T's Market Share of Interstate Minutes

Exhibit 2. Average Revenue per Minute of AT&T Switched Services

Origination and termination of calls are extremely lucrative services.7 Access has an average cost (in most locations) of $0.002 per minute, while its regulated prices vary; the national average in 2001 was $0.0169 per minute. Such pricing implies a profit rate of 745 percent.8 Access charge reform is therefore one of the key demands of the pro-competitive forces in the current deregulation process.

The great success of competition in long distance allowed the U.S. Congress to appear "balanced" in the Telecommunications Act of 1996 by establishing competition in local telephony while allowing RBOCs into long distance after they meet certain conditions. However, the transition of local markets to effective competition will not be as easy or as quick as it was in the long-distance markets, because of the nature of the product and the associated economics. Many telecommunications companies are presently trying to be in as many markets as possible so that they can bundle the various products. Companies believe that consumers are willing to pay more for bundled services for which they receive a single bill. Bundling also discourages consumers from migrating to competitors, who may not offer the complete collection of services, so consumer "churn" is expected to be reduced.

Entry in Local Services as Envisioned by the 1996 Act

Currently, the "last mile" of the telecommunications network that is closest to the consumer (the "local loop") remains a bottleneck controlled by a local exchange carrier (LEC). In 1996, the RBOCs (i.e., Ameritech, Bell Atlantic, BellSouth, SBC, and US West) had 89 percent of the telephone access lines nationwide. Most of the remaining lines belonged to GTE and independent franchise holders. Basic local service provided by LECs is not considered particularly profitable.
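The 745 percent profit rate on access follows directly from the two per-minute figures quoted earlier; a quick sketch of the arithmetic (the cost and price are the only inputs taken from the text):

```python
# Profit rate implied by the access-charge figures in the text:
# an average cost of $0.002 per minute versus a 2001 national
# average regulated price of $0.0169 per minute.

cost_per_min = 0.002      # average cost (most locations), $/minute
price_per_min = 0.0169    # national average regulated price, 2001, $/minute

# Profit rate = profit over cost, expressed as a percentage
profit_rate_pct = (price_per_min - cost_per_min) / cost_per_min * 100

print(f"{profit_rate_pct:.0f}%")   # 745%
```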
However, in addition to providing access to long-distance companies, LECs also provide lucrative "custom local area signaling services" (CLASS), such as call waiting, conference calling, and automatic number identification. The Telecommunications Act of 1996 boldly attempted to introduce competition in this last bottleneck and, until competition takes hold, to imitate competition in the local exchange. To facilitate entry into the local exchange, the 1996 Act introduces two novel ways of entry in addition to entry through the installation of an entrant's own facilities. The first way allows entry into the retailing part of the telecommunications business by requiring incumbent local exchange carriers (ILECs) to sell at wholesale prices to entrants any retail service that they offer. Such entry is essentially limited to the retailing part of the market.
The second and most significant novel way of entry introduced by the 1996 Act is through the leasing of unbundled network elements from incumbents. In particular, the 1996 Act requires that ILECs (1) unbundle their networks and (2) offer network components (unbundled network elements [UNEs]) for lease to entrants "at cost plus reasonable profit."9 Thus, the 1996 Act envisions the telecommunications network as a decentralized network of interconnected networks. Many firms, including the large interexchange carriers AT&T and MCI WorldCom, attempted to enter the market through "arbitration" agreements with ILECs under the supervision of State Regulatory Commissions, according to the procedure outlined by the 1996 Act. The arbitration process proved to be extremely long and difficult, with continuous legal obstacles and appeals raised by the ILECs. To date (October 2002), over six years after the signing of the 1996 Act by President Clinton, entry in the local exchange has been small. In the latest statistics collected by the FCC,10 as of June 30, 2001, entrant competitive local exchange carriers (CLECs) provided 17.3 million (about 9.0 percent) of the approximately 192 million nationwide local telephone lines. The majority (55 percent) of these lines were provided to business customers. Approximately one third of CLEC lines are provided over the CLECs' own facilities. Of the remainder, the share supplied through total service resale of ILEC services declined to 23 percent of all CLEC lines at the end of June 2001, while the share provisioned over acquired UNE loops grew to 44 percent.

Entry of RBOCs into Long-Distance Service

The 1996 Act allows for entry of RBOCs into long distance once a list of requirements has been met and the petitioner has proved that its proposal is in the public interest.
These requirements can be met only when the market for local telecommunications services becomes sufficiently competitive. If the local market is not competitive when an incumbent LEC monopolist enters long distance, the LEC can leverage its monopoly power to disadvantage its long-distance rivals by increasing their costs in various ways and by discriminating against them in its pricing. In that case, the ILEC would control the price of a required input (switched access) to long-distance service while also competing for long-distance customers. Under these circumstances, an ILEC can implement a vertical price squeeze on its long-distance competitors, whereby the price-to-cost ratio of long-distance competitors is squeezed so that they are driven out of business.11
In allowing entry of local exchange carriers into the long-distance market, the 1996 Act tries not to endanger the competition that has developed in long distance through premature entry of RBOCs. However, on this issue the 1996 Act's provisions guarding against premature entry are insufficient. Hence, to guard against the anticompetitive consequences of premature RBOC entry into long distance, a deeper analysis of the consequences of such entry for competition and for consumer and social welfare is needed. Currently, RBOCs have been approved in 15 states for in-region provision of long-distance services. As of October 2002, the approved, pending, rejected, and withdrawn applications are summarized in Exhibit 3.12

THE IMPACT OF WIRELESS AND OF CABLE TELEVISION

During the past 20 years there has been a tremendous (and generally unanticipated) expansion of the mobile phone market. Still, this very significant growth has been limited by relatively high prices resulting from (1) the prevention of entry of more than two competitors in each metropolitan area, and (2) the standard billing arrangement that imposes a fee on the cellular customer for receiving (as well as initiating) calls. However, during the past six years the FCC has auctioned parts of the electromagnetic spectrum that enable the transmission of personal communication services (PCS) signals.13 The auctioned spectrum will be able to support up to five additional carriers in the major metropolitan markets.14 Although the PCS spectrum band is different from traditional cellular bands, PCS is predicted to be a low-cost, high-quality mobile alternative to traditional phone service. Other wireless services may chip away at the ILEC markets, especially in high-capacity access services.15 The increase in the number of competitors has already created very significant decreases in the prices of mobile phone services.
By its nature, PCS is positioned between fixed local service and traditional wireless (cellular) service. Presently, there is a very significant price difference between the two services. Priced between the two, PCS first drew consumers from traditional cellular service in large cities, and has a chance to become a serious threat to fixed local service. Some PCS providers already offer data transmission services priced not far from fixed broadband.

Industry analysts have been predicting the impending entry of cable television into telephony for many years. Despite numerous trials, such entry into traditional telecommunications services has not fully materialized, for a number of reasons. First, to provide telephone service, cable television providers need to upgrade their networks from analog to digital. Second, they need to add switching. Third, most of the cable
Exhibit 3. Status of Long-Distance Applications by RBOCs in October 2002

State(s)                            Filed by    Status      Date Filed   Date Resolved
CO, ID, IA, MT, NE, ND, UT, WA, WY  Qwest       Pending     09/30/02     Due by 12/27/02
CA                                  SBC         Pending     09/20/02     Due by 12/19/02
FL, TN                              BellSouth   Pending     09/20/02     Due by 12/19/02
VA                                  Verizon     Approved    08/01/02     10/30/02
MT, UT, WA, WY                      Qwest       Withdrawn   07/12/02     09/10/02
NH, DE                              Verizon     Approved    06/27/02     09/25/02
AL, KY, MS, NC, SC                  BellSouth   Approved    06/20/02     09/18/02
CO, ID, IA, NE, ND                  Qwest       Withdrawn   06/13/02     09/10/02
NJ                                  Verizon     Approved    03/26/02     06/24/02
ME                                  Verizon     Approved    03/21/02     06/19/02
GA, LA                              BellSouth   Approved    02/14/02     05/15/02
VT                                  Verizon     Approved    01/17/02     04/17/02
NJ                                  Verizon     Withdrawn   12/20/01     03/20/02
RI                                  Verizon     Approved    11/26/01     02/24/02
GA, LA                              BellSouth   Withdrawn   10/02/01     12/20/01
AR, MO                              SBC         Approved    08/20/01     11/16/01
PA                                  Verizon     Approved    06/21/01     09/19/01
CT                                  Verizon     Approved    04/23/01     07/20/01
MO                                  SBC         Withdrawn   04/04/01     06/07/01
MA                                  Verizon     Approved    01/16/01     04/16/01
KS, OK                              SBC         Approved    10/26/00     01/22/01
MA                                  Verizon     Withdrawn   09/22/00     12/18/00
TX                                  SBC         Approved    04/05/00     06/30/00
TX                                  SBC         Withdrawn   01/10/00     04/05/00
NY                                  Verizon     Approved    09/29/99     12/22/99
LA                                  BellSouth   Denied      07/09/98     10/13/98
LA                                  BellSouth   Denied      11/06/97     02/04/98
SC                                  BellSouth   Denied      09/30/97     12/24/97
MI                                  Ameritech   Denied      05/21/97     08/19/97
OK                                  SBC         Denied      04/11/97     06/26/97
MI                                  Ameritech   Withdrawn   01/02/97     02/11/97
industry has taken on a high debt load and has difficulty making the required investments in the short run. When it is able to provide switching on a large scale, cable television will have a significant advantage over regular telephone lines: cable TV lines that reach the home have a significantly higher bandwidth capacity than regular twisted-pair lines. Thus, it is possible to offer a number of "telephone lines" over the cable TV wire, as well as broadband (high-bandwidth) access to the World Wide Web. A key reason for AT&T's acquisition of cable companies was the provision of telephone services through cable. Upgrades to provide telephony proved to be more expensive and much slower than expected, and it is uncertain whether Comcast/AT&T will continue the upgrade of cable lines to telephony. The announcement by AT&T of the provision of telephony through cable and the entry of independent DSL providers prompted incumbent LECs to market their DSL services aggressively, because it was generally accepted
that customers would not switch easily from DSL to broadband cable and vice versa. As the threat of AT&T and independent DSL providers diminished, ILECs scaled back their DSL campaigns. At the end of 2001, broadband connections were 63 percent cable, 36 percent DSL, and 1 percent wireless.

THE CURRENT WAVE OF MERGERS

Legal challenges have derailed the implementation process of the 1996 Act and have significantly increased the uncertainty in the telecommunications sector. Long-distance companies have been unable to enter the local exchange markets by leasing unbundled network elements (UNEs) because the arbitration process that started in April 1996 resulted in long delays in the setting of final prices. Given the uncertainty of the various legal proceedings, and without final resolution on the issues of nonrecurring costs and the electronic interface for switching local service customers across carriers, entry into the local exchange through leasing of unbundled network elements has been minimal. Moreover, entry into the retailing part of the business through total service resale has also been minimal because the wholesale discounts have been small.

In the absence of entry into the local exchange market as envisioned by the 1996 Act, the major long-distance companies have been buying companies that give them some access to the local market. MCI merged with WorldCom, which had just merged with Brooks Fiber and MFS, both of which owned some infrastructure in local exchange markets. MCI-WorldCom focused on the Internet and the business long-distance market.16 WorldCom then proposed a merger with Sprint. The merger was stopped by both the United States Department of Justice (DoJ) and the Competition Commission of the European Union (EU). The DoJ had reservations about potential dominance of the merged company in the market for global telecommunications services.
The EU had objections about potential dominance of the Internet backbone by the merged company.17 In June 2002, WorldCom filed for Chapter 11 bankruptcy protection after a series of revelations about accounting irregularities; as of October 2002, the full effects of these events on the future of WorldCom and the entire industry are still open.

AT&T unveiled an ambitious strategy for reaching consumers' homes using cable TV wires for the "last mile." With this purpose in mind, AT&T acquired TCI, which owned local exchange infrastructure reaching business customers, and promised to convert TCI's cable access into an interactive broadband, voice, and data telephone link to residences. AT&T also entered into an agreement with TimeWarner to use
its cable connection in a way similar to that of TCI. In April 1999, AT&T outbid Comcast and acquired MediaOne, the cable spin-off of US West. TCI cable reached 35 percent of U.S. households; together with TimeWarner and MediaOne, AT&T could reach a bit more than 50 percent of U.S. households. Without access to UNEs to reach all residential customers, AT&T had to find another way to reach the remaining U.S. households. The provision of telephony, Internet access, broadband, data, and two-way video services exclusively over cable lines in the "last mile" requires significant technical advances, significant conversion of the present cable networks, and an investment of at least $5 billion (and some say $30 billion) just for the conversion of the cable network to two-way switched services. Moreover, there is some inherent uncertainty in such a conversion, which has not been successful in the past. Thus, it was an expensive and uncertain proposition for AT&T but, at the same time, one of the few remaining options for entry into the local exchange.

Facing tremendous pressure from financial markets, which tended to undervalue AT&T by looking at it only as a long-distance company, AT&T decided on a voluntary breakup into a wireless unit, a cable TV unit, and a long-distance and local service company that retained the name AT&T and the symbol "T" on the NYSE. The cable part of AT&T was merged with Comcast, and the full breakup should be almost finished by the end of 2002. In a complicated financial transaction, AOL/TimeWarner plans to divest the part of it that AT&T controls to AT&T/Comcast.

Meanwhile, Pacific Bell was acquired by SBC, and NYNEX by Bell Atlantic, despite antitrust objections, in an attempt by the RBOCs to maximize their foothold, looking forward to the time when they would be allowed to provide long-distance service.
SBC bought Southern New England Telephone (SNET), one of the few companies that, as an independent (not part of AT&T at divestiture), was not bound by MFJ restrictions and had already entered long distance. Bell Atlantic merged with GTE to form Verizon, and SBC bought Ameritech. US West merged with Qwest, a new long-distance service provider. Thus, the eight large local exchange carriers of 1984 (seven RBOCs and GTE) have been reduced to only four: Verizon, BellSouth, SBC, and Qwest. The smallest, BellSouth, already feels the pressure, and it has been widely reported to be in merger/acquisition talks with a number of parties. Recently, BellSouth announced a pact with Qwest to sell Qwest's long-distance service once BellSouth is allowed to sell long-distance service.

A crucial cross-media merger occurred with the acquisition of TimeWarner by AOL at the height of AOL's stock price. The merger was approved with the requirement that AOL/TimeWarner would allow independent ISPs to access its cable monopoly for broadband services. Synergies and new joint
products failed to materialize at AOL/TimeWarner, and there is wide speculation that AOL will be divested.

The present crisis in telecommunications arose out of an incorrect prediction of the speed of expansion of the Internet. It was widely believed that the Internet would grow at 400 percent per year in terms of bits carried. In retrospect, it is clear that for the years 2000 and 2001 only 100 percent annual growth was realized. Of course, it was difficult to pin down the growth rate in the early stages of an exponential network expansion: the Internet was growing at 400 percent per year when the predictions were made. However, the rate of growth slowed with respect to the number of new hosts connected. And because no new "killer application" requiring a lot of bandwidth was unveiled, the rate of growth in bits transferred also slowed. This is despite the very fast growth of peer-to-peer (P2P) file transfers, mainly of songs in MP3 format, popularized by Napster and still going strong even after Napster was practically closed down.

Based on the optimistic prediction of Internet growth, there was tremendous investment in Internet transport and routing capacity. Moreover, because capital markets were very liberal in providing funds, a number of companies invested in and deployed more telecommunications equipment than would have been prudent given their then-current market shares. This was done for strategic reasons, essentially in an attempt to gain market share during the rapid expansion of the Internet. Once the growth prediction was revised downward, the immediate effect was a significant reduction in orders for and investment in optical fiber, switching, and router equipment. Service companies are waiting for higher utilization rates of their existing capacity as the Internet expands. There is presently a temporary overcapacity of the Internet in the United States.
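The gap between forecast and realized growth compounds quickly, which is why the overbuild was so severe. A small illustration: only the 400 percent and 100 percent rates come from the text; the assumption that capacity was provisioned to match the forecast is mine.

```python
# Compound effect of overestimating Internet traffic growth.
# Forecast: 400% growth per year (x5 each year); realized: 100% (x2).
# Assumes (hypothetically) capacity was built to match the forecast.

forecast_factor_per_year = 5   # 400 percent annual growth
realized_factor_per_year = 2   # 100 percent annual growth
years = 2                      # 2000 and 2001

expected_traffic = forecast_factor_per_year ** years   # 25x baseline
actual_traffic = realized_factor_per_year ** years     # 4x baseline

overcapacity_ratio = expected_traffic / actual_traffic
print(overcapacity_ratio)   # 6.25 -> capacity built for ~6x the realized load
```

Under these assumptions, two years of forecast error alone leave the network provisioned for roughly six times the traffic that actually materialized.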
And, as mentioned, because it is easy to run the Internet backbone as a long-distance network, the huge overcapacity of the Internet backbone, combined with new investment and overcapacity in traditional long-distance networks, has led to very significant pressure on, and reductions of, long-distance prices.

THE COMING WORLD

The intent of the 1996 Act was to promote competition and the public interest. It will be a significant failure of the U.S. political, legal, and regulatory systems if the interests of entrenched monopolists, rather than the public interest as expressed by the U.S. Congress, dictate the future of the U.S. telecommunications sector. The market structure of the telecommunications sector two years ahead will depend crucially on the resolution of the LECs' legal challenges to the 1996 Telecommunications Act and its final implementation.18 We have already seen a series of mergers leading to the re-monopolization of local telecommunications. As the combinations of former RBOCs are approved state by state for long distance, we see a reconstitution of the old AT&T monopoly (without the present AT&T). We also see significant integration in the cable industry, as AT&T found it extremely difficult to enter the local exchange market.

Whatever the outcomes of the legal battles, the existence of arbitrage and the intensification of competition necessitate cost-based pricing and will create tremendous pressure on traditional regulated prices that are not cost-based. Prices that are not based on cost will prove unsustainable. This includes the access charges that LECs charge to IXCs (long-distance providers), which have to become cost-based if the vision of a competitive network of interconnected networks is to be realized. Computers are likely to play a larger role as telephone appliances and in running intermediate-sized networks that will compete with LECs and intensify the arbitrage among IXCs. Telephony based on the Internet Protocol (IP) will become the norm. Firms that have significant market share in computer interfaces, such as Microsoft, are likely to play a significant role in telephony.19 Hardware manufacturers that make switches and local networks, especially firms such as Cisco, Intel, and 3Com, will play a much more central role in telephony. Internet telephony (voice, data, and broadband) is expected to grow quickly. Finally, the author expects that, slowly but steadily, telecommunications will drift away from the technical standards of Signaling System 7 (SS7) established by AT&T before its breakup.
As different methods of transmission and switching take a foothold, and as new interfaces become available, wars over technical standards are very likely.20 This will further transform telecommunications from the traditional quiet landscape of regulated utilities to the mad-dash world of software and computer manufacturing. This change will create significant business opportunities for entrants and impose significant challenges on traditional telecommunications carriers.

Notes

1. Critical points in this development were the emergence of Gopher in 1991 and Mosaic in 1993.
2. In November 1997, Deutsche Telekom (DT) introduced Internet long-distance service within Germany. To compensate for the lower quality of voice transmission, DT offers Internet long distance at one fifth its regular long-distance rates. Internet telephony is the most important challenge to the telecommunications sector.
3. A large enough bandwidth increases the probability that fewer packets will be lost. And, if each packet is sent a number of times, it is much more likely that each packet will arrive at the destination at least once, and the quality of the phone call will not deteriorate. Thus, the provider can adjust the quality level of an Internet call by guaranteeing a lot of bandwidth for the transmission, and by sending the packets more than once. This implies that the quality of an Internet call is variable and can be adjusted upward using the variables mentioned. Thus, high-quality voice telephony is immediately feasible in intranets because intranets can guarantee a sustained, sufficient bandwidth. There is no impediment to the quality level of a phone call that is picked from the PSTN at the local switch, carried over long distance on leased lines, and redelivered to the PSTN at the destination local switch, using the recently introduced Lucent switches. For Internet calls that originate or terminate in computers, the method of resending packets can be used on the Internet to increase the quality of the phone call, as long as there is sufficient bandwidth between the computer and the local telephone company switch. The fidelity of calls can also be enhanced by manipulation of the sound frequencies. This can be done, for example, through the elemedia series of products by Lucent.
4. The telecommunications sector is regulated both by the federal government, through the Federal Communications Commission (FCC), and by all states, typically through a Public Utilities Commission (PUC) or Public Service Commission. Usually, a PUC also regulates electricity companies.
5. Frontier was formerly Rochester Telephone.
6. See Federal Communications Commission (1995).
7. These fees are the single largest cost item in the ledgers of AT&T.
8. Termination pricing varies. In 2001, the FCC reported access charges ranging from $0.011 to $0.0369 per minute.
9. The FCC and State Regulatory Commissions have interpreted these words to mean Total Element Long Run Incremental Cost (TELRIC), which is the forward-looking, long-run (minimized) economic cost of an unbundled element, including the competitive return on capital.
10. See "Trends in Telephone Service," Federal Communications Commission, May 2002, Tables 9.1-9.6.
11. Avoiding a vertical price squeeze of long-distance competitors, such as MCI, was a key rationale for the 1984 breakup of AT&T into the long-distance company that kept the AT&T name and the seven RBOCs that remained monopolists in local service. See Economides (1998, 1999).
12. Source: FCC.
13. Despite this and other auctions of spectrum, the FCC does not have a coherent policy of efficient allocation of the electromagnetic spectrum. For example, the FCC recently gave (for free) huge chunks of electromagnetic spectrum to existing TV stations so that they can provide high-definition television (HDTV). Some of the recipients have publicly stated that they intend to use the spectrum to broadcast regular TV channels and information services rather than HDTV.
14. We do not expect to see five entrants in all markets because laxity in the financial requirements of bidders has resulted in default by some of the high bidders in the PCS auctions, prompting a significant dispute regarding their financial and other obligations. A striking example is the collapse and bankruptcy of all three main bidders in the C-band auction. In this auction, set aside for small companies, the government required that companies not have high revenues and assets, and allowed them to pay only 10 percent of the winning bid price immediately and then pay in 5 percent installments over time. All the large winning bidders were organized with the single purpose of winning the licenses and hardly had enough money to pay the required 10 percent; they all expected to raise the remaining money through IPOs. Given that they were bidding with other people's money, spectrum bids skyrocketed in the C-band auction. Even worse, prices per megahertz were very much lower at the D-band auction, which occurred before legal hurdles were cleared and before C-band winners could attempt their IPOs. As a result, no large C-band winner made a successful IPO, and they all declared bankruptcy. The FCC took back the licenses but would not reimburse the 10 percent deposits. Thus, a long series of legal battles ensued, with the end result that most of the C-band spectrum is still unused, resulting in fewer competitors in most markets.
15. The so-called "wireless loop" proposes to bypass the ILEC's cabling with much less outlay for equipment. Trials are underway to test certain portions of the radio spectrum that were originally set aside for other applications: MMDS for "wireless cable" and LMDS as "cellular television."
16. The MCI-WorldCom merger was challenged by the European Union Competition Commission, the Department of Justice, and GTE on the grounds that the merged company would have a large market share of the Internet "backbone" and could sequentially target, degrade interconnection with, and kill its backbone rivals. Despite (1) the lack of an economically meaningful definition of the Internet "backbone," (2) the fact that MCI was unlikely to have such an incentive because any degradation would also hurt its own customers, and (3) the fact that such degradation seemed unlikely to be feasible, the Competition Commission of the European Union ordered MCI to divest all its Internet business, including its retail business, where it was never alleged that the merging companies had any monopoly power. MCI's Internet business was sold to Cable & Wireless, the MCI-WorldCom merger was finalized, and WorldCom has used its UUNET subsidiary to spearhead its way in the Internet.
17. The merged company proposed to divest Sprint's backbone. Thus, the objections of the EU were based on WorldCom's market share of about 35 percent in the Internet backbone market. The EU used a very peculiar theory predicting that "tipping" toward monopoly would occur starting from this market share because WorldCom would introduce incompatibilities into Internet transmission and drive all competitors out of the market. Time proved that none of these concerns were credible.
18. In one of the major challenges, GTE and a number of RBOCs appealed (among others) the FCC (1996) rules on pricing guidelines to the 8th Circuit. The plaintiffs won the appeal; the FCC appealed to the Supreme Court, which ruled on January 25, 1999. The plaintiffs claimed (among others) that (1) the FCC's rules on the definition of unbundled network elements were flawed; (2) the FCC "default prices" for leasing of UNEs were so low that they amounted to confiscation of ILEC property; and (3) the FCC's "pick-and-choose" rule, which allows a carrier to demand access to any individual interconnection, service, or network element arrangement on the same terms and conditions the LEC has given anyone else in an approved local competition entry agreement, without having to accept the agreement's other provisions, would deter "voluntarily negotiated agreements." The Supreme Court ruled in favor of the FCC on all these points, thereby eliminating a major challenge to the implementation of the Act.
19. Microsoft owns a share of WebTV, has made investments in Qwest and AT&T, and has broadband agreements with a number of domestic and foreign local exchange carriers, but does not seem to plan to control a telecommunications company.
20. A significant failure of the FCC has been its absence in defining technical standards and promoting compatibility. Even when the FCC had a unique opportunity to define such standards in PCS telephony (because it could define the terms while it auctioned spectrum), it allowed a number of incompatible standards to coexist for PCS service. This leads directly to a weakening of competition and to higher prices, because wireless PCS consumers have to buy a new appliance to migrate across providers.
References

1. Crandall, Robert W., After the Breakup: U.S. Telecommunications in a More Competitive Era, Brookings Institution, Washington, D.C., 1991.
2. Economides, Nicholas, "The Economics of Networks," International Journal of Industrial Organization, 14(2), 675–699, 1996.
3. Economides, Nicholas, "The Incentive for Non-Price Discrimination by an Input Monopolist," International Journal of Industrial Organization, 16, 271–284, March 1998.
4. Economides, Nicholas, "The Telecommunications Act of 1996 and Its Impact," Japan and the World Economy, 11(4), 455–483, 1999.
5. Economides, Nicholas, Giuseppe Lopomo, and Glenn Woroch, "Regulatory Pricing Policies to Neutralize Network Dominance," Industrial and Corporate Change, 5(4), 1013–1028, 1996.
6. Federal Communications Commission, "In the Matter of Motion of AT&T Corp. to be Reclassified as a Non-Dominant Carrier," CC Docket No. 95-427, Order, Adopted October 12, 1995.
7. Federal Communications Commission, "First Report and Order," CC Docket No. 96-98, CC Docket No. 95-185, Adopted August 8, 1996.
DESIGNING AND OPERATING AN ENTERPRISE INFRASTRUCTURE

8. Federal Communications Commission, "Trends in Telephone Service," May 2002.
9. Gregg, Billy Jack, "A Survey of Unbundled Network Element Prices in the United States," Ohio State University, 2001.
10. Hubbard, R.G. and Lehr, W.H., Improving Local Exchange Competition: Regulatory Crossroads, mimeo, February 1998.
11. Mitchell, Bridger and Vogelsang, Ingo, Telecommunications Pricing: Theory and Practice, Cambridge University Press, 1991.
12. Noll, Roger G. and Owen, Bruce, "The Anti-Competitive Uses of Regulation: United States v. AT&T," in John E. Kwoka and Lawrence J. White, Eds., The Antitrust Revolution, Harper Collins, New York, 1989, 290–337.
13. Technology Futures, "Residential Broadband Forecasts," 2002.
Chapter 18
Information Everywhere

Peter Tarasewich
Merrill Warkentin
The growing capabilities of wireless technologies have created the potential for a "totally connected" society, one in which every individual and every device is (or can be) linked to all others without boundaries of place or time. The potential benefits of such an environment to individuals, workgroups, and organizations are enormous. But the implementation of such an environment will be difficult, and will require not just the ubiquity of computer technology, but also the transparent availability of data and the seamless integration of the networks that tie together data and devices. This chapter presents a model for a pervasive information environment that is independent of technological change, and presents reasons why such a vision is necessary for long-term organizational success.

Welcome to the unwired world. Communication is no longer restricted by wires or physical boundaries. People can communicate with each other at any time of the day or night. Users can access their data and systems from anywhere in the world. Devices can communicate with other devices or systems without the need for human intervention. At least in theory.

The technology that currently exists, although still limited in certain regards (such as bandwidth and battery life), enables the creation of the devices and networks necessary for these wireless communications. But data is also part of the communication process for much of what we do, and is essential for communication between devices. Unfortunately, no matter how much effort is invested in creating a seamless technology network, the efforts will be in vain unless an equally seamless data structure is implemented along with it. Organizations must ensure that their information systems are structured to allow manipulation of data on or between the myriad of once and future technologies.
Only then will there exist a truly pervasive environment that serves the dynamic information requirements of individuals, organizations, and society as a whole.

0-8493-1595-6/03/$0.00+$1.50 © 2003 by CRC Press LLC
The idea of pervasive computing is not new. Discussions related to pervasive computing have been oriented toward technological factors. Proponents predict that networked computers (devices ranging from handhelds to information appliances to embedded sensors) will be diffused throughout everything we do and all systems with which we interact. Computing devices will be used to access information on anything and everything. Much attention is being paid to mobile devices and applications, but not to the information systems to which they belong. The ultimate goal should not be to make computing pervasive, but to make information available whenever and wherever it is needed, and to allow for complete flexibility.

In his article entitled "Pervasive Information Systems," Birnbaum1 presented a vision for pervasive computing and technology, but did not address pervasive information systems in their fullest sense. There is often a strong technological orientation in the way information systems are defined. But while an information system is implemented using technology, it should not be bound or driven by it. Furthermore, in addition to hardware and software considerations, an information system comprises people, procedures, and data functioning within an environment, all of which dictate important considerations for design and use.

Ubiquitous computing also plays a key role in the vision of a successful pervasive information environment. With ubiquitous computing, computers will be everywhere; they will be so prevalent that they will blend into the background. Technology will be embedded in many everyday devices, such as automobiles, home appliances, and even building materials.2 Sensors will be able to constantly transmit data from anywhere, while global positioning systems and proximity detection technologies will enable the tracking of devices as they move.
In some instances, these embedded sensors will automatically respond to changes in their environment, a concept known as proactive computing.3 However, certain researchers have recognized the overemphasis on the latest gadgetry and gizmos.4 The information appliances and other applications that we see appearing as part of pervasive computing may be nothing more than solutions in search of problems. There is a call for emphasizing data management in pervasive computing, and for ensuring that information is available in the spatial or temporal context that will be most useful. Devices are simply used to accept or display data; the infrastructure that ties everything together is the most important concept.

Mark Bregman, general manager of pervasive computing at IBM, presented an insightful strategic viewpoint of pervasive information systems during a recent conference keynote address.5 He noted that wireless technologies must be seen as an extension of E-business, but that successful companies need to implement them seamlessly, through a smarter infrastructure. Bregman emphasized that once this has occurred, "people will move quickly past the 'I have to get a device to access the information' mode of thought to the 'I have to access the information' mode of thought."

The Oxygen Project6 advocates an information marketplace model, an environment of freely exchanged information and information services. In addition to this is the idea of "doing more by doing less," which is based on three concepts: (1) bringing technologies into people's lives (rather than the opposite), (2) using technologies to increase productivity and usability, and (3) ensuring that everyone benefits from these gains. Oxygen calls for general-purpose communication devices that can take the place of items such as televisions, pagers, radios, and telephones. These devices are software-configurable, and can change communication modes on demand (e.g., from a cell phone to an FM radio). Oxygen also calls for more powerful devices to be integrated into the environment (e.g., buildings, vehicles). These devices can control other kinds of devices and appliances, such as sensors, controllers, and fax machines. Oxygen links these two classes of communication devices through a special network to allow secure collaboration and worldwide connectivity.

Other research also recognizes the limitations and conflicting viewpoints of the current mobile computing environment. One vision for pervasive computing is based on three principles7:

1. A device is a portal into an application/data space, not a user-managed repository of custom software.
2. Applications should be viewed as a set of user-specified tasks, not as programs that run on certain devices.
3. The computing environment in general should be perceived as an extension of the user's surroundings, not as a virtual environment for storing and running software.
A similar vision, part of the Portolano Project, calls for a shift away from technology-driven, general-purpose devices toward ubiquitous devices that exist to meet specific user needs.8 It calls for data-centric networks that run flexible, horizontally based services that can interface with different devices.

WHY DO WE NEED A PERVASIVE ENVIRONMENT?

A truly pervasive information environment stands to greatly benefit individuals, workgroups, organizations, and society as a whole. The wireless applications that exist today, such as messaging, finding a restaurant, getting a stock quote, or checking the status of a plane flight, are all device and provider dependent. They are implemented on systems that are separate from other Internet or Web applications. Data is replicated from Web servers and stored separately for use with wireless applications. Ultimately, a single source of data is needed to allow for any type of situation that could arise. Data must be logically separated from tasks, applications, and technologies. What follows is a series of scenarios meant to illustrate different types of situations that require flexible access to data resources to be effective.

Workgroup Meeting

A competitor's introduction of a new product hits a company by surprise, and the company hastily calls a meeting to brainstorm a retaliatory product. Representatives from several of the company's key suppliers are able to join a cross-functional team at company headquarters. Several other suppliers are participating virtually in the meeting from offices, hotels, and other locations. The company's marketing executive is currently on a cruise ship in the Mediterranean, but under the circumstances agrees to spend some of her vacation time participating in the meeting from her cabin. Devices used by the participants during the meeting range from PCs to laptops to handhelds, all attached to the Internet, some by wires and some not.

Manufacturing calls attention to specifications, electronic drawings, and materials analyses taken from a reverse-engineering session conducted on the competitor's product. Finance runs an estimate of the materials and labor costs of the product. A supplier says that, based on an analysis he just performed, the performance of such a product could be increased tenfold with very little additional cost. Another supplier adds that some of the other materials could also be substituted with lower-cost alternatives. A new product specification is worked out and approved by the group. Based on current inventories of all the suppliers, production capacity of the company, and estimates of initial demand, a product introduction date is set. The executive notifies her global marketing staff to prepare for the new product launch.
Emergency

Sensors embedded in the paint of a house detect a rapid increase in temperature and notify Emergency Services that a fire has developed. Firefighting and medical crews are immediately dispatched to the scene. As the vehicles travel, the drivers are shown when and where to turn, by voice and by a visual guidance system. Other people in the fire truck receive information on wireless display tablets concerning the construction of the house. A map of the neighborhood and a blueprint of the burning house are also shown. The fire captain begins to develop a strategy on how to attack the fire, still minutes before arriving on the scene. Based on information received about the occupants of the house, he plans a search and rescue. The strategy details appear on the tablets of the other crew members.
When arriving on the scene, a final assessment of the situation is conducted, and the fire is attacked. Ninety seconds later, someone is brought out of the burning house, unconscious but still breathing. Monitoring devices are placed on the patient. The emergency crew, along with a nearby hospital, begins to diagnose the patient's condition. The identity of the person, who was already thought to be living in the house, is confirmed through a fingerprint scan. The patient's medical history, fed through a decision-support system, aids the process of determining a treatment. Doctors at the hospital confirm the course of action and request that the patient be transported to the hospital.

Business Traveler

A businesswoman is traveling from New York City to Moscow to close a business deal with a large multinational organization. On the way to the airport, her handheld device gives her updated gate information and departure times for her flight. A few minutes after her plane reaches cruising altitude, the phone at her seat rings. It is one of the marketing directors at her organization's Boston office. He asks her to review some updates that he has made to the proposal that she will be presenting tomorrow afternoon. The proposal appears on the screen embedded in the seat in front of her. She pulls a keyboard out of an armrest and makes modifications to two paragraphs. The director sees the changes as she makes them, and says he agrees with them. He also mentions that he received a call from Moscow five minutes ago, and that he agreed to push the meeting time back an hour. She checks the calendar on her handheld device and notices that it has already been updated to reflect the new time. Her device is also displaying a map showing an alternative route to the meeting, which is being recommended because of problems detected by the traffic sensor network in the city streets.
Logistics Management

The flexible manufacturing system in a Seattle plant automatically orders components for next month's production from the Asian supplier. The containers report their position to the shipper and to the plant at regular intervals: as the containerized cargo is loaded onto ships by cranes at the cargo terminal, as it crosses the ocean, and as it is offloaded onto flatbed rail cars or trucks. Because tens of thousands of containers on thousands of ships report their location to the global logistics network linking all supply-chain partners, the entire system is continually rationalized to reduce waste and delays. New time slots are automatically calculated for scheduling ground transportation and for activities pursuant to anticipated delivery times.
Overall Pervasive Environment

The underlying activities and problems addressed in each of these scenarios are not uncommon. But what is unique is the use of a pervasive information environment that allows for efficient real-time solutions to each problem. The technology in each situation is independent of the task and can utilize any required data sets. This requires a model that divorces data management from the applications that use data and the devices that interface with the applications. The following section describes such a model.

MODELING THE PERVASIVE INFORMATION ENVIRONMENT

Organizations must maintain a focus on IT as an enabler for information systems. Technology will provide the ability to communicate over the Internet anytime from anywhere. The form of input and output is determined (and limited) by the I/O device used (e.g., cell phone, PDA, embedded sensor, robot, laptop, personal computer). Wireless and mobile devices may always have limitations in terms of screen size, interactivity, and communication speed relative to physically connected devices.9 Well-designed processes are a necessity and will ensure that information systems function well, no matter what technology is used to access them or what the user needs from them.

System flexibility is required to support users in this new, fast-paced, dynamic global environment. The user should never have to concentrate on the task of using technology or applications, but only on the task at hand. Access to data and systems should be straightforward and intuitive, without regard to configuring devices, reformatting output, selecting protocols, or switching to alternate data sets. The governing principle for establishing a pervasive information environment is "access to one set of information anytime from anywhere through any device." To accomplish this goal, information systems should be structured according to the four-layer model presented in Exhibit 1.
This model is an extension of similar multi-layer models that have been used for database and network systems. The user interacts only with the highest layer: the presentation layer. In this layer, the devices utilized to access, send, and receive information must be selected and implemented. Bandwidth limitations will affect the use of most of these devices, especially wireless ones, for the foreseeable future.

The application logic layer comprises the applications that process and manipulate data and information for use on devices. Application logic can reside on a device itself or elsewhere within the information system. Applications may also provide context for converting raw data into organizational knowledge.

Data access concerns the actual retrieval of stored data or information, and the execution of basic processing, if required. Database queries fall into this level. At the data access level, the use of various wireless and other protocols facilitates the smooth transfer of data from disparate sources.
Presentation
Application Logic
Data Access
Data
Exhibit 1. Pervasive Information Systems Architecture
The lowest layer is the data storage layer, which forms the foundation for all information networks. Its focus is how and where to store data in an unambiguous, secure form. Data integrity is critical, so the underlying data structures (whether object-oriented, relational, or otherwise) must be carefully conceived and implemented. One crucial issue is data format; incompatible data formats can prevent flexibility in the applications that access the data. We must maintain an environment in which data from heterogeneous and distributed sources, including embedded technologies, can be readily combined and delivered. This may require the use of middleware to achieve compatibility between the data sources.

In addition to issues affecting individual layers, there are some that concern the interaction between layers. Seamless transfer between presentation environments will be necessary when moving from location to location and from device to device. Security issues will affect all four layers. Organizations must also ensure that their network communications are not intercepted in this "anytime, anywhere" environment, and that data privacy requirements are met. These challenges will continue to shape the process of designing and implementing truly useful pervasive systems in the future.

CREATING THE PERVASIVE INFORMATION ENVIRONMENT

To accomplish the goal of a flexible, pervasive information environment, one set of data must be maintained. Yet this data must be available to any application and any device, now or in the future. This information vision will become an imperative for all organizations as they struggle to compete in a rapidly changing information environment. There may be many different ways to implement the model described in this chapter, and these paths may be difficult to execute. The remainder of this chapter, while not proposing or advocating a specific implementation design, describes research that might support a solution.
Also presented are general concerns that must be addressed to achieve this environment.
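To make the four-layer separation concrete, the following is a minimal, hypothetical sketch in Python. It is not from the chapter: all class names, fields, and the flight-status example are illustrative assumptions. The point it demonstrates is the governing principle above: one stored data set, with device-specific rendering confined to the presentation layer.

```python
# Hypothetical sketch of the four-layer pervasive information model.
# Layer names follow Exhibit 1; everything else is illustrative.

class DataStorage:
    """Lowest layer: one authoritative copy of the data plus its metadata."""
    def __init__(self):
        self._records = {"flight_102": {"gate": "B7", "departs": "18:45"}}
        self._metadata = {"flight_102": {"timezone": "EST", "source": "airline feed"}}

    def read(self, key):
        return self._records[key], self._metadata[key]

class DataAccess:
    """Retrieves stored data; shields upper layers from storage details."""
    def __init__(self, storage):
        self._storage = storage

    def query(self, key):
        data, meta = self._storage.read(key)
        return {**data, "_meta": meta}

class ApplicationLogic:
    """Turns raw data into information by applying context (metadata)."""
    def __init__(self, access):
        self._access = access

    def flight_status(self, flight):
        rec = self._access.query(flight)
        return f"Flight departs {rec['departs']} {rec['_meta']['timezone']} from gate {rec['gate']}"

class Presentation:
    """Adapts one piece of information to whatever device is at hand."""
    def render(self, info, device):
        # A small screen gets a truncated view; the data beneath is unchanged.
        return info[:20] + "..." if device == "phone" else info

# One data set, any device: only the rendering differs.
app = ApplicationLogic(DataAccess(DataStorage()))
info = app.flight_status("flight_102")
print(Presentation().render(info, "pc"))
print(Presentation().render(info, "phone"))
```

Note how the metadata travels alongside the raw data rather than being baked into it; this mirrors the chapter's later point that context must be stored and maintained independently.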
The Oxygen Project described above may offer the best implementation design to allow for the vision of a pervasive information environment articulated in this chapter. Although it appears to allow a great amount of flexibility, it is still very device dependent. Access and collaboration revolve around specific devices, although they seem to be multi-purpose and can communicate with other devices. Furthermore, Oxygen does not address the concerns of the data infrastructure necessary to support such a pervasive information environment.

Evolving technologies may provide the foundation for implementing the individual layers of the pervasive information model. The presentation layer may benefit from several of these technologies.7 One is the concept of a distinct user-interface management system, which clearly separates the user interface from application logic. Web technologies such as Java applets, which are device-independent programs, may also be useful. The relatively newer service technologies (also known as E-services), which allow applications to dynamically request needed software services from outside sources, are also promising.

One solution for the application layer and its interface with the presentation layer has already been proposed, using an application model based on a vision for pervasive computing described previously in this chapter.7 The model is currently being implemented as part of continuing research, based on the assumptions of a services-based architecture wherein users will interact with applications through the devices that are most readily available at the time. The Portolano Project also supports this application model, with research focused on practical applications such as a pervasive calendar system and on infrastructure issues such as service discovery.

The most difficult layers to address are the data and data access layers.
For the sake of application independence and data integrity, the data representation format must be independent of the presentation format. Yet raw data is rarely useful without the lens of organizational and environmental context. Although raw data itself is not useful for most purposes, and must be converted into information to provide meaning and value to decision makers and processes, the data representation format is critical. Logical data storage designs must be based on sound principles of data integrity. And the metadata, or context, must also be stored and communicated to the applications or devices that use it. Further complications arise if the context changes, or if the data can be used in multiple contexts. Data and metadata models must be carefully designed to ensure that all systems and individuals using the data will be presented with meaningful and accurate data within the context of its use. Raw data must always be available, and metadata must be maintained independently to ensure that it can be altered as the environment and context change. Thus, metadata really forms a sublayer on top of the lowest layer, converting the data into information as it is acquired by higher layers.

In terms of inter-layer issues, there are several technologies that might play a part in creating a secure, seamless environment. Agents could be used to follow a user from place to place and from task to task. Research conducted with software agents shows that they can be used to facilitate the movement of users from one device to another.10 Data must also be protected as it travels to where it is needed. Risks of unauthorized data interception can be reduced through frequency hopping and encryption, and biometric technologies can be used to verify the identity of people who are accessing data through different devices.

CONCLUSION

One corporate mantra from the early 1990s was "the network IS the computer." The concept of pervasive computing depends on pervasive access to all data. If we migrate toward a pervasive computing environment, we also need ubiquitous, secure access to all data from any device, wired or wireless. This ubiquitous network means that we will not only have access to our data and information from everywhere, but that it will reside on the network, where it will be secure and accessible from multiple locations and with multiple devices. This valuable corporate asset should be accessible from anywhere, and transparent data availability needs to be maintained. However, data should not be stored everywhere and anywhere. By following the multilayer approach described above, organizations can ensure that data will be available to users and automated systems, whenever and wherever they are, in a secure manner with embedded contextual metadata. The days of individuals "going to the data" (walking to computers tethered to the network) instead of having the data come to the individual are numbered. The world is quickly becoming a place of full-time and ubiquitous connectivity.
Technology is marching forward, but these advances often precede the development of corporate and public policies and procedures for integrating and managing them. Decisions concerning system architectures necessary for maximum leverage must be carefully evaluated and executed. It is also necessary to evaluate the extent of the benefits that this connectivity will have on individuals and organizations. With the current technology-driven model, the benefits of wireless technology will be limited. A more comprehensive perspective must drive the process, one that enables seamless integration of networks. The pervasive information model presented in this chapter can foster a flexible environment that can meet all users' long-term dynamic needs and ensure that the real potential of the technologies can be achieved.
References

1. Birnbaum, J., "Pervasive Information Systems," Communications of the ACM, 40(2), 40–41, February 1997.
2. Estrin, D., Govindan, R., and Heidemann, J., "Embedding the Internet," Communications of the ACM, 43(5), 38–41, May 2000.
3. Tennenhouse, D., "Proactive Computing," Communications of the ACM, 43(5), 43–50, May 2000.
4. Huang, A.C. et al., "Pervasive Computing: What Is It Good For?," Proceedings of the ACM International Workshop on Data Engineering for Wireless and Mobile Access, 1999, 84–91.
5. O'Hanton, C., "IBM: Wireless E-Commerce Next Revolution," Computer Reseller News, June 13, 2000, http://crn.com/dailies/digest/dailyarchives.asp?ArticleID=17468.
6. Dertouzos, M., "The Oxygen Project: The Future of Computing," Scientific American, 281(2), 52–55, August 1999.
7. Banavar, G. et al., "Challenges: An Application Model for Pervasive Computing," Proceedings of the Sixth Annual ACM/IEEE International Conference on Mobile Computing and Networking, 2000, 266–274.
8. Esler, M. et al., "Next Century Challenges: Data-Centric Networking for Invisible Computing. The Portolano Project at the University of Washington," Proceedings of the Fifth Annual ACM/IEEE International Conference on Mobile Computing and Networking, 1999, 256–262.
9. Tarasewich, P. and Warkentin, M., "Issues in Wireless E-Commerce," ACM SIGecom Exchanges, 1(1), 19–23, Summer 2000.
10. Kotz, D. et al., "AGENT TCL: Targeting the Needs of Mobile Computers," IEEE Internet Computing, 1(4), 58–67, July 1997.
Chapter 19
Designing and Provisioning an Enterprise Network

Haywood M. Gelman
The purpose of this chapter is to provide an overview of managing the task of designing and provisioning an enterprise network infrastructure. This chapter examines all the major aspects of architecting a network, including analyzing customer needs, determining a budget, managing vendor relationships, writing and awarding an RFP, and finally implementing the newly designed network. The chapter does not analyze this process at a highly technical level, but rather from the perspective of a manager who has been tasked with leading a network design project for an enterprise. At the end of this chapter, there is a reading list of selected titles readers can use for more detailed study.

Although it would be tempting to claim that network design is a simple process, just the opposite is often the case. It requires attention to detail, adherence to well-established design principles, creative management of human resources, and strict financial management.

KNOW WHAT YOU HAVE NOW

If you already have an existing network, performing an equipment inventory is a critical first step. If you are designing a completely new installation, you can skip to the next section, "Planning Is Half the Battle." Determining the inventory of existing networking resources is a painstaking but vitally important process that includes analyzing everything from copper and fiber-optic cabling plants, to the network interface cards installed in desktop computer systems, to the routers, hubs, and switches that connect your computing resources and peripherals. This stage is essential to success because later steps depend on its being completed accurately. Essential tasks during this stage include the following:
• Hire a network cabling contractor that can work on fiber optics as well as copper to assist in assessing the current state of your cable plant. Have the contractor perform the following:
  — Check all your data jacks with a continuity test. This test ensures that faulty cables do not cause data transmission errors.
  — Check all data jacks to make sure they are all Category 5 or better. This will ensure that whatever networking equipment you put in place, your cable plant will operate with it seamlessly.
  — Make sure that none of the jacks have been installed using what is called "split-pair wiring." This is when a single Category 5 data cable, which has four pairs of wires, is split so that two separate computers hang off the same cable. This cost-saving cabling technique should be avoided at all costs because of the potential interference problems it can create.
  — Test, count, and label all copper and fiber-optic cabling. The contractor should produce a report that details, by data closet, what both cable plants look like, with a diagram of all of the cabling. (We discuss this report at a more detailed level later in this chapter.)
• Hire a networking consultant to do a baseline bandwidth utilization analysis and a network infrastructure analysis. Work that you should consider having the consultant perform for you includes the following:
  — Perform a baseline bandwidth utilization analysis to determine the utilization of your current network. Also require the networking consultant to return after your network installation is complete to perform a post-installation analysis. This will allow you to compare bandwidth utilization after your installation to what it was before. Require the networking consultant to produce spreadsheets and charts detailing the analysis, as well as a statement on how the data was gathered and later analyzed.
Having the original data will allow you to generate your own charts and what-if scenarios at a later date, as well as have a historical record of the work that was performed. In addition, be sure the networking consultant uses the same data-gathering techniques both pre- and post-install to allow for a true like-for-like comparison.
— Inventory routers, switches, and hubs, including the make, model, serial number, memory, software and hardware revisions, and available interfaces for each device type. It will be important to know the status of your routers, hubs, and switches later on as you move into the design phase. Gathering this information now will allow you to decide later which network equipment you will keep and which equipment you will replace.
Designing and Provisioning an Enterprise Network

— Inventory desktop computers and servers, including the make, model, serial number, operating system revision, applications, memory and hard disk capacity, available hard disk space, system name, IP address, address assignment method (static or dynamic), and speed of available network cards. Gathering this information now will help you figure out later whether you need to replace network cards, upgrade operating systems, install larger hard drives, or install more memory when you get to that stage of the design process.
— Inventory network printers, including make, model, serial number, printer name, IP address, address assignment method (static or dynamic), and network interface card speed and type. This information will be important later on when it comes time to decide whether or not the new applications you will place on your network can still make use of existing printers.
— Remote access services: have the consultant gather information on the access method (modems or virtual private networking). If you are currently using modems, it would also make sense at this time to have the consultant work with your telephony manager to perform a cost/benefit analysis on the potential cost savings of a VPN solution.

Part of your budget will be determined by the results you get from your network infrastructure analysis. For example, you might find that the copper wiring in your walls is all Category 5 cabling, capable of supporting gigabit speeds over copper, but that your patch panels are only Category 3 (phone-spec) wiring. Your cabling contractor, who can replace your patch panels at a much lower cost than replacing the entire infrastructure, can easily remedy this problem. Also, your later decision on how your migration will take place will be aided by knowing how much available fiber-optic cabling you have in your building or campus.
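The device attributes listed above lend themselves to a simple, structured inventory record. Below is a minimal sketch in Python; the class and field names are illustrative assumptions, not from the chapter, and a real inventory would carry the remaining fields (serial number, operating system revision, and so on):

```python
from dataclasses import dataclass

# Hypothetical inventory record; field names are illustrative and track a
# subset of the attributes the chapter says to collect for each device.
@dataclass
class NetworkDevice:
    make: str
    model: str
    system_name: str
    ip_address: str
    addressing: str        # "static" or "dynamic"
    nic_speed_mbps: int    # speed of the installed network card

def needs_nic_upgrade(device: NetworkDevice, minimum_mbps: int = 100) -> bool:
    """Flag devices whose network cards fall below the target speed,
    supporting the keep-or-replace decision made later in the design."""
    return device.nic_speed_mbps < minimum_mbps

# Example: an aging desktop with a 10 Mbps card is flagged for replacement.
old_desktop = NetworkDevice("Acme", "D100", "desk-025", "10.1.1.25", "dynamic", 10)
```

Collecting records in this form makes the later keep-or-replace decisions a simple query over the inventory rather than a fresh survey.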
Because the latest networking technologies use fiber-optic cabling to transport data, whether you migrate to your new network over a short period of time (often called a "flash-cut") or with a phased approach will be determined by the status of available fiber-optic pairs in your cable plant. If you have enough fiber-optic cabling, you will be able to build your new network in parallel with your old one, interconnect the old network with the new network, and then migrate over time. If you do not have sufficient fiber, or the budget to add more fiber, you will likely need to use a flash-cut migration strategy. Finally, knowing what networking equipment and desktop connectivity you have now will allow you to make some critical path decisions early on as to which direction you will take for your desktop connectivity requirements. If your equipment is no longer on maintenance, you are running an antiquated network technology (such as Token Ring), or your network equipment has been end-of-lifed by the manufacturer, you will need to give serious thought to replacing it at this time so you can ensure supportability for your network over the long run.

PLANNING IS HALF THE BATTLE

The best-built network starts with a strong foundation rooted in requirements that reflect the needs of the users it will serve. The first step in finding out what the network will be used for is to ask the right people the right questions, and to ask many of them. Some essential questions to ask include:
• What applications will be utilized on the network? File and print services? Web services? Client/server? Terminal applications?
• What types of traffic flows can be expected? Are traffic flows going to be bursty in nature (as with file and print services), or are they going to be streaming in nature (as with voice and video)? Is there going to be a combination of the two (as is the case with most networks these days)? Will you plan for the future need for voice and video, or will you be implementing it right away?
• How many users does the network need to support? How many IP addresses are necessary now, and how large will the network likely be in the next five to seven years?
• Is IP telephony a requirement for this network? If so, what features do we need when the system goes live? Do we need just basic services, such as call answer, call park, call transfer, and voice mail, or do we also need meet-me call conferencing and unified messaging?
• Are any new applications planned for the near or not-too-distant future? Are these applications streaming in nature (such as videoconferencing), or are they batch-oriented (like Citrix)? Will the new applications be used in conjunction with a Web-browser front end, or will the application use its own client? It should be noted that most major enterprise resource planning (ERP) and database applications (e.g., Oracle and SAP) can use both.
• Will any of the applications on the new network benefit from quality of service (QoS) congestion management techniques? All networks experience congestion at one point or another; the real differentiator is how well your potential vendors' equipment handles that congestion. Technologies such as streaming video (in the form of videoconferencing) and IP telephony require you to deploy some form of QoS to prioritize the delivery of their traffic over less-critical data.

Asking the right questions (and carefully documenting the responses for later use) can save a lot of aggravation and money. This is especially true if it means that you do not have to go back to the design drawing board after vendors have already been selected and equipment purchased.
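The sizing question above (how many IP addresses now, and how large in five to seven years) can be answered with a simple compound-growth projection. A hedged sketch; the helper name is my own, and the growth rate and host counts are hypothetical inputs:

```python
import math

def smallest_prefix(current_hosts: int, annual_growth: float, years: int) -> int:
    """Project today's host count forward at a compound annual growth rate,
    then return the smallest IPv4 subnet (largest prefix length) that still
    fits: a /n subnet holds 2**(32 - n) - 2 usable addresses."""
    projected = math.ceil(current_hosts * (1 + annual_growth) ** years)
    for prefix in range(30, 0, -1):
        if 2 ** (32 - prefix) - 2 >= projected:
            return prefix
    raise ValueError("network too large for a single IPv4 subnet")

# Example: 400 hosts today, growing 15 percent a year, planned 7 years out.
planning_prefix = smallest_prefix(400, 0.15, 7)
```

Sizing the address space against the projected figure, rather than today's count, avoids renumbering mid-life.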
KNOW YOUR BUDGET

Once you have done your network infrastructure analysis and your needs analysis, it is time to take a hard look at the numbers. Be realistic about how much you can afford to accomplish at a time. Knowing what you can afford to spend at this stage of the game will save you from serious irritation and lost credibility with vendors later on. If you have a limited budget, your users will be better served if you break up your network build-out over two or three budget cycles rather than try to build a network all at once that does not meet the objectives uncovered during the needs analysis. Although it might seem like putting the cart before the horse, this is how the capital expenditures game is often played: before you can spend any money, you have to know how much money you can spend. The following are some suggestions to keep you from making any serious mistakes:
• Start looking at the potential networking vendor products that will be considered for your network implementation. Select vendors whose products have the features you are looking for, but try not to get hung up on price at this point. What you need are product literature and list prices from the vendors who will be likely providers of bids for your project. You should be able to get product literature and list prices directly from the manufacturers' Web sites, without committing yourself to a reseller at this early stage. In your analysis of vendor products, you will want to consider networking products that meet the feature, port density, and performance requirements uncovered during your needs analysis.
• Learn how each vendor implements its equipment in a typical network scenario. You do not have to be a networking expert to do this effectively. At its simplest level, network design is an interconnection game: the design starts at the point where you plug in the equipment.
All vendors should be able to provide you with recommended network designs, and you will quickly notice some very obvious similarities. The specifics of this part of the design are outside the scope of this chapter, but you should spend some time learning the basics of how each vendor implements a typical network. If you are inexperienced in network design, this will be a great opportunity for you to get involved at a very basic level. This is important because when resellers come in to pitch a network design, you should at least be familiar with the basic network design models for each of the manufacturers they represent.
• Decide whether to buy chassis-based switches or fixed-configuration LAN switches. Although this may seem like a design decision as opposed to a budget decision, the cost differential between chassis-based switches and fixed-configuration switches can be as high as 30 percent. This is often enough of a cost difference to force this decision to be made during the budgeting process. This decision will be based on your available budget, your port-count requirements, your performance requirements, and your plan to manage the network. Some things to evaluate when making this decision include:
— Fixed-configuration switches have a fixed number of ports, whereas chassis-based switches have slots where line cards are inserted. If your budget allows, having chassis-based switches in a large network will mean fewer switches to manage. To add more ports to a chassis-based switch, simply insert additional line cards in the chassis. To add more ports to a network that uses fixed-configuration switches, you will need to add additional switches. However, in a small environment, a chassis-based switch would be inappropriate unless your performance requirements and budget dictate it.
— In general, chassis-based switches have better performance, features, and functionality, while fixed-configuration switches typically have a subset of the features and functionality of their chassis-based counterparts, yet are more cost-effective. Many companies opt to get the best of both worlds: put chassis-based switches in the core of the network where high performance is dictated, and put fixed-configuration switches in the data closet where cost-effectiveness is warranted.

VENDOR MANAGEMENT: GENERAL STRATEGY

Perhaps the most difficult, yet essential, part of this process is vendor management. There are vendors of all shapes and sizes with which you will have to deal.
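Returning to the chassis-versus-fixed decision in the budget discussion above: the cost differential is easiest to see on a cost-per-port basis. A minimal sketch; all prices are hypothetical illustrations, not vendor quotes:

```python
def cost_per_port(total_cost: float, ports: int) -> float:
    """Total switch cost divided by usable ports: a simple comparison metric."""
    return total_cost / ports

# Hypothetical figures: one chassis plus line cards versus five 48-port
# fixed-configuration switches, both providing 240 ports.
chassis_total, fixed_total, ports = 52_000, 40_000, 240
chassis_per_port = cost_per_port(chassis_total, ports)
fixed_per_port = cost_per_port(fixed_total, ports)
premium = (chassis_total - fixed_total) / fixed_total  # fractional chassis premium
```

With these illustrative numbers the chassis option carries a 30 percent premium, which is the order of magnitude that can push this decision into the budgeting process.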
This includes everyone from networking manufacturers (some of whom sell direct to end customers and some who do not), to resellers (also known as "value-added resellers"; the value-add comes from the installation services they will sell you with the equipment you choose), to network engineering and security companies that sell only services, not equipment. To keep with the focus of this chapter, we will discuss networking manufacturers; their products include routers, switches, firewalls, and VPN (virtual private network) devices. First, you will need to decide which vendors will be considered for the RFP (Request for Proposal) that needs to be written. Specifics of the RFP process are detailed later in this chapter. To determine which vendors will be considered, you need to do a great deal of research. Your past experiences can be an excellent guide: if you have worked with a particular vendor's products in the past, using what you know about that vendor will give you a better foundation from which to work with new vendors. Some things you will want to consider when evaluating networking vendors include:
• Innovativeness in product design and functionality. There are many networking vendors in the market, any number of whom might make the grade as a potential vendor for your network build-out. However, you should give extra weight to those helping to push the technology envelope; by adding new functionality over time, they protect your investment and make the products you are buying now work even better in the years to come. Some ways to determine whether or not your vendors are technology innovators include:
— How much money does a manufacturer spend on research and development compared to revenue? This is an important statistic because it tells you how much money the company is making, and how much of that money is going right back into improving its products.
— Does the manufacturer hold prominent seats on IEEE standards committees? This is important because it will give you a strong indication of the relative importance that each manufacturer places on the development of emerging standards. Certainly, being a technology innovator is important, but the development of new technology standards makes all vendors better. This is in stark contrast to the vendor that develops innovative technology that cannot be ported to other vendors, thus locking you into that vendor's proprietary technology.
— Has the manufacturer won industry recognition for product innovation in the form of awards? Numerous industry magazines have annual product awards in various categories, ranging from LAN switching, to quality-of-service congestion management, to routing, to security. Product awards will give you a good view of how the industry rates your vendors' levels of innovation relative to one another.
• Customer support. This should be one of the most important considerations when deciding which vendors to consider for your network implementation.
This is important because having the most innovative products at the best price means nothing if you cannot fix the equipment when it breaks. Once the equipment is implemented, you will be responsible for managing the network on a day-to-day basis, and you will need to call the manufacturer when it comes time to get help. Some ways to determine how good a manufacturer's customer support is include:
— Industry recognition. It is important to know what the rest of the industry thinks about the customer support for the vendors on your list. Generally, this recognition comes in the form of industry awards from customer-support trade magazines.
— After-hours support. You need to be very careful when analyzing vendors, and especially cognizant of the after-hours support they provide. This is important because network problems are not, unfortunately, limited to business hours. Without 24/7 coverage, you may have to wait for an engineer to be paged who will call you back, or worse yet, be directed to a Web site for support. Each vendor should have a 24-hour, 7-day-a-week coverage model. This is especially important if your company has international locations and operates in multiple time zones.
— First-call closure rates. This is a measure of how often a problem is solved on the first call to the customer support center. It will give you a feel for the general skill level of the first-level support team and its ability to close calls quickly and effectively.
— Vendor's Web site content. It is important to know how much and what kind of information is available through the vendor's Web site. Make sure that common items such as manuals, configuration guides, software, forms for obtaining software licenses, and the ability to both open and track trouble tickets through the Web site are all easily accessible. Make sure that the vendor's Web site is easy to navigate, because you will probably be using it extensively.
— One phone number to call for support. If you plan to purchase more than just LAN switches (including routers, firewalls, and VPN devices), it is important to know whether you will be able to call the same phone number to receive support on all devices or will need to call a different number for each product. Oftentimes, large manufacturers grow their product portfolios by purchasing companies instead of developing the products themselves.
When this happens, sometimes the product's original support team is not integrated into the overall support system for all of that vendor's products, and support thus becomes inconsistent. Be wary of vendors that give you different contact numbers to receive support on different products from the same vendor.
— Local engineering support. It is important to find out about the local presence of the sales and engineering support staff in your area. It is often necessary for customer support to coordinate a visit to your facility with an engineer or salesperson from the local office where the network was sold. Vendors with limited penetration in certain areas of the country may have large areas covered by only a few account teams. This could mean trouble if you need an engineer onsite to help you solve a problem, especially if that engineer lives two states away and cannot come to see you for several days due to conflicts with other commitments. A vendor with a large local presence will likely be able to send an engineer right away because he or she lives and works nearby.
• Pay particular attention to the availability of industry training and technical certifications for the products you have chosen. Several manufacturers these days offer training and technical certifications that you can take to help you learn how to better support your own network. The goal of all manufacturers should be to help their customers learn as much as possible about the equipment they are selling so customers will be able to fix many problems on their own. Be cautious of any vendor whose products do not have third-party training available from certified training partners. Technical certifications are also a plus because they offer your staff industry recognition for their efforts in learning the vendor's equipment, and give them transferable skills they can take with them as they make their way through the networking industry.
• Be cognizant of the financial standing of the vendors you have chosen. Try not to pay too much attention to marketing reports. Ask your vendors for financial statements and hold them accountable for the financial concessions they agree to provide as part of the agreement you sign with them.
• Ask the vendors for large-scale references on networks based on the products that will be placed in your network. Make certain that the references they give you are relevant. The references should be from networks that use the same products that are in your proposed network, and should be of comparable size to your network. In addition, you should ask for references of customers who will allow an on-site visit to inspect the installation, or at least will agree to a phone interview on the proposed products. Either of these will provide you with the opportunity to have an open discussion with someone else who has built a network based on the same products you plan to install.
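The evaluation criteria above (innovation, customer support, training, financial standing, and references) can be combined into a simple weighted scorecard for comparing vendors side by side. The weights and scores below are illustrative assumptions; choose your own to match your priorities:

```python
# Hypothetical weights over the evaluation criteria above; they sum to 1.
WEIGHTS = {
    "innovation": 0.20,
    "customer_support": 0.30,
    "training": 0.15,
    "financial_standing": 0.15,
    "references": 0.20,
}

def weighted_score(scores: dict) -> float:
    """Combine per-criterion scores (say, 1-5) into one weighted total."""
    return sum(WEIGHTS[criterion] * value for criterion, value in scores.items())

# Example scorecard for one vendor, scored 1-5 on each criterion.
vendor_a = weighted_score({"innovation": 5, "customer_support": 4,
                           "training": 3, "financial_standing": 4, "references": 4})
```

Documenting the scores alongside your interview notes also gives you a defensible paper trail when losing vendors ask why they were cut.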
Next, you need to decide whether you want to use the best-of-breed approach (where you pick a different vendor for each product that you need) or a single-vendor approach (where you use one vendor that has all the products you need). This is an important consideration, especially if you are looking to implement more than just routers and switches. Several companies build routers and switches, but only a few also build other devices such as firewalls and VPN servers. Although many companies these days use a single-vendor approach, there is a simple calculation to help you determine what is best for you. The variables in this calculation include:
• The cost of managing multiple vendor platforms (Do you have enough staff?)
• The cost of maintenance agreements from multiple vendors (Do you have enough money?)
• The time required to properly develop the skillsets needed for administering multiple vendors (Do you have enough time?)

If your calculations for a best-of-breed approach yield values that are too high for your environment, you should consider the alternative: a single-vendor approach. A single-vendor approach will, in many cases, save you staff, money, and time. A single-vendor environment will allow you to:
• Save staff by requiring fewer people to manage a smaller collection of equipment
• Save money by having one maintenance agreement covering all of the products in your network infrastructure
• Save time by only having to train your staff on one operating system to manage the equipment

VENDOR MANAGEMENT: COMPETITION

With all the vendors to choose from, you will most certainly be able to narrow the list to a selected few once you take into account your port requirements, performance and application needs, and your budget. When managing a list of vendors in a competitive environment, you will want to consider some of the following:
• When a network vendor makes a claim about either its equipment or that of a competitor, put that vendor to the task of proving it. Do not be concerned about insulting your potential vendors; they have come to expect their customers to question them.
• Be wary of vendors that make inflammatory comments about their competition without providing unbiased, substantive proof. The difficulty most vendors have is not in providing proof of their claims — it is providing proof that has not been compromised in some way.
• Look very carefully at the testing procedures that were employed, and be prepared to ask some tough questions. These questions should include:
— Are there any procedures that were employed during the testing process that would unfairly bias one vendor over another?
— Were features disabled that might create a false negative in the test results?
— Was production, publicly available software used during the testing, or was one-off, test-built code used?
— Did one vendor pay a third party to produce the test, unfairly biasing the results in its favor?

These are some of the questions you should be prepared to ask if you want to keep your vendors honest.
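The best-of-breed versus single-vendor calculation from the previous section sums the same three variables (platform management cost, maintenance cost, and the cost of the time to build skills) per vendor. A minimal sketch; all the annual figures are hypothetical illustrations:

```python
def annual_vendor_cost(platform_mgmt: float, maintenance: float, skills: float) -> float:
    """Sum the three named variables for one vendor relationship: staff cost
    to manage the platform, the maintenance agreement, and the cost of the
    time required to develop the needed skillsets."""
    return platform_mgmt + maintenance + skills

# Hypothetical annual figures for a three-vendor best-of-breed environment.
best_of_breed = sum(annual_vendor_cost(*costs) for costs in [
    (60_000, 25_000, 15_000),   # routing vendor
    (55_000, 20_000, 12_000),   # switching vendor
    (40_000, 18_000, 10_000),   # firewall/VPN vendor
])
# Hypothetical figures for covering the same products with a single vendor.
single_vendor = annual_vendor_cost(110_000, 45_000, 18_000)
prefer_single_vendor = single_vendor < best_of_breed
```

With these illustrative inputs the single-vendor approach wins; a different staffing picture can flip the comparison, which is exactly why it is framed as a calculation rather than a rule.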
VENDOR MANAGEMENT: PRODUCT INTEROPERABILITY

Sometimes it is not possible to replace a network with another vendor's equipment overnight, but it is necessary for equipment from one vendor to coexist on a network with equipment from another vendor until such time that a single-vendor network becomes possible. Circumstances that would require you to have a heterogeneous network include a limited budget, servers or printers whose addresses cannot easily be changed, or simply the sheer size of the network, which prohibits an overnight replacement. Whatever the reason, your new equipment needs to have proven interoperability with the equipment you already have. As part of your vendor selection process, you need to ensure interoperability. This includes:
• Require each vendor to produce a statement of adherence to standards-based frame tagging. Frame tagging (more commonly known as 802.1Q) allows two vendors' products to seamlessly coexist on a switched internetwork. Frames from one vendor's product are tagged in a standard format so that data generated by one vendor's equipment can be properly passed by another's.
• Require a statement of adherence to the IEEE 802.1D spanning-tree standard. Spanning tree is a link-layer (Layer 2) protocol designed to ensure a loop-free topology. Spanning tree uses BPDUs (Bridge Protocol Data Units) to sense when a loop has occurred, and automatically shuts off links between switches that would create problems.
• Require proof of interoperability. This consists of interoperability testing that should be conducted at your facility using network equipment that exists in your network alongside the equipment from the vendors you have chosen. The selected vendors should provide their equipment for the test, as well as an engineer to help with the testing.
You should also be able to provide the selected vendors with a piece of equipment from your network, so the interoperability testing will be accurate. If any of the vendors you have selected at this point do not meet even one of the above requirements, they should be excluded from any further consideration.

SECURITY CONSIDERATIONS

Security is a critical component of any network infrastructure. Particular care should be taken in today's world to make sure that all of the most important security threats are addressed. It is no longer sufficient to simply lock the front door of your network to protect against outside intrusions: it is necessary to know who is knocking on your door, and whether the knocks come from inside your network or outside.
Some things to consider when addressing network security issues with potential vendors include:
• Make sure each proposed product has the ability to implement some level of security at every point of ingress or egress on your network. This includes anything from access control lists on routers and switches to protect you from ingress traffic, to security measures that restrict port access on switches down to the MAC (Media Access Control) address of the network card. You might not implement every available security feature, but a missing feature should never be the reason why.
• Most security breaches in a networked environment are initiated from inside that network. Sometimes, this means that a disgruntled employee could be trying to gain retribution for perceived mistreatment on the part of the corporation. More often than not, it actually means that one of your computer systems has been compromised from the outside and is being used as a point of attack from the inside of your network. Inside attacks can take the form of denial-of-service (DoS) attacks (where a computer is programmed to send millions of packets of worthless data to servers on your network until the servers have to be shut down), viruses sent in e-mail or Web pages, or hidden applications that can later be exploited under remote control through holes in the computer operating system. Potential network vendors need to give you the ability to track individual computers by IP address or MAC address. This is important because in a switched internetwork, there are no shared segments where MAC addresses and IP addresses can easily be discovered. The vendor should have a user-locator tool that will allow you to easily input a MAC address or IP address and have the tool tell you the switch and port number to which the computer is attached.
• Make sure that every vendor has support for standard security access protocols such as 802.1X, a new and important protocol for port-level security. This standard, which uses EAP (Extensible Authentication Protocol)/LEAP (Lightweight Extensible Authentication Protocol) for user authentication, can be implemented on switch ports where desktop computers are attached to the network, or on wireless access points to allow laptop computers with wireless Ethernet cards to have secure access to your network. As long as each vendor has implemented 802.1X, you will be able to take advantage of this security feature when your infrastructure is ready to support the technology.
• Consider purchasing an intrusion detection system (IDS). Numerous vendors offer competing products that will allow you to analyze what are known as "hack signatures." A hack signature is not simply a rogue packet running untamed through your network; rather, it is a set of actions strung together in a certain order to form a pattern that can be recognized and acted upon. An IDS is typically connected to your network at the point of entry (behind your Internet router) and passively collects data as it goes in and out of that router. An IDS is designed to reassemble all the data it passively collects into its component applications, analyze it for any known hack signatures, and act upon what it finds. Actions can be as simple as notifying you when a hack has been attempted, or can take the form of an access list that is dynamically placed on your Internet router to block the attack at the point of ingress. An IDS can be a powerful tool in protecting your network and in helping to capture perpetrators, by employing auditing capabilities whose output can be given to law enforcement authorities. Make no mistake — no matter how young or old, whether for fun or profit, any individual who knowingly compromises your network is a criminal who should be punished to the full extent of the law. Become familiar with security terminology. Knowing the language of the trade will allow you to be conversant with those vendors you wish to evaluate for an IDS purchase. It will also help you better understand the full extent of the potential hacks that could be perpetrated, so you can make an informed decision as to what level of security you would feel comfortable implementing in your network.

WRITING A REQUEST FOR PROPOSAL (RFP)

Now that you have completed all of your analyses and have selected a list of vendors and products to consider for your new network, you will need to write a Request for Proposal (RFP). The RFP should address all the issues previously discussed in this chapter and take into account all your research. The RFP should require the selected vendors to produce the following information:
• An overview of the background of the vendor.
This can include a history of the vendor in the industry, the maturity of the proposed products, and a list of resources that can be publicly accessed for further research.
• A diagram of the new proposed network.
• A detailed description of how the new network will function.
• A statement of compliance with all of the features that the new network will need to support. This information will come from the needs analysis you performed at the onset of the project.
• A bill of materials detailing list prices and discounts for each item purchased. Be sure there are no hidden costs; the quote needs to include a detailed breakdown of all chargeable items, such as hardware and software, installation costs, and ongoing maintenance.
• A statement of compliance with 802.1Q interoperability.
• A statement of compliance with the 802.1X security authentication protocol.
• A detailed test plan to test interoperability with your existing network equipment.
• A detailed implementation plan for installing the new network. This implementation plan should take into account interconnecting to an existing network, migration plan recommendations, a timetable, and installation costs broken down by the hour. It is common to require that the vendor provide you with a not-to-exceed price for installation services. This gives the vendor the ultimate incentive to either put more engineers on the job to complete the implementation quickly, or lose money. Simply put, a not-to-exceed quote for installation services means that any time spent by the vendor past the agreed-upon cost limit is at the loss of the vendor.
• An overview from the vendor on customer support. Items that should be addressed include:
— Hours of operation
— The number and location of the vendor's customer support centers
— First-call closure rates
— Content of the vendor's Web site (manuals, configuration guides, software, access to forums, ability to open trouble tickets on the Web)
— Local engineering presence
— Any awards that the vendor has received for its customer support
— A statement from the vendor to commit resources for product training. This could be in the form of training credits at certified third-party training partners, or on-site training in your facility before, during, or after the installation is complete.
— A statement of financial standing. This should include a copy of the vendor's annual report and any other relevant financial data, such as bond rating, debt load, and available cash (in liquid assets and investments).
— A list of three to five reference accounts that can be contacted by phone and perhaps visited in person. At least two of these references should be local to your area.
The reference accounts should be of similar size, in the same industry as your company, and have implemented similar technology with a like design.
— A letter signed by the vendor, certifying the submission of the RFP. This can be a very simple letter that states who the vendor is and for what that vendor is submitting the RFP. This letter should be signed by a manager working for the vendor who is directly responsible for making good on any promises made to you in the RFP response, and by the sales team that will be proposing the network to you. This is important because it tells the vendor that you will not only hold that
Designing and Provisioning an Enterprise Network
vendor to any promises it makes in the RFP, but that you know exactly who to turn to if problems arise later on.
— A list of concessions from the vendor that it is willing to make in order to win your business. This list could include items such as future discounts for smaller purchases, ongoing training, a commitment to quarterly visits to assist with any open issues, a commitment to assist your staff in obtaining a level of industry certification on the vendor's equipment, and other items. Vendors can be very creative in this regard. It is possible for a deal to be won or lost based on the willingness of a vendor to make certain concessions.
— A date and time for when submissions must be complete. It is important to be firm on these time limits, as they will apply fairly to all vendors.
— A common format that all vendors need to use when submitting their responses to the RFP. This should include a requirement to structure the document using certain formatting rules, and a requirement to submit the RFP both on paper and electronically. It is common to provide each vendor with an electronic template to use when they write their responses. This will ensure a common format, and allow you to more easily compare responses later on. You should also require the vendors to provide you with a certain number of paper copies so you will not have to make copies yourself after the submissions are complete.
— A statement from you that says you reserve the right to continue negotiation on the overall pricing after the RFP responses have been submitted but before the project has been awarded to any vendor. This will give you the opportunity to fine-tune the quotes from the vendors once you get to the evaluation stage of the RFP process. Vendors can often meet every criterion to win an RFP, but be off on price.
If you feel that you can make the price work, given that all other criteria have been successfully met, you will want to allow yourself the opportunity to further negotiate later on.
— A defined means of acceptable communication throughout the RFP process. During the process of responding to an RFP, vendors will have questions. You need to determine what will be an acceptable means of communication with you that can be applied fairly to all vendors. Acceptable means of communication can be any of the following:
— Assign one person in your organization as a point of contact through which all communications during the RFP process will be funneled.
— Any questions that are posed to your internal contact person will be compiled and sent via electronic mail to all RFP participants, to
ensure fairness in the RFP process. The questions should be sent to all participants in a timely manner, as should your response.
— If you decide toward the end of the RFP process that your vendors need to have more time, grant the same extension to all participants.
— Define in advance how you will communicate your decision on the RFP. This could be via e-mail, U.S. mail, or by telephone or voice mail.
Most vendors are honest and will treat you and your RFP process with respect. However, some vendors are only concerned about closing the sale, and anything they can do to close the deal is, in their mind, fair game. There are things that you should do during the RFP process to keep the process fair and to protect your company from any semblance of impropriety:
• Make sure that you and anyone else involved in the decision process avoid any questionable contact with the vendors during the RFP. During the process, vendors will try many things to curry favor with you. These things include taking you to lunch or dinner, giving you and your staff shirts with the vendor logo on them, offering you side deals that are not part of the RFP, taking you to a sporting event or golfing, or even trying to outright bribe you. It is extremely important, once the RFP process has begun, that you avoid any and all contact with vendors outside the predefined communications guidelines set forth in the RFP. We live in a litigious society, and you will be opening up your company to potential liability if you accept improper gifts from a vendor or facilitate improper contact with a vendor during the RFP. The networking industry is very small, and rumors about actions such as accepting improper gifts get around. A vendor that is not selected could use this against you in a court proceeding, arguing that the process was compromised by your actions.
Financial damages could be exacted as a result, and the blame will fall squarely on your shoulders.
• Treat all vendors fairly. There is nothing that will gain you a better reputation than treating all vendors during an RFP fairly and with respect. Share all communications between vendors, give all vendors the same concessions, hold all vendors to the same timetables, and grant all extensions equally.

SELECTING A VENDOR
Once you have completed writing your RFP, submitted the RFP to your selected vendors, managed the submission process, and accepted all of the responses, it is time to select a vendor. There are many different ways to evaluate RFP responses, but there are a few things that should be included in your evaluation method and criteria, regardless of how you
make your final decision. Strongly consider having each vendor present its RFP in person in a meeting with your staff and all other decision makers. Sometimes, despite the best of intentions, even the best-written proposals are not interpreted as intended. Holding an RFP proposal meeting gives respondents an opportunity to have their proposals reviewed both in writing and in person. This way, any subtleties that did not come out in the written response can be pointed out in the meeting, and any questions responded to in an open format. You can assign a grade to each vendor's presentation and factor this in as an evaluation criterion when making your final decision.
Once all vendors have submitted their RFPs and presented their responses in person, you will need to evaluate the written proposals. To evaluate RFPs, you can use a scoring system similar to the way a teacher grades a test. Start with a total point score, and then go through each section of the original RFP template, assigning a point value to each section relative to its importance. (An item of greater significance to you, such as feature compliance, might be of higher value to you than the vendor overview; this means you would give feature compliance a higher point value.) Once all sections have been assigned a point value, review each RFP response section by section, and grade each vendor's response, with the best responses obtaining the total point value for each section and the worst responses receiving fewer points or no points at all. Once the grading is complete, tally all the scores to see which vendor had the best overall rating. There will likely be things that you will want to use in your decision process that will be more subjective than objective. The most important thing to keep in mind when evaluating RFPs is to apply all evaluation criteria to all vendors equally.
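The section-by-section grading described above is, in effect, a weighted-sum calculation, and it can be sketched in a few lines of code. The section names, weights, and vendor grades below are hypothetical placeholders; your own RFP template and priorities would supply the real ones.

```python
# Weighted scoring of RFP responses: each section of the RFP template gets a
# point value reflecting its importance, each vendor's response earns some
# fraction of those points, and the totals are tallied. All names and numbers
# here are illustrative, not taken from any actual RFP.

SECTION_WEIGHTS = {
    "feature_compliance": 30,   # of greater significance, so worth more points
    "implementation_plan": 20,
    "customer_support": 15,
    "pricing": 20,
    "vendor_overview": 15,      # of lesser significance, so worth fewer points
}

def score_vendor(grades: dict[str, float]) -> float:
    """grades maps section name -> fraction of the section's points earned (0.0-1.0)."""
    return sum(SECTION_WEIGHTS[s] * grades.get(s, 0.0) for s in SECTION_WEIGHTS)

def rank_vendors(responses: dict[str, dict[str, float]]) -> list[tuple[str, float]]:
    """Tally every vendor's total score and sort best-first."""
    totals = {vendor: score_vendor(g) for vendor, g in responses.items()}
    return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)

responses = {
    "Vendor A": {"feature_compliance": 1.0, "implementation_plan": 0.8,
                 "customer_support": 0.9, "pricing": 0.6, "vendor_overview": 0.7},
    "Vendor B": {"feature_compliance": 0.7, "implementation_plan": 0.9,
                 "customer_support": 0.8, "pricing": 1.0, "vendor_overview": 0.9},
}
ranking = rank_vendors(responses)
```

Note that the ranking is only as fair as the inputs: the weights must be fixed before any responses are read, and the same graders should score every vendor's response to a given section.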
With the scoring complete, you will need to lay all of the information in front of you and make your final evaluations. When analyzing all the data, try to keep in mind your original goals for this network design and implementation project. You want to build a new network that will meet the needs of your user community, that will allow for a secure, application-enabled infrastructure that will scale and grow as your needs change, and that will be completed on a budget. You should keep an open mind on price: sometimes, a vendor will meet every criterion but miss on price. This is why you included a statement in your RFP that gave you the option to further negotiate with all vendors on price after the RFP responses have been submitted. This will also give you the opportunity to eliminate price as a barrier to choosing one vendor over another. You may often find a particular vendor's story so compelling that you are willing to pay a premium to use its equipment.
Once your evaluations are complete, your negotiations have been settled, and your vendor selected, you will need to inform all participants of your decision. Make sure you communicate this decision by way of the method that was determined in advance in your RFP. You can expect the vendor that is awarded the RFP to be quite pleased, and those that lost to be quite unhappy or even upset. The RFP process is very time-consuming for both the author (you) and the participants. It is common to feel let down after a rejection, but you must keep your composure and understand that your vendors are also human. Most will be very professional, but every once in a while you will find a vendor that is less than professional. Simply make a mental note of it and move on.

CONCLUSION
The process of designing, building, and implementing an enterprise network is full of pitfalls and rewards. Using this chapter as a guide, you will be off to a good start in building your own network. This chapter has provided you with just an introduction to the process of managing a network design and provisioning project. In time, you will bring your own experience and the experience of others to bear as you proceed with this process yourself. There are few experiences in networking that will give you better access to more technologies in such a short period of time than managing a network design, provisioning, and implementation project. Be careful, deliberate, meticulous, and fair, and you will find that your efforts will be greatly rewarded in the end.

Recommended Readings
Perlman, R. (1999). Interconnections: Bridges, Routers, Switches, and Internetworking Protocols, 2nd edition. Reading, MA: Addison-Wesley.
Thomas, T., Freeland, E., Coker, M., and Stoddard, D. (2000). Designing Cisco Networks. New York: McGraw-Hill.
Oppenheimer, P. (1999). Top-Down Network Design. Indianapolis, IN: Cisco Press.
Chapter 20
The Promise of Mobile Internet: Personalized Services Heikki Topi
Public perceptions regarding the importance of wireless Internet access technologies and their role in the corporate IT infrastructure have fluctuated dramatically during recent years. Until early 2000, investors, technology analysts, and corporate IT executives held very positive views on the future of these technologies and the prospects for mobile commerce and other corporate uses of mobile Internet applications. However, the post-2000 downturn in the telecommunications industry in particular, and the IT industries in general, has led to a more pessimistic view about the speed of innovation in information and communication technology solutions in organizations. As with any new technology, the future will be full of surprises. Nevertheless, it is already clear that wireless access technologies can be integrated effectively into a variety of new applications and services, both within an organization's internal systems and as part of services targeted to external customers. Innovative organizations that begin now to evaluate the business opportunities with an open approach and willingness to learn will be the best prepared to leverage the benefits from a new infrastructure that enables widespread wireless access to the Internet on truly mobile devices.
The objective of this chapter is to assist decision makers in evaluating the current status and likely future of wireless Internet access and mobile commerce technologies. The chapter begins by focusing on those characteristics of mobile services that differentiate them from fixed-line access — including both the limitations of today's mobile devices as well as the
0-8493-1595-6/03/$0.00+$1.50 © 2003 by CRC Press LLC
mobile capabilities that have not yet been fully exploited. Next, the chapter provides a framework for thinking about the business opportunities for real-time and non-real-time personal applications for wireless Internet access: person-to-person and person-to-computer. Finally, the chapter discusses some of the key assumptions regarding mobile technology trends that we see as a reasonable basis for decisions about these technologies and provides recommendations for organizations considering the integration of mobile services into their infrastructure.

WHAT IS SPECIAL ABOUT MOBILE INTERNET?
Some of the most fundamental questions related to wireless Internet access and mobile commerce involve (1) the differences between wireless and fixed-line Internet access methods, (2) the current limitations of today's mobile devices, and (3) the special capabilities the mobile Internet provides compared to other Internet access mechanisms. This chapter uses the term "wireless Internet access" to refer to technologies that provide access to the Internet using methods that do not require cabling, and the term "mobile Internet" to refer to the use of wireless Internet access technologies to provide Internet-based services without constraints created by wired access.

Wireless versus Fixed-Line Internet Access
At the surface level, the core elements of the mobile Internet are not fundamentally different from those of the fixed-line Internet. In both contexts, various types of client devices are linked to an access network that, in turn, is connected to a higher-level distribution network, and traffic is eventually routed to the Internet core. In both contexts, servers are accessed through fixed links. Thus, the real technical differences are in the characteristics of the networks providing access to the client devices.
Exhibit 1 presents three wireless Internet access technologies on a continuum from personal area networks (PANs) to wide area networks (WANs). With certain types of wireless access technologies, the differences between wireless and fixed-line Internet access are small. Wireless local area network (WLAN) technologies using the 802.11 protocol can be used to provide high-speed (currently mostly 11 Mbps, but increasingly 54 Mbps and soon faster) wireless access within a local area (such as a home, office, or commercial hot-spot). In practice, the only difference between the wireless access technology and fixed LAN access is the nature of the link between the user terminal and the network access point/access device. The Bluetooth wireless protocol offers restricted wireless access for personal use (often called personal area networking). This technology is intended mostly for communication between various mobile devices and their accessories; for example, Bluetooth could be used to link a PDA (personal digital assistant) to a cell phone that provides packet-switched Internet connectivity.

[Exhibit 1. Three Examples of Wireless Internet Access Technologies on a Continuum — figure: client devices (a Bluetooth-enabled phone, PDA, or laptop) connect to the Internet through three access infrastructures arranged by extent of access: Bluetooth (personal); 802.11a/b/g Wi-Fi via a WLAN access point and an xDSL/cable modem/T-carrier router (local); and wireless WAN protocols such as CDPD, GPRS, CDMA, and UMTS/WCDMA (wide). The wide-area column shows a simplified GPRS architecture: base station subsystem, serving GPRS support node, GPRS backbone, and gateway GPRS support node.]

However, providing uniform, reliable wireless access for wide geographical areas (using, for example, CDPD, GPRS, or UMTS) is a much more complex task than providing WLAN access at a single location. Complex technical solutions are required to ensure smooth hand-offs from one base station to another, roaming between the networks of different service providers, efficient management of available frequencies, maintaining connections over long distances between terminals and base stations, providing accurate location information, providing carrier-class wireless security, and customer management and billing. In these environments, special protocol families (such as WAP) are still often used for carrying the packet data on the mobile part of the networks. Exhibit 1 also includes a diagram describing a simplified version of the GPRS architecture as an example of wireless WAN access to the Internet.

Current Limitations of Mobile Technologies
At this time and in the foreseeable future, mobile access devices and wireless networks also have some inherent disadvantages. True mobility unavoidably requires that an access device be physically small, both in terms of weight and external dimensions (that is, it can be carried unobtrusively in a pocket). With current technology, this means that it has a small display and a slow data entry mechanism. Further, mobile devices are normally battery powered. With current technology, this means the need to limit the power consumption of various components and to take issues related to power into account in all design decisions. Also, the bandwidth available for the communication between mobile devices and their respective base stations is currently constrained by the limitations set by radio-frequency technologies.
It is highly likely that the capacity disadvantage of wireless links compared to fixed-line connections is here to stay. In addition, wireless links are less reliable and less secure than fixed-line connections. When these limitations are taken into account, it is obvious that wireless Internet access devices have to offer something qualitatively different from fixed-line access before they will be widely adopted. Neither consumers nor corporate users are likely to choose a mobile interface to an application over an interface based on a fixed-line connection, unless the mobile solution provides some clear advantages that outweigh the limitations outlined above.

Potential Benefits of Wireless Applications
Successful mobile Internet applications must, in one way or another, be based on the most important advantages of mobility, such as the following.
Always Available, Always Connected. A user can, if he or she so chooses, have a mobile access device always available and always connected to the network. Consequently, a user can continuously access information and act very quickly based on information received, either from the current physical environment or through the wireless device. A potentially very significant advantage is that wireless access devices provide ubiquitous access to any information resources available on the Internet and corporate (or personal) extra- and intranets. With well-organized databases, proper user interfaces, and highly advanced search and retrieval tools adapted to the mobile environment, this may lead to significantly improved performance in tasks that depend on the availability of up-to-date factual data or the ability to update organizational information resources in real-time.
One challenge facing corporate users is that of finding applications where mobility genuinely makes a difference. In many cases, the availability of real-time information does not provide meaningful benefits because no user action is dependent on it. Among the first employee groups to benefit from applications built using mobile Internet technologies are sales and service personnel who can receive up-to-date data regarding the customers they deal with or internal corporate data (such as inventory status or product configuration information), and who can also continuously update the results of their actions in a centralized database. This means that other employees communicating with other representatives of the same client company, or managers interested in receiving continuous updates regarding a particular situation, are able to follow the development of relevant events in real-time.
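As a concrete illustration of this pattern, the sketch below models field personnel posting each customer interaction to a centralized store the moment it occurs, so colleagues and managers see the activity immediately rather than after an end-of-day batch upload. The store, record shapes, and names are hypothetical simplifications; a real deployment would sit behind an authenticated, wireless-accessible web service rather than an in-memory class.

```python
# A minimal sketch of the "always connected" pattern: updates are written to a
# shared, centralized store at the point of the real-world transaction, so any
# interested party can follow events for a customer in real time.
from datetime import datetime, timezone

class CentralStore:
    """Stand-in for a centralized database of customer activity records."""
    def __init__(self) -> None:
        self._records: list[dict] = []

    def post_update(self, rep: str, customer: str, note: str) -> None:
        # With continuous wireless connectivity, this call happens when the
        # transaction occurs -- not in a later batch upload.
        self._records.append({
            "rep": rep, "customer": customer, "note": note,
            "at": datetime.now(timezone.utc),
        })

    def activity_for(self, customer: str) -> list[dict]:
        # A manager, or another rep serving the same client company, can
        # retrieve the up-to-the-minute activity stream for that customer.
        return [r for r in self._records if r["customer"] == customer]

store = CentralStore()
store.post_update("alice", "Acme Corp", "Replaced faulty edge switch")
store.post_update("bob", "Acme Corp", "Confirmed inventory for follow-up order")
acme_events = store.activity_for("Acme Corp")
```

The contrast with a disconnected portable device is the timing of `post_update`: without connectivity, the same records would accumulate locally and reach the central store only at the next synchronization.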
Wireless access provides new opportunities for highly accurate and timely data entry. As long as the infrastructure provides sufficient but still transparent security, mobile devices with Internet access can be used as universal interfaces to enterprise applications that require data entry as an integral part of an employee's work. Thus, it may become significantly easier to collect data where and when a real-world transaction occurs. Unlike portable devices without network connectivity that enable only data collection for further batch processing, continuous wireless connectivity makes it possible to update the data in organizational systems in real-time.
Personal Services and Support. Wireless Internet access devices (or at least their identification components, such as GSM SIM cards) are more personal than any other terminals that provide network connectivity. Thus, they have the potential to be extensions of users' personal capabilities in a much stronger way than any other access device type. Both cellular handsets and PDAs are characteristically personal devices linked to one individual, and whatever the dominant form factor(s) for wireless Internet access devices will eventually become, it is highly likely that these devices will be equally personal. This is a significant difference compared
to the fixed-access world where computers are often shared.1 With appropriate software, organizational systems utilizing mobile terminals can provide every employee personalized support in their tasks wherever and whenever they happen to be performing them. The desired support level (e.g., instructions needed for performing maintenance tasks or foreign language support) can either be chosen by the user or automatically set by the application, which adapts based on the user's past actions. Similarly, a company can provide its customers with highly personalized product support or other services through a mobile terminal. In practice, this also means that both a user's and any other party's ability to collect data about the user and the user's actions is significantly better than with fixed-access devices. Thus, it is possible for a device not only to adapt to its user but also to provide highly personalized services based on the data collected during previous usage and the current context. This, of course, creates potentially significant privacy problems if the collected data is used in an inappropriate way.
Context-Specific Services and Support. A mobile access device can identify its location using a variety of technologies with ever-increasing accuracy (with an integrated Global Positioning System (GPS) receiver, the error can be as little as 30 feet). This creates an opportunity for companies to offer services based on an integration of three factors: location, time, and a user's identity. Any two of these factors alone are already enough to provide highly personalized services, but when they are combined, the usefulness of the contextual information increases significantly. Unfortunately, privacy and security issues become more difficult at the same time. This is, nevertheless, an area where corporations have an opportunity to offer highly context-specific value-added services to their customers.
It is, however, a true challenge to find mechanisms to communicate with customers using wireless devices so that they feel that the information they receive from the interaction is valuable for them and that the communication targeted to them is not intrusive. Many early experiments with mobile marketing suggest that approaches in which customers are given an incentive to initiate the communication — for example, by inviting them to participate in sweepstakes with their mobile device (the pull approach) — are more successful than sending unsolicited advertisements to them (the push approach), even in countries where the latter is not illegal.
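The context-specific logic described above can be sketched as a small decision function that combines all three factors — location, time, and user identity — and honors the pull approach by answering only customer-initiated requests. The store, coordinates, distance threshold, and time window below are illustrative assumptions, not a prescribed design.

```python
# Sketch: an offer is assembled from location + time + identity, and is only
# ever delivered when the customer initiated the interaction (pull, not push).
from dataclasses import dataclass
from typing import Optional
import math

@dataclass
class Context:
    user_id: str
    lat: float
    lon: float
    hour: int             # local hour of day, 0-23
    user_initiated: bool  # True if the customer asked (pull); False if push

def distance_m(lat1: float, lon1: float, lat2: float, lon2: float) -> float:
    # Rough equirectangular distance in meters; adequate at city scale.
    dx = math.radians(lon2 - lon1) * math.cos(math.radians((lat1 + lat2) / 2))
    dy = math.radians(lat2 - lat1)
    return math.hypot(dx, dy) * 6_371_000

STORE = {"name": "Downtown Cafe", "lat": 40.7420, "lon": -74.0048}

def offer_for(ctx: Context) -> Optional[str]:
    if not ctx.user_initiated:
        return None  # never send unsolicited advertisements
    nearby = distance_m(ctx.lat, ctx.lon, STORE["lat"], STORE["lon"]) < 200
    breakfast_hours = 7 <= ctx.hour < 11
    if nearby and breakfast_hours:
        return f"{ctx.user_id}: breakfast discount at {STORE['name']}"
    return None
```

Dropping any one of the three inputs degrades the service: without location the offer is irrelevant, without time it is mistimed, and without identity it cannot be personalized — which is exactly the point made above about combining the factors.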
A FRAMEWORK OF PERSONAL USES OF A MOBILE INTERNET
Exhibit 2 presents a framework of the potential personal uses of the mobile Internet along two dimensions: (1) the mobile Internet enables communication either between people (person-to-person) or between a person and a computer or "smart" device (person-to-computer), and (2) a specific use
Exhibit 2. Uses of Mobile Internet Technologies

Person-to-person:
— Real-Time (Calls): basic voice calls; rich calls — video, still images, audio; application sharing
— Non-Real-Time (Messaging): basic messaging; rich messaging, including e-mail and fax access; access to electronic conferencing
Person-to-computer (Info Retrieval and Applications):
— Real-Time: use of multimedia libraries; presentation support; sales support; service/maintenance support; automated customer service
— Non-Real-Time: access to intranets and internal applications; sales support; training/HR development; retail sales interface for customers; customer service support
of wireless Internet access technologies may need either real-time or non-real-time communication.2

Person-to-Person
Both research and anecdotal evidence suggest that flexible applications and services that can link end users directly to each other, either synchronously (in real-time) or asynchronously (in non-real-time), will at least initially be the most common uses of wireless Internet technologies. Basic synchronous voice service will, in all likelihood, maintain its position as the most popular communication mode, augmented with video and other rich call elements when they become technologically and economically feasible. The richness of synchronous mobile calls will increase, but most likely in the form of an exchange of shared application elements and multimedia data simultaneously with a regular voice call. Application sharing requires, of course, interoperability between application versions on different access devices and adaptability of applications to different terminal environments.
Video communication is widely used as an example of a future technology in the marketing materials of wireless equipment manufacturers and operators and, therefore, a brief discussion of this topic is warranted. Both experience with and research on desktop videoconferencing suggest that, particularly when the focus is on the task(s) (and not on, for example, getting to know each other), the video component is the first to go if desktop space is a constraint. This will likely be even more true when display size is very limited, as it unavoidably will be at least for the first few years of availability of real-time wireless videoconferencing. One of the questions that will remain important for the managers responsible for corporate information and communication infrastructure is the cost effectiveness of
mobile video, which will likely be a premium service for a significant period of time. Recreational and personal uses of mobile videoconferencing might eventually become significantly more popular than professional uses. Linking individuals and groups to social events across geographical boundaries is an attractive idea if the quality of the technology is sufficient.
In addition to real-time calls, messaging is and will continue to be another approach to person-to-person communication — an approach that does not require a real-time connection between participants, such as e-mail in the fixed Internet environment and short messages (SMS), which have become very popular particularly among GSM users in Europe and in parts of Asia. The richness of messages will also increase with multimedia components, such as images and video clips; the first commercial multimedia messaging services (MMS) were launched in the summer of 2002. It is easy to see the advantages and the attractiveness of being able to access one's messages without the limitations of time and place, whatever the technical mechanism with which they have been created. With the introduction of the true mobile Internet, mobile handsets are likely to become one of the primary mechanisms used to access one's messages, irrespective of the technology with which they were originally transmitted. From the corporate perspective, the effects are twofold: (1) an increasingly large number of mobile phones will allow easy and affordable access to e-mail (including attachments); and (2) particularly in geographical areas where one mobile standard dominates (e.g., GSM/GPRS/WCDMA in Europe), rich messages exchanged directly between mobile terminals will become a business tool, especially if mobile handsets are equipped with improved displays and versatile input devices (e.g., digital cameras, pen scanners), as has been predicted.
The true value added by the two usage types described above (rich mobile calls and messaging) is an extension of the already-existing modes of communication to new contexts without the limitations of time and place. The fundamental nature of these services appears to stay the same, but the new freedom adds value for which consumers and corporations are probably willing to pay a premium, at least in the beginning. For a corporation that is considering the adoption of mobile technologies, the implications are clear. As shown, for example, by cellular phone usage, corporate users will adopt new communication technologies and make them part of their everyday toolkit if they perceive them as personally convenient, and it will be up to management to decide whether or not boundaries are needed for the usage of the premium services. The only questions are the timing of adoption and the development of appropriate policies by organizations.
Mobile rich calls and messaging will be standard communication technologies used by the great majority of corporate users soon after technologically viable and reasonably priced options become available. This is, however, hardly surprising, and it is difficult to see the opportunities for the creation of true economic value compared to other companies within an industry. These technologies will be offered as a standard service and, as such, any innovative usage is easy to copy. First movers will probably have some advantage, but it is unlikely to be sustainable. The best opportunities for the development of new applications can probably be found by integrating rich messaging with intelligent back-office solutions, which capture, maintain, and utilize data about the messaging communication.

Person-to-Computer
Truly innovative applications are likely to be developed for person-to-computer services, which include information retrieval and interactive applications. These may require either real-time or non-real-time delivery of data from the servers that provide the service. Unlike the first two methods, which are person-to-person infrastructure services whose usage is universally the same, the use of mobile networks as an application platform is just a starting point for the development of both tailored and packaged applications that utilize mobile access as one of the fundamental features of the infrastructure. Nobody will be able to reliably predict what the most used and most profitable applications and services will be until they have been introduced and have stood the test of time. This, naturally, means that experimentation is essential. Without experimentation it is impossible to find new, profitable applications and create new innovations. Thus, it is essential to create a network of partners to allow for breadth in experimentation.
THE FEATURES OF WINNING PERSONAL APPLICATIONS

Although the success of any single mobile application cannot be predicted with any certainty, we can use the characteristics discussed above that differentiate the mobile Internet from the fixed-line Internet as a basis for evaluating the characteristics of the applications that have the potential to be successful. Successful mobile applications are not applications that have simply been ported to mobile terminals from the fixed-line environment. Powerful, productivity-enhancing mobile applications are likely to have the following features:

• They are highly context specific and adapt to the user, the location, and the task for which they are used. Context specificity is one of the fundamental strengths of mobility, and high-quality mobile applications must be built with this in mind.
DESIGNING AND OPERATING AN ENTERPRISE INFRASTRUCTURE

• They adapt to the characteristics of the terminal devices and the network being used. Wireless access devices, like mobile phones, are very personal choices, and it is unlikely that we will see convergence toward any one device type.
• They bring relevant aspects of fixed-line applications to the mobile environment. Instead of simply offering all the functionality of traditional desktop applications, successful mobile applications implement a relevant subset of functionality that is specifically chosen for the mobile context.
• They are fast and easy to use. In many cases, a user of a mobile application is literally moving and can pay only limited attention to applications. Complex command sequences or navigational structures therefore prevent their use.
• They allow users to perform simple, well-defined tasks that directly enable the users to improve their productivity, such as retrieving data from a catalog, getting instructions for performing a task, or comparing alternatives.
• Most applications can become successful only if they are willingly adopted by a large number of people, and the adoption rates of applications with complex user interfaces have never been particularly high.
• They increase a user's sense of belonging to a group or community while simultaneously emphasizing his or her individual characteristics. This is true particularly with consumer applications, but it can also be utilized within a corporate setting. Many successful early applications utilizing SMS technology (e.g., the downloading of ring tones in Europe) give the users a chance to identify and communicate with a specific group and at the same time convey an individualistic message.

TECHNOLOGY TREND ASSUMPTIONS

Decisions about investments in mobile applications and services should be made based on the following assumptions regarding technology trends:

• Decreasing cost of bandwidth.
The cost of bandwidth for accessing Internet resources through mobile devices will continue to decrease. For example, currently in the United States, unlimited packet-based wireless access to the Internet costs $40 to $50 per month at the basic capacity level (in 2002, 20 to 40 kbps). It is likely that the cost for the user will remain approximately the same while the data rates continue to increase. Particularly in the United States, widespread acceptance of mobile technologies is likely to be possible only with plans that offer unlimited access because most potential adopters are accustomed to this pricing model with fixed access. Pricing models will likely vary globally, depending on the region. The penetration rates for mobile Internet devices have the potential to increase at a very rapid pace if consumers perceive pricing models (covering both terminal devices and services) to be such that they provide fair value.
• Overhyped wireless data rates. At the same time, it is important to note that the increase in real data rates will not be as rapid as predicted. Data rates that have been promised for 2.5G (GPRS) and 3G (UMTS)3 technologies in the marketing literature are up to 180 kbps and 2 Mbps, respectively. It is highly unlikely, however, that an individual user will ever experience these rates with these technologies. According to some estimates, realistic data rates are around 50 kbps for GPRS and 200 kbps for UMTS by 2005.4 This, of course, has serious implications from the perspective of the services that realistically can be offered to the users.
• Immediate, always-on access. Widespread adoption of any Internet access technology requires access that is always available without lengthy log-on procedures. Products based on circuit-switched technologies that establish a connection at the beginning of the session will be unlikely to succeed, as the example of WAP (Wireless Application Protocol) using circuit-switched technologies has shown. For example, the packet-switched approach was one of the major advantages of i-mode over the early WAP. All new 2.5G and 3G technologies provide mechanisms for packet-switched communication. Technologically, this is based on an increased utilization of the Internet Protocol (IP) on mobile networks. New network architectures are separating intelligent services that will take place at the network edge from the fast packet-forwarding capabilities of the core networks.
• Widespread availability of mobile access. The geographical area within which mobile Internet service is available is likely to continue to grow.
In the United States, it is unlikely that ubiquitous availability will be achieved at the national level because very large, sparsely populated areas make it unprofitable; but at least in Western Europe (including the Nordic countries), there will be few areas without coverage. We will see several waves of capacity increases, which will invariably and understandably start in large metropolitan areas and gradually make their way to smaller cities and, from there, to rural communities. This pattern is likely to be repeated at least twice during the next ten years — first with 2.5G technologies such as GPRS, GPRS/EDGE, and CDMA2000 1x and then with 3G technologies such as WCDMA and CDMA2000 3x. The speed of 3G deployment will depend on the financial status of operators, the success of 2.5G technologies, consumer demand for fast-speed access, and regulatory actions.
• More than one global access standard. In the foreseeable future, there will not be one clearly globally dominant technology for providing wireless Internet access despite the efforts of the ITU-T and other global standardization organizations. The technology development path is defined most clearly for those regions where GSM is currently the dominant 2G technology (Europe and Asia except Japan). In these areas, GPRS is a natural 2.5G step in the direction of WCDMA as the 3G technology. In the United States, however, the proliferation of access technologies currently prevalent in the 2G markets seems likely to also continue in the future. Not only are operators using 2G CDMA technology likely to migrate to CDMA2000 1x/3x networks and operators using GSM or TDMA technologies toward WCDMA through GPRS/EDGE, but the market also has a large number of players whose services are currently available and based on lesser-known and slower packet-based technologies such as Mobitex (Cingular Wireless) or DataTAC/Ardis (Motient). In addition, WLAN technologies (currently the 11 Mbps 802.11b and soon the 20–54 Mbps 802.11g and 54 Mbps 802.11a) using unlicensed frequencies have recently received a lot of attention as a possible alternative to 3G technologies — particularly for data — in limited geographical areas with high user densities. As a result, at least at the present time, the migration plans toward higher speeds are not clearly specified in the United States. Even in Europe, WLAN technologies may cause interesting surprises to the operators that have invested heavily in 3G licenses. Fortunately, providers and users of mobile services are not limited to one operator or to one access technology, but they should be aware of the potential technological constraints created by the dominant access technologies in particular geographical regions.
Service providers and companies developing in-house solutions therefore need to design their services so that they are usable from a variety of devices and take the special characteristics of each end-user access device into account. Future terminal devices must be able to use multiple access technologies from different providers without dropping a service when the client moves from one access mode to another, and future applications must be able to adapt to variable data rates.
• Consumer needs still rule. The underlying technologies are irrelevant to most consumers, and all companies designing and marketing consumer services should keep this clearly in mind. Several times during the history of mobile services, we have seen cases in which a product that seems to be technologically inferior has become dominant because of seemingly irrelevant features that consumers prefer. Often, apparently meaningless services have become hugely successful because of a feature or a characteristic, the appeal of which nobody was able to predict. This is also likely to happen in the area of the mobile Internet and mobile commerce. The most successful i-mode services in Japan and short message (SMS) services in Europe have included service types for which major commercial success would have been very difficult to predict (e.g., virtual fishing in Japan or downloading of various ring tones in Europe).
• 3G is not the end of development. Technological development will certainly not end with 3G. It is likely that 4G solutions will provide seamless integration and roaming between improved versions of technologies in each of the current main access categories: personal area network technologies (Bluetooth), wireless local area networks (802.11 protocols), and wide area network access technologies (GPRS, WCDMA, CDMA2000). 4G technologies are already under development, and some observers predict that they may replace 3G as soon as 2010. Based on the recent experiences with 3G deployment, however, delays are likely.

RECOMMENDATIONS

Based on the above discussion of the differences between mobile and fixed-access Internet technologies, the potential uses for the mobile Internet, and assumptions about the development of mobile communication technologies, several recommendations can be made to organizational decision makers.

• Development of wireless handsets and infrastructure is still in its infancy, yet the importance of understanding mobility and providing relevant mobile capabilities for the members of an organization is intensifying. Learning how mobility will change the links between an organization and its employees, customers, suppliers, and other stakeholders will take time and require experience; thus, it is important to start now.
• Understanding the unique and sometimes non-intuitive characteristics of mobility and applications utilizing wireless access technologies will require individual and organizational learning. Some of the most difficult issues are related to the development of adaptive user interfaces that provide the best possible user experience in a variety of access environments.
The challenges are greatest in environments where the constraints are most severe; that is, on the physically smallest mobile devices.
• Few organizations will have all the capabilities needed to develop the best possible applications for every purpose. Partnering and various other collaborative arrangements will be just as, if not more, important in the mobile Internet context as in the fixed Internet environment.
• The introduction of mobile terminals and applications is, like any other technology deployment project, a change management project and should be seen as such. This is particularly important in multicultural environments in which cultural factors may strongly affect people's perceptions and expectations.
• Mobile Internet technologies introduce serious issues regarding security and privacy. These issues must be resolved satisfactorily before widespread deployment. The use of truly mobile devices is very personal, and users want to keep it so. Users' perceptions matter as much as reality.
• Global confusion regarding the access technologies is likely to continue, although there are good reasons to believe that future technological integration will be based on IPv6 at the network layer and vendor-independent technologies such as Java and XHTML at the application layer. Together, these form a sufficient standards platform for non-proprietary applications to succeed as long as terminal devices can effectively utilize multiple radio access technologies and connect to multiple service providers.
• Expect a fast-moving target. The technology infrastructure will continue to improve quickly, as will the capabilities of the access devices.

CONCLUSION

It is essential that decision makers begin to evaluate the capabilities and the promise of various mobile access technologies and mobile applications. Systems that utilize wireless Internet access technologies will create new business opportunities, many of which will be highly profitable for those corporations that are best positioned to take advantage of them.

Notes

1. Shared devices can, of course, be personalized in networked environments by utilizing the capabilities of network operating systems, but the level of personal attachment between a user and a terminal device is closer if the device is used exclusively by one person.
2. Nokia's MITA architecture (Mobile Internet Technical Architecture — The Complete Package, IT Press, Helsinki, 2002) divides the uses of mobile communication infrastructure into three categories in terms of the immediacy needs: rich call, messaging, and browsing.
3. A typical notation that is used to refer to different generations of mobile technologies uses 2G (the second generation) for current digital TDMA, CDMA, and GSM technologies, 2.5G for packet-based technologies that were originally intended to be intermediate technologies between 2G and 3G such as GPRS, and 3G for next-generation, high-speed, packet-based technologies such as WCDMA and CDMA2000.
4. Durlacher Research & Eqvitec Partners (2001). UMTS Report: An Investment Perspective. Available at http://www.tuta.hut.fi.
Sources

1. www.3gpp.org
2. www.3gpp2.org
3. www.cdg.org
4. www.gsmworld.com
5. www.i-mode.nttdocomo.com
6. http://standards.ieee.org
7. www.umts-forum.org
8. www.wapforum.org
GLOSSARY

Bandwidth: Formally refers to the range of frequencies a communications medium can carry but, in practice, is used to refer to the data rate of a communications link. Measured in bits per second (bps) and its multiples (kbps = kilobits per second = 1000 bps, Mbps = megabits per second = 1,000,000 bps).

Circuit-switched: Refers to transmission technologies that reserve a channel between the communicating parties for the entire duration of the communication event. Used widely in traditional telephony, but is relatively inefficient and cumbersome for data transmission.

CDMA: Code Division Multiple Access is a radio access technology developed by Qualcomm that is widely used for cellular telephony in the United States and in South Korea.

CDMA2000: One of the international radio access standards that will be used for 2.5G and 3G mobile telephony. CDMA2000 3x will provide significantly higher speeds than the transition technology CDMA2000 1x.

CDPD: Cellular Digital Packet Data is a 2G packet-switched technology used for data transmission on cellular networks mostly in the United States that provides 19.2 kbps data rates.

Data rate: The number of bits (digital units of transmission with two states represented with a 0 or a 1) a communications channel can transmit per second.

EDGE: A radio-access technology intended to make it possible for telecom operators to provide 2.5G mobile services utilizing the radio spectrum resources they already have. It can be used by both TDMA and GSM operators.

GPRS: General Packet Radio Service is a mechanism for sending data across mobile networks. It will be implemented as an enhancement on GSM networks, and will provide at first 20- to 30-kbps and later 50- to 70-kbps access speeds using a packet-switched technology. Therefore, it can provide an always-on environment in which users are not charged for connection time, but instead either for the amount of data transmitted or a flat fee per month. It is considered a 2.5G mobile technology.

GSM: A 2G cellular standard used widely in Europe and in Asia but relatively little in the United States. It is used for voice, short messaging (SMS), and for circuit-switched data.
i-mode: A set of mobile Internet access services offered on the network of the Japanese cellular provider NTT DoCoMo. Most of the services have been designed to be accessed using specific i-mode handsets.

IP: Internet Protocol. The network layer protocol that forms the foundation of the Internet and most current internal data networks.

ITU, ITU-T: International Telecommunications Union and its Telecommunication standardization section. An international organization that coordinates international telecommunications standardization efforts.

Packet-switched: Refers to transmission technologies that divide the content of the transmission into small packets that are transmitted separately. Mostly used for data but increasingly often also utilized for audio and video transmission.

SMS: Short Message Service is a mechanism for sending short text messages (originally 160 characters long) between cellular phones. Originated on GSM networks but services can now be found on all digital network types (GSM, TDMA, CDMA).

TDMA: Time Division Multiple Access is a radio-access technology for mobile telephony that is widely used in the United States. GSM is based on the same basic idea as TDMA but still requires different equipment.

UMTS: Universal Mobile Telecommunications System is one of the major 3G technologies that utilizes WCDMA as its radio access technology. Telecom operators paid more than $100 billion for UMTS licenses in Germany, Great Britain, and Italy in auctions that were organized in the summer of 2000. In the fall of 2002, only experimental UMTS networks were operational.

WAP: Wireless Application Protocol is a set of standards for delivering information to mobile devices and presenting it on them. The first versions of WAP suffered from serious practical implementation problems, and WAP services have not yet reached the popularity that was originally expected.

WCDMA: The radio access technology that will be used in UMTS.

WLAN: A Wireless Local Area Network provides wireless access to local area networks (LANs) through wireless access points. IEEE 802.11 working groups are responsible for the development of standards for WLANs.
Chapter 21
Virtual Private Networks with Quality of Service

Tim Clark
Virtual private networks (VPNs) have certainly received their fair share of press and marketing hype during the past few years. Unfortunately, a side effect of this information overload is that there now appears to be a great deal of confusion and debate over what the term "VPN" means. Is it synonymous with IPSec? Frame Relay? Asynchronous Transfer Mode (ATM)? Or is it just another name for remote access? If one were to take a quick survey of VPN information on the Internet, one would have to answer, "yes, it applies to all these technologies" (see Exhibit 1). This chapter first offers a definition that encompasses all these solutions. It then specifically discusses where the convergence of voice, video, and quality of service (QoS) fits in to this brave new VPN world. Finally, the discussion focuses on what offerings are deployable and testable today and what evaluation methodology to use in deciding which solution works best for a particular network.

DEFINING VIRTUAL PRIVATE NETWORKS

One way to define any technology is to examine it in terms of the services it provides to the end user. In the world of high technology, these poor souls are too often left scratching their heads, telephone in hand, waiting on the IT help-line for someone to explain to them why what worked yesterday in the network does not work today. VPNs offer the end user primarily three key services:

1. Privacy
2. Network connectivity
3. Cost savings through sharing

0-8493-1595-6/03/$0.00+$1.50 © 2003 by CRC Press LLC
Exhibit 1. IP, ATM, and Frame Relay Networks Can Support VPNs
Privacy

VPNs promise the end user that his data is his own, it cannot be stolen, and his network is safe from penetration. This is implemented in a couple of ways. Sometimes, a network provider may simply convince his customer that his network service is secure and that no further security measures are required. This is often the case in Frame Relay or ATM-based networks where connections are permanent and are established via the service provider's network management interface. This, however, can be a risky proposition in that few, if any, service providers offer security guarantees or penalties in their service level agreements. Frame Relay and ATM have no inherent security designed into their protocols and are, therefore, susceptible to a number of attacks. Security solutions for Frame Relay, ATM, and IP networks are available and implemented either by the customer or service provider via router software or a stand-alone product that provides security services. The security services that these products provide typically include:
• Private key encryption
• Key exchange
• Authentication
• Access control
Exhibit 2. A System of Public, Private, and Session Keys Protects Information and Maintains Performance
Private key encryption utilizes cryptographic algorithms and a shared session key to hide data. Examples of widely available private key encryption schemata are Digital Encryption Standard (DES), Triple DES, and IDEA. Because both sides of a network connection share the same key, there must be some way to exchange session keys. This is where a key exchange protocol is required (see Exhibit 2). Key exchange protocols usually involve two keys: a public key that is used to encrypt and a private key that is used to decrypt. Two sides of a network connection exchange public keys and then use these public keys to encrypt a session key, which is exchanged and decrypted using the key exchange protocol's private key.

Key exchange is susceptible to man-in-the-middle attacks and replay attacks, which involve an attacker pretending to be a trusted source. Authentication prevents this by ensuring the identity of the source. When public keys are exchanged, the source is verified either by a shared secret or via a third party called a Certificate Authority.

Access control typically rounds out a VPN product's feature set. Its primary purpose is to prevent network penetration attacks and ensure that only trusted sources can gain entry to the network. This is done by defining a list of trusted sources that are allowed access to the network.

Network Connectivity

Network connectivity relates to the ability of the nodes in a network to establish connections quickly and to the types of connections that can be established. Connection types include (see also Exhibit 3):

• One-to-one
• One-to-many
• Many-to-many
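The key exchange idea described above, in which both sides derive the same session key without ever transmitting it, can be sketched as a toy Diffie-Hellman computation. The numbers below are deliberately tiny for readability; real protocols use very large primes and vetted cryptographic libraries.

```python
# Toy Diffie-Hellman key exchange. The prime p and generator g are
# public; each party keeps its private exponent secret and sends only
# the derived public value across the network.
p, g = 23, 5

alice_private = 6
bob_private = 15

alice_public = pow(g, alice_private, p)   # sent to Bob
bob_public = pow(g, bob_private, p)       # sent to Alice

# Each side combines the other's public value with its own private key.
alice_session_key = pow(bob_public, alice_private, p)
bob_session_key = pow(alice_public, bob_private, p)

# Both sides now hold the same session key, which never crossed the wire.
assert alice_session_key == bob_session_key
print(alice_session_key)  # 2
```

An eavesdropper sees only p, g, and the two public values; recovering the session key from them is the discrete-logarithm problem, which is what makes the exchange safe against passive attackers (though, as noted above, not against man-in-the-middle attacks without authentication).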
Exhibit 3. Primary Types of Network Connectivity
IP networks provide the highest degree of connectivity. IP is a connectionless protocol that routes from site to site and establishes connections via TCP very quickly. With IP, one can create one-to-one, one-to-many, or many-to-many connections very quickly. This makes it ideal for Internet and retail commerce applications where businesses are connecting to millions of customers — each hopping from Web site to Web site.

ATM switched virtual circuits (SVCs) offer a lower degree of connectivity than IP network services. ATM is a connection-oriented protocol and requires that a signaling protocol be completed prior to connection establishment. This usually takes milliseconds, but is substantially slower than connections in an IP cloud (not running Resource Reservation Protocol [RSVP] or Differentiated Services [Diff-Serv]). Additionally, ATM requires some rather complex configurations of protocols such as LANE, PNNI, or MPOA. ATM SVCs support one-to-one and one-to-many connections, but many-to-many connections are difficult and require multiple SVCs. This makes ATM SVCs ideal for intranet applications such as large file transfers, video catalogs, and digital film distribution, where connections last minutes instead of seconds and tend to be one-to-one or one-to-many.

In the case of Frame Relay and ATM permanent virtual circuits (PVCs), connectivity is low. While one-to-one and one-to-many connections are supported, the network service provider must establish channels. This can take anywhere from a few minutes to a few days, depending on the service provider. For this reason, these services are permanent in nature and utilized to connect sites that have round-the-clock or daily traffic.
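The speed of TCP connection setup on an IP network can be demonstrated directly. The sketch below times a connection over the loopback interface; a WAN connection adds round-trip latency, but there is still no provider-side provisioning step of the kind a Frame Relay or ATM PVC requires.

```python
import socket
import time

# Measure how quickly a TCP connection is established on an IP network.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))   # port 0: let the OS pick a free port
server.listen(1)
port = server.getsockname()[1]

client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
start = time.perf_counter()
client.connect(("127.0.0.1", port))
elapsed_ms = (time.perf_counter() - start) * 1000
conn, _ = server.accept()

print(f"TCP connection established in {elapsed_ms:.2f} ms")
for s in (conn, client, server):
    s.close()
```

On loopback this completes in a fraction of a millisecond; the contrast with the minutes-to-days provisioning time for a PVC is the point of the comparison above.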
Exhibit 4. Shared Circuits Use Bandwidth More Efficiently
Cost Savings through Sharing

Just like your kindergarten teacher told you, sharing is a good thing. The "virtual" in VPN is due to the fact that, in VPNs, bandwidth is shared (see Exhibit 4). Unlike dedicated circuits in which one owns a certain bandwidth across the network whether one happens to be using it or not, VPNs allow one to share bandwidth with peers on the network. This allows the service provider to better engineer his network so that bandwidth is utilized more efficiently. Service providers can then pass substantial cost savings to their customers. Estimates for cost savings on switching from a dedicated network to a shared bandwidth network can be as much as 20 to 40 percent. The key to sharing bandwidth is that the sharing be fair, and this is why QoS is essential.

QUALITY-OF-SERVICE (QoS)

A survey by Infonetics listed QoS as the second leading concern of IT managers, behind security in its importance in their network design decisions. QoS is the ability of a network to differentiate between different types of traffic and prioritize accordingly. It is the cornerstone of any convergence strategy. Voice, video, and data display very different traffic patterns in the network. Voice and video are very delay dependent and have very predictable patterns, while data is very bursty and is less delay sensitive. If all three types of traffic are put on a network, the data traffic will usually interfere with voice and video and cause it to be unintelligible. A good example of what convergence is like without QoS would be to use an Internet phone during peak traffic hours. You will not like it.

One thing that QoS does not do is guarantee that data traffic will get from node A to node B under congestion conditions. It does not prioritize one company's traffic over another company's traffic if they are the same type. It merely prioritizes delay-sensitive traffic over traffic that is not delay sensitive. The only way a service provider can guarantee traffic will get from node A to node B is by designing the network's bandwidth capacity so that it can handle worst-case congestion conditions (see Exhibit 5). Be aware that many carriers that offer QoS overbook their networks.

Exhibit 5. Required Bandwidth is the Sum of All Components

ATM QoS

ATM networks have the advantage of being designed from the ground up for QoS. ATM networks offer Constant Bit Rate, Variable Bit Rate, Variable Bit Rate Real-time, Available Bit Rate, and Unspecified Bit Rate as Classes of Service within their QoS schemata. Additionally, the ATM Security Forum has defined security as a Class of Service. Access control, encryption, data integrity, and key exchange are all defined and integrated with QoS into a nice, interoperable package.
IP QoS

An acceptable standard for QoS within an IP-based network is still very much a work in progress. The Internet Engineering Task Force (IETF) has three standards: RSVP, Diff-Serv, and Multi-Protocol Label Switching (MPLS).

RSVP

RSVP was the first attempt at a universal, full-feature standard for IP QoS. However, based on its inability to scale, it failed to gain acceptance within the community. RSVP requires that all routers within the network maintain state information for all application flows routed through it. At the core of a large service provider network, this is impossible. It has found a small place in enterprise networks and in PVC-like applications. In these applications, flows are not set up by the end user, but by the network management system. Flows consist of aggregated traffic rather than specific host-to-host application flows.

Differentiated Services (Diff-Serv)

Diff-Serv is the IETF's attempt at a solution that scales. It is more modest in its QoS offerings than RSVP. Diff-Serv groups individual traffic flows into aggregates. Aggregates are serviced as a single flow, eliminating the need for per-flow state and per-flow signaling. Within the Layer 3 IP header is a section designated for the Diff-Serv Code Point. Routers and IP switches use this mark to identify the aggregate flow to which a packet belongs. Diff-Serv does not supply per-application QoS. Instead, it is utilized to guarantee that the service level agreement (SLA) between a customer and a service provider is kept. By setting the Diff-Serv Code Point to a specific type-of-service (ToS), the customer designates the class of service required. Assuming that the network is engineered so that the sum of all SLAs can be met, the customer's data traffic can be guaranteed to arrive at its destination within the delay, throughput, and jitter tolerances specified in the SLA.
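Setting the Diff-Serv Code Point from an application can be sketched with a standard socket option. The DSCP occupies the upper six bits of the IP ToS byte; DSCP 46 ("Expedited Forwarding," commonly used for voice) is shown here as an illustrative choice, and whether routers actually honor the mark depends on the network and the SLA.

```python
import socket

# DSCP 46 (Expedited Forwarding) sits in the upper six bits of the
# ToS byte, so the value written to the socket is 46 << 2 = 184.
DSCP_EF = 46
tos = DSCP_EF << 2

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, tos)

# Datagrams sent on this socket now carry the EF mark in the IP header.
print(sock.getsockopt(socket.IPPROTO_IP, socket.IP_TOS))
sock.close()
```

This is the application-side half of the contract: the customer marks the traffic, and the provider's routers classify and queue it according to the SLA.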
MPLS

MPLS is the up-and-coming favorite in the search to provide a standard that includes QoS and security for IP-based networks. MPLS has the advantage that it operates across and binds in a number of Layer 2 protocols, including Frame Relay, ATM, PPP, and IEEE 802.3 LANs. MPLS is a method for IP routing based on labels. The labels are used to represent hop-by-hop or explicit routes and to indicate QoS, security, and other information that affects how a given type of traffic is prioritized and carried across a network. MPLS combines the routing — and any-to-any connection capabilities within IP — and integrates them with Layer 2 protocols. For example, within ATM, IP flows are labeled and mapped to specific VPI/VCIs. The IP QoS types are mapped to ATM QoS types, creating a QoS environment where Layer 2 and Layer 3 are cohesive. Enabling any-to-any connections within ATM and Frame Relay allows service providers to connect the many nodes of a single customer's network by specifying a single connection rather than the one-to-many connections that are normally required. MPLS is somewhat new, and a number of issues will arise — its interoperability with IPSec, for example. It is a work in progress. Thus, depending on one's immediate needs and how much blood one is willing to shed for the cause, one may want to wait until the dust settles before implementing an MPLS network.

EVALUATION METHODOLOGY

In evaluating any VPN networking solution or carrier service offering, testing is crucial. I cannot emphasize enough the importance of testing. The progress of the technological society can be traced back to the philosophies of Rationalism and Empiricism. Reliance, not on authorities in the field but on empirical evidence as revealed by sound evaluation methodology, is key to designing a networking solution that will fit one's needs and grow as one's requirements grow.

Research the Technology

One cannot evaluate a technology without some understanding of its underpinnings. Ignore the hype. Hype's sole purpose is to try and sell you something. Do not waste time on chapters discussing "technology wars." This is the media's way of trying to make a rather dry subject interesting. The decision as to what is needed is going to be based on the quality of information that has been gathered. Read books, take classes, get some hands-on experience with the different VPN technologies. Decide for yourself what technology is the "winner." Seek out your peers. Have they already been through an evaluation process? What were their evaluation criteria?
Virtual Private Networks with Quality of Service

What applications are they planning on running? Remember, the Internet started out as a place for "techies" to share information and help each other solve problems. The time spent researching the technology is worth the cost.

Define the Criteria

Application Traffic. The first step in evaluating a VPN solution is to fully understand the traffic characteristics of the applications that are running or will be running on the network for at least the next three years. If planning to integrate voice and video, take the time to understand what these applications are going to do to the network's traffic. If not involved in planning the applications that the company will be utilizing, get involved. One cannot plan a network without understanding what will be running on it. Far too many IT organizations are in pure reactionary modes, in which resources are absorbed in fixing problems that could have been avoided by a little planning. Technology can solve a lot of problems, but it can create more if sufficient planning has not occurred.

Security. Defining a company's security requirements is purely a matter of discovering the acceptable risks. What value does the company place on its data? In a worst-case scenario, what damage could be done by releasing confidential information? Who are you protecting your information from? Try to use scenarios to educate company management on the cost and value of data security.
In the age of the Internet, it is a safe assumption that any traffic sent out over a public network is fair game. Even separate networks like ATM and Frame Relay are subject to attack from the Internet. Remember, security is only as strong as its weakest link. If a company with poor security has a connection to both the Internet and a Frame Relay or ATM public network, a network attacker can use that company's network to access its Frame Relay or ATM network. Unless one is trying to cure insomnia, do not get bogged down in discussions of cryptography. In most cases, open standards such as Triple DES or IDEA will provide an acceptable level of security. If someone needs this year's model of supercomputer versus last year's model to decode your data, does it matter? Unless you are the Defense Department or the Federal Reserve, the answer is usually "no." Generally, one finds that certain standards are used across the board. These include, but are not limited to, Triple DES, IDEA, RSA Key Exchange, Diffie-Hellman Key Exchange, and X.509 Certificates.

Cost. In estimating the cost savings, be very careful of some of the quote estimates provided by VPN vendors. Be sure to differentiate between cost savings based on an Internet model versus those based on an intranet model. Sure, one might save up to 80 percent if all data is sent across the Internet, but the Internet is years away from being ready for mission-critical data. So, if the information to be sent is mission critical, the Internet may not be an option.

Bandwidth. Defining a bandwidth requirement and allowing for growth is especially tricky in today's convergent environment. Video and voice traffic require much more consistent bandwidth than data traffic. Such traffic tends to fill up the pipe for longer periods of time. Imagine a large file transfer that lasts anywhere from five minutes to an hour. If packets drop in a data environment, it is usually transparent to the user.
In a voice and video environment, one dropped packet can be very apparent and is more likely to generate a complaint. Be aware that current IPSec solutions max out at about 90 Mbps simplex, or roughly 45 Mbps duplex. If one has OC-3 bandwidth requirements, then ATM may be the only choice. This is one area where it is very important to test.

QoS and Performance. Understand that QoS, as it relates to convergence, may not be the same as QoS as defined in one's service level agreement (SLA). The key is that the network must have the capability to differentiate between different types of service. If voice, video, and data are put in the same pipe under a single QoS definition, they will still stomp all over each other.
In evaluating and measuring QoS, remember that QoS is an additive effect. It may be necessary to simulate multiple switches or routers. Also, be sure to generate enough low-priority background traffic to tax the network. Important measurement criteria for QoS are delay, delay variation, dropped packets or cells, and throughput.

Transparency. Networks and network security should be invisible to the end user. This is one of the most important evaluation criteria in any security scheme. The end user must never know that a new security product has been implemented in one's network. It is not the end user's job to secure the network — it is yours. Nothing good can come from adversely affecting an end user in the performance of his duties. An end user will usually do one of two things: scream to the high heavens and sit around with his hands in his pockets until the problem is repaired, or find some way of subverting the network's security — putting everyone at risk.

Ease of Management. Ease of management is where the hedonists separate themselves from the masochists. Networking security solutions should be simple to implement and control. If they are not, then two very bad things might happen. One might accidentally subvert one's own security, exposing the company's network to risk and oneself to possible unemployment. Or one could cause the security solution to become opaque to the user, which is already known to be a bad thing from the previous paragraph. Make sure there is a comprehensive checklist defined for evaluating a VPN's management system. Keep in mind that, depending on a person's experience with networking paradigms, two people could come up with very different answers in this category. One should probably go with the dumb lazy guy's choice, assuming he was not too lazy to evaluate the network management interface.

Objectivity. It is important to remain objective in evaluating one's requirements. This is an opportunity to be a true scientist.
Try not to become emotionally attached to any one technology or vendor. It clouds one's judgment and makes one annoying to be around at parties. The best solution to a specific problem is the important thing.

Testing. Testing is where vendor claims hit the anvil of reality. Take the time to have evaluation criteria and a test plan in place well before the evaluation. Share and discuss the plan with vendors ahead of time. There is nothing to be gained by keeping a test plan secret. Many vendors will offer criteria for consideration and will help with test equipment. Good communication is a two-way street and will be absolutely essential in any installation.
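When building such a test plan, the QoS criteria listed earlier (delay, delay variation, dropped packets or cells, and throughput) ultimately reduce to arithmetic over per-packet send and receive timestamps. A minimal sketch in Python, with a purely illustrative record layout and figures:

```python
# Per-packet records from a hypothetical test run: (sequence number,
# send time, receive time), with receive time None for packets that
# never arrived. Times in seconds; all figures are illustrative.
PAYLOAD_BYTES = 1000

records = [
    (1, 0.000, 0.020),
    (2, 0.010, 0.031),
    (3, 0.020, None),   # dropped in transit
    (4, 0.030, 0.052),
]

received = [(seq, t0, t1) for seq, t0, t1 in records if t1 is not None]
delays = [t1 - t0 for _, t0, t1 in received]

# One-way delay, averaged over delivered packets (about 0.021 s here).
avg_delay = sum(delays) / len(delays)

# Delay variation (jitter): mean absolute difference between
# consecutive packet delays.
jitter = sum(abs(b - a) for a, b in zip(delays, delays[1:])) / (len(delays) - 1)

# Loss as a percentage of packets offered (25 percent here).
loss_pct = 100.0 * (len(records) - len(received)) / len(records)

# Throughput: delivered bits over the span of the test.
span = max(t1 for _, _, t1 in received) - min(t0 for _, t0, _ in records)
throughput_bps = len(received) * PAYLOAD_BYTES * 8 / span
```

Test gear reports the same four numbers; the value of spelling them out in the test plan is that the definitions (for example, jitter as the variation between consecutive delays) are explicit and can be agreed with the vendor ahead of time.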
Evaluations are opportunities not only to verify the vendor's claims, but also to test that vendor's customer support capability. Does equipment arrive at the stated time? How long does one have to wait on hold before reaching technical support? Is the vendor willing to send someone in person to support the install? How important are you to them? These factors are often ignored, but they are just as important as the technical criteria.

SUMMARY

VPNs and QoS networks are new technologies that developed in separate camps. In a number of instances, they may not be interoperable with each other, even in the same vendor's product. Careful definition of one's requirements and planning for an evaluation test period are absolutely essential in implementing a successful solution. Remember that, while vendors may not purposefully misrepresent their products, they do sometimes become confused about what they have now versus what they plan to have. Be careful to consider and prioritize requirements, and develop specific tests for those that are high priority. Be objective in studying what each of the technologies has to offer. The end result will be that people are able to communicate safely and more effectively; such a goal is worthy of some effort.

ADDITIONAL READING

John Vacca, Virtual Private Networks: Secure Access over the Internet, Data Communications Management, August 1999, No. 51-10-35.

Donna Kidder, Enable the Future with Quality of Service, Data Communications Management, June 2000, No. 53-10-42.
Chapter 22
Storage Area Networks Meet Enterprise Data Networks Lisa M. Lindgren
0-8493-1595-6/03/$0.00+$1.50 © 2003 by CRC Press LLC

Until recently, people who manage enterprise data networks and people who manage enterprise storage have had little in common. Each has pursued a separate path, with technology and solutions that were unique to their particular environments. Enterprise network managers have been busy building a secure and switched infrastructure to meet the increasing bandwidth and access demands of corporate intranets and extranets. Storage management has been more closely associated with particular applications like data backup and data mirroring. Enterprises have built standalone storage area networks (SANs) to manage the exponentially increasing volume of data that must be stored, retrieved, and safeguarded. With recent announcements, some enterprises will begin to merge storage-related networks with their data networks. This move, while making financial sense in some cases and providing tangible benefits, will create new challenges for the enterprise data network. This chapter provides a look at the rationale for SANs, the evolution of SANs, and the implications for the enterprise data network.

A few definitions are in order. A storage area network (SAN) is a network that is built for the purpose of moving data to, from, or between storage devices, such as tape libraries and disk subsystems. A SAN is built of many of the elements common in data networks — namely, switches, routers, and gateways. The difference is that these are not the same devices that are implemented in data networks. The media and protocols are different, and the nature of the traffic is different as well. A SAN is built to efficiently move very large data blocks and to allow organizations to manage a vast
amount of SAN-attached data. By contrast, a data network must accommodate both large file transfers and small transactions such as HTTP requests and responses, and 3270/5250-style transactions. A related term that one encounters when dealing with storage is network-attached storage, or NAS. This is not just a mix-up of the SAN acronym. An NAS is a device, often called a filer or an appliance, that is a dedicated storage device. It is attached to a data LAN (or, in some cases, a SAN) and allows end users or servers to write data to its local storage. An NAS separates the storage of the data from the client's system and the typical LAN-based application server. The NAS implements an embedded or standard OS, and must mimic at least one network operating system (NOS) and support at least one workstation operating system (WOS). Many NAS systems claim support for multiple NOSs and multiple WOSs. One common use of an NAS is to provide data backup without involving the CPU of a general-purpose application server.

In summary, a SAN is a storage infrastructure designed to store and manage terabytes of data for the enterprise. An NAS is a low-end device designed to service a workgroup and store tens or hundreds of gigabytes. However, they share a common benefit to the enterprise. Both SANs and NAS devices separate the data from the file server. This important benefit is explored in more detail later. Exhibit 1 depicts a basic SAN and its elements in addition to the relationship between a SAN and a data network with NAS devices.

RATIONALE FOR SANs

SANs allow the decoupling of data storage and the application hosts that access and process the data. The concept of decoupling storage from the application host and sharing data storage devices between application hosts is not new. Mainframe-based data centers have been configured in this way for many years.
The unique benefit of SANs, as compared to mainframe-oriented storage complexes, is that a SAN supports a heterogeneous mix of different application hosts. Theoretically, a SAN could be composed of back-office systems based on Windows NT, Web servers based on Linux, ERP systems based on Sun Solaris, and customer service applications based on OS/390. All hosts could seamlessly access data from a pool of common storage devices, including NAS devices, JBOD (just a bunch of disks), RAID (redundant array of inexpensive disks), tape libraries, tape backup systems, and CD-ROM libraries (see Exhibit 1). Decoupling the application host from the data storage can provide dramatically improved overall availability. Access to particular data is not dependent on the health of a single application host. When there is a one-to-one relationship between host and data, the host must be active and
Exhibit 1. Conceptual Depiction of a Storage Area Network (SAN)
have sufficient available bandwidth to respond to a request for the data. SANs allow storage-to-storage connectivity so that certain procedures can take place without the involvement of an application host. For example, data mirroring, backup, and clustering can be easily implemented without impacting the mission-critical application hosts or the enterprise LAN or WAN. This enhances an organization's overall high availability and disaster recovery abilities. SANs permit organizations to respond quickly to demands for increased storage. Without a SAN, the amount of storage available is proportionally related to the number of servers in the enterprise. This is a critical benefit of SANs. Most organizations that have embarked upon E-commerce and E-business initiatives have discovered that their storage requirements are increasing almost exponentially. According to IBM, as organizations begin to perform business transactions via the Internet or extranet, they can expect to see information volume increase eightfold. SANs allow organizations to easily add new storage devices with minimal impact on the application hosts.

SAN EVOLUTION AND TECHNOLOGY OVERVIEW

Before SANs were around, the mainframe world and the client/server world had completely different storage media, protocols, and management systems. In the mainframe world, ESCON channels and ESCON directors provided a high-speed, switched infrastructure for data centers. An ESCON director is, in fact, a switch that allows mainframes and storage subsystems to be dynamically added and removed. ESCON operated initially at 10 MBps and eventually 17 MBps, which is significantly faster than its predecessor channel technology, Bus-and-Tag (4.5 MBps maximum). Networking of mainframe storage over a wide area network using proprietary protocols has also been available for many years, from vendors such as Network Systems Corporation and CNT. In the client/server world, the Small Computer Systems Interface (SCSI) is an accepted and evolving standard. SCSI is a parallel bus that supports a variety of speeds, starting at 5 MBps for SCSI-1 and now supporting up to 320 MBps for the new Ultra320, although most devices installed operate at 20, 40, or 80 MBps. However, unlike the switched configurations possible with ESCON, SCSI is limited to a daisy-chaining configuration with a maximum of four, eight, or sixteen devices per chain, depending on which SCSI standard is implemented. There must be one "master" in the chain, which is typically the host server.
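The "storage is proportional to the number of servers" problem can be put in numbers with a toy capacity model (all figures are hypothetical). With direct-attached storage, free space is trapped inside each server; in a shared pool, it aggregates:

```python
# Toy capacity model: per-server (capacity GB, used GB). Figures and
# server names are illustrative, not from any real configuration.
servers = {"web": (500, 420), "erp": (500, 180), "mail": (500, 300)}

# Direct-attached storage: the largest request any one host can
# satisfy is limited by that host's own free space.
headroom = {name: cap - used for name, (cap, used) in servers.items()}
max_das_request = max(headroom.values())

# Pooled (SAN) storage: free space aggregates across the pool and can
# be allocated to whichever host needs it.
pool_headroom = sum(headroom.values())

print(max_das_request, pool_headroom)  # 320 600
```

The same 600 GB of free capacity exists in both cases; only the pooled arrangement lets all of it be used where the demand actually appears.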
It was the development and introduction of Fibre Channel technology that made SANs possible. Fibre Channel is the interconnect technology that allows organizations to build a shared or switched infrastructure for storage that parallels in many ways a data network. Fibre Channel:

• Is a set of ANSI standards
• Offers high speeds of 1 Gbps with a sustained throughput of 97 MBps (the standard is scalable to up to 4 Gbps)
• Supports point-to-point, arbitrated loop, and fabric (switched) configurations
• Supports SCSI, IP, video, and raw data formats
• Supports fiber and copper cabling
• Supports distances up to 10 km
Fibre Channel is used primarily for storage connectivity today. However, the Fibre Channel Industry Association (www.fibrechannel.com) positions Fibre Channel as a viable networking alternative to Gigabit Ethernet and ATM. They cite CAD/CAE, imaging, and corporate backbones as good targets for Fibre Channel networking. In reality, it is unlikely that Fibre Channel will gain much of a toehold in the enterprise network, because it would require a wholesale conversion of NICs, drivers, and applications — the very reason that ATM has lost out to Gigabit Ethernet in many environments. SANs within a campus are built using Fibre Channel hubs, switches, and gateways. The hubs, like data networking hubs, provide a shared bandwidth approach. Hubs link individual elements together to form an arbitrated loop. Disk systems integrate a loop into the backplane and then implement a port bypass circuit so that individual disks are hot-swappable. Fibre Channel switches are analogous to Ethernet switches. They offer dedicated bandwidth to each device that is directly attached to a single port in a point-to-point configuration. Like LAN switches, Fibre Channel switches are stackable so that the switch fabric is scalable to thousands of ports. Host systems (e.g., PC servers, mainframes) support Fibre Channel host adapter slots or cards. Many hosts are configured with a LAN or WAN adapter as well for direct access to the data network (see Exhibit 2). Newer storage devices have direct Fibre Channel adapters. Older storage devices can be integrated into the Fibre Channel fabric by connecting to an SCSI-to-FC gateway or bridge.

FIBRE CHANNEL DETAILS

Fibre Channel has been evolving since 1988. It is a complex set of standards that is defined in approximately 20 individual standards documents under the ANSI standards body.
Although a thorough overview of all of the details of this complex and comprehensive set of standards is beyond the scope of this chapter, the basics of Fibre Channel layers, protocols, speeds and media, topologies, and port types are provided. Like other networking technologies, Fibre Channel provides some of the services defined by the Open Systems Interconnection (OSI) seven-layer reference model. The Fibre Channel standards define the physical layer up to approximately the transport layer of the OSI model, broken down into five different layers: FC-0, FC-1, FC-2, FC-3, and FC-4. Fibre Channel itself does not define a particular transport or upper layer protocol. Instead, it defines mappings from several popular and common upper layer protocols (e.g., SCSI, IP) to Fibre Channel. Exhibit 3 summarizes the functions of the five Fibre Channel layers.
Exhibit 2. Components of a Storage Area Network
Exhibit 3. Fibre Channel Layers

Layer   Functions
FC-0    Signaling, media specifications, receiver/transmitter specifications
FC-1    8B/10B character encoding, link maintenance
FC-2    Frame format, sequence management, exchange management, flow control, classes of service, login/logout, topologies, segmentation and reassembly
FC-3    Services for multiple ports on one node
FC-4    Upper Layer Protocol (ULP) mapping:
        • Small Computer System Interface (SCSI)
        • Internet Protocol (IP)
        • High Performance Parallel Interface (HIPPI)
        • Asynchronous Transfer Mode — Adaptation Layer 5 (ATM-AAL5)
        • Intelligent Peripheral Interface — 3 (IPI-3) (disk and tape)
        • Single Byte Command Code Sets (SBCCS)
        • Future ULPs

Source: University of New Hampshire InterOperability Lab.
Although its name may imply otherwise, the Fibre Channel standard supports transmission over both fiber and copper cabling for transmission up to the "full-speed" rate of 100 megabytes per second (MBps). Slower rates are supported, and products are currently available at half-, quarter-, and eighth-speeds, representing rates of 50, 25, and 12.5 MBps, respectively. Higher speeds of 200 and 400 MBps are also supported and implemented in today's products, but only fiber cabling is supported at these higher speeds. The Fibre Channel standards support three different topologies: point-to-point, arbitrated loop, and fabric. A point-to-point topology is straightforward. A single cable connects two different end points, such as a server and a disk subsystem. The arbitrated loop topology is analogous to a shared media LAN such as Ethernet or Token Ring. Like a LAN, the devices on an arbitrated loop share the total bandwidth. This is a complex topology because issues like contention for the loop must be resolved, but it is the most common topology implemented today. The devices in an arbitrated loop can be connected from one to another in a ring-type topology, or a centralized hub can be implemented to allow for an easier and more flexible star-wired configuration. A single arbitrated loop can connect up to 127 devices, which is sufficient for many SAN implementations. The final topology is the fabric. This is completely analogous to a switched Fast Ethernet environment. The devices and hosts are directly attached, point-to-point, to a central switch. Each device can utilize the full bandwidth of its connection. Switches can be networked together. The fabric can support up to 2^24 (more than 16 million) devices. The fabric is the topology that offers the maximum scalability and availability. Obviously, it is also the most costly of the three topologies.
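The figures above hang together arithmetically: the slower rates are simple fractions of full speed, and the fabric's device limit follows from its 24-bit address space (the loop limit of 127 reflects its one-byte loop address, of which only 127 values are valid under 8B/10B encoding). A quick check in Python:

```python
# Fibre Channel rate variants as fractions of the 100-MBps "full speed".
FULL_SPEED_MBPS = 100
rates = {name: FULL_SPEED_MBPS / div
         for name, div in [("full", 1), ("half", 2), ("quarter", 4), ("eighth", 8)]}
assert rates == {"full": 100.0, "half": 50.0, "quarter": 25.0, "eighth": 12.5}

# Fabric topology: a 24-bit address space distinguishes over 16 million
# devices, hence "up to 2^24 devices".
fabric_limit = 2 ** 24
print(fabric_limit)  # 16777216
```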
The Fibre Channel standards define a variety of different types of ports that are implemented in various products. Exhibit 4 provides a definition of the various types of ports.

ACCOMMODATING SAN TRAFFIC ON THE ENTERPRISE DATA NETWORK

Enterprises are widely implementing SANs to meet the growing demand for enterprise storage. The benefits are real and immediate. However, in some cases, the ten-kilometer limit of a SAN can be an impediment. For example, a disaster recovery scheme may require sending large amounts of data to a sister site located in another region of the country hundreds of miles away. For this and other applications, enterprises need to send SAN traffic over a WAN. This should not be done lightly, because WAN speeds are often an order of magnitude lower than campus speeds and the amount of data
Exhibit 4. Fibre Channel Port Types

Port Type   Definition
N_Port      Node port, implemented on an end node such as a disk subsystem, server, or PC
F_Port      Port of the fabric, such as on an FC switch
L_Port      Arbitrated loop port, such as on an FC hub
NL_Port     Node port that also supports arbitrated loop
FL_Port     Fabric port that also supports arbitrated loop
E_Port      Connects FC switches together
G_Port      A port that may act either as an F_Port or an E_Port
GL_Port     A G_Port that also supports arbitrated loop
can be enormous. However, there are very real and valid instances in which it is desirable or imperative to send storage traffic over a WAN, including:

• Remote tape backup for disaster recovery
• Remote disk mirroring for continuous business operations
• Use of a storage service provider for outsourced storage services

Enterprises have two basic choices in extending the SAN to the wide area. They can either build a stand-alone WAN that is used only for storage traffic, or they can integrate it with the existing data WAN. A stand-alone WAN can be built with proprietary protocols over high-speed links, or it can utilize ATM. The obvious drawback of this approach is its high cost of ownership. If the links are not fully utilized for a large portion of the day and week, it may be difficult to justify a separate infrastructure and ongoing telecommunication costs. The advantage of this approach is that it dedicates bandwidth to storage management. A shared network approach may be viable in certain instances. With this approach, the SAN traffic shares the WAN with the traditional enterprise data network. Various approaches exist to allow this to happen. As already detailed, the Fibre Channel standards define a mapping for IP-over-FC, so products that implement IP will work natively over any IP-based data WAN. Other approaches encapsulate proprietary storage-oriented protocols (e.g., EMC's proprietary remote data protocol, Symmetrix Remote Data Facility — SRDF) within TCP/IP so that the traffic is seamlessly transported on the WAN. What does all this mean to networking vendors and enterprise network managers? First and foremost, it means that the data WAN, already besieged with requests for increased bandwidth to support new E-business applications, may need to deal with a potentially huge new type of traffic not previously anticipated. The key to making a shared storage/data network work will be cooperative planning between the affected IT organizations.
For example, can the storage traffic use the network only during periods of low
transaction traffic? What is the amount of data, and what is the window in which the transfer of data must be completed? What bandwidth management, quality-of-service, and queuing tools are available to allow the two environments to coexist peacefully? These are the critical questions that the enterprise data manager must ask to begin the process of defining a solution that will minimize the impact on the regular data traffic.

SUMMARY

Storage area networks (SANs) are being implemented in enterprises of all sizes. The separation of the storage of data from the application or file server has numerous benefits. Fibre Channel, a set of standards defined over a period of years to support high speeds and ubiquitous connectivity, offers the enterprise a variety of different topologies. However, in some cases, the SAN must be extended over a wide area data network. When this happens, the impact on the data network can be severe if proper planning and tools are not put in place. The enterprise data manager must understand the type, quantity, duration, and timing of the storage traffic to effectively integrate the storage data with the enterprise data network while minimizing the impact on both operations.
Chapter 23
Data Warehousing Concepts and Strategies Bijoy Bordoloi Stefan M. Neikes Sumit Sircar Susan E. Yager
Many IT organizations are increasingly adopting data warehousing as a way of improving their relationships with corporate users. Proponents of data warehousing technology claim the technology will contribute immensely to a company's strategic advantage. According to Gartner, U.S. companies spent $7 billion in 1999 on the creation and operation of data warehouses; the amount spent on these techniques has grown by 35 percent annually since 1996.1 Companies contemplating the implementation of a data warehouse need to address many issues concerning strategies, the type of data warehouse, front-end tools, and even corporate culture. Other issues that also need to be examined include who will maintain the data warehouse and how often, and most of all, which corporate users will have access to it. After defining the concept of data warehousing, this chapter provides an in-depth look at design and construction issues; types of data warehouses and their respective applications; data mining concepts, techniques, and tools; and managerial and organizational impacts of data warehousing.

HISTORY OF DATA WAREHOUSING

The concept of data warehousing is best presented as part of an evolution that began about 35 years ago. In the early 1960s, the arena of computing was limited by punch cards, files on magnetic tape, slow access times, and an immense amount of overhead. About the mid-1960s, the near-explosive
growth in the usage of magnetic tapes increased the amount of data redundancy. Suddenly, new problems, ranging from synchronizing data after updating to handling the complexity of maintaining old programs and developing new ones, had to be resolved. The 1970s saw the rise of direct-access storage devices and the concomitant technology of database management systems (DBMSs). DBMSs made it possible to reduce the redundancy of data by storing it in a single place for all processing. Only a few years later, databases were used in conjunction with online transaction processing (OLTP). This advancement enabled the implementation of such applications as automated teller machines and the reservations systems used by travel and airline industries to store up-to-date information. By the early 1980s, the introduction of the PC and fourth-generation technology let end users innovatively and more effectively utilize data in the database to guide decision making. All these advances, however, engendered additional problems, such as producing consistent reports for corporate data. It was difficult and time-consuming to accomplish the step from pure data to information that gives meaning to the organization and to overcome the lack of integration across applications. Poor or nonexistent historical data only added to the problems of transforming raw data into intelligent information. This dilemma led to the realization that organizations need two fundamentally different sets of data. On one hand, there is primitive or raw data, which is detailed, can be updated, and is used to run the day-to-day operations of a business. On the other hand, there is summarized or derived data, which is less frequently updated and is needed by management to make higher-level decisions. The origins of the data warehouse as a subject-oriented collection of data that supports managerial decision making are therefore not surprising.
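The split between primitive and derived data can be sketched in a few lines (the table and field names here are hypothetical): detailed operational rows run the business, while a periodic summarization step produces the derived view that management queries:

```python
# Toy illustration of primitive (operational) versus derived (summary)
# data. Table and field names are hypothetical.
from collections import defaultdict

# Primitive data: one row per order, updated as the business runs.
orders = [
    {"order_id": 101, "region": "EAST", "amount": 250.0},
    {"order_id": 102, "region": "WEST", "amount": 400.0},
    {"order_id": 103, "region": "EAST", "amount": 150.0},
]

# Derived data: periodically summarized by subject (region) for
# management reporting; refreshed far less often than the source rows.
sales_by_region = defaultdict(float)
for row in orders:
    sales_by_region[row["region"]] += row["amount"]

print(dict(sales_by_region))  # {'EAST': 400.0, 'WEST': 400.0}
```

The operational rows keep changing all day; the derived totals are rebuilt on a schedule, which is exactly the "less frequently updated" property described above.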
Many companies have finally realized that they cannot ignore the role of strategic information systems if they are to attain a strategic advantage in the marketplace. CEOs and CIOs throughout the United States and the world are steadily seeking new ways to increase the benefits that IT provides. Data is increasingly viewed as an asset, in many cases with as much importance as financial assets. New methods and technologies are being developed to improve the use of corporate data and provide for faster analyses of business information. Operational systems are not able to meet decision support needs for several reasons:

• Most organizations lack online historical data.
• The data required for analysis often resides on different platforms and operational systems, which complicates the issue further.
• The query performance of many operational systems is extremely poor, and heavy analytical queries in turn degrade transaction processing.
• Operational database designs are inappropriate for decision support.

For these reasons, the concept of data warehousing, which has been around for as long as databases have existed, has suddenly come to the forefront. A data warehouse eliminates the decision support shortfalls of operational systems in a single, consolidated system. Data is thus made readily accessible to the people who need it, especially organizational decision makers, without interrupting online operational workloads. The key benefit of a data warehouse is that it provides a single, more quickly accessible, and more accurately consolidated image of business reality. It lets organizational decision makers monitor and compare current and past operations, rationally forecast future operations, and devise new business processes. These benefits are driving the popularity of data warehousing and have led some advocates to call the data warehouse the center of IS architecture in the years ahead.

THE BASICS OF DATA WAREHOUSING TECHNOLOGY

According to Bill Inmon, author of Building the Data Warehouse,2 a data warehouse has four distinguishing characteristics:

1. Subject orientation
2. Integration
3. Time variance
4. Nonvolatility
As depicted in Exhibit 1, the subject-oriented character of the data warehouse organizes data according to subject, unlike the application-based database. The alignment around subject areas affects the design and implementation of the data found in the data warehouse; for this reason, the major subject areas influence the most important part of the key structure. Data warehouse entries also differ from application-oriented data in their relationships. Whereas operational data holds relationships among tables based on the business rules currently in effect, the data warehouse encompasses a spectrum of time.

A data warehouse is also integrated in that data is moved there from many different applications (see Exhibit 2). This integration is noticeable in several ways, such as the implementation of consistent naming conventions, consistent measurement of variables, consistent encoding structures, and consistent physical attributes of data. In comparison, operational data is often inconsistent across applications. The preprocessing of information aids in reducing access time at the point of inquiry.
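The consistent-encoding aspect of integration can be sketched in a few lines. The source systems and their encodings below are invented for illustration; a real load step maps many such fields from each feeding application.

```python
# Hypothetical source systems encode "gender" three different ways
# (m/f, 1/0, male/female). The load step maps them all to one
# consistent warehouse encoding.

GENDER_MAP = {
    "m": "M", "f": "F",         # application A
    "1": "M", "0": "F",         # application B
    "male": "M", "female": "F"  # application C
}

def integrate_gender(raw_value):
    """Translate any source encoding to the warehouse standard (M/F)."""
    key = str(raw_value).strip().lower()
    if key not in GENDER_MAP:
        raise ValueError(f"unknown gender encoding: {raw_value!r}")
    return GENDER_MAP[key]

# Records arriving from the three source systems
source_rows = [("app_a", "m"), ("app_b", "0"), ("app_c", "Female")]
warehouse_rows = [(src, integrate_gender(g)) for src, g in source_rows]
```

The same pattern applies to units of measure, date formats, and physical attributes: one translation table (or function) per inconsistency, applied once at load time.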
DESIGNING AND OPERATING AN ENTERPRISE INFRASTRUCTURE
Exhibit 1. The Data Warehouse Is Subject Oriented
Exhibit 2. Integration of Data in the Data Warehouse

Exhibit 3 shows the time-variant feature of the data warehouse. The data stored is typically five to ten years old and is used for making consistent comparisons, viewing trends, and forecasting. Operational environment data reflects accurate values only as of the moment of access; the data in such a system may change at a later point in time through updates or inserts. In contrast, data in the data warehouse is accurate as of some moment in time and will produce the same results every time for the same query. The time-variant feature of the data warehouse is observed in different ways. In addition to the lengthier time horizon as compared to the operational environment, time variance is also apparent in the key structure of a data warehouse. Every key structure contains — implicitly or explicitly — an element of time, such as day, week, or month. Time variance is also evidenced by the fact that the data warehouse is never updated, whereas operational data is updated as the need arises.

Exhibit 3. The Data Warehouse Is Time Variant

The nonvolatility of the warehouse means that there is no inserting, deleting, replacing, or changing of data on a record-by-record basis, as is the case in the operational environment (see Exhibit 4). This difference has tremendous consequences. At the design level, for example, there is no need to be cautious about update anomalies. It follows that normalization of the physical database design loses its importance because the design focuses on optimized access of data. Other issues that simplify data warehouse design are the absence of transaction processing and record-level data integrity concerns, as well as of the deadlock detection and remediation found in every operational database environment.

Exhibit 4. The Data Warehouse Is Nonvolatile

Effective and efficient use of the data warehouse necessitates that the data warehouse run on a separate platform. If it does not, it will slow down the operational database and degrade response time by a large factor.

DESIGN AND CONSTRUCTION OF A DATA WAREHOUSE

Preliminary Considerations

Like any other undertaking, a data warehouse project should demonstrate success early and often to upper management. This ensures high visibility and justifies the immense commitment of resources and costs associated with the project. Before undertaking the design of the data warehouse, however, it is wise to remember that a data warehouse project is not as easy as copying data from one database to another and handing it over to users, who then simply extract the data with PC-based query and reporting tools. Developers should not underestimate the many complex issues involved in data warehousing, including architectural considerations, security, data integrity, and network issues. According to one estimate, about 80 percent of the time spent constructing a data warehouse is devoted to extracting, cleaning, and loading data. In addition, problems that may have gone undetected for years can surface during the design phase; the discovery of data that has never been captured, or of data that has been altered after being stored, are examples of such problems. A solid understanding of the business and all the processes that have to be modeled is also extremely important.

Another major consideration important to up-front planning is the difference between the data warehouse and most other client/server applications. First, there is the issue of batch orientation for much of the processing. The complexity of the processes (which may be executed on multiple platforms), data volumes, and resulting data synchronization issues must be correctly analyzed and resolved. Next, the data volume in a data warehouse, which can be in the terabyte range, must be considered. New purchases of large amounts of disk storage space and magnetic tape for backup should be expected. It is also vital to plan and provide for the transport of large amounts of data over the network. The ability of data warehousing to support a wide range of queries — from simple ones that return only limited amounts of information to complex ones that might access several million rows — can cause complications.
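The extract-clean-load work described above, which by the estimate quoted consumes most of a warehouse project, can be illustrated with a minimal sketch. The field names, rejection rule, and source rows are hypothetical.

```python
# A toy extract-clean-load step: take rows from a hypothetical source,
# reject rows missing a customer key, normalize a text field, and tag
# each surviving row with its load date.

from datetime import date

def clean_row(row, load_date):
    """Return a cleansed copy of the row, or None to reject it."""
    if not row.get("cust_id"):
        return None                        # reject: unusable without a key
    cleaned = dict(row)
    cleaned["region"] = row.get("region", "UNKNOWN").strip().upper()
    cleaned["load_date"] = load_date       # time element for the warehouse key
    return cleaned

source = [
    {"cust_id": "C1", "region": " east "},
    {"cust_id": None, "region": "west"},   # will be rejected
    {"cust_id": "C2"},                     # missing region -> UNKNOWN
]
loaded = [r for r in (clean_row(row, date(2003, 1, 31)) for row in source) if r]
```

Even this trivial version shows why the stage dominates project effort: every field from every feeding system needs its own rejection, normalization, and defaulting rules.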
It is also necessary to incorporate the availability of corporate metadata into this planning. The designers of the data warehouse have to remember that metadata is likely to be replicated at multiple sites, which points to the need for synchronization across the different platforms to avoid inconsistencies. Finally, security must be considered. In terms of location and security, data warehouse and non-data warehouse applications must appear seamless: users should not need different IDs to sign on to different systems, and the application should be smart enough to provide users the correct access with only one password.

Designing the Warehouse

After all the preliminary issues have been addressed, the design task begins. There are two approaches to designing a data warehouse: the top-down
approach and the bottom-up approach. In the top-down approach, all of an organization's business processes are analyzed to build an enterprisewide data warehouse in one step. This approach requires an immense commitment of planning, resources, and time, and it results in a new information structure from which the entire organization benefits. The bottom-up approach, on the other hand, breaks down the task and delivers only a small subset of the data warehouse; new pieces are then phased in until the entire organization is modeled. The bottom-up approach allows data warehouse technology to be delivered quickly to a part of the organization. This approach is recommended because its time demands are less rigorous. It also allows development team members to learn as they implement the system, identify bottlenecks and shortfalls, and find out how to avoid them as additional parts of the data warehouse are delivered. Because a data warehouse is subject oriented, the first design step involves choosing a business subject area to be modeled and eliciting information about the following:

• The business process that needs to be modeled
• The facts that need to be extracted from the operational database
• The level of detail required
• Characteristics of the facts (e.g., dimension, attribute, and cardinality)
After each of these areas has been thoroughly investigated and more information about facts, dimension, attributes, and sparsity has been gathered, yet another decision must be made. The question now becomes which schema to use for the design of the data warehouse database. There are two major options: the classic star schema and the snowflake schema. The Star Schema. In the star design schema, a separate table is used for each dimension, and a single large table is used for the facts (see Exhibit 5). The fact table’s indexed key contains the keys of the different dimension tables.
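A star schema of this kind can be sketched with an in-memory SQLite database. The table and column names are invented; the point is that the fact table's primary key is composed of the dimension keys, and rollups are simple joins.

```python
# A minimal star schema: two dimension tables and one fact table whose
# composite primary key is made up of the dimension keys.

import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE dim_product (product_key INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE dim_time    (time_key    INTEGER PRIMARY KEY, month TEXT);
CREATE TABLE fact_sales (
    product_key INTEGER REFERENCES dim_product(product_key),
    time_key    INTEGER REFERENCES dim_time(time_key),
    units_sold  INTEGER,
    PRIMARY KEY (product_key, time_key)   -- fact key = dimension keys
);
INSERT INTO dim_product VALUES (1, 'Washer'), (2, 'Dryer');
INSERT INTO dim_time    VALUES (1, '2003-01'), (2, '2003-02');
INSERT INTO fact_sales  VALUES (1, 1, 40), (2, 1, 35), (1, 2, 50);
""")

# A typical summary rollup: units sold per product across all periods
cur.execute("""
    SELECT p.name, SUM(f.units_sold)
    FROM fact_sales f JOIN dim_product p USING (product_key)
    GROUP BY p.name ORDER BY p.name
""")
rollup = cur.fetchall()
```

Note that no row is inserted for invalid product/time combinations, which is exactly how the star schema avoids the sparsity problem discussed next.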
With this schema, the problem of sparsity, or the creation of empty rows, is avoided by not creating records where combinations are invalid. Users are able to follow paths for detailed drilldowns and summary rollups. Because the dimension tables are relatively small, precalculated aggregations can be embedded within the fact table, providing extremely fast response times. It is also possible to apply multiple hierarchies against the same fact table, which leads to a flexible and useful set of data.

Exhibit 5. The Star Design Schema

The Snowflake Schema. The snowflake schema depicted in Exhibit 6 is best used when there are large dimensions, such as time. The dimension tables are split at the attribute level to provide a greater variety of combinations. The breakup of the time dimension into a quarter entity and a month entity, for example, provides more detailed aggregation and more exact information.

Exhibit 6. The Snowflake Design Schema

DECISION SUPPORT SYSTEMS AND DATA WAREHOUSING

Because many vendors offer decision support system (DSS) products and information on how to implement them abounds, insight into the different technologies available is helpful. Four concepts should be evaluated in terms of their usability for decision support and their relationship to the so-called real data warehouse: (1) virtual data warehouses, (2) multidimensional online analytical processing (OLAP), (3) relational OLAP, and (4) Web-based data warehouses.

The Virtual Data Warehouse

The virtual data warehouse promises to deliver the same benefits as a real data warehouse, but without the associated amount of work and difficulty. The virtual data warehouse concept can be subdivided into the surround data warehouse and the OLAP/data mart warehouse. In a surround data warehouse, legacy systems are merely surrounded with methods to access data, without a fundamental change in the operational data. The surround concept thus negates a key feature of the real data warehouse, which integrates operational data in a way that allows users to make sense of it. In addition, the data structure of a virtual data warehouse does not lend itself to DSS processing: legacy operational systems were built to ease updating, writing, and deleting, not with simple data extraction in mind. Another deficiency of this technology is the minimal amount of historical data that is kept, usually only 60 to 90 days' worth of information. A real data warehouse, on the other hand, with its
five to ten years' worth of information, provides a far superior means of analyzing trends.

In the case of direct OLAP/data marts, legacy data is transferred directly to the OLAP/data mart environment. Although this approach recognizes the need to remove data from the operational environment, it too falls short of being a real data warehouse. If only a few small applications were feeding a data mart, the approach would be acceptable. The reality, however, is that there are many applications and thus many OLAP/data mart environments, each requiring a customized interface, especially as the number of OLAP/data marts increases. Because the different OLAP/data marts are not effectively integrated, different users may arrive at different conclusions when analyzing the data. As a result, it is possible for the marketing department to report that the business is doing fine and another department to report just the opposite. This drawback does not exist with the real data warehouse, where all data is integrated; users who examine the data at a certain point in time all reach the same conclusions.

Multidimensional OLAP

Multidimensional database technology is a definite step up from the virtual data warehouse. It is designed for executives and analysts who want to look at data from different perspectives and have the ability to examine summarized and detailed data. When implemented together with a data warehouse, multidimensional database technology provides more efficient and faster access to corporate data. Proprietary multidimensional databases facilitate the hierarchical organization of data in multiple dimensions, allowing users to make advanced analyses of small portions of data from the data warehouse. The technology is understandably embraced by many in the industry because of its increased usability and superior analytical functionality.
As a stand-alone technology, however, multidimensional OLAP is inferior to a real data warehouse for a variety of reasons. The main drawback is that the technology is not able to handle more than 20 to 30 gigabytes of data, which is unacceptable for most larger corporations, whose needs range from 100 gigabytes to several terabytes. Furthermore, multidimensional databases do not have the flexibility and scalability required of today's decision support systems because they do not support the necessary ad hoc creation of multidimensional views of products and customers. Multidimensional databases should be considered for use in smaller organizations or at a department level only.
Relational OLAP

Relational OLAP is also used with many decision support systems and provides sophisticated analytical capability in conjunction with a data warehouse. Unlike multidimensional database technology, relational OLAP lets end users define complex multidimensional views and analyze them. These advantages are only possible if certain functionality is incorporated into relational OLAP: users must be removed from the process of generating their own Structured Query Language (SQL), and multiple SQL statements should be generated by the system for every analysis request to the data warehouse. In this way, a set of business measurements (e.g., comparison and ranking measurements) is established, which is essential to the appropriate use of the technology. Although relational OLAP technology works well in conjunction with a data warehouse, the technology by itself is somewhat limited.

Examination of the three preceding decision support technologies leads to the deduction that data warehouse technology is still best suited to larger firms. The benefit of having integrated, cleansed data from legacy systems, together with historical information about the business, makes a properly implemented data warehouse the primary choice for decision support.

Web-Based Data Warehouses

The Internet and World Wide Web have already exhibited their influence on all aspects of today's business environment, and they are exerting a significant impact on data warehousing as well.3 Web-based data warehouses (often called data Webhouses) are Web instantiations of data warehouses.4 The basic purpose of the data Webhouse is to provide information to internal management, employees, customers, and business partners, and to promote the exchange of experiences and ideas for problem resolution and effective decision making.
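The SQL-generation idea described under Relational OLAP, where the system builds statements on the user's behalf rather than having users write SQL, can be sketched as follows. The request format and table names are hypothetical.

```python
# Sketch of relational OLAP's query generation: the user states a
# multidimensional request (a measure plus dimensions) and the tool
# emits the SQL. Table and column names are invented.

def build_query(measure, dimensions, fact_table="fact_sales"):
    """Turn a (measure, dimensions) request into a GROUP BY statement."""
    dims = ", ".join(dimensions)
    return (f"SELECT {dims}, SUM({measure}) AS total "
            f"FROM {fact_table} GROUP BY {dims}")

sql = build_query("units_sold", ["region", "month"])
```

A real ROLAP engine would generate several such statements per analysis request (comparisons, rankings, subtotals) and combine the results, but the principle is the same: the user never touches SQL.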
Use of a Web-based architecture provides ease of access, platform independence, and lower cost than traditional data warehouses. A data Webhouse can be considered a marriage of data warehousing and the Web, resulting in a more robust interface capable of presenting information to users in a desirable format. Any user with a Web browser can access stored information, including teleworkers, sales representatives at customer sites, customers, suppliers, and business partners. In addition, the number of mobile business users is growing rapidly.5

Although data warehouses are becoming increasingly popular as tools for E-business, customer relationship management (CRM), and supply-chain management (SCM), the required distribution of a data Webhouse contrasts with the centralized, multidimensional, traditional data warehouse. In a data Webhousing environment, data is gathered from different nodes spread across the network. The architecture is typically three-tiered, comprising client, Web server, and application server, with the Internet/intranet/extranet as the communication medium between the client and the servers. Three of the biggest challenges to this architecture involve scalability, speed, and security. Although the Internet/intranet/extranet provides ease of access to interested and involved parties, it can be extremely difficult to estimate the number of users who will be accessing the data warehouse concurrently. Unmanaged congestion over the network can lead to slower transmission, lower performance, increased server problems, and lower user satisfaction. Any transmission over a network exposes both the information and the network itself to security risks. One possible solution to these risks is to make decision-making data and tools available to users only over a more secure channel, such as an intranet.

THE BENEFITS OF WAREHOUSING FOR DATA MINING

The technology of data mining is closely related to that of data warehousing. It involves the process of extracting previously unknown information from large amounts of data and then using that information to make important business decisions. The key phrase here is "unknown information," buried in the huge mounds of operational data, which, if analyzed, provides relevant insight to organizational decision makers. Significant data sometimes goes undetected because most data is captured and maintained by a particular department. What may seem irrelevant or uninteresting at the department level may yield insights and indicate patterns important at the organizational level. These patterns include market trends, such as customer buying patterns.
They aid in such areas as determining the effectiveness of sales promotions, detecting fraud, evaluating risk, assessing quality, and analyzing insurance claims. The possibilities are limitless and yield a variety of benefits, ultimately leading to improved customer service and business performance. Data provides no real value to business users if it is located on several different systems, in different formats and structures, and redundantly stored. This is where the data warehouse comes into play as a source of consolidated and cleansed data that facilitates analysis better than do regular flat files or operational databases. Three steps are needed to identify and use hidden information:

1. The captured data must be incorporated into a view of the entire organization, instead of only one department.
2. The data must be analyzed, or mined, for valuable information.
3. The information must be specially organized to simplify decision making.

Data Mining Tasks

In data mining, data warehouses, query generators, and data interpretation systems are combined with discovery-driven systems to provide the ability to automatically reveal important yet hidden data. The following tasks need to be completed to make full use of data mining:

1. Creating prediction and classification models
2. Analyzing links
3. Segmenting databases
4. Detecting deviations
Creating Models. The first task makes use of the data warehouse's contents to automatically generate a model that predicts desired behavior. In comparison to traditional models built with statistical techniques such as linear and logistic regression, discovery-driven models are not only accurate but also more comprehensible, because they are expressed as sets of if-then rules. The performance of a particular stock, for example, can be predicted to assess its suitability for an investment portfolio.

Analyzing Links. The goal of link analysis is to establish relevant connections between database records. An example here is the analysis of items that are usually purchased together, like a washer and dryer. Such analysis can lead to a more effective pricing and selling strategy.

Segmenting Databases. When segmenting databases, collections of records with common characteristics or behaviors are identified. One example is the analysis of sales for a certain time period (such as Presidents' Day or Thanksgiving weekend) to detect patterns in customer purchase behavior. For the reasons discussed previously, this is an ideal task for a data warehouse.

Detecting Deviations. The fourth and final task involves detecting deviations, which is the opposite of database segmentation. Here, the goal is to identify records that vary from the norm, that is, lie outside of any cluster of records with similar characteristics. Each deviation is then either explained as normal or taken as a hint of a previously unknown behavior or attribute.
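Deviation detection as just described can be sketched with a simple statistical criterion. Flagging values that lie more than a chosen number of standard deviations from the mean is one common rule of thumb, not the only one used in practice.

```python
# Deviation detection: flag records that lie far from the norm, using
# distance from the mean in standard deviations as the criterion.

from statistics import mean, stdev

def find_deviations(values, n_sigma=3.0):
    """Return the values lying more than n_sigma std devs from the mean."""
    mu, sigma = mean(values), stdev(values)
    return [v for v in values if abs(v - mu) > n_sigma * sigma]

daily_sales = [100, 104, 98, 101, 99, 103, 97, 500]  # 500 is anomalous
outliers = find_deviations(daily_sales, n_sigma=2.0)
```

Each flagged value would then be examined and explained either as normal (a holiday spike, say) or as a hint of something previously unknown.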
Web-Based Data Mining

Data mining in the Web environment has been termed "Web mining," the application of data mining techniques to Web resources and activities.8 The three categories of Web mining are: (1) content mining, (2) structure mining, and (3) usage mining. Web content mining is an automatic process for extracting online information. Web structure mining uses the analysis of link structures on the Web to identify more preferable documents. Web usage mining records and accumulates information about user interactions with Web sites. This type of information has proven invaluable to firms by allowing the tailoring of content and offerings to best serve customers and maximize potential sales.

Data Mining Techniques

At this point, it is important to present several techniques that aid mining efforts. These techniques include the creation of predictive models, supervised induction, association discovery, and sequence discovery.

Creating Predictive Models. The creation of a so-called predictive model is facilitated through numerous statistical techniques and various forms of visualization that ease the user's recognition of patterns.

Supervised Induction. With supervised induction, classification models are created from a set of records referred to as the training set. This method makes it possible to generalize from the descriptors of the training set. In this way, a rule might be produced stating that a customer who is male, lives in a certain zip code area, earns $25,000 to $30,000 per year, is between 40 and 45 years of age, and listens to the radio more than he watches TV is a possible buyer for a new camcorder. The advantage of this technique is that the patterns can be based on local phenomena, whereas statistical measures check for conditions that are valid for an entire population.

Association Discovery. Association discovery allows for the prediction of the occurrence of some items in a set of records if other items are also present. For example, it is possible to identify the relationship among different medical procedures by analyzing claim forms submitted to an insurance company.
With this information the prediction could be made, within a certain margin of error, that the same five medicines are usually required for treatment.

Sequence Discovery. Sequence discovery aids the data miner by providing information on a customer's behavior over time. If a certain person buys a VCR this week, he or she usually buys videotapes on the next purchasing occasion. The detection of such a pattern is especially important to catalog companies because it helps them better target their potential customer base with specialized advertising catalogs.
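A sequence-discovery count of the VCR-then-videotapes kind can be sketched as follows. The purchase histories are invented, and real tools use far more elaborate pattern mining; the essential idea is simply that order matters.

```python
# Sequence discovery sketch: count how often purchase `second` follows
# purchase `first` for the same customer, in time order.

def follows(histories, first, second):
    """Number of customers whose history shows `second` after `first`."""
    count = 0
    for purchases in histories:            # purchases already time-ordered
        if first in purchases:
            after = purchases[purchases.index(first) + 1:]
            if second in after:
                count += 1
    return count

customer_histories = [
    ["vcr", "videotapes"],
    ["vcr", "tv", "videotapes"],
    ["videotapes", "vcr"],                 # wrong order: not counted
]
n = follows(customer_histories, "vcr", "videotapes")
```

This differs from association discovery, which asks only whether items co-occur; here the third customer's history does not count because the videotapes came first.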
Exhibit 7. Neural Network
Data Mining Tools

The main tools used in data mining are neural networks, decision trees, rule induction, and data visualization.

Neural Networks. A neural network consists of three interconnected layers: an input layer, an output layer, and a hidden layer in between (see Exhibit 7). The hidden processing layer is like the brain of the neural network because it stores, or learns, rules about input patterns and then produces a known set of outputs. Because the processing of a neural network is not transparent, it leaves the user without a clear interpretation of the resulting model, which is nevertheless applied.

Decision Trees. Decision trees divide data into groups based on the values that different variables take on (see Exhibit 8). The result is often a complex hierarchy of classifying data, which enables the user to deduce possible future behavior. For example, it might be deduced that for a person who only uses a credit card occasionally, there is a 20 percent probability that an offer for another credit card would be accepted. Although decision trees are faster than neural networks in many cases, they do have drawbacks. One of these is the handling of data ranges, such as age groups, which can inadvertently hide patterns.
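A single decision-tree split of the kind described can be sketched as follows. The customer records and the splitting variable are invented; real tools search many candidate variables and split recursively to build the full hierarchy.

```python
# One decision-tree split: divide records on a single variable and
# report the acceptance rate in each resulting group.

def split_rate(records, predicate):
    """Acceptance rate among records matching and not matching predicate."""
    groups = {True: [], False: []}
    for rec in records:
        groups[predicate(rec)].append(rec["accepted_offer"])
    return {k: sum(v) / len(v) for k, v in groups.items() if v}

customers = [
    {"uses_card_often": False, "accepted_offer": 1},
    {"uses_card_often": False, "accepted_offer": 0},
    {"uses_card_often": False, "accepted_offer": 0},
    {"uses_card_often": False, "accepted_offer": 0},
    {"uses_card_often": False, "accepted_offer": 0},
    {"uses_card_often": True,  "accepted_offer": 1},
    {"uses_card_often": True,  "accepted_offer": 1},
]
rates = split_rate(customers, lambda r: r["uses_card_often"])
```

The occasional-user group's 20 percent rate mirrors the example in the text; unlike a neural network, the rule behind each group is directly readable.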
Exhibit 8. Decision Tree
Rule Induction. The method of rule induction creates nonhierarchical sets of possibly overlapping conditions. This is accomplished by first generating partial decision trees; statistical techniques are then used to determine which decision trees to apply to the input data. This method is especially useful in cases where there are long and complex condition lists.

Data Visualization. Data visualization is not really a data mining tool. However, because it provides the user with a picture of as many as four graphically represented variables, it is a powerful means of conveying concise information. The graphics products available make the detection of patterns much easier than when raw numbers are analyzed.
Because of the pros and cons of the various data mining tools, software vendors today incorporate some or all of them in their data mining packages. Each tool is essentially a way of looking at data by different means and from different angles. One of the potential problems in data mining is performance. To get faster processing, it might be necessary to subset the data, either by the number of rows accessed or by the number of variables examined. This can lead to slightly different conclusions about a data set. Consequently, in most cases it is better to wait for the correct answer from a large sample.
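The trade-off between subsetting for speed and accuracy can be seen in a toy sampling experiment. The data here is a stand-in for a large table; the sampled estimate is close to, but rarely identical with, the answer over the full data set.

```python
# Subsetting for speed: an estimate computed from a sample can differ
# from the answer computed over the full data set.

import random

population = list(range(1, 1001))          # stand-in for a large table
full_mean = sum(population) / len(population)

random.seed(42)                            # fixed seed for repeatability
sample = random.sample(population, 50)     # subset by number of rows
sample_mean = sum(sample) / len(sample)

error = abs(sample_mean - full_mean)       # small, but rarely zero
```

Scanning 50 rows instead of 1000 is faster, but the estimate carries sampling error, which is exactly why the text advises waiting for the answer from a large sample when the conclusion matters.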
MANAGERIAL AND ORGANIZATIONAL IMPACTS OF DATA WAREHOUSING

Although organizational managers eagerly await the completion of a data warehouse, many issues must be dealt with before the fruits of this new technology are harvested. This is especially true in today's fast-changing enterprise with its quick reaction time. The subject of economic benefit also deserves mention when dealing with data warehousing, because some projects have already acquired the reputation of providing little or no payback on the huge technology investments involved. Data warehouses are sometimes accused of being pits into which data disappears, never to be seen again.

Managers must understand at the outset that the quality of the data is of extreme importance in a data warehousing project. The sometimes-difficult challenge for management is to make data entering the data warehouse consistent. In some organizations, data is stored in flat, VSAM, IMS, IDMS, or SA files and a variety of relational databases. In addition, different systems, designed for different functions, contain the same terms but with different meanings. If care is not taken to clean up this terminology during data warehouse construction, misleading management information results. The logical consequence of this requirement is that management has to agree on the data definition for elements in the warehouse. This is yet another challenging task: people who use the data in the short term and the long term must have input into the process and know what the data means.

The manager in charge of loading the data warehouse has four ways to handle erroneous data:

1. If the data is inaccurate, it can be completely rejected and corrected in the source system.
2. Data can be accepted as is, if it is within a certain tolerance level and if it is marked as such.
3. Data can be captured and corrected before it enters the warehouse. Capture and correction are handled programmatically in the process of transforming data from one system to the data warehouse. An example might be a field that was in lowercase and needs to be stored in uppercase.
4. Erroneous data can be replaced with a default value. If, for example, the date February 29 of a non-leap year is defaulted to February 28, there is no loss in data integrity.

Data warehousing affects management, and organizations in general, in keeping with today's business motto of "working smarter, not harder." Today's data warehouse users can become more productive because they will have the tools to analyze the huge amounts of data that they store, rather than just collect it. Organizations are also affected by the invalid notion that implementing data warehousing technology simply consists of integrating all pertinent existing company data into one place. Managers need to be aware that data warehousing implies changes in the job duties of many people. For example, in an organization implementing a data warehouse, data analysis and modeling become much more prevalent than mere requirements analysis. The database administrator position does not merely involve the critical aspects of efficiently storing data, but takes on a central role in the development of the application. Furthermore, because of its data model-oriented methodology, data warehouse design requires a development life cycle that does not fully follow traditional development approaches: the development of a data warehouse virtually begins with a data model, from which the warehouse is built.

In summary, it must be noted that data warehouses are high-maintenance systems that require their own support staff so that experienced personnel can implement future changes in a timely manner. It is also important to remember that users will probably abandon a technically advanced and fast warehouse if it adds little value from the start — thus reiterating the immense importance of clean data.

One of the most important issues that is often disregarded during the construction and implementation of a data warehouse is data quality. This is not surprising, because in many companies concern for data quality in legacy and transaction systems is not a priority. Accordingly, when it comes to ensuring the quality of data being moved into the warehouse, many companies continue with their old practices. This can turn out to be a costly mistake and has already led to many failures of corporate warehousing projects.
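The four ways of handling erroneous data listed earlier can be sketched as one load-time routine. The field names, tolerance rule, and default value are invented for illustration.

```python
# The four ways of handling erroneous data, as one routine:
# 1) reject, 2) accept within tolerance but flag, 3) capture and
# correct programmatically, 4) substitute a default value.

def handle_record(rec):
    """Classify and transform one incoming record for the warehouse load."""
    out = dict(rec)
    if out.get("amount") is None:              # 1. reject: fix at the source
        return ("reject", None)
    if out["amount"] < 0:                      # 2. accept within tolerance,
        out["flag"] = "suspect"                #    but mark it as such
        return ("accept_flagged", out)
    out["region"] = out.get("region", "").upper()  # 3. capture and correct
    if not out.get("currency"):                # 4. substitute a default
        out["currency"] = "USD"
    return ("accept", out)

status, row = handle_record({"amount": 120, "region": "east", "currency": ""})
```

In practice these decisions are made per field and agreed upon with the data's eventual users, since a silently applied default or correction is itself a data-quality risk.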
As more and more companies make use of these strategic database systems, data quality must become the number-one priority of all parties involved in the data warehousing effort. Unreliable and inaccurate data in the data warehouse causes numerous problems. First and foremost, the confidence of users in the technology is shattered, which deepens the already existing rift between business and IT. Furthermore, if the data is used for strategic decision making, unreliable data hurts not only the IT department but the entire company. One example is the banks that held erroneous risk exposure data on Texas-based businesses. When the oil market slumped in the early 1980s, those banks with many Texas accounts suffered major losses. In another case, a manufacturing firm scaled down its operations and took action to rid itself of excess inventory. Because of inaccurate data, it had overestimated the inventory requirements and was forced to sell off critical business equipment.

DESIGNING AND OPERATING AN ENTERPRISE INFRASTRUCTURE

Such examples demonstrate the need for, and importance of, data quality. Poor-quality data appears to be the norm rather than the exception, which suggests that many technology managers have largely ignored the issue of quality. This stems, in part, from a failure to recognize the need to manage data as a corporate asset. One cannot simply allow just anything to be moved into a data warehouse, or it will become useless and might be likened to a "data garbage dump." To avoid data inaccuracies and the disasters they can harbor, general data quality awareness must be emphasized. There are critical success factors that each company needs to identify before moving forward on data quality:

• Senior management must commit to maintaining the quality of corporate data. This can be achieved by instituting a data administration department that oversees the management of the corporate data resource. This department will also establish data management standards, policies, procedures, and guidelines pertaining to data and data quality.
• Data quality must be defined. For data to be useful, it must be complete, timely, accurate, valid, and consistent. Data quality does not simply consist of data "scrubbing" or auditing to measure its usefulness. The definition of data quality also includes the degree of quality that is required for each element being loaded into the data warehouse. If, for example, customer addresses are stored, it might be acceptable that the four-digit extension to the zip code is missing; the street address, city, and state, however, are of much higher importance. Again, this must be identified by each individual company and for each item that is used in the data warehouse.
• The quality assurance of data must be considered.
Because data is moved from transactional/legacy systems to the data warehouse, the accuracy of this data needs to be verified and corrected if necessary. This might be the largest task because it involves the cleansing of existing data. No company is able to rectify all of its unclean data, so procedures have to be put in place to ensure data quality at the source. Such a task can only be achieved by modifying business processes and designing data quality into the system. By identifying every data item and its usefulness to the ultimate users of that data, data quality requirements can be established. One might argue that this is too costly, but keep in mind that increasing the quality of data after the fact is five to ten times more expensive than capturing it correctly at the source. If companies want to use data warehousing as a competitive advantage and reap its benefits, data quality must become one of the most important
issues. Only when data quality is recognized as a corporate asset, and treated as such by every member of the organization, will the promised benefits of a data warehouse initiative be realized.

CONCLUSION

The value of warehousing to an organization is multidimensional. An enterprise-wide data warehouse serves as a central repository for all data names used in an organization and therefore simplifies business relationships among departments through one standard. Users of the data warehouse get consistent results when querying this database and understand the data in the same way, without ambiguity. By its nature, the data warehouse also allows quicker access to summarized data about products, customers, and other business items of interest. In addition, the historical aspect of such a database (i.e., information kept for five to ten years) allows users to detect and analyze patterns in those business items. Organizations beginning to build a data warehouse should not undertake the task lightly. It does not simply involve moving data from the operational database to the data warehouse, but rather cleansing the data to improve its future usefulness. It is also important to distinguish among the different types of warehouse technologies (i.e., relational OLAP, multidimensional OLAP, and virtual data warehouses) and understand their fundamental differences. Other issues that need to be addressed and resolved range from creating a team dedicated to the design, implementation, and maintenance of a data warehouse, to the need for top-level support from the outset and management education on the concepts and benefits of corporate data sharing. A further benefit of data warehousing results from the ability to mine the data using a variety of tools.
Data mining aids corporate analysts in detecting customer behavior patterns, finding fraud within the organization, developing marketing strategies, and detecting inefficiencies in internal business processes. Because the subject of data warehousing is immensely complex, outside assistance is often beneficial. It provides organizational members with training in the technology and exposure, both theoretical and hands-on, that enables them to continue with later phases of the project. The data warehouse is, without doubt, one of the most exciting technologies of our time. Organizations that make use of it increase their chances of improving customer service and developing more effective marketing strategies.
References

1. Chordas, L., "Building a Better Warehouse," Best's Review, 101, 117, 2001.
2. Inmon, W.H., Building the Data Warehouse, John Wiley & Sons, New York, 1993.
3. Marakas, G.M., Decision Support Systems in the 21st Century, 2nd ed., Prentice Hall, Upper Saddle River, NJ, 2003, 319.
4. Peiris, C., "Is It Data Webhouse or Warehouse?," accessed at http://www.chrispeiris.com, September 2002.
5. Chen, L. and Frolick, M., "Web-Based Data Warehousing: Fundamentals, Challenges, and Solution," Information Systems Management, 17, 80, 2000.
6. Zhu, T., "Web Mining," University of Alberta, Edmonton, accessed at http://www.cs.ualberta.ca, September 2002.
Chapter 24
Data Marts: Plan Big, Build Small

John van den Hoven
In today's global economy, enterprises are challenged to do more with less in order to compete successfully with a host of competitors: big and small, new and old, domestic and international. With fewer people and fewer financial resources with which to operate, and ever-growing volumes of data, enterprises need to better manage and leverage their information resources to operate more efficiently and effectively. This requires improved access to timely, accurate, and consistent data that can be easily shared with other team members, decision makers, and business partners. Data warehousing is widely acknowledged as the most effective way to provide this business decision support data. Under this concept, data is copied from operational systems and external information providers, then conditioned, integrated, and transformed into a read-only database that is optimized for direct access by the decision maker. The term "data warehousing" is particularly apt in that it describes data as an enterprise asset that must be identified, cataloged, and stored with discipline, structure, and organization to ensure that the user will always be able to find the correct information when it is needed. Data warehousing is a popular topic in information technology and business journals and at computer conferences. Like many areas of information technology, data warehousing has attracted advocates who peddle it as a panacea for a wide range of problems. In reality, data warehousing is simply a natural evolution of decision support technology. Although the concept is not new, only recently have the techniques, methodologies, software tools, database management systems, disk storage, networks, and processor capacity all advanced to the point where it has become possible to deliver an effective working product.
DATA MARTS AND DATA WAREHOUSES

The term "data warehousing" can be applied to a broad range of approaches for providing improved access to business decision support

0-8493-1595-6/03/$0.00+$1.50 © 2003 by CRC Press LLC
data. These approaches range from the simple to the more complex, with many variations in between. There are, however, two major approaches that differ greatly in scale and complexity: the data mart and the data warehouse. One approach to data warehousing is the data mart. A data mart is a subject-oriented or department-oriented data warehouse: a scaled-down version of a data warehouse that focuses on the local needs of a specific department such as finance or sales. Because it is subject- or department-oriented, a data mart contains a subset of the data that would be in an enterprise's data warehouse, and an enterprise may have many data marts, each focused on a subset of the enterprise. A data warehouse is an orderly and accessible repository of known facts or things from many subject areas, used as a basis for decision making. In contrast to the data mart approach, the data warehouse is generally enterprisewide in scope. Its goal is to provide a single, integrated view of the enterprise's data, spanning all the enterprise's activities. The data warehouse consolidates the various data marts and reconciles the various departmental perspectives into a single enterprise perspective. There are advantages and disadvantages associated with both approaches. The two differ in the effort required to implement them, in their approaches to data, in supporting technology, and in the way the business and its users utilize these systems (see Exhibit 1 for more details). The effort required to implement a data mart is considerably less than that required for a data warehouse. This is generally the case because the scope of a data mart is a single subject area encompassing the applications in a business area, versus the multiple subject areas of the data warehouse, which can cover all major applications in the enterprise.
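The subset relationship between a warehouse and a mart can be sketched in a few lines. The rows and the "sales" subject area below are invented purely for illustration; a real mart would of course be a managed database, not an in-memory list.

```python
# Illustrative only: a data mart built as a subject-oriented subset of
# warehouse rows. The sample rows and the "sales" subject area are made up.

warehouse_rows = [
    {"subject": "sales",   "region": "West", "amount": 1200},
    {"subject": "sales",   "region": "East", "amount": 800},
    {"subject": "finance", "region": "West", "amount": 50},
]

def build_data_mart(rows, subject):
    """Extract the subset of warehouse data relevant to one subject area."""
    return [row for row in rows if row["subject"] == subject]

sales_mart = build_data_mart(warehouse_rows, "sales")
```

Each mart sees only its own subject area, which is precisely why an enterprise may end up with many marts, one per subset of the business.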
As a result of its reduced scope, a data mart typically requires an order of magnitude less effort than a data warehouse, and it can be built in months rather than years. Therefore, a data mart generally costs considerably less than a data warehouse — tens or hundreds of thousands of dollars versus the millions of dollars necessary for a data warehouse. The effort is much less because a data mart generally covers fewer subject areas, has fewer users, and requires less data transformation, thus resulting in reduced complexity. In contrast, a data warehouse is cross-functional, covering multiple subject areas, has more users, and is a more complex undertaking because conflicting business requirements and perspectives must be reconciled to establish a centralized structured view for all the data in the enterprise.
Exhibit 1. Contrasts between a Data Mart and a Data Warehouse

                                 Data Mart                                   Data Warehouse
Effort
  Scope                          A subject area                              Many subject areas
  Time to build                  Months                                      Years
  Cost to build                  Tens of thousands to hundreds of            Millions of dollars
                                   thousands of dollars
  Complexity to build            Low to medium                               High
Data
  Requirement for sharing        Shared (within business area)               Common (across enterprise)
  Sources                        Few operational and external systems        Multiple operational and external systems
  Size                           Megabytes to low gigabytes                  Gigabytes to terabytes
  Time horizon                   Near-current and historical data            Historical data
  Amount of data transformations Low to medium                               High
  Frequency of update            Hourly, daily, weekly                       Weekly, monthly
Technology
  Hardware                       Workstations and departmental servers       Enterprise servers and mainframe computers
  Operating system               Windows and Linux                           UNIX, z/OS, OS/390
  Database                       Workgroup or standard database servers      Enterprise database servers
Usage
  Number of concurrent users     Tens                                        Hundreds
  Type of users                  Business area analysts and managers         Enterprise analysts and senior executives
  Business focus                 Optimizing activities within the            Cross-functional optimization and
                                   business area                               decision making
From a data perspective, a data mart has reduced requirements for data sharing because of its limited scope compared to a data warehouse. It is simpler to provide shared data for a data mart because it is only necessary to establish shared data definitions for the business area or department. In contrast, a data warehouse requires common data, which necessitates establishing identical data definitions across the enterprise — a much more complex and difficult undertaking. It is also often easier to provide timely data updates to a data mart than a data warehouse because it is smaller (megabytes to low gigabytes for a data mart versus gigabytes to terabytes for a data warehouse), requires less complex data transformations, and the enterprise does not have to synchronize
data updates from multiple operational systems. Therefore, it is easier to maintain data consistency within a single data mart but difficult to maintain consistency across the various data marts within an enterprise. The smaller size of the data mart enables more frequent updates (daily or weekly or, in some cases, hourly or near-real-time) than is generally feasible for a data warehouse (weekly or monthly). This enables a data mart to contain near-current data in addition to the historical data that is normally contained in a data warehouse. From a supporting technology perspective, a data mart can often use existing technology infrastructure or lower-cost technology components, thus reducing the cost and complexity of the data warehousing solution. The computing platform and the database management system are two key components of this infrastructure. Data warehousing capabilities are increasingly becoming part of the core database management systems from Microsoft Corporation, Oracle Corporation, and IBM. In terms of the computing platform, a data mart often resides on Intel-based computers running Windows or Linux. In contrast, a data warehouse often resides on a RISC-based (Reduced Instruction Set Computer) machine running the UNIX operating system, or on a mainframe computer running z/OS or OS/390, in order to support larger data volumes and larger numbers of business users. In terms of the database management system, a data mart can often be deployed using a lower-cost workgroup or standard relational database management system. Microsoft SQL Server 2000 is the leading platform for data mart deployment, with competition coming from Oracle Database Standard Edition and IBM DB2 Universal Database Workgroup Edition. In contrast, a data warehouse often requires a more expensive and more powerful database server.
Oracle Database Enterprise Edition and IBM DB2 Universal Database Enterprise Edition are the leading platforms for data warehouses, with Microsoft SQL Server 2000 Enterprise Edition emerging as a challenger. In addition to different supporting technologies, the way in which the business and its users utilize these data warehousing solutions also differs. There are fewer concurrent users of a data mart than of a data warehouse. These users are often functional managers, such as sales or financial executives, who are focused on optimizing the activities within their specific department or business area. In contrast, the users of a data warehouse are often analysts or senior executives making decisions that are cross-functional and require input from multiple areas of the business.
Thus, the data mart is often used for more operational or tactical decision making, while the data warehouse is used for strategic decision making and some tactical decision making. A data mart is therefore a more short-term, timely data delivery mechanism, while a data warehouse is a longer-term, reliable history or archive of enterprise data.

PLAN BIG, START SMALL

There is no one-size-fits-all strategy. An enterprise's data warehousing strategy can progress from a simple data mart to a complex data warehouse in response to user demands, the enterprise's business requirements, and the enterprise's maturity in managing its data resource. An enterprise can also derive a hybrid strategy that combines one or more of these base strategies to best fit its current applications, data, and technology architectures. The right approach is the data warehouse strategy that is appropriate to the business need and the perceived benefits. For many enterprises, a data mart is a practical first step: it builds experience in constructing and managing a data warehouse while introducing business users to the benefits of improved access to their data and demonstrating the business value of data warehousing. However, these data marts often grow rapidly to hundreds of users and hundreds of gigabytes of data derived from many different operational systems. Planning for eventual growth should therefore be an essential part of any data mart project. A balance is required between starting small, to get the data mart up and running quickly, and planning for the bigger data mart or data warehouse that will likely be required over time. Hence the advice to "plan big and start small": implement a data mart within the context of an overall architecture for data, technology, and applications that allows the data mart to support more data, more users, and more sophisticated and demanding uses over time.
Technology advances such as Internet/intranet technology, portals, prepackaged analytical applications, improved data warehouse management tools, and virtual data warehousing architectures are making this increasingly practical. Otherwise, the enterprise will end up implementing a series of independent and isolated data marts that recreate the jumble of systems and "functional silos" that data warehousing was meant to remedy in the first place.

CONCLUSION

The enterprise data warehouse is the ideal because it provides a consistent and comprehensive view of the enterprise, with business users sharing common terminology and data throughout the enterprise. However, it remains an elusive goal for most enterprises because it is very difficult and
costly to achieve with today's technology and today's rapidly changing business environment. A more cost-effective option for many enterprises is the data mart. It is a more manageable data warehousing project that can focus on delivering value to a specific business area. Thus, it can provide many of the desired decision support capabilities without incurring the cost and complexity associated with a centralized enterprise data warehouse. With proper planning, these data marts can be gradually consolidated under a common management umbrella to create an enterprise data warehouse as it makes business sense and as the technology evolves to better support this architecture.
Chapter 25
Data Mining: Exploring the Corporate Asset

Jason Weir
Data mining, as a methodology, is a set of techniques used to uncover previously obscure or unknown patterns and relationships in very large databases. The ultimate goal is to arrive at comprehensible, meaningful results from an extensive analysis of information. For companies with very large and complex databases, discovery-based data mining approaches must be implemented in order to realize the complete value that the data offers.

Companies today generate and collect vast amounts of data through the ongoing process of doing business. Web-based commerce and electronic business solutions have greatly increased the amount of data available for further processing and analysis. Transaction data, such as that produced by inventory, billing, shipping and receiving, and sales systems, is stored in organizational or departmental databases. It is understood that data represents a significant competitive advantage, but realizing its full potential is not simple. Decision makers must be able to interpret trends, identify factors, or utilize information based on clear, timely data in a meaningful format. For example, a marketing director should be able to identify a group of customers, 18 to 24 years of age, who own notebook computers and are likely to purchase an upcoming collaboration software product. After identifying those people, the director sends them advance offers, information, or product order forms to increase product pre-sales. Data mining, as a methodology, is a set of techniques used to uncover previously obscure or unknown patterns and relationships in very large databases. The ultimate goal is to arrive at comprehensible, meaningful results from extensive analysis of information.
HOW IS DATA MINING DIFFERENT FROM OTHER ANALYSIS METHODS?

Data mining differs from other analysis methods in several ways. A significant distinction between data mining and other analytical tools lies in the approaches used in exploring the data. Many of the available analytical tools support a verification-based approach, in which the user hypothesizes about specific data relationships and then uses the tools to verify or refute those presumptions. This verification-based process relies on the intuition of the user to pose the questions and refine the analysis based on the results of potentially complex queries against a database. The effectiveness of this analysis depends on several factors, the most important of which are the ability of the user to pose appropriate questions, the capability of the tools to return results quickly, and the overall reliability and accuracy of the data being analyzed. Other available analytical tools have been optimized to address some of these issues. Query and reporting tools, such as those used in data mart or warehouse applications, let users develop queries through point-and-click interfaces. Statistical analysis packages, like those used by many insurance and actuarial firms, provide the ability to explore relationships among a few variables and determine statistical significance against demographic sets. Multidimensional online analytical processing (OLAP) tools enable a fast response to user inquiries through their ability to compute hierarchies of variables along dimensions such as size, color, or location. Data mining, in contrast to these analytical tools, uses what are called discovery-based approaches, in which pattern matching and other algorithms are employed to determine the key relationships in the data. Data mining algorithms can examine numerous multidimensional data relationships concurrently, highlighting those that are dominant or exceptional.
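The verification/discovery contrast can be caricatured in a few lines of code. The customer records and attributes below are fabricated, and the one-attribute ranking loop is a deliberate oversimplification of what discovery-based tools actually do:

```python
# Caricature of the two approaches on fabricated records. Verification
# tests a hypothesis the user supplies; discovery ranks every candidate
# attribute by itself and reports the strongest one.

customers = [
    {"notebook": True,  "student": True,  "bought": True},
    {"notebook": True,  "student": False, "bought": True},
    {"notebook": False, "student": True,  "bought": False},
    {"notebook": True,  "student": False, "bought": False},
]

def verify(records, hypothesis):
    """Verification-based: purchase rate among records matching the user's hypothesis."""
    matching = [r for r in records if hypothesis(r)]
    return sum(r["bought"] for r in matching) / len(matching)

def discover(records):
    """Discovery-based: surface the attribute with the highest purchase rate, unprompted."""
    def buy_rate(attr):
        rows = [r for r in records if r[attr]]
        return sum(r["bought"] for r in rows) / len(rows)
    return max((k for k in records[0] if k != "bought"), key=buy_rate)

rate = verify(customers, lambda r: r["notebook"])  # the user poses the question
strongest = discover(customers)                    # the data poses it instead
```

The essential difference survives the simplification: `verify` can only answer the question it was asked, while `discover` returns a relationship no one asked about.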
That is, true data mining tools uncover trends, patterns, and relationships automatically. As mentioned, many other types of analytical methods rely on user intuition or on the ability to pose the "right kind" of question. In summary, analytical tools — query tools, statistical tools, and OLAP — and the results they produce are all user driven, whereas data mining is data driven.

THE NEED FOR DATA MINING

As discussed, traditional methods involve the decision maker hypothesizing the existence of information of interest, converting that hypothesis to a query, posing that query to the analysis tool, and interpreting the returned results with respect to the decision being made. For example, the marketing director must hypothesize that notebook-owning, 18- to 24-year-old customers are likely to purchase the upcoming software release. After posing the query, it is up to the individual to interpret the returned results and determine whether the list represents a good group of product prospects. The
quality of the extracted information is based on the user's interpretation of the posed query's results. The intricacies of data interrelationships — as well as the sheer size and complexity of modern data stores — necessitate more advanced analysis capabilities than those provided by verification-based approaches. The ability to automatically discover important information hidden in the data, and then present it in the appropriate way, is a critical complementary technology to verification-based approaches. Tools, techniques, and systems that perform these automated analysis tasks are referred to as "discovery based." Discovery-based systems applied to the data available to the marketing director may identify many groups, including, for example, 18- to 24-year-old male college students with laptops, 24- to 30-year-old female software engineers with both desktop and notebook systems, and 18- to 24-year-old customers planning to purchase portable computers within the next six months. By recognizing the marketing director's goal, the discovery-based system can identify the software engineers as the key target group by spending pattern or other variables. In sum, verification-based approaches, although valuable for quick, high-level decision support such as historical queries about product sales by fiscal quarter, are insufficient on their own. For companies with very large and complex databases, discovery-based data mining approaches must be implemented in order to realize the complete value that the data offers.

THE PROCESS OF MINING DATA

Selection and Extraction

Constructing an appropriate database to run queries against is a critical step in the data mining process. A marketing database may contain extensive tables of data, ranging from purchasing records and lifestyle data to more advanced demographic information such as census records.
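A hedged sketch of this selection step on a synthetic source table: project out only the fields of interest and, since a large table need not be mined in full, draw a random sample of the rows. The table contents and field names are invented.

```python
# Sketch of selection and extraction on a synthetic source table: keep only
# the columns of interest, then sample the rows so the mining run is cheaper.
import random

def select_fields(rows, fields):
    """Project each row down to the columns the mining exercise needs."""
    return [{f: row[f] for f in fields} for row in rows]

def sample_rows(rows, fraction, seed=0):
    """Draw a reproducible random sample instead of mining the full table."""
    k = max(1, int(len(rows) * fraction))
    return random.Random(seed).sample(rows, k)

source = [{"id": i, "age": 20 + i % 30, "zip": "75201", "notes": "..."}
          for i in range(10_000)]
mining_table = sample_rows(select_fields(source, ("id", "age")), 0.05)
```

Fixing the random seed makes the sample reproducible, which matters when a model built on the sample later needs to be audited or rebuilt.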
Not all of this data is required on a regular basis and thus should be filtered out of the query tables. Additionally, even after selecting the desired database tables, it is not always necessary to mine the contents of the entire table to identify useful information; under certain conditions, and for certain types of data mining techniques, a subset will do. For example, when creating a classification or prediction model, it may be adequate to first sample the table and then mine the sample. This is usually a faster and less expensive operation. Essentially, potential sources of data (e.g., census data, sales records, mailing lists, and demographic databases) should be explored before meaningful analysis can take place. The selected data types can be organized along multiple tables. Developing a sound model involves combining parts of separate tables into a single database for mining purposes.

Data Cleansing and Transformation

Once the database tables have been selected and the data to be mined has been identified, it is usually necessary to perform certain transformations and cleansing routines on the data. The cleansing and transformations required are determined by the type of data being mined as well as by the data mining technique being used. Transformations vary from conversions of one type of data to another (such as numeric data to character data, or currency conversions) to more advanced transformations (such as the application of mathematical or logical functions on certain types of data). Cleansing, on the other hand, is used to ensure the reliability and accuracy of results. Data can be verified, or cleansed, in order to remove duplicate entries, attach real values to numeric or alphanumeric codes, and omit incomplete records. "Dirty" (or inaccurate) data in the mining data store must be avoided if results are to be accurate and useful. Many data mining tools include a system log or some other graphical interface tool to identify erroneous data in queries; however, every effort should be made prior to this stage to ensure that incorrect data is not included in the mining database. If errors are not discovered, the consequence is lower-quality results and, in turn, lower-quality decisions.

Mining, Analysis, and Interpretation

The clean and transformed data is subsequently mined using one or more techniques to extract the desired type of information. For example, to develop an accurate classification model that predicts whether or not a customer will upgrade to a new version of a software package, a decision maker must first use clustering to segment the customer database.
Next, rules are applied to automatically create a classification model for each desired cluster. While mining a particular dataset, it may be necessary to access additional data from a data mart or warehouse and perform additional transformations of the original data. (The terms and methods mentioned above are defined and discussed later in this chapter.) The final step in the data mining process is analyzing and interpreting the results. The extracted and transformed data is analyzed with respect to the user's goal, and the best information is identified and presented to the decision maker through the decision support system. The purpose of result interpretation is not only to graphically represent the output of the data mining operation, but also to filter the information that will be presented through the decision support system. For example, if the goal is to develop a classification model, then during the result interpretation step the robustness of the extracted model is tested using one of the established
methods. If the interpreted results are not satisfactory, it may be necessary to repeat the data mining step or other earlier steps; this really speaks to the quality of the data. The information extracted through data mining must ultimately be comprehensible. For example, it may be necessary, after interpreting the results of a data mining operation, to go back and add data to the selection process or to perform a different calculation during the transformation step.

TECHNIQUES

Classification

Classification is perhaps the most often employed data mining technique. It uses a set of instances, or predefined examples, to develop a model that can classify the population of records at large. The use of classification algorithms begins with a sample set of preclassified example transactions. For a fraud detection application, this would include complete records of both fraudulent and valid transactions, determined on a record-by-record basis. The classifier-training algorithm uses these preclassified examples to determine the set of parameters required for proper identification. The algorithm then encodes these parameters into a model called a classifier, or classification model. The quality of this model directly affects the decision-making capability of the system. Once an effective classifier is developed, it is used in a predictive mode to classify new records automatically into the same predefined classes. In the fraud detection case cited above, the classifier would be able to identify probable fraudulent activities. Another example is a financial application in which a classifier capable of identifying risky loans could be used to aid in the decision of whether or not to grant a loan to an individual.
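As a toy illustration of training a classifier from preclassified examples: the features, the numbers, and the nearest-centroid rule below are all invented simplifications, standing in for the far richer algorithms a commercial tool would use.

```python
# Toy classifier trained from preclassified fraud/valid examples. Each
# example is ((amount, transactions_per_day), label); the nearest-centroid
# rule is a stand-in for real classification algorithms.

def train(examples):
    """Compute one centroid (mean feature vector) per class label."""
    sums, counts = {}, {}
    for features, label in examples:
        acc = sums.setdefault(label, [0.0] * len(features))
        for i, x in enumerate(features):
            acc[i] += x
        counts[label] = counts.get(label, 0) + 1
    return {label: [x / counts[label] for x in acc]
            for label, acc in sums.items()}

def classify(model, features):
    """Predict the class whose centroid is nearest (squared distance)."""
    def sq_dist(label):
        return sum((a - b) ** 2 for a, b in zip(features, model[label]))
    return min(model, key=sq_dist)

examples = [((900.0, 14.0), "fraud"), ((850.0, 11.0), "fraud"),
            ((40.0, 1.0), "valid"), ((25.0, 2.0), "valid")]
model = train(examples)
verdict = classify(model, (820.0, 12.0))  # a new, unlabeled transaction
```

The shape mirrors the text: training encodes the preclassified examples into a model, and the model then classifies new records into the same predefined classes on its own.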
Association

Given a collection of items and a set of transactions, each of which contains some number of items from the collection, an association is an operation against this set of records that returns the affinities existing among the collection of items. "Market basket" analysis is a common application that utilizes association techniques: a retailer runs an association function over the point-of-sale transaction log to determine affinities among shoppers. For example, in an analysis of 100,000 transactions, association techniques could determine that "20 percent of the time, customers who buy a particular software application also purchase the complementary add-on software pack." In other words, associations are items that occur together in a given event or transaction, and association tools discover rules.
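A bare-bones sketch of the association operation on a handful of invented baskets, reporting the confidence of each "A implies B" rule; real association tools add support thresholds and handle multi-item rule bodies, which this deliberately omits.

```python
# Bare-bones association discovery over invented market baskets: for every
# ordered pair (A, B), confidence = share of baskets containing A that also
# contain B; rules below the threshold are dropped.
from itertools import permutations

baskets = [
    {"app", "addon"}, {"app", "addon"}, {"app"},
    {"app", "addon", "mouse"}, {"addon"},
]

def association_rules(transactions, min_confidence=0.5):
    """Return {(A, B): confidence} for rules meeting the threshold."""
    items = set().union(*transactions)
    rules = {}
    for a, b in permutations(items, 2):
        with_a = [t for t in transactions if a in t]
        confidence = sum(1 for t in with_a if b in t) / len(with_a)
        if confidence >= min_confidence:
            rules[(a, b)] = confidence
    return rules

rules = association_rules(baskets)
```

Note that the rules are directional: "app implies addon" and "addon implies app" are counted against different denominators, which is why the 20-percent figure in the text is tied to one direction of the rule.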
DESIGNING AND OPERATING AN ENTERPRISE INFRASTRUCTURE
Another example of the use of association discovery could be illustrated in an application that analyzes the claim forms submitted by patients to a medical insurance company. The goal is to discover patterns among the claimants’ treatment. Assume that every claim form contains a set of medical procedures that were performed on the given patient during one visit. By defining the set of items to be the collection of all medical procedures that can be performed on a patient and the records that correspond to each claim form, the application can find, using the association technique, relationships among medical procedures that are often performed together. Sequence-Based Traditional “market basket” analysis deals with a collection of items as a part of a point-in-time transaction. A variant of this occurs when there is additional information to tie together a sequence of purchases. An account number, a credit card, or a frequent shopper number are all examples of ways to track multiple purchases in a time series. Rules that capture these relationships can be used, for example, to identify a typical set of precursor purchases that might predict the subsequent purchase of a specific item. In our software case, sequence-based mining could determine the likelihood that a customer who purchases a particular software product will subsequently purchase complementary software or a hardware device (such as a joystick or a video card). Sequence-based mining can be used to detect the set of customers associated with frequent buying patterns. Use of sequence-based mining on the set of insurance claims previously discussed can lead to the identification of frequently occurring medical procedures performed on patients. This can then be harnessed in a fraud detection application to detect cases of medical insurance fraud. Clustering Clustering segments a database into different groups. 
The goal is to find groups that differ from one another as well as the similarities among members. The clustering approach assigns records with a large number of attributes into a relatively small set of groups, or “segments.” This assignment process is performed automatically by clustering algorithms that identify the distinguishing characteristics of the dataset and then partition the space defined by the dataset attributes along natural “boundaries.” There is no need to identify the groupings desired or the attributes that should be used to segment the dataset. Clustering is often one of the first steps in data mining analysis. It identifies groups of related records that can be used as a starting point for
exploring further relationships. This technique supports the development of population segmentation models, such as demographic-based customer segments. Additional analyses using standard analytical and other data mining techniques can determine the characteristics of these segments with respect to some desired outcome. For example, the buying habits of multiple population segments might be compared to determine which segments to target for a new marketing campaign. Estimation Estimation is a variation of the classification technique. Essentially, it involves the generation of scores along various dimensions in the data. For example, rather than employing a binary classifier to determine whether a loan applicant is approved or classified as a risk, the estimation approach generates a credit-worthiness “score” based on a pre-scored sample set of transactions. That is, sample data (complete records of approved and risky applicants) is used to determine the worthiness of all records in a dataset. APPLICATIONS OF DATA MINING Data mining is now being applied in a variety of industries, ranging from investment management and retail solutions to marketing, manufacturing, and healthcare applications. It has been pointed out that many organizations, due to the strategic nature of their data mining operations, will not even discuss their projects with outsiders. This is understandable, due to the importance and potential that successful solutions offer organizations. However, there are several well-known applications that are proven performers, including customer profiling, market basket analysis, and fraud analysis. In customer profiling, characteristics of good customers are identified with the goals of predicting who will become one, and helping marketing departments target new prospects. 
Data mining can find patterns in a customer database that can be applied to a prospect database so that customer acquisition can be appropriately targeted. For example, by identifying good candidates for mail offers or catalogs, direct-mail marketing managers can reduce expenses and increase their sales generation efforts. Targeting specific promotions to existing and potential customers offers similar benefits. Market-basket analysis helps retailers understand which products are purchased together or by an individual over time. With data mining, retailers can determine which products to stock in which stores, as well as how to place them within a store. Data mining can also help assess the effectiveness of promotions and coupons.
And finally, fraud detection is of great benefit to credit card companies, insurance firms, stock exchanges, government agencies, and telecommunications firms. The aggregate total for fraud losses in today’s world is enormous; but with data mining, these companies can identify potentially fraudulent transactions and contain damage. Financial companies use data mining to determine market and industry characteristics as well as predict individual company and stock performance. Another interesting niche application is in the medical field. Data mining can help predict the effectiveness of surgical procedures, diagnostic tests, medication, and other services. SUMMARY More and more companies are beginning to realize the potential for data mining within their organizations. However, unlike the “plug-and-play,” out-of-the-box business solutions that many have become accustomed to, data mining is not a simple application. It involves a great deal of forethought, planning, research, and testing to ensure a sound, reliable, and beneficial project. It is also important to remember that data mining is complementary to traditional query and analysis tools, data warehousing, and data mart applications. It does not replace these useful and often vital solutions. Data mining enables organizations to take full advantage of the investment they have made and are currently making in building data stores. By identifying valid, previously unknown information from large databases, decision makers can tap into the unique opportunities that data mining offers.
Chapter 26
Data Conversion Fundamentals Michael Zimmer
When systems developers build information systems, they usually do not start with a clean slate. Often, they are replacing an existing application. They must always determine if the existing information should be preserved. Usually, the older information is transferred to the new system — a process formerly known as data conversion, but now more often called “extract, transform, and load” (ETL). This ETL process may be a one-time transfer of data from an old system to a new system, or part of an ongoing process such as is found in data warehouse and data mart applications. In fact, any time that data interoperability is an issue, similar considerations apply. Even business-to-business (B2B) and electronic data interchange (EDI) have some similar characteristics, particularly with regard to the issue of definition of common semantics, and applying business rules against the data to ensure quality. ETL can involve moving data from flat file systems to relational database management systems (RDBMS). It could also be related to changing from systems with loose constraints to new systems with tight constraints. Over the past decade or so, various tools for ETL have appeared as data warehousing has exploded. In addition, newer technologies such as intranets, XML, XSLT, and related standards have proven to be useful. This chapter focuses on laying the groundwork for successfully executing a data conversion effort the first time around. It is assumed in this chapter that the principles of conceptual data modeling are followed. For expository purposes, it is assumed that relational database technology is employed but, in fact, the methods are essentially independent of technology. 
At the logical level, the terms “entity set,” “entity,” and “attribute” are used in place of the terms “file,” “record,” and “field.” At the physical level, the terms “table,” “row,” and “column” are used instead of “file,” “record,” and “field.” The members of IS (information systems) engaged in the data conversion effort are referred to as the data conversion team (DCT). 0-8493-1595-6/03/$0.00+$1.50 © 2003 by CRC Press LLC
COMMON PROBLEMS WITH DATA The difficulties of a data conversion effort are almost always underestimated. The conversion usually costs many times more than originally anticipated. This is invariably the result of an inadequate understanding of the cost and effort required to correct errors in the data. The quality of the existing data is typically much worse than the users and development team anticipate. Data may violate key business rules and be incomplete. Problems with data can result from missing information and mismatches between the old model (which is often only implicit) and the new model (which is usually explicitly documented). Problems also result if the conversion effort is started too late in the project and is under-resourced. Costs and Benefits of Data Conversion Before embarking on data conversion, the DCT should decide whether data really needs to be converted and if it is feasible to abandon the noncurrent data. In some situations, starting fresh is an option. The customers may decide that the costs of preserving and correcting old information exceed the benefits expected. Often, they will want to preserve old information but may not have the resources to correct historical errors. Of course, with a data warehouse project, it is a given that the data will be converted. Preservation of old information is critical. The Cost of Not Converting. The DCT should first demonstrate the cost of permitting erroneous information into the new database. It is a decision to be made by user management. In the long run, permitting erroneous data into the new application will usually be costly. The data conversion team should explain what the risks are in order to justify the costs for robust programming and data error correction. The Costs of Converting. It is no easier to estimate the cost of a conversion effort than to estimate the cost of any other development effort. 
The special considerations are that a great deal of manual intervention and, subsequently, extra programming may be necessary to remedy data errors. A simple copy procedure usually does not serve the organization’s needs. If the early exploration of data quality, or the robust design and programming of the conversion routines, is skimped on, the IS group will generally pay for it.
STEPS IN THE DATA CONVERSION PROCESS In even the simplest information technology (IT) systems development projects, the efforts of many players must come together. At the managerial and employee levels, certain users should be involved, in addition to
the applications development group, data administration, database administration, computer operations, and quality assurance. The responsibilities of the various groups must be clearly defined. In the simplest terms, data conversion involves the following steps:
• Determine if conversion is required.
• Plan the conversion.
• Determine the conversion rules.
• Identify problems.
• Write down the requirements.
• Correct the data.
• Program the conversion.
• Run the conversion.
• Check the audit reports.
• Institutionalize the results.
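As a rough sketch, the later steps can be organized as a repeatable pipeline whose audit step compares control totals. The step functions and context fields below are hypothetical placeholders, not procedures from this chapter:

```python
# Hedged sketch of a conversion run as an ordered pipeline with an audit step.

def run_conversion(steps, context):
    """Run each named step in order; stop and record failure if any step signals it."""
    for name, step in steps:
        ok = step(context)
        context["log"].append((name, "ok" if ok else "failed"))
        if not ok:
            return False
    return True

context = {"rows_in": 3, "rows_out": 0, "log": []}
steps = [
    ("extract",   lambda ctx: True),
    ("transform", lambda ctx: True),
    ("load",      lambda ctx: ctx.update(rows_out=ctx["rows_in"]) or True),
    ("audit",     lambda ctx: ctx["rows_out"] == ctx["rows_in"]),  # control totals match?
]
print(run_conversion(steps, context))
```

Keeping the step log in the context gives a crude audit trail of what ran and where a failed run stopped, which matters once conversion becomes an ongoing, institutionalized process.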
Determining if Conversion Is Required In some cases, data does not need to be converted. IS may find that there is no real need to retain old information. The data could be available elsewhere, such as on microfiche. Another possibility is that the current data is so erroneous, incomplete, or inadequate that there is no reason to keep it. The options must be presented to the clients so that they can determine the best course of action. Planning the Conversion and Determining the Conversion Rules Once the DCT and the client have accepted the need for a conversion, the work can be planned in detail. The planning activities for conversion are standard in most respects and are typical of development projects. Beyond sound project management, it is helpful for the DCT to keep in mind that error correction activities may be particularly time-consuming. Determination of the conversion rules consists of the following steps, usually performed in sequence, with any necessary iteration:
• Analyze the old physical data model.
• Conduct a preliminary investigation of data quality.
• Analyze the old logical data model.
• Analyze the new logical data model.
• Analyze the new physical data model.
• Determine the data mapping.
• Determine how to treat missing information.
Analyze the Old Physical Data Model. Some published development methods imply that development starts with a blank slate. As a result, analysis of the existing system is neglected. The reverse-engineering paradigm
asserts that the DCT should start with the existing computer application to discern the business rules. Data conversion requires this approach for data analysis. The DCT can look at old documentation, database definitions, file descriptions, and record layouts to understand the current physical data model. Conduct a Preliminary Investigation of Data Quality. Without some understanding of data structures for the current application, it is not possible to look at the quality of the data. To examine the quality of the data, the DCT can run existing reports, do online queries, and if possible, quickly write some fourth-generation language programs to examine issues such as referential, primary key, and domain integrity violations that the users might never notice. When the investigation is done, the findings should be formally documented. Analyze the Old Logical Data Model. When the physical structure of the data is understood, it can be represented in its normalized logical structure. This step, although seemingly unnecessary, allows the DCT to specify the mapping in a much more reliable fashion. The results should be documented with the aid of an entity-relationship diagram accompanied by dictionary descriptions. Analyze the New Physical Data Model. The new logical model should be transformed into a physical representation. If a relational database is being used, this may be a simple step. Once this model is done, the mapping can be specified. Determine the Data Mapping. This step is often more difficult than it might seem initially. Often, there are cases where the old domain must be transformed into a new one; an old field is split into two new ones; two old fields become one new one; or multiple records are looked at to derive a new one. There are many ways of reworking the data, and an unlimited number of special cases may exist. 
Not only are the possibilities for mapping numerous and complex, but in some cases it is not possible to map to the new model because key information was not collected in the old system. Determine How to Treat Missing Information. It is common when doing conversion to discover that some of the data to populate the new application is not available and that there is no provision for it in the old database. It may be available elsewhere as manual records, or it may never have been recorded at all.
Sometimes, this is only an inconvenience — dummy values can be put in certain fields to indicate that the value is not known. In the more serious case, the missing information would be required to create a primary key or
a foreign key. This can occur when the new model is significantly different from the old. In this case, the dummy value strategy may be appropriate but it must be fully explained to the client. Identify Problems Data problems can only be detected after both the old data structure and the new model are fully understood. A full analysis of the issue includes looking for erroneous information, missing information, redundancies, inconsistencies, missing keys, and any other problem that will make the conversion difficult or impossible without a lot of manual intervention. Any findings should be documented and brought to the attention of the client. Information must be documented in a fashion that makes sense to the client. Once the problems have been identified, the DCT can help the client identify a corrective strategy. The client must understand why errors have been creeping into the systems. The cause is usually a mixture of problems with the old data structure, problems with the existing input system, and data entry problems that have been ongoing. It may be that the existing system does not properly reflect the business. The users may have been working around the system’s deficiencies for years in ways that violated its integrity. In any case, the new system should be tighter than the old one at the programming and database level, it should properly reflect the business, and the new procedures should not result in problems with usability or data quality. Document the Requirements After the initial study of the conversion is done, the findings should be documented. Some of this work will have been done as part of the regular system design. There must also be a design for the conversion programs, whether it is a one-time or an ongoing activity. First-time as well as ongoing load requirements must be examined. Estimates should include the time necessary to extract, edit, correct, and upload data. 
Costs for disk storage and CPUs should also be projected. In addition, the sizing requirements should be estimated well in advance of hardware purchases. Correct the Data The client may want to correct the data before the conversion effort begins, or may be willing to convert the data over time. It is best to make sure that the data that is converted is error-free, at least with respect to the formal integrity constraints defined for the new model.
If erroneous information is permitted into the new system, it will probably be problematic later. The correction process may involve using the existing system to make changes. Often, the types of errors that are encountered may require some extra programming facilities. Not all systems provide all of the data modification capabilities that might be necessary. In any case, this step can sometimes take months of effort and requires a mechanism for evaluating the success of the correction effort. Program the Conversion The conversion programs should be designed, constructed, and tested with the same discipline used for any other software development. Although the number of workable designs is large, there are a few helpful rules of thumb:
• The conversion program should edit for all business rule violations and reject nonconforming information. The erroneous transactions should go to an error file, and a log of the problem should be written. The items in error should be unambiguously identified. The soundest course is to avoid putting incorrect data into the new system.
• The conversion programs must produce an audit trail of the transactions processed. This includes control totals, checksums, and date and time stamps. This provides a record of how the data was converted after the job is done.
• Tests should be as rigorous as possible. All design documents and code should be tested in a structured fashion. This is less costly than patching up problems caused by a data corruption in a million-record file.
• Provisions should be made for restart in case of interruption in the run.
• It should be possible to roll back to some known point if there are errors.
• Special audit reports should be prepared to run against the old and new data to demonstrate that the procedures worked. This reporting can be done in addition to the standard control totals from the programs. 
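The first two rules of thumb (edit for business rule violations, reject to an error file, and keep control totals as an audit trail) might be sketched as follows; the record fields and edit rules are invented examples:

```python
# Hedged sketch: validate each record against business rules, divert
# violations to a reject list with reasons, and keep control totals.

def convert(rows, rules):
    """Split rows into accepted and rejected; each rejection carries its reasons."""
    accepted, rejected = [], []
    for row in rows:
        problems = [msg for check, msg in rules if not check(row)]
        (rejected if problems else accepted).append((row, problems))
    audit = {"read": len(rows), "accepted": len(accepted), "rejected": len(rejected)}
    return [r for r, _ in accepted], rejected, audit

# Invented edit rules for an invented record layout
rules = [
    (lambda r: r["id"].strip() != "", "missing primary key"),
    (lambda r: r["amount"].lstrip("-").isdigit(), "amount not numeric"),
]
rows = [{"id": "1", "amount": "100"}, {"id": "", "amount": "x"}]
accepted, rejected, audit = convert(rows, rules)
print(audit)  # {'read': 2, 'accepted': 1, 'rejected': 1}
```

The `read = accepted + rejected` relationship is the control total that later audit reports can verify against the source and target systems.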
Run the Conversion It may be desirable to run a test conversion to populate a test database. Once the programs are ready and volume testing has been done, it is time for the first conversion, which may be only one of many. If this is a data warehouse application, the conversion could be an ongoing effort. It is important to know how long the initial loads will take so that scheduling can be done appropriately. The conversion can then be scheduled for an opportune cutover time. The conversion will go smoothly if
contingencies are built in and sound risk management procedures are followed. There may be a number of static tables, perhaps used for code lookup, that can be converted without as much fanfare, but the main conversion will take time. At the time planned for cutover, the old production system can be frozen from update or run in parallel. The production database can then be initialized and test records removed (if any have been created). The conversion and any verification and validation routines can be run at this point. Check the Audit Reports Once the conversion is finished, special audit reports should be run to prove that it worked, to check control totals, and deal with any problems. It may be necessary to roll back to the old system if problems are excessive. The new application should not be used until it is verified that the conversion was correct; otherwise, a lot of work could be lost. Institutionalize the Results In many cases, as in data warehousing, conversion will be a continuous process and must be institutionalized. Procedural controls are necessary to make sure that the conversion runs on schedule, results are checked rigorously, rejected data is dealt with appropriately, and failed runs are handled correctly. DATA QUALITY A strategy to identify data problems early in the project should be in place, although details will change according to the project. A preliminary investigation can be done as soon as the old physical data model has been determined. It is important to document the quality of the current data, but this step may require programming resources. Customers at all levels should be notified if there are data quality issues to be resolved. Knowledge of the extent of data quality problems may influence the users’ decision to convert or abandon the data. 
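A preliminary data-quality investigation of the kind described can be sketched as a small profiling routine. The fields, domain sets, and sample records below are invented for illustration:

```python
from collections import Counter

def profile(records, key_field, domains):
    """Quick data-quality profile: duplicate keys, blank keys, out-of-domain values."""
    keys = Counter(r.get(key_field, "") for r in records)
    report = {
        "records": len(records),
        "duplicate_keys": sorted(k for k, n in keys.items() if k and n > 1),
        "blank_keys": keys.get("", 0),
        "domain_violations": [],
    }
    for field, allowed in domains.items():
        for r in records:
            if r.get(field) not in allowed:
                report["domain_violations"].append((r.get(key_field), field, r.get(field)))
    return report

# Invented sample: duplicate primary key and a code that was never in use
data = [
    {"emp": "100", "status": "A"},
    {"emp": "100", "status": "A"},
    {"emp": "101", "status": "Z"},
]
print(profile(data, "emp", {"status": {"A", "I"}}))
```

Running a profile like this early, and formally documenting the findings, gives the client hard numbers on which to base the convert-or-abandon decision.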
Keeping the Data Clean If the data is corrected on a one-time basis, it is important to ensure that more erroneous data is not being generated by some faulty process or programming. There may be a considerable time interval between data correction and conversion to the new system. Types of Data Abnormalities There may be integrity problems in the old system. For example, there may be no unique primary key for some of the old files, which almost guarantees redundancy in the data. This violation of entity integrity can be quite serious. To ensure entity integrity in the new system, the DCT will have to choose which of the old records are to be accepted as the correct ones to move into the new system. It is helpful for audit routines to report on this fact. In addition, in the new system, it will be necessary to devise a primary key, which may not be available in the old data. Uniqueness. In many cases, there are other fields that should also be unique and serve as an alternate primary key. In some cases, even if there is primary key integrity, there are redundancies in other alternative keys, which again create a problem for integrity in the new system. Referential Integrity. The DCT should determine whether the data correctly reflects referential integrity constraints. In a relational system, tables are joined together by primary key/foreign key links. The information to create this link may not be available in the old data. If records from different files are to be matched and joined, it should be determined whether the information exists to correctly do the join (i.e., a unique primary key and a foreign key). Again, this problem needs to be addressed prior to conversion. Domain Integrity. The domain for a field imposes constraints on the values that should be found there. IS should determine if there are data domains that have been coded into character or numeric fields in an undisciplined and inconsistent fashion. It should further be determined whether there are numeric domains that have been coded into character fields, perhaps with some nonnumeric values. There may be date fields that are just text strings and the dates may be in any order. A common problem is that date or numeric fields stored as text may contain absurd values with entirely the wrong data type.
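The referential integrity check described above amounts to looking for child records whose foreign key matches no parent record. A minimal sketch, with invented employee and department records:

```python
def orphaned(children, parents, fk, pk):
    """Referential-integrity check: child rows whose foreign key has no parent row."""
    parent_keys = {p[pk] for p in parents}
    return [c for c in children if c[fk] not in parent_keys]

# Invented sample data: department D99 does not exist
employees = [{"emp": "1", "dept": "D10"}, {"emp": "2", "dept": "D99"}]
departments = [{"dept": "D10"}]
print(orphaned(employees, departments, "dept", "dept"))  # [{'emp': '2', 'dept': 'D99'}]
```

The same pattern, run before conversion, tells the DCT exactly which joins will fail and which records the client must repair or abandon.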
Another determination that should be made is whether the domain coding rules have changed over time and whether they have been re-coded. It is common for coded fields to contain codes that are no longer in use, and often codes that never were in use. Also, numeric fields may contain out-of-range values. Composite domains could cause problems when trying to separate them for storage in multiple fields. The boundaries for each subitem may not be in fixed columns. There may be domains that incorrectly model internal hierarchy. This is common in old-style systems and makes data modeling difficult. There could be attributes based on more than one domain. Not all domain problems will create conversion difficulties but they may be problematic later
if it cannot be proven that these were preexisting anomalies and not a result of the conversion efforts. Wrong Cardinality. The old data could contain cardinality violations. For example, the structure may say that each employee has only one job record, but in fact some may have five or six. These sorts of problems make database design difficult. Wrong Optionality. Another common problem is the absence of a record when one should be there. It may be a rule that every employee has at least one record of appointment, but for some reason one percent of old records show no job for an employee. The client must resolve this inconsistency. Orphaned Records. In many cases, a record is supposed to refer back to some other record by making reference to the key value for that other record. In many badly designed systems, there is no key to refer back to, at least not one that uniquely identifies the record. Technically, there is no primary key. In some cases, there is no field available to make this reference, which means that there is no foreign key. In other cases, the key structure is fine but the actual record referred to does not exist. This is a problem with referential integrity. This record without a parent is called an orphan. Inconsistency and Redundancy Combined. If each data item is fully determined by its key, there will be no undesirable redundancy and the new database will be normalized. If attempts at normalization are made where there is redundant information, the DCT will be unable to make consistent automated choices about which of the redundant values to select for the conversion.
On badly designed systems, there will be a great deal of undesirable redundancy. For example, a given fact may be stored in multiple places. This type of redundancy wastes disk storage, but may in some cases permit faster queries. The problem is that without concerted programming efforts, this redundant information is almost certainly going to become inconsistent. If the old data has confusing redundancies, it is important to determine whether they are due to historical changes in the business rules or historical changes in the values of fields and records. The DCT should also determine whether the redundancies are found across files or within individual files across records. There may be no way to determine which data is current, and an arbitrary choice will have to be made. If the DCT chooses to keep all the information to reflect the changes over time, it cannot be stored correctly because the date information will not be in the system. This is an extremely common problem.
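Detecting this kind of drifted redundancy is straightforward once records are grouped by key: any key whose supposedly redundant field holds more than one distinct value is a conflict the DCT must resolve. A sketch, with invented customer records:

```python
from collections import defaultdict

def inconsistent_values(records, key, field):
    """Find keys for which a supposedly redundant field holds conflicting values."""
    seen = defaultdict(set)
    for r in records:
        seen[r[key]].add(r[field])
    return {k: sorted(v) for k, v in seen.items() if len(v) > 1}

# Invented sample: the same fact stored twice has drifted apart
rows = [
    {"cust": "C1", "address": "12 Elm St"},
    {"cust": "C1", "address": "12 Elm Street"},
    {"cust": "C2", "address": "9 Oak Ave"},
]
print(inconsistent_values(rows, "cust", "address"))
```

A report like this cannot say which value is current; as the chapter notes, that choice may have to be made arbitrarily or referred back to the client.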
Missing Information. When dealing with missing information, it is helpful
to determine whether:
• The old data is complete.
• Mandatory fields are filled in.
• All necessary fields are available in the files.
• All records are present.
• Default or dummy values can be inserted where there is missing information.
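The dummy-value strategy for the last point might be sketched as follows; the field names and the "UNKNOWN" marker are invented:

```python
# Hedged sketch: fill optional gaps with an agreed dummy value, and
# report mandatory fields that remain empty for manual resolution.

def fill_missing(record, defaults, mandatory):
    """Insert dummy values for optional gaps; list mandatory fields still empty."""
    filled = dict(record)
    for field, dummy in defaults.items():
        if not filled.get(field):
            filled[field] = dummy
    gaps = [f for f in mandatory if not filled.get(f)]
    return filled, gaps

rec = {"emp": "100", "phone": "", "hire_date": ""}
filled, gaps = fill_missing(rec, {"phone": "UNKNOWN"}, ["emp", "hire_date"])
print(filled["phone"], gaps)  # UNKNOWN ['hire_date']
```

The returned `gaps` list is the serious case the chapter warns about: a mandatory item (here a date that might feed a key) for which no dummy is acceptable without the client's explicit agreement.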
Date Inconsistencies. When examining the conversion process, it is helpful to determine whether:
• The time dimension is correctly represented.
• The data spans a long enough time period.
• The data correctly reflects the state of the business for the time at which it was captured.
• All necessary date fields are available to properly model the time dimension.
• Dates are stored with century information.
• Date ranges are in the correct sequence within a given record.
• Dates are correct from record to record.

Miscellaneous Inconsistencies. In some fields, there will be values derived from other fields. A derived field might be computed from other fields in the same record or may be a function of multiple records. The derived fields may be stored in an entirely different file. In any case, the derived values may be incorrect for the existing data. Given this sort of inconsistency, it should be determined which is correct — the detail or the summary information. Intelligent Keys. An intelligent key results from a fairly subtle data modeling problem. For example, there are two different independent items from the real world, such as Employee and Department, where the Employee is given a key that consists in part of the Department key. The implication is that if a Department is deleted, the employee record will be orphaned; and if an Employee changes Departments, the Employee key will have to change. When doing a conversion, it would be desirable to remove the intelligent key structure. Other Problems. Other problems with the old data also exist that cannot be easily classified. These problems involve errors in the data that cannot be detected except by going back to the source, or violations of various arcane constraints that have not been programmed as edit checks in the existing system. There may be special rules that tie field values to multiple records, multiple fields, or multiple files. Although they may not have a
practical implication for the conversion effort, if these problems become obvious, they might be falsely attributed to the conversion routines. THE ERROR CORRECTION PROCESS The data correction effort should be run as part of a separate subproject. The DCT should determine whether the resources to correct the data can be made available. A wholesale commitment from the owners of the data will be required, and probably a commitment of programming resources as well. Error correction cannot be done easily within the context of rapid applications development (RAD) or many of the agile methods. Resources for the Correction Effort Concerning resources for the correction effort, the best-case scenario would ensure that:
• Resources are obtained from the client if a major correction effort is required.
• Management pays adequate attention to the issue if a data-quality problem is identified.
• The sources of the problem will be identified in a fair and nonjudgmental manner if a data-quality problem is identified.
Choices for Correction The effort required to write an edit program to look for errors is considerable, and chances are good that this will be part of the conversion code and not an independent set of audit programs. Some of the errors may be detected before conversion begins, but it is likely that many of the problems will be found during the conversion run. Once data errors are discovered, data can be copied as is, corrected, or abandoned. The conversion programs should reject erroneous transactions and provide reports that explain why data was rejected. If the decision is made to correct the data, it will probably have to be reentered. Again, in some cases, additional programming can help remedy the problems. Programming for Data Correction Some simple automated routines can make the job of data correction much easier. If they require no manual intervention, it could be advantageous to simply put them into the main conversion program. 
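Simple automated correction routines of this kind might look like the following sketch; the corrections shown (trimming stray whitespace, normalizing the case of a coded field) are invented examples of fixes safe enough to run without manual intervention:

```python
# Hedged sketch: apply mechanical, reversible corrections automatically
# and report which fields were touched; anything riskier is left to a user.

def auto_correct(record, corrections):
    """Apply per-field correction functions; return the fixed record and touched fields."""
    fixed = dict(record)
    applied = []
    for field, fix in corrections.items():
        new = fix(fixed[field])
        if new != fixed[field]:
            fixed[field] = new
            applied.append(field)
    return fixed, applied

corrections = {
    "name": str.strip,              # stray whitespace from old data entry
    "status": lambda s: s.upper(),  # inconsistent case in a coded field
}
rec = {"name": " Smith ", "status": "a"}
fixed, applied = auto_correct(rec, corrections)
print(fixed, applied)
```

Logging the `applied` list preserves an audit trail of what the automated pass changed, so corrections are not later mistaken for conversion errors.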
However, the program may require that a user make the decision. If the existing data entry programs are not adequate for large-scale data correction efforts, some additional programs might have to be written for
error repair. For example, the existing system may not allow the display of records with a referential integrity problem, which are probably the very records that need correction. Custom programming will be required to make the change.

SPECIFY THE MAPPING

Often, crucial information needed for the conversion will be missing. If the old system can accommodate the missing information, it may be a matter of keying it in from original paper records. However, the original information may no longer be available, or it may never have been collected. In that case, it may be necessary to put in special markers to show that the information is not available.

Model Mismatches

It can be difficult to go from a nonnormalized structure to a normalized structure because of the potential for problems in mapping from old to new. Many problems are the result of inconsistent and redundant data, a poor key structure, or missing information. If the old system has a normalized structure, there probably will not be as many difficulties. Other problems result from changed assumptions about the cardinality of relationships or actual changes in the business rules.

Discovered Requirements

The requirements of a system are almost never fully understood by the user or the developer prior to constructing the system. Some of the data requirements do not become clear until the test conversions are being run. At that point, it may be necessary to go back and revisit the whole development effort. Standard change and scope control techniques apply.

Existing Documentation

Data requirements are rarely right the first time because the initial documentation is seldom correct. There may be abandoned fields, mystery fields, obscure coding schemes, or undocumented relationships. If the documentation is thorough, many data conversion pitfalls can be avoided.

Possible Mapping Patterns

The mapping of old to new is usually very complex.
There seems to be no useful canonical scheme for dealing with this set of problems. Each new conversion seems to consist of myriad special cases. In the general case, a given new field may depend on the values found in multiple fields contained in multiple records of a number of files. This works the other way as
well — one field in an old record may be assigned to different fields or even to different tables, depending on the values encountered. If the conversion also requires intelligent handling of updates and deletes to the old system, the problem is complicated even further. This is true when one source file is split into several destination files and, at the same time, one destination file receives data from several source files. Then, if just one record is deleted in a source file, some fields will have to be set to null in the destination file, but only those coming from the deleted source record. This method, however, may violate some of the integrity rules in the new database. It may be best to specify the mapping in simple tabular and textual fashion. Each new field will have the corresponding old fields listed, along with any special translation rules required. These rules could be documented as decision tables, decision trees, pseudocode, or action diagrams.

Relational Mathematics

In database theory, it is possible to join together all fields in a database in a systematic manner and create what is called the "universal relation." Although this technique has little merit as a scheme for designing or implementing a database, it may be a useful device for thinking about the mapping of old to new. It should be possible to specify any complex mapping as a view based on the universal relation. The relational algebra or the relational calculus could be used as the specification medium for detailing the rules of the mapping in a declarative fashion.

DESIGN THE CONVERSION

Possibility of Manual Data Entry

Before starting to design a computer program or a set of programs, reentering the data manually from source records should be considered. The effort and increased probability of random errors associated with manual data entry should be weighed against the cost of developing an automated solution.
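The tabular style of mapping specification described above, where each new field lists its corresponding old fields and any special translation rules, can be sketched in code. Everything here (the field names, the status codes, the name-splitting rule) is invented for illustration:

```python
# Illustrative mapping specification: destination field -> (source fields, rule).
# Field names (CUST_NAME, STATUS) and codes are hypothetical, not from any real schema.

def split_name(full_name):
    """The old system stored 'Last, First'; the new system wants separate fields."""
    last, _, first = full_name.partition(",")
    return last.strip(), first.strip()

STATUS_CODES = {"A": "ACTIVE", "I": "INACTIVE"}  # old code -> new descriptive value

MAPPING = {
    # new_field: (source_fields, translation_rule)
    "last_name":  (["CUST_NAME"], lambda r: split_name(r["CUST_NAME"])[0]),
    "first_name": (["CUST_NAME"], lambda r: split_name(r["CUST_NAME"])[1]),
    "status":     (["STATUS"],    lambda r: STATUS_CODES.get(r["STATUS"], "UNKNOWN")),
}

def convert_record(old_record):
    """Apply every rule in the mapping table to one old record."""
    return {new: rule(old_record) for new, (_, rule) in MAPPING.items()}
```

Converting `{"CUST_NAME": "Doe, Jane", "STATUS": "A"}` under this table yields separate name fields and a descriptive status; a status code missing from the table falls back to "UNKNOWN", one way of marking information that is not available.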
Extra Space Requirements

In a conversion, it will be necessary to have large temporary files available. These could double the amount of disk space required for the job. If it is not possible to provide this extra storage, it will be necessary to ensure that the conversion plan does not demand extra space. This has become less of a problem over the years with the decreasing cost of storage, but it can still be an issue for large volumes of data.
Choice of Language

The criteria for choosing a programming language are not very different from those used in any other application area. The language should be chosen according to the skills of the IS team, what will run on the organization's hardware, or what is used by the purchased ETL software. The most appropriate language will allow error recovery, exception handling, control totals reporting, checkpoint and restart capabilities, full procedural capability, and adequate throughput. Most third-generation languages are sufficient if an interface to the source and target databases or file systems is available. Various classes of programs could be used, with a different language for each. For example, the records can be extracted from the old database with one proprietary product, verified and converted to the new layout with C, and loaded into the new database with a proprietary loader.

SQL as a Design Medium

The SQL language should be powerful enough to handle any data conversion job. The problem with SQL is that it has no error-handling capabilities and cannot produce a satisfactory control totals report as part of the update without going back and re-querying the database in various ways. Despite its deficiencies as a robust data conversion language, SQL may be ideal for specifying the conversion rules. Each destination field could have a corresponding SQL fragment that gives the rules for the mapping in a declarative fashion. The use of SQL as a design medium should lead to a very tight specification. The added advantage is that it translates to an SQL program very readily.

Processing Time

IS must have a good estimate of the elapsed time and CPU time required to do the conversion. If there are excessive volumes of data, special efforts will be required to ensure adequate throughput.
These efforts could involve making parallel runs, converting overnight and over weekends, buying extra-fast hardware, or fine-tuning programs. These issues are not unique to conversions, but they must not be neglected if surprises on the day of cutover to the new system are to be avoided. They are especially significant when there are large volumes of historical data for an initial conversion, even if ongoing runs will be much smaller.

Interoperability

There is a strong possibility that the old system and the new system will be on different platforms. There should be a mechanism for transferring the
data from one to the other. Tape, disk, or a network connection could be used. It is essential to provide some mechanism for interoperability. In addition, it is important to make sure that the media chosen can support the volumes of data and provide the necessary throughput.

Routine Error Handling

The conversion routine must include sufficient mechanisms for enforcing all business rules. When erroneous data is encountered, there might be a policy of setting the field to a default value. At other times, the record may be rejected entirely. In either case, a meaningful report of each error encountered and the resultant actions should be generated. It is best if erroneous records are sent to an error file. There may be some larger logical unit of work than a record. If so, the larger unit should be sent to the error file and the entire transaction rolled back.

Control Totals

Every run of the conversion programs should produce control totals. At a minimum, there should be counts for every input record, every rejected record, every accepted record, and every record inserted into each output file or table. Finer breakdowns are desirable for each of these types of inputs and outputs. Every conversion run should be date- and time-stamped with start and end times, and the control report should be filed after inspection.

Special Requirements for Data Warehousing

Data warehousing assumes that the conversion issue arises on a routine, periodic basis. All of the problems that arise in a one-time conversion must be dealt with for an initial load, and then dealt with again for the periodic update. In a data warehouse situation, there will most likely be changes to source records that must be reflected in the data warehouse files. As discussed previously, there may be some complex mapping from old to new, and updates and deletes will greatly increase the complexity. There must be a provision for add, change, and delete transactions.
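The routine error handling and control totals requirements above can be sketched together: erroneous records are diverted to an error file with a meaningful reason, and every run produces counts of input, rejected, and accepted records. The validation rule here is an invented placeholder:

```python
def run_conversion(records, is_valid, convert):
    """Convert records, diverting failures to an error file and counting everything."""
    accepted, error_file = [], []
    totals = {"input": 0, "rejected": 0, "accepted": 0}
    for rec in records:
        totals["input"] += 1
        ok, reason = is_valid(rec)
        if not ok:
            totals["rejected"] += 1
            error_file.append({"record": rec, "reason": reason})  # report why rejected
        else:
            totals["accepted"] += 1
            accepted.append(convert(rec))
    return accepted, error_file, totals

# Hypothetical business rule: amounts must be non-negative.
def check_amount(rec):
    if rec["amount"] >= 0:
        return True, None
    return False, "negative amount"
```

The returned totals can be date- and time-stamped and filed as the control report for the run; finer breakdowns would add more counters in the same loop.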
A change transaction can often be handled as a paired delete and add, which in some cases simplifies the programming.

RECOVERY FROM ERROR

Certain types of errors, such as a power failure, will interrupt the processing. If the system goes down in the middle of a 20-hour run, there has to be some facility for restarting appropriately. Some sort of checkpoint and
restart mechanism is desirable. The operating system may be able to provide these facilities. If not, there should be an explicit provision in the design and procedures for dealing with this possibility. In some cases, it may be necessary to ensure that files are backed up prior to conversion.

Audit Records

After the data has been converted, there must be an auditable record of the conversion. This is also true if the conversion is an ongoing effort. In general, the audit record depends on the conversion strategy. There may be counts, checksums (i.e., row and column), or even old-versus-new comparisons done with an automated set of routines. These audit procedures are not the same as the test cases run to verify that the conversion programs worked; they are records produced when the conversions are run.

CONCLUSION

Almost all IS development work involves conversion of data from an old system to a new application. This is seldom a trivial exercise, and in many projects it is the biggest single source of customer dissatisfaction. The conversion needs to be given serious attention, and the conversion process needs to be planned as carefully as any other part of the project. Old applications are fraught with problems, and errors in the data will be common. The more tightly programmed the new application, the more problematic the conversion. It is increasingly common to make the conversion part of an ongoing process, especially when the operational data is in one system and the management information in another. Any data changes are made on the operational system and then, at periodic intervals, copied to the other application. This is a key feature of the data warehouse approach. All of the same considerations apply. In addition, it will be important to institutionalize the procedures for dealing with conversion.
The conversion programs must be able to deal with changes to the operational system by reflecting them in the data warehouse. Special care will be required to design the programs accordingly.
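The point about change handling (a change transaction treated as a paired delete and add) can be sketched against a warehouse table held as a keyed dictionary; the transaction layout is invented for illustration:

```python
def apply_transaction(warehouse, txn):
    """Apply an add/change/delete transaction; 'change' is a delete followed by an add."""
    op, key, data = txn["op"], txn["key"], txn.get("data")
    if op == "delete":
        warehouse.pop(key, None)
    elif op == "add":
        warehouse[key] = data
    elif op == "change":  # reuse the two paths above, simplifying the programming
        apply_transaction(warehouse, {"op": "delete", "key": key})
        apply_transaction(warehouse, {"op": "add", "key": key, "data": data})
    return warehouse
```

In a real warehouse the delete would also have to honor the integrity rules noted earlier, setting only the fields sourced from the deleted record to null where other source files still contribute to the same destination row.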
Chapter 27
Service Level Management Links IT to the Business Janet Butler
Downtime is becoming unacceptably expensive as businesses increasingly depend on their information technology (IT) services for mission-critical applications. As user availability and response time requirements increase dramatically, service level management (SLM) is becoming the common language of choice for communication between IT and end users. In addition, to foster the growing focus on the user, SLM is moving rapidly into the application arena, turning from its traditional emphasis on system and network resources.

E-BUSINESS DRIVES SERVICE LEVEL MANAGEMENT

Businesses have long viewed IT as an overhead operation and an expense. In addition, when IT was a hidden function dealing with internal customers, it could use ad hoc, temporary solutions to address user service problems. Now, with electronic business gaining importance, IT is becoming highly visible as a front door to the business. However, while Internet visibility can prove highly beneficial and lucrative to businesses, it can also backfire. Amazon, eBay, and Schwab all learned this the hard way when their service failures hit The Wall Street Journal's front page. And few other organizations would like their CEOs to read about similar problems. As such cases illustrate, downtime on mission-critical applications can cost businesses tens of thousands to millions of dollars per day. In the financial industries, for example, downtime can cost $200,000 per minute, according to one industry analyst. And poor end-to-end application response time can be nearly as costly. Not only does it cause serious tension between internal users and IT, but it creates considerable frustration for external users, and the competition may be only a mouse-click away.

0-8493-1595-6/03/$0.00+$1.50 © 2003 by CRC Press LLC
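The figures quoted here translate directly into a downtime budget. A small sketch using the 99.9 percent availability target and the $200,000-per-minute cost mentioned in this chapter, purely as arithmetic inputs:

```python
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600

def allowed_downtime_minutes(availability_pct, period_minutes=MINUTES_PER_YEAR):
    """Minutes of downtime permitted per period at a given availability target."""
    return period_minutes * (1 - availability_pct / 100)

def downtime_cost(minutes, cost_per_minute=200_000):
    """Cost of an outage at the per-minute figure cited for the financial industries."""
    return minutes * cost_per_minute
```

At 99.9 percent availability, a little under nine hours of downtime per year is permitted; at the cited per-minute cost, even that small budget represents a very large sum, which is why such SLAs carry weight.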
With IT now the main entryway to the business, businesses cannot afford the perception of less-than-optimal service. They are therefore increasingly adopting service level agreements (SLAs), service level management, and quality-of-service initiatives. In fact, some organizations have developed SLAs guaranteeing availability levels exceeding 99.9 percent, or aggressive application response times — which depend on optimal end-to-end performance.

SLM DEFINED

Service level management (SLM) is a set of activities required to measure and manage the quality of information services provided by IT. A proactive rather than reactive approach to IT management, SLM manages the IT infrastructure — including networks, systems, and applications — to meet the organization's service objectives. These objectives are specified in the SLA, a formal statement that clearly defines the services that IT will provide over a specified period of time, as well as the quality of service that users can expect to receive. SLM is a means for the lines of business and IT to set down their explicit, mutual expectations for the content and extent of IT services. It also allows them to determine in advance what steps will be taken if these conditions are not met. SLM is a dynamic, interactive process that features:
• Definition and implementation of policies
• Collection and monitoring of data
• Analysis of service levels against the agreement
• Reporting in real-time and over longer intervals to gauge the effectiveness of current policies
• Taking action to ensure service stability

To implement service level management, the SLA relates the specific service-level metrics and goals of IT systems to business objectives. By linking the end-user and business process experience with what is happening in IT organizations, SLAs offer a common bridge between IT and end users, providing a clear understanding of the services to be delivered, couched in a language that both can understand. This allows users to compare the service they receive to the business process, and lets IT administrators measure and assess the level of service from end to end. SLAs may specify the scope of services, success and failure metrics, goal and performance levels, costs, penalties, time periods, and reporting requirements. The use of SLM offers businesses several benefits. It directs management toward clear service objectives and improves communication between IT
and users by enabling responsiveness to user issues. It also simplifies the management of network services because resource changes are made according to the SLA and are based on accurate user feedback. Furthermore, SLM clarifies accountability by allowing organizations to analyze service levels and evaluate IT's effectiveness. Finally, by enabling businesses to optimize current resources and make educated decisions about the necessity for upgrades, it saves money and maximizes investments.

FROM SYSTEM TO APPLICATION FOCUS

In the early days of performance evaluation and capacity planning, the emphasis was on system tuning and optimization. The field first took off in the mid-1960s with the introduction of third-generation operating systems. The inefficiency of many of these systems resulted in low throughput levels and poor user response time, so tuning and optimization were vital. As time passed, however, the vastly improved price/performance of computer systems began to limit the need for tuning and optimization. Many organizations found it cheaper to simply buy more hardware resources than to try to tune a system into better performance. Still, organizations continued to concentrate on system throughput and resource utilization, while the fulfillment of service obligations to the end user was of relatively low priority. Enter the PC revolution, with its emphasis on end-user requirements. Enter also the client/server model, with its promise of speedy application development and vast amounts of information at users' fingertips, all delivered at rapid response times. Of course, the reality does not always measure up. Now the Internet and World Wide Web have joined the fray, with their special concepts of speed and user service.
Organizations are now attempting to plan according to Web time, whereby some consider a Web year to be 90 days, but WWW may well stand for "world wide wait." So organizations are turning their focus to the user, rather than the information being collected. The service-desk/help-desk industry, for example, has long been moving toward user-oriented SLM. In the early 1990s, service-desk technology focused on recording and tracking trouble tickets. Later, the technology evolved to include problem-resolution capabilities. Next, the service desk started using technologies and tools that enabled IT to address the underlying issues that kept call volumes high. Today, organizations are moving toward business-oriented service delivery. IT is being called upon to participate as a partner in the corporate mission — which requires IT to be responsive to users/customers.
Today's SLM requires that IT administrators integrate visibility and control of the entire IT infrastructure, with the ability to seamlessly manage service levels across complex, heterogeneous enterprise environments using a single management interface. However, many IT organizations currently have monitors and probes in isolated parts of the network, or tools that monitor performance on certain platforms but not others. In addition, they may only receive after-the-fact reports of downtime, without proactive warnings or suggested actions. SLM requires a single, comprehensive solution whereby every facet of an IT infrastructure is brought into a single, highly automated, managed environment. This enables IT to quickly isolate and resolve problems and act proactively in the best interest of the end user, rather than merely reacting to network or resource issues. And while comprehensive tools to do this were not available in the past, that situation is changing as the tools evolve. In this complex new environment, organizations must define IT availability in terms of applications rather than resources, and use language that both IT and business users can understand. Thus, in the past, IT's assurance of 98 percent network availability offered little comfort to a salesman who could not book orders: it did not mean the application was running, or that the response time was good enough for the salesman. While SLM was formerly viewed as a lot of hot air, today's SLAs between IT and the line of business define clearly what customers should expect from IT.

SLAs Tied to User Experience

Current SLAs, then, are tied to applications and the end-user experience. With their focus on the user, rather than the information being collected, SLAs aim at linking the end user's business process experience with what is happening in the IT organization.
To this end, organizations are demanding end-user response time measurement from their suppliers, and for client/server in addition to mainframe application systems. For example, when one financial organization relocated its customer service center from a private fiber to a remote connection, call service customers were most concerned about response time and reliability. Therefore, they required a tool that provided response time monitoring at the client/server level. Similarly, a glass and plastics manufacturer sought a system to allow measurement of end-user response time as a critical component of user satisfaction when it underwent a complex migration from legacy to client/server systems. Although legacy performance over time provided sub-second response time, client/server performance has only recently gained importance. To measure and improve response time in client/server environments, organizations must monitor all elements of the response time component.

Application Viewpoint

The application viewpoint offers the best perspective into a company's mosaic of connections, any one of which could slow down the user. This is no news to end-user organizations. According to a 1999 survey of 142 network professionals, for example, conducted by International Network Services, 64 percent measure the availability of applications on the network to define network availability/performance. (INS, Sunnyvale, California, was a global provider of network consulting and software solutions, acquired by Lucent.) In this very complex environment, organizations must do root cause analysis if users have service problems. When IT organizations were more infrastructure oriented, service problems resulted in much finger-pointing, and organizations wasted valuable time passing the buck around before they found the domain responsible — be it the server, the network, or the connections. Now, however, as IT organizations change from infrastructure providers to service organizations, they are looking at the application level to determine what is consuming the system.

SLM APPROACHES, ACTIVITIES, AND COMPONENTS

Some analysts have defined four ways of measuring end-to-end response time: code instrumentation, network x-ray tools, capture/playback tools, and client capture.

Code Instrumentation

By instrumenting the source code in applications, organizations can define the exact start and end of business transactions, capturing the total round-trip response times. This was the approach taken by Hewlett-Packard and Tivoli with their Application Response Measurement (ARM) application programming interface (API) initiative.
For ARM's purposes, application management is defined as end-to-end management of a collection of physical and logical components that interact to support a specified business process. According to the ARM working group draft mission statement, "The purpose of the ARM API is to enable applications to provide information to measure business transactions from an end-user perspective, and the contributing components of response time in distributed applications. This information can be used to support SLAs, and to analyze response time across distributed systems."
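The instrumentation idea, marking the exact start and end of a business transaction in application code, can be sketched as below. This illustrates the concept only; it is not the actual ARM API, whose calls and signatures differ:

```python
import time

class TransactionTimer:
    """Bracket a business transaction to capture its round-trip response time."""

    def __init__(self, log):
        self.log = log  # collected measurements, available to SLA reporting

    def measure(self, name, work):
        start = time.perf_counter()            # transaction start marker
        result = work()                        # the business transaction itself
        elapsed = time.perf_counter() - start  # end marker: round-trip time
        self.log.append((name, elapsed))
        return result

log = []
timer = TransactionTimer(log)
timer.measure("book_order", lambda: sum(range(1000)))  # stand-in transaction
```

The invasiveness the chapter goes on to describe is visible even here: every transaction boundary in the source must be wrapped, and those wrappers must be maintained as the application changes.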
However, although the approach is insightful in capturing how end users see business transactions, it is also highly invasive, costly, and difficult, requiring modifications to the application source code as well as maintenance of those modifications. Many users want a nonintrusive system to measure end-user response time. Others need a breakdown by segment rather than a round-trip response time measurement. And, despite the promise, only three to five percent of ERP applications have been ARMed, or instrumented.

Network X-Ray Tools

A second collection approach is via x-ray tools, or network sniffers. An example is Sniffer Network Analyzer from Network Associates, Menlo Park, California. Sniffers use probes spread out in strategic locations across the network to read the packet headers, and calculate response times as seen from each probe point. Although noninvasive, this approach does not address the application layer. Because it does not see transactions in user terms, it does not capture response time from the end-user perspective. And, because the data was not designed for performance purposes, converting it into workload or user transaction-level metrics is not a trivial task. However, while the method might be considered the "hard way" to obtain performance data, it does work.

Capture/Playback Tools

Capture/playback tools use synthetic transactions, simulating user keystrokes and measuring the response times of these "virtual" users. While simulated transactions have a role in testing an application's potential performance, they do not measure the actual end user's response time experience. Examples are CAPBAK from Software Research, San Francisco, California, and AutoTester from AutoTester, Inc., Dallas, Texas.

Client Capture

Client capture is the fourth and most promising approach to measuring response time from the user's perspective.
Here, intelligent agents sit at the user's desktop, monitoring the transactions of actual end users to capture the response time of business transactions. Client capture technology can complement network and systems management solutions, such as those from Hewlett-Packard, Tivoli, and Computer Associates. Examples of client capture products include the VitalSuite line from INS and FirstSense products from FirstSense Software, Burlington, Massachusetts.

Service level management encompasses at least four distinct activities: planning, delivery, measurement, and calibration. Thus, the IT organization and its customers first plan the nature of the service to be provided. Next, the IT organization delivers according to the plan, taking calls, resolving problems, managing change, monitoring inventory, opening the service desk to end users, and connecting to the network and systems management platforms. The IT organization then measures its performance to determine its service delivery level based on line of business needs. Finally, IT and the business department continually reassess their agreements to ensure they meet changing business needs. Delivering service involves many separate disciplines spanning IT functional groups. These include network operations, application development, hardware procurement and deployment, software distribution, and training. SLM also involves problem resolution, asset management, service request and change management, end-user empowerment, and network and systems management. Because all these disciplines and functions must be seamlessly integrated, IT must determine how to manage the performance of applications that cross multiple layers of hardware, software, and middleware. The following general components constitute SLM, and each contributes to the measurement of service levels:

• Network availability: a critical metric in managing the network
• Customer satisfaction: not as easily quantified; it results from end users' network experience, so IT must manage the network in light of user expectations
• Network performance
• Application availability: this, along with application response time, is directly related to customer satisfaction

It is difficult to define, negotiate, and measure SLAs. The metrics for network availability and performance include the availability of devices and links connected to the network, the availability of servers, the availability of applications on the network, and application response time. Furthermore, to track any SLA elements, it is necessary to measure and report on each.
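The measure-and-report requirement can be sketched as a table of SLA elements, each with a target and a direction, compared against measured values; the element names and thresholds are illustrative:

```python
def evaluate_sla(targets, measured):
    """Compare measured service levels against SLA targets; report each element."""
    report = {}
    for element, (target, higher_is_better) in targets.items():
        value = measured[element]
        met = value >= target if higher_is_better else value <= target
        report[element] = {"target": target, "measured": value, "met": met}
    return report

# Illustrative SLA: availability in percent (higher is better),
# response time in seconds (lower is better).
TARGETS = {
    "network_availability": (99.9, True),
    "application_response_time": (2.0, False),
}
```

A per-period report built this way gives both sides the same view of each element, which is the bridge between IT and the line of business that the chapter describes.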
SLAs can include such elements as network performance, network availability, network throughput, goals and objectives, and quality-of-service metrics (e.g., mean time to repair and installation time). Other possible SLA elements include conditions/procedures for updating or renegotiating, assignment of responsibilities and roles, reporting policies and escalation procedures, measurement of technology failures, assumptions and definitions, and trend analyses. SLAs may also include penalties for poor performance, help-desk availability, baseline data, benchmark data, application response time, measurement of process failures, application availability, customer satisfaction metrics, and rewards
for above-target performance. But a main objective of SLAs is setting and managing user expectations.

IMPROVING SERVICE LEVEL MANAGEMENT

While the concept of SLM has gained widespread recognition, implementation has been slower, in part due to the complexity of the network environment. In addition, according to the 1999 INS survey findings on SLM, it remains a continuing challenge. The good news is that 63 percent of respondents with SLM capabilities in place were satisfied with those capabilities in 1999 (according to the survey) — a dramatic improvement over the previous year. However, despite the high satisfaction with SLM, improving it was important to more than 90 percent of respondents. Furthermore, organizational issues presented the greatest challenge to improving SLM for half the respondents, and managerial issues were the top challenge for another third. Also, customer satisfaction was considered an important SLM metric by 81 percent of respondents. Finally, the top barriers to implementing or improving SLM were said to be organizational/process issues, other projects with higher priority, and the difficulty of measuring SLAs. Although SLM and SLAs are moving in the right direction by focusing on applications and end-user response time, the SLA tool market is not yet mature; SLAs are ahead of the software that monitors them. Indeed, 47 percent of the network professionals surveyed by INS in 1999 said that difficulty in measuring SLAs was a significant barrier to implementing or improving SLM. Although SLA contracts could not be monitored by software until recently, that situation is changing: vendors are starting to automate the monitoring process and trying to keep pace with the moving target of customers' changing needs. Businesses should also realize that SLAs are a tool for more than defining service levels.
Thus, SLAs should also be used to actively solicit end users' agreement to service levels that meet their needs. Often, the providers and consumers of IT services misunderstand the trade-off between the cost of the delivered service and the business need/benefit. The SLA process can help set more realistic user expectations and can support higher budget requests when user expectations exceed IT's current capabilities. Businesses can implement SLM for important goals such as improving mission-critical application availability and dependability, and reducing application response time as measured from the user's point of view. In
general terms, SLM can also enhance IT organizational efficiency and cost-effectiveness. To improve their SLM capabilities and meet these objectives, organizations can address the relevant organizational issues, providing processes and procedures that aim at consistent service delivery and associated user satisfaction. In addition, because application performance has become paramount, organizations can implement tools to monitor and measure the behavior of those mission-critical applications that depend on network availability and performance. TOWARD A BUSINESS-PROCESS FOCUS As IT continues to be a business driver, some analysts predict that SLM will move toward a focus on the business process, whereby organizations will abstract the state of the business processes that run their companies. In turn, the available data and its abstraction will consolidate into a dashboard reporting system. As organizations move toward a business dashboard, the data will be a given. Because solution providers are rapidly becoming more sophisticated in making data available, this is already happening today — and more rapidly than expected.
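The availability and response-time objectives discussed above can be checked mechanically once measurements are collected. A minimal sketch in Python follows; the targets, sample data, and function names are hypothetical illustrations, not drawn from any particular SLM product:

```python
# Sketch: evaluate SLA compliance from raw measurements.
# Thresholds and sample data are hypothetical illustrations.

def availability_pct(up_minutes: int, total_minutes: int) -> float:
    """Percentage of the reporting period the service was up."""
    return 100.0 * up_minutes / total_minutes

def percentile(values, pct: float) -> float:
    """Nearest-rank percentile of a list of response times."""
    ranked = sorted(values)
    k = max(0, min(len(ranked) - 1, round(pct / 100.0 * len(ranked)) - 1))
    return ranked[k]

def evaluate_sla(up_minutes, total_minutes, response_times_ms,
                 avail_target=99.5, resp_target_ms=2000, resp_pct=95):
    """Return a small report comparing measurements to SLA targets."""
    avail = availability_pct(up_minutes, total_minutes)
    resp = percentile(response_times_ms, resp_pct)
    return {
        "availability": avail,
        "availability_met": avail >= avail_target,
        f"p{resp_pct}_response_ms": resp,
        "response_met": resp <= resp_target_ms,
    }

# One month (43,200 minutes) with 180 minutes of downtime.
report = evaluate_sla(43_200 - 180, 43_200, [850, 900, 1200, 1900, 2600])
```

A report like this is the kind of raw material a business dashboard could consolidate, with each SLA rendered as a simple met/not-met indicator.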
Chapter 28
Information Systems Audits: What’s in It for Executives? Vasant Raval Uma G. Gupta
0-8493-1595-6/03/$0.00+$1.50 © 2003 by CRC Press LLC
Companies in which executives and top managers view the IS audit as a critical success factor often achieve significant benefits that include decreases in cost, increases in profits, more robust and useful systems, enhanced company image, and the ability to respond quickly to changing market needs and technology influences. Both of the following examples are real and occurred in companies in which one of the authors worked as a consultant. In both situations, IS auditors played a critical role not only in preventing significant monetary loss for the company, but also in enhancing the image of the company to its stakeholders:
• Scenario 1. One fine morning, auditors from the Software Publishers Association (SPA) knocked on the doors of one of your business units. They wanted to verify that every copy of every software package in the business unit was properly licensed. The unit had 1700 microcomputers on a local area network. Fortunately, information systems (IS) auditors had recently conducted an audit of software licenses in the business unit. This encouraged the IS auditors and business managers from the company to work closely with the SPA auditors, who reviewed the audit work and tested a sample of the microcomputers at the company’s facility. The SPA auditors commended the business unit for its exemplary records and outstanding monitoring of software licenses. The investigation was completed in a few hours and the company was given a clean bill in the software licensing audit.
• Scenario 2. Early in 1995, the vice president of information systems of a Fortune 500 company visited with the director of audit services and recommended that the company’s efforts to be compliant with Year 2000 (Y2K) should be audited. The vice president was sensitive to the fact that such audits, although expensive and time-consuming, do not have any immediate or significant monetary returns. After considerable discussion, it was agreed that an initial exploratory audit of the current status of the Y2K problem should be conducted. The audit was to outline and discuss the implications of the Y2K problem on the company’s profits and provide an initial estimate of the cost of conducting the audit. A few weeks later, IS auditors presented a report to the board of directors, which reviewed the findings and mandated IS managers and other managers throughout the company to invest resources where necessary to become Y2K compliant by December 1998.1
Given the critical role that IS auditors play in the financial success and stability of a company, IS audits should not be only under the purview of the information systems department. Instead, executives and other top managers should understand and support the roles and responsibilities of IS auditors and encourage their active participation at all levels of decision making. A nurturing and supportive environment for IS auditors can result in significant benefits for the entire organization. The purpose of this chapter is to present a broad overview of the IS audit function and its integral role in organizational decision making. The functions of the IS audit department are discussed and ways in which the IS audit can be used as a valuable executive decision-making tool are outlined. Recommendations for leveraging an IS audit report to increase organizational effectiveness are also outlined. WHAT IS AN IS AUDIT? Information systems audit (hereafter ISA) refers to a set of technical, managerial, and organizational services provided by a group of auditing experts in the area of information systems and technologies.
IS auditors provide a wide range of consulting services on problems, issues, opportunities, and challenges in information systems and technologies. The goal of an IS audit may vary from project to project or even from system to system. However, in general, the purpose of an IS audit is to maximize the leverage on the investments in information systems and technologies and ensure that systems are strategically aligned with the mission and overall goals of the organization. IS audits can be conducted in a number of areas, such as utilization of existing systems, investments, emerging technologies, computer security, help desks, electronic commerce, outsourcing, reengineering, and electronic data interchange (EDI). Other areas warranting an IS audit include database management, data warehousing, intranets, Web page design and maintenance, business intelligence systems, retention of IS personnel, migration from legacy systems to client/server environments, offshore software contracts, and developing strategic information systems plans. Given the dismal statistics on IS projects delivered within budget and on time, a number of companies are mandating audits of their information systems projects. Exhibit 1 identifies the different categories of IS audits.
Exhibit 1. Categories of IS Audits
• Control environment audits. Provide guidelines for enterprisewide deployment of technology resources. Examples: business continuity (or disaster recovery) plans, PC software licensing, Internet access and control, LAN security and control.
• General control audits. Review general and administrative controls for their adequacy and reliability. Examples: data center security, Internet security, end-user systems access and privileges, role and functions of steering committees.
• Financial audits:
— Review of automated controls designed as part of the systems. Examples: limit checks, compatibility checks, concurrency controls in databases.
— Provide assistance for financial audits. Examples: use of generalized audit software packages and other computer-assisted audit tools to review transactions and their financial results.
• Special projects. Projects initiated to satisfy one-time needs. Examples: feasibility study of outsourcing of projects, processes, or systems; risk analysis for proposed offshore software development initiatives.
• Emerging technologies. Review and feasibility analysis of newer technologies for the business. Examples: electronic data interchange, Web technology, telecommuting, telephony, imaging, data warehousing, data mining.
TRADITIONAL APPROACH VERSUS VALUE-ADDED APPROACH The traditional view of the IS audit function differs from the value-added view found in many progressive, forward-thinking organizations. In the traditional view, an IS audit is something that is “done to” a department, unit, or project. In a value-added approach, by contrast, an audit is viewed as something that is “done for” another department, unit, or project. This is not simply a play on words but a philosophy that differentiates between environments that are controlling and nurturing; it exemplifies workplaces where people compete versus cooperate. In the traditional approach, the audit is viewed as a product, whereas in the value-added approach the audit is viewed as a service that enhances the overall quality and reliability of the end product or service that the company produces. In traditional environments, the auditor is viewed as an adversary, cop, and trouble-maker. In a value-added environment, on the other hand, the auditor is viewed as a consultant and a counselor. The IS auditor is viewed as one who applies his or her knowledge and expertise to leverage the maximum
Exhibit 2. Traditional Approach versus Value-Added Approach to Auditing
Traditional Approach | Value-Added Approach
Something done to a unit, department, or project. | Something done for enhancing the quality, efficiency, and effectiveness of a unit, department, or project.
Audit is a product that is periodically delivered to specific units or departments. | Audit is an ongoing service provided to improve the “quality of life” of the organization.
The auditor plays an adversarial role. | The auditor is a consultant whose goal is to leverage resource utilization.
The auditor is a “best cop.” | The auditor is a houseguest.
The primary objective of auditing is to find errors and loopholes. | The primary objective of auditing is to increase the efficiency, effectiveness, and productivity of the organization.
Auditing is an expense. | Auditing is an investment.
The contribution of an auditor is temporary. | An auditor is a life-long business partner.
return on investments in information systems and technologies. The auditor is not someone who is out looking for errors but is instead an individual or a group of individuals who look for ways and means to improve the overall efficiency, effectiveness, and productivity of the company. Unlike the traditional approach, where the auditor is viewed as someone who is on assignment, the value-based approach views the auditor as a long-term business partner. See Exhibit 2 for a summary of the key differences between the traditional approach and the value-added approach. ROLE OF THE IS AUDITOR The role of an IS auditor is much more than simply auditing a project, unit, or department. An IS auditor plays a pervasive and critical role in leveraging resources to their maximum potential and also in minimizing the risks associated with certain decisions. An IS auditor, therefore, wears several hats to ensure that information systems and technologies are synergistically aligned with the overall goals and objectives of the organization. Some key roles that an IS auditor plays are outlined and discussed below: Internal Consultants Good IS auditors have a sound understanding of the business and hence can serve as outstanding consultants on a wide variety of projects. They
can offer creative and innovative solutions to problems and identify opportunities where the company can leverage its information systems to achieve a competitive edge in the marketplace. In other words, IS audits can help organizations ask critical and probing questions regarding IS investments. The consultant role covers a wide variety of issues, including cost savings, productivity, and risk minimization. IS audits can help firms realize cost savings and proactively manage risks that are frequently associated with information technologies. IS audits in many cases support the financial audit requirements in a firm. For example, one of the authors of this chapter audited a large offshore project, resulting in savings of $3.4 million to the company. The auditor interviewed over 35 technical and management staff from the business unit and from the offshore facility. Based on the recommendations of the IS auditor, the offshore software development process was reengineered. The reengineering resulted in a well-defined and structured set of functional requirements, rigorous software testing procedures, and enhanced cross-cultural communications. The implications of the IS audit were felt not only on the particular project but on all future offshore IS projects. Change Agents IS auditors should be viewed as powerful change agents within an organization. They have a sound knowledge of the business and this, combined with an acute sense of the financial, accounting, and legal ramifications of various organizational decisions, makes them uniquely qualified to push for change within an organization. For example, a company was having a difficult time implementing security measures in its information systems department. Repeated efforts to enlist the support of company employees failed miserably. Finally, the company sought the help of its IS audit team to enforce security measures.
IS auditors acted as change agents and educated employees about the consequences of failing to meet established security measures. Within three months the company had one of the tightest security ships in its industry. Experts Many IS auditors specialize in certain areas of the business, such as IS planning, security, system integration, and electronic commerce. These auditors have a good understanding not only of the technical issues, but also of the business and legal issues that may influence key information systems and projects. Hence, when putting together a team for any IS project, it is worthwhile to consider including an IS auditor as a team member.
Advisors One of the key roles of an IS auditor is to serve as an advisor to the business manager on IS issues that have an enterprisewide effect. The advisory role often spans both technical and managerial issues. Examples of situations in which IS auditors could be used as advisors include software licensing management, establishing a standardization policy for hardware and software, evaluating key IS projects, and ensuring the quality of outsourcing contracts. IS auditors not only monitor the progress of a project but also provide timely advice if the project is going haywire. It is worthwhile to always include a member of the IS audit team on IS ventures that have significant implications for the organization. Advocates IS auditors can serve as outstanding advocates to promote the information system needs and functions of business units to top management. As neutral parties who have a stake in the success of the company, their views are often likely to get the attention of top management. IS auditors can not only serve as advocates of the technology and personnel needs of the business unit, but also emphasize the strategic role of information systems in the success of both the business unit and the organization at large. IS auditors also play a critical role in ensuring the well-being of the organization. For example, IS auditors have often played a leading role in convincing top management of the importance of investing in computer security, without which the organization may simply cease to be in business. ROLE OF EXECUTIVES IN CAPITALIZING ON THE IS AUDIT FUNCTION Successful and pragmatic companies view the IS audit function as an integral and vital element in corporate decision making. Companies that view IS audit as an information systems function — or even worse, as merely an audit function — will fail to derive the powerful benefits that an IS audit can provide.
This section discusses how companies can use the IS audit to achieve significant benefits for the entire organization. Be Proactive The IS audit should not be viewed as a static or passive function in an organization that is called to act on a “need-only” basis. Instead, the IS audit function should be managed proactively and should be made an integral part of all decision making in the organization. The auditor is an internal consultant whose primary goal is to provide the information and tools necessary to make sound decisions. The auditor’s role is not limited to one department or even to one project; instead, the goal of the auditor is to help each business unit make sound technology decisions so as to have a far-reaching and positive impact on the entire organization. However, this
cannot be achieved unless companies are proactive in tapping into the skill set of their IS auditors. Increase Visibility of the IS Audit Executives who view the IS audit function as a necessary evil do a grave injustice to their organizations. Top management should take an active role in advocating the contribution of the IS audit team to the organization. Executives must play an active role in promoting the critical role and significant contributions of IS auditors. Publicizing projects and systems where an IS audit resulted in significant savings to the company or led to better systems is a good way to increase organizational understanding of IS audits. Many companies also mandate IS audits for all projects and systems that exceed a certain minimum dollar value, thus increasing the visibility and presence of IS auditors. Enhance the IS Auditor’s Image Encourage business unit managers to view the IS audit not as a means to punish individuals or units, but as an opportunity to better utilize information systems and technologies to meet the overall goals of the organization. Include IS auditors in all key strategic committees and long-range planning efforts. Bring IS auditors in early in the development phase of a project so that project members view them as team players rather than “cops.” Provide Resources The IS audit, like other audit functions, requires hardware, software, and training resources. Companies that recognize the critical role of IS auditors support their resource needs and encourage their active participation. They recognize that a good and robust audit system can pay for itself many times over in a short span of time. Given the rapid changes in technology, auditors not only need hardware and software resources to help them stay on the leading edge, but should also be given basic training in the use of such technologies.
Communicate, Communicate, Communicate Effective communication between business units and IS auditors is vital for a healthy relationship between the two groups. Business unit managers should know the specific role and purpose of an IS audit. They should have a clear understanding of who will review the auditors’ report and the actions that will be initiated based on that report. IS auditors, on the other hand, should be more open in their communication with business managers and communicate issues and concerns, both informally and formally.
They should always be good team players and understand that their role is to help and support the organization in achieving its full potential. CONCLUSION The IS audit is a critical function for any organization. What separates successful organizations from less successful ones is the ability to leverage the IS audit function as a vital element in organizational decision making. Companies in which executives and top managers view the IS audit as a critical success factor often achieve significant benefits, including decreases in costs, increases in profits, more robust and useful systems, enhanced company image, and the ability to respond quickly to changing market needs and technology influences. Notes 1. Editor’s note: Although this example is dated and tied to an issue that no longer receives much attention, it illustrates well the importance of understanding the effects of IS audits and their results on the business performance of an organization.
Chapter 29
Cost-Effective IS Security via Dynamic Prevention and Protection Christopher Klaus
This chapter presents a fresh perspective on how the IS security mechanism should be organized and accomplished. It discusses the unique characteristics of the cyberspace computing environment and how those characteristics affect IS security problems. It describes three approaches to resolving those problems and analyzes the effectiveness of each. THE CYBERSPACE ENVIRONMENT In cyberspace, one cannot see, touch, or detect a problem. Most organizations find it difficult to allocate funds to address problems that their executives cannot directly experience. Comprehensive prevention and protection is a complex, multilayer process that reaches across networks, servers, desktops, and applications. It takes a considerable expenditure on software and services, and on highly trained staff, to properly protect online information assets from attack or misuse. And yet, security for online business operations is rarely a direct part of an organization’s core expertise. For these reasons, many organizations underinvest in security. The problem with underprotection is that security breaches have a direct effect on profitability, whether through business interruption loss, merger and acquisition due diligence, legal and shareholder/stakeholder liability, regulatory compliance, or negative publicity. An information processing application or network looks the same (at least externally) from the time of an attacker’s initial reconnaissance through penetration and
subsequent attack on the application or network. If risks associated with the network are not adequately addressed, economic and operational harm may occur before the damage is discovered and remedied. The specific risk facing any individual organization is defined by the combination of the threat to the information resource or the application that processes it, its vulnerability to compromise, the economic consequences of an assault, and the likelihood of a successful attack. During the past few years, a large number of commercial and government organizations have studied the challenges associated with reducing risk within such a complex environment. Within the physical domain, decision makers typically have minutes, hours, days, or even weeks to respond to potential or actual attacks by various types of intruders. This is not true in the world of cyberspace. Security decisions need to be made almost instantaneously, requiring highly automated sensors and management platforms capable of helping security administrators focus on the most important security events immediately, with an absolute minimum of false alarms, or false positives. At the same time, the security solution must not unduly affect normal online business operations. The Four Categories of Human Threats Four basic human threat categories exist in cyberspace: internal and external, structured and unstructured. Internal Threat: Unstructured. The unstructured internal threat is posed by the average information processing application user. Typically, this individual lacks awareness of existing technical computing vulnerabilities and is responsible for such things as device use errors and network crashes. These result from inadvertent misuse of computing resources and poor training. When these individuals exploit computing resources for illegal gain, they typically misuse authorized privileges or capitalize on obvious errors in file access controls.
Internal Threat: Structured. The structured internal threat is posed by an authorized user who possesses advanced knowledge of network vulnerabilities. This person uses this awareness to work around the security provisions in a simplistically configured network. An aggressive, proactive IS security mechanism must be deployed to counter the threat that this person’s activities may pose. External Threat: Unstructured. The unstructured external threat created by the average World Wide Web user is usually not malicious. Typically, this individual lacks the skills and motivation to cause serious damage to a network. However, this person’s curiosity can lead to unintentional system crashes and the loss of data files.
External Threat: Structured. The structured external threat stems from someone with detailed knowledge of network vulnerabilities. This person has access to both manual and automated attack tools that permit compromising most IS security programs, especially when the intruder does not perceive a risk of detection or apprehension. In particular, the development of hybrid threats (automated integrations of virus technologies and attack techniques) has created a virulent new class of structured external threats designed specifically to elude traditional security infrastructure such as firewalls or anti-virus mechanisms.
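The four categories can be expressed as a small model for triage purposes. The relative weights below are purely illustrative assumptions, not figures from this chapter; the structured insider is weighted highest because such a person is both skilled and already authorized:

```python
# Sketch: model the four human threat categories with an
# illustrative relative-risk weight for each (hypothetical values).
from enum import Enum

class Origin(Enum):
    INTERNAL = "internal"
    EXTERNAL = "external"

class Skill(Enum):
    UNSTRUCTURED = "unstructured"  # little knowledge of vulnerabilities
    STRUCTURED = "structured"      # detailed knowledge, attack tools

# Hypothetical weights: structured attackers and insiders rate higher.
RELATIVE_WEIGHT = {
    (Origin.INTERNAL, Skill.UNSTRUCTURED): 1,  # accidental misuse
    (Origin.EXTERNAL, Skill.UNSTRUCTURED): 1,  # curious outsider
    (Origin.EXTERNAL, Skill.STRUCTURED): 3,    # skilled outsider
    (Origin.INTERNAL, Skill.STRUCTURED): 4,    # skilled, authorized insider
}

def threat_weight(origin: Origin, skill: Skill) -> int:
    """Look up the illustrative weight for a threat category."""
    return RELATIVE_WEIGHT[(origin, skill)]
```

A real program would derive such weights from its own risk assessment rather than fixed constants; the point is only that the categories form a small, enumerable space.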
The IS Security Issues of the Virtual Domain Within the virtual domain, the entire sequence that may be associated with a probe, intrusion, and compromise of a network, server, or desktop often can be measured in milliseconds. An attacker needs to locate only one exposed vulnerability. By contrast, the defenders of an application or network must address hundreds of potential vulnerabilities across thousands of devices. At the same time, these defenders must continue to support an array of revenue-generating or mission-enabling operations. The virtual domain is not efficiently supported by conventional manual audits, random monitoring of information processing application operations, and nonautomated decision analysis and response. It requires strategic insertion and placement of technical and procedural countermeasures, as well as rapid, automated responses to unacceptable threat and vulnerability conditions involving a wide array of attacks and misuse. READY-AIM-FIRE: THE WRONG APPROACH The primary challenges associated with bringing the network security domain under control result from its relative complexity, as well as the shortage of qualified professionals who understand how to operate and protect it. Some organizations have adequate and well-trained IS staff. The norm, however, is a small, highly motivated but outgunned team that focuses most of its energies on user account maintenance, daily emergencies, and general network design reviews. Few staff have time to study evolving threat, vulnerability, and safeguard (countermeasure) data, let alone develop policies and implementation plans. Even fewer have time to monitor network activity for signs of application or network intrusion or misuse. This situation results in a “ready-aim-fire” response to IS security vulnerabilities, achieving little more than creating a drain on the organization. This is the typical sequence of events: 1.
IS executives fail to see the network in the context of the actual risk conditions to which it is exposed. These individuals understand
Exhibit 1. The Ad-Hoc Approach to Safeguard Selection Does Not Work
the basic technology differences between such operating systems as Windows NT/2000/XP and Sun Solaris. They also understand how products such as Oracle, Sybase, Internet Explorer, Microsoft Word, PowerPoint, and Excel enhance operations. However, these individuals typically have little knowledge about the vulnerabilities associated with the use of such products and can allow threats to enter, steal, destroy, or modify the enterprise’s most sensitive data. 2. IS safeguards are implemented in an ad hoc manner due to this incomplete understanding of the problem (Exhibit 1). There is no real program to map security exposures to IS operational requirements, no study of their effects on either threats or vulnerabilities, and no analysis of the return on security investment. That is: SECURITY = DIRECT TECHNICAL COUNTERMEASURES
(The latter include such things as firewalls, data encryption, and security patches.) 3. These organizations are left with a false sense of security (Exhibit 2). They believe that the risk has been addressed, when in fact many threats and vulnerabilities remain. 4. As a result, risk conditions continue to degrade as users alter system and safeguard configurations and work around the safeguards.
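The degradation described in step 4, in which users and administrators quietly alter configurations, can be surfaced automatically by comparing current settings against an approved baseline. A minimal sketch follows; the setting names and values are hypothetical examples, not taken from any specific product:

```python
# Sketch: detect configuration drift against an approved baseline.
# Setting names and values are hypothetical examples.

def config_drift(baseline: dict, current: dict) -> dict:
    """Report settings changed, removed, or added relative to
    the approved security baseline."""
    changed = {k: (baseline[k], current[k])
               for k in baseline if k in current and current[k] != baseline[k]}
    removed = sorted(k for k in baseline if k not in current)
    added = sorted(k for k in current if k not in baseline)
    return {"changed": changed, "removed": removed, "added": added}

baseline = {"telnet": "disabled", "password_min_len": 8,
            "guest_account": "disabled"}
current = {"telnet": "enabled", "password_min_len": 8, "ftp": "enabled"}
drift = config_drift(baseline, current)
```

Run periodically, a check like this turns the silent erosion of step 4 into a visible, reportable event.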
Exhibit 2. What the Network Really Looks Like
(Figure: a network diagram showing modem access, a comms router, and a firewall/router configuration facing the world, annotated “We don’t know what’s addressed and what’s not.” The deployed safeguards yield only partial (maybe) threat reduction and only partial (maybe) vulnerability reduction against the four threat categories: external unstructured, external structured, internal unstructured, and internal structured.)
LOOKING FOR MANAGEMENT COMMITMENT The approach just described is obviously not the answer. As noted in Exhibit 3, online vulnerability conditions are complex; encompass many networks, servers, and desktops; and require more than token attention. Success within the virtual domain will depend on the acceptance and adoption of sound processes that support a sequential and adaptive IS security model. However, an attempt to obtain the commitment of the organization’s senior executives to an investment in new IS security may be rejected. The key to obtaining support from senior executives is a clear presentation of how the organization will receive a return on its investment. A GOOD START IS WITH WHAT IS UNDERSTOOD The best place to start developing a new IS security solution is with what is already understood and can be applied directly to the new problem domain. In this case, one starts with the following steps:
1. Define sound security processes.
2. Create meaningful and enforceable policies.
3. Implement organizational safeguards.
4. Establish appropriate program metrics.
5. Conduct frequent IS security program audits, which evaluate variance between specific organizational IS security policies and their actual implementation.
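Steps 2 and 5 above hinge on policies concrete enough to be checked. One way to make a policy enforceable, sketched here as a hypothetical illustration (all rule names and observed values are invented), is to express each requirement as a machine-checkable rule and report the variance between policy and practice:

```python
# Sketch: an enforceable policy expressed as checkable rules, plus an
# audit reporting the variance between policy and actual practice.
# Rule names and the observed state are hypothetical examples.

POLICY = {
    "os_patches_current": lambda s: s["days_since_patch"] <= 30,
    "default_passwords_changed": lambda s: not s["default_passwords"],
    "audit_logging_enabled": lambda s: s["audit_logging"],
    "modem_access_disabled": lambda s: not s["modem_access"],
}

def audit(state: dict):
    """Return the failing rules and the variance (% of policy not met)."""
    failures = [name for name, rule in POLICY.items() if not rule(state)]
    variance = 100.0 * len(failures) / len(POLICY)
    return failures, variance

observed = {"days_since_patch": 95, "default_passwords": False,
            "audit_logging": True, "modem_access": True}
failures, variance = audit(observed)
```

The variance figure gives step 4 (program metrics) something concrete to track from one audit to the next.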
Exhibit 3. Vulnerabilities Are Located throughout the Network Architecture
(Figure: vulnerabilities arise at each layer of the network architecture. Communications and services layer: TCP/IP, IPX, X.25, Ethernet, FDDI, router configurations, hubs/switches, and modem access to the world. Operating systems layer: UNIX, Windows NT, Windows 2000, Windows XP, Mac OS X, Novell, MVS, DOS, OS/2, VMS. Applications layer: databases, Web server, Internet browser, maintenance, office automation.)
Without established process and rigor, successful, meaningful reduction of network risk is highly unlikely. This situation also ensures that there will be a major variance between the actual IS security program implementation and the organization’s IS security policy. DIRECT RISK MITIGATION Without an understanding of the total risk to their networks, many organizations move quickly to implement conventional baseline IS security solutions such as:
• Identification and authentication (I&A)
• Data encryption
• Access control
This approach is known as direct risk mitigation. Organizations that implement this approach will experience some reduction in risks. However, these same organizations will tend to leave significant other risks unaddressed. The network security domain is too complex for such an ad hoc approach to be effective.
Exhibit 4. Implementation of Sound Risk Management Process Will Ensure Reduced Risk
(Figure: a cyclical risk management model. Perform frequent risk assessments, which form the basis for security program and requirements planning; implement and enforce value-added safeguards (procedures and technical countermeasures) through security program implementation and management; perform frequent risk posture assessments (reviews/audits) to measure effectiveness; and refine as necessary.)
Incorporating risk analysis, policy development, and traditional audits into the virtual domain will provide the initial structure required to address many of these issues. At a minimum, the IS security program must consist of well-trained personnel who:
• Adhere to sound, standardized processes
• Implement valid procedural and technical solutions
• Provide for audits intended to support potential attack or information application misuse analysis
This approach is captured by the formula:
SECURITY = RISK ANALYSIS + POLICY + DIRECT TECHNICAL COUNTERMEASURES + AUDIT
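One way to present the return-on-investment case behind this formula to senior executives is the standard annualized loss expectancy (ALE) model. This model is a common textbook illustration rather than something taken from this chapter, and every figure below is hypothetical:

```python
# Sketch: annualized loss expectancy (ALE) and return on security
# investment (ROSI), a common textbook model with hypothetical figures.

def ale(annual_rate: float, loss_per_incident: float) -> float:
    """Annualized loss expectancy = incidents/year * loss/incident."""
    return annual_rate * loss_per_incident

def rosi(ale_before: float, ale_after: float, safeguard_cost: float) -> float:
    """Net benefit of the safeguard as a fraction of its cost."""
    return (ale_before - ale_after - safeguard_cost) / safeguard_cost

# Hypothetical: 4 incidents/year at $50,000 each; safeguards cut the
# rate to 1/year at an annual cost of $60,000.
before = ale(4, 50_000)
after = ale(1, 50_000)
return_frac = rosi(before, after, 60_000)
```

A positive result says the safeguard returns more in avoided loss than it costs, which is exactly the presentation the preceding section argues executives need to see.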
If implemented properly, direct risk mitigation provides 40 to 60 percent of the overall IS security solution (Exhibit 4). This model begins, as should all security programs, with risk assessment. The results support computing operations and essential enterprise planning efforts. Without proper risk analysis processes, the IS security policy and program lack focus and traceability (Exhibit 5). Once a risk assessment has been conducted, the individuals responsible for implementation will acquire, configure, and operate the defined network solution. Until now, little has been done to ensure that clear technical IS security policies are provided to these personnel. The lack of guidance and rationale has resulted in the acquisition of non-value-added technical safeguards and the improper and insecure configuration of the associated
DESIGNING AND OPERATING AN ENTERPRISE INFRASTRUCTURE
[Figure: network security policy diagram. Policy addresses specific threats and vulnerabilities, ROI-based. Safeguards shown: disable maintenance backdoor, change default password, configure comms as filter, apply all OS patches, install identification & authentication, monitoring banner page, encrypt data files, user security training, audit/review, and controlled modem access, with a router between the network and the world. Threat categories: external unstructured, external structured, internal unstructured, internal structured]
Exhibit 5. Ensuring a Sound Security Policy
applications once these mechanisms have arrived within the operational environment. One other major problem typically occurs within the implementation phase. Over time, administrators and users alter system configurations. These alterations re-open many of the vulnerabilities associated with the network’s communications services, operating systems, and applications. This degradation has driven the requirement represented within the final phase of the risk management cycle. Risk posture assessments (audits) are linked to the results of the risk assessment. Specifically, risk posture assessments determine the organizational IS security policy compliance levels, particularly as they define the variance from the policy. The results of such assessments highlight program weaknesses and support the continuous process of measuring compliance of the IS security policy against actual security practice. Organizations can then facilitate a continuous improvement process to reach their goals.
Risk Posture Assessment Results
The results of a risk posture assessment can be provided in a number of individual formats. Generally, assessment results may be provided to:
• Technicians and engineers in a format that supports corrective action
• Security and network managers in a format that supports program analysis and improvement
• Operations executives in a format that summarizes the overall effectiveness of the IS security program and its value to the organization
This approach is sound, responsive, and simple to implement. However, major problems still exist, and this approach addresses only 40 to 60 percent of the solution. Attackers do not care about this 40 to 60 percent — they only care about the remaining 40 to 60 percent that has been left exposed. Any success associated with this type of process depends on proper initial system and countermeasure implementation and a fairly static threat and vulnerability environment. This is not the case in most organizations. Normally, the IS security exposures not addressed by this approach include:
• An active, highly knowledgeable, and evolving threat
• A greatly reduced network security decision and response cycle
• Network administrators and users who misconfigure or deliberately work around the IS security countermeasures
• Low levels of user and administrator awareness of the organization’s IS security policies and procedures — and the threats and vulnerabilities those policies are designed to detect and resolve
• Highly dynamic vulnerability conditions
The general classes of vulnerabilities involve:
• Design inadequacies in hardware and software
• Implementation flaws, such as insecure file transfer mechanisms
• Administration deficiencies
Although direct risk mitigation is a good start to enhancing IS security, serious threats and vulnerability conditions can still leave the network highly susceptible to attack and misuse. The next level of response is described as dynamic prevention and protection.
DYNAMIC PREVENTION AND PROTECTION
The world of cyberspace requires an adaptive, highly responsive process and product set to ensure ongoing, consistent risk reduction. This solution is dynamic prevention and protection, which is discussed further in this chapter. It is captured in the formula:
SECURITY = RISK ANALYSIS + POLICY + IMPLEMENTATION + THREAT AND VULNERABILITY MONITORING + ACTIVE BLOCKING OF IMMEDIATE THREATS + LONGER-TERM RESPONSE TO NONCRITICAL THREATS AND VULNERABILITIES
The dynamic protection model consists of a proactive cyclic risk management approach that includes active network and systems monitoring, detection, and response. A comprehensive security management mechanism becomes a natural outgrowth of the overall IS environment and provides overlapping, yet complementary, network, server, and desktop management services. These performance and security management mechanisms are required to support an organization’s overall operational requirements. The network security management application supports the unique variables associated with the network security domain. Its architectural components address and support the following variables.
• Attack analysis and response. Attack analysis and response is the real-time monitoring of attack recognition signatures, network protocol configurations, and other suspicious activities, including viruses, probing activity, and unauthorized modification of system access control mechanisms. Real-time monitoring provides the ability to rapidly detect unauthorized activity and respond with a variety of counterthreat techniques. The responses can range from simple IS security officer notification to proactive blocking of suspect users or behaviors, or automated reconfiguration of identified weaknesses or communications paths.
• Misuse analysis and response. Misuse analysis and response is the real-time monitoring of the internal misuse of online resources. Typically, misuse is associated with activities that do not impact operational computing effectiveness, but are counter to documented policy regarding the acceptable use of organizational systems and resources. Automated response actions include denial of access, sending warning messages to the offending individuals, and the dispatch of e-mail messages to appropriate managers.
• Vulnerability analysis and response.
Vulnerability analysis and response consists of frequent, automated scanning of network components to identify unacceptable security-related vulnerability conditions — including automatic vulnerability assessment and reconfiguration for other potentially suspect devices once an active attack has been identified. This unacceptability is determined by a failure to conform to the organization’s IS security policy. The scanning includes automated detection of relevant design and administration vulnerabilities. Detection of the vulnerabilities leads to a number of user-defined responses, including automatic correction of the exposure, the dispatch of automated e-mail corrective actions, and the issuance of warning notices.
• Configuration analysis and response. Configuration analysis and response includes frequent, automated scanning of performance-oriented configuration variables.
• Risk posture analysis and response. Risk posture analysis and response includes automated evaluation of threat activity and vulnerability conditions. This activity goes beyond basic, hard-coded detection and response capabilities. It requires and bases its response on the analysis of a number of variables such as asset value, threat profile, and vulnerability conditions. Analysis supports real-time technical modifications and countermeasures in response to dynamic risk conditions. These countermeasures may include denial of access, active blocking of suspect users or behaviors, placement of conventional decoy files, and mazing — setting up decoy files and directory structures to lock an intruder into a maze of worthless directories to track his activities and form a basis for possible prosecution.
• Audit and trends analysis. Audit and trends analysis includes the automated evaluation of threat, vulnerability, response, and awareness trends. The output of such an examination includes historical trends data associated with the IS security program’s four primary metrics: (1) risk, (2) risk posture, (3) response, and (4) awareness. This data supports both program planning and resource allocation decisions and automated assessments and reconfigurations based on clearly identified risk variables.
• Real-time user awareness support. Real-time user awareness support provides recurring IS security policy, risk, and configuration training. This component ensures that users are aware of key organizational IS security policies, risk conditions, and violations of the policies.
• Continuous requirement support.
The dynamic prevention and protection model and its related technology components support organizational requirements to continuously ensure that countermeasures are installed and properly configured. Threats are monitored and responded to in a highly effective and timely manner, and vulnerability conditions are analyzed and corrected prior to exploitation. The model also supports the minimization of system misuse and increases general user and administrator IS security awareness. With the inclusion of the model and its supporting technologies, the entire spectrum of network security is addressed and measured. Although reaching the zero percent risk level is impossible in the real world of computing and telecommunication, incorporating dynamic prevention and protection security processes and mechanisms into the overall IS security effort supports reaching and maintaining a realistic solution — that is, the best solution for any one specific organization in terms of risk management and best value for each dollar of security investment. In addition to appropriately and consistently addressing these unique network security variables, these technology modules support the requirement for defining, collecting, analyzing, and improving the IS security program’s operational metrics.
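The risk posture analysis and response component described above bases its response on variables such as asset value, threat profile, and vulnerability conditions. One minimal way to sketch that scoring and tiered response in code (the 1-to-5 scale, the multiplicative score, and the response tiers are assumptions for illustration, not from the chapter):

```python
# Hypothetical sketch: score risk as asset_value x threat x vulnerability
# (each rated 1 = low to 5 = high) and select an escalating response.

def risk_score(asset_value: int, threat: int, vulnerability: int) -> int:
    """Each input is a 1 (low) to 5 (high) rating; max score is 125."""
    return asset_value * threat * vulnerability

def select_response(score: int) -> str:
    # Tiers are illustrative thresholds, not a published standard.
    if score >= 75:
        return "block user/behavior and reconfigure exposed path"
    if score >= 40:
        return "notify IS security officer and deploy decoy (mazing)"
    if score >= 15:
        return "schedule corrective action"
    return "log and monitor"

# A high-value server facing an active threat with a known weakness:
print(risk_score(5, 4, 4))                  # 80
print(select_response(risk_score(5, 4, 4)))
```

The point of the sketch is the shape of the decision, a response proportional to combined risk variables, rather than any particular threshold values.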
Chapter 30
Reengineering the Business Continuity Planning Process Carl B. Jackson
CONTINUITY PLANNING MEASUREMENTS
There is a continuing indication of a disconnect between executive management’s perceptions of continuity planning (CP) objectives and the manner in which they measure its value. Traditionally, CP effectiveness was measured in terms of a pass/fail grade on a mainframe recovery test, or on the perceived benefits of backup/recovery sites and redundant telecommunications weighed against the expense for these capabilities. The trouble with these types of metrics is that they only measure CP direct costs, or indirect perceptions as to whether a test was effectively executed. These metrics do not indicate whether a test validates the appropriate infrastructure elements or even whether it is thorough enough to test a component until it fails, thereby extending the reach and usefulness of the test scenario. Thus, one might inquire as to the correct measures to use. While financial measurements do constitute one measure of the CP process, others measure the CP’s contribution to the organization in terms of quality and effectiveness, which are not strictly weighed in monetary terms. The contributions that a well-run CP process can make to an organization include:
• Sustaining growth and innovation
• Enhancing customer satisfaction
• Providing for people needs
• Improving overall mission-critical process quality
• Providing for practical financial metrics
0-8493-1595-6/03/$0.00+$1.50 © 2003 by CRC Press LLC
A RECIPE FOR RADICAL CHANGE: CP PROCESS IMPROVEMENT
Just prior to the new millennium, experts in organizational management efficiency began introducing performance process improvement disciplines. These process improvement disciplines have been slowly adopted across many industries and companies for improvement of general manufacturing and administrative business processes. The basis of these and other improvement efforts was the concept that an organization’s processes (Process; see Exhibit 1) constituted the organization’s fundamental lifeblood and, if made more effective and more efficient, could dramatically decrease errors and increase organizational productivity. An organization’s processes are a series of successive activities, and when they are executed in the aggregate, they constitute the foundation of the organization’s mission. These processes are intertwined throughout the organization’s infrastructure (individual business units, divisions, plants, etc.) and are tied to the organization’s supporting structures (data processing, communications networks, physical facilities, people, etc.). A key concept of the process improvement and reengineering movement revolves around identification of process enablers and barriers (see Exhibit 1). These enablers and barriers take many forms (people, technology, facilities, etc.) and must be understood and taken into consideration when introducing radical change into the organization. The preceding narrative provides the backdrop for the idea of focusing on continuity planning not as a project, but as a continuous process that must be designed to support the other mission-critical processes of the organization. Therefore, the idea was born of adopting a continuous process approach to CP, along with understanding and addressing the people, technology, facility, etc. enablers and barriers.
This constitutes a significant or even radical change in thinking from the manner in which recovery planning has been traditionally viewed and executed.
Radical Changes Mandated
High management awareness and low CP execution effectiveness, coupled with the lack of consistent and meaningful CP measurements, call for radical changes in the manner in which one executes recovery planning responsibilities. The techniques used to develop mainframe-oriented disaster recovery (DR) plans of the 1980s and 1990s consisted of five to seven distinct stages, depending on whose methodology was being used, that required the recovery planner to:
1. Establish a project team and a supporting infrastructure to develop the plans.
Exhibit 1. Definitions
Activities: Activities are things that go on within a process or sub-process. They are usually performed by units of one (one person or one department). An activity is usually documented in an instruction. The instruction should document the tasks that make up the activity.
Benchmarking: Benchmarking is a systematic way to identify, understand, and creatively evolve superior products, services, designs, equipment, processes, and practices to improve the organization’s real performance by studying how other organizations are performing the same or similar operations.
Business process improvement: Business process improvement (BPI) is a methodology that is designed to bring about step-function improvements in administrative and support processes using approaches such as FAST, process benchmarking, process redesign, and process reengineering.
Comparative analysis: Comparative analysis (CA) is the act of comparing a set of measurements to another set of measurements for similar items.
Enabler: An enabler is a technical or organizational facility/resource that makes it possible to perform a task, activity, or process. Examples of technical enablers are personal computers, copying equipment, decentralized data processing, voice response, etc. Examples of organizational enablers are enhancement, self-management, communications, education, etc.
Fast analysis solution technique: FAST is a breakthrough approach that focuses a group’s attention on a single process for a one- or two-day meeting to define how the group can improve the process over the next 90 days. Before the end of the meeting, management approves or rejects the proposed improvements.
Future state solution: A combination of corrective actions and changes that can be applied to the item (process) under study to increase its value to its stakeholders.
Information: Information is data that has been analyzed, shared, and understood.
Major processes: A major process is a process that usually involves more than one function within the organization structure, and its operation has a significant impact on the way the organization functions. When a major process is too complex to be flowcharted at the activity level, it is often divided into sub-processes.
Organization: An organization is any group, company, corporation, division, department, plant, or sales office.
Process: A process is a logical, related, sequential (connected) set of activities that takes an input from a supplier, adds value to it, and produces an output to a customer.
Sub-process: A sub-process is a portion of a major process that accomplishes a specific objective in support of the major process.
System: A system is an assembly of components (hardware, software, procedures, human functions, and other resources) united by some form of regulated interaction to form an organized whole. It is a group of related processes that may or may not be connected.
Tasks: Tasks are individual elements or subsets of an activity. Normally, tasks relate to how an item performs a specific assignment.
From Harrington, H.J., Esseling, E.K.C., and Van Nimwegen, H., Business Process Improvement Workbook, McGraw-Hill, 1997, 1–20.
2. Conduct a threat or risk management review to identify likely threat scenarios to be addressed in the recovery plans.
3. Conduct a business impact analysis (BIA) to identify and prioritize time-critical business applications/networks and determine maximum tolerable downtimes.
4. Select an appropriate recovery alternative that effectively addresses the recovery priorities and timeframes mandated by the BIA.
5. Document and implement the recovery plans.
6. Establish and adopt an ongoing testing and maintenance strategy.
Shortcomings of the Traditional Disaster Recovery Planning Approach
The old approach worked well when disaster recovery of “glass-house” mainframe infrastructures was the norm. It even worked fairly well when it came to integrating the evolving distributed/client/server systems into the overall recovery planning infrastructure. However, when organizations became concerned with business unit recovery planning, the traditional DR methodology was ineffective in designing and implementing business unit/function recovery plans. Of primary concern when attempting to implement enterprisewide recovery plans was the issue of functional interdependencies. Recovery planners became obsessed with identification of interdependencies between business units and functions, as well as the interdependencies between business units and the technological services supporting time-critical functions within these business units.
Losing Track of the Interdependencies
Keeping track of departmental interdependencies for CP purposes was extremely difficult, and most methods for accomplishing this were ineffective. Numerous circumstances made consistent tracking of interdependencies difficult to achieve. Circumstances affecting interdependencies revolve around the rapid rates of change that most modern organizations are undergoing.
These include reorganization/restructuring, personnel relocation, changes in the competitive environment, and outsourcing. Every time an organizational structure changes, the CPs must change and the interdependencies must be reassessed. The more rapid the change, the more daunting the CP reshuffling. Because many functional interdependencies could not be tracked, CP integrity was lost and the overall functionality of the CP was impaired. There seemed to be no easy answers to this dilemma.
Interdependencies Are Business Processes
Why are interdependencies of concern? And what, typically, are the interdependencies? The answer is that, to a large degree, these interdependencies are the business processes of the organization and they are of concern
because they must function in order to fulfill the organization’s mission. Approaching recovery planning challenges with a business process viewpoint can, to a large extent, mitigate the problems associated with losing interdependencies, and also ensure that the focus of recovery planning efforts is on the most crucial components of the organization. Understanding how the organization’s time-critical business processes are structured will assist the recovery planner in mapping the processes back to the business units/departments; supporting technological systems, networks, facilities, vital records, people, etc.; and keeping track of the processes during reorganizations or during times of change.
THE PROCESS APPROACH TO CONTINUITY PLANNING
Traditional approaches to mainframe-focused disaster recovery planning emphasized the need to recover the organization’s technological and communications platforms. Today, many companies have shifted away from technology recovery and toward continuity of prioritized business processes and the development of specific business process recovery plans. Many large corporations use the process reengineering/improvement disciplines to increase overall organizational productivity. CP itself should also be viewed as such a process. Exhibit 2 provides a graphical representation of how the enterprisewide CP process framework should look. This approach to continuity planning consolidates three traditional continuity planning disciplines, as follows:
1. IT disaster recovery planning (DRP). Traditional IT DRP addresses the continuity planning needs of the organization’s IT infrastructures, including centralized and decentralized IT capabilities, and includes both voice and data communications network support services.
2. Business operations resumption planning (BRP). Traditional BRP addresses the continuity of an organization’s business operations (e.g., accounting, purchasing, etc.)
should they lose access to their supporting resources (e.g., IT, communications network, facilities, external agent relationships, etc.).
3. Crisis management planning (CMP). CMP focuses on assisting the client organization in developing an effective and efficient enterprisewide emergency/disaster response capability. This response capability includes forming appropriate management teams and training their members in reacting to serious company emergency situations (e.g., hurricane, earthquake, flood, fire, serious hacker or virus damage, etc.). CMP also encompasses response to life-safety issues for personnel during a crisis or response to disaster.
4. Continuous availability (CA). In contrast to the other CP components explained above, the recovery time objective (RTO) for recovery of infrastructure support resources in a 24×7 environment
[Figure: enterprisewide availability infrastructure and approach. A business-process-focused core (risk management/analysis/BIA; continuity and recovery strategy; e-business uptime requirements; benchmarking/peer analysis) surrounded by four disciplines: Business Resumption Planning (BRP), with business process/function/unit recovery planning and execution teams covering time-critical processing, resource requirements, plan development, plan exercise, quality assurance, and change management; Crisis Management Planning (CM), with global enterprise emergency and recovery response teams covering emergency response, command center planning, awareness training, and communications coordination; Disaster Recovery Planning (DRP), with technology infrastructure recovery planning and execution teams covering strategy implementation assistance, plan development, plan exercise, quality assurance, and change management; and Continuous Availability, covering continuous operations, disaster avoidance, e-technologies redundancy and diversity, and known failover and recovery timeframes]
Exhibit 2. The Enterprisewide CP Process Framework
has diminished to zero time. That is, the client organization cannot afford to lose operational capabilities for even a very short period of time without significant financial (revenue loss, extra expense) or operational (customer service, loss of confidence) impact. The CA service focuses on maintaining the highest uptime of support infrastructures, to 99 percent and higher.
MOVING TO A CP PROCESS IMPROVEMENT ENVIRONMENT
Route Map Profile and High-Level CP Process Approach
A practical, high-level approach to CP process improvement is demonstrated by breaking down the CP process into individual sub-process components, as shown in Exhibit 3. The six major components of the continuity planning business process are described below.
Current State Assessment/Ongoing Assessment. Understanding the approach to enterprisewide continuity planning as illustrated in Exhibit 3, one can measure the “health” of the continuity planning
[Figure: the CP process broken into sub-process components. Current State Assessment draws on executive goals and objectives and on process information from business units; Process Risk and Impact Baselining covers process impacts and current threats, resource criticality, and executive commitment to maximum tolerable downtime; Develop Strategy covers related process strategy, risk mitigation initiatives, and definition of infrastructure requirements; Establish Infrastructure covers development, documentation, and support; Implementation covers implementation of required BCP infrastructure and implementation in business units; Operations covers continuous improvement assistance, plan ownership (business unit owners, technology owners, human resources, recovery vendors), and organizational change. Supporting functions include information security, vital records, crisis management, information technology, physical facilities, executive protection, and audit]
Exhibit 3. A Practical, High-Level Approach to CP Process Improvement
process. During this process, existing continuity planning business sub-processes are assessed to gauge their overall effectiveness. It is sometimes useful to employ gap analysis techniques to understand current state, desired future state, and then understand the people, process, and technology barriers and enablers that stand between the current state and the future state. An approach to co-development of current state/future state visioning sessions is illustrated in Exhibit 4. The current state assessment process also involves identifying and determining how the organization “values” the CP process and measures its success (often overlooked and often leading to the failure of the CP process). Also during this process, an organization’s business processes are examined to determine the impact of loss or interruption of service on the overall business through performance of a business impact analysis (BIA). The goal of the BIA is to prioritize business processes and assign
[Figure: current state/future state visioning. Step 1: define the current state; Step 2: vision the future state; Step 3: document, analyze, and design across the gap. Working prompts include key performance indicators, key future-state continuity-related initiatives, critical success factors (How do we measure success?), and potential risks/barriers/rewards (What are our people-, process-, technology-, and mission-related risks/barriers/rewards?)]
Exhibit 4. Current State/Future State Visioning Overview
the recovery time objective (RTO) for their recovery, as well as for the recovery of their support resources. An important outcome of this activity is the mapping of time-critical processes to their support resources (e.g., IT applications, networks, facilities, communities of interest, etc.).
Process Risk and Impact Baselining. During this process, potential risks and vulnerabilities are assessed, and strategies and programs are developed to mitigate or eliminate those risks. The stand-alone risk management review (RMR) commonly looks at the security of physical, environmental, and information capabilities of the organization. In general, the RMR should identify or discuss the following areas:
• Potential threats
• Physical and environmental security
• Information security
• Recoverability of time-critical support functions
• Single points of failure
• Problem and change management
• Business interruption and extra expense insurance
• An offsite storage program, etc.
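The BIA described earlier prioritizes business processes, assigns each an RTO, and maps each to its support resources. That output can be held in a simple prioritized structure, sketched below (the process names, RTO values, and resources are invented for illustration):

```python
# Hypothetical BIA results: each business process mapped to its RTO
# (in hours) and supporting resources. Sorting by RTO yields the
# recovery priority order.
bia_results = [
    {"process": "order entry",     "rto_hours": 4,  "resources": ["ERP app", "WAN link"]},
    {"process": "payroll",         "rto_hours": 72, "resources": ["HR system"]},
    {"process": "customer portal", "rto_hours": 1,  "resources": ["web farm", "DB cluster"]},
]

# Shortest tolerable downtime recovers first.
recovery_order = sorted(bia_results, key=lambda p: p["rto_hours"])
for p in recovery_order:
    print(f'{p["process"]}: recover within {p["rto_hours"]}h '
          f'(needs {", ".join(p["resources"])})')
```

Keeping the process-to-resource mapping in one structure like this is also what makes the interdependency tracking discussed earlier survivable through reorganizations.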
Strategy Development. This process involves facilitating a workshop or series of workshops designed to identify and document the most appropriate recovery alternative to CP challenges (e.g., determining if a hotsite is needed for IT continuity purposes, determining if additional communications circuits should be installed in a networking environment, determining if additional workspace is needed in a business operations environment, etc.). Using the information derived from the risk assessments
above, design long-term testing, maintenance, awareness, training, and measurement strategies.
Continuity Plan Infrastructure. During plan development, all policies, guidelines, continuity measures, and continuity plans are formally documented. Structure the CP environment to identify plan owners and project management teams, and to ensure the successful development of the plan. In addition, tie the continuity plans to the overall IT continuity plan and crisis management infrastructure.
Implementation. During this phase, the initial versions of the continuity or crisis management plans are implemented across the enterprise environment. Also during this phase, long-term testing, maintenance, awareness, training, and measurement strategies are implemented.
Operations. This phase involves the constant review and maintenance of the continuity and crisis management plans. In addition, this phase may entail maintenance of the ongoing viability of the overall continuity and crisis management business processes.
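The current state/future state gap analysis used in the assessment sub-process above can be sketched as a set comparison across the people, process, and technology dimensions (all capability entries are hypothetical examples, not from the chapter):

```python
# Hypothetical gap analysis: compare current vs. desired CP capabilities
# per dimension; the set difference lists the gaps to close.
current_state = {
    "people":     {"awareness training"},
    "process":    {"annual mainframe test"},
    "technology": {"offsite tape backup"},
}
future_state = {
    "people":     {"awareness training", "crisis team drills"},
    "process":    {"annual mainframe test", "business-unit BIA refresh"},
    "technology": {"offsite tape backup", "hotsite failover"},
}

gaps = {dim: future_state[dim] - current_state[dim] for dim in future_state}
for dim, missing in gaps.items():
    print(dim, "->", sorted(missing))
```

In practice the enabler/barrier notes from the visioning session would be attached to each gap entry; the value of the sketch is simply that the gap list, not the full future state, drives the improvement plan.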
HOW DOES ONE GET THERE? THE CONCEPT OF THE CP VALUE JOURNEY
The CP value journey is a helpful mechanism for co-development of CP expectations by the organization’s top management group and those responsible for recovery planning. To achieve a successful and measurable recovery planning process, the following checkpoints along the CP value journey should be considered and agreed upon. The checkpoints include:
• Defining success. Define what a successful CP implementation will look like. What is the future state?
• Aligning the CP with business strategy. Challenge objectives to ensure that the CP effort has a business-centric focus.
• Charting an improvement strategy. Benchmark where the organization and its peers stand, set goals based on the organization’s position relative to its peers, and identify which critical initiatives will help the organization achieve its goals.
• Becoming an accelerator. Accelerate the implementation of the organization’s CP strategies and processes. In today’s environment, speed is a critical success factor for most companies.
• Creating a winning team. Build an internal/external team that can help lead the company through CP assessment, development, and implementation.
• Assessing business needs. Assess time-critical business process dependence on the supporting infrastructure.
DESIGNING AND OPERATING AN ENTERPRISE INFRASTRUCTURE

• Documenting the plans. Develop continuity plans that focus on ensuring that time-critical business processes will be available.
• Enabling the people. Implement mechanisms that help enable rapid reaction and recovery in times of emergency, such as training programs, a clear organizational structure, and a detailed leadership and management plan.
• Completing the organization's CP strategy. Position the organization to complete the operational- and personnel-related milestones necessary to ensure success.
• Delivering value. Focus on achieving the organization's goals while simultaneously envisioning the future and considering organizational change.
• Renewing/recreating. Challenge the new CP process structure and organizational management to continue to adapt and meet the challenges of demonstrating availability and recoverability.

This value journey technique for raising the awareness level of management helps both facilitate meaningful discussions about the CP process and ensure that the resulting CP strategies truly add value. As discussed later, this value-added concept will also provide additional metrics by which the success of the overall CP process can be measured. In addition to the approaches of CP process improvement and the CP value journey mentioned above, introducing people-oriented organizational change management (OCM) concepts is an important component of implementing a successful CP process.

HOW IS SUCCESS MEASURED? BALANCED SCORECARD CONCEPT1

A complement to the CP process improvement approach is the establishment of meaningful measures or metrics that the organization can use to weigh the success of the overall CP process. Traditional measures include:

• How much money is spent on hotsites?
• How many people are devoted to CP activities?
• Was the hotsite test a success?
Instead, the focus should be on measuring the CP process contribution to achieving the overall goals of the organization. This focus helps to:

• Identify agreed-upon CP development milestones
• Establish a baseline for execution
• Validate CP process delivery
• Establish a foundation for management satisfaction to successfully manage expectations
The CP balanced scorecard includes a definition of the:

• Value statement
• Value proposition
• Metrics/assumptions on reduction of CP risk
• Implementation protocols
• Validation methods

Exhibit 5. Balanced Scorecard Concept (figure: a definition of the "future" state — vision, strategy/goals, and how the company will differ — feeding critical success factors (CSFs) and balanced scorecard measurements across the growth and innovation, customer satisfaction, people, process quality, and financial dimensions)
Exhibit 5 and Exhibit 6 illustrate the balanced scorecard concept and show examples of the types of metrics that can be developed to measure the success of the implemented CP process. Included in this balanced scorecard approach are the new metrics upon which the CP process will be measured. Following this balanced scorecard approach, the organization should define what the future state of the CP process should look like (see the preceding CP value journey discussion). This future state definition should be co-developed by the organization's top management and those responsible for development of the CP process infrastructure. Exhibit 4 illustrates the current state/future state visioning overview, a technique that can also be used for developing expectations for the balanced scorecard. Once the future state is defined, the CP process development group can outline the CP process implementation critical success factors in the areas of:

• Growth and innovation
• Customer satisfaction
• People
• Process quality
• Financial state
These measures must be uniquely developed based on the specific organization's culture and environment.
Exhibit 6. Continuity Process Scorecard

Question: How should the organization benefit from implementation of the following continuity process components in terms of people, processes, technologies, and mission/profits?

Continuity planning process components (each evaluated against people, processes, technologies, and mission/profits):

• Process methodology
• Documented DRPs
• Documented BRPs
• Documented crisis management plans
• Documented emergency response procedures
• Documented network recovery plan
• Contingency organization walkthroughs
• Employee awareness program
• Recovery alternative costs
• Continuous availability infrastructure
• Ongoing testing programs
• etc.
WHAT ABOUT CONTINUITY PLANNING FOR WEB-BASED APPLICATIONS?

Evolving with the birth of the Web and Web-based businesses is the requirement for 24×7 uptime. Traditional recovery time objectives have disappeared for certain business processes and support resources that support the organization's Web-based infrastructure. Unfortunately, simply preparing Web-based applications for sustained 24×7 uptime is not the only answer. There is no question that application availability issues must be addressed, but the reliability and availability of other Web-based infrastructure components (such as computer hardware, Web-based networks, database file systems, Web servers, and file and print servers), as well as the physical, environmental, and information security concerns relative to each of these (see RMR above), must also be addressed. Preparing the entirety of this infrastructure to remain available through major and minor disruptions is usually referred to as continuous or high availability.
Continuous availability (CA) is not simply bought; it is planned for and implemented in phases. The key to a reliable and available Web-based infrastructure is to ensure that each of the components of the infrastructure has a high degree of resiliency and robustness. To substantiate this statement, Gartner Research reports: "Replication of databases, hardware servers, Web servers, application servers, and integration brokers/suites helps increase availability of the application services. The best results, however, are achieved when, in addition to the reliance on the system's infrastructure, the design of the application itself incorporates considerations for continuous availability. Users looking to achieve continuous availability for their Web applications should not rely on any one tool but should include the availability considerations systematically at every step of their application projects."2 Implementing a continuous availability methodological approach is the key to an organized and methodical way to achieve 24×7 or near-24×7 availability. Begin this process by understanding business process needs and expectations, and the vulnerabilities and risks of the network infrastructure (e.g., Internet, intranet, extranet, etc.), including undertaking single-points-of-failure analysis. As part of considering implementation of continuous availability, the organization should examine the resiliency of its network infrastructure and the components thereof, including the capability of its infrastructure management systems to handle network faults, network configuration and change, the ability to monitor network availability, and the ability of individual network components to handle capacity requirements. See Exhibit 7 for a pictorial representation of this methodology. The CA methodological approach is a systematic way to consider and move forward in achieving a highly available Web-based environment.
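The single-points-of-failure analysis mentioned above reduces to simple arithmetic: components in a chain multiply their availabilities, while redundant components multiply their failure probabilities. The sketch below illustrates the calculation; the function names and availability figures are illustrative assumptions, not figures from this chapter.

```python
# Sketch of single-points-of-failure arithmetic (illustrative only).

def serial_availability(components):
    """End-to-end availability when every component in the chain must work."""
    result = 1.0
    for a in components:
        result *= a
    return result

def redundant_pair(a):
    """Availability of two identical components where either one suffices."""
    return 1.0 - (1.0 - a) ** 2

# A hypothetical request path: firewall -> Web server -> app server -> database.
chain = [0.999, 0.995, 0.995, 0.999]
baseline = serial_availability(chain)

# Mitigate the two weakest links (the single points of failure) with
# redundant pairs and recompute end-to-end availability.
improved = serial_availability(
    [0.999, redundant_pair(0.995), redundant_pair(0.995), 0.999])

print(f"baseline: {baseline:.4%}")  # every link is a single point of failure
print(f"improved: {improved:.4%}")  # redundancy applied to the weak links
```

Note that the chain is only as available as the product of its links, which is why Marcus and Stern's rule about removing single-points-of-failure, discussed below, has such leverage.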
A very high-level overview of this methodology is as follows.

• Assessment/planning. During this phase, the enterprise should endeavor to understand the current state of business process owner expectations/requirements and the components of the technological infrastructure that support Web-based business processes. Utilizing both interview techniques (people to people) and existing system and network automated diagnosis tools will assist in understanding availability status and concerns.
• Design. Given the results of the current state assessment, design the continuous availability strategy and implementation/migration plans. This will include developing a Web-based infrastructure classification system to be used to classify the governance processes used for granting access to and use of support for Web-based resources.
Exhibit 7. Continuous Availability Methodological Approach (figure: four phases — assessment/planning, covering business process owner needs and expectations, infrastructure stability audit, infrastructure management assessment, network resiliency assessment, SLA review and assessment, and an infrastructure availability assessment strategy; design, producing the CA design and migration plan; implementation, producing the CA infrastructure; and operations/monitoring, producing the operational CA infrastructure)
• Implementation. Migrate existing infrastructures to the Web-based environment according to design specifications as determined during the design phase.
• Operations/monitoring. Establish operational monitoring techniques and processes for the ongoing administration of the Web-based infrastructure.

Along these lines, in their book Blueprints for High Availability: Designing Resilient Distributed Systems,3 Marcus and Stern recommend several fundamental rules for maximizing system availability (paraphrased):

• Spend money…but not blindly. Because quality costs money, investing in an appropriate degree of resiliency is necessary.
• Assume nothing. Nothing comes bundled when it comes to continuous availability. End-to-end system availability requires up-front planning and cannot simply be bought and dropped in place.
• Remove single-points-of-failure. If a single link in the chain breaks, regardless of how strong the other links are, the system is down. Identify and mitigate single-points-of-failure.
• Maintain tight security. Provide for the physical, environmental, and information security of Web-based infrastructure components.
• Consolidate servers. Consolidate many small servers' functionality onto larger and less numerous servers to facilitate operations and reduce complexity.
• Automate common tasks. Automate the commonly performed systems tasks. Anything that can be done to reduce operational complexity will assist in maintaining high availability.
• Document everything. Do not discount the importance of system documentation. Documentation provides audit trails and instructions to present and future systems operators on the fundamental operational intricacies of the systems in question.
• Establish service level agreements (SLAs). It is most appropriate to define enterprise and service provider expectations ahead of time. SLAs should address system availability levels, hours of service, locations, priorities, and escalation policies.
• Plan ahead. Plan for emergencies and crises, including multiple failures, in advance of actual events.
• Test everything. Test all new applications, system software, and hardware modifications in a production-like environment prior to going live.
• Maintain separate environments. Provide for separation of systems, when possible. This separation might include separate environments for the following functions: production, production mirror, quality assurance, development, laboratory, and disaster recovery/business continuity site.
• Invest in failure isolation. Plan — to the degree possible — to isolate problems so that if or when they occur, they cannot boil over and affect other infrastructure components.
• Examine the history of the system. Understanding system history will assist in understanding what actions are necessary to move the system to a higher level of resiliency in the future.
• Build for growth. A given in the modern computer era is that reliance on system resources increases over time. As enterprise reliance on system resources grows, the systems must grow.
Therefore, adding systems resources to existing reliable system architectures requires preplanning and concern for workload distribution and application leveling.

• Choose mature software. It should go without saying that mature software that supports a Web-based environment is preferred over untested solutions.
• Select reliable and serviceable hardware. As with software, selecting hardware components that have demonstrated high mean times between failures is preferable in a Web-based environment.
• Reuse configurations. If the enterprise has stable system configurations, reuse or replicate them as much as possible throughout the environment. The advantages of this approach include ease of support, pretested configurations, a high degree of confidence for new rollouts, the possibility of bulk purchasing, spare parts availability, and less to learn for those responsible for implementing and operating the Web-based infrastructure.
• Exploit external resources. Take advantage of other organizations that are implementing and operating Web-based environments. It is possible to learn from others' experiences.
• One problem, one solution. Understand, identify, and utilize the tools necessary to maintain the infrastructure. Tools should fit the job, so obtain them and use them as they were designed to be used.
• KISS: keep it simple…. Simplicity is the key to planning, developing, implementing, and operating a Web-based infrastructure. Endeavor to minimize Web-based infrastructure points of control and contention, as well as the introduction of variables.

Marcus and Stern's book3 is an excellent reference for preparing for and implementing highly available systems. Reengineering the continuity planning process involves not only reinvigorating continuity planning processes, but also ensuring that Web-based enterprise needs and expectations are identified and met through the implementation of continuous availability disciplines.

SUMMARY

The failure of organizations to measure the success of their CP implementations has led to an endless cycle of plan development and decline. The primary reason for this is that a meaningful set of CP measurements has not been adopted to fit the organization's future-state goals. Because these measurements are lacking, expectations of both top management and those responsible for CP often go unfulfilled. A radical change in the manner in which organizations undertake CP implementation is necessary. This change should include adopting and utilizing the business process improvement (BPI) approach for CP.
This BPI approach has been implemented successfully at many Fortune 1000 companies over the past 20 years. Defining CP as a process, applying the concepts of the CP value journey, expanding CP measurements utilizing the CP balanced scorecard, and exercising the organizational change management (OCM) concepts will facilitate a radically different approach to CP. Finally, because Web-based business processes require 24×7 uptime, implementation of continuous availability disciplines is necessary to ensure that the CP process is as fully developed as it should be.
References

1. Kaplan, R.S. and Norton, D.P., Translating Strategy into Action: The Balanced Scorecard, HBS Press, 1996.
2. Gartner Group RAS Services, COM-12-1325, 29 September 2000.
3. Marcus, E. and Stern, H., Blueprints for High Availability: Designing Resilient Distributed Systems, John Wiley & Sons, 2000.
Chapter 31
Wireless Security: Here We Go Again Aldora Louw William A. Yarberry, Jr.
Ronald Reagan's famous rejoinder in the 1980 presidential debates — "There you go again" — applies equally well to wireless security. In the early days of personal computers, professional IT staff were alarmed at the uncontrolled, ad hoc, and unsecured networks that began to spring up. PCs were bought by users out of "miscellaneous supplies" budgets. The VP of Information Systems had no reliable inventory of these new devices, and certainly corporate data was not particularly secure or backed up on the primitive hard drives. Now, 20 years later, we have architectures and systems to control traditional networked systems. Unfortunately, history is repeating itself with wireless LANs and wireless applications. It is convenient to set up a wireless LAN or an application that uses wireless technology; however, the convenience means that sometimes wireless technology is spreading throughout organizations without oversight or adequate security functions. CIOs today, like VPs of Information Systems 20 years ago, are missing key information. Where are the wireless devices? Are they secure? Exacerbating the problem of wireless security is the general lack of awareness of the risks. Interception and even spoofing are easier over the airwaves than with cables — simply because it is not necessary to get physical access to a conduit in order to tap into the information flow.

0-8493-1595-6/03/$0.00+$1.50 © 2003 by CRC Press LLC

BACKGROUND

Like old cities developed around cow-paths, wireless technology meanders around a confusing history of regulations, evolving and proprietary standards, a plethora of protocols, and ever smaller and faster hardware. To simplify this discussion, a "wireless" transmission is one that does not travel through a wire. This approach is not as dull-witted as it would seem. The media focus so much on the newer technologies, such as Bluetooth, that traditional wireless communications — microwave, satellite, and radio transmissions — are often ignored. Regardless of the precise definition of "wire-less," it is clear that the technology is growing quickly. Following are some of the key protocols and standards that are driving the industry.

Standards and Protocols (Software)

802.11b (802.11a and 802.11g). This specification is today's choice for wireless LAN communications. Based on work by the IEEE, 802.11b uses radio frequencies to transmit higher-level protocols, such as IP. Wireless LANs are convenient and quick to set up. Applications, servers, and other devices see the traffic going over the airwaves as no different than wire-based Ethernet packets. In a typical wireless LAN, a transmitter/receiver device, such as that shown in Exhibit 1, connects to the wired network at a fixed location.

Exhibit 1. IntelPro Wireless 2011B LAN Access Point (Courtesy of Intel (www.intel.com))

An alternative to the fixed access point is the ad hoc network that uses devices such as notebook PCs equipped with wireless adaptor cards to communicate with each other via peer-to-peer transmissions. The recently adopted 802.11a and 802.11g standards, the successors to 802.11b, allow for considerably greater bandwidth (up to 54 Mbps versus 11 Mbps originally). With this increase in bandwidth, wireless LANs will likely become much more prevalent. Using the appropriate access points and directional antennas, wireless LANs can be linked over more than a mile. As discussed later, it is not difficult to see why "war driving" around the premises of buildings is so popular with hackers.

iMode. To date, the iMode service of DoCoMo is used almost exclusively in Japan. However, it is a bellwether for the rest of the world. Using proprietary (and unpublished) protocols, iMode provides text messaging, E-commerce, Web browsing, and a plethora of services to Japanese customers. Another advantage to the service is that it is a packet-switched service and thus always on. What has grabbed the business community's full attention is the degree of penetration within Japan — 28 million iMode users out of a total of 60 million cellular subscribers. Japanese teenagers have created a pseudo-language of text codes that rivals the cryptic language of Internet chat rooms (brb for "be right back," etc.).

Bluetooth. Intended as a short-distance (generally less than ten meters) communication standard, Bluetooth allows many devices to communicate with each other on an ad hoc basis, forming a "pico-net." For example, PDAs can communicate with properly equipped IP telephones to transfer voice-mail to the PDA when the authorized owner walks into her office. Bluetooth is a specification that, when followed by manufacturers, allows devices to emit radio signals in the unlicensed 2.4-GHz frequency. By using spread spectrum, full-duplex signals at up to 1600 hops per second, interference is greatly reduced, allowing up to seven simultaneous connections in one location. It is intended to be used by laptops, digital cameras, PDAs, devices in automobiles, and other consumer devices. Because of its short range, interception from outside a building is difficult (not to mention the additional effort required to overcome frequency hopping). Nevertheless, there are scenarios that could result in security breaches. For example, transmission between a Bluetooth wireless headset and a base cellular phone could be intercepted as an executive walks through an airport.

Cellular. Standards for mobile wireless continue to evolve. The United States and parts of South America originally used the AMPS analog system; this system is not secure at all, much to the chagrin of some embarrassed politicians.
More current protocols include TDMA, CDMA, and the world standard GSM. GSM now supports broadband digital data transmission rates using general packet radio services (GPRS), and CDMA providers are offering advanced data services using a technology called CDMA2000 1x.

Miscellaneous, Older Wireless Technologies. Any consideration of wireless security should include older technologies such as satellite communications (both geostationary and low earth orbit), microwave, infrared (line of sight, building to building), CDPD for narrowband data transmission over unused bandwidth in the cellular frequencies, and cordless phones operating in a number of public frequencies (most recently 900 MHz and 2.4 GHz). It is important to note that — particularly in telecommunications — technologies never seem to die. Any complete review of wireless security should at least consider these older, sometimes less secure transmission media.
Hardware

Personal Digital Assistants. Palm Pilots, iPAQs, Blackberrys, and other devices are proliferating. Typically, they use frequencies somewhere in the cellular range and require sufficient tower (transmitter) density to work well. For example, ordering a book on amazon.com using a Palm Pilot is not likely to work in Death Valley, California.

Laptops with Wireless Adaptor Cards. Using adaptor cards or wireless connections like Compaq's Bluetooth Multiport Module, laptops communicate with each other or with servers linked to an access point.

Cell Phones. The technologies of cellular phones, PDAs, dictation machines, and other devices are merging. Cell phones, especially those with displays and mini-browsers, provide the form factor for Internet, public telephone system, and short-range communications.
The ISO Stack Still Applies

The ubiquitous, seven-layer ISO stack applies to wireless communications as well. Although a discussion of this topic is outside the scope of this chapter, there is one protocol stack concept that should be kept in mind: traveling over the air is the logical equivalent of traveling over a copper wire or fiber. Airwave protocols represent layer 2 protocols, much like Frame Relay or ATM.1 If a TCP/IP layer 3 link is established over a wireless network, it is still an IP network. It merely rides over a protocol designed for transmission in the air rather than through copper atoms or light waves. Hence, many of the same security concepts historically applied to IP networks, such as authentication, non-repudiation, etc., still apply.

WIRELESS RISKS

A January 2002 article in Computerworld described how a couple of professional security firms were able to easily intercept wireless transmissions at several airports. They picked up sensitive network information that could be used to break in or to actually establish a rogue but authorized node on the airline network. More threatening is the newly popular "war driving" hobby of today's au courant hackers. Using an 802.11b-equipped notebook computer with appropriate software, hackers drive around buildings scanning for 802.11b access points. The following conversation, quoted from a newsgroup for wireless enthusiasts in the New York City area, illustrates the level of risk posed by war driving:

Just an FYI for everyone, they are going to be changing the nomenclature of 'War Driving' very soon. Probably to something like 'ap mapping' or 'net stumbling' or something of the sort. They are trying to make it sound less destructive, intrusive and illegal, which is a very good idea. This application that is being developed by Marius Milner of BAWUG is great. I used it today. Walking around in my neighborhood (Upper East Side Manhattan) I found about 30 access points. A company called www.rexspeed.com is setting up access points in residential buildings. Riding the bus down from the Upper East Side to Bryant park, I found about 15 access points. Walking from Bryant Park to Times Square, I found 10 access points. All of this was done without any external antenna. In general, 90 percent of these access points are not using WEP. Fun stuff.
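The kind of scan described in the post above can be triaged programmatically once the results are captured. The sketch below uses invented record fields (`ssid`, `mac`, `wep`) and sample data; it is not the actual output format of any scanning tool.

```python
# Triage of wireless-scan output (a sketch; fields and sample data are
# invented for illustration, not real scanner output).

def triage(access_points):
    """Split scan records into wide-open access points and a percentage."""
    open_aps = [ap for ap in access_points if not ap["wep"]]
    pct_open = 100.0 * len(open_aps) / len(access_points)
    return open_aps, pct_open

scan = [
    {"ssid": "linksys",   "mac": "00:04:5a:0e:11:22", "wep": False},
    {"ssid": "default",   "mac": "00:02:2d:33:44:55", "wep": False},
    {"ssid": "corp-hq",   "mac": "00:60:1d:aa:bb:cc", "wep": True},
    {"ssid": "warehouse", "mac": "00:04:5a:dd:ee:ff", "wep": False},
]

open_aps, pct_open = triage(scan)
for ap in open_aps:
    print(f"OPEN: {ap['ssid']:<10} {ap['mac']}")
print(f"{pct_open:.0f}% of discovered access points are not using WEP")
```

Even a first pass like this — counting how many access points broadcast with no encryption at all — makes the scale of the exposure concrete for management.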
The scanning utility referred to above is the Network Stumbler, written by Marius Milner. It identifies MAC addresses (physical hardware addresses), signal-to-noise ratios, and SSIDs.2 Security consultant Rich Santalesa points out that if a GPS receiver is added to the notebook, the utility records the exact location of the signal. Many more examples of wireless vulnerability could be cited. Looking at these wide-open links reminds us of the first days of the Internet, when the novelty of the technology obscured the risks from intruders. Then, as now, the overriding impediment to adequate security was simple ignorance of the risks. IT technicians and sometimes even knowledgeable users set up wireless networks. Standard — but optional — security features such as WEP (Wired Equivalent Privacy) may not be implemented. Viewing the handheld or portable device as the weak sibling of the wireless network is a useful perspective. As wireless devices increase their memory, speed, and operating system complexity, they will only become more vulnerable to viruses and rogue code that can facilitate unauthorized transactions. The following sections outline some defenses against wireless hacking and snooping. We start with the easy defenses first, based on security consultant Don Parker's oft-repeated statement of the obvious: "Prudent security requires shutting the barn doors before worrying about the rat holes."

DEFENSES

Virtually all the security industry's cognoscenti agree that it is perfectly feasible to achieve a reasonable level of wireless security. And it is desperately needed — for wireless purchases, stock transactions, transmissions of safety information via wireless PDA to engineers in hazardous environments, and other activities where security is required. The problems come from lack of awareness, cost to implement, competing standards, and legacy equipment. Following are some current solutions that should be considered if the business exposure warrants the effort.

Awareness and Simple Procedures

First, make management, IT and telecom personnel, and all users aware that wireless information can be intercepted and used to penetrate the organization's systems and information. In practical terms, this means:

• Obtain formal approval to set up wireless LANs and perform a security review to ensure WEP or other security measures have been put in place.
• Limit confidential conversations where security is notoriously lax. For example, many cellular phones are dual mode and operate on a completely unsecured protocol/frequency in areas where only analog service is available. Some cell phones have the ability to disable dual mode so they only operate in the relatively more secure digital mode.
• Use a password on any PDA or similar device that contains sensitive data. An even stronger protection is to encrypt the data. For example, Certicom offers the MovianCrypt security package, which uses a 128-bit advanced encryption standard to encrypt all data on a PDA.
• Ensure that the security architecture does not assume that the end device (e.g., a laptop) will always be in the physical possession of the authorized owner.

Technical Solutions

There are several approaches to securing a wireless network. Some, like WEP, focus on the nature of wireless communication itself. Others use tunneling and traditional VPN (virtual private network) security methods to ensure that the data is strongly encrypted at the IP layer. Of course, like the concentric walls of medieval castles, the best defense includes multiple barriers to access. Start with WEP, an optional function of the IEEE 802.11 specification. If implemented, it works by creating secret shared encryption keys. Both source and destination stations use these keys to alter frame bits to avoid disclosure to eavesdroppers.
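Conceptually, WEP's shared-key scheme XORs each frame with a keystream generated by the RC4 cipher from the shared secret plus a per-frame initialization vector. The toy sketch below is not an interoperable WEP implementation (real WEP adds framing, an integrity check value, and a 24-bit IV); the key and messages are invented. It shows the mechanism, and why reusing a keystream leaks information about both frames.

```python
# Toy illustration of WEP-style shared-key encryption (RC4 keystream XOR).
# Conceptual sketch only; not interoperable with real WEP framing.

def rc4_keystream(key, length):
    """Generate `length` RC4 keystream bytes from `key` (a bytes object)."""
    s = list(range(256))
    j = 0
    for i in range(256):                      # key-scheduling algorithm
        j = (j + s[i] + key[i % len(key)]) % 256
        s[i], s[j] = s[j], s[i]
    out, i, j = [], 0, 0
    for _ in range(length):                   # pseudo-random generation
        i = (i + 1) % 256
        j = (j + s[i]) % 256
        s[i], s[j] = s[j], s[i]
        out.append(s[(s[i] + s[j]) % 256])
    return bytes(out)

def encrypt(key, iv, plaintext):
    """XOR the frame with a keystream seeded by IV || shared secret."""
    ks = rc4_keystream(iv + key, len(plaintext))
    return bytes(p ^ k for p, k in zip(plaintext, ks))

secret = b"shared-key"
p1, p2 = b"ATTACK AT DAWN", b"HOLD POSITION!"
c1 = encrypt(secret, b"\x01\x02\x03", p1)
c2 = encrypt(secret, b"\x01\x02\x03", p2)     # same IV reused!

# With a reused keystream, XOR of ciphertexts equals XOR of plaintexts,
# so an eavesdropper learns about both frames without knowing the key.
leak = bytes(a ^ b for a, b in zip(c1, c2))
print(leak == bytes(a ^ b for a, b in zip(p1, p2)))  # True
```

Because WEP's IV space is only 24 bits, keystream reuse of exactly this kind eventually happens on a busy network, which is one reason the researchers mentioned below were able to break it.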
WEP is designed to provide the same security for wireless transmissions as could be expected for communications via copper wire or fiber. It was never intended to be the Fort Knox of security systems. WEP has been criticized because it sends the shared secret over the airwaves; sniffers can ferret out the secret and compromise the system. Some Berkeley researchers broke the 40-bit encryption relatively quickly after the IEEE released the specification. WEP also has a few other weaknesses, including:
Exhibit 2. Two-Factor Authentication from RSA (Courtesy of RSA (www.rsasecurity.com))
• Vendors have added proprietary features to their WEP implementations, making integration of wireless networks more difficult.
• Anyone can pick up the signal, as in the "war driving" scenario described above. This means that even if hackers do not want to bother decoding the traffic using a wireless sniffer — which is somewhat difficult — they can still get onto the network. That is, they are plugged in just the same as if they took their laptop into a spare office and ran a cable to the nearest Ethernet port. A partial solution is to enable MAC address monitoring.3 By adding MAC addresses (unique to each piece of hardware, such as a laptop) to the access point device, only those individuals possessing equipment that matches the MAC address table can get onto the network. However, it is difficult to scale the solution because the MAC address tables must be maintained manually.

None of these deficiencies should discourage one from implementing WEP. Just implementing WEP out-of-the-box will discourage many hackers. Also, WEP itself is maturing, taking advantage of the increased processing power available on handheld and portable devices to allow more compute-intensive security algorithms. As mentioned, authentication of laptops and other devices on the user end is as important in wireless as it is in dial-up remote access. It is beyond human diligence not to lose or have stolen a portable device. VPNs with remote, two-factor authentication superimpose a layer of security that greatly enhances any native wireless protection system. RSA's SecurID, shown in Exhibit 2, is an example of a two-factor system based on something the user knows (a password) and something the user possesses (an encrypted card). In addition to VPNs, software-based firewalls such as BlackICE are useful for end-computer security. Relying on tools such as these takes some of the pressure off application-level security, which is sometimes weak due to loose password management, default passwords, and other flaws.

A Hole in the Fabric of Wireless Security

Wireless security protocols are evolving. WAP 1.2.1 uses Wireless Transport Layer Security (WTLS), which very effectively encrypts communications from, for example, a cell phone to a WAP gateway. At the WAP gateway, the message must be momentarily unencrypted before it is sent on to the Web server via SSL (Secure Sockets Layer). This "WAP GAP" exists today but is supposed to be eliminated in WAP version 1.3 or later. A temporary fix is to strengthen physical security around the WAP gateway and add additional layers of security onto the higher-level applications.

Traditional Security Methods Still Work

Of course, existing security methods still apply — from the ancient Spartans' steganography techniques (invisible messages) to the mind-numbingly complex cryptography algorithms of today. Following are some major security algorithms that can easily support a high level of E-commerce security:

• Digital hashing: a lower-strength security technique to help prevent unauthorized changes to documents transmitted electronically
• Digital signatures: provide the same function as digital hashing but are a much more robust algorithm
• Public key cryptography: the cornerstone of much digital-age security (key management, such as the use of smart cards, is important in the various implementations of public key infrastructure (PKI))

Auditing Wireless Security

Auditing an organization's wireless security architecture is not only useful professionally, but also an excellent personal exercise.
The reason: physically walking around the premises with a wireless LAN audit tool is necessary to determine where wireless LANs and other wireless networks have been set up. Often, these LANs have been implemented without approval or documentation and, hence, a documentation review is not sufficient. Using a device such as IBM's Wireless Security Auditor, an auditor can obtain a reliable inventory of wireless networks and settings (see Exhibit 3).
Exhibit 3. Wireless Security Auditor Tool (Courtesy IBM, www.research.ibm.com/gsal/wsa/)
Using IBM's Wireless Security Auditor as an example, the following are some of the configurations and potential vulnerabilities that might be evaluated in a wireless security review:
• Inventory of access points
• Identification of encryption method (if any)
• Identification of authentication method
• Determination of WEP status (has it been implemented?)
• Notation of any GPS information (useful for determining access point location)
• Analytics on probe packets
• Identification of firmware status (up-to-date?)

Aside from the technology layer, standard IT/telecom controls should be included in the review: change control, documentation, standards compliance, key management, conformance with technical architecture, and appropriate policies for portable devices.
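A review along these lines can be partially automated once the audit tool's inventory is in hand. The sketch below is hypothetical: the access point record fields and the policy values are illustrative assumptions, not the output format of any particular audit tool.

```python
# Hypothetical policy values for the review.
CURRENT_FIRMWARE = "8.70"
DEFAULT_SSIDS = {"default", "tsunami", "linksys"}

def review_access_point(ap):
    """Return policy findings for one access point record."""
    findings = []
    if not ap.get("wep_enabled"):
        findings.append("WEP not enabled")
    if ap.get("ssid", "").lower() in DEFAULT_SSIDS:
        findings.append("default SSID still in use")
    if ap.get("firmware", "") < CURRENT_FIRMWARE:
        findings.append("firmware out of date")
    if not ap.get("mac_filtering"):
        findings.append("MAC address filtering disabled")
    return findings

access_points = [
    {"ssid": "tsunami", "wep_enabled": False, "firmware": "8.10", "mac_filtering": False},
    {"ssid": "eng-floor3", "wep_enabled": True, "firmware": "8.70", "mac_filtering": True},
]
for ap in access_points:
    print(ap["ssid"], review_access_point(ap))
```

An unapproved access point with factory defaults produces several findings at once; a properly configured one produces none.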
SUMMARY

Wireless security is important to both leading- and trailing-edge organizations. Applications and infrastructure uses are showing up everywhere, from the shop floor to the techie whose Bluetooth PDA collects his voicemail as he walks into the office. This rapid growth, reminiscent of the first days of PCs and the Internet, should be accompanied by a corresponding level of security, control, and standards. Here we go again….

Notes

1. In a sense, "air" also represents layer 1, the most basic and physical layer. Copper, fiber, and even (in the earliest days of the telegraph) barbed wire stand as examples of layer 1 media.
2. Service Set Identifier. An encoded flag attached to packets sent over a wireless LAN, indicating that the sender is authorized to be on a particular radio network. All wireless devices on the same radio network must have the same SSID or they will be ignored.
3. MAC (medium access control) addresses are unique. "03:35:05:36:47:7a" is a sample MAC address that might be found on a wireless or wired LAN.
Chapter 32
Understanding Intrusion Detection Systems
Peter Mell
0-8493-1595-6/03/$0.00+$1.50 © 2003 by CRC Press LLC

Intrusion detection is the process of detecting an unauthorized use of, or attack upon, a computer or a telecommunication network. Intrusion detection systems (IDSs) are designed and installed to aid in deterring or mitigating the damage that can be caused by hacking, or breaking into sensitive IT systems. IDSs are software or hardware mechanisms that detect such misuse. IDSs can detect attempts to compromise the confidentiality, integrity, and availability of a computer or network. The attacks can come from outside attackers on the Internet, authorized insiders who misuse the privileges that have been given them, and unauthorized insiders who attempt to gain unauthorized privileges. IDSs cannot be used in isolation, but must be part of a larger framework of IT security measures.

THE BASIS FOR ACQUIRING IDSs

At least three reasons justify the acquisition of an IDS:
1. To provide the means for detecting attacks and other security violations that cannot be prevented
2. To prevent attackers from probing a network
3. To document the intrusion threat to an organization

Detecting Attacks That Cannot Be Prevented

Using well-known techniques, attackers can penetrate many networks. Often, this happens when known vulnerabilities in the network cannot be fixed. For example, in many legacy systems, the operating systems cannot be updated; in those systems that can be updated, the administrators may not have, or take, the time to install all the necessary patches on a large number of hosts. In addition, it is usually impossible to map an organization's computer use policy perfectly to its access control mechanisms, so authorized users can often perform unauthorized actions. Users may also demand network services and protocols that are known to be flawed and subject to attack. Although ideally it would be preferable to fix all of the vulnerabilities, this is seldom possible. Thus, an excellent approach to protecting a network may be the use of an IDS to detect when an attacker has penetrated a system through an uncorrectable flaw. At least it is better to know that a system has been penetrated, so that its administrators can perform damage control and recovery, than not to know.

Preventing Attackers from Probing a Network

A computer or network without an IDS may allow attackers to explore its weaknesses leisurely and without retribution. If a single known vulnerability exists in such a network, a determined attacker will eventually find and exploit it. The same network with an IDS installed presents a much more formidable challenge. Although the attacker may continue to probe the network for weaknesses, the IDS should detect these attempts. In addition, the IDS can block these attempts, and it can alert IT security personnel who can then take appropriate action in response to the probes.

Documenting the Threat

It is important to verify that a network is under attack, or is likely to be attacked, in order to justify spending money on securing it. Furthermore, it is important to understand the frequency and characteristics of attacks in order to choose appropriate security measures. IDSs can itemize, characterize, and verify the threats from both outside and inside attacks. Thus, the operation of IDSs can provide a sound foundation for IT security expenditures.
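As a sketch of how IDS output can document the threat, the following tallies alerts by attack type and by inside versus outside source. The alert record fields and the internal address prefix are assumptions for illustration, not any particular product's log format.

```python
from collections import Counter

INTERNAL_PREFIX = "10."  # assumed internal address range

def summarize_alerts(alerts):
    """Tally IDS alerts by attack type and by inside vs. outside source."""
    by_type = Counter(a["attack"] for a in alerts)
    insider = sum(1 for a in alerts if a["source"].startswith(INTERNAL_PREFIX))
    return {"by_type": dict(by_type), "insider": insider, "outsider": len(alerts) - insider}

alerts = [
    {"attack": "port-scan", "source": "203.0.113.9"},
    {"attack": "port-scan", "source": "10.1.4.22"},
    {"attack": "buffer-overflow", "source": "203.0.113.9"},
]
print(summarize_alerts(alerts))
```

Even this crude roll-up distinguishes insider from outsider activity, which is exactly the distinction the mistaken "no one would attack us" thinking ignores.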
Using IDSs in this manner is important because many people believe, mistakenly, that no one would be interested in breaking into their networks. (Typically, this type of mistaken thinking makes no distinction between threats from outsiders and insiders.)

TYPES OF IDSs

There are several types of IDSs available. They are characterized by different monitoring and analysis approaches. Each type has distinct uses, advantages, and disadvantages. IDSs can monitor events at three different levels: network, host, and application. They can analyze these events using two techniques: signature detection and anomaly detection. Some IDSs have the ability to respond automatically to attacks that are detected.
IDS MONITORING APPROACHES

One way to define the types of IDSs is to look at what they monitor. Some IDSs listen on network backbones and analyze network packets to find attackers. Other IDSs reside on the hosts that they are defending and monitor the operating system for signs of intrusion. Still others monitor individual applications.

Network-Based IDSs

Network-based IDSs are the most common type of commercial product offering. These mechanisms detect attacks by capturing and analyzing network packets. Listening on a network backbone, a single network-based IDS can monitor a large amount of information. Network-based IDSs usually consist of a set of single-purpose hosts that "sniff" or capture network traffic in various parts of a network and report attacks to a single management console. Because no other applications run on the hosts used by a network-based IDS, they can be secured against attack. Many of them have "stealth" modes, which make it extremely difficult for an attacker to detect their presence and to locate them.1

• Advantages. A few well-placed network-based IDSs can monitor a large network. The deployment of network-based IDSs has little impact on the performance of an existing network. Network-based IDSs are typically passive devices that listen on a network wire without interfering with normal network operation. Thus, it is usually easy to retrofit a network to include network-based IDSs with minimal installation effort. Network-based IDSs can be made very secure against attack and can even be made invisible to many attackers.
• Disadvantages. Network-based IDSs may have difficulty processing all packets in a large or busy network. Therefore, such mechanisms may fail to recognize an attack launched during periods of high traffic. IDSs implemented in hardware are much faster than those based on a software solution.
In addition, the need to analyze packets quickly forces vendors to try to detect attacks with as few computing resources as possible, which may reduce detection effectiveness. Many of the advantages of network-based IDSs do not apply to more modern switch-based networks. Switches subdivide networks into many small segments, usually with one fast Ethernet wire per host, and can provide dedicated links between hosts serviced by the same switch. Most switches do not provide universal monitoring ports, which reduces the monitoring range of a network-based IDS sensor to a single host. In switches that do provide such monitoring ports, the single port is frequently unable to mirror all the traffic moving through the switch.
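At its core, a network-based sensor matches captured traffic against known attack patterns. A minimal sketch follows, assuming packet payloads have already been captured as byte strings; the signatures themselves are made up for illustration.

```python
# Made-up signatures mapping a byte pattern to an attack name.
SIGNATURES = {
    b"/etc/passwd": "attempted password-file retrieval",
    b"\x90" * 16: "possible NOP sled (buffer overflow)",
}

def inspect(payload):
    """Return the names of all signatures found in one packet payload."""
    return [name for pattern, name in SIGNATURES.items() if pattern in payload]

print(inspect(b"GET /../../etc/passwd HTTP/1.0"))
print(inspect(b"GET /index.html HTTP/1.0"))
```

Real products capture from the wire and use far richer, stateful signatures; the point here is only the shape of the per-packet matching loop that must keep up with line speed.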
Network-based IDSs cannot analyze encrypted information. This limitation will become an increasing problem as the use of encryption, by both organizations and attackers, grows. Most network-based IDSs do not report whether or not an attack was successful; they only report that an attack was initiated. After an attack has been detected, administrators must manually investigate each host that has been attacked to determine which hosts were penetrated.

Host-Based IDSs

Host-based IDSs analyze the activity on a particular computer. Thus, they must collect information from the host they are monitoring. This allows an IDS to analyze activities on the host at a very fine granularity and to determine exactly which processes and users are performing malicious activities on the operating system. Some host-based IDSs simplify the administration of a set of hosts by centralizing the administration functions and attack reports at a single IT security console. Others generate messages that are compatible with network administration systems.

• Advantages. Host-based IDSs can detect attacks that are not detectable by a network-based IDS because they have a view of events that are local to a host. Host-based IDSs can operate in a network that is using encryption when the encrypted information is decrypted on (or before reaching) the host being monitored. Host-based IDSs can operate in switched networks.
• Disadvantages. The collection mechanisms must usually be installed and maintained on every host that is to be monitored. Because portions of these systems reside on the host that is being attacked, host-based IDSs may be attacked and disabled by a clever attacker. Host-based IDSs are not well-suited for detecting network scans of all the hosts in a network because the IDS at each host sees only the network packets that the host receives.
Host-based IDSs frequently have difficulty detecting, and operating in the face of, denial-of-service attacks. Host-based IDSs use the computing resources of the hosts they are monitoring.

Application-Based IDSs

Application-based IDSs monitor the events transpiring within an application. They often detect attacks by analyzing the application's log files. By interfacing with an application directly and having significant domain or application knowledge, application-based IDSs are likely to have a more discerning or fine-grained view of suspicious activity in the application.

• Advantages. Application-based IDSs can monitor activity at a very fine level of granularity, which often allows them to track unauthorized activity to individual users. Application-based IDSs can work in encrypted environments because they interface with the application that may be performing the encryption.
• Disadvantages. Application-based IDSs may be more vulnerable than host-based IDSs to being attacked and disabled because they run as an application on the host that they are monitoring.

The distinction between an application-based IDS and a host-based IDS is not always clear. Thus, for the remainder of this chapter, both types will be referred to as host-based IDSs.

IDS EVENT ANALYSIS APPROACHES

There are two primary approaches to analyzing computer and network events to detect attacks: signature detection and anomaly detection. Signature detection is the primary technique used by most commercial IDS products. However, anomaly detection is the subject of much research and is used in limited form by a number of IDSs.

Signature-Based IDSs

Signature-based detection looks for activity that matches a predefined set of events that uniquely describe a known attack. Signature-based IDSs must be specifically programmed to detect each known attack. This technique is extremely effective and is the primary method used in commercial products for detecting attacks.

• Advantages. Signature-based IDSs are very effective in detecting attacks without generating an overwhelming number of false alarms.
• Disadvantages. Signature-based IDSs must be programmed to detect each attack and thus must be constantly updated with the signatures of new attacks. Many signature-based IDSs have narrowly defined signatures that prevent them from detecting variants of common attacks.

Anomaly-Based IDSs

Anomaly-based IDSs find attacks by identifying unusual behavior (i.e., anomalies) on a host or network. They function on the observation that some attackers behave differently than "normal" users and thus can be detected by systems that identify these differences.
Anomaly-based IDSs establish a baseline of normal behavior by profiling particular users or network connections, and then statistically measure when the monitored activity deviates from the norm. These IDSs frequently produce a large number of false alarms because normal user and network behaviors can vary widely. Despite this weakness, researchers working on this technology assert that anomaly-based IDSs are able to detect never-before-seen attacks, unlike signature-based IDSs, which rely on an analysis of past attacks. Although some commercial IDSs include restricted forms of anomaly detection, few, if any, rely solely on this technology. However, research on anomaly detection IDS products continues.

• Advantages. Anomaly-based IDSs detect unusual behavior and thus have the ability to detect attacks without having to be specifically programmed to detect them.
• Disadvantages. Anomaly detection approaches typically produce a large number of false alarms due to the unpredictable nature of computing and telecommunication users and networks. Anomaly detection approaches frequently require extensive "training sets" of system event records to characterize normal behavior patterns.

IDSs THAT AUTOMATICALLY RESPOND TO ATTACKS

Because human administrators are not always available when an attack occurs, some IDSs can be configured to respond automatically. The simplest form of automated response is active notification: upon detecting an attack, an IDS can e-mail or page an administrator. A more active response is to stop an attack in progress and then block future access by the attacker. Typically, IDSs do not have the ability to block a particular person, but instead block the Internet Protocol (IP) addresses from which an attacker is operating. It is very difficult to automatically stop a determined and knowledgeable attacker.
However, IDSs can often deter expert attackers or stop novice hackers by:
• Cutting TCP (Transmission Control Protocol) connections by injecting reset packets into the attacker's connections to the target of the attack
• Reconfiguring routers and firewalls to block packets from the attacker's location (i.e., the IP address or site)
• Reconfiguring routers and firewalls to block the protocols being used by an attacker
• In extreme situations, reconfiguring routers and firewalls to sever all connections that use particular network interfaces

A more aggressive response is to launch attacks against, or actively attempt to gather information about, the attacker's host or site. However, this type of response can prove extremely dangerous for an organization because it may be illegal or may cause damage to innocent Internet users. It is even more dangerous to allow IDSs to launch these attacks automatically, although limited, automated "strike-back" strategies are sometimes used for critical systems. (It would be wise to obtain legal advice before pursuing any of these options.)
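The block-by-address response above can be sketched as rule generation. The commands below use standard iptables syntax, but they are only printed here, never executed; an actual IDS would apply such rules through its own firewall integration, subject to the misconfiguration risks discussed later.

```python
def block_commands(attacker_ip, protocol=None):
    """Generate firewall rules that drop traffic from an attacker."""
    cmds = ["iptables -A INPUT -s %s -j DROP" % attacker_ip]
    if protocol:  # optionally block an entire protocol being abused
        cmds.append("iptables -A INPUT -p %s -j DROP" % protocol)
    return cmds

for cmd in block_commands("203.0.113.9", protocol="udp"):
    print(cmd)
```

Note how coarse the response is: blocking a whole protocol, or a spoofable source address, is exactly how an automated response can end up interrupting legitimate traffic.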
TOOLS THAT COMPLEMENT IDSs

Several tools complement IDSs and are often labeled as IDSs by vendors because they perform similar functions. These complementary tools are honey pot systems, padded cell systems, and vulnerability assessment tools. It is important to understand how these products differ from conventional IDSs.

Honey Pot and Padded Cell Systems

Honey pots are decoy systems that attempt to lure an attacker away from critical systems. These systems are filled with information that seems valuable but has been fabricated and would not be accessed by an honest user. Thus, when access to the honey pot is detected, there is a high likelihood that it is an attacker. Monitors and event loggers on the honey pot detect these unauthorized accesses and collect information about the attacker's activities. The purpose of the honey pot is to divert an attacker from critical systems, collect information about the attacker's activity, and encourage the attacker to stay on the system long enough for administrators to respond to the intrusion.

Padded cells take a different approach. Instead of trying to attract attackers with tempting data, a padded cell waits for a traditional IDS to detect an attacker. The attacker is then seamlessly transferred to a special padded cell host. The attacker may not realize anything has happened, but is now in a simulated environment where no harm can be caused. Like the honey pot, this simulated environment can be filled with interesting data to convince the attacker that the attack is going according to plan. Padded cells offer unique opportunities to monitor the actions of an attacker. IDS researchers have used padded cell and honey pot systems since the late 1980s, but until recently no commercial products were available.

• Advantages. Attackers can be diverted to system targets that they cannot damage.
Administrators can be given time to decide how to respond to an attacker. An attacker's actions can be monitored more easily, and the results used to improve the system's protections. Honey pots may be effective in catching insiders who are snooping around a network.
• Disadvantages. Honey pots and padded cells have not yet been shown to be widely useful security technologies. Once an expert attacker has been diverted into a decoy system, the invader may become angry and launch a more hostile attack against the organization's systems. A high level of expertise is needed for administrators and security managers to use these systems. The legal implications of using such mechanisms are not well-defined.
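A toy honey pot in the spirit described above can be built with nothing more than a socket listener: any connection to a port where no legitimate service is published is presumed hostile and logged. The "payroll-db" banner is fabricated bait, and a real deployment would involve far richer monitoring plus the legal review noted above.

```python
import datetime
import socket
import threading

log = []  # (timestamp, source address) for every touch

def honeypot(server):
    conn, addr = server.accept()
    log.append((datetime.datetime.now().isoformat(), addr[0]))
    conn.sendall(b"220 payroll-db ready\r\n")  # fabricated, tempting banner
    conn.close()

server = socket.socket()
server.bind(("127.0.0.1", 0))   # port 0: let the OS pick a free port
server.listen(1)
port = server.getsockname()[1]
t = threading.Thread(target=honeypot, args=(server,))
t.start()

# Simulated attacker probe:
probe = socket.create_connection(("127.0.0.1", port))
banner = probe.recv(64)
probe.close()
t.join()
server.close()
print("connections logged:", log)
```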
Vulnerability Assessment Tools

Vulnerability assessment tools determine whether a network or host is vulnerable to known attacks. Because this activity is closely related to detecting attacks, these mechanisms are sometimes referred to as intrusion detection tools. They come in two varieties: passive and active.

• Passive vulnerability assessment tools scan the host on which they reside for insecure configurations, software versions known to contain exploitable flaws, and weak passwords.
• Active assessment tools reside on a single host and scan a network looking for vulnerable hosts. The tool sends a variety of network packets at target hosts and, from the responses, determines the server and operating system software on each host. In addition, it can identify specific versions of software and determine the presence or absence of security-related patches. The active assessment tool compares this information with a library of software version numbers known to be insecure and determines whether the hosts are vulnerable to known attacks.

LIMITATIONS OF IDSs

Intrusion detection products have limitations that one must be aware of before deploying an IDS. Despite vendor claims to the contrary, most IDSs do not scale well as enterprisewide solutions. The problems include the lack of sufficient integration with other IT security tools and sophisticated network administration systems, the inability of IDSs to assess and visualize enterprise-level threats, and the inability of organizations to investigate the large number of alarms that can be generated by hundreds or thousands of IDS sensors.

1. Many IDSs create a large number of false positives that waste administrators' time and may even initiate damaging automated responses.
2. Although almost all IDSs are marketed as real-time systems, during heavy network or host activity an IDS may take several minutes before it reports and automatically responds to an attack. Usually, IDSs cannot detect newly published attacks or variants of existing attacks. This can be a serious problem, as 30 to 40 new attacks are posted on the World Wide Web every month. An attacker may wait for a new attack to be posted and then quickly penetrate a target network.
3. Automated responses of IDSs are often ineffective against sophisticated attackers. These responses usually stop novice hackers, but if they are improperly configured, they can harm a network by interrupting its legitimate traffic.
4. IDSs must be monitored by skilled IT security personnel to achieve maximum benefit from their operation and to understand the significance of what is detected. IDS maintenance and monitoring can require a substantial amount of personnel resources.
5. Many IDSs are not failsafe; they are not well-protected from attack or subversion.
6. Many IDSs do not have interfaces that allow users to discover cooperative or coordinated attacks.

DEPLOYMENT OF IDSs

Intrusion detection technology is a necessary addition to every large organization's IT security framework. However, given the weaknesses found in some of these products, and the relatively limited security skill level of most system administrators, careful planning, preparation, prototyping, testing, and specialized training are critical steps for effective IDS deployment. A thorough requirements analysis should be performed before IDSs are deployed, and the intrusion detection strategy and solution selected should be compatible with the organization's network infrastructure, policies, and resource level. Organizations should consider a staged deployment of IDSs to gain experience with their operation and to ascertain how many monitoring and maintenance resources are required, because there is a large variance in the resource requirements for each type of IDS. IDSs require significant preparation and ongoing human interaction. Organizations must have appropriate IT security policies, plans, and procedures in place so that the personnel involved will know how to react to the many and varied alarms that the IDSs will produce.

A combination of network-based IDSs and host-based IDSs should be considered to protect an enterprisewide network. First deploy network-based IDSs because they are usually the simplest to install and maintain. The next step should be to defend the critical servers with host-based IDSs.
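One classic host-based technique for those critical servers is file-integrity monitoring: hash key files and compare against a trusted baseline, reporting any drift. The sketch below simplifies the file list and baseline storage for illustration; products of this kind keep the baseline on protected media.

```python
import hashlib
import os
import tempfile

def fingerprint(path):
    """SHA-256 hash of a file's contents."""
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

def check(baseline):
    """Return the files whose current hash no longer matches the baseline."""
    return [path for path, digest in baseline.items() if fingerprint(path) != digest]

# Demonstration with a temporary file standing in for a critical system file:
tmp = tempfile.NamedTemporaryFile(delete=False)
tmp.write(b"root:x:0:0\n")
tmp.close()
baseline = {tmp.name: fingerprint(tmp.name)}

print(check(baseline))              # no drift yet
with open(tmp.name, "ab") as f:
    f.write(b"intruder:x:0:0\n")    # simulated tampering
print(check(baseline))              # the modified file is reported
os.unlink(tmp.name)
```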
Honey pots should be used judiciously and only by organizations with a highly skilled technical staff willing to experiment with leading-edge technology. Currently, padded cells are available only as research prototypes.

Deploying Network-Based IDSs

There are many options for placing a network-based IDS, each with different advantages. See Exhibit 1 for a listing of these options.
Exhibit 1. Placement of a Network-Based IDS

Location                          Advantage
Behind each external firewall     Sees attacks that are penetrating the network's perimeter defenses from the outside world
In front of an external firewall  Proves that attacks from the Internet are regularly launched against the network
On major network backbones        Detects unauthorized activity by those within a network and monitors a large amount of a network's traffic
On critical subnets               Detects attacks on critical resources
Deploying Host-Based IDSs

Once an organization has deployed network-based IDSs, the deployment of host-based IDSs can offer an additional level of protection. However, it can be time-consuming to install host-based IDSs on every host in an enterprise. Therefore, it is often preferable to begin by installing host-based IDSs on critical servers only. This placement decreases the overall deployment costs and allows the limited personnel available to work with the IDSs to focus on the alarms generated by the most important hosts. Once the operation and maintenance of host-based IDSs become routine, more IT security-conscious organizations may consider installing host-based IDSs on the majority of their hosts. In this case, it would be wise to purchase host-based systems that have an easy-to-use, centralized supervision and reporting function, because the administration of alert responses from a large set of hosts can be intimidating.

Notes

1. Stealth modes make it extremely difficult for an attacker to detect their presence and to locate them.
Section 3
Providing Application Solutions
The increased complexity and variety of infrastructure components, the need to provide software solutions for both internal and external audiences, and today's integration requirements are the primary reasons why software development today is such a challenging task. Some of the conceptual complexity was introduced when client/server architectures were adopted. However, today's Web-based, highly distributed systems are often, in practice, even more complex because of the variety of client devices and services that need to be able to communicate with each other. Section 3 addresses the challenges related to provisioning application solutions in modern systems development and deployment environments. An underlying theme is the importance of following well-established software engineering and project management tools, techniques, and methods as relevant. The chapters in this section are organized into four topic areas:
• New Tools and Applications
• Systems Development Approaches
• Project Management
• Software Quality Assurance
NEW TOOLS AND APPLICATIONS

One of the most promising and widely accepted approaches to addressing the complexity of Web-based development is Web services. Chapter 33, "Web Services: Extending Your Web," provides an introduction to the emerging technologies that enable Web services. A comprehensive example is used to illustrate the benefits an organization can achieve by utilizing Web services for either internal or external purposes. For example, application interfaces can be developed for external users to support revenue-generating services or to collaborate with business partners. The chapter also provides a brief comparison of .NET and J2EE® as development environments.

Chapter 34, "J2EE versus .NET: An Application Development Perspective," continues the same theme by providing a more in-depth comparison between the two major component-based development architectures: .NET and J2EE. A comprehensive review and step-by-step comparison is provided to help decision makers analyze the differences between them. Although answers to technology choice questions are seldom black or white, this chapter provides useful guidance to managers who are looking for the best solution for their organization.

Chapter 35, "XML: Information Interchange," focuses on one of the most important technologies for developing modern application solutions. XML is both one of the core elements of Web services and a much more widely used mechanism for defining document structures. As a tool for specifying the meaning of various document elements, XML was originally designed to address one of the most important weaknesses of HTML. Since its inception in the late 1990s, XML has been widely adopted in a variety of environments, including some of the most popular personal productivity software applications and large B2B E-business systems. The effective use of XML requires that cooperating organizations define document standards for particular data exchange purposes, and this chapter sheds light on the standardization efforts within and between industries that are as important as the XML standard itself.

The final chapter within this topic area, Chapter 36, "Software Agent Orientation: A New Paradigm," provides some excellent real-life examples of the use of software agents in organizations. The authors carefully avoid the hype and overpromises often associated with new technologies and realistically evaluate the possibilities that agent technologies offer organizations willing to embark on a learning process. Among the applications of agent technologies discussed are e-mail filtering and routing, data warehousing, and Internet searches, as well as financial, distance education, and healthcare applications.

SYSTEMS DEVELOPMENT APPROACHES

Selected for this topic area are six chapters that discuss the role and usage of various systems development methods, techniques, and approaches. The first is concerned with paradigm shifts within software development environments. Chapter 37, "The Methodology Evolution: From None, to One-Size-Fits-All, to Eclectic," provides a historical overview of methodologies for systems development projects from an evolutionary perspective. Although methodologies have sometimes been hailed in the past as "the holy grail," evidence from the field suggests that methodologies are, in fact, typically customized for each development project.
The author advocates an eclectic, problem-centered view of methodology, in contrast to a "one-size-fits-all" approach.

Chapter 38, "Usability: Happier Users Mean Greater Profits," focuses closely on the business value of usability and clearly demonstrates why high usability should be a requisite objective of every systems development project. The author emphasizes the importance of a process to ensure the creation of usable systems and also builds a strong case that methods relying on very simple, low-technology tools can often be used to develop systems with high usability. Throughout the chapter, the role of users in developing usable systems is a central theme.

Chapter 39, "UML: The Good, the Bad, and the Ugly," evaluates the advantages and disadvantages of the Unified Modeling Language (UML). Since its inception, UML has gained strong acceptance, particularly within those organizations that follow the object-oriented paradigm throughout all development stages. It has also become the basis for a number of development environments. The authors provide an objective review of UML and its organizational uses and discuss the characteristics of the various modeling languages that are part of UML.

Due to their prominent role in business process modeling, use cases are one of the most widely used modeling tools in UML. Chapter 40, "Use Case Modeling," provides a tutorial on this approach, as well as some useful insights into the differences between various use case modeling approaches. Also discussed are the reasons why use case modeling has gained popularity over traditional process modeling approaches (such as data flow diagramming) and various criteria to consider when evaluating this approach for organizational adoption.

Several agile ("lightweight") development methodologies, particularly eXtreme Programming (XP), have recently received considerable attention. Proponents argue that these methodologies are a solution to many of the problems that continue to plague software development, whereas opponents find them little more than structured hacking. However, anecdotal evidence suggests that many organizations are increasingly applying some or all of the core principles of one of the agile methodologies in some of their development projects. Chapter 41, "Extreme Programming and Agile Software Development Methodologies," helps IS managers responsible for methodology standards evaluate whether or not XP is suitable for their organization. The authors are clear proponents of XP, but the chapter also provides a useful evaluation of agile approaches for those who are not yet believers. Particularly interesting is the authors' focus on the values of XP (Simplicity, Feedback, and Courage), which provide the foundation for the methodology itself.
Modern software development increasingly entails some type of component-based approach, and various architectures (such as CORBA, ActiveX, .NET, and J2EE) have been introduced to support the use of components. The final chapter of this section — Chapter 42, “Component-Based IS Architecture” — provides guidance on a variety of issues related to software development using components. The authors point out the advances in distributed architectures and the Internet that have had a strong impact on the utilization of component technologies, and they discuss the organizational and technical requirements underlying the effective and economically viable use of components.

PROJECT MANAGEMENT

The importance of project management is one of the constants in the otherwise dynamic world of systems development. Development approaches, methodologies, tools, and technologies change, but many of the core issues related to the successful management of projects remain the same. The four chapters under this topic provide guidance on various project management challenges. Project risk management is the main focus of Chapter 43, “Does Your Project Risk Management System Do the Job?” The chapter emphasizes the central role of risk management in ensuring project success, and provides a comprehensive list of common project risk management mistakes; recognizing these mistakes early helps avoid them. The authors provide a review of the most common threats to IT projects and ways to mitigate them. The chapter ends with a useful checklist of risk management tasks for project managers to perform. The author of Chapter 44, “Managing Development in the Era of Complex Systems,” emphasizes the need for new skills for managing today’s more complex development projects. The chapter identifies three success factors, drawn from complex projects in a large consulting organization: business vision, system testing from a program management (versus single project) perspective, and a phased rollout strategy. Complexity is an unavoidable characteristic of large systems projects today, especially enterprise system projects that involve cross-functional integration. It is therefore vitally important to develop approaches to manage project complexity and related risks. The author of Chapter 45, “Reducing IT Project Complexity,” also focuses on mechanisms for managing and reducing project complexity. This chapter evaluates factors that increase project complexity, paying specific attention to coordination issues, and provides an analysis of the specific steps that an organization can take to control complexity and maintain it at an acceptable level.
The author describes a wide range of factors affecting project risk: the scope and nature of the project, development technology, organizational structures, and culture.

SOFTWARE QUALITY ASSURANCE

Section 3 ends with three chapters that focus on software quality and its potential implications for the individual, the organization, and society. Chapter 46, “Software Quality Assurance Activities,” provides a systematic review of software quality assurance activities during the entire software development life cycle. The chapter clearly demonstrates that software quality has to be built into the product; it cannot be achieved just by testing the system before it is delivered to the users. The author recognizes the importance of formal approaches such as the Software Engineering Institute’s Capability Maturity Model Integration (CMMI) and the ISO 9000 set of quality assurance standards, but emphasizes the importance of looking beyond the extensive documentation required by these guidelines and of ensuring that basic quality assurance practices are implemented through all stages of a development project. Because quality assurance requirements are not the same for every company and project, the approach chosen must be adjusted to fit the organizational context. The topical focus of Chapter 47, “Six Myths about Managing Software Development,” is broader than quality assurance, but many of the misconceptions identified are closely related to software quality. The author presents six “incorrect” assumptions that often guide the actions of IS managers and developers during development projects. Many of the myths are controversial, but all of them are thought-provoking and will challenge readers to reevaluate some fundamental assumptions. The author concludes the chapter with several recommendations that, if implemented, can significantly improve the quality of the final software product. The final chapter in this section, Chapter 48, “Ethical Responsibility for Software Development,” reviews the consequences of software quality problems from the perspectives of both ethical and legal responsibility. The need for both organizations and individual developers to be ethically responsible is clearly established, and the legal liabilities associated with not disclosing known defects in software products are discussed. The authors also provide a useful introduction to the widely debated Uniform Computer Information Transactions Act (UCITA).
Chapter 33
Web Services: Extending Your Web Robert VanTol
0-8493-1595-6/03/$0.00+$1.50 © 2003 by CRC Press LLC

DYNAMIC SITES TODAY

Now that Web sites have moved beyond static marketing sites and simple E-commerce storefronts, corporations face the challenge of making their internal “protected” databases available to the world. Companies are being forced to open their systems to provide real-time transactional capabilities to customers and partners or risk losing market share to more accessible competitors. Until now, companies had to host their own Web site internally to achieve this so that they could connect directly to the corporate databases. This involved not only additional hardware and increased communication capacity, but also highly skilled IT resources to manage the site and the associated security risks. The question: is there a better way? Web services allow the full separation of an Internet presence and company data, as shown in Exhibit 1. In the simplest case, a set of Web services can be created that allows viewing, updating, and adding records in internal systems. These services can be hosted within the company and made available to business partners through the Internet. The “public” Internet presence can be located anywhere and on any platform. The “public” site then communicates only with the Web services via HTTP, not with the database directly. This allows companies to host their “public” sites at an ISP that has the infrastructure, security, and knowledge to ensure efficient Web site operation.

WEB SERVICES OVERVIEW

Web services are modular, self-contained sets of business logic or functions that a company can make available and that can be described, published, located, and invoked through the Internet. Web services are the building blocks of a site, similar to the DLLs or COM objects on current sites, but with the added benefit of being able to be called from external Web servers across
the Internet.

Exhibit 1. Network Diagram

Potential Web services on a typical E-commerce site could include a tax calculation service, a shopping cart service, or an inventory control application. These Web services can be internal to an organization or, in some cases, provided by an external company, such as a bank that might develop a credit card clearing service. Exhibit 2 shows the Web services that are available in the sample application discussed below. The idea of creating software components is not new. In fact, it has been tried many times in the past. But what makes the Web services model exciting is that it is backed by industry leaders such as Microsoft, IBM, Oracle, and Sun, as well as by smaller vendors. Web services are modeled on how companies, systems, and people actually interact, but are built on a distributed Internet application framework. Existing COM or EJB objects can be wrapped and distributed with this technology, allowing greater reuse of existing code. By allowing legacy applications to be wrapped in SOAP and exposed as services, the Web services architecture easily enables new interoperability between existing applications. Wrapping hides the underlying plumbing from the developer, which allows services to coexist in existing legacy applications and also be exposed to a wider audience as Web services. Web services need to provide standard communication protocols, data representation, description languages, and a discovery mechanism. The major players have worked together to create a foundation for interoperability with a set of Web services standards such as XML, SOAP, UDDI, and WSDL, which are defined below.

XML

Extensible Markup Language (XML) is the foundation for Web services and was a standard long before Web services were developed. XML was chosen
in part to ensure that tools and expertise already existed in the marketplace even though the concept of Web services was relatively new.

Exhibit 2. Application Architecture

Web services promote interoperability by minimizing the amount of shared understanding required. The XML-based service description is the lowest common denominator, allowing Web services to be truly platform and language independent and to be implemented on a wide variety of underlying infrastructures. For Web services to overcome the deficiencies of their predecessors, they needed to be built on standard, widely supported protocols. XML treats all information as text, and an XML Schema makes it possible to describe the type and structure of XML documents using XML itself. Because an XML Schema is self-describing, it is well suited to Web services, where converting between data types will often be required, depending on the platform. The data is delivered as text, but the type is defined in the schema, ensuring that even if the originating type is not supported by the calling platform, the data can still be read.

SOAP

Simple Object Access Protocol (SOAP), the standard for communicating the XML messages, provides the communications link between systems
and applications. Similar to an HTML document, a SOAP message has a clear distinction between its header and its body. To maintain the openness of the XML messages, SOAP transmits all information via HTTP, enabling messages to pass seamlessly through firewalls while maintaining their integrity. This also allows Web services to be secured using the same tools that are used for current Web sites. SOAP also defines a standard for error representation and a standard binding to HTTP. To invoke a SOAP operation, several pieces of information are needed in addition to the XML Schema, such as the endpoint URL, the protocols supported, the encoding, etc.

WSDL

Web Services Description Language (WSDL) is the XML-based specification for describing Web services. A WSDL document describes what functions the Web service performs and how the communication operates. WSDL is vendor neutral and does not depend on a specific platform or programming language, keeping the service description as open as possible.

UDDI

Universal Description, Discovery, and Integration (UDDI) is an industry-standard set of protocols and a public directory for the registration and real-time lookup of available Web services. Companies that wish to make their Web services available register them with a UDDI Registry, and companies that wish to use the services are able to find them in this directory. UDDI is an industrywide initiative to standardize how Web services are discovered. It defines a SOAP-based API for querying centralized Web service repositories. UDDI makes it possible for developers to discover the technical details of a Web service (its WSDL) as well as other business-oriented information and classifications. For example, using UDDI you should be able to query for all Web services that are related to stock quote information, implemented over HTTP, and free of charge. UDDI is to Web services what the registry is to DCOM.
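To make these standards concrete, the short Java sketch below parses a SOAP-style response with the standard javax.xml.parsers DOM API and converts the text carried on the wire into a typed value, the way a schema-aware consumer would. The envelope and its element names are invented for illustration; a real message would carry the SOAP namespaces and be described by a WSDL document.

```java
import java.io.StringReader;
import javax.xml.parsers.DocumentBuilder;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.xml.sax.InputSource;

public class SoapParseDemo {

    // A hypothetical SOAP-style response; real envelopes carry namespaces
    // and are described by a WSDL, both omitted here for brevity.
    static final String RESPONSE =
        "<Envelope>" +
        "  <Body>" +
        "    <GetTaxResponse>" +
        "      <amount>12.50</amount>" +
        "    </GetTaxResponse>" +
        "  </Body>" +
        "</Envelope>";

    // Extract the <amount> element and convert its text content to a
    // double, as the schema's type declaration tells the consumer to do.
    public static double extractAmount(String xml) {
        try {
            DocumentBuilder builder =
                DocumentBuilderFactory.newInstance().newDocumentBuilder();
            Document doc =
                builder.parse(new InputSource(new StringReader(xml)));
            String text = doc.getElementsByTagName("amount")
                             .item(0).getTextContent();
            return Double.parseDouble(text); // text on the wire, typed in code
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        System.out.println("Tax amount: " + extractAmount(RESPONSE));
    }
}
```

Because the payload is plain text until the consumer converts it, the same message can be read on any platform, which is exactly the interoperability point made above.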
UDDI simply makes it possible to build an extra level of abstraction into your Web service so that you do not have to hardcode the specific endpoint location at development time. This approach will work within a controlled environment like an intranet or a network of trusted business partners; in the real world, however, there are too many other issues (quality, legal, etc.) that will inhibit its use. UDDI can still be a useful tool for developers when they need to discover available Web services, either in their enterprise or on the Web, at development time. The power of Web services lies in the fact that they are self-contained, modular applications. They can be described, published, located, and called over the Internet. The Web services architecture comprises three basic roles: provider, requester, and broker. The Internet has taught us that to be usable, applications need to be open to many platforms and operating systems. “Old-style” application integration involved building connectors that were specific to the device and applications. If a system needed to talk to two different applications, one would need two different connectors. For the next generation of E-business systems to be successful, they need to be flexible. Web services allow these systems to be composed of many modular services that can be mixed and matched as needed. These services are “loosely coupled,” which allows them to be changed or expanded at any time without affecting the system as a whole. As the rate of change increases in not only computing but also business, systems need to be flexible so that they do not have to be rewritten every time a subsystem is modified. Web services are only bound at runtime, thus allowing systems to grow and change independently of each other. The WSDL describes the capabilities of the Web service. This allows developers to create applications without knowing the details of how a Web service is architected.

CHALLENGES SURROUNDING WEB SERVICES

Web services technologies are not without their own issues. Imagine a bank creating a credit card Web service that it makes available to hundreds of business partners. What would happen if for some reason the bank’s system went down? None of its business partners would be able to process credit card payments. Testing and debugging applications that span operating systems and companies will prove to be a challenge; along with every live Web service, a “test mode” will have to be created for companies to use during development and testing. As the lines blur between control of services, there is a potential for mass finger-pointing when things do go wrong. Once the systems are deployed and in production, methods for communicating and approving changes to Web services need to be developed. There is also the issue of payment for using these services.
A credit card service is fairly straightforward because a transaction charge can be levied on each purchase. But what about a shopping cart service, which could be used many times without the shopper actually purchasing products? Do you charge monthly, per use, by bandwidth, or by a combination of all of the above? Likely, various models will emerge, and smart providers will be flexible as they seek to build a sustainable business model. Web services also allow one to take advantage of “silos of expertise.” If one utilizes a third-party shipping firm to deliver the goods sold on one’s Web site, why should one develop a shipment tracking tool? The shipping company has the expertise in its business to create this application and to modify it as its business changes. This would also be a value-added service for its business partners because once the shipping company develops its Web service, it can be used by any of its business partners to track shipments.
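The “silos of expertise” idea rests on the loose coupling described earlier: a consumer codes only against a described interface, so the provider behind it can be swapped without the consumer changing. The Java sketch below illustrates the principle with plain interfaces; the TrackingService contract and its in-memory provider are hypothetical stand-ins for what would, in practice, be a WSDL-described service hosted by the shipping partner.

```java
import java.util.HashMap;
import java.util.Map;

// The client knows only this contract; in a real deployment the contract
// would be published as a WSDL document rather than a Java interface.
interface TrackingService {
    String statusOf(String trackingNumber);
}

// One provider. A shipping partner could supply a different implementation
// behind the same contract, and the consumer would never notice.
class InMemoryTracker implements TrackingService {
    private final Map<String, String> shipments = new HashMap<>();

    InMemoryTracker() {
        shipments.put("PKG-1", "IN TRANSIT");
        shipments.put("PKG-2", "DELIVERED");
    }

    public String statusOf(String trackingNumber) {
        return shipments.getOrDefault(trackingNumber, "UNKNOWN");
    }
}

public class LooseCouplingDemo {
    // The binding to a concrete provider happens here, at runtime;
    // swapping providers does not ripple through the rest of the system.
    static String track(TrackingService service, String id) {
        return service.statusOf(id);
    }

    public static void main(String[] args) {
        TrackingService service = new InMemoryTracker();
        System.out.println(track(service, "PKG-2"));
    }
}
```

Replacing InMemoryTracker with a stub that calls the partner's live service changes one line of wiring, which is the flexibility argument the chapter makes for loosely coupled systems.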
Exhibit 3. Application Flow
PROVE IT … THE “HOTTUB” SAMPLE APPLICATION

Everyone hears the promises about new technologies and software systems, but many of us have been jaded by years of software promises that do not deliver on the hype. The only way to prove these claims is to actually see them in action. We decided to build a prototype site based on the Web services model. The result was a project, code named “HotTub,” which among other things was meant to prove the concept of reusable Web services across multiple platforms. Exhibit 3 shows the application flow. To accomplish this task, our developers created two versions of an E-commerce bookstore: one built on Microsoft’s .NET framework and written in C#, and the other written in JSP (Java Server Pages) on an Apache Tomcat Web server. In the initial phase of the project, all Web services were written on the Microsoft platform in C# with a Microsoft SQL Server 2000 database. Also included, but not within the scope of this chapter, was back-end legacy integration utilizing Microsoft’s BizTalk Server. After the initial planning and architecture were determined, the database was created and loaded with standard test data. The team then developed seven distinct Web services: Publisher List, Catalog Inquiry, Shopping Cart, Tax Calculator, Shipping Calculator, Credit Card Authorization, and Final Order Processing. Each service was planned to be distinct and generic to allow it to be used within a wide range of other applications (such as WAP phones, kiosks, interactive voice response, etc.) that could be developed in future phases of the project. The goal of this project was not only to explore Web services, but to test-drive the new Microsoft .NET framework and the C# language. Although
the developers on the team were traditionally VB/ASP coders, the fully integrated development environment of .NET helped them develop the Web services in rapid succession. Once the Web services were complete, the development of the .NET and JSP front-end versions was started in parallel. One of the first tasks for both streams was to consume the Publisher List Web service so that the Web site could allow the user to select from a dropdown box the publisher for which they would like to search. The .NET team was able to readily consume the Web service because it was “.NET” communicating with “.NET.” All the functionality was built into Visual Studio .NET to consume the Web service and to treat the data like any dataset read directly from a database. Once the correct versions of the Apache SOAP and XML Parser components were installed, the JSP team was able to consume the exact same .NET Web service and display the resulting data. The JSP team took a different approach to data manipulation and used the opportunity to implement XSL transformations on the standard XML that was returned from the Web service. XSL is a very powerful XML styling language that can manipulate an XML file and return standard HTML. At this stage, the developers had proven that Web services were indeed cross-platform and cross-language compatible. All that was left to do was to create fully functional sites by implementing the remainder of the Web services that had been developed. Much of the site logic itself was contained in the Web services; for example, the Shopping Cart service contained all the logic to manage adding, updating, and deleting records from the shopping cart. This made the creation of the “public” layer of the site progress very quickly. All the development team had to worry about was passing the correct parameters to the Web service. As typically happens in any IT project during development, a bug was found in one of the Web services.
Once the team member corrected the problem in the logic, the Web service was copied to the Web server and deployed without halting work on the rest of the system, registering the new object, or restarting the server. The development team was impressed by the ease of deployment of the .NET objects and was quite disappointed when the next project to which they were assigned forced them to return to standard ASP programming.

LESSONS LEARNED

Web services not only lived up to the hype and promise but, with the tools available, in some ways exceeded them. By separating the database from the Web site, Web services allow the Web site to be hosted virtually anywhere, on any platform.
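The XSL transformation approach the JSP team used, styling the XML returned by a service into HTML, can be sketched with Java's built-in javax.xml.transform API. The XML document and stylesheet below are invented for illustration; the actual project transformed the Publisher List service's output.

```java
import java.io.StringReader;
import java.io.StringWriter;
import javax.xml.transform.Transformer;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.stream.StreamResult;
import javax.xml.transform.stream.StreamSource;

public class XslDemo {

    // Hypothetical XML, as a Web service might return it.
    static final String XML =
        "<publishers><publisher>Auerbach</publisher>" +
        "<publisher>CRC Press</publisher></publishers>";

    // A minimal stylesheet that turns the XML into an HTML list.
    static final String XSL =
        "<xsl:stylesheet version='1.0' " +
        "    xmlns:xsl='http://www.w3.org/1999/XSL/Transform'>" +
        "  <xsl:output method='html'/>" +
        "  <xsl:template match='/publishers'>" +
        "    <ul><xsl:for-each select='publisher'>" +
        "      <li><xsl:value-of select='.'/></li>" +
        "    </xsl:for-each></ul>" +
        "  </xsl:template>" +
        "</xsl:stylesheet>";

    // Apply the stylesheet to the XML and return the generated HTML.
    public static String transform(String xml, String xsl) {
        try {
            Transformer t = TransformerFactory.newInstance()
                .newTransformer(new StreamSource(new StringReader(xsl)));
            StringWriter out = new StringWriter();
            t.transform(new StreamSource(new StringReader(xml)),
                        new StreamResult(out));
            return out.toString();
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        System.out.println(transform(XML, XSL));
    }
}
```

Because the stylesheet, not the service, owns the presentation, the same service response can be restyled for a browser, a WAP phone, or a kiosk without touching the service itself.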
One of the lessons learned was the criticality of planning. Major changes to Web services once they have been deployed could have a large impact on the sites consuming them. For this reason, there will be cases where several versions of the “same” Web service are deployed at the same time. Version control and service level definitions are important because clients will use “older” versions and upgrade when they see benefits in the most current service offered. The goal of not being tied to any one platform was met. Two systems were created that had the same functionality and consumed the same data, yet were developed with totally separate operating systems, Web servers, and programming languages. This is not only a benefit for systems that cross company boundaries, but also a great benefit for companies that are migrating from one system to another or integrating separate divisions that use different technology platforms. By forcing applications to be accessed through Web services, one ends up with a series of base functions. Once these are individually analyzed, they can be distributed to the experts in that field. A bank might produce a credit card Web service and FedEx might produce a package tracking service. This will save development time and probably add features to products that would not have been considered feasible using previous technologies. Security will be one of the key concerns when choosing companies with which to partner. Security can be tight, but it has to be carefully planned when extending Web services to outside organizations. The good thing is that because Web services travel over HTTP, they are very firewall friendly and can be secured using existing Web site methods.
Once a core set of external Web services is created and commercially available, or an organization has a library of its own services, the development time of new applications that can take advantage of these services is greatly reduced. The front end becomes the “mortar” that holds the Web service “bricks” together. If one takes the simple function of a tax calculator and multiplies the time it takes to create one by the number of times that different systems require this function, one can imagine how much time can be saved by having a standard tax Web service. Now imagine a situation where the tax rate changes. If all sites within a company used the same Web service, it would only have to be changed once, inside the Web service, without having to make changes to the actual Web sites.

TECHNOLOGIES EMPLOYED

Microsoft’s .NET framework and the new language C# were built from the ground up for Internet development and, more importantly, for Web services. To change a typical C# component to a Web service meant the addition of only one line of code. The .NET framework took care of all the interfaces and plumbing required. Coding in .NET now provides access to an object model that most programmers thought was gone forever when they moved to the Web. There are now data grids and button properties that resemble the days of client/server programming but are built with the Internet in mind. To display a table of records with alternating row colors and paging is now a matter of setting three or four properties instead of 20 lines of code in ASP or JSP. Although C# is a new language, it shares a large percentage of its syntax with C++, which makes it an easy transition for a C++ programmer. Visual Studio .NET is such a complete development environment that even VB and ASP programmers with no previous C++ experience can very rapidly develop complex applications using C#. Although the J2EE platform is also able to consume Web services (whether built on Java or on Microsoft .NET) natively with code that is available now, it was necessary to download various versions of add-in components to the Tomcat server to access the SOAP protocols and display the XML using XSL templates. The versions of these components were not always compatible; once the right combination was found, they worked very efficiently. The combination of SOAP Web services returning XML and XSL templates converting the XML into dynamic Web pages is very powerful. Although this approach is compatible across many systems, it is missing a fully integrated development environment like that of Microsoft Visual Studio .NET.

SUMMARY

While some of the technologies are new, the basis for Web services is solidly grounded in existing standards that will speed the adoption of this technology.
Additional standards may have to be put in place before there is seamless business-to-business integration, but this is the first step on that road; in the next 18 months, Web services will emerge as the tool for extending Web sites into E-commerce collaboration engines.
Chapter 34
J2EE versus .NET: An Application Development Perspective V. Ramesh Arijit Sengupta
J2EE and .NET are two competing frameworks proposed by an alliance of companies led by Sun Microsystems and Microsoft, respectively, as platforms for application development on the Web using the object-oriented programming paradigm. Both J2EE and .NET are component-based frameworks, featuring a system-level abstraction of the machine (called “Java Virtual Machine” in J2EE and “Common Language Runtime” in .NET). They both provide extensive support for developing various kinds of applications, including support for Web services and independent distributed software components. This chapter examines the relative benefits and drawbacks of the two frameworks. A brief overview of each of the frameworks is given in the next section. We then examine the differences between the two frameworks along various dimensions. We first examine the capabilities of each of the above frameworks and then present some issues that decision makers need to consider when making the J2EE versus .NET decision. J2EE (JAVA PLATFORM ENTERPRISE EDITION) The J2EE framework (see Exhibit 1) provides a component-based model for enterprisewide development and deployment of software using the Java programming language. This platform defines the standard for developing applications using a multi-tier framework and simplifies the process of application development by building applications using modular compo0-8493-1595-6/03/$0.00+$1.50 © 2003 by CRC Press LLC
415
PROVIDING APPLICATION SOLUTIONS
$SSOLFDWLRQ &OLHQW &RQWDLQHU
-DYD(QWHUSULVH(GLWLRQ -((
$SSOHW &RQWDLQHU
(-%
&RQQHFWRU
-DYD,'/
-63
50,,,23
-'%&
-DYD0DLO
6HUYOHW
-7$
-06
(-%&RQWDLQHU
:HE&RQWDLQHU
$SSOHW
-1',
;0/
$SSOLFDWLRQ &OLHQW
-DYD6WDQGDUG(GLWLRQ-6( Exhibit 1. The J2EE Framework Adapted from Singh, R., “J2EE-Based Architecture and Design — The Best Practices,” http://www.nalandatech.com; accessed November 24, 2002, Nalanda Technologies.
nents and handling different levels of application behavior automatically without the necessity of complex code. J2EE is based on the Java 2 platform, and uses many of its features, such as “Write Once, Run Anywhere” portability, JDBC for database access, RMI/IIOP technology for interaction with other distributed resources, and a security model. The J2EE model is platform independent, with support for many different platforms, including Windows, different versions of UNIX, Linux, as well as different handheld operating systems. Several vendors support the J2EE platform, including Sun, IBM, BEA, and Oracle. In addition, there are a number of open source projects that have lent their support for various parts of the J2EE framework, including Jakarta (Apache) and JBoss. MICROSOFT .NET Microsoft .NET (see Exhibit 2) is a set of Microsoft software technologies for connecting information, systems, devices, and personnel. .NET is also built on small reusable components, and includes support for (1) different types of clients and devices that can communicate via XML, (2) XML Web services written by Microsoft and others that provide developers with ready-to-use application components, (3) server infrastructure that sup416
J2EE versus .NET: An Application Development Perspective
9%
&
&
-6FULSW
-
&RPPRQ/DQJXDJH6SHFLILFDWLRQ $631(7:HE6HUYLFHV DQG:HE)RUPV
:LQGRZV )RUPV
$'21(7'DWDDQG;0/ 1(7)UDPHZRUN%DVH&ODVVHV &RPPRQ/DQJXDJH5XQWLPH Exhibit 2.
9 L V X D O 6 W X G L R 1(7
The .NET framework and Visual Studio.NET
Adapted from Microsoft .NET framework Academic Resource Kit.
ports such applications, and (4) tools that allow developers to build such applications. At the server level, .NET is not platform-independent, because only Windows servers have support for .NET at the moment. However, .NET has support for many different programming languages in which components can be built, such as C#, Visual Basic, C++, JScript, and J# (a derivation of Java). A COMPARISON OF J2EE AND .NET This section examines the differences between the two frameworks in terms of the following characteristics: programming languages, Web applications, Web services, backwards compatibility, support for mobility, and marketing ability. Programming Languages (Advantage: Java) Exhibit 1 and Exhibit 2 show the various languages supported by J2EE and Microsoft .NET. The diagrams show that .NET allows programmers the flexibility of using many different languages. However, only some of these languages are considered “first-class citizens,” that is, are capable of taking full advantage of all the features of .NET. Programmers can mix-and-match such languages when creating their applications. The J2EE platform is based on a single language, Java. Both of these frameworks have adopted 417
the object-oriented paradigm. The primary languages in the .NET framework — VB.NET and C# — are both purely object oriented, and the same is, of course, true for Java.

One of the biggest advantages of Java is the learning process that went into the design of the language. Java was the product of significant research and development rather than a competitive scramble to the top of the market. Java appeared as early as 1995, and from the beginning it was designed with the Internet in mind and with device-independent, modular software development as its background. Java is a very clean language, offering niceties such as automatic garbage collection and independence from the system-specific libraries that have hampered most other languages. The concept of a uniform "virtual machine" also means that Java developers can write code on one platform and deploy it on an entirely different platform that supports the Java virtual machine (JVM).

Microsoft's advantage in this regard is the support for different languages in its .NET framework. The problem is that the primary language of choice for .NET is C#, which is a brand-new and untested language. Because the Web development model is based on ASP — which in its earlier version was primarily built on top of Visual Basic — it is natural that most new applications developed in ASP.NET, or applications upgraded to ASP.NET, will continue to use Visual Basic (albeit the .NET version of the language). However, many organizations and developers do not perceive Visual Basic as a language for serious software development, and this can definitely work against Microsoft .NET.

Given the diversity of languages supported by .NET, at face value it might seem that the edge should go to Microsoft. However, an examination of what has happened at universities over the past five years indicates the difficulty these languages are going to face in the future.
In the late 1990s, universities were looking for a language that could be used to teach object-oriented principles. C++, despite its popularity in practice, was difficult to teach. When Java was introduced, universities had an alternative to C/C++ for object-oriented education, and Java has gained considerable momentum at universities ever since; a significant number have switched their curricula to Java within the last couple of years. The recency of that transition means that universities will be reluctant to make another change to a new language such as C# or VB.NET unless there is a compelling reason. Given that both C# and VB.NET resemble Java considerably, such a reason may be hard to find. Thus, it is likely that the next generation of programmers will be trained on Java.
Another issue that might work against the .NET framework, and VB.NET in particular, is the amount of retraining it will take to convert Visual Basic programmers into Visual Basic.NET programmers. This is especially true for those programmers who wish to take advantage of the object-oriented features of the language. This retraining might slow the rate at which .NET is adopted within companies.

Web Application Level (Advantage: Java)

The primary objective of both the J2EE and .NET frameworks is to provide support for developing Web-enabled applications. These applications range from customer-centric E-commerce applications to business-to-business (B2B) applications. Modern Web applications are developed using a multi-tier model, in which client devices, presentation logic, application logic, data access logic, and data storage are separated from each other.

The biggest difference between the J2EE and .NET frameworks lies in the hardware and software choices available in the two frameworks. In essence, .NET is an integrated framework, while J2EE is a framework that is integratable. Thus, the J2EE framework allows organizations, in theory, to mix and match products from several vendors. For example, within this framework, one could use an Apache Web server with a BEA WebLogic application server that connects to an Oracle database server. At a later point, this same organization could replace the BEA WebLogic application server with a different product, such as IBM WebSphere, Oracle's Application Server, or the free JBoss server. All of these can be run on top of several operating system platforms, including UNIX, Linux, and Windows. With the .NET framework, an organization is essentially limited to systems software developed by Microsoft, including its IIS Web server.
The integrated nature of the .NET framework might be considered an advantage for small and medium-sized applications, which typically do not have the scalability needs of large applications. This is not to say that .NET cannot be used to develop large applications; however, we believe that the largest adopters of .NET will be the current small and medium-sized application developers that use ASP. Even for these organizations, though, the shift to .NET will not be easy and will require significant retraining.

For large applications with significant scalability and security requirements, the J2EE framework provides the flexibility needed to achieve the desired architectural and performance goals. However, Sun's recent slowness in keeping up with the standards and in coming up with development paradigms has put a dent in this progress. Also, the delay in
integrated support for JavaServer Pages and in support for the Web services standards has led companies such as Oracle, IBM, and BEA to develop many proprietary Java-based Web application extensions. This might affect the mix-and-match abilities that are an essential advantage of the framework.

The maturity of the J2EE framework means that creating large-scale applications has become more a science than an art. The J2EE application servers manage most of the complexity associated with scalability, allowing programmers to focus on application development issues. In this regard, the existence of J2EE design patterns allows organizations to adopt best practices and learn from them.

As previously noted, two key issues that need to be addressed by all Web applications are scalability and security. Both frameworks rely heavily on server software — operating systems, Web servers, and database systems — and they seem to have an even share of success in this regard. The J2EE framework is operating system agnostic. The availability of a cross-platform Web server in Apache and of database systems such as Oracle has certainly helped the cause of J2EE. In addition, the preferred choice for running J2EE has been UNIX-based platforms. The built-in reliability, scalability, and security of these platforms has been one of the key reasons why J2EE has been successful for developing large-scale applications. On the other hand, the fact that .NET runs only on Microsoft Windows and its associated Web server may be considered a disadvantage by many architects. Further, despite recent advances, security holes are common in Windows. In addition, scaling a Windows-based system often means adding several additional machines, which in turn are more difficult to manage.
However, an advantage of the .NET framework is its tight integration of the operating system with the Web and database servers, which can make applications more resource efficient and potentially give them the edge in performance, especially when a small number of servers is adequate to meet the business's needs (e.g., in the case of small and medium-sized applications).

In summary, for the various organizations that have made a significant investment in J2EE-based applications, we do not see any reason to shift to the .NET framework. In the short term, we envision that the existing base of ASP-based applications is where .NET will make significant inroads.

Web Services Level (Advantage: Microsoft)

Web services is the latest buzz to take the software industry by storm. However, very few people truly understand what Web services are really all about. Web services represent a paradigm in which systems developed on different platforms can interoperate with each other. The key to this interoperability is a series of standards, all based in one form or another on XML. Primary among these standards are SOAP (Simple Object Access Protocol), UDDI (Universal Description, Discovery, and Integration), and WSDL (Web Services Description Language).

Both J2EE and .NET have more or less equal support for these Web services standards. The difference lies in the fact that, in .NET, support for XML is an integral part of the framework, whereas at this point, in J2EE, XML support has to be "bolted on" (see Exhibit 1 and Exhibit 2). For example, in .NET, data retrieved from a database using the DataSet object in ADO.NET is actually stored in XML format. The J2EE alliance has been somewhat sluggish in getting these standards integrated into the framework. This has resulted in various Java-based application development companies creating their own proprietary, Java-based methods for Web services.

Further, Microsoft is one of the organizations playing a key role in the standards bodies developing these Web services standards. Its willingness and agility in incorporating the standards into .NET has given .NET a temporary advantage in the Web services arena. It is, however, expected that the standards will eventually become an integral part of both frameworks. The ability of systems created using the two frameworks to interoperate means that companies may not need to switch frameworks in order to interoperate with systems internally or externally. This might work against .NET in the sense that organizations that have already committed to J2EE (due to its head start in the Web applications arena) will have even less of a compelling reason to switch to the .NET framework.
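The XML grounding of these standards is easy to see in practice. As an illustrative sketch only — the operation name GetQuote and its payload are invented here, and neither framework's actual tooling is shown — a minimal SOAP 1.1 request envelope can be assembled with an ordinary XML library:

```python
# Sketch of a SOAP request: an Envelope/Body wrapper around an
# application-defined call. "GetQuote" and "symbol" are invented names.
import xml.etree.ElementTree as ET

SOAP_NS = "http://schemas.xmlsoap.org/soap/envelope/"

def build_envelope(symbol):
    envelope = ET.Element(ET.QName(SOAP_NS, "Envelope"))
    body = ET.SubElement(envelope, ET.QName(SOAP_NS, "Body"))
    call = ET.SubElement(body, "GetQuote")       # hypothetical operation
    ET.SubElement(call, "symbol").text = symbol  # its single argument
    return ET.tostring(envelope, encoding="unicode")

message = build_envelope("SUNW")
```

Because the message is plain XML, either a J2EE or a .NET endpoint can parse it; that platform neutrality is precisely what SOAP, WSDL, and UDDI trade on.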
Backwards Compatibility (Advantage: Java)

Because VB.NET is a new, object-oriented rewrite of the original Visual Basic language, very little old VB code can be upgraded to VB.NET. Applications written in legacy languages need to be migrated to the new .NET platform in order to take full advantage of the .NET technology. Similarly, the Web-based scripting language ASP (Active Server Pages) is replaced by ASP.NET, which has a radically different look and feel. Although Microsoft allows ASP and ASP.NET code to coexist on the same server, interaction between them is not easy. The Microsoft Visual Studio product includes a migration wizard for moving existing Visual Basic code into VB.NET, although such tools are not foolproof and can convert only a subset of all the existing code types.

Sun Microsystems, on the other hand, has traditionally taken backwards compatibility quite seriously. Most newer versions of Java still include support for the older APIs, although such use is "deprecated" and not recommended. Given this, it is likely that an organization that has applications written in Java will continue to use that language. However, organizations with a VB code base may not automatically move to VB.NET because of the significant amount of rewriting required. Indeed, this barrier might cause some organizations to reexamine their options, and possibly cause some to switch to the J2EE environment.

Support for Mobility (Advantage: Even)

Microsoft has proposed a version of the .NET framework known as the .NET Compact Framework, a smaller version of the desktop/server .NET framework designed to run on devices that support Microsoft's mobile device operating systems, such as Windows CE or Pocket PC 2002. This compact framework comes with full support for XML and ADO.NET. In addition, Microsoft has a toolkit for developing applications for its Pocket PC environment that became available at the same time as Visual Studio .NET.

Sun has a version of the Java platform known as J2ME (Java 2 Platform, Micro Edition). J2ME has gained immense popularity on mobile devices in the past couple of years, with leading vendors such as Nokia, Motorola, and others supporting Java on their devices. In addition, there is also a Java virtual machine available for the Palm OS. We believe that the decision to use J2ME or the .NET Compact Framework (or both) will be based primarily on the types of devices an organization wants to support.

Marketing (Advantage: Microsoft)

Although Sun had a head start in developing its framework, Microsoft is quickly catching up, thanks to its fierce and aggressive marketing practices. Microsoft has built a well-known advantage in the desktop operating system market, and thanks to its marketing strategy it is quickly closing the gap in the server-level database market as well.
Given its history of successfully marketing other products, one would definitely have to give Microsoft the marketing advantage in promoting .NET. The J2EE side of the equation is hampered in this regard because its products are marketed by several companies, many of which are competing for the same market share. This may be the biggest threat facing the J2EE framework moving forward.
SUMMARY

Both Sun's J2EE and Microsoft's .NET frameworks have advantages and disadvantages that cannot be ignored. At the end of the day, however, the choice should be based on the specific needs and characteristics of the organization making the decision. We do not see any clear advantage in switching to .NET if an organization is already committed to Java, or vice versa.

Editor's Note: See also Chapter 55, "At Your Service: .Net Redefines the Way Systems Interact."
Chapter 35
XML: Information Interchange

John van den Hoven
Today's rapidly changing, global business environment requires an enterprise to optimize its value chain to reduce costs, reduce working capital, and deliver more value to its customers. The result is an ever-increasing demand for the efficient interchange of information, in the form of documents and data, between systems within an enterprise, and between the enterprise's systems and those of its customers, suppliers, and business partners. The wide range of technologies, applications, and information sources in use today presents the modern enterprise with an immense challenge in managing and working with these different data formats and systems. This internal challenge is further magnified by the effort required to work with the different data formats and systems of other enterprises. Increasingly, eXtensible Markup Language (XML) is viewed as a key enabling standard for exchanging documents and data because of its ease of implementation and operational flexibility. It is now one of the key technology standards in a modern information systems architecture that enables an enterprise to be more flexible, responsive, and connected.

XML OVERVIEW

Definition

The XML specification is defined by the World Wide Web Consortium (W3C). The eXtensible Markup Language 1.0 W3C Recommendation defines XML as follows: "Extensible Markup Language, abbreviated XML, describes a class of data objects called XML documents and partially describes the behavior of computer programs which process them. XML is an application profile or restricted form of SGML, the Standard Generalized Markup Language."1

0-8493-1595-6/03/$0.00+$1.50 © 2003 by CRC Press LLC

Examining the component parts of the term "eXtensible Markup Language" can further enhance the definition of XML. A markup language is a system of symbols and rules used to identify structures in a document. XML is
considered to be very extensible because the markup symbols are unlimited and self-defining, allowing the language to be tailored to a wide variety of data exchange needs. Thus, XML is a system of unlimited and self-defining symbols and rules used to identify structures in a document.

Description

XML is becoming a universal format for documents and data. It provides a file format for representing data, and a Document Type Definition (DTD) or an XML Schema for describing the structure of the data (i.e., the names of the elements and attributes that can be used, and how they fit together). XML is "a set of rules (you may think of them as guidelines or conventions) for designing text formats that let you structure your data."2 "The XML syntax uses matching start and end tags, such as <country> and </country>, to mark up information. A piece of information marked by the presence of tags is called an element; elements may be further enriched by attaching name-value pairs (for example, country = "US") called attributes. Its simple syntax is easy to process by machine, and has the attraction of remaining understandable to humans."3 More information on XML can be found in the XML FAQ (Frequently Asked Questions) at http://www.ucc.ie/xml/.

History and Context

Hypertext Markup Language (HTML) was created in 1990 and is now widely used on the World Wide Web as a fixed language that tells a browser how to display data. XML became available in 1996, and a W3C standard in 1998 (revised in 2000), to address HTML's shortcomings in handling very large documents. XML is more complex than HTML because it is a metalanguage used to create markup languages, while HTML is one of the languages that can be expressed using XML (XHTML is a "reformulation of HTML 4 in XML 1.0"). While HTML can be expressed using XML, XML itself is a streamlined, Web-enabled version of Standard Generalized Markup Language (SGML), which was developed in the early 1980s as the international standard metalanguage for markup. SGML became an International Organization for Standardization standard (ISO 8879) in 1986. It can be said that XML is an example of Pareto's principle at work, in that it provides 80 percent of the benefit of SGML with 20 percent of the effort.

VALUE AND USES OF XML

Value

XML can be a container for just about anything. It can be used to describe the contents of a very wide range of file types, including Web pages, business documents, spreadsheets, database files, address books, and graphics, to a very detailed level. This allows technology vendors, business users, and enterprises to use XML for anything where interoperability, commonality, and broad accessibility are important. Many technical and business benefits result from this flexibility.

Technical Value. XML provides a common transport technology for moving documents and data around in a system-neutral format. Its key benefit is its ability to abstract data from specific technologies, such as the processor the data runs on, the database that manages the data, the communication protocols that move the data, and the object models and programming languages that manipulate the data. By providing a common format for expressing data structure and content, XML enables applications and databases to interchange data without having to manage and interpret proprietary or incompatible data formats. This substantially reduces the need for custom programming and cumbersome data conversions, and the need to deal with many of the technical details associated with the various technical infrastructures and applications in use today.
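The tag-and-attribute syntax this interchange relies on can be seen in a small sketch; the <address> document and its fields below are invented for illustration:

```python
# A self-describing XML record, read back with Python's standard library.
# The document content here is invented for illustration.
import xml.etree.ElementTree as ET

document = '<address country="US"><city>Boston</city><zip>02134</zip></address>'

root = ET.fromstring(document)   # any XML-aware system can do this step
country = root.get("country")    # an attribute: a name-value pair
city = root.find("city").text    # an element: text between matching tags
```

Nothing about the record is specific to the producing platform, database, or programming language; the consuming side needs only an XML parser.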
XML is also the foundation for many key emerging standards, notably the Web services standards: SOAP (Simple Object Access Protocol) for invoking Web services, WSDL (Web Services Description Language) for describing them, and UDDI (Universal Description, Discovery, and Integration) for registering and discovering them all rely on XML as their foundation. With its capabilities for bridging different technologies, XML will disrupt many technology markets, including enterprise application integration, business-to-business integration, application servers, personal productivity software (such as word processing), messaging, publishing, content management, portals, and application development. The use of XML will enable several of these markets to converge. It will also enable proprietary data formats to be replaced, resulting in greater interoperability among the various applications and technologies.

Business Value. The impact of XML can be compared to that of the Rosetta Stone. The Rosetta Stone was discovered in Egypt in the late 18th century, inscribed with ancient Egyptian hieroglyphics and a translation of them in Greek. The stone proved to be the key to understanding Egyptian writing. It represents the "translation" of "silent" symbols into a living language, which is necessary to make those symbols meaningful.
The interfaces used in today's enterprises have become the modern form of hieroglyphics. XML promises to play a role similar to that of the Rosetta Stone by enabling a better understanding of these modern hieroglyphics and by making the content of the data in these interfaces understandable to many more systems. The business value of XML can be better understood by examining the major ways in which it is currently being used with documents and data in the enterprise.

Uses

There are two major classes of XML applications: documents and data. XML is becoming a universal format for both because of its capabilities for data exchange and information presentation. In terms of documents, XML is derived from SGML, which was originally designed for electronic publishing, and electronic publishing remains one of the main uses of XML today. In terms of data, XML is widely used as a data exchange mechanism. XML also enables more efficient and effective Web searching for both documents and data.

Electronic Publishing. Electronic publishing focuses on enabling the presentation of the content of documents in many different forms. XML is particularly well suited for internationalized, media-independent, electronic publishing.
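In full-scale publishing this separation of content from presentation is driven by XSL stylesheets; as a standard-library-only sketch, with invented content and with plain functions standing in for stylesheets, the same XML source can feed two presentation targets:

```python
# One media-independent XML source, two renderings. In practice XSL
# stylesheets, not hand-written functions, would define each presentation.
import xml.etree.ElementTree as ET

article = ET.fromstring(
    "<article><title>XML</title><body>Structure your data.</body></article>")

def to_html(doc):   # stand-in for a "Web browser" stylesheet
    return "<h1>%s</h1><p>%s</p>" % (doc.findtext("title"), doc.findtext("body"))

def to_text(doc):   # stand-in for a "printed page" stylesheet
    return "%s\n%s" % (doc.findtext("title").upper(), doc.findtext("body"))

web_page = to_html(article)
printout = to_text(article)
```

The source document never changes; only the rendering applied to it does, which is the "write once and publish everywhere" idea in miniature.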
XML provides a standardized format that separates information content from presentation, allowing publishers of information to "write once and publish everywhere." XML defines the structure and content (independent of its final form), and a stylesheet is then applied to define the presentation in an electronic or printed form. These stylesheets are defined using the eXtensible Stylesheet Language (XSL) associated with XML, which formats the content automatically for various users and devices. Different stylesheets, each conforming to the XSL standard, can be used to provide multiple views of the same XML data for different users. This enables a customized interface to be presented to each user based on his or her preferences. XML supports Unicode, which enables the display and exchange of content in most of the world's languages, supporting even greater customization and globalization. Through the use of XML and XSL, information can be displayed the way the information user wants it, making the content richer, easier to use, and more useful.

XML will also be increasingly used to target this media-independent content to new devices, thus creating new delivery channels for information. Wireless Markup Language and VoiceXML are examples of XML-based languages that enable the delivery of information to a much wider range of devices. XML will be used in an ever-increasing range of devices, including Web browsers, set-top boxes for televisions, personal digital assistants such as the Palm™, RIM Wireless Handheld™, and iPAQ™ Pocket PC, digital cell phones, and pagers. XML will do for data and documents what Java has done for programs — make them both platform independent and vendor independent.

Data Exchange. Any individual, group of individuals, or enterprise that wants to share data in a consistent way can use XML. XML enables automated data exchange without requiring substantial custom programming. It is far more efficient than the e-mail, fax, phone, and customized interface methods that most enterprises use today to work with their customers, suppliers, and business partners. XML will simplify data exchange within and between enterprises by eliminating these costly, cumbersome, and error-prone methods.
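A sketch of the difference, with invented order fields and values: the same record as a positional comma-delimited line and as self-describing XML:

```python
# The same order line twice: positionally encoded, then self-describing.
# Field names and values are invented for illustration.
import xml.etree.ElementTree as ET

order = {"sku": "A-100", "qty": "12", "price": "9.95"}

flat = ",".join(order.values())               # meaning depends on column order
record = ET.Element("order")
for name, value in order.items():
    ET.SubElement(record, name).text = value  # meaning travels with the data
xml_record = ET.tostring(record, encoding="unicode")
```

A trading partner receiving the flat record must know, out of band, that the second field is a quantity; the XML record says so itself.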
Within an enterprise, XML can be used to exchange data between individuals, departments, and the applications supporting them. It can also be used to derive greater value from legacy applications and data sources by making the data in those applications easier to access, share, and exchange. This is especially important in facilitating data warehousing, which requires access to the large volumes of legacy data in enterprises today; electronic commerce applications, which must work with existing applications and their data formats; and Web access to legacy data.

One of the greatest areas of potential benefit for XML is as a means for enterprises with different information systems to communicate with one another. XML can facilitate the exchange of data across enterprise boundaries to support business-to-business (B2B) communications. XML-based EDI (electronic data interchange) standards are extending the use of EDI to smaller enterprises because of XML's greater flexibility and ease of implementation; XML is complementary to EDI because EDI data can travel inside XML. XML is transforming data exchange within industries and within supply chains through the definition of XML-based, platform-independent protocols for the exchange of data.

XML is a flexible, low-cost, and common container for data being passed between systems, making it easier to transmit and share data across the Web. XML provides a richer, more flexible format than the more cumbersome and error-prone file formats currently in use, such as fixed-length messages, comma-delimited files, and other flat file formats. XML opens up enterprise data so that it can be more easily shared with customers, suppliers, and business partners, thereby enabling higher quality, more timely, and more efficient interactions.

Web Searching. The use of XML also greatly enhances the ability to search for information in both documents and data.
XML does this because documents and data that include metadata (data about data) are more easily searched; the metadata can be used to pinpoint the information required. For example, metadata can be used as keywords for improved searching over the full-text searches prevalent today. As a result, XML makes the retrieval of documents and data much faster and more accurate than it is now. The need for better searching capabilities is especially evident to those searching for information among the masses of documents and data inside the enterprise and the overwhelming volume of documents and data available on the Internet today.

XML makes it easier to search and combine information from both documents and data, whether these originate from within the enterprise or from the World Wide Web. As a result, the enterprise's structured data from its databases can be brought together with its unstructured data in the form of documents, and be linked to external data and documents to yield new insights and efficiencies. This will become even more important as business users demand seamless access to all relevant information on topics of interest to them and as the volume of data and documents continues to grow at a rapid rate.

XML STANDARDS

The good thing about standards is that there are so many to choose from, and XML standards are no exception. XML standards include technology standards and business vocabulary standards.

Technology Standards

The XML 1.0 specification provides the technology foundation for many technology standards. XML and XHTML define structured documents, and DTDs and XML Schemas establish the rules governing those documents. An XML Schema provides greater capabilities than a DTD for defining the structure, content, and semantics of XML documents by adding data types to XML data fields. XML Schema will become an essential part of the way enterprises exchange data by enabling cross-enterprise XML document exchange and verification.
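The kinds of checks a schema automates can be sketched by hand. This is a standard-library-only illustration (Python's standard library does not itself validate against an XML Schema, and the order vocabulary here is invented):

```python
# Hand-written versions of checks an XML Schema would declare once:
# required elements present, values typed and within expected bounds.
import xml.etree.ElementTree as ET

def verify_order(text):
    doc = ET.fromstring(text)       # raises if the XML is not well-formed
    errors = []
    if doc.findtext("sku") is None:
        errors.append("missing data: sku")
    qty = doc.findtext("qty", default="")
    if not qty.isdigit() or not 1 <= int(qty) <= 999:
        errors.append("qty outside expected values")
    return errors

clean = verify_order("<order><sku>A-100</sku><qty>12</qty></order>")
faulty = verify_order("<order><qty>0</qty></order>")
```

An XML Schema lets trading partners publish such rules once and have any conforming parser enforce them, instead of every application re-coding the checks.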
Using an XML Schema will allow enterprises to verify data by adding checks, such as ensuring that XML files are not missing data, that the data is properly formatted, and that the data conforms to the expected values. Other technology standards are being developed and deployed to extend the value of XML. These include standards for (1) transformations — XSL Transformations (XSLT) for converting XML data from one XML structure to another or for converting XML to HTML; (2) stylesheets — eXtensible Stylesheet Language (XSL) is a pure formatting language that describes how a document should be displayed or printed; and (3) programming — the Document Object Model (DOM) is a standard set of function calls for
manipulating XML and HTML files from a programming language. The XML "family of technologies" is continuously expanding.

Business Standards

The other area of XML standards is vocabulary standards. XML is a metalanguage (a language for describing other languages) used to define other domain- or industry-specific languages that describe data. "XML allows groups of people or organizations to create their own customized markup applications for exchanging information in their domain."4 These XML vocabulary standards are being created at a rapid pace as vendors and users try to establish standards for their own enterprises, for industries, or as general-purpose standards.

XML allows industries, supply chains, and any other group that needs to work together to define protocols or vocabularies for the exchange of data. They do this by working together to create common DTDs or XML Schemas. These result in exchangeable business documents, such as purchase orders, invoices, and advanced shipping notices, which taken together form a language of their own. XML is undergoing rapid innovation and experiencing widespread adoption, resulting in many horizontal (across many industries) and vertical (within an industry) vocabularies, and in the development of a common messaging infrastructure. A horizontal vocabulary minimizes the need to interact with multiple vocabularies that are each focused on specific industries or domains and that cannot easily talk to each other.

Common XML vocabularies for conducting business-to-business commerce include OAGIS, ebXML, and XML/EDI. The Open Applications Group's Integration Specification (OAGIS) defines over 200 XML documents to provide a broad set of cross-industry business objects that are being used for application-to-application data exchange internally and business-to-business data exchange externally.
The Organization for the Advancement of Structured Information Standards (OASIS) and the United Nations Centre for Trade Facilitation and Electronic Business (UN/CEFACT) have defined an E-business XML (ebXML) business content standard for exchanging business data across a wide range of industries. EDI has been used to exchange business information for the past couple of decades; the XML/EDI Group (http://www.xmledigroup.org/) has created a standard that enables EDI documents to be the payload within an XML document.

There are also many vertical vocabularies for a wide range of industries (and, in many cases, several within an industry). The finance, technology, and healthcare industries are prominent areas where much XML vocabulary development is taking place. In finance, the Financial Products Markup Language (FpML) and the Financial Information eXchange Markup Language
PROVIDING APPLICATION SOLUTIONS (FIXML) are being established as standards for exchanging contracts and information about transactions. In technology, the RosettaNet™ project is creating a standardized vocabulary and defining an industry framework for how XML documents and data are assembled and exchanged. In healthcare, the Health Level 7 (HL7®) Committee is creating a standard document format, based on XML, for exchanging patient information between healthcare organizations such as hospitals, labs, and healthcare practitioners. These are but a few of the hundreds of vertical standards being created across a wide range of industries. More details on these various industry vocabularies can be found at xml.org (http://www.xml.org). In addition to its horizontal vocabulary for business content, ebXML also defines a common messaging infrastructure. The ebXML Messaging Services Specification is used by many horizontal (e.g., OAGIS) and vertical (e.g., RosettaNet) business vocabulary standards. The ebXML messaging infrastructure includes a message transport layer (for moving XML data between trading partners), a registry/repository (which contains business process sequences for message exchanges, common data objects, trading partner agreements, and company profiles), and security (for authenticating the other parties). More information on ebXML can be found at www.ebxml.org.

CONCLUSION

XML is one of the key technology standards in a modern information systems architecture for information interchange. The flexibility of XML enables it to serve as a container for a very wide range of content, resulting in many business and technical benefits because of its simplicity, interoperability, commonality, and broad accessibility. XML enables different enterprises using different applications, technologies, and terminologies to share and exchange data and documents. XML has been widely applied to electronic publishing, data exchange, and Web searching.
In electronic publishing, XML facilitates the customization of data and documents for individual needs, broadens the range of information presentation and distribution options, and enables the global distribution of information. In data exchange, XML is emerging as a key integrating mechanism within the enterprise and between the enterprise and its customers, suppliers, and business partners. In Web searching, XML greatly enhances the ability to search for information in documents and data, and to combine the results. As Nelson Jackson has said, “I do not believe you can do today’s job with yesterday’s methods and be in business tomorrow.” Yesterday’s methods for handling business documents and data are no longer meeting today’s business needs, let alone positioning the enterprise to meet the challenges
of tomorrow. XML has emerged as the new method for handling business documents and data in a way that increases the reach, range, depth, and speed of information interchange within and between enterprises.

References

1. Extensible Markup Language (XML) 1.0 (Second Edition), W3C Recommendation, 6 October 2000, p. 4. http://www.w3.org.
2. Bos, Bert, XML in 10 Points, an essay covering the basics of XML, p. 1. http://www.w3.org.
3. Extensible Markup Language (XML) Activity Statement, p. 2. http://www.w3.org.
4. The XML FAQ, p. 9. http://www.ucc.ie.
Chapter 36
Software Agent Orientation: A New Paradigm Roberto Vinaja Sumit Sircar
Software agent technology, which started as a new development in the field of artificial intelligence, has evolved to become a versatile technology used in numerous areas. The goal of this chapter is to review the general characteristics of software agents and describe some of the most important applications of this technology. One of the earliest and most accepted definitions is the one by Wooldridge1 in the February 1996 issue of The Knowledge Engineering Review. An agent is “an autonomous, self-contained, reactive, proactive computer system, typically with a central locus of control, that is able to communicate with other agents via some Agent Communication Language.” However, this is just one among literally dozens of definitions. There is no common definition of an agent, even after almost a decade of continuous development in this area. In fact, there may never be a common definition, because the real power of agents does not reside in any specific application; agents are not specific applications but a paradigm for software development. The idea that started as a breakthrough in artificial intelligence has become a new paradigm applied in a wide range of systems and applications. There is also an urgent need for a standard set of agent protocols to facilitate the interaction of agents across platforms. The “discovery” of the structured programming concept revolutionized the development of applications. More recently, the object-oriented paradigm provided a higher abstraction level and a radically different approach to application development. Concepts such as encapsulation and polymorphism transformed the conceptualization and development of systems. 0-8493-1595-6/03/$0.00+$1.50 © 2003 by CRC Press LLC
The agent-orientation approach is perhaps the next paradigm. Although many of the concepts and principles associated with agents (such as mobility, intelligence, and purposeful planning) are not new, the overall agent approach provides a totally new perspective. According to Diep and Massotte, with the Ecole des Mines d’Ales, France, agents are the next step in the evolution of programming languages and an innovative paradigm.2 In his book Future Edge, Joel Barker says that every paradigm will uncover problems it cannot solve, and these unsolvable problems trigger a paradigm shift.3 How do we identify a paradigm shift? According to Barker, a paradigm shift occurs when a number of problems that cannot be solved using the current paradigm are identified. Object orientation has been very successful in solving a wide variety of problems. However, there are some development problems for which the object-oriented approach has not been sufficiently flexible. This is the reason software engineers and developers have turned to a new approach. Agents have properties such as intelligence and autonomy, which objects do not have. Agents can negotiate, can be mobile, and still possess many of the characteristics of objects. The agent paradigm combines concepts from artificial intelligence, expert systems, and object orientation. Knowledge-based systems, like expert systems, have been applied in many different areas. Similarly, the agent paradigm is becoming pervasive. Agents have several similarities with objects and go one step further. Like objects, agents are able to communicate via messages. They have data and methods that act on that data, but also have beliefs, commitments, and goals.4 Agents can encapsulate rules and planning. They are adaptive and, unlike objects, they have a certain level of autonomy.
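The contrast between objects and agents can be made concrete with a toy sketch. All class and attribute names below are invented for illustration: the plain object always executes the method called on it, while the agent checks an incoming request against its own beliefs and goals before acting.

```python
# A plain object obeys every method call; the agent filters requests
# against its beliefs and goals first -- a toy illustration of
# autonomy, not a full agent framework.
class PrinterObject:
    def print_job(self, job):
        return f"printed {job}"          # always obeys the caller

class PrinterAgent:
    def __init__(self):
        self.beliefs = {"toner_low": True}   # the agent's view of the world
        self.goals = {"avoid_waste"}         # what it is trying to achieve

    def receive(self, request):
        # Autonomy: the agent may decline a request that conflicts
        # with its goals, given its current beliefs.
        if self.beliefs["toner_low"] and "avoid_waste" in self.goals:
            return "deferred: toner low"
        return f"printed {request}"
```

The object's behavior is fixed by its methods; the agent's behavior additionally depends on state (beliefs) that can change as it observes its environment.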
According to Wagner, agent orientation is a powerful new paradigm in computing.5 He states that agent-oriented concepts and techniques could well be the foundations for the next generation of mainstream information systems. Wagner further affirms that the agent concept might become a new fundamental concept for dealing with complex artificial phenomena, in the same manner as the concept of entity in data modeling and object in object orientation. Other authors such as O’Malley and DeLoach6 have described the advantages of the agent-oriented paradigm in some detail; Wooldridge7 has proposed an agent-based methodology for analysis and design; and Debenham and Henderson-Sellers,8 with the University of Technology, Sydney, Australia, have proposed a full life-cycle methodology for agent-oriented systems.

BACKGROUND

In the past few years, there has been a revolution in the tools for organizational decision making. The explosive growth of the Internet and the World
Wide Web has encouraged the development and spread of technologies based on database models and artificial intelligence. Organizational information structures have been dramatically reshaped by these new technologies. The old picture of the organizational decision-making environment with tools such as relational databases, querying tools, e-mail, decision support systems, and expert systems must be reshaped to incorporate these new technologies. Some examples follow:
• The browser has become the universal interface for accessing all kinds of information resources. Users can access not just Web pages, but also multimedia files, local files, and streaming audio and video using a program such as Internet Explorer or Netscape Navigator. Microsoft has tried to integrate Internet Explorer and the Windows operating system. Netscape has tried to embed operating system capabilities into its Navigator program. The ubiquitous use of the browser as the common interface for multiple resources has changed the way users access systems.
• The proliferation of data warehouses. Organizations are consolidating information stored in multiple databases in multidimensional repositories called data warehouses. Firms are strategically using data mining tools to enable better decision making. However, some data warehouses are so huge that it would be almost impossible for a user to obtain the relevant information without the aid of some “intelligent” technology.
• The development and growth of the Internet and the World Wide Web. A decade ago, the Internet was the realm of academics and scientists;9 now it is the public network of networks. The volume of information is growing at an exponential rate. Users need navigation and information retrieval tools to be able to locate relevant information and avoid information overload.10
• The establishment of Internet-based interorganizational systems.
Many organizations are capitalizing on the use of the Internet as a backbone for the creation of extranets. Groupware-based tools such as Lotus Notes facilitate sharing data and workflow. Considering these dramatic changes in the information systems landscape, the need for a new paradigm for development should be apparent. Agent orientation is perhaps the most viable candidate for this urgent need.

ATTRIBUTES OF AGENTS

It is quite common in artificial intelligence (AI) to characterize an agent using human attributes, such as knowledge, belief, intention, and obligation. Some AI researchers have gone further and considered emotional agents.11 Another way of giving agents human-like attributes is to represent
them visually using techniques such as a cartoon-like graphical icon or an animated face.12 Examples of agents with a graphical interface are the Microsoft help agents. Research into this matter has shown that although agents are pieces of software code, people like to deal with them as if they were dealing with other people. Agents have some special properties, but miscommunication has distorted and exaggerated them, causing unrealistic expectations.

Intelligence

What exactly makes an agent “intelligent” is difficult to define. It has been the subject of much discussion in the AI field, and a clear answer has not yet been found. Allen Newell13 defines intelligence as “the degree to which a system approximates a knowledge-level system.” Intelligence is defined as the ability to bring all the knowledge a system has at its disposal to bear in the solution of a problem (which is synonymous with goal achievement). A practical definition that has been used for artificial intelligence is “attempting to build artificial systems that will perform better on tasks that humans currently do better.” Thus, a task such as adding numbers is not artificial intelligence, because computers easily do this task better than humans do. However, voice recognition is artificial intelligence, because it has been very difficult to get computers to perform even the most basic recognition tasks. Obviously, these definitions are not the only acceptable ones, but they do capture the nature of AI.
Autonomy

Autonomy refers to the principle that agents can operate on their own without the need for human guidance.7 Self-regulated agents are goal-governed agents that, given a certain goal, are able to achieve it by themselves.14

Cooperation

To cooperate, agents need to possess a social ability, that is, the ability to interact with other agents and possibly humans via some communication language.7,15 Agents may be complex or simple, and either work alone or in harmony, creating a multi-agent environment. Each agent or group of agents has knowledge about itself and about other agents.16 The agent should have a “language” or some way to communicate with the human user and with other agents as well.17 The nature of agent-to-agent communication is certainly different from the nature of human-to-agent communication, and this difference calls for different approaches.18
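Agent-to-agent communication of this kind can be sketched as message passing with explicit performatives, in the spirit of an agent communication language. The vocabulary below ("ask", "tell", "sorry") and the agent names are invented for illustration; they are not the KQML or FIPA-ACL standards.

```python
# Toy illustration of agent-to-agent messaging: each message carries a
# performative ("ask", "tell") plus content. The vocabulary is invented
# for this sketch, not taken from the KQML or FIPA-ACL standards.
class InfoAgent:
    def __init__(self, name, knowledge):
        self.name = name
        self.knowledge = knowledge

    def handle(self, message):
        # Answer "ask" messages from the agent's own knowledge base.
        if message["performative"] == "ask":
            answer = self.knowledge.get(message["content"])
            return {"performative": "tell",
                    "sender": self.name,
                    "content": answer}
        # Anything the agent does not understand gets a refusal.
        return {"performative": "sorry", "sender": self.name}

weather = InfoAgent("weather", {"sky": "overcast"})
reply = weather.handle({"performative": "ask",
                        "sender": "planner",
                        "content": "sky"})
```

The key point is that the two agents share only a message format, not code: any agent that speaks the same performatives can participate in the conversation.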
Openness

An open system is one that relates, interacts, and communicates with other systems.19 Software agents, as a special class of open systems, have unique properties of their own, but they also share properties common to all open systems. These include the exchange of information with the environment and feedback.20 An agent should have the means to deal with its software environment and interact with the world (especially with other agents).

Bounded Rationality

Herbert Simon21 proposed the notion of bounded rationality to refer to the limitations in the individual’s inherent capability of comprehending and comparing more than a few alternatives at a time. Human decision making is not optimal, and only in some cases locally optimal. Just as humans have limitations, so do agents. The bounded rationality concept can also be applied to describe the behavior of an agent that is as nearly optimal with respect to its goals as its resources allow. Because of limited resources, full rationality may not always be possible even when an agent has the general capability to act. For example, an agent for price comparison might not be able to search every single online store on the World Wide Web and locate the true lowest price for a product. Even if the agent had the capability to do so, there is also a time limitation; users might not be willing to wait an unreasonable amount of time. Nevertheless, users can be satisfied with a solution that is nearly optimal and sufficient, rather than the true optimum. Most agents that use heuristic techniques for problem solving can reach acceptable (nonoptimal) solutions in a reasonable amount of time.
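The price-comparison example can be sketched directly: an agent given a fixed query budget returns the best price it has seen rather than the provably lowest price. The store data and the budget parameter below are invented for illustration.

```python
# A price-comparison agent with bounded rationality: it queries at most
# `budget` stores (a stand-in for limited time and network resources)
# and returns the best price seen so far -- "good enough" rather than
# provably optimal.
def best_price(stores, product, budget):
    best = None
    for store in stores[:budget]:      # resource bound: stop early
        price = store.get(product)
        if price is not None and (best is None or price < best):
            best = price
    return best

stores = [{"widget": 9.99}, {"widget": 8.49},
          {"widget": 7.25}, {"widget": 6.00}]

# With a budget of 3 queries, the agent settles for 7.25 and never
# sees the 6.00 store -- a satisfactory, not optimal, answer.
cheap_enough = best_price(stores, "widget", budget=3)
```

Raising the budget moves the answer toward the true optimum at the cost of more searching, which is exactly the trade-off bounded rationality describes.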
Purposiveness

The most distinctive characteristic of the behavior of higher organisms is their goal-directedness, that is, their apparent purposiveness.22 Purposeful behavior is that which is directed toward the attainment of a goal or final state.23 An agent is something that satisfies a goal or set of goals.24 That is, a degree of reasoning is applied to the data that is available to guide the gathering of extra information to achieve the goals that have been set. Purposeful behavior pertains to systems that can decide how they are going to behave. Intentional attitudes, usually attributed to humans, are also a characteristic of agents.17

Human Interaction and Anthropomorphism

An agent is a program that interacts with and assists an end user. There are many viewpoints concerning the form of interaction between an agent and
a human. The rapid development of networks and computer-based information processing now makes it possible for large quantities of personal information to be acquired, exchanged, stored, and matched very quickly. More and more activities are becoming computer based and, as computers spread, new users are beginning to take advantage of them. The volume of information now available on the “information superhighway” is overwhelming, and these users need help to handle this information overload. In the past, the computer user stereotype was a sophisticated, professional, technically oriented person. Nowadays, however, the typical user is less sophisticated, and user characteristics encompass different educational levels, different ages, and diverse cultural backgrounds. This requires a new paradigm of human–computer interaction that can handle this inherent complexity. The present paradigm is one in which the user directly manipulates the interface. In the past, the user requested an action by issuing a command; an important improvement has been achieved with the implementation of graphical user interfaces (GUIs) such as Microsoft Windows. GUIs are more intuitive and user-friendly; nevertheless, the user still has to initiate manipulation by clicking on a graphic or a hyperlink. Another important notion is that the agent should have external indicators of its internal state. The user needs to visualize the response of the agent based on its external features. Should agents use facial expressions and other means of personification? The use of facial expressions or gestures can indicate the state of the agent. This is called anthropomorphism. There is much debate over anthropomorphism in agent user interface design (utilizing a human-like character interface).
Some designers think that providing an interface that gives the computer a more human appearance can help a computer novice feel comfortable with the computer. On the other hand, some critics say that an anthropomorphic interface may be deceptive and misleading. The use of human-like agents for an interface was pioneered by Apple Computer. The promotional video “The Knowledge Navigator,” produced under former chairman John Sculley, features an agent called Phil. Phil plays the roles of resource manager, tutor, and knowledge retrieval agent. However, the future vision depicted in this video is still utopian and impossible to achieve with existing technology.
Whenever one interacts with some other entity, whether that entity is human or electronic, the interaction goes better if one’s expectations match reality. When a problem needs to be solved, the user may not trust the agent enough to delegate important tasks. If the agent interface is sloppy, the user may perceive the agent as incapable of performing at a satisfactory level, and may be reluctant to delegate a task. Going to the other extreme is also dangerous: an optimum balance should be achieved between the level of autonomy and the degree of user control. Given the very nature of delegation, expecting perfect performance, especially in a world where goals may be constantly changing, is likely to lead to disappointment. Users’ expectations are very important in making the agent useful. The problem arises from the fact that people tend to assign human attributes to a system personified as a human character. Contributing to the same problem is the tendency of researchers and marketers to advertise their products as human characters just for the sake of sales.

Adaptation

Adaptation is defined as the ability to react to the environment in a way that is favorable, in some way, to the continued operation of the system.26 Autonomous agents are software components with some ability to understand their environment and react to it without detailed instructions. The agent must have some mechanism to perceive signals from its environment. The environment is constantly changing and is modified by user interaction. The agent should be able to adapt its behavior and continue toward the desired goal. It should be capable of constantly improving its skills, adapting to changes in the world, and learning new information.
Furthermore, the agent should be able to adapt to unexpected situations in the environment, and be able to recover and perform an “adequate” response.27

Learning

Interface agents are software programs that assist a user in performing certain specific tasks. These agents can learn by interacting with the user or with other agents. The agent should be able to learn from its experience. Because people do not all do the same tasks, and even those who share the same task do it in different ways, an agent must be trained in the task and how to do it.28 Ideally, the structure of the agent should incorporate certain components of learning and memory. This is related to heuristics and cybernetic behavior. Jon Cunnyngham29 has developed a hierarchy for understanding intelligence, the learning hierarchy, in which at each step something is added to the learning mechanisms already at hand. The hierarchy has four levels of learning:
1. Learning by discovery
2. Learning by seeing samples
3. Learning by being told
4. Learning by being programmed
Patti Maes12 has addressed the problem of agent training based on the learning approach of a real human assistant. In addition, she has proposed a learning model highly compatible with Cunnyngham’s hierarchy. When a personal assistant is hired, he is not familiar with the habits and preferences of his employer, and he cannot help much. As the assistant learns from observation and practices repetitively, he becomes knowledgeable about the procedures and methods used in the office. The assistant can learn in several ways: by observation, by receiving instructions from the boss, by learning from more experienced assistants, and also by trial and error. As the assistant gains skills, the boss can delegate more and more tasks to him. The agent has four learning sources: imitation, feedback, examples, and agent interaction.
1. Observing and imitating the user. The agent can learn by observing repetitive behavior of the user over long periods of time. The agent monitors user activity, detects any recurrent pattern, and incorporates this action as a rule in the knowledge base.
2. Direct and indirect user feedback. The user rates the agent’s behavior or the agent’s suggestions. The agent then modifies the weights assigned to different behaviors and corrects its performance on the next attempt. The Web agent Firefly will choose certain Web sites that may be of interest based on one’s personal preferences. The agent asks the user to rate each of the suggestions, and these ratings serve as an explicit feedback signal that modifies the internal weights of the agent.
3. Receiving explicit instructions from the user. The user can train the agent by giving it hypothetical examples of events and situations, and telling the agent what to do in those cases. The interface agent records the actions, tracks relationships among objects, and changes its example base to incorporate the example that it is shown.
Letizia,30 a Web browser-based agent, collects information about the user’s behavior and tries to anticipate additional sites that might be of interest. 4. Advice from other agents. According to Maes, if an agent does not itself know what action is appropriate in a certain situation, it can present the situation to other agents and ask “what action they recommend for that situation.” For example, if one person in the organization is an expert in the use of a particular piece of software, then
other users can instruct their agents to accept advice about that software from the agent of that expert user.

Mobility

The mobile agent concept encompasses three areas: artificial intelligence, networking, and operating systems.31 Mobile agents are somewhat more efficient models32 and consume fewer network resources than traditional code, because the agent moves the computation to the data rather than the data to the computation. Java applets, for example, can be accessed from any terminal with Internet access; to execute an applet, only a Java-enabled browser is required, such as the already widely used browser programs. Some examples of mobile agents are D’Agents,33 designed at Dartmouth, and IBM’s Aglets.

Applications of Agents

Agent technology has tremendous potential. Most agents are still in the prototype stage, although there is a growing number of commercial applications. Most agent implementations are part of another application rather than stand-alone agents. This section focuses on how agents support other applications and technologies.

E-Mail

Several agent applications have been developed for e-mail filtering and routing. An agent can help managers classify incoming mail based on the user’s specifications. For example, a manager could specify that all incoming mail with the word “Confirmation” be stored in a folder with lower priority. Also, the agent can learn that a user assigns a higher priority to mail personally addressed to him than to mail received from a subscription list. After the user specifies a set of rules, the agent can use those rules to forward, send, or file the mail. It is true that e-mail programs already use filtering rules for handling and sorting incoming mail. However, an agent can provide additional support.
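The kind of rule-based routing an e-mail agent applies might be sketched as follows. The keywords, actions, and weights here are invented for illustration; the per-rule weights stand in for the feedback-adjusted values discussed under learning.

```python
# Sketch of rules an e-mail agent might be given or might learn:
# route a message by keyword match, preferring the rule with the
# highest weight (which user feedback could raise or lower over time).
# Rule contents and action names are invented for illustration.
rules = [
    {"keyword": "request information",
     "action": "forward_to_assistant", "weight": 0.9},
    {"keyword": "confirmation",
     "action": "file_low_priority", "weight": 0.6},
]

def route(message_body, rules, default="inbox"):
    body = message_body.lower()
    matches = [r for r in rules if r["keyword"] in body]
    if not matches:
        return default                       # no rule fired
    # Pick the highest-weight matching rule.
    return max(matches, key=lambda r: r["weight"])["action"]

action = route("Please send a confirmation of my order", rules)
```

A learning agent would go one step further than this sketch: after observing which routings the user keeps or undoes, it would adjust the weights or propose new rules.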
An artificially intelligent e-mail agent, for example, might know that all requests for information are handled by an assistant, and that a message containing the words “request information” is asking for a certain information envelope. As a result, the agent will deduce that it should forward a copy of the message to the assistant. An electronic mail agent developed by Patti Maes12 is an excellent example of a stationary, limited-scope agent, operating only on the user’s workstation and only upon the incoming mail queue for that single user. This agent can continuously watch a person’s actions and automate any regular patterns it detects. An e-mail agent could learn by observation that the user always forwards a copy of a message containing the words “request
for information” to an assistant, and might then offer to do so automatically.

Data Warehousing

Data warehouses have made available increasing volumes of data (some data warehouses reach the terabyte size) that need to be handled in an intuitive and innovative way. While transaction-oriented databases capture information about the daily operations of an organization, the data warehouse is a time-independent, relevant snapshot of the data. Although several tools such as online analytic processing (OLAP) help managers analyze the information, there is so much data that its sheer volume can actually reduce, rather than enhance, their decision-making capabilities. Recently, agents have become a critical component of many OLAP and relational online analytic processing (ROLAP) products. Software agents can be used to search for changes in the data and identify patterns, all of which can be brought to the attention of the executive. Users can perform ad hoc queries and generate multiple views of the data.

Internet-Based Applications

As the World Wide Web grows in scale and complexity, it will be increasingly difficult for end users to track information relevant to their interests. The number of Internet users is growing exponentially. In the early years of the Internet, most of its users were researchers. Presently, most new users are computer novices, only partially familiar with the possibilities and techniques of the Internet. Another important trend is that more and more companies, governments, and nonprofit organizations are offering services and information on the Internet. However, several factors have hindered the use of the Internet for organizational decision making, including:
• The information on the Internet is located on many servers all over the world, and it is offered in different formats.
• The variety of services provided in the marketspace is constantly growing.
• The reliability of Web servers is unpredictable. The service speed of a Web server depends on the number of requests or the nature of the request.
• Information is highly volatile; Web pages are dynamic and constantly changing. Information that is accessible one day may move or vanish the next day.
These factors make it difficult for a single person to collect, filter, and integrate information for decision making. Furthermore, traditional information systems lack the ability to address this challenge. Internet-based agents can help in this regard because they are excellent tools for information retrieval.

Information Retrieval

Search engines feature indices that are automatically compiled by computer programs, such as robots and spiders,34 which go out over the Internet to discover and collect Internet resources. However, search engines might not be optimal in every single case. The exponential growth in the number of Web pages is impacting the performance of search engines based on indexes and subject hierarchies. Other problems derived from search engines’ blind indexing are inefficiency and the inability to use natural language. A superior solution is the combination of search engines and information retrieval agents. Agents may interact with other agents when conducting a search, which will improve query performance and increase precision and recall. A user agent can perform a search on behalf of the user and operate continuously. This will save valuable time for the user and increase the efficient use of computer resources. Examples of search agents include:
• The Internet Softbot, developed at the University of Washington under the direction of Oren Etzioni,35 is one of the first agents allowing adaptation to changes in the environment. The Softbot is based on the following main objectives:
— Goal oriented: the user specifies what to find and the agent decides how and when to find it.
— Charitable: the Softbot tries to understand the request as a hint.
— Balanced: the Softbot considers the trade-off between searching on its own or getting more specific information from the user.
— Integrated: the program serves as a common interface to most Internet services.
• MetaCrawler36 is a software robot, also developed at the University of Washington, that aggregates Web search services for users. MetaCrawler presents users with a single unified interface.
Users enter queries, and MetaCrawler forwards those queries in parallel to the search services. MetaCrawler then collates the results and ranks them into a single list, returning a consolidated report to the user that integrates information from multiple search services.

Electronic Commerce

Agents are a strategic tool for electronic commerce because of their negotiation and mobility characteristics. For example, Intershop Research has developed agents customized for electronic commerce transactions.37 Agents are sent to E-marketplaces on behalf of buyers and
sellers. These agents have a certain level of autonomous decision-making and mobility capabilities. They can proactively monitor trading opportunities, search for trading partners and products, and make trading decisions to fulfill users’ objectives and preferences based on the users’ trading rules and constraints. Mobile agents can move to the E-marketplace through the Internet and can be initiated from different computer platforms and mobile devices such as mobile phones and PDAs. The agents can interact with a number of other participants in an E-marketplace and visit other E-marketplaces, if required. Agents can be used to provide customer service and product information in online markets. Well-known companies such as Procter & Gamble and Coca-Cola have implemented software agents for customer service at their sites. The agent attempts to answer customer questions by identifying a keyword in the sentence and matching the keyword against a database of possible answers. If the agent is not successful in identifying a keyword, it will ask the user to restate the question. If the second attempt fails, it will provide the customer with a list of frequently asked questions (FAQs). There are also agent applications for business-to-business electronic commerce transactions. They provide more sophisticated negotiation protocols, can manage bidding systems, and handle RFQs (Requests for Quotation) or RFPs (Requests for Proposal). For example, SmartProcurement (developed by the National Institute of Standards and Technology and Enterprise Integration Technologies) uses agents to facilitate procurement.38 The system is based on CommerceNet technology. It includes a series of databases with supplier and transaction information. A purchasing agent, representing the buyer organization, can post RFQs to the database. Agents can also represent authorized suppliers.
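The customer-service behavior described above (keyword match, then a request to restate, then the FAQ list) can be sketched as follows. The answer database and FAQ entries are invented for illustration, not taken from any vendor's system.

```python
# Sketch of a keyword-matching customer-service agent: match a word in
# the question against an answer database; on the first miss, ask the
# user to restate; on the second miss, fall back to the FAQ list.
# The answers and topics are invented for illustration.
ANSWERS = {
    "shipping": "Orders ship within 2 business days.",
    "returns": "Returns are accepted within 30 days.",
}
FAQ = ["shipping", "returns", "warranty"]

def answer(question, attempt=1):
    for word in question.lower().split():
        if word in ANSWERS:
            return ANSWERS[word]         # keyword recognized
    if attempt == 1:
        return "Could you restate your question?"
    return "See our FAQ: " + ", ".join(FAQ)
```

Real deployments would use far richer matching than whole-word lookup, but the two-strike fallback structure is the same.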
Supplier agents can access the RFQ database, review the details, and decide whether or not to post a bid. The purchasing agent reviews the proposals and selects one proposal from among the alternatives. The supplier agent is then notified of the award. Agents also have potential applications for online auctions. Researchers at MIT have developed several prototype systems for online auctions. Kasbah is an online system for consumer-to-consumer electronic commerce based on a multi-agent platform. Users can create an agent, provide the agent with general preferences, and dispatch the agent to negotiate in an electronic marketplace. Tête-à-Tête, also developed at MIT,12 is a negotiation system for business-to-consumer electronic commerce transactions. Merchants and customers can negotiate across multiple terms, including price, warranties, delivery times, service contracts, and other merchant value-added services. The Electric Power Research Institute has also
Software Agent Orientation: A New Paradigm

developed agent-based E-commerce applications for the electric power industry.39

Intranet Applications

An intranet is a private internal network based on Internet protocols. Many organizations are implementing intranets for enhanced intraorganizational communications. Among the major benefits of an intranet are enhanced collaboration and improved information dissemination; employees are empowered because they have access to the relevant information for decision making. Lotus Notes and other intranet software facilitate information sharing and group work. Beyond the convenient distribution of basic internal documents, an intranet can improve communication and coordination among employees, and groupware can facilitate group work and collaboration. Professional services and meetings can be scheduled through engagement management software and calendars on an intranet, thus providing input from all parties and communicating current status to the individuals involved. Because large amounts of a company's information are made available to its executives, those executives must be able to find relevant information on the intranet. It is in this area that intelligent support for information search plays an important role. For example, applications of agent technology in an intranet environment might include the automated negotiation and scheduling of meetings based on personal schedules and available resources.

Monitoring

The routine of checking the same thing over and over again is tedious and time-consuming. By employing agents for this task, automated surveillance ensures that each potential situation is checked any time the data changes, freeing decision makers to analyze and act on the information. This kind of agent is essentially a small software program written to perform background tasks.
It typically monitors networks, databases, and other information sources and flags data it has been instructed to find. In the past, information systems delivered limited information about critical issues, such as competitors. However, the expansion of the Internet and the Web makes it easier to deliver business intelligence to the executive. Monitoring agents can help companies gather information about the industry environment and competitors. The collected information can be used for strategic planning purposes and for providing business intelligence.
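A monitoring agent of this kind, a small background program that scans information sources and flags data it has been told to find, can be sketched as a rule-driven filter. The news feed and watch rules below are hypothetical.

```python
# Rule-driven sketch of a monitoring agent: it scans records from an
# information source and flags any record that trips a watch rule,
# freeing the decision maker from rechecking the source by hand.
# The feed and the rules below are illustrative only.

def monitor(records, rules):
    """Return (rule_name, record) pairs for every record a rule flags."""
    flagged = []
    for record in records:
        for name, rule in rules.items():
            if rule(record):
                flagged.append((name, record))
    return flagged

feed = [
    {"source": "newswire", "text": "Competitor X cuts prices by 20 percent"},
    {"source": "newswire", "text": "Local weather: sunny all week"},
]
rules = {
    "competitor-activity": lambda r: "competitor" in r["text"].lower(),
    "price-move": lambda r: "price" in r["text"].lower(),
}

alerts = monitor(feed, rules)
```

Here the first news item trips both watch rules and is flagged twice, once per rule, while the weather item passes through unflagged.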
Push Technology and Agents

Push technology delivers a constant stream of information to the user's desktop without requiring the user to search for it. Push providers use this technology to send information for viewing by their customers. A related function for agents is filtering information before it hits the desktop, so that users receive only the data they need or get warnings of exceptional data. Users may design their own agents and let them search the Web for specific information; the user can leave the agent working overnight and find the results the next morning. Another function is to check a Web site's structure and layout. Whenever the site changes or is updated, the agent will "push" a notice to the user. The combination of push technology and intelligent agents can be used for news monitoring. One of the most useful applications of software agents is helping the user select articles from a constant stream of news, and several applications can generate an electronic newspaper customized to the user's own interests and preferences. Companies can gain "business intelligence" by monitoring the industry and its competitors; information about the industry environment can be used for strategy development and long-term planning.

Financial Applications

A financial analyst sitting at a terminal connected to the global information superhighway faces a staggering amount of information on which to base investment decisions. The online information includes company information for thousands of stocks, along with online financial services and current data about companies' long-term projects. The analyst faces an important dilemma: how to find the information that is relevant to the problem and how to use that information to solve it.
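The change-detection behavior described above, in which an agent revisits a Web site and "pushes" a notice whenever the site changes, can be sketched by comparing a hash of the page content between visits. The page store and fetch function here are stubs standing in for real Web pages and an HTTP client.

```python
import hashlib

# Sketch of a change-detection agent: hash the page content on each visit
# and push a notice only when the hash differs from the previous visit.
# PAGES and fetch() are stand-ins for real Web pages and an HTTP download.

PAGES = {"example.org/news": "headline v1"}

def fetch(url):
    return PAGES[url]  # a real agent would perform an HTTP GET here

def check_site(url, last_seen):
    """Return True (push a notice) if the page changed since the last visit."""
    digest = hashlib.sha256(fetch(url).encode()).hexdigest()
    previous = last_seen.get(url)
    last_seen[url] = digest
    return previous is not None and previous != digest

seen = {}
check_site("example.org/news", seen)            # first visit: nothing to compare
PAGES["example.org/news"] = "headline v2"
changed = check_site("example.org/news", seen)  # content changed, so push a notice
```

The first visit merely records a baseline; a notice is pushed only when a later visit sees different content.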
The information available through the network is overwhelming, and the ability to appropriately access, scan, process, modify, and use the relevant data is crucial. A distributed agent framework called Retsina, which has been used in financial portfolio management, is described by Katia Sycara of Carnegie Mellon University. The overall portfolio-management task has several component tasks: eliciting (or learning) user profile information, collecting information on the user's initial portfolio position, and suggesting and monitoring a reallocation to meet the user's current profile and investment goals. Each task is supported by an agent. The portfolio manager agent is an interface agent that interacts graphically and textually with the user to acquire information about the user's profile and goals. The fundamental analysis agent is a task assistant that acquires and interprets information about a stock's fundamental value. The technical analysis agent uses numerical techniques to try to predict near-term movements in the stock market. The breaking news agent tracks and filters news stories and decides if they
are so important that the user needs to know about them immediately because the stock price might be immediately affected. Finally, the analyst tracking agent tries to gather intelligence about what human analysts are thinking about a company. Users of a site that deals with stock market information spend a lot of time rechecking the site for new stock reports or market data. Such users could be given agents that e-mail them when information relevant to their portfolio becomes available or changes. According to researchers at the City University of Hong Kong,40 intelligent agents are well suited for monitoring financial markets, detecting hidden financial problems, and reporting abnormal financial transactions, such as financial fraud, unhedged risks, and other inconsistencies. Other monitoring tasks involve fraud detection, credit risk monitoring, and position risk monitoring. For example, an external environment monitoring agent could collect and summarize movements in the U.S. Treasury bond yield in order to monitor any financial risk that may result from movements within the bond market. Researchers at Georgia State University (GSU) have proposed a Multi-Agent Decision Support System as an alternative to the traditional Data + Model Decision Support System.41 The system is composed of a society of agents classified into three categories according to the phases of Simon's problem-solving model. Herbert Simon21 proposed a model of human decision making that consists of three interdependent phases: intelligence, design, and choice. In the intelligence phase, the environment is searched or scanned to find and formulate problems and opportunities. Design-phase activities include searching for, developing, and analyzing possible alternative courses of action.
This phase can be divided into a search routine (finding ready-made solutions) and a design routine (used if no ready-made solution is available). In the choice phase, further analysis is performed and a course of action is selected from the available alternatives. The agent system developed at GSU is composed of intelligence-phase agents, design-phase agents, and choice-phase agents. It has been used to support investment decisions.

Networking and Telecommunications

Agent technology is also used for network management applications.42 Complex networks include multiple platforms and networking devices such as routers, hubs, and switches. Agents can monitor the performance of a device and report any deviation from regular performance to a network monitoring system. They are also used to execute critical functions, including performance monitoring, fault detection, and asset management.
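The deviation-reporting role just described, in which an agent compares a device's readings against its expected operating range and reports departures to the network monitoring system, can be sketched as a threshold check. The device metrics, thresholds, and routing destinations below are hypothetical.

```python
# Threshold-check sketch of a device-monitoring agent: readings outside a
# device's expected operating range are reported upward; anything within
# range is handled locally. All figures here are illustrative.

EXPECTED = {"cpu_load": (0.0, 0.85), "error_rate": (0.0, 0.01)}

def deviations(metrics):
    """Return the readings that fall outside their expected range."""
    out = {}
    for name, value in metrics.items():
        low, high = EXPECTED[name]
        if not low <= value <= high:
            out[name] = value
    return out

def report(device, metrics):
    """Route a reading: handled locally if OK, escalated if deviating."""
    bad = deviations(metrics)
    if not bad:
        return ("ok", device)
    if bad.get("error_rate", 0) > 0.10:      # critical: page the administrator
        return ("page-administrator", device, bad)
    return ("central-node", device, bad)     # serious: report upward
```

A normal reading is handled locally, an overloaded CPU is reported to the central node, and a soaring error rate is routed straight to the administrator, mirroring the hierarchical escalation discussed in this section.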
Software agents can be used to monitor network performance and to analyze the traffic and load on a network. Agents can identify overutilized or underutilized servers or resources based on prespecified performance objectives. This information can be used to improve network design and to identify required changes to network configuration and layout. Based on the information collected by the agents, the network manager can determine whether it is time to increase server capacity or the available bandwidth. Agents can also monitor network devices, report unusual situations, and identify faults and errors; instant fault notification can reduce network downtime. Sample error-detection tasks include detecting a broadcast storm, a faulty server, or damaged Ethernet frames. Agents can also be organized in a hierarchical arrangement: low-level agents monitor local devices and attempt to fix minor problems, more serious problems are reported to a centralized node, and critical problems are routed directly to the network administrator through an automatic e-mail/pager alerting system. Many networking equipment manufacturers have recognized the benefits of agent technology and provide device-specific agents for routers, hubs, and switches.

Distance Education

Software agents are also used to support distance education. For example, an agent developed at Chung Yuan Christian University uses a diagnosis problem-solving network to provide feedback to distance-learning students.43 Students solve mathematical problems and the agent checks the results, provides a diagnosis, and suggests remedial action. SAFARI, developed at the Heron Labs of Middle East Technical University, is an intelligent tutoring system with a multi-agent architecture.44 Its intelligent agents provide student guidance in Web-based courses and can also customize content based on student performance.
Software agents can also provide tutoring capabilities in a virtual classroom.38 For example, the Advanced Research Projects Agency's Computer Assisted Education and Training Initiative (CAETI) provides individualized learning in combination with a group-learning approach based on a multiuser environment and simulation.

Manufacturing

A classical problem in manufacturing is scheduling, which involves the optimal allocation of limited resources among parallel and sequential activities.45 There are many traditional approaches to the manufacturing scheduling problem. Traditional heuristic techniques use a trial-and-error approach or an iterative algorithm. According to Weiming Shen,45 with the National Research Council of Canada, in many respects agents use a superior approach to developing schedules. Agents use a negotiation approach, which more closely resembles how organizations operate in the real world. Agents can model interaction not just at the shop-floor level, but also at the supply-chain level. The National Center for Manufacturing Sciences, based in Ann Arbor, Michigan, has developed the Shop Floor Agents project.46 This is an applied agent-based system for shop floor scheduling and machine control. The prototype system was implemented in three industrial scenarios sponsored by AMP, General Motors, and Rockwell Automation/Allen-Bradley. The PABADIS system, a multi-agent system developed at the Ecole des Mines d'Ales in France, uses mobile agents based on production orders issued by an enterprise resource planning (ERP) system.2 The PABADIS virtual factory system enables the configuration and reconfiguration of a production system. It uses an auction coordination mechanism in which agents negotiate based on bid-allocation and temporal feasibility constraints. Infosys Technologies Ltd. has developed an agent-oriented framework for sales order processing.47 The framework, called Agent-Based Sales Order Processing System (AESOPS), allows logistics personnel to conceptualize, design, and build a production environment as a set of distributed units over a number of physical locations. These production units can interact with each other to process any order to completion in a flexible yet consistent and efficient manner.
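The negotiation-based scheduling these systems use can be illustrated with a simple announce-bid-award round: a job is announced, each machine agent bids its estimated completion time, and the job goes to the lowest bid. The machines and load figures below are hypothetical and are not drawn from any of the systems above.

```python
# Announce-bid-award sketch of negotiation-based scheduling: each machine
# agent bids the number of hours until it could finish an announced job,
# and the job is awarded to the lowest bid. Machine data is illustrative.

machines = {
    "mill-1": {"queue_hours": 5, "hours_per_unit": 1.0},
    "mill-2": {"queue_hours": 2, "hours_per_unit": 1.5},
}

def bid(machine, units):
    """A machine agent's bid: estimated hours until the job is complete."""
    return machine["queue_hours"] + units * machine["hours_per_unit"]

def award(job_units):
    """Announce the job, collect every agent's bid, and award the lowest."""
    bids = {name: bid(m, job_units) for name, m in machines.items()}
    return min(bids, key=bids.get)
```

A 10-unit job goes to mill-1 (a bid of 15 hours versus 17), while a 1-unit job goes to the less-loaded mill-2, showing how the award shifts with current load.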
Healthcare

According to the Health Informatics Research Group at the Universiti Sains Malaysia, agent technology in healthcare knowledge management is a highly viable way to assist healthcare practitioners while procuring relevant healthcare knowledge.48 This group has developed a data mining agent whose core function is to retrieve and consolidate data from multiple healthcare data repositories.49 Potential decision-support and strategic-planning applications include analysis of hospital admission trends and analysis of the cost-effectiveness of healthcare management. The system contains a defined taxonomy of healthcare knowledge and a standard healthcare vocabulary to achieve knowledge standardization. It also has medical databases that contain data obtained from various studies or surveys. The clinical case bases contain "snapshots" of actual past clinical cases encountered by healthcare practitioners and other experts.
ISSUES SURROUNDING AGENTS

Several implementation and technical issues must be resolved before broad use of agent technologies can take off. These are discussed below.

Implementation Issues

Unrealistically high expectations promoted by computer trade magazines and software marketing literature may affect the implementation of intranet/Internet systems and agent software. Before implementing a new Internet technology, managers and users should be aware that a new technology does not by itself guarantee improved productivity. Implementing the most expensive or sophisticated intranet system may not necessarily yield improved productivity; in fact, a less costly system may provide the same benefits, and in some instances systems can be implemented just as easily without sophisticated agents. It is important to plan agent implementations carefully to truly increase productivity and reduce the risk of implementation failure. It has been proposed that Internet delivery is a proven way to improve information deployment and knowledge sharing in organizations. However, more understanding is needed of the effects of information delivery on decision making and of how information delivery is influenced by other variables. Many companies have made large investments in intranet sites, expecting improved information deployment and ultimately better decision making. However, developers and implementers should also take nontechnical issues into account. It is important to train and educate employees and managers to take full advantage of agent technologies. Managers read articles that promise increases in revenue and productivity from agent technology and want to implement the same technologies in their companies. They may, however, have no clear vision of how agents can enhance existing processes or improve information deployment.
There are many arguments that agent technology will lead to productivity improvements, but some of these arguments have not been tested in practice. Agents are not a panacea, and the problems that have troubled software systems in general apply to agents as well.

Miscellaneous Technical Issues

Many technical issues remain to be resolved, such as the development of standards to facilitate the interaction of agents from different environments, the integration of legacy systems and agents, and security concerns regarding cash handling. Legacy systems are usually mainframe based and were initially set up long before the widespread adoption of the Internet. Therefore, mainframe systems use nonroutable
protocols, which are not compatible with the TCP/IP family of Internet protocols. This intrinsic limitation makes the integration of legacy systems and agent technology very difficult. To exploit the full synergy of these two technologies, middleware and interfacing systems are needed.

CONCLUSION

Information systems managers and researchers should keep a close eye on agents because they offer an excellent alternative for managing information. Many corporate information technology managers and application developers are considering the potential business applications of agent technologies, which represent a new approach to software development. Most agent programs today are site specific; companies are adding agents to their Web sites with knowledge and features specific to the business of that organization. At the most fundamental level, agents provide sites with value-added services that leverage the value of their existing content. Some companies that may benefit from agents include:

• Information publishers, such as news and online services, which can use personalized agents to filter and deliver information that satisfies subscribers' personalized search profiles.
• Companies implementing intranets, which can provide their employees with monitoring of industry news, product developments, data about competitors, e-mail, groupware software, and environmental scanning.
• Product vendors, which can provide customer support by informing customers about new products, updates, tips, and documentation, depending on each customer's personalized profile.

Businesses could provide other agents that automate customer service and support, disseminate or gather information, and generally save the user from mundane and repetitive tasks. As the Web matures, these value-added services will become critical in differentiating Web sites from their competition and maximizing the value of the content on the site.
The evolution of agents will undoubtedly impact future work practices. Current technology is already delivering benefits to users. By incrementally introducing more advanced functionality and additional autonomy to agents, organizations will gain even more from this technology. Systems and applications are increasingly designed around the agent approach. Agent orientation is no longer an emerging technology but a powerful approach that is becoming pervasive in systems and applications in almost every area. In summary, the agent paradigm is the next step in the evolution of software development after object orientation.
References

1. Wooldridge, M. and Jennings, N., "Intelligent Agents: Theory and Practice," The Knowledge Engineering Review, 10(2), 115–152, 1996.
2. Diep, D., Masotte, P., Reaidy, J., and Liu, Y.J., "Design and Integration of Intelligent Agents to Implement Sustainable Production Systems," in Proc. of the Second Intl. Symposium on Environmentally-Conscious Design and Inverse Manufacturing ECODESIGN, 2001, 729–734.
3. Barker, J.A., Future Edge: Discovering the New Paradigms of Success, William Morrow and Company, 1992, chap. 3–6.
4. Kinny, D., Georgeff, M., and Rao, A., "A Methodology and Modeling Technique for Systems of BDI Agents," in Agents Breaking Away, Springer-Verlag, 1996, 56–71.
5. Wagner, G., Call for position papers, Agent-Oriented Information Systems Web site (www.aois.org).
6. O'Malley, S.A. and DeLoach, S.A., "Determining When to Use an Agent-Oriented Software Engineering Paradigm," in Agent-Oriented Software Engineering II, LNCS 2222, Second International Workshop, AOSE 2001, Montreal, Canada, May 29, 2001, Wooldridge, M.J. and Ciancarini, P., Eds., Springer-Verlag, Berlin, 2001, 188.
7. Wooldridge, M., Muller, J.P., and Tambe, M., "Agent Theories, Architectures, and Languages: A Bibliography," in Intelligent Agents II: Agent Theories, Architectures and Languages, IJCAI 1995 Workshop, Montreal, Canada, 1995, 408–431.
8. Debenham, J. and Henderson-Sellers, B., "Full Lifecycle Methodologies for Agent-Oriented Systems — The Extended OPEN Process Framework," in Agent-Oriented Information Systems at CAiSE'02, May 27–28, 2002, Toronto, Ontario, Canada.
9. Berners-Lee, T., Cailliau, R., Luotonen, A., Nielsen, H.F., and Secret, A., "The World-Wide Web," Communications of the ACM, 37(8), 76–82, August 1994.
10. Daig, L., "Position Paper," ACM SigComm'95 Middleware Workshop, April 1995.
11. Bates, J., "The Role of Emotion in Believable Agents," Communications of the ACM, 37(7), 122–125, 1994.
12.
Maes, P., "Agents that Reduce Work and Information Overload," Communications of the ACM, 37(7), 31–40, 1994.
13. Newell, S., "User Models and Filtering Agents for Improved Internet Information Retrieval," User Modeling and User-Adapted Interaction, 7(4), 223–237, 1997.
14. Castelfranchi, C., "Guarantees for Autonomy in Cognitive Agent Architecture," in Intelligent Agents, Proceedings of the ECAI-94 Workshop on Agent Theories, Architectures and Languages, Amsterdam, The Netherlands, 1994, 56–70.
15. Guha, R.V. and Lenat, D.B., "Enabling Agents to Work Together," Communications of the ACM, 37(7), 127–141, 1994.
16. Guichard, F. and Ayel, J., "Logical Reorganization of Distributed Artificial Intelligence Systems," in Intelligent Agents, Proc. of the ECAI-94 Workshop on Agent Theories, Architectures and Languages, Amsterdam, The Netherlands, 1994, 118–128.
17. Haddadi, A., Communication and Cooperation in Agent Systems, Springer-Verlag, 1996, 1–2, 52–53.
18. Lashkari, Y., Metral, M., and Maes, P., "Collaborative Interface Agents," in Proc. of the National Conference on Artificial Intelligence, Seattle, Washington, July 31–August 4, 1994, 444–449.
19. Katz, D. and Kahn, R.L., "Common Characteristics of Open Systems," in Systems Thinking, Penguin Books, Middlesex, England, 1969, 86–104.
20. Bertalanffy, L. von, "General System Theory," in General Systems, Vol. 1, 1956.
21. Simon, H.A., "Decision Making and Problem Solving," Interfaces, 17(5), 11–31, September-October 1987.
22. Sommerhoff, G., "The Abstract Characteristics of Living Systems," in Systems Thinking, Penguin Books, Middlesex, England, 1969, 147–202.
23. Van Gigch, J.P., Systems Design Modeling and Metamodeling, Plenum Press, New York, 1991.
24. d'Inverno, M. and Luck, M., "Formalising the Contract Net as a Goal-Directed System," in Agents Breaking Away, Springer-Verlag, 1996, 72–85.
25. Norman, D., "How Might People Interact with Agents," Communications of the ACM, 37(7), 68–76, 1994.
26.
Ashby, W.R., “Adaptation in the Multistable Environment,” in Design for a Brain, 2nd ed., Wiley, New York, 1960, 205–214.
27. Giroux, S., "Open Reflective Agents," in Intelligent Agents II: Agent Theories, Architectures and Languages, IJCAI 1995 Workshop, Montreal, Canada, 1995, 315–330.
28. Mitchell, T. et al., "Experience with a Learning Personal Assistant," Communications of the ACM, 37(7), 81–91, 1994.
29. Cunnyngham, J., "Cybernetic Design for a Strategic Information System," in Applied Systems and Cybernetics, Lasker, G.E., Ed., Pergamon Press, New York, 1980, 920–925.
30. Lieberman, H., "Letizia: A User Interface Agent for Helping Browse the World Wide Web," International Joint Conference on Artificial Intelligence, Montreal, Canada, August 1995.
31. Vogler, H., Moschgath, M., and Kunkelman, T., "Enhancing Mobile Agents with Electronic Commerce Capabilities," in Cooperative Information Agents II, Proceedings of the Second International Workshop, CIA 1998, Paris, France, July 1998, Klusch, M. and Weiß, G., Eds., Springer-Verlag, Germany, 148–159.
32. Murch, R. and Johnson, T., Intelligent Software Agents, Prentice Hall, Upper Saddle River, NJ, 1999.
33. Brewington, B. et al., "Mobile Agents for Distributed Information Retrieval," in Intelligent Information Agents, Klusch, M., Ed., Springer-Verlag, Germany, 1999, 354–395.
34. Hutheesing, N., "Spider's Helper," Forbes, 158(1), 79, July 1, 1996.
35. Etzioni, O. and Weld, D., "A Softbot-Based Interface to the Internet," Communications of the ACM, 37(7), 72–76, 1994.
36. Selberg, E. and Etzioni, O., "The MetaCrawler Architecture for Resource Aggregation on the Web," IEEE Expert, 12(1), 8–14, 1997.
37. Kowalczyk, R. et al., "InterMarket — Towards Intelligent Mobile Agent e-Marketplaces," in Proceedings of the Ninth Annual IEEE International Conference and Workshop on the Engineering of Computer-Based Systems, Lund, Sweden, April 8–11, 2002, IEEE.
38. O'Leary, D.E., Kuokka, D., and Plant, R., "Artificial Intelligence and Virtual Organizations," Communications of the ACM, 40(1), 52, January 1997.
39. EPRI, E-Commerce Applications and Issues for the Power Industry, EPRI Technical Report TR-114659, 2000.
40. Wang, H., Mylopoulos, J., and Liao, S., "Intelligent Agents and Financial Risk Monitoring Systems," Communications of the ACM, 45(3), 83, March 2002.
41. Fazlollahi, B. and Vahidov, R., "Multi-Agent Decision Support System Incorporating Fuzzy Logic," in 19th International Conference of the North American Fuzzy Information Processing Society, July 13–15, 2000, Atlanta, GA, 246–250.
42. Muller, N.J., "Improving Network Operations with Intelligent Agents," International Journal of Network Management, 7, 116–126, 1997.
43. Chang, J. et al., "Implementing a Diagnostic Intelligent System for Problem Solving in Instructional Systems," in Proc. of the International Workshop on Advanced Learning Technologies, 2000, 29–30.
44. Ozdemir, B. and Alpaslan, F.N., "An Intelligent Tutoring System for Student Guidance in Web-Based Courses," in Fourth International Conference on Knowledge-Based Intelligent Engineering Systems and Allied Technologies, Brighton, UK, August 30–September 1, 2000.
45. Shen, W., "Distributed Manufacturing Scheduling Using Intelligent Agents," IEEE Intelligent Systems, 17(1), 88–94, 2002.
46. Parunak, H.V.D., Workshop Report: Implementing Manufacturing Agents (in conjunction with PAAM 96), National Center for Manufacturing Sciences, Ann Arbor, MI, 1996.
47. Mondal, A.S., "A Multi-Agent System for Sales Order Processing," Intelligence, 32, Fall 2001.
48. Zaidi, S.Z.H., Abidi, S.S.R., and Manickam, S., "Distributed Data Mining from Heterogeneous Healthcare Data Repositories: Towards an Intelligent Agent-Based Framework," in Proc. of the 15th IEEE Symposium on Computer-Based Medical Systems, 2002, 339–342.
49. Hashmi, Z.I., Abidi, S.S.R., and Cheah, Y.N., "An Intelligent Agent-Based Knowledge Broker for Enterprise-Wide Healthcare Knowledge Procurement," in Proc. of the 15th IEEE Symposium on Computer-Based Medical Systems, 2002.
Chapter 37
The Methodology Evolution: From None, to One-Size-Fits-All, to Eclectic Robert L. Glass
Methodology — the body of methods used in a particular branch of activity Method — a procedure or way of doing something — Definitions from the Oxford American Dictionary
To use a methodology is to choose an orderly, systematic way of doing something. At least that is the message the dictionary brings us. But what does that really mean in the context of the systems and software field? There has been a quiet evolution in that real meaning over the last few decades. In the beginning (the 1950s), there were few methods and no methodologies. Solution approaches tended to focus attention on the problem at hand. Because methodologies did not exist, systems developers chose from a limited collection of "best-of-breed" methods. Problem solution was difficult, but with nothing much to compare with, developers had the feeling they were making remarkable progress in solving application problems. That "best-of (primitive method)-breed" approach persisted through a decade or two of the early history of the systems development field. And then suddenly (in the 1970s), the first real methodology burst forth, and the systems development field would never be the same again. Not only was the first methodology an exciting, even revolutionary, addition to the
0-8493-1595-6/03/$0.00+$1.50 © 2003 by CRC Press LLC
field, but the assumption was made that this one methodology (structured analysis and design, later to be called the "structured revolution") was suitable for any systems project the developer might encounter. We had passed from the era of no methodology to the era of best methodology. Some, looking back on this era, refer to it as the one-size-fits-all era. But as time went by, what had begun as a field characterized by one single best methodology changed again. Competing methodologies began to appear on the scene. There was the information engineering approach. There was object orientation. There was event-driven systems development. What had been a matter of simple choice had evolved into something very complex. What was going on here? With this brief evolutionary view of the field, let us go back over some of the events just described to elaborate a bit more on what has been going on in the methodology movement and where we are headed today.

TOOLS AND METHODS CAME FIRST

In the early days, the most prominent systems development tools were the operating system and the compiler. The operating system, which came along in the mid-to-late 1950s, was a tool invented to allow programmers to ignore the bare-bones software interface of the computer and to talk to that interface through intermediary software. Then came high-order language (HOL), and with it the compiler to translate that HOL into so-called machine code. HOLs such as Fortran and COBOL became popular quickly; the majority of software developers had chosen to write in HOL by the end of the 1950s. Shortly thereafter, in the early 1960s, a more generous supply of support tools to aid in software development became available. There were debuggers, to allow programmers to seek, find, and eliminate errors in their code. There were flowcharters, to provide automated support for the drawing of design representations.
There were structural analyzers, to examine code searching for anomalies that might be connected with errors or other problems. There were test drivers, harnesses for testing small units of software. There were error reporters, used for tracking the status of errors as they occurred in the software product. There were report generators, generalized tools for making report creation easy. And with the advent of these tools came methods. It was not enough to make use of one or more tools; methods were invented to describe how to use them. By the mid to late 1960s there was, in fact, a thriving collection of individual tools and methods useful to the software developer. Also evolving at a steady but slower rate was a body of literature describing better ways to
The Methodology Evolution: From None, to One-Size-Fits-All, to Eclectic use those tools and methods. The first academic computer science (CS) program was put in place at Purdue University in the early 1960s. Toward the end of that decade, CS programs were beginning to become commonplace. At about the same time, the first academic information systems programs began to appear. The literature, slow to evolve until the academic presence began, grew rapidly. What was missing, at this point in time (the late 1960s), was something that tied together all those evolving tools and methods in some sort of organized fashion. The CS and IS textbooks provided some of that organization, but the need was beginning to arise for something more profound. The scene had been set for the appearance of the concept of “methodology.” THE METHODOLOGY During the 1970s, methodologies exploded onto the software scene. From a variety of sources — project work done at IBM, analytical work done by several emerging methodology gurus, and with the active support and funding of the U.S. Department of Defense, the “structured methodologies” sprang forth. At the heart of the structured methodologies was an analysis and design approach — structured analysis and design (SA&D). There was much more to the structured methodologies than that — the Department of Defense funded IBM’s development of a 15-volume set of documents describing the entire methodological package, for example — but to most software developers, SA&D was the structured methodology. Textbooks describing the approach were written. Lectures and seminars and, eventually, academic classes in the techniques were conducted. In the space of only a few years during the 1970s, SA&D went from being a new and innovative idea to being the established best way to build software. By 1980 few software developers had not been trained/educated in these approaches. What did SA&D consist of? 
For a while, as the popularity of the approach boomed, it seemed as if any idea ever proposed by any methodology guru was being slipped in under the umbrella of the structured methodologies, and the field covered by that umbrella became so broad as to be nearly meaningless. But at heart there were some specific things meant by SA&D:

• Analysis. Requirements elicitation, determination, and analysis — obtaining and structuring the requirements of the problem to be solved:
— The data flow diagram (DFD), for representing the processes (functions/tasks) of the problem, and the data flow among those processes
— Process specifications, specifically defining the primitive (rudimentary or fundamental) processes of the DFD; entity/relationship (E/R) diagrams representing the data relationships
• Design. Top-down design, analyzing the most important parts of the problem first:
— Transformation analysis to convert the DFDs into structure charts (representing process design) and thence to pseudocode (detail-level process design)
• Coding. Constructing single entry/exit modules consisting only of the programming concepts sequence, selection, and iteration.

TOWARD THE HOLY GRAIL OF GENERALITY

An underlying theoretical development was also happening in parallel with the practical development of the concept: an evolution from the problem-specific approaches of the early days of computing toward more generalized approaches. The early approaches were thought of as “ad hoc,” a term that in CS circles came to mean “chaotic and disorganized,” perhaps the worst thing that could be said about a software development effort. Ad hoc, in effect, became a computing dirty word. As tools and methods and, later, methodologies evolved, the field appeared to be approaching a “holy grail” of generality. There would be one set of tools, one set of methods, and one methodology for all software developers to use. From the early beginnings of Fortran and COBOL, which were problem-specific languages for the scientific/engineering and business/information systems fields, respectively, the field evolved toward more general programming languages, defined to be suitable for all application domains. First PL/1 (sometimes called the “kitchen sink” language because it explicitly combined the capabilities of Fortran and COBOL), and later Pascal and C/C++/Java, were deliberately defined to allow the solution of all classes of problems. But soon, as mentioned earlier, cracks appeared in this veneer of generality. First, there was information engineering, a data/information-oriented methodology.
Information engineering not only took a different approach to looking at the problem to be solved, but in fact appeared to be applicable to a very different class of problem. Then there was object orientation (OO), which focused on a collection of data and the set of processes that could act on that data. And, most recently, there was the event-driven methodology, best personified by the many “visual” programming languages appearing on the scene. In the event-driven approach, a program is written as a collection of event servicers, not just as a collection of functions or objects or information stores. The so-called Visual languages (Visual Basic is the best example) allowed system developers to create graphical user interfaces (GUIs) that responded to user-created “events.” Although many said that the event-driven approach was just another way of looking at problems from an OO point of view, the fact that Visual Basic 460
The Methodology Evolution: From None, to One-Size-Fits-All, to Eclectic is a language with almost no object capability soon made it clear that events and objects are rather different things. If the software world only needed one holy grail approach to problem solution, why was this proliferation of competing methodologies occurring? The answer for the OO advocates was fairly straightforward — OO was simply a better approach than the now obsolete structured approaches. It was a natural form of problem solution, they said, and it led more straightforwardly to the formation of a culture of reuse, in which components from past software efforts could be used like Lego™ building blocks to build new software products. But the rise of information engineering before the OO approaches, and event-driven after them, was perplexing. It was fairly clear to most who understood both the structured and information approaches, that they were appropriate for rather different kinds of problems. If a problem involved many processes, then the structured approaches seemed to work best. If there was a lot of data manipulation, then the information approaches worked best. And the event approach was obviously characterized by problems where the need was to respond to events. Thus it had begun to appear that the field was reverting to a more problem-focused approach. Because of that, a new interest arose in the definition of the term “ad hoc.” The use of “ad hoc” to mean chaotic and disorganized, it was soon learned, was wrong. Ad hoc really means focused on the problem at hand. TROUBLE ON THE GENERALITY RANCH Meanwhile, there was additional trouble with the one-size-fits-all view. Researchers in another part of the academic forest began looking at systems development from a new point of view. 
Instead of defining a methodology and then advocating that it be used in practice — a prescriptive approach that had been used for most of the prior methodologies — they began studying instead what practitioners in the field actually did with methodologies. These “method engineering” researchers discovered that most practitioners were not using methodologies as the methodology gurus had expected them to. Instead of using these methodologies “out of the box,” practitioners were bending and modifying them, picking and choosing portions of different methodologies to use on specific projects. According to the research findings, 88 percent of organizations using something as ubiquitous as the structured methodology were tailoring it to meet their specific project needs. At first, the purists made such statements as “the practitioners are losing the rigorous and consistent capabilities that the methodologies were invented to provide.” But then, another viewpoint began to emerge: researchers began to accept as fait accompli that methodologies would be modified, and began working toward providing better advice for tailoring and customization and defining methodological approaches that lent themselves to tailoring and customizing. In fact, the most recent trend among method engineers is to describe the process of modifying methods and to invent the concept of “meta-modeling,” an approach to providing modifiable methods. This evolution in viewing methodologies is still under way. Strong factions continue to see the general approach as the correct approach. Many advocates of the OO methodology, for example, tend to be in this faction, and they see OO as the inevitable holy grail. (It is no accident that in OO’s most popular commercial modeling language, UML, the “U” stands for “Unified,” with an implication that it really means “Universal.”) Most data from practitioner surveys shows, however, that the OO approaches have been very slow taking hold. The structured methodology still seems to dominate in practice.

A PROBLEM-FOCUSED METHODOLOGICAL APPROACH

There is certainly a strong rationale for the problem-focused methodological approach. For one thing, the breadth of problems being tackled in the computing field is enormous and ever increasing. Do we really imagine that the same approach can be used for a hard real-time problem that must respond to events with nanosecond tolerances, and an IS problem that manipulates enormous quantities of data and produces a complex set of reports and screens? People who see those differences tend to divide the software field into a diverse set of classes of problems based on size, application domain, criticality, and innovativeness, as follows:

• Size. Some problems are enormously more complicated than others.
It is well-known in the software field that for every tenfold increase in the complexity of a problem, there is a one-hundredfold increase in the complexity of its solution.
• Application domain. There are very different kinds of problems to be solved:
— Business systems, characterized by masses of data and complex reporting requirements
— Scientific/engineering systems, characterized by complex mathematical sophistication
— System programming, the development of the tools to be used by application programmers
— Hard real-time systems, those with terribly tight timing constraints
— Edutainment, characterized by the production of complex graphical images
• Criticality. Some problem solutions involve risking lives and/or huge quantities of money.
• Innovativeness. Some problems simply do not lend themselves to traditional problem-solving techniques.

It is clear to those who have been following the field of software practice that it would be extremely difficult for any methodology to work for that enormously varied set of classes of problems. Some classes require formal management and communication approaches (e.g., large projects); others may not (e.g., small and/or innovative projects). Some require specialized quality techniques, such as performance engineering for hard real-time problems and rigorous error-removal approaches for critical problems. There are domain-specific skill needs, such as mathematics for the scientific/engineering domain and graphics for edutainment. The fragmentation of the methodology field, however, has left us with a serious problem. There is not yet a mapping between the kinds of solution approaches (methodologies) and the kinds of problems. Worse yet, there is not even a generally accepted taxonomy of the kinds of problems that exist. The list of problem types described previously, although generally accepted at a superficial level, is by no means accepted as the definitive statement of what types of problems exist in the field. And until a generally agreed-on taxonomy of problem types exists, it will be nearly impossible to produce that much-needed mapping of methodologies to problems. Even before such practical problems can be solved, an attitudinal problem must also be overcome. The hope for that “holy grail” universal solution tends to steer enormous amounts of energy and brilliance away from the search for better problem-focused methodologies. The computing field in general does not yet appear ready to move forward in any dramatic way toward more problem-specific solution approaches.
Some of the method engineering people are beginning to move the field forward in some positive, problem-focused ways. Others are holding out for another meta-methodology, with the apparent hope that there will be a giant umbrella over all of these specialized methodologies — a new kind of “holy grail” of generality.

THE BOTTOM LINE

My own storytelling suggests that the methodology movement has moved from none (an era when there were no methodologies at all and problem-solution approaches focused on the problem at hand) to one-size-fits-all (there was one single best methodology for everyone to use) to prolific (there were apparently competing choices of which “best” methodology to use) to tailored (methodology choices were back to focusing on the problem at hand). Not everyone, however, sees the topic of methodology in this same way. There are those who still adhere to a one-size-fits-all view. There are those who think that tailoring methodologies is wrong. There are those who point to a lack of a taxonomy of applications, or a taxonomy of methodologies, or an ability to map between these (missing) taxonomies, as evidence that the field is not yet ready for methodologies focused on the problem at hand. The methodology field, like the software engineering field of which it is a part, is still young and immature. What does this mean for knowledgeable managers of software projects? First, they must stay on top of the methodology field because its sands are shifting frequently. Second, for now, they must expect that no single methodology will solve the whole problem. They must be prepared for some technical, problem-focused tinkering with standard methodologies. Further, many larger software projects today involve a three-tiered solution — a user interface (front end) tier, a database/Internet (back end) tier, and the application problem solution (middle) tier. Each of those tiers will tend to need a different methodological approach:

• The front end will probably be attacked with an event-driven GUI builder, probably using one of the Visual programming languages.
• The back end will likely be addressed using an information-based database system using SQL, or an object-oriented Internet system, perhaps using Java.
• The middle tier will be addressed by a problem-focused methodology, perhaps the structured approaches for process-oriented problems, information engineering for data-focused problems, or an object-oriented approach for problems that involve a mixture of data objects and their associated processes.

Event-driven plus information-based or object-oriented plus some combination of the above? Does that not mean that systems development is becoming enormously more complex?
The answer, of course, is yes. But there is another way of looking at this plethora of problem-solving approaches. The toolbox of a carpenter contains much more than one simple, universal tool. Should we not expect that the toolbox of a systems developer be diverse as well?
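The three-tier split described above can be sketched in miniature. Everything in this sketch — the function names, the SQL string, and the event wiring — is a hypothetical stand-in, since in practice each tier would use its own specialized tools (a GUI builder, a real database, a full application framework); the point is only the separation of concerns:

```python
# Back-end tier: data access (information-based; the SQL is shown only as
# an illustrative string and is not run against a real database here).
def fetch_customer(customer_id: int) -> dict:
    query = "SELECT id, name FROM customers WHERE id = ?"  # illustrative only
    rows = {1: {"id": 1, "name": "Ada"}}  # in-memory stand-in for a database
    return rows[customer_id]

# Middle tier: the application problem solution, with no knowledge of
# either the GUI or the storage mechanism.
def greeting_for(customer_id: int) -> str:
    customer = fetch_customer(customer_id)
    return f"Hello, {customer['name']}!"

# Front-end tier: event-driven; a handler fires in response to a
# user-created event, as a visual GUI builder would wire it.
def on_button_click(customer_id: int) -> str:
    return greeting_for(customer_id)

print(on_button_click(1))  # Hello, Ada!
```

The sketch illustrates why each tier invites a different methodology: the front end only raises and services events, the middle tier only applies application logic, and the back end is the only layer that knows how data is stored.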
Chapter 38
Usability: Happier Users Mean Greater Profits Luke Hohmann
WHY YOU SHOULD CARE ABOUT USABILITY

Historically, usability has been associated with the user interface, and human–computer interaction (HCI) professionals have tended to concentrate their work on it. This makes perfect sense because most of our understanding of a given system and most of our perceptions of its usability are shaped by the user interface. Usability is, however, much deeper than the user interface. Usability refers to the complex set of choices that ends up allowing the users of the system to accomplish one or more specific tasks easily, efficiently, enjoyably, and with a minimum of errors. In this case, “users of the system” refers to all users:

• System administrators who install, configure, customize, and support your system
• Developers and system integrators who integrate your system with other applications
• End users who use the system directly to accomplish their tasks, from basic data entry in corporate systems to strategic decision making supported through complex business information systems

Many of these choices are directly influenced by your technical architecture. That is, if your system is perceived as usable, it will be usable because it was fundamentally architected to be usable. Architecting a system to be usable is a lot of work, but it is worth it. A large amount of compelling evidence indicates that usability is an investment that pays for itself quickly over the life of the product. A detailed analysis of the economic impact of usability is beyond the scope of this chapter, but anecdotal evidence speaks strongly about the importance of usability. One system the author of this chapter worked on was used by one of the world’s largest online retailers. A single telephone call to customer support could destroy the profits associated with two dozen or more successful transactions. In this case, usability was paramount. Other applications may not be quite as sensitive to usability, but practical experience demonstrates that executives routinely underestimate the importance of usability. The benefits of usable systems include any or all of the following, each of which can be quantified:

• Reduced training costs
• Reduced support and service costs
• Reduced error costs
• Increased productivity of users
• Increased customer satisfaction
• Increased maintainability

0-8493-1595-6/03/$0.00+$1.50 © 2003 by CRC Press LLC
Given the wide range of areas in which improving usability can reduce costs and increase profits, it is easy to see why senior managers should care about usability, and make creating usable systems a primary requirement of all development efforts.

CREATING USABLE SYSTEMS

Creating usable applications centers around four key processes:

1. Understanding users. The cornerstone of creating usable applications is an intimate understanding of the users, their needs, and the tasks that must be accomplished. The outcome of this understanding is a description of the users’ mental model. A mental model is the representation of the problem users have formed as they accomplish tasks. Understanding mental models enables designers to create system models that supplement and support users. System models are, in turn, conveyed to users through the use of metaphors.
2. A progression from “lo-fidelity” to “hi-fidelity” systems. Building usable applications is based on a gradual progression from “lo-fidelity” paper-and-pencil-based prototypes to “hi-fidelity” working systems. Such an approach encourages exploration through low-cost tools and efficient processes until the basic structure of the user interface has been established and is ready to be realized as a working computer system.
3. Adherence to proven principles of design. Through extensive empirical studies, HCI professionals have published several principles to guide the decisions made by designers. These simple and effective principles transcend any single platform and dramatically contribute to usability. The use of design principles is strengthened through the use of usability specifications, quantifiable statements used to formally test the usability of the system (e.g., the application must load within 40 seconds). If you are new to usability, start by focusing on proven principles of design. If you want to formalize the goals you are striving to achieve, require your marketing or product management organizations to include usability specifications in your requirements documents.
4. Usability testing. Each result produced during development is tested — and retested — with users and iteratively refined. Testing provides the critical feedback necessary to ensure designers are meeting user needs. An added benefit of testing is that it involves users throughout the development effort, encouraging them to think of the system as something they own and increasing system acceptance.

UNDERSTANDING USERS

The cornerstone of creating usable applications is an intimate understanding of the users, their needs, and the tasks that must be accomplished. One of the very best ways to create this understanding is through a simple user and task analysis. Once these are complete, a function assignment can be performed to clearly identify the distribution of tasks between the user and the system, which leads to the development of the mental model (see Exhibit 1).

[Exhibit 1. User and Task Analysis — user analysis and task analysis feed a function assignment, which in turn leads to mental model development.]
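A usability specification of the kind mentioned above (e.g., “the application must load within 40 seconds”) lends itself to an automated check. A minimal sketch follows; the 40-second threshold comes from the text’s example, while the timed operation and function names are hypothetical stand-ins:

```python
import time

MAX_LOAD_SECONDS = 40.0  # usability specification from the requirements

def measure_load(load_fn) -> float:
    """Time a load operation and return the elapsed seconds."""
    start = time.perf_counter()
    load_fn()
    return time.perf_counter() - start

def meets_spec(elapsed: float, limit: float = MAX_LOAD_SECONDS) -> bool:
    """True if the measured time satisfies the usability specification."""
    return elapsed <= limit

# A short sleep stands in for real application startup.
elapsed = measure_load(lambda: time.sleep(0.01))
print(meets_spec(elapsed))  # True
```

Because the specification is quantifiable, a check like this can run in an automated test suite, turning a usability goal into a pass/fail result rather than a matter of opinion.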
PROVIDING APPLICATION SOLUTIONS User Analysis The purpose of a user analysis is to clearly define who the intended users of the system really are, through a series of context-free, open-ended questions. Such questions might include: • Experience: — What is the expertise of the users? Are they experts or novices? — Are they comfortable with computers and GUIs? — Do they perform the task frequently or infrequently? — What problem domain language would the users most easily understand? • Context: — What is the working environment? — Is work done alone or in a group? Is work shared? — Who installs, maintains, and administers the system? — Are there any significant cultural or internationalization issues that must be managed? • Expectations: — How would the users like the system to work? — What features do they want? (If users have difficulty answering this question, propose specific features and ask if the user would like or dislike the specific feature). — How will the current work environment change when the system is introduced? (Designers may have to propose specific changes and ask if these changes would be considered desirable). Asking these questions usually takes no more than a few hours, but the data the answers provide is invaluable to the success of the project. Task Analysis Task analysis seeks to answer two very simple questions: (1) What tasks are the users doing now that the system will augment, change, enhance, modify, or replace? (2) What tasks will the users perform in the new system? The first phase of task analysis is to develop a clear understanding of how the system is currently being used, using the process outlined in Exhibit 2. The last step, that of creating an overall roadmap, is especially important for projects involved with replacing an existing user interface with a redesigned user interface. 
Common examples of this include replacing aging mainframe systems with modern Web-based applications, or creating an entirely new system to work beside an existing system, such as when a voice-based application is added to a call center.

[Exhibit 2. Task Analysis — a process of observing current use (e.g., using videotape), asking context-free questions, and establishing the big picture.]
The second phase of task analysis, that of describing how the new system will work, is often done through use cases. A use case is a structured prose document that describes the sequence of events between one or more actors and the system as one of the actors (typically the user) attempts to accomplish some task. Chapter 40 discusses use cases in more detail.

Function Assignment

As users and tasks are identified, the specific functions detailed in the requirements spring to life and are given meaning through use cases. At this stage, it is often appropriate to ask if the identified tasks should be performed by the user, performed by the system automatically on behalf of the user, or initiated by the user but performed by the system. This process is called function assignment, and can be absolutely essential in systems in which the goals are to automate existing business processes. To illustrate, consider an electronic mail system. Most automatically place incoming mail into a specific location, often an “in-box.” As a user of the system, did you explicitly tell the mail system to place mail there? No. It did this on your behalf, usually as part of its default configuration. Fortunately, for those of us who are heavy e-mail users, we can override this default configuration and create a variety of rules that allow us to automatically sort and process incoming e-mails in a variety of creative ways. While most systems can benefit from a function assignment, it is an optional step in the overall development effort.

Mental Model Development

The final step of user and task analysis is to propose various mental models of how the users think about and approach their tasks. Mental models are not documented in any formal manner. Instead, they are informal observations about how designers think users approach their tasks. For example, consider a designer creating a new project planning tool. Through several interviews she has discovered that managers think of dependencies within the project as a web or maze instead of a GANTT or PERT chart. This could provide insight into creative new ways of organizing tasks, displaying information, or providing notification of critical path dependencies.

Checklist

• We have identified a target population of representative users.
• We have created an overall roadmap of current users’ tasks.
— If redesigning a current system, we have made screen snapshots of every “screen” and have annotated each screen with a description of the task(s) it supports.
— If redesigning a current system, each task has a set of screen snapshots that describe, in detail, how users engage the system to accomplish the task.
• We have created a set of high-level use cases that document our understanding of the system as we currently understand it. These are specified without regard to any specific user interface that might be developed over the life of the project.
• We have reviewed our list of use cases with our users.
• All requirements are covered by the use cases.
• No use case introduces a new requirement.
• We have summarized our findings with a description of the mental model of the users.
• (Optional) We have performed a functional assignment.
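The e-mail example above — the system filing mail on the user’s behalf by default, with user-defined rules overriding that default — is the essence of function assignment. A toy sketch, in which the rule format and folder names are invented for illustration:

```python
def file_message(message: dict, rules: list) -> str:
    """Return the folder a message is filed into.

    Function assignment in miniature: user-defined rules (user-initiated,
    system-performed) are applied first; if none match, the system acts
    entirely on the user's behalf and uses its default "in-box"."""
    for substring, folder in rules:
        if substring in message["subject"]:
            return folder
    return "in-box"  # system default; the user never asked for this explicitly

rules = [("invoice", "finance"), ("newsletter", "reading")]
print(file_message({"subject": "March invoice"}, rules))  # finance
print(file_message({"subject": "hello"}, rules))          # in-box
```

The design question function assignment asks is visible in the code: which branches should the user control (the rules list), and which should the system simply decide (the default)?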
LO-FIDELITY TO HI-FIDELITY SYSTEMS

The common approach to building user interfaces is based on taking the requirements, building a prototype, and then modifying it based on user feedback to produce a really usable system. If you are lucky, and your users like your initial prototype, you are doing OK. More often than not, your users will want to change the initial prototype more than your schedule, motivation, or skills will allow. The result is increased frustration, because users are asked to provide feedback into a process that is designed to reject it. A more effective process is to start with lo-fidelity (lo-fi), paper-and-pencil prototypes, testing and developing these with users. Once you have a good sense of where to head, based on a lo-fi design, you can move to a hi-fidelity, fully functional system, secure in the knowledge that you are going to produce a usable result.

Lo-Fi Design

A lo-fi design is a low-tech description of the proposed user interface. For a GUI, it is a paper-and-pencil prototype. For a VUI (voice-based user interface), it is a script that contains prompts and expected responses. The remainder of this section assumes that you are creating a GUI; you will find that you can easily extend these techniques to other user interfaces. Specific objectives of a lo-fi design include:

• Clarifying overall application flow
• Ensuring that each user interaction contains the appropriate information
• Establishing the overall content and layout of the application
• Ensuring that each use case and requirement is properly handled

The basic activities, as outlined in Exhibit 3, consist of developing a storyboard, creating a lo-fidelity prototype, and beginning the testing process by testing the prototype with representative users. Among the inputs to this activity are the conceptual model (perhaps created through a process like the Rational Unified Process), an information model (an entity-relationship model for traditional systems or a class diagram for object-oriented systems), and design guidelines.

Capturing the System Model: The Role of Metaphor

The system model is the model the designer creates to represent the capabilities of the system under development. The system model is analogous to the mental model of the user.
As the users use the system to perform tasks, they will modify their current mental model or form a new one based on the terminology and operations they encounter when using the system. Usability is significantly enhanced when the system model supports existing mental models. To illustrate, your mental model of an airport enables you to predict where to find ticket counters and baggage claim areas when you arrive at a new airport. If you were building a system to support flight operations, you would want to organize the system model around such concepts as baggage handling and claim areas.
[Exhibit 3. Lo-Fi Prototype Design — inputs such as the mental model, the task analysis, an information model, and design guidelines feed metaphor evaluation and the storyboard, leading to the lo-fi prototype and then a simulated prototype.]
A metaphor is a communication device that helps us understand one thing in terms of another. Usability is enhanced when the system model is communicated through a metaphor that matches the users’ mental model. For example, the familiar desktop metaphor popularized by the Macintosh and copied in the Windows user interface organizes operations with files and folders as a metaphorical desktop. The system model of files and folders meshes with the mental model through the use of the interface. Another example is data entry dialogs based on their paper counterparts. In the design process, the designer should use the concept of a metaphor as a way of exploring effective ways to communicate the system model and as a means of effectively supporting the users’ mental model. Resist blindly adhering to a metaphor, as this can impede the users’ ability to complete important tasks. For example, although the desktop metaphor enables me to manage my files and folders effectively, there is no effective metaphorical counterpart that supports running critical utility software such as disk defragmentation or hard disk partitioning. A paper form may provide inspiration as a metaphor in a data entry system, but it is inappropriate to restrict the designer of a computer application to the inherent limitations of paper. Storyboarding A storyboard is a way of showing the overall navigation logic and functional purpose of each window in the GUI. It shows how each task identified 472
Usability: Happier Users Mean Greater Profits
6XSHU0DLO 7KLVLVWKHPDLQXVHU LQWHUIDFHIRU6XSHU0DLO ,WSUHVHQWVDPHQXRI RSHUDWLRQV
0RGDO'LDORJ
8VHUVHOHFWV&UHDWH0HVVDJH
&UHDWH0HVVDJH 8VHUHQWHUVWH[WRI PHVVDJHDQG UHFLSLHQWV
$GGUHVV/RRNXS 8VHUFDQVHOHFW DGGUHVVHHVIURPDOLVW
Exhibit 4.
Simple Storyboard for a Mail System
in the task analysis and described in the use cases can be accomplished in the system. It also shows primary and secondary window dependencies, and makes explicit the interaction between different dialogs in the user interface. (A primary window is a main application window. A secondary window is a window such as a dialog). The storyboard often expands on the system model through the metaphor and clarifies the designers’ understanding of the users’ mental model. An example of a simple storyboard for a mail system is shown in Exhibit 4. The example shows the name of each window, along with a brief description of the window contents. A solid line means the users’ selection will open a primary window, while a dashed line indicates the opening of a modal dialog. The notation used for storyboards should be as simple as possible. For example, in the earliest phases of system design, using a simple sheet of paper with Post-It notes representing each window is an effective way to organize the system. Storyboards have an additional benefit in that they can show the overall “gestalt” of the system. The author of this chapter has seen storyboards as large as three by six feet, packed with information yet entirely understandable. This storyboard provided the development staff with a powerful means of making certain that the overall application was consistent. The storyboard also enabled the project manager to distribute the detailed window design to specific developers in a sensible way, as the common relationships between windows were easy to identify. The development of the storyboard should be guided by the use cases. Specifically, it should be possible to map each operation described in a use 473
case to one or more of the windows displayed in the storyboard. The mapping of specific actions to user interface widgets will come at a later stage in the development process.

Lo-Fi Window Design

Following the storyboard, the design process proceeds to the development of the initial lo-fi window designs. During this phase, the designer takes paper, pencil, and many erasers and prepares preliminary versions of the most important windows described in the storyboard. A good starting point for selecting candidates for lo-fi design includes any windows associated with important or high-priority use cases. One critical decision point in lo-fi window design is determining the information that should be displayed to the user. The basic rule of thumb is to display only the information needed to complete the task. By mapping use cases to the data model, you can usually identify the smallest amount of data required to complete a task. Alternatively, you can simply show your storyboard to your users and add the detail they think is required. The resultant information can be compared with data models to ensure that all data has been captured. Creating a lo-fi window design is fun. Freed from the constraints of the control palette associated with their favorite IDE (integrated development environment), designers tend to concentrate on design and user needs instead of implementation details. The fun part of the design process includes the tools used to create lo-fi designs. The following items should be easily accessible for lo-fi window design:
• Scissors
• Glue
• Clear and colored overhead transparencies
• White correction paper
• A computer with a screen capture program and a printer
• Clear tape
• "Whiteout"
• A photocopier
A computer and a printer are included on this list because it is often more practical to print standard widgets and screens, such as corporate-defined standards for buttons or the standard file-open dialog provided by the operating system, than to try to draw them by hand. Once printed, these can be glued, taped, or otherwise used in the lo-fi design. This does mean that a lo-fi design is a mixture of hand-drawn and computer-generated graphics. In practice, this mixture does not result in any problems.
While the practical use of a computer during lo-fi design is acceptable in an appropriate role, it is important to realize that developers should not be attempting to create lo-fi designs on a computer. Doing so defeats many of the fundamental goals of lo-fi design. Paper-and-pencil designs are more amenable to change and are often created faster than similar designs created in an IDE. More importantly, designers who create their designs on a computer are less likely to change them, primarily because the amount of effort put into a computer design increases the designers' psychological attachment to the design. This increased attachment means a greater reluctance to change it, which defeats the purpose of the design. A final reason to use lo-fi prototypes is that designers who create their initial designs on a computer tend to worry about how they will make these designs "work." Specifically, they start worrying about how to connect the user interface to the business logic, or how to format data to meet the needs of the user. The result is a premature emphasis on making things work rather than exploring design alternatives.

Checklist

• We have created a storyboard that details the overall navigation logic of the application.
• We have traced each use case through the storyboard.
• We have created a data (or object) model that describes the primary sources of information to be displayed to the users and the relationships among these items.
• We have transformed our storyboards into a set of lo-fi prototypes.
• The lo-fi prototypes were created using paper and pencil.
• All information displayed in the lo-fi prototype can be obtained from the entity-relationship or object model or from some other well-known source.
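The "traced each use case through the storyboard" item lends itself to a mechanical check. The sketch below is illustrative only; the chapter prescribes no tooling, and the use-case and window names here are invented. It simply verifies that every use case maps to at least one storyboard window, the traceability property the text asks for.

```python
# Traceability check: every use case must map to at least one
# storyboard window. The use-case and window names below are
# hypothetical; substitute those from your own storyboard.

def untraced_use_cases(use_cases, window_map):
    """Return the use cases that no storyboard window supports."""
    return [uc for uc in use_cases if not window_map.get(uc)]

use_cases = ["Create Message", "Look Up Address", "Read Message"]
window_map = {
    "Create Message": ["Create Message"],
    "Look Up Address": ["Create Message", "Address Lookup"],
    "Read Message": [],  # gap: no window supports this yet
}

missing = untraced_use_cases(use_cases, window_map)
print(missing)  # a non-empty list flags a storyboard gap
```

Running such a check before lo-fi design begins keeps the storyboard and the use cases from drifting apart as windows are added and removed.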
DESIGN PRINCIPLES

While usability testing is the only way to be certain that the user interface is usable, there are several well-known and validated principles of user interface design that guide the decisions made by good designers. These principles are platform and operating system independent, and they are applicable in almost any environment. Adherence to these principles is becoming increasingly important in the era of Web development, as there are no universal standards for designing Web-based applications. This section presents a consolidated list of design principles that have stood the test of time, drawing heavily from the design principles published by Apple Computer and user interface researcher Jakob Nielsen, a cofounder of the Nielsen Norman Group (see Exhibit 5).
Checklist

• Each developer has been given a copy of the design principles.
• Each developer has easy access to a copy of the platform standards. Ideally, each developer is given a copy of the platform standards and the time to learn them.
• Each error situation has been carefully examined to determine if the error can be removed from the system with more effective engineering.
• Management is prepared to properly collect and manage the results of the usability inspection.

SIMULATED PROTOTYPING

There are many kinds of testing in software development: performance, stress, user acceptance, etc. Usability testing refers to testing activities conducted to ensure the usability of the entire system. This includes the user interface and supporting documentation, and in advanced applications can include the help system and even the technical support operation. A specific goal of lo-fi prototyping is to enable the designer to begin usability testing as early as possible in the overall design process, through a technique called simulated prototyping. Simulated prototyping means that the operation of the lo-fi system is simulated. Quite literally, a representative user attempts to complete assigned tasks using the prototype, with a human playing the role of the computer. Before describing how to conduct a simulated prototype test, let us first explore what results the test should produce. A simulated prototyping session should produce a report that includes the following three items. First, it must be clearly identified for tracking purposes. Second, it must identify all individuals associated with the test. Participants are not identified by name, but by an anonymous tracking number. Referring to the users involved with the test as participants rather than subjects encourages an open and friendly atmosphere and a free-flowing exchange of ideas. The goal is to keep participants as comfortable as possible.
Third, and most importantly, it must clearly summarize the results of the test. It is common to see test results concentrating on the negative responses associated with the prototype, but designers should also be looking for the positive responses exhibited by the user. This will enable them to retain the good ideas as the prototype undergoes revision. Unlike a source code review report, the results of the simulated prototype can provide solutions to problems identified during testing.
Exhibit 5. Consolidated List of Design Principles

Use concrete metaphors: Concrete metaphors are used to make the application clear and understandable to the user. Use audio, visual, and graphic effects to support the metaphor. Avoid any gratuitous effects; prefer aesthetically sleek interfaces to those that are adorned with useless clutter.

Be consistent: Effective applications are both consistent within themselves and with one another. Several kinds of consistency are important. The first is platform consistency, which means the application should adhere to the standards of the platform on which it was developed. For example, Windows specifies the exact distance between dialog buttons and the edge of the window, and designs should adhere to these standards. Each developer associated with the design of the user interface should be given a copy of the relevant platform standards published by each vendor. This will ensure that the application is platform compliant from the earliest stages of development. The second is application consistency, which means that all of the applications developed within a company should follow the same general model of interaction. This second form of consistency can be harder to achieve, as it requires interaction and communication among all of the development organizations within a company. A third kind of consistency is task consistency: similar tasks should be performed through similar sequences of actions.

Provide feedback: Let users know what effect their actions have on the system. Common forms of feedback include changing the cursor, displaying a percent-done progress indicator, and dialogs indicating when the system changes state in a significant manner. Make certain the kind of feedback is appropriate for the task.

Prevent errors: Whenever a designer begins to write an error message, he should ask: Can this error be prevented, detected and fixed, or avoided altogether? If the answer to any of these questions is yes, additional engineering effort should be expended to prevent the error.

Provide corrective advice: There are many times when the system cannot prevent an error (e.g., a printer runs out of paper). Good error messages let the user know what the problem is and how to correct it ("The printer is out of paper. Add paper to continue printing").

Put the user in control: Usable applications minimize the amount of time that they spend controlling user behavior. Let users choose how they perform their tasks whenever possible. If you feel that a certain action might be risky, alert users to this possibility, but do not prevent them from doing it unless absolutely necessary.

Use a simple and natural dialog: Simple means no irrelevant or rarely used information. Natural means an order that matches the task.
Exhibit 5. Consolidated List of Design Principles (continued)

Speak the users' language: Use words and concepts that match in meaning and intent the users' mental model. Do not use system-specific engineering terms. When presenting information, use an appropriate tone and style. For example, a dialog written for a children's game would not use the same style as a dialog written for an assembly-line worker.

Minimize user memory load: Do not make users remember things from one action to the next; make certain each screen retains enough information to support the task of the user. (I refer to this as the "scrap of paper" test: if the user ever needs to write a critical piece of information on a piece of paper while completing a task, the system has exceeded the user's memory capacity.)

Provide shortcuts: Shortcuts can help experienced users avoid lengthy dialogs and informational messages that they do not need. Examples of shortcuts include keyboard accelerators in menus and dialogs. More sophisticated examples include command-based searching languages. Novice users can use a simple interface, while experienced users can use the more advanced features afforded by the query language.
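The "prevent errors" and "provide corrective advice" principles translate directly into code. The Python sketch below is illustrative only; the function name, folder handling, and message wording are assumptions, not drawn from any platform standard. It shows the pattern the table describes: engineer the predictable failure away first, and when a failure cannot be prevented, report it in the users' language with a concrete next step.

```python
import os

def save_report(path, text):
    """Save text, preventing the most common 'cannot save' errors
    rather than merely reporting them after the fact."""
    folder = os.path.dirname(path) or "."
    # Prevent the error: create a missing folder instead of failing.
    os.makedirs(folder, exist_ok=True)
    try:
        with open(path, "w", encoding="utf-8") as f:
            f.write(text)
        return "Report saved."
    except PermissionError:
        # Corrective advice: state the problem and how to fix it,
        # in the users' language rather than engineering terms.
        return ("The report could not be saved because the folder is "
                "read-only. Choose a different folder and try again.")

print(save_report("reports/q3/summary.txt", "Quarterly summary"))
```

The design choice worth noting is that the missing-folder case never surfaces as an error at all; only the genuinely unpreventable case produces a message, and that message names both the cause and the remedy.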
Conducting the Test

A simulated prototype is most effective when conducted by a structured team consisting of between three and five developers. The roles and responsibilities of each developer associated with a simulated prototype are described in Exhibit 6 (developers should rotate roles between successive tests, as playing any single role for too long can be overly demanding). Selecting users for the simulated prototype must be done rather carefully. It may be easy to simply grab the next developer down the hall, or bribe the security guard with a bagel to come and look at the user interface. However, unless the development effort is focused on building a CASE tool or a security monitoring system, the development team has the wrong person. More specifically, users selected for the test must be representative of the target population. If the system is for nurses, test with nurses. If the system is for data entry personnel, test with data entry personnel. Avoid testing with friends or co-workers unless developers are practicing "playing" computer. It is critically important that designers be given the opportunity to practice playing computer. At first, developers try to simulate the operation of the user interface at the same speed as the computer. Once they realize this is impossible, they become skilled at smoothly simulating the operation of the user interface and can provide a quite realistic experience for the participant.
Exhibit 6. Developer Roles and Responsibilities with a Simulated Prototype

Leader:
• Organizes and coordinates the entire testing effort
• Responsible for the overall quality of the review (i.e., a good review of a poor user interface produces a report detailing exactly what is wrong)
• Ensures that the review report is prepared in a timely manner

Greeter:
• Greets people, explains the test, and handles any forms associated with the test

Facilitator:
• Runs the test (the only person allowed to speak) and performs three essential functions:
• Gives the user instructions
• Encourages users to "think aloud" during the test so observers can record users' reactions to the user interface
• Makes certain the test is finished on time

Computer:
• Simulates the operation of the interface by physically manipulating the objects representing the interface; thus, the "computer" rearranges windows, presents dialogs, simulates typing, etc.
• Must know the application logic

Observer:
• Takes notes on 3×5 cards, one note per card
During the simulated prototype, make certain developers have all of their lo-fi prototyping tools easily available. A lot of clear transparency is necessary, as they will place this over screens to simulate input from the user. Moreover, having the tools available means they will be able to make slight on-the-fly modifications that can dramatically improve the quality of the user interface, even while the "computer" is running. The test is run by explaining the goals of the test to the participants, preparing them, and having them attempt to accomplish one or more tasks identified in the task analysis. While this is happening, the observer(s) carefully watch the participants for any signs of confusion, misunderstanding, or an inability to complete the requested task. Each such problem is noted on a 3×5 card for further discussion once the test is completed. During the simulated prototype, designers will often want to "help" participants by giving them hints or making suggestions. Do not do this, as it will make the results of the test meaningless. The entire test (preparing to run the test, greeting the users and running the test, and discussing the results) should take about two hours. Thus, with discipline and practice, an experienced team can actually run up to four tests per day. In practice, it is better to plan on running two tests per day so that the development team can make critical modifications to the user interface between tests. In general, three to eight tests give
enough data to know if the development effort is ready to proceed to the next phase in the overall development process. Finally, a word on selecting and managing participants: remember that they are helping create a better system. The system is being tested, not the participants. Participants must feel completely free to stop the test at any time should they feel any discomfort.

Checklist

• We have prepared a set of tasks for simulated prototype testing.
• A set of participants who match the target user population has been identified.
• Any required legal paperwork has been signed by the participants.
• Our lo-fi prototype has been reviewed; we think it supports these tasks.
• The simulated prototyping team members have practiced their roles.
• The "computer" has simulated the operation of the system.
• We have responded to the review report and made the necessary corrections to the user interface. We have scheduled a subsequent test of these modifications.

HI-FI DESIGN AND TESTING

Once simulated prototyping has validated the lo-fi prototype, the design process moves into the last stage before implementation: the creation of the high-fidelity (hi-fi) prototype. This is an optional step in the overall process and can be safely skipped in many circumstances. Skipping this step means taking the lo-fi prototype and simply implementing it without further testing or feedback from users. Motivations for creating and testing a hi-fi prototype include ensuring that there is sufficient screen real estate to display the information identified in the lo-fi prototype, checking detailed art or graphics files, and making certain that the constraints of the delivery platform do not invalidate prior design decisions. Hi-fi prototypes are required when the design team has created a customized component, such as a special widget to represent a unique object. These must be tested to ensure they will work as desired.
A hi-fi prototype allows developers to enhance presentation and aesthetics through the use of fonts, color, grouping, and whitespace. Doing this most effectively requires experience with graphic design, a rich topic beyond the scope of this chapter. However, graphic design details contribute substantially to overall feelings of aesthetic enjoyment and satisfaction, and attention to them should be considered an essential activity in the overall development effort. Unless the development team has solid graphic design experience,
the choice of fonts should be kept simple, using predominantly black-on-white text and avoiding the use of graphics as adornments. If you are taking the time to build a hi-fi prototype before final implementation, test it with a few users. Doing so will help clarify issues that are difficult or impossible to test in a lo-fi design, such as when a button should be enabled or disabled based on prior user input, response times for key system operations, or the operation of custom controls. While the results of a hi-fi prototype test are the same as those of a lo-fi test, the process is substantially different. First, the test environment is different. It is typically more formal, with tests conducted within a usability lab, a special room with the equipment necessary to conduct and run the test. Second, the nature of the tasks being tested means that the structure of the test is different. For example, lo-fi prototypes are most effective at determining if the overall design created by the development team will be effective. Specifically, the lo-fi test should have helped determine the overall structure of the user interface: the arrangement and content of menus, windows, and the core interactions between them. When the lo-fi testing is complete, the conceptual structure of the interface should be well understood and agreed upon with the users. The hi-fi test, on the other hand, should be organized around testing one or more concrete performance variables. The real managerial impact of hi-fi testing is twofold. First, there is the question of finding the right individuals to conduct the test. Does the team have access to individuals who can properly conduct a hi-fi test? Most development teams do not. While most developers can quickly and easily learn how to run an effective lo-fi test, conducting a properly structured hi-fi test requires significantly more training.
The second, and far more important, question is this: What is going to be done with the results of the test? Like lo-fi test results, the results of a hi-fi test must be evaluated to determine what, if any, modifications are needed in the user interface. The problem is that modifying a hi-fi prototype takes a substantial amount of design and coding, and, as discussed earlier, the likelihood of substantially changing prior design decisions decreases as the effort invested in creating them increases. Do not test a hi-fi prototype if you are not willing to change it.

Checklist
• Our hi-fi test is measuring a specific performance variable.
• We have identified a qualified human factors specialist for hi-fi testing.
• We have prepared precise definitions of the test requirements.
• We have secured the use of an appropriately equipped usability lab.
CONCLUSION

The first main conclusion of this chapter deals with process. Creating usable systems is much more than following a series of checklists or arbitrary tasks. Ultimately, the process of creating usable systems is based on working to understand your users and performing a number of activities to meet their needs. These activities, such as user and task analysis, lo-fi design, and simulated prototyping, must all be performed with the primary objective of creating a usable system in mind. The second main conclusion of this chapter concerns motivation. There are several motivations for creating usable systems, the most important of which must be the goal of creating satisfied customers. Satisfied users are satisfied customers, however you might define customer, and satisfied customers are the foundation of a profitable enterprise. Given the correlation between usability and profitability, it is imperative that senior management take usability seriously.
Chapter 39
UML: The Good, the Bad, and the Ugly
John Erickson and Keng Siau
OBJECT ORIENTATION AND THE EMERGENCE OF UML

Introduction

The proliferation and development of information systems has proceeded at a pace amazing even to those intimately involved in the creation of such systems. It appears, however, that software engineering has not kept pace with the advances in hardware and general technological capabilities. In this maelstrom of technological change, systems development has traditionally followed the general ADCT (Analyze, Design, Code, Test) rubric, and utilized such specific methodologies as the Waterfall method, the Spiral method, the System Life Cycle (alternatively known as the System Development Life Cycle, or SDLC), prototyping, Rapid Application Development (RAD), Joint Application Development (JAD), end-user development, outsourcing in various forms, or buying predesigned software from vendors (e.g., SAP, J.D. Edwards, Oracle, PeopleSoft, Baan). In general, systems and software development methods do not require that developers adhere to a specific approach to building systems; and while this may be beneficial in that it allows developers the freedom to choose a method that they are most comfortable with and knowledgeable about, such an open-ended approach can constrain the system in unexpected ways. For example, systems developed using one of the above tried-and-not-so-true approaches (judging from the relatively high 66 to 75 percent failure rate of systems development projects) generally do not provide even close to all of the user-required functionalities in the completed system. Sieber et al.1 stated that an ERP implementation provided only 60
to 80 percent of the functionality specified in the requirements, and that was "merely" an implementation of a supposedly predeveloped application package. Thus, a different approach to systems development, one that provides close integration between analysis, design, and coding, would appear to be necessary. This chapter explores the role of the Unified Modeling Language (UML) as a modeling language that enables such an approach. This chapter starts by exploring the concept of object orientation, including object-oriented systems analysis and design, the idea of modeling and modeling languages, and the history of UML. It continues by covering the basic UML constructs and examining UML from a practitioner perspective. The chapter ends with a discussion of the future of UML and integrative closing comments.

Object Orientation

Over the past 15 to 20 years, object-oriented programming languages have emerged as the approach that many developers prefer to use during the Coding part of the ADCT cycle. However, in most cases, the Analysis and Design steps have continued to proceed in the traditional style. This has often created tension because traditional analysis and design are process-oriented instead of being object-oriented. Object-oriented systems analysis and design (OOSAD) methods were developed to close the gap between the different stages, the first methods appearing in the 1980s. By the early 1990s, a virtual explosion in the number of OOSAD approaches began to flood the new paradigmatic environment. Between 1989 and 1994, the number of OO development methods grew from around 10 to more than 50.2 Two of these modeling languages are of particular interest for the purposes of this chapter: Booch and Jacobson's OOSE (Object-Oriented Software Engineering), and Rumbaugh's OMT (Object Modeling Technique).2 A partial listing of methods and languages is shown in Exhibit 1.
The Emergence of UML

Prominent developers of different object-oriented modeling approaches joined forces to create UML, which was originally based on the two distinct OO modeling languages mentioned above: OOSE and OMT. Development began in 1994 and continued through 1996, culminating in the January 1997 release of UML version 1.0.2 The Object Management Group (OMG) adopted UML 1.1 as a standard modeling language in November 1997. Version 1.4 is the most current release, and UML 2.0 is currently under development.
Exhibit 1. Sample Methods and Languages

• Bailin
• Berard
• Booch
• Coad-Yourdon
• Colbert
• Embley
• Firesmith
• Gibson
• Hood
• Jacobson
• Martin-Odell
• Rumbaugh
• Shlaer-Mellor
• Seidewitz
• UML
• Wirfs-Brock
CURRENT UML MODELS AND EXTENSIBILITY MECHANISMS

Modeling

UML, as its name implies, is really all about creating models of software systems. Models are an abstraction of reality, meaning that we cannot model complete reality, simply because of the complexity that such models would entail. Without abstraction, models would consume far more resources than any benefit gained from their construction would justify. For the purposes of this chapter, a model constitutes a view into the system. UML originally proposed a set of nine distinct modeling techniques representing nine different models or views of the system. The techniques can be separated into structural (static) and behavioral (dynamic) views of the system. UML 1.4, the latest version of the modeling language, introduced three additional diagram types for model management.

Structural Diagrams. Class diagrams, object diagrams, component diagrams, and deployment diagrams comprise the static models of UML. Static models represent snapshots of the system at a given point or points in time, and do not relate information about how the system achieved the condition or state that it is in at each snapshot.
Class diagrams (see an example in Exhibit 2) represent the basis of the OO paradigm to many adherents and depict class models. Class diagrams specify the system from both an analysis and a design perspective. They depict what the system can do (analysis), and provide a blueprint showing how the system will be built (design).3 Class diagrams are self-describing
[Class diagram for a request-for-quotation (RFQ) system, showing an Interface class that handles operations such as filling the RFQ form and routing quotes; an RFQ class with attributes including number, customerID, description, date, total_amount, bonding_requirement, overhead, and tax; an Addendum class associated with the RFQ; Site and SiteLine classes, where an RFQ contains one or more Sites and each Site is composed of one or more SiteLines; and Contractor and SubContractor classes that receive quote requests and return quotes.]

Exhibit 2. Class Diagram
and include a listing of the attributes, behaviors, and responsibilities of the system classes. Properly detailed class diagrams can be directly translated into physical (program code) form. In addition, correctly developed class diagrams can guide the software engineering process, as well as provide detailed system documentation.4 Object models and diagrams represent specific occurrences or instances of class diagrams, and as such are generally seen as more concrete than the more abstract class diagrams. Component diagrams depict the different parts of the software that constitute a system. This would include the interfaces of and between the components as well as their interrelationships. Ambler3 and Booch, Rumbaugh, and Jacobson2 defined component diagrams as class diagrams at a more abstract level. Deployment diagrams can also be seen as a special case of class diagrams. In this case, the diagram models how the runtime processing units are connected and work together. The primary difference between compo-
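The claim that a properly detailed class diagram translates directly into program code can be illustrated with a fragment of Exhibit 2. The Python sketch below is an illustrative translation, not the system's actual implementation: the class names, attributes, and multiplicities (an RFQ contains one or more Sites, each composed of one or more SiteLines) follow the diagram, while the arithmetic in the quotation calculation is an assumption about the intended behavior.

```python
# Illustrative translation of part of the Exhibit 2 class diagram.
# Names and multiplicities follow the diagram; the overhead/tax
# arithmetic is an assumption made for the example.

class SiteLine:
    def __init__(self, number, description, amount):
        self.number = number
        self.description = description
        self.amount = amount

class Site:
    def __init__(self, site_number, site_name):
        self.site_number = site_number
        self.site_name = site_name
        self.site_lines = []          # a Site is composed of 1..* SiteLines

    def add_site_line(self, line):
        self.site_lines.append(line)

    def total_site_amount(self):
        return sum(line.amount for line in self.site_lines)

class RFQ:
    def __init__(self, number, customer_id, overhead=0.0, tax=0.0):
        self.number = number
        self.customer_id = customer_id
        self.overhead = overhead
        self.tax = tax
        self.sites = []               # an RFQ contains 1..* Sites

    def add_site(self, site):
        self.sites.append(site)

    def calculate_quotation(self):
        base = sum(site.total_site_amount() for site in self.sites)
        return base * (1 + self.overhead + self.tax)

rfq = RFQ("RFQ-100", "CUST-7", overhead=0.10, tax=0.05)
site = Site(1, "Main plant")
site.add_site_line(SiteLine(1, "Electrical", 1000.0))
site.add_site_line(SiteLine(2, "Plumbing", 500.0))
rfq.add_site(site)
print(round(rfq.calculate_quotation(), 2))  # 1725.0
```

Note how little interpretation the translation requires: each diagram class becomes a code class, each attribute a field, each operation a method, and each association a collection, which is precisely why detailed class diagrams double as implementation blueprints and documentation.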
customer registration <