Foundations of Data Quality Management

Delivery time: Available within 14 days

35,30 €

Synthesis Lectures on Data Management

ISBN: 3031007646
ISBN 13: 9783031007644
Author: Fan, Wenfei / Geerts, Floris
Publisher: Springer Verlag GmbH
Extent: xv, 201 pp.
Publication date: 10.08.2012
Edition: 1/2012
Product form: Paperback
Binding: Paperback
Item number: 5885069

Description

Data quality is one of the most important problems in data management. A database system typically aims to support the creation, maintenance, and use of large amounts of data, focusing on the quantity of data. However, real-life data are often dirty: inconsistent, duplicated, inaccurate, incomplete, or stale. Dirty data in a database routinely generate misleading or biased analytical results and decisions, and lead to loss of revenue, credibility, and customers. With this comes the need for data quality management. In contrast to traditional data management tasks, data quality management enables the detection and correction of errors in the data, syntactic or semantic, in order to improve the quality of the data and hence add value to business processes. While data quality has been a problem for decades, the prevalent use of the Web has increased the risks, on an unprecedented scale, of creating and propagating dirty data.

This monograph gives an overview of fundamental issues underlying central aspects of data quality, namely data consistency, data deduplication, data accuracy, data currency, and information completeness. We promote a uniform logical framework for dealing with these issues, based on data quality rules. The text is organized into seven chapters, focusing on relational data. Chapter One introduces data quality issues. A conditional dependency theory is developed in Chapter Two for capturing data inconsistencies. It is followed by practical techniques in Chapter Three for discovering conditional dependencies, and for detecting inconsistencies and repairing data based on conditional dependencies. Matching dependencies are introduced in Chapter Four, as matching rules for data deduplication. A theory of relative information completeness is studied in Chapter Five, revising the classical Closed World Assumption and the Open World Assumption to characterize incomplete information in the real world.
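As a rough illustration of the kind of rule the book's conditional dependency theory formalizes, the following sketch checks a hypothetical conditional functional dependency: within country "UK", records that agree on "zip" must also agree on "city". The data, rule, and function names are invented for illustration and are not taken from the book.

```python
# Hypothetical sketch: detecting violations of a conditional functional
# dependency (CFD). The rule says: among rows matching the condition
# (country = "UK"), rows that agree on "zip" must agree on "city".
from collections import defaultdict

rows = [
    {"country": "UK", "zip": "EH8 9AB", "city": "Edinburgh"},
    {"country": "UK", "zip": "EH8 9AB", "city": "London"},    # violates the rule
    {"country": "US", "zip": "10001",   "city": "New York"},  # condition doesn't apply
]

def cfd_violations(rows, condition, lhs, rhs):
    """Return pairs of rows that satisfy `condition` and agree on the
    `lhs` attributes but disagree on some `rhs` attribute."""
    groups = defaultdict(list)
    for r in rows:
        if all(r[k] == v for k, v in condition.items()):
            groups[tuple(r[a] for a in lhs)].append(r)
    violations = []
    for group in groups.values():
        for i in range(len(group)):
            for j in range(i + 1, len(group)):
                if any(group[i][b] != group[j][b] for b in rhs):
                    violations.append((group[i], group[j]))
    return violations

violations = cfd_violations(rows, {"country": "UK"}, ["zip"], ["city"])
print(len(violations))  # one violating pair: Edinburgh vs. London
```

Detecting violations like this is only the first step; the book's Chapter Three also treats the harder problem of repairing the data once violations are found.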
A data currency model is presented in Chapter Six, to identify the current values of entities in a database and to answer queries with the current values, in the absence of reliable timestamps. Finally, interactions between these data quality issues are explored in Chapter Seven. Important theoretical results and practical algorithms are covered, but formal proofs are omitted. The bibliographical notes contain pointers to the papers in which the results were presented and proven, as well as references to materials for further reading. This text is intended for a seminar course at the graduate level. It also serves as a useful resource for researchers and practitioners interested in the study of data quality. The fundamental research on data quality draws on several areas, including mathematical logic, computational complexity, and database theory. It has raised as many questions as it has answered, and is a rich source of questions and vitality.

Table of Contents: Data Quality: An Overview / Conditional Dependencies / Cleaning Data with Conditional Dependencies / Data Deduplication / Information Completeness / Data Currency / Interactions between Data Quality Issues

About the Authors

Wenfei Fan is the (Chair) Professor of Web Data Management in the School of Informatics, University of Edinburgh, UK. He is a Fellow of the Royal Society of Edinburgh, UK, a National Professor of the 1000-Talent Program, and a Yangtze River Scholar, China. He received his Ph.D. from the University of Pennsylvania, U.S.A., and his M.S. and B.S. from Peking University, China. He is a recipient of the Alberto O. Mendelzon Test-of-Time Award of ACM PODS 2010, the Best Paper Award for VLDB 2010, the Roger Needham Award in 2008 (UK), the Best Paper Award for IEEE ICDE 2007, the Outstanding Overseas Young Scholar Award in 2003 (China), the Best Paper of the Year Award for Computer Networks in 2002, and the Career Award in 2001 (USA). His current research interests include database theory and systems, in particular data quality, data integration, database security, distributed query processing, query languages, social network analysis, Web services, and XML. Floris Geerts is a Research Professor in the Department of Mathematics and Computer Science, University of Antwerp, Belgium. Before that, he held a senior research fellow position in the database group at the University of Edinburgh, UK, and a postdoctoral research position in the data mining group at the University of Helsinki, Finland. He received his Ph.D. in 2001 from the University of Hasselt, Belgium. His research interests include the theory and practice of databases, and the study of data quality in particular. He is a recipient of the Best Paper Awards for IEEE ICDM 2001 and IEEE ICDE 2007.
