Validity, Reliability, and Significance: Empirical Methods for NLP and Data Science
Title: Validity, Reliability, and Significance: Empirical Methods for NLP and Data Science Authors: Stefan Riezler, Michael Hagmann Publisher: Morgan & Claypool Year: 2022 Pages: 165 Language: English Format: PDF (true) Size: 10.2 MB
Empirical methods are means of answering methodological questions of empirical sciences by statistical techniques. The methodological questions addressed in this book include the problems of validity, reliability, and significance. In the case of Machine Learning, these correspond to the questions of whether a model predicts what it purports to predict, whether a model's performance is consistent across replications, and whether a performance difference between two models is due to chance, respectively. The goal of this book is to answer these questions with concrete statistical tests that can be applied to assess validity, reliability, and significance of data annotation and Machine Learning (ML) prediction in the fields of natural language processing (NLP) and Data Science.
Our focus is on model-based empirical methods, where data annotations and model predictions are treated as training data for interpretable probabilistic models from the well-understood families of generalized additive models (GAMs) and linear mixed effects models (LMEMs). Based on the interpretable parameters of the trained GAMs or LMEMs, the book presents model-based statistical tests, such as a validity test that allows detecting circular features that circumvent learning. Furthermore, the book discusses a reliability coefficient using variance decomposition based on the random effect parameters of LMEMs. Finally, a significance test based on the likelihood ratio of nested LMEMs trained on the performance scores of two Machine Learning models is shown to naturally allow the inclusion of variations in meta-parameter settings into hypothesis testing, and further facilitates a refined system comparison conditional on properties of the input data.
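The likelihood-ratio test over nested LMEMs described above can be illustrated with a minimal sketch. Note that this is an assumption-laden illustration, not the book's own implementation (the book's accompanying code is in R): the simulated per-item scores of systems "A" and "B" and the effect sizes are invented here purely for demonstration. The full model explains each score by a fixed system effect plus a random per-item intercept; the null model drops the system effect, and twice the log-likelihood difference is compared against a chi-squared distribution with one degree of freedom.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from scipy.stats import chi2

# Simulated per-item evaluation scores for two systems (hypothetical data).
rng = np.random.default_rng(0)
n_items = 50
item_effect = rng.normal(0.0, 0.05, n_items)  # shared per-item difficulty

rows = []
for sys_name, sys_mean in [("A", 0.70), ("B", 0.74)]:
    for i in range(n_items):
        rows.append({"system": sys_name, "item": i,
                     "score": sys_mean + item_effect[i] + rng.normal(0.0, 0.02)})
df = pd.DataFrame(rows)

# Full LMEM: score ~ system (fixed) + random intercept per item.
full = smf.mixedlm("score ~ system", df, groups=df["item"]).fit(reml=False)
# Nested null LMEM: no system effect.
null = smf.mixedlm("score ~ 1", df, groups=df["item"]).fit(reml=False)

# Likelihood-ratio statistic and p-value (1 df: the dropped system coefficient).
lr = 2 * (full.llf - null.llf)
p = chi2.sf(lr, df=1)
print(f"LR = {lr:.2f}, p = {p:.2e}")
```

Fitting by maximum likelihood (`reml=False`) rather than REML is required here, since REML likelihoods of models with different fixed effects are not comparable.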
Machine Learning is a research field that has been explored for several decades, and recently has begun to affect many areas of modern life under the reinvigorated label of artificial intelligence. The goal of Machine Learning can be described as learning a mathematical function to make predictions on unseen test data, based on given training data, without explicitly programmed instructions on how to perform the task. The methods employed for learning functional relationships between inputs and outputs heavily build on methods of mathematical optimization. While optimization problems are formalized as minimization of empirical risk functions on given training data, the important twist in Machine Learning is that it aims to optimize prediction performance in expectation, thus enabling generalization to unseen test data. The development and analysis of techniques for generalization is the topic of the dedicated sub-field of statistical learning theory. Statistical learning theory can be seen as the methodological basis of Machine Learning, and central concepts of statistical learning theory have been compared to Popper's ideas of falsifiability of a scientific theory.
Let us contrast this proposition with the practical workflow of a machine learning researcher conducting empirical research in natural language processing (NLP) and Data Science. Most empirical research in these areas follows the paradigm of adopting or establishing a set of input representations and output labels that are split into portions for training, development, and testing. The data in these splits are assumed to represent independent samples from an identical distribution (so-called i.i.d. samples). Where this does not hold, data in the splits are made i.i.d. artificially, e.g., by shuffling data at random between splits or by experience replay. The i.i.d. assumption is crucial for the consistency guarantees from statistical learning theory to apply. Furthermore, it can be seen as an acknowledgment of basic principles of experimental control by a randomized experimental design. A typical NLP or Data Science project then starts with optimizing the parameters of a Machine Learning model on given training data, tuning meta-parameters on development data, and ends with testing the model using a standard automatic evaluation metric on benchmark test data. We call this scheme of a Machine Learning process the train-dev-test paradigm of NLP and Data Science.
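The random-shuffling step that underlies the train-dev-test paradigm can be sketched in a few lines of Python. The helper below is hypothetical (it does not come from the book): it permutes the data once and carves off development and test portions, so that each split approximates an i.i.d. sample from the same distribution.

```python
import numpy as np

def train_dev_test_split(data, dev_frac=0.1, test_frac=0.1, seed=0):
    """Shuffle data and split into train/dev/test portions.

    A single random permutation makes the three splits (approximately)
    i.i.d. samples from the same underlying distribution.
    """
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(data))          # random reordering of indices
    n_dev = int(len(data) * dev_frac)
    n_test = int(len(data) * test_frac)
    dev = [data[i] for i in idx[:n_dev]]
    test = [data[i] for i in idx[n_dev:n_dev + n_test]]
    train = [data[i] for i in idx[n_dev + n_test:]]
    return train, dev, test

train, dev, test = train_dev_test_split(list(range(100)))
print(len(train), len(dev), len(test))  # 80 10 10
```

Fixing the seed keeps the split reproducible across runs, which matters when the same splits must be reused for meta-parameter tuning and final evaluation.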
This book can be used as an introduction to empirical methods for machine learning in general, with a special focus on applications in NLP and Data Science. The book is self-contained, with an appendix on the mathematical background on GAMs and LMEMs, and with an accompanying webpage including R code to replicate experiments presented in the book.