Title: Human-Centric AI with Common Sense Author: Filip Ilievski Publisher: Springer Series: Synthesis Lectures on Computer Science Year: 2024 Pages: 146 Language: English Format: pdf (true), epub Size: 26.9 MB
This book enables readers to understand the challenges and opportunities of developing human-centered AI with commonsense reasoning abilities. Despite apparent accuracy improvements brought by large neural models across task benchmarks, common sense is still lacking. The lack of common sense affects many tasks, including story understanding, decision-making, and question answering. Commonsense knowledge and reasoning have long been considered the “dark matter” of Artificial Intelligence (AI), raising concerns about the trustworthiness and applicability of AI methods in both autonomous and hybrid applications. This book describes how to design more robust, collaborative, explainable, and responsible AI by incorporating neuro-symbolic commonsense reasoning. In addition, the book provides examples of how these properties of AI can facilitate a wide range of social-good applications in digital democracy, traffic monitoring, education, and robotics. What makes commonsense reasoning such a unique and impactful challenge? What can we learn from cognitive research when designing and developing AI systems? How can we approach building responsible, robust, collaborative, and explainable AI with common sense? And finally, what is the impact of this work on human-AI teaming? This book provides an accessible introduction to and exploration of these topics.
With the stakes of AI increasing, there is a recognition that a key requirement of human-centric AI is explainability. This recognition has inspired a range of methods for making AI explainable, either during or after its main inference process. Such explainable AI (XAI) efforts form an extensive taxonomy of approaches differing in their scope, stage, input/output format, result, and functioning. The most popular idea in these XAI methods is to localize the part of the input or the network that is most responsible for a given prediction. Because much real-world inference relies on implicit commonsense information, specialized XAI methods are needed that explain a decision by surfacing this implicit information in their explanations. This chapter discusses the challenges in developing commonsense explanation systems, including the implicit nature of common sense, alignment between the explanation and the model result, and incompleteness of explanations. Then, we describe several neuro-symbolic methods that can be mapped to popular XAI functioning categories: structure-leveraging explanations through path generation, architecture modification based on compositional reasoning, and example-based explanations via case-based reasoning.
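To make the path-generation idea concrete, here is a minimal sketch (not taken from the book) of how a relation path over a toy commonsense knowledge graph can serve as an explanation connecting a question concept to an answer concept. The triples, the `find_explanation_path` helper, and the graph layout are illustrative assumptions, loosely in the style of resources such as ConceptNet.

```python
from collections import deque

# Toy commonsense knowledge graph as (head, relation, tail) triples.
# These facts are illustrative only.
TRIPLES = [
    ("umbrella", "UsedFor", "staying dry"),
    ("rain", "Causes", "getting wet"),
    ("staying dry", "Antonym", "getting wet"),
    ("raincoat", "UsedFor", "staying dry"),
]


def build_graph(triples):
    """Index triples as an adjacency list over concepts, keeping edges traversable in both directions."""
    graph = {}
    for head, rel, tail in triples:
        graph.setdefault(head, []).append((rel, tail))
        graph.setdefault(tail, []).append((rel + "_inv", head))
    return graph


def find_explanation_path(graph, source, target):
    """Breadth-first search for a relation path linking two concepts.

    The returned path can be verbalized as a human-readable explanation
    of why a model connected `source` with `target`.
    """
    queue = deque([(source, [source])])
    visited = {source}
    while queue:
        node, path = queue.popleft()
        if node == target:
            return path
        for rel, neighbor in graph.get(node, []):
            if neighbor not in visited:
                visited.add(neighbor)
                queue.append((neighbor, path + ["-" + rel + "->", neighbor]))
    return None


if __name__ == "__main__":
    graph = build_graph(TRIPLES)
    # Why might "umbrella" be a good answer to a question about "rain"?
    print(" ".join(find_explanation_path(graph, "umbrella", "rain")))
```

Running the sketch prints a chain such as umbrella -UsedFor-> staying dry -Antonym-> getting wet -Causes_inv-> rain, which illustrates how a symbolic path can expose the implicit commonsense link behind a prediction.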
This book has several unique features that I hope will be informative and inspiring to advanced students, researchers, and practitioners in a variety of areas within Computer Science. It has a broad coverage of neural and symbolic methods for commonsense reasoning, connecting key insights from Machine Learning, natural language processing (NLP), and neuro-symbolic AI.
Often downloaded together with this publication:
Machines like Us: Toward AI with Common Sense Title: Machines like Us: Toward AI with Common Sense Author: Ronald J. Brachman and Hector J. Levesque Publisher: The MIT Press Year: 2022...
AI and Common Sense: Ambitions and Frictions Title: AI and Common Sense: Ambitions and Frictions Author: Martin W. Bauer, Bernard Schiele Publisher: Routledge Year: 2024 Pages: 286 Language:...