" "









Explainable Neural Networks Based on Fuzzy Logic and Multi-criteria Decision Tools

Author: Jozsef Dombi, Orsolya Csiszar
Publisher: Springer
Year: 2021
Pages: 186
Format: pdf (true), epub
Size: 28.5 MB

The research presented in this book shows how combining deep neural networks with a special class of fuzzy logical rules and multi-criteria decision tools can make the networks more interpretable and, in many cases, more efficient. Based on their common theoretical basis, we propose a consistent framework for modeling human thinking that uses the tools of all three fields: fuzzy logic, multi-criteria decision-making, and deep learning (DL). The aim is to reduce the black-box nature of neural models, a challenge of vital importance to the whole research community.
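To give a concrete flavor of continuous-valued logical and aggregation operators of the kind mentioned above, here is a minimal Python sketch of the Dombi t-norm (a parametric fuzzy conjunction) and the aggregative operator. This is a generic illustration of these well-known operator families, not necessarily the exact operator class developed in the book:

```python
# Dombi t-norm: a parametric fuzzy conjunction on [0, 1].
def dombi_tnorm(x, y, lam=1.0):
    # By convention the t-norm is 0 if either argument is 0.
    if x == 0.0 or y == 0.0:
        return 0.0
    s = ((1.0 - x) / x) ** lam + ((1.0 - y) / y) ** lam
    return 1.0 / (1.0 + s ** (1.0 / lam))

# Aggregative operator: a representable uninorm with neutral
# element 0.5 (undefined on the pair {0, 1}, where both the
# numerator and the denominator vanish).
def aggregative(x, y):
    return (x * y) / (x * y + (1.0 - x) * (1.0 - y))
```

For example, `dombi_tnorm(0.4, 1.0)` returns 0.4, since 1 is the neutral element of a conjunction, while `aggregative(0.5, v)` returns `v` for any `v`, reflecting that 0.5 acts as a "neutral opinion" when aggregating truth values.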

This monograph presents new research results developed by the authors and their co-authors, Zsolt Gera and Gabor Csiszar, and focuses on a special class of continuous-valued logic and multi-criteria decision tools. Fuzzy logical operators together with multi-criteria decision tools, such as aggregative and preference operators, provide a powerful apparatus for modeling human thinking, and their common theoretical basis supports a consistent framework that combines both fields. Another successful field in this direction is that of artificial neural networks, which were inspired by the biological neural networks that constitute the human brain. Deep learning based on neural networks is revolutionizing the business and technology world; however, there is an increasing need to address the problems of interpretability, safety, and transparency. This challenge is closely related to the fact that, although deep neural networks have achieved impressive experimental results, especially in image classification, they have been shown to be surprisingly unstable with respect to adversarial perturbations: minimal changes to the input image may cause the network to misclassify it.
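The adversarial instability described above can be sketched with the classic one-step Fast Gradient Sign Method (FGSM) applied to a toy logistic-regression "network"; the weights and input below are made-up illustration values, not taken from the book:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, y, w, b, eps):
    """One-step FGSM against a logistic model.

    For the cross-entropy loss, the gradient of the loss with
    respect to the input x is (p - y) * w, so each input
    component is moved by eps in the direction that increases
    the loss."""
    p = sigmoid(w @ x + b)
    grad_x = (p - y) * w
    return x + eps * np.sign(grad_x)

# Made-up weights and input, for illustration only.
w = np.array([2.0, -1.0, 0.5])
b = 0.0
x = np.array([0.3, 0.1, 0.2])

p_clean = sigmoid(w @ x + b)                      # above 0.5: class 1
x_adv = fgsm_perturb(x, y=1.0, w=w, b=b, eps=0.3)
p_adv = sigmoid(w @ x_adv + b)                    # below 0.5: flipped
```

Each component of the input moves by at most 0.3, yet the predicted class flips, which is the instability phenomenon in miniature.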

Moreover, although machine learning (ML) algorithms can learn from a set of data and produce a model that can be used to solve different problems, accuracy or prediction-error values alone are insufficient, since they provide only an incomplete description of most real-world problems. The interpretability of a machine learning model gives insight into its internal functionality and explains why it suggests certain decisions. In a high-risk environment it is vital to know why a decision was made; predictive performance on a test dataset is not enough. In black-box models, little is known about which variables actually drive the final decision, and interpretations of the input-output relationship are often limited in scope and local in nature. White-box models such as linear regression and decision trees, by contrast, are significantly easier to explain and interpret, but they provide less predictive capacity and are not always capable of modeling the inherent complexity of the dataset (e.g., feature interactions).
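To make the white-box contrast concrete, here is a minimal NumPy sketch on synthetic, made-up data showing why a linear model is directly interpretable: each fitted coefficient states how much the prediction changes per unit of the corresponding feature, so the model is its own explanation.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
X = rng.normal(size=(n, 3))

# Ground-truth coefficients: feature 2 is deliberately irrelevant.
true_w = np.array([1.5, -2.0, 0.0])
y = X @ true_w + 0.1 * rng.normal(size=n)

# Ordinary least squares with an intercept column.
Xb = np.hstack([X, np.ones((n, 1))])
coef, *_ = np.linalg.lstsq(Xb, y, rcond=None)

# coef[:3] recovers approximately [1.5, -2.0, 0.0]: the fitted
# weights directly expose which features drive the prediction.
```

A black-box model fitted to the same data could predict just as well, but it would offer no analogous per-feature readout without an external explanation method.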

In this book, we offer a unified framework for logical operators and decision tools with applications in neural computation. We show how combining deep neural networks with structured logical rules and multi-criteria decision tools might help reduce the black-box nature of neural models. We strongly believe that our work is an important step toward better interpretability, transparency, and safety of neural models.


















Posted by: Ingvar16, 1-05-2021, 16:57
 








 MirKnig.Su  2021