Explainable AI: Interpreting, Explaining and Visualizing Deep Learning

Bibliographic Details
Corporate author: SpringerLink (Online service)
Contributor(s): Samek, Wojciech (ed.)
Special collection: e-book
Format: Book
Language: English
Published: Cham : Springer International Publishing, 2019
Series: Lecture Notes in Artificial Intelligence ; 11700
Subjects: computer vision -- informatics; machine vision -- informatics; neural networks -- artificial intelligence; fuzzy systems; image processing; computer security
Online access: http://doi.org/10.1007/978-3-030-28954-6
id opac-EUL01-001013695
collection e-book
institution L_042
EUL01
spelling Explainable AI: Interpreting, Explaining and Visualizing Deep Learning edited by Wojciech Samek [et al.]
Cham Springer International Publishing 2019
XI, 439 p. : online resource
text txt rdacontent
computer c rdamedia
remote access cr rdacarrier
text file PDF rda
Lecture Notes in Artificial Intelligence 11700
Towards Explainable Artificial Intelligence -- Transparency: Motivations and Challenges -- Interpretability in Intelligent Systems: A New Concept? -- Understanding Neural Networks via Feature Visualization: A Survey -- Interpretable Text-to-Image Synthesis with Hierarchical Semantic Layout Generation -- Unsupervised Discrete Representation Learning -- Towards Reverse-Engineering Black-Box Neural Networks -- Explanations for Attributing Deep Neural Network Predictions -- Gradient-Based Attribution Methods -- Layer-Wise Relevance Propagation: An Overview -- Explaining and Interpreting LSTMs -- Comparing the Interpretability of Deep Networks via Network Dissection -- Gradient-Based vs. Propagation-Based Explanations: An Axiomatic Comparison -- The (Un)reliability of Saliency Methods -- Visual Scene Understanding for Autonomous Driving Using Semantic Segmentation -- Understanding Patch-Based Learning of Video Data by Explaining Predictions -- Quantum-Chemical Insights from Interpretable Atomistic Neural Networks -- Interpretable Deep Learning in Drug Discovery -- Neural Hydrology: Interpreting LSTMs in Hydrology -- Feature Fallacy: Complications with Interpreting Linear Decoding Weights in fMRI -- Current Advances in Neural Decoding -- Software and Application Patterns for Explanation Methods.
The development of “intelligent” systems that can make decisions and act autonomously might lead to faster and more consistent decisions. A limiting factor in the broader adoption of AI technology, however, is the inherent risk that comes with ceding human control and oversight to “intelligent” machines. For sensitive tasks involving critical infrastructures or affecting human well-being or health, it is crucial to limit the possibility of improper, non-robust and unsafe decisions and actions. Before deploying an AI system, we see a strong need to validate its behavior, and thus establish guarantees that it will continue to perform as expected when deployed in a real-world environment. In pursuit of that objective, ways for humans to verify the agreement between the AI decision structure and their own ground-truth knowledge have been explored. Explainable AI (XAI) has developed as a subfield of AI focused on exposing complex AI models to humans in a systematic and interpretable manner. The 22 chapters in this book provide a timely snapshot of algorithms, theory, and applications of interpretable and explainable AI techniques that have been proposed recently, reflecting the current discourse in the field and providing directions for future development. The book is organized into six parts: towards AI transparency; methods for interpreting AI systems; explaining the decisions of AI systems; evaluating interpretability and explanations; applications of explainable AI; and software for explainable AI.
Print edition: ISBN 9783030289539
Print edition: ISBN 9783030289553
E-books are accessible online within the entire ELTE IP range.
e-book
book
computer vision -- informatics EUL10000467497 Y
machine vision -- informatics EUL10001008247 Y
neural networks -- artificial intelligence EUL10000965239 Y
fuzzy systems EUL10000250526 Y
image processing EUL10000446595 Y
computer security EUL10000488943 Y
Artificial intelligence. EUL10000183324 Y
Optical data processing.
Computers. EUL10000049752 Y
Computer security. EUL10000344503 Y
Computer organization. EUL10000373746 Y
electronic book
Samek, Wojciech, ed. EUL10001092284 Y
SpringerLink (Online service) issuing body
Lecture Notes in Artificial Intelligence
Online version http://doi.org/10.1007/978-3-030-28954-6
Cham Springer International Publishing Imprint: Springer 2019
EUL01
language English
format Book
author2 Samek, Wojciech, ed.
title Explainable AI: Interpreting, Explaining and Visualizing Deep Learning
publishDate 2019
isbn 978-3-030-28954-6
callnumber-first Q - Science
callnumber-subject Q - General Science
callnumber-label Q334-342
illustrated Illustrated
dewey-hundreds 000 - Computer science, information & general works
dewey-tens 000 - Computer science, knowledge & systems
dewey-ones 006 - Special computer methods
dewey-full 006.3