Robustness and explainability of Artificial Intelligence: from technical to policy solutions

Author(s):
Hamon, Ronan | Junklewitz, Henrik | Sánchez, Ignacio
European Commission

Series: JRC Technical Reports ; 30040
Publisher: Luxembourg : Publications Office of the European Union, 2020
Description: 34 p. : ill. ; 1 PDF document
Content type: text (visual)
Media type: electronic
Carrier type: online resource
ISBN: 978-92-76-14660-5 (online)
ISSN: 1831-9424 (online)
Subject(s): Digital enabling technologies | artificial intelligence | data protection | information security | regulation of telecommunications
Online resources: Access the document
Summary: In the light of recent advances in artificial intelligence (AI), the serious negative consequences of its use for EU citizens and organisations have led to multiple initiatives from the European Commission to set up the principles of a trustworthy and secure AI. Among the identified requirements, the concepts of robustness and explainability of AI systems have emerged as key elements for a future regulation of this technology. This Technical Report by the European Commission Joint Research Centre (JRC) aims to contribute to the establishment of a sound regulatory framework for AI by making the connection between the principles embodied in current regulations regarding the cybersecurity of digital systems and the protection of data, the policy activities concerning AI, and the technical discussions within the AI scientific community, in particular in the field of machine learning, which is largely at the origin of the recent advances in this technology. The objectives of this report are to provide a policy-oriented description of the current perspectives on AI and its implications for society, and an objective view of the current AI landscape, focusing on the aspects of robustness and explainability. This also includes a technical discussion of the current risks associated with AI in terms of security, safety and data protection, and a presentation of the scientific solutions currently under active development in the AI community to mitigate these risks. The report puts forward several policy-related considerations for the attention of policy makers with a view to establishing a set of standardisation and certification tools for AI. First, it discusses the development of methodologies to evaluate the impacts of AI on society, built on the model of the Data Protection Impact Assessments (DPIA) introduced in the General Data Protection Regulation (GDPR). Secondly, it focuses on the establishment of methodologies to assess the robustness of AI systems, adapted to their context of use. This would come along with the identification of known vulnerabilities of AI systems and the technical solutions proposed in the scientific community to address them. Finally, the aspects of transparency and explainability of AI are discussed, including explainability-by-design approaches for AI models.
Item type: Reports
Current location: Informes CDO
Collection: Digital collection
Status: Free online access, PDF
Barcode: 1000020175895


Bibliography: p. 26-31


Rights: European Union, except Figure 17 on page 80, used under a CC0 licence.
