Cultural Appropriateness of Artificial Intelligence in Health Care

Research output: Chapter in Book/Report/Conference proceeding › Book chapter › Research › peer-review

Standard

Cultural Appropriateness of Artificial Intelligence in Health Care. / Lebret, Audrey.

Les intelligences artificielles au prisme de la justice sociale / Considering Artificial Intelligence Through the Lens of Social Justice. ed. / Karine Gentelet. Presses de l'Université Laval, 2023. p. 37-56 (Éthique, IA et sociétés – OBVIA).

Research output: Chapter in Book/Report/Conference proceeding › Book chapter › Research › peer-review

Harvard

Lebret, A 2023, Cultural Appropriateness of Artificial Intelligence in Health Care. in K Gentelet (ed.), Les intelligences artificielles au prisme de la justice sociale / Considering Artificial Intelligence Through the Lens of Social Justice. Presses de l'Université Laval, Éthique, IA et sociétés – OBVIA, pp. 37-56.

APA

Lebret, A. (2023). Cultural Appropriateness of Artificial Intelligence in Health Care. In K. Gentelet (Ed.), Les intelligences artificielles au prisme de la justice sociale / Considering Artificial Intelligence Through the Lens of Social Justice (pp. 37-56). Presses de l'Université Laval. Éthique, IA et sociétés – OBVIA.

Vancouver

Lebret A. Cultural Appropriateness of Artificial Intelligence in Health Care. In Gentelet K, editor, Les intelligences artificielles au prisme de la justice sociale / Considering Artificial Intelligence Through the Lens of Social Justice. Presses de l'Université Laval. 2023. p. 37-56. (Éthique, IA et sociétés – OBVIA).

Author

Lebret, Audrey. / Cultural Appropriateness of Artificial Intelligence in Health Care. Les intelligences artificielles au prisme de la justice sociale / Considering Artificial Intelligence Through the Lens of Social Justice. editor / Karine Gentelet. Presses de l'Université Laval, 2023. pp. 37-56 (Éthique, IA et sociétés – OBVIA).

Bibtex

@inbook{67cb54c4db50451cb451c8e999211263,
title = "Cultural Appropriateness of Artificial Intelligence in Health Care",
abstract = "Extracts (intro): Potential interactions between AI and economic, social and cultural rights have received less attention in the legal literature [than between AI and civil and political rights]. Yet the reference to human rights law to achieve social justice in AI can only be fully fruitful if its interpretation is transversal and coherent. This implies going beyond the traditional and outdated ``generations'' of human rights to include social, economic and cultural aspects in the equation. This analysis is all the more important as essential services such as health care increasingly rely on AI designed by private actors on a transnational scene, including big corporations like Google or IBM, jeopardizing social and cultural rights. Medicine interacts with culture in many ways. In particular, biomedical practices such as organ donation or assisted reproduction technologies vary substantially between continental regions and countries, and sometimes between regions of a single nation state. Language, beliefs and religion also influence our attitudes towards illness and health professionals and, more broadly, our understandings of health and biomedicine. The disappearance of indigenous languages, for instance, threatens traditional medicinal knowledge. Therefore, the social and legal analysis of AI in healthcare cannot ignore this close link between culture and medicine when it comes to developing and applying a supposedly neutral and universal technology to peoples. This book chapter contributes to the analysis of social justice for medical AI by exploring the interactions between the protection of culture and the development of medical AI within a human rights framework. It builds on an anthropological definition of culture, transmitted or built in contact with other groups, rather than on culture as a creative process. This chapter considers shared representations and normative practices as part of a non-static understanding of culture.
[…] Culture is not only an individual and collective right. It also contributes to the effectiveness of other human rights, which depends on the acceptability of the measures that States take to implement those rights. […] Within the framework of human rights and their underlying principles of interdependence and universalism, achieving cultural appropriateness, and therefore legitimacy, requires a process of conflict resolution between individual and collective cultural rights on the one hand, and between different collective cultural rights on the other. Cultural rights aim to ensure the concrete effectiveness of rights and cannot lead to cultural relativism or the justification of practices contrary to rights. Faced with the transnational development of AI, exploring the cultural appropriateness of health algorithms is therefore necessary both from a substantive perspective (it questions the claim of a neutral and uniform technology) and from a procedural perspective (it contributes to the framing of an AI developed by multiple actors, including multinational companies with predatory practices). This concept may contribute to the sustainability of AI, reconciling legitimate collective and individual interests. […] A culturally appropriate medical AI would call for both procedural and substantive inclusion of culture in the AI development process. First, cultural appropriateness requires States to guarantee a procedural right to participation in health decision-making, which implies the consultation of concerned peoples and the representation of relevant cultural factors in the implementation of digital health solutions (1). Second, cultural appropriateness demands the adaptation of AI by incorporating concerned populations{\textquoteright} relevant and legitimate cultural values and perceptions into the algorithms through active design choices (2).",
keywords = "Faculty of Law, Cultural Rights, Artificial intelligence, biomedicine, Human Rights, Immaterial cultural heritage, health",
author = "Audrey Lebret",
year = "2023",
language = "English",
isbn = "9782766301843",
series = "{\'E}thique, IA et soci{\'e}t{\'e}s – OBVIA",
pages = "37--56",
editor = "Karine Gentelet",
booktitle = "Les intelligences artificielles au prisme de la justice sociale / Considering Artificial Intelligence Through the Lens of Social Justice",
publisher = "Presses de l'Universit{\'e} Laval",

}

RIS

TY - CHAP

T1 - Cultural Appropriateness of Artificial Intelligence in Health Care

AU - Lebret, Audrey

PY - 2023

Y1 - 2023

N2 - Extracts (intro): Potential interactions between AI and economic, social and cultural rights have received less attention in the legal literature [than between AI and civil and political rights]. Yet the reference to human rights law to achieve social justice in AI can only be fully fruitful if its interpretation is transversal and coherent. This implies going beyond the traditional and outdated "generations" of human rights to include social, economic and cultural aspects in the equation. This analysis is all the more important as essential services such as health care increasingly rely on AI designed by private actors on a transnational scene, including big corporations like Google or IBM, jeopardizing social and cultural rights. Medicine interacts with culture in many ways. In particular, biomedical practices such as organ donation or assisted reproduction technologies vary substantially between continental regions and countries, and sometimes between regions of a single nation state. Language, beliefs and religion also influence our attitudes towards illness and health professionals and, more broadly, our understandings of health and biomedicine. The disappearance of indigenous languages, for instance, threatens traditional medicinal knowledge. Therefore, the social and legal analysis of AI in healthcare cannot ignore this close link between culture and medicine when it comes to developing and applying a supposedly neutral and universal technology to peoples. This book chapter contributes to the analysis of social justice for medical AI by exploring the interactions between the protection of culture and the development of medical AI within a human rights framework. It builds on an anthropological definition of culture, transmitted or built in contact with other groups, rather than on culture as a creative process. This chapter considers shared representations and normative practices as part of a non-static understanding of culture. […]
Culture is not only an individual and collective right. It also contributes to the effectiveness of other human rights, which depends on the acceptability of the measures that States take to implement those rights. […] Within the framework of human rights and their underlying principles of interdependence and universalism, achieving cultural appropriateness, and therefore legitimacy, requires a process of conflict resolution between individual and collective cultural rights on the one hand, and between different collective cultural rights on the other. Cultural rights aim to ensure the concrete effectiveness of rights and cannot lead to cultural relativism or the justification of practices contrary to rights. Faced with the transnational development of AI, exploring the cultural appropriateness of health algorithms is therefore necessary both from a substantive perspective (it questions the claim of a neutral and uniform technology) and from a procedural perspective (it contributes to the framing of an AI developed by multiple actors, including multinational companies with predatory practices). This concept may contribute to the sustainability of AI, reconciling legitimate collective and individual interests. […] A culturally appropriate medical AI would call for both procedural and substantive inclusion of culture in the AI development process. First, cultural appropriateness requires States to guarantee a procedural right to participation in health decision-making, which implies the consultation of concerned peoples and the representation of relevant cultural factors in the implementation of digital health solutions (1). Second, cultural appropriateness demands the adaptation of AI by incorporating concerned populations' relevant and legitimate cultural values and perceptions into the algorithms through active design choices (2).

KW - Faculty of Law

KW - Cultural Rights

KW - Artificial intelligence

KW - biomedicine

KW - Human Rights

KW - Immaterial cultural heritage

KW - health

UR - https://r.cantook.com/enqc/sample/aHR0cHM6Ly93d3cuZW50cmVwb3RudW1lcmlxdWUuY29tL3NhbXBsZS8xMzI3MDAvd2ViX3JlYWRlcl9tYW5pZmVzdD9mb3JtYXRfbmF0dXJlPXBkZiZzaWdpZD0xNjg3OTIzNjkyJnNpZ25hdHVyZT0wNDAxZDliMTA5YjRjNzYxZTIwZDY4Y2JkYmQ5N2NjZTQ3YzdjNGU2YjU4NDc1NjBhY2Y1NzM1MjQ5MTNkNWMw

M3 - Book chapter

SN - 9782766301843

T3 - Éthique, IA et sociétés – OBVIA

SP - 37

EP - 56

BT - Les intelligences artificielles au prisme de la justice sociale / Considering Artificial Intelligence Through the Lens of Social Justice

A2 - Gentelet, Karine

PB - Presses de l'Université Laval

ER -

ID: 317812505