Cultural Appropriateness of Artificial Intelligence in Health Care

Research output: Chapter in Book/Report/Conference proceeding › Book chapter › Research › peer-review

Extracts (intro):
Potential interactions between AI and economic, social and cultural rights have received less attention in the legal literature [than between AI and civil and political rights]. Yet, the reference to human rights law to achieve social justice in AI can only be fully fruitful if its interpretation is transversal and coherent. This implies going beyond the traditional and outdated "generations" of human rights to include social, economic and cultural aspects in the equation. This analysis is all the more important given that essential services such as health care increasingly rely on AI designed by private actors on a transnational scene, including large corporations such as Google or IBM, jeopardizing social and cultural rights. Medicine interacts in many ways with culture. In particular, biomedical practices such as organ donation or assisted reproduction technologies vary substantially between continental regions and between countries, and sometimes even between regions of a given nation state. Language, beliefs and religion also influence our attitudes towards illness and health professionals and, more broadly, our understandings of health and biomedicine. The disappearance of indigenous languages, for instance, threatens traditional medicinal knowledge. Therefore, the social and legal analysis of AI in healthcare cannot ignore this close link between culture and medicine when it comes to developing and applying a supposedly neutral and universal technology to peoples.
This book chapter contributes to the analysis of social justice for medical AI by exploring the interactions between the protection of culture and the development of medical AI within a human rights framework. It builds on an anthropological definition of culture, as something transmitted or built in contact with other groups, rather than on culture as a creative process. This chapter considers shared representations and normative practices as part of a non-static understanding of culture. […]
Culture is not only an individual and collective right. It also participates in the effectiveness of other human rights, which depends on the acceptability of the measures that States take in order to implement such rights. […]
Within the framework of human rights and their underlying principles of interdependence and universalism, achieving cultural appropriateness, and therefore legitimacy, requires a process of conflict resolution between individual and collective cultural rights on the one hand, and between different collective cultural rights on the other. Cultural rights aim to ensure the concrete effectiveness of rights; they cannot lead to cultural relativism or justify practices contrary to those rights.
Faced with the transnational development of AI, exploring the cultural appropriateness of health algorithms is therefore necessary both from a substantive perspective (it questions the claim of a neutral and uniform technology) and from a procedural perspective (it contributes to the framing of an AI developed by multiple actors, including multinational companies with predatory practices). This concept may contribute to the sustainability of AI, reconciling legitimate collective and individual interests. […]
A culturally appropriate medical AI calls for both procedural and substantive inclusion of culture in the AI development process. First, cultural appropriateness requires States to guarantee a procedural right to participation in health decision-making, which implies the consultation of concerned peoples and the representation of relevant cultural factors in the implementation of digital health solutions (1). Second, cultural appropriateness demands the adaptation of AI through the incorporation of concerned populations' relevant and legitimate cultural values and perceptions into the algorithms through active design choices (2).

Original language: English
Title of host publication: Les intelligences artificielles au prisme de la justice sociale / Considering Artificial Intelligence Through the Lens of Social Justice
Editors: Karine Gentelet
Publisher: Presses de l'Université Laval
Publication date: 2023
ISBN (Print): 9782766301843
ISBN (Electronic): 9782766301850
Publication status: Published - 2023
Series: Éthique, IA et sociétés – OBVIA

Research areas

  • Faculty of Law - Cultural rights, Artificial intelligence, Biomedicine, Human rights, Intangible cultural heritage, Health

ID: 317812505