At the seventh annual United Nations Behavioral Science Week, held earlier this month, experts gathered to explore how behavioural science, coupled with emerging technologies, can address pressing global humanitarian challenges. The UN Behavioral Science Group convenes the event to bring together researchers and practitioners from fields as diverse as healthcare, education, finance, and peacekeeping, enabling dialogue on practical solutions grounded in behavioural science.

This year’s event placed a strong emphasis on the role of technology, particularly artificial intelligence (AI), in enhancing the effectiveness of global development and aid initiatives. Representatives from organisations such as UNICEF and the World Bank engaged with academic experts to discuss the intersection of behavioural science, data science, and AI, demonstrating how behavioural insights can inform the design and deployment of technology in challenging humanitarian contexts.

One highlighted application involved the use of digital assistants to augment the capacity of healthcare personnel in regions where qualified professionals are scarce. Stanford economist Susan Athey presented research conducted at a hospital in Cameroon, where AI-powered digital tools were integrated into consultations to support nurses in discussing contraception options with patients. The technology provided structured guidance and personalised recommendations tailored to each woman’s needs and preferences. As a result, uptake of long-acting reversible contraceptives such as intrauterine devices and implants tripled. Given that the World Health Organization identifies pregnancy-related complications as a leading cause of death among adolescent girls in Sub-Saharan Africa, scaling such technology-enhanced interventions could substantially improve maternal health outcomes. Athey summarised the potential: “AI has the potential to augment humans… AI and digital technology can help make those people more effective.”

Another session explored how agent-based modelling enables aid agencies to better anticipate and meet the needs of refugee populations displaced by conflicts or climate events. Rebeca Moreno Jimenez, an innovation officer and data scientist at the UN Refugee Agency (UNHCR), discussed how her team builds simulations from datasets that combine internal UNHCR data with external sources. These models predict the likely behaviours and needs of refugees when they arrive in host locations or return home after displacement. Current modelling efforts, for example, focus on refugees returning to Ukraine amid ongoing conflict, factoring in sociodemographic variables such as family ties and property ownership to forecast resource requirements. Similar models have been used to manage COVID-19 risks in overcrowded camps in Bangladesh and to anticipate the movements of internally displaced Somalis. By iteratively updating the models with real-time data, UNHCR aims to optimise the distribution of aid, ensuring the right resources reach the right places at the right times.
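UNHCR’s production models draw on rich real-world data and are far more sophisticated than anything shown here. Purely as an illustration of the agent-based approach described above — individual agents with sociodemographic attributes, a per-step decision rule, and aggregate outputs that proxy resource demand — a toy model might look like the following sketch, in which every attribute, probability, and population figure is invented:

```python
import random


class Refugee:
    """Toy agent carrying the kinds of sociodemographic variables the models factor in."""

    def __init__(self, has_family_ties, owns_property):
        self.has_family_ties = has_family_ties
        self.owns_property = owns_property
        self.returned = False

    def maybe_return(self, rng):
        """Illustrative rule: family ties and property ownership raise the chance of returning."""
        if self.returned:
            return
        p = 0.05 + 0.30 * self.has_family_ties + 0.30 * self.owns_property
        if rng.random() < p:
            self.returned = True


def simulate(agents, steps, seed=0):
    """Advance the model `steps` time steps; return new returnees per step,
    a crude proxy for the resources needed at the destination each period."""
    rng = random.Random(seed)
    new_returns = []
    for _ in range(steps):
        before = sum(a.returned for a in agents)
        for a in agents:
            a.maybe_return(rng)
        new_returns.append(sum(a.returned for a in agents) - before)
    return new_returns


# Hypothetical population: attribute frequencies are made up for illustration.
pop_rng = random.Random(1)
agents = [Refugee(pop_rng.random() < 0.5, pop_rng.random() < 0.3) for _ in range(1000)]
per_step = simulate(agents, steps=12)
print(per_step)  # forecast of new returnees in each of 12 periods
```

In a real deployment, the decision rule and population attributes would be calibrated against observed data and re-run as fresh data arrive — the iterative updating Moreno Jimenez described.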

The conference also addressed the challenges posed when technological solutions are developed without adequate consideration of cultural context. Anna Korhonen, a linguist and co-director of the Centre of Human-Inspired Artificial Intelligence at the University of Cambridge, emphasised that AI systems, despite their technical sophistication, often fail to understand or adapt to diverse cultures, motivations, emotions, and social norms. This misalignment can result in technologies that are socially inappropriate or ineffective. A case study presented by Michelle Dugas, a behavioural scientist at the World Bank’s Mind, Behavior and Development Unit, illustrated this issue through the deployment of AI tutors powered by ChatGPT. When the AI was prompted in English to generate practice problems for hypothetical students, it assigned harder tasks to female students. However, when prompted in Hindi, the AI assigned more challenging problems to male students. Such discrepancies could have significant implications if these AI tutors were deployed at scale in countries like India, where gender disparities in education persist. Korhonen noted, “Without human grounding, AI can still work in many simple and low-risk settings… But it will often fall short in high-stakes areas like decision-making, policy, and justice, especially when we bring it into socially complex contexts.”
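Dugas did not describe her test harness, but the kind of discrepancy she reported is typically surfaced with a simple audit loop: query the model repeatedly with student names that signal group membership, and tally which group receives the harder problems. A minimal sketch, with a hypothetical stub standing in for the real AI tutor (the names and stub behaviour are invented for illustration):

```python
from collections import Counter


def audit_difficulty(assign_fn, students_by_group):
    """Count how often each group is assigned the harder problem by `assign_fn`."""
    counts = Counter({group: 0 for group in students_by_group})
    for group, names in students_by_group.items():
        for name in names:
            if assign_fn(name) == "hard":
                counts[group] += 1
    return counts


# Stand-in for a real model call (e.g. prompting an AI tutor for a named student);
# this stub is hypothetical and exists only to show the shape of the audit.
def biased_stub(name):
    return "hard" if name in {"Priya", "Asha"} else "easy"


result = audit_difficulty(
    biased_stub,
    {"female": ["Priya", "Asha"], "male": ["Ravi", "Arun"]},
)
print(result)  # a large skew between groups is the red flag an audit looks for
```

Running the same audit under different prompt languages, as the World Bank team did with English and Hindi, is what exposed the inconsistency.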

Overall, the discussions at UN Behavioral Science Week highlighted the promising potential of integrating behavioural science with AI and digital technologies to improve humanitarian outcomes, while cautioning about the limitations and cultural considerations necessary for such innovations to be truly effective around the world.

Source: Noah Wire Services