As evaluators we are compelled to share knowledge and evidence in ways that are user-friendly, accessible, easily understood and usable by those who need the information. The great promise of AI (artificial intelligence) is that it can support the type and quality of communication that evaluators require. AI can, to my knowledge, simplify language, select appropriate images to accompany messages and, through careful editing, target the ideal recipients for specific messages. These are extremely useful tools for evaluators and researchers as well as for policymakers. Many successful businesses already apply AI to the big data made available through computer technology, online behaviour patterns and direct observation via video captured by traffic and security cameras in most cities, towns, villages, shops and institutions. AI enables them to target their marketing, to link products with potential clients and to tailor messages so that customers are more attracted to the product.

It is in this context that the African Evaluation Association (AfrEA) recently (March 2024) hosted a conference in Kigali, Rwanda, with the theme: “Technology and Innovation in Evaluation Practice in Africa. The Last Nail on the Coffin of Participatory Approaches.” The theme was provocative, and several keynote speakers addressed it directly and in a balanced way. On the one hand, audiences were encouraged to embrace the countless benefits and contributions of the judicious use of technology, be it for data collection, data analysis, sensemaking or even reporting during evaluations. On the other hand, some implored evaluators not to sacrifice the value of participatory approaches, since these continue to build ownership of programmes and a better understanding of the contribution of evaluation to development processes. Individual presenters showcased the power of new and innovative software that handled both quantitative and qualitative data with ease and created options for evaluators and policymakers. Others described how they had adapted cell phone technology to collect data in rural areas where communities were often difficult to access, and how, in situations of disaster and conflict, drone technology was relied on to gather intelligence, conduct rapid assessments and commit resources or support to the neediest environments. Throughout the conference, however, the concepts ‘information technology’ and ‘artificial intelligence’ were generally conflated rather than distinguished. As used here, IT refers to the infrastructure and systems needed to manage and process data, whereas AI is focused on creating intelligent systems that can operate autonomously and make decisions or suggestions based on algorithms.

One speaker did ask a key question: “What difference did you really make with your evaluation or research?” Beyond the evaluation reports, the graphs and figures presented as findings, and the rapid feedback and rapid responses, even where these were substantive and life-saving (food, water, medical supplies), there was a sense that the efforts were not sustainable. They were not sustainable, and they will remain unsustainable as long as evaluations are done ON communities, or research is done TO or FOR communities, and not WITH communities. He was not arguing against the use of technology or AI, but he made a convincing case that evaluation research tends to marginalise the very communities it engages, serving instead those who commission the studies, most of whom are not from the communities in Africa. He suggested that the widespread use of technology and AI has the potential to further alienate communities from the evaluation research process.

For Southern Hemisphere, discussion and engagement about the value of AI and technology in evaluation are essential to our growth as individual evaluators and researchers, and key to how we address evaluation challenges as a collective. We remain steadfast in our commitment to participatory processes because we believe they allow for better accountability on the one hand and, even more so, for the learning that happens for all participants in the evaluation process. We embrace the judicious use of technology and AI, with a warning: in an era of ‘false truths’, when we can no longer rely on yesterday’s truism that ‘seeing is believing’, we need to strengthen our evaluative thinking so that we can be more discerning about the choices and decisions we make. While using the immense power and benefits offered by technology and AI, we will continue to appeal to the human element of autonomy within a participatory process and to the authenticity that communities bring to the table. We believe that human beings are able to reflect and problematise, as opposed to merely solving problems according to an algorithm, as AI does. Our participatory approach sets the poor, the marginalised, the alienated and the voiceless at the centre of evaluation research, and we will fully explore how far technology and AI can enhance the learning in that process.
