Will artificial intelligence (AI), a technological invention, single-handedly decide the future course of human history? This has become a subject of intense discussion in recent weeks.
Professor Santiago Zabala and senior journalist Claudio Gallo, while accepting that AI, like all era-defining technology, will shape the future, have warned of potential downsides that need to be carefully monitored. In this context, reference has also been made to how technological inventions have determined not only the mode of industrial production but have also influenced the evolving political dimension within a society or a country.
Such a view is interesting, to say the least. The question then arises as to who will capitalize on this new technology this time, as it slowly emerges as a dominant productive force in our societies. Zabala and Gallo have queried whether “AI could take on a life of its own, like so many seem to believe it will, and single-handedly decide the course of our history – or will it end up as yet another technological invention that serves a particular agenda and benefits a certain subset of humans?”
It has been pointed out that in March this year such concerns led Apple co-founder Steve Wozniak, AI heavyweight Yoshua Bengio and Tesla/Twitter CEO Elon Musk to accuse AI labs of being “locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control.” It has also been observed that, more recently, Geoffrey Hinton – known as one of the three “godfathers of AI” – left Google “to speak freely about the dangers of AI”, and that he, at least in part, regrets his contributions to the field.
Zabala and Gallo have noted that “like all era-defining technology AI comes with considerable downsides and dangers”, but they “do not believe that it could determine the course of history on its own, without any input or guidance from humanity” emerging from “our political, social and cultural agendas.” One needs to understand that the emergence and the different uses of AI are complex, but one need not be afraid of this. Yes, we are undoubtedly building ever more capable computers and calculators, but that should not be a source of acute anxiety. It will help us to move forward.
This multi-dimensional facet has been further explained by Federico Faggin, the inventor of the first commercial microprocessor, the legendary Intel 4004. He explained this clearly in his 2022 book Irreducible: “There is a clear distinction between symbolic machine ‘knowledge’ … and human semantic knowledge. The former is objective information that can be copied and shared; the latter is a subjective and private experience that occurs in the intimacy of the conscious being.” Zabala and Gallo have interestingly observed in this regard: “Interpreting the latest theories of quantum physics, Faggin appears to have produced a philosophical conclusion that fits curiously well within ancient Neo-Platonism – a feat that may ensure that he is forever considered a heretic in scientific circles despite his incredible achievements as an inventor.”
Nevertheless, the other side of the coin is also creating some concern about greater use of AI. There is anxiety that AI appears to be emerging as the next global “big business” innovation that will steal jobs from humans – making labourers, doctors, barristers, journalists and many others somewhat superfluous. Such an assumption about digital intelligence appears rather hasty, particularly in its socio-economic dimensions. AI is here to stay, but not merely as a driver of rapacious capitalism.
In this context, digital analysts have observed that AI readiness is one of the first steps towards adopting procedures that can help mitigate potential risks. This is so because artificial intelligence has the potential to benefit society in manifold ways – from using predictive analytics for disaster risk reduction to leveraging translation software to break down language barriers. Consequently, AI is already impacting our daily lives.
However, to avoid negative implications, proactive steps need to be taken to ensure its responsible and ethical development and use. The UNDP, through its AI Readiness Assessment, is equipping countries with valuable insights on design and implementation as they progress on their AI journey. This is a good measure.
It would be pertinent at this point to also be careful about some steps being taken by businesses at the evolving intersection between AI, available data and the people desiring to use them. AI-powered tools on the market are often touted on the basis of their benefits – not their shortcomings. However, as seen with the latest example of ChatGPT, questions around responsible and ethical use have become important.
The UNDP has highlighted in its Digital Strategy that the design and technology of AI should be centred on people. Digital transformation, including AI innovations, must also be intentionally inclusive and rights-based to yield meaningful societal impact. For instance, whilst governments can leverage AI to improve public service delivery, consideration must be given to various layers of inclusion to ensure everyone can benefit equally.
AI models rely on data to function. Accordingly, quality data must be fed into a model; the lack of it may even exacerbate bias and discrimination, particularly against vulnerable groups – pushing them further behind. The accuracy, relevance and representativeness of a data set will therefore determine the reliability and trustworthiness of the results and insights that data informs. In this regard, it is also worth remembering that digital infrastructure needs to be seen as an interoperable network of digital systems working together, enabling timely and reliable data flows. This is pertinent, for instance, in responding to crises – particularly in disaster management – when access to accurate and up-to-date information is needed to inform responsive programming and decision-making. Without such digital infrastructure, data flows may be disrupted, or the data available may be inaccurate or incomplete.
At this juncture there is a clear and strong interest amongst UN Member States in adopting AI-powered technologies to improve people’s lives by providing better services – be it in healthcare, education, industrial production or product diversification.
However, as the benefits and risks of these technologies are uncovered, the need for an ethical data and AI governance framework, improved capacities and knowledge has become equally relevant. The initiative launched by UNDP and ITU to enhance governments’ digital capacity development includes harnessing AI responsibly. UNDP is assisting countries such as Kenya, Mauritania, Moldova and Senegal in developing data governance frameworks to promote the use of data for evidence-based decision-making.
It will also be pertinent to highlight here that the ‘Data to Policy Navigator’ is being developed by UNDP and the BMZ’s Data4Policy Initiative. The Navigator is designed to provide decision-makers with the knowledge they need to integrate new data sources into policy-development processes; no prior knowledge of data science will be needed. UNESCO is also beginning to assist in achieving this goal. It is developing recommendations on AI ethical standards, which include key aspects of international and human rights regulations related to the right to privacy, fairness and non-discrimination, and data responsibility. Such an approach should help countries that are at different stages of their AI journey and require careful assessment to determine the appropriate digital infrastructure, governance and enabling community suited to their unique needs and capabilities.
It must also be understood that the AI Readiness Assessment comprises a comprehensive set of tools that allows governments to get an overview of the AI landscape. This is required both for facilitating technological advancement and for users of AI in the public sector. Such an assessment also helps to prioritize ethical considerations. It further highlights key elements necessary for the development and implementation of ethical AI, including the policies, infrastructure and skills needed to meet national priorities and achieve the SDGs.
The United Nations and some other associated institutions appropriately feel that such an assessment employs a qualitative approach – utilizing surveys, key informant interviews, and workshops with civil servants – to gain a more in-depth understanding of the AI ecosystem in a country. By doing so, it offers governments valuable insights and recommendations on how to ensure effective and ethical implementation of AI regulatory approaches, including how AI ethics and values may be integrated into existing frameworks. This UN tool is globally applicable and available for use, particularly by governments and international financial institutions at any stage of their AI journey. It is rightly believed that such ethical and responsible use of AI will also help to facilitate transparency, fairness, responsibility and privacy by default.
One needs to understand that the forward movement of digital technology and the AI-powered innovations expected to emerge in the years to come will be significant, as developing countries strive to climb the ladder of development and attain developed-country status.
At the same time, governments worldwide are under pressure to move quickly to mitigate the risks. This has been brought to the forefront by the CEO of OpenAI, the maker of ChatGPT, telling US lawmakers that regulating AI is essential so that measures can be agreed upon to overcome the emerging challenges resulting from widespread and diverse use of AI in the near future. This was also reiterated during the Hiroshima G-7 Summit meeting.
Accordingly, it is critical that we take proactive measures to ensure that the potential benefits and risks of this changing paradigm are evaluated through a people-centred approach. Having a responsible framework to thoroughly assess both the benefits and the risks will be key.
Analysts have accordingly observed that as these innovations evolve, so must governments’ mindset on AI. The AI Readiness Assessment, according to them, will be part of a common effort not only to promote a proactive governance approach to digital development but also to ensure countries are informed, prepared and staying ahead when it comes to AI.
Muhammad Zamir, a former Ambassador, is an analyst
specialized in foreign affairs, right to information and good governance