Artificial Intelligence (AI) that is able to produce human-like communication, otherwise known as generative AI, is quickly changing how we function day to day. Even brief use of a generative AI tool makes it easy to see how AI can streamline our productivity. Yet as AI advances, its capabilities grow more powerful, largely in the absence of regulation. Consequently, there have been calls to halt the release of more sophisticated AI platforms until the safety of these tools can be ensured.
One of the most notable calls on this topic came from the Future of Life Institute, which drafted an open letter, signed by many prominent figures including Elon Musk and Steve Wozniak, urging AI creators to pause the development of experimental AI systems until humanity is confident that their effects will be positive and their risks manageable. In its Policymaking in the Pause document, the Future of Life Institute highlighted that AI systems are able to create misinformation that appears authentic, which may damage the shared factual foundations of society and has the potential to fuel political tensions. These concerns are not merely hypothetical: there are examples of generative AI creating misinformation across the globe, and South Africa is no exception.
Recently, the Johannesburg Regional Court heard a matter in which the plaintiff’s attorneys in a defamation case searched for legal sources through ChatGPT, a generative AI platform. ChatGPT provided the attorneys with case law, complete with case names, citations, facts, and decisions. The attorneys accepted the cases as provided, conducted no further investigation into their accuracy, and submitted them to the defendant’s attorneys. It emerged that the case names, citations, facts, and decisions were fictitious, generated by the AI platform; this came to light when the cases could not be sourced online. The Regional Magistrate presiding over the case remarked that ‘the efficiency of modern technology still needs to be infused with a dose of good old-fashioned independent reading’. Whilst this misinformation did not cause far-reaching harm, as AI becomes more sophisticated, the potential risks increase.
It is important, then, to consider how to curb the tendency of generative AI to create misinformation. Regulation of generative AI is an essential first step to ensuring that AI can exist safely within society. The European Union (EU) has moved toward regulating AI by drafting a proposal for an Artificial Intelligence Act to ensure that AI systems do not infringe on fundamental rights such as equality, non-discrimination, democracy, freedom, human dignity, data protection, privacy, and the rights of the child. The Artificial Intelligence Act provides for the banning of damaging AI practices, including harmful manipulative AI systems, and proposes enhanced governance. Whilst the Act has not yet become law within the EU, it is the first comprehensive law on AI, taking a step forward in the quest to, amongst other aims, keep AI honest. The Act is set to be negotiated by members of the European Parliament in an effort to reach agreement by the end of 2023.
Currently, there are no laws regulating the administration of generative AI in South Africa. Recently, the South African Artificial Intelligence Association (SAAIA) was established as a body intended to promote the advancement of AI in a responsible manner. Its founding members include Google, the Department of Communications and Digital Technologies, the Western Cape Government, the University of Johannesburg, the Tshwane University of Technology, and Webber Wentzel, amongst other notable members. Whilst the creation of the SAAIA is a step forward for AI oversight in South Africa, it has yet to put forward policy initiatives or good-practice standards to encourage AI regulation. It is therefore important for South African lawmakers to consider taking steps to regulate AI, as regulation is the only way to mitigate the risks that AI may pose to South African citizens. The biggest hurdle to AI regulation will be ensuring that regulations keep pace with advancing AI technology.
Ideally, the calls to halt the release of more sophisticated AI platforms will be heeded, allowing for the development of laws that safeguard the use of AI well into the future.
Kyra Boshoff is a Programme Quality Practitioner at Boston City Campus; she holds an LLB and LLM from the University of KwaZulu-Natal and is a PhD candidate. She is experienced in lecturing, module development, and supervision and is passionate about teaching and learning.