
Afro feminist terms and conditions: Do they exist?
Terms and conditions, sometimes referred to as "terms of service," are the rules and regulations governing the contractual relationship between a service provider and the people who use its service.
As artificial intelligence systems develop at a rapid pace and become more prevalent in most of the technologies we use today, various studies and investigations point to serious safety and ethical implications, particularly for marginalized groups, despite these technologies' potential and utility. Clarity on what these negative impacts are and how they can be mitigated has therefore become an important point of discussion.
Artificial intelligence is a term loosely used to refer to a collection of technologies ranging from machine learning and rule-based systems to natural interfaces, encompassing functions such as computer vision, speech recognition, and natural language processing. The idea of computers being intelligent is an allusion to human intelligence and largely speaks to their ability to learn and make decisions, although we know that machine "intelligence" in many ways deviates from what fully constitutes an intelligent being. It is also important to note that machines learn from data, which makes data critical to AI as a whole.
The proliferation of AI in almost all parts of our lives, from our interactions on social media platforms to higher-stakes areas such as identification systems, crime prediction, and health care, explains its growing centrality in today's Fourth Industrial Revolution, primarily because of the benefits it is posited to offer humankind. But alongside these promises, problems have also emerged: some existed long before AI, while newer ones carry even greater potential to harm our collective well-being.
In attempting to understand the downsides of AI systems, one ought to acknowledge a central problem: the opacity of these systems. This opacity contributes to the mystery surrounding AI, which has long resulted in a lack of accountability in the event of undesirable automated outcomes, such as hateful, sexist, or racist language. Scholars studying this automated bias have shown that AI systems have, for the most part, gone unaccountable because most of us know so little about the technology. One thing, however, has become clear: AI systems are built by people and companies, and those builders inevitably embed their own values in them.
To further explore the issue of values among AI developers, the question "Why is this AI system being built?" is a good starting point. Fully understanding why a certain tool is developed requires careful consideration of its benefits and possible negative consequences, which should then guide whether it is developed or deployed further. However, unlike other industries such as food or pharmaceuticals, AI systems and much of today's technological development do not abide by this normative way of doing things; instead, an ethos such as Silicon Valley's "move fast and break things" guides the industry, largely leaving it unchecked.
This lack of checks unfortunately has real-life impacts on people, which necessitates efforts to ensure that AI systems are developed in a way that is safe, ethical, and sustainable for all human beings, not just for a few at the expense of others. It is from this concern that the field of responsible and ethical AI has been developing over the past few years, with more people pointing out the shortfalls of the technology and how they might be averted. It remains a greatly contested area, however, because people conceive of ethics differently: at its core, the debate speaks to a pluralism of value systems among different people and communities, whether those values are communitarian, utilitarian, or self-interested, such as maximizing profit. Despite these differences, there exist basic human standards of right and wrong.
Taking into account some of today's outstanding ethical issues in AI, such as algorithmic racial and gender bias, extractivist data infrastructures, mass datafication, and the illusion of consent, all of which rest on the monopolization and centralization of power among so-called "big tech" companies and governments, we see a deepening of inequities that not only replicate but also perpetuate physical-world injustices. This calls for an urgent halt to this automated oppression.
A recent example is the case of the invisible workers behind AI systems, who labor in precarious and exploitative conditions and are barely acknowledged by the big tech companies that employ them, a silence that sustains a false image of these systems' super efficiency, as the OpenAI-Sama "data labeling" story revealed. The predominance of white males at the companies developing these systems also ensures, even without outright intent, the discrimination against and misrepresentation of certain groups' realities and stories in digital spaces. Extractivist data practices, which treat data as a resource rather than as part of the human person, erode users' autonomy, privacy, and humanity, with outcomes such as psychological nudging by online advertisers to drive sales and round-the-clock surveillance, an attack on our fundamental right to privacy. Automated scorecards are also increasingly used to rate job applications, school admissions, and the like, despite these systems remaining inaccurate. This, like several other issues pertaining to AI today, directly affects people's standards of living, especially those of marginalized people.
With a regulatory vacuum looming over these issues, particularly on the African continent, the public, civil society, media, academia, and other stakeholders must join forces with governments to address them, as their breadth necessitates a multidisciplinary approach.