Opinion

Trying to understand AI? Start with the questions that actually matter

Digital citizenship

Nazareen Ebrahim

Experts say the advent of AI will not be much different for workers than prior technology revolutions, like the invention of electricity.


Artificial intelligence (AI) is no longer something far away, futuristic, or reserved for scientists and Silicon Valley boardrooms. It is already here, quietly influencing the way we work, study, search, shop, travel, bank, communicate, and even how we are seen by institutions that make decisions about our lives.

That is exactly why ordinary citizens need a better way to understand it. In many public conversations, AI is either overhyped as a miracle or feared as a monster. Neither helps us. South Africans do not need more noise. We need clarity. We need practical understanding. We need to ask better questions. And above all, we need to build what I often call a responsible digital citizenry.

A responsible digital citizenry means people who are not merely passive users of technology, but informed participants in a rapidly changing society. It means understanding enough about AI to recognise its opportunities, its risks, and its impact on our rights, our livelihoods and our communities. Over time, I developed a simple way to help people make sense of this. I call it the Quadrant of AI Fundamentals.

Don’t let the name frighten you. It is simply a practical framework built around four things every society needs if it is going to use AI responsibly: AI literacy, AI ethics, AI governance, and AI policy.

Let’s start with literacy.

AI literacy is the foundation. It is the ability to understand, in plain language, what AI is, what it is not, where it shows up, and how it works at a basic level. Most people do not need to become engineers. They do, however, need to know when they are engaging with an AI system, what that system may be doing with their data, and how its outputs can influence decisions.

For example, if a job seeker uses AI to help write a CV, that is one level of interaction. But if a company then uses AI to screen that CV, rank candidates or filter out applicants, that is another level entirely. Already, AI can shape who gets noticed, who gets ignored, and who gets access to opportunity. If citizens do not understand this, then the technology gains power while the public remains in the dark. That is a bad bargain.

The second part is ethics.

AI ethics asks a basic but powerful question: just because we can do something with AI, does that mean we should? Ethics is about fairness, accountability, dignity, safety and human impact. It forces us to think about bias, exclusion, manipulation and harm. This matters deeply in South Africa. We are not a neutral society starting from scratch. We come from a history of inequality, exclusion and structural injustice.

If we are careless, AI can simply automate old unfairness in shiny new ways. A biased system used in insurance, recruitment, lending, policing or education can deepen inequality rather than reduce it. The machine may look modern, but the outcome can still be discriminatory.

The third part is governance.

Governance is where principles must become practice. It is about who is responsible, who is accountable, what rules are followed inside organisations, and how decisions are monitored. Good governance means AI is not deployed recklessly. It means there are checks, oversight, escalation paths and clear responsibility. This is important because too many organisations are rushing to adopt AI tools without pausing to ask: who approved this, what is it being used for, what risks were assessed, and what happens if it goes wrong? Governance is the difference between innovation with discipline, and innovation with chaos. Frankly, one builds trust, and the other builds headlines for all the wrong reasons.

The fourth part is policy.

Policy is how a society draws the lines. It includes laws, regulations, standards, institutional guidance and public frameworks that shape how AI should be used. Policy matters because citizens should not carry the burden of technological change alone. Governments, regulators, schools, businesses and civil society all have a role to play in setting norms that protect the public interest.

For South Africa, this is especially urgent. We cannot afford to be passive consumers of global technologies designed elsewhere, for other markets, under other assumptions. We need local thinking, local safeguards and local leadership. Our social realities matter. Our unemployment crisis matters. Our education gaps matter. Our data protection concerns matter. Our development priorities matter.

AI cannot be treated as a fashionable import. It must be understood in the context of our people. This is why the four parts of the quadrant matter together. If we focus only on literacy, people may understand AI but have no protection. If we focus only on ethics, we may have good intentions but no implementation. If we focus only on governance, organisations may tick boxes without building public trust. If we focus only on policy, rules may exist on paper while citizens remain confused and excluded.

We need all four.

South Africans do not need to panic about AI. But we do need to wake up to it. We need to stop speaking about it as though it only belongs to tech experts. AI now belongs in community halls, classrooms, boardrooms, newsrooms, homes and policy discussions. It affects the citizen, not just the coder.

The real question is not whether AI is coming. It is already here. The question is whether we will engage it blindly, or whether we will build the knowledge and courage to shape it responsibly. That is the work before us. And that is why building a responsible digital citizenry is no longer optional. It is one of the most important public education tasks of our time.


Nazareen Ebrahim is a board director at the Minara Chamber of Commerce. A South African communications and AI ethics practitioner, she is the creator of the Quadrant of AI Fundamentals™, and is currently advancing ISO/IEC 42001 AI management systems lead implementer training, with a focus on building a responsible digital citizenry.

** The views expressed do not necessarily reflect the views of IOL or Independent Media. 

THE POST