
By Speak Up Afrika
Artificial intelligence is no longer a Silicon Valley curiosity or a distant “what-if” in government corridors. Across Africa—and much of the global South—it is quietly rewriting the social contract. It shapes how a mother in Nairobi books prenatal care, how a student in Lagos applies for a loan, and how a farmer in Accra proves identity, accesses a subsidy, or files a complaint.
But as governments trade paper ledgers for automated systems, a fundamental tension sharpens: is AI a frictionless engine of public value—or the scaffolding of a new surveillance state? The question is no longer whether public services should digitise. It is whether the state can be trusted to govern the code that increasingly governs citizens.
What makes this moment consequential is not the novelty of technology. States have always used tools to see, count, and manage populations—censuses, ID systems, filing cabinets, patrols. AI simply accelerates that power and makes it more granular. A government that could once only monitor a neighbourhood can now monitor a network. A bureaucracy that once struggled to find a file can now rank a citizen. Scale, speed, and prediction are not neutral capabilities; they are political assets.
AI’s strongest case is practical—and, in many countries, overdue. When designed with transparency and accountability, AI can do something rare: make the citizen’s experience of the state less punitive, less humiliating, less dependent on luck and connections.
Dissolving the bureaucratic wall.
For decades, the defining feature of public service has been the waiting room: long queues, “missing” files, administrative bottlenecks, and the informal fees that grease stuck gears. AI-enabled platforms can streamline school enrolment, licence renewal, clinic appointments, and claims processing—reducing human discretion where it invites petty corruption. In the best versions of digital service delivery, efficiency becomes dignity: fewer wasted days, fewer gatekeepers, fewer excuses.
Safeguarding the commons.
In regions where public resources are scarce, AI is often presented as a digital sentry. Pattern analysis can flag procurement irregularities, detect “ghost” beneficiaries, and identify suspicious billing in health and social protection systems. If governments can reduce leakage, the argument goes, they can redirect funds to schools, clinics, and safety nets. On a continent where fiscal space is limited and public trust is fragile, integrity tools matter. Yet integrity in code must still answer to integrity in law.
Precision in crisis.
AI’s predictive power can support early warning systems: detecting outbreak patterns, forecasting flood risk, mapping emergency logistics. In crises, speed saves lives, and data-driven decisions can bring clarity when panic is the only alternative. Properly governed, AI can help states do what citizens most demand in emergencies: act decisively and communicate credibly.
These gains are not imaginary. But they are not guaranteed. The same machinery that accelerates service can also accelerate control.
Without guardrails, the “smart state” becomes something else: a state that sees more, remembers longer, and decides faster—often without explanation. The most dangerous part is that this shift can happen gradually, disguised as modernisation.
Bias in the code.
Predictive policing and digital profiling often dress themselves as objective science. In reality, they frequently feed on historical data shaped by unequal enforcement and structural prejudice. If certain communities have long been over-policed, then the data will “prove” they are high-risk—because the state has already decided they are. When an algorithm labels someone “suspicious,” the presumption of innocence is replaced by a probability score. The risk is not merely discrimination; it is the automation of discrimination at scale.
The great privacy trade-off.
Accessing services increasingly requires a “biometric tax”: facial recognition, fingerprints, continuous verification, location traces. This can turn citizenship into a conditional subscription—granted to those who surrender the most data. The result is a profound power asymmetry: the state becomes more omniscient while the citizen remains in the dark about where data travels, how long it persists, and who can access it. Surveillance does not always arrive with a knock on the door; sometimes it arrives through a login screen.
The black-box state.
AI systems are not neutral; they are the product of assumptions—about risk, deservingness, threat, and normality—translated into rules and models. When an opaque system denies a welfare benefit, flags a person for investigation, or deprioritises a complaint, there is often no clear explanation, no meaningful appeal, and no accountable human decision-maker. Bureaucratic frustration becomes algorithmic resignation: “the system says no.” In such a world, accountability does not disappear in one dramatic moment; it quietly evaporates.
The deeper problem is that the very logic of automation—optimise, rank, flag, predict—fits neatly into a certain kind of politics: one that prizes control, fears dissent, and treats citizens as risks to be managed rather than partners to be served.
The outcome of AI in public life is not a technical inevitability; it is a political choice. If African states want the benefits of AI without the authoritarian temptations, they need governance strong enough to discipline the technology, not just procure it.
Speak Up Afrika argues for four non-negotiable pillars.
Strict legal boundaries—especially for high-risk systems. Policing, border control, biometric identification, and social protection should not be treated like ordinary digitisation projects. They are high-risk domains with a long history of abuse. Governments must define “no-go zones”: areas where automation is legally barred or tightly constrained. “National security” must not become a blank cheque that swallows constitutional rights.
Algorithmic transparency and the right to contest. If a machine makes a decision that alters a human life, that human has a right to know why—and a right to challenge it. Citizens should be told when AI is involved, what data is being used, and what factors drive outcomes. Appeals must be accessible, timely, and capable of reversing decisions. Transparency is not a technical luxury; it is the foundation of legitimacy.
The human-in-the-loop—real oversight, not theatre. Critical decisions in justice, healthcare, immigration, and welfare must never be fully autonomous. Human review should be meaningful: trained staff with authority to override systems, documented reasoning, and accountability when harm occurs. “Human oversight” that merely rubber-stamps algorithmic output is a performance, not a safeguard.
Radical inclusion—public participation in AI governance. National AI strategies should not be designed behind closed doors by vendors and technical elites. Civil society, labour organisations, journalists, youth groups, and digital rights advocates must have a seat at the table. People most likely to be targeted—gig workers, marginalised communities, migrants, informal workers—should help define the ethics of their own digital future. Legitimacy cannot be coded in; it must be built in public.
These pillars are not anti-innovation. They are pro-democracy. They recognise that public trust is the scarcest resource in governance—and the easiest to waste when technology outpaces accountability.
The future of the digital state is still being written. African nations have a rare opportunity to leapfrog—not only technologically, but institutionally—by building rights-based digital architectures that avoid the mistakes of others.
True progress is not measured by the terabytes of data a government collects. It is measured by the sovereignty it grants citizens over that data; by whether services become more accessible without becoming more coercive; by whether efficiency strengthens democracy instead of smothering it.