Fez — As more public services go digital, officials are testing how far artificial intelligence can help the state.
Around the world, public agencies are handing AI real responsibilities—from answering citizens’ questions to helping manage public tenders.
The results promise speed and transparency, but they also raise sharp questions about accuracy, fairness, and who is accountable when machines assist with state power.
Albania’s virtual cabinet “member”
Albania made headlines in September when Prime Minister Edi Rama introduced Diella, an AI-created assistant that the government says will take a central role in public procurement.
Diella began the year as a voice-and-text guide on the e-Albania portal, then was presented as a "virtually created" cabinet member who will help move tender decisions to a rules-driven system, with the aim of cutting down on corruption. Supporters call the move bold.
Critics warn that legality and appeals must be crystal clear when software helps decide who wins contracts.
Service helpers, not decision makers
Other countries are focusing on AI as a front door to services.
Estonia’s Bürokratt project is building a national assistant that lets people use many state services through one chat interface, with speech and text support.
The goal is a single helper that retrieves records, submits forms, and hands off to human staff when needed. The program plans broad capability by the end of 2025, and it is openly documented for reuse.
The UK’s controlled trials
Britain has tested AI across departments and shared the findings publicly.
The Government Digital Service ran early experiments with GOV.UK Chat, a bot that answers questions using official guidance, and expanded trials to thousands of business users with a focus on accuracy and clear sourcing.
Officials also reported time savings from a separate generative AI trial for civil servants, while stressing guardrails and human oversight.
When advice goes wrong
New York City’s small business chatbot shows the risk when AI speaks for the state.
Reporters found the tool sometimes offered guidance that conflicted with local and federal law.
The city kept the tool online with stronger disclaimers, underlining a basic rule for public deployments: publish the sources behind answers, log interactions for audit, and make it easy for citizens to contest bad advice.
Gulf roadmaps and playbooks
The United Arab Emirates has issued national policies and charters that push agencies to adopt AI with shared standards on ethics, data, and procurement.
Dubai and federal bodies frame AI as an engine for faster services and economic diversification, backed by guidance on how departments should build and scale projects. Clear policy can reduce one-off pilots and help agencies move together.
What success should look like
Across these cases, three tests matter:
First, clarity about the job of AI, whether it is guiding a citizen through a form or ranking bids in a tender.
Second, governance, including human sign-off, audit logs, data protection, and a simple path to contest outcomes.
Third, public evidence that services are faster or fairer, not only cheaper.
Why this matters for Morocco
Morocco can move quickly on low-risk, high-impact steps, such as a bilingual assistant that helps with civil registry tasks, tax questions, and student services, with every answer linked to an official page.
This kind of tool can cut queues and phone backlogs while keeping users in control. If agencies test AI in decisions, such as scoring applications or routing cases, they should publish criteria in advance, keep a human reviewer for final sign-off, and create a clear appeal path that does not require a lawyer. Universities and training centers can help ministries build in-house skills so projects do not depend entirely on vendors.
The promise is real. AI can make information easier to find, forms less painful, and back offices more efficient. The risk is real too. When governments delegate work to software, they still owe citizens accuracy, fairness, privacy, and a way to challenge mistakes. Countries that are getting this right treat AI as public infrastructure—not a magic trick—and they prove it with transparent rules and public results.