The Open Algorithms Network is Back! Three Methods to Increase Algorithmic Transparency for Individuals

Sarah Kennedy

Governments use algorithms to improve cost efficiency and public service delivery. However, creating robust digital tools, especially for use in decision-making, requires strengthening the human systems around them. To do so, governments should incorporate an open government approach with transparency and accountability at the heart of algorithmic design and implementation.

In the last five years, algorithms have become more pervasive, and the number of governments using or planning to use algorithms in service delivery has ballooned. As a mechanism for multi-stakeholder dialogue, OGP is well-positioned to act as a space for conversations that cut deeper than the buzzwords and focus on the implementation of reforms with open government values at their heart. This is why OGP relaunched the Open Algorithms Network this year for civil servants implementing algorithmic transparency and accountability reforms.

In March, the network’s first quarterly meeting focused on the topic of individual-level transparency.

Individual transparency is different from the systemic level of transparency, where the overall system is made transparent—for instance via algorithm registers, technical documentation such as data sheets and model cards, or the publication of the algorithmic system’s source code. Individual-level transparency aims to provide targeted transparency and explanations to individuals subjected to algorithmic decision-making. These practices can take various forms, ranging from providing notices that an algorithm has been used to individual explanations of a particular result.
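To make the distinction concrete, the following sketch contrasts the two levels. Both structures are hypothetical illustrations: the field names are modeled loosely on published algorithm registers and notices, not on any specific government's schema.

```python
# Systemic-level transparency: one hypothetical entry in a public
# algorithm register, describing the system as a whole.
register_entry = {
    "name": "Benefits eligibility pre-check",
    "agency": "Example Social Services Agency",      # invented for illustration
    "purpose": "Flag applications for manual review",
    "decision_role": "advisory",                     # a human makes the final call
    "inputs": ["income data", "household composition"],
    "documentation": ["data sheet", "model card"],
    "source_code": "not published",
}

# Individual-level transparency, by contrast, attaches to one decision
# affecting one person: notice that an algorithm was used, the outcome,
# and the rights the person can exercise.
individual_notice = {
    "decision_id": "2024-000123",
    "system_used": register_entry["name"],
    "outcome": "flagged for manual review",
    "rights": ["contest the decision", "access your data"],
}
```

The register entry would be published once for everyone; the notice would be generated per decision and delivered to the individual concerned.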

Such mechanisms are present in regulation. For instance, the EU General Data Protection Regulation (GDPR) requires data controllers using automated algorithmic systems to provide notice and, in some cases, explanations to individuals impacted by the results of algorithmically supported decisions. The EU AI Act also introduces new transparency and explanation requirements. Similar provisions appear in national legislation, such as Canada's Directive on Automated Decision-Making and the French Digital Republic Law.

During the first meeting, three major takeaways emerged from the discussion.

Individual-level transparency requires accessible and meaningful communication

In France, the GDPR and a national law on the transparency of public sector algorithms set obligations around the use of automated decision-making. People who receive a public service must be informed if an algorithm has been used to make or support a decision, be informed of their rights (right to contest, right to access data, etc.), and know how the algorithm was used in the decision-making process. To implement this, the Interministerial Directorate for Digital Affairs (DINUM) tested a new method to inform citizens of algorithmic decision-making—sending updates via text message.

As legal notices were too long and costly to be shared in text messages, DINUM worked on creating two levels of information. With the support of behavioral science and plain language experts, the government created a concise version in the text message template and provided a link to access a more detailed version online. This approach is innovative because it allows the government to proactively disclose required information in a clear, short message.
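The two-level approach can be sketched as follows. This is a minimal illustration of the pattern, not DINUM's actual implementation: the template wording, function, and URL are all invented for the example.

```python
# Hypothetical two-level notice: a concise, plain-language text message
# (level one) linking to a detailed legal notice online (level two).
DETAIL_URL = "https://example.gouv.fr/algo-notice/{case_id}"  # invented URL

SMS_TEMPLATE = (
    "Your {service} request was processed with the help of an automated "
    "tool. You can contest the decision or see how it was made: {url}"
)

def build_notice(service: str, case_id: str, max_len: int = 320) -> str:
    """Compose the concise first-level notice sent by text message.

    The detailed second level (legal basis, the algorithm's role in the
    decision, the person's rights) lives behind the link, keeping the
    message itself short and readable.
    """
    message = SMS_TEMPLATE.format(
        service=service, url=DETAIL_URL.format(case_id=case_id)
    )
    if len(message) > max_len:
        raise ValueError("notice exceeds the SMS length budget")
    return message
```

Enforcing a length budget at composition time is what forces the plain-language discipline: anything that does not fit the short message moves to the linked page.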

Algorithmic transparency begins with transparent data practices

Estonia’s Data Tracker is an IT tool designed to help public authorities comply with GDPR requirements. It is specifically intended for use by public authorities that maintain information systems or databases containing personal data. The system is centrally developed but can be integrated into individual databases by each authority.

Citizens can log into the portal and see how their data has been used across different government databases: what data was accessed, who accessed it, and for what purpose. The logs cover more than cases where a single agency collects, stores, and uses citizen information. By also showing when one agency requests data that a citizen provided to another, the Data Tracker illustrates the “once only” principle—an e-government concept under which citizens should only have to provide standard information once, rather than repeatedly each time they interact with the government. In this way, citizens can better understand what happens with the data the government holds on them and how interoperability between government agencies affects that data.
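A simplified version of such an access log can be sketched as below. This is in the spirit of Estonia's Data Tracker rather than its actual data model: the field names and helper functions are assumptions made for illustration.

```python
# Illustrative sketch of a per-citizen data access log: what was
# accessed, by whom, and why. Not Estonia's actual schema.
from dataclasses import dataclass
from datetime import datetime

@dataclass
class AccessLogEntry:
    citizen_id: str
    data_fields: tuple       # what data was accessed
    accessed_by: str         # which agency accessed it
    holding_agency: str      # which agency stores the data
    purpose: str             # why it was accessed
    timestamp: datetime

def entries_for_citizen(log, citizen_id):
    """The entries a citizen would see when logging into the portal."""
    return [e for e in log if e.citizen_id == citizen_id]

def cross_agency_uses(log, citizen_id):
    """Entries where data was reused by a different agency than the one
    holding it—the 'once only' principle made visible in the log."""
    return [
        e for e in entries_for_citizen(log, citizen_id)
        if e.accessed_by != e.holding_agency
    ]
```

Separating "all accesses" from "cross-agency accesses" mirrors the point in the text: the log makes ordinary use visible, and it also surfaces the interoperability that citizens would otherwise never see.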

Transparency is not inherently a trade-off with citizen privacy or system security

Public officials often raise the concern that increased transparency of an algorithmic system might allow the system to be gamed by bad actors or infringe upon privacy rights of citizens. However, the persistent opacity of algorithmic decision-making systems in the public sector means that a lot can be done to improve transparency without harming other rights. Useful disclosure of information about an algorithm could take many forms, such as in the French example above where the government discloses information around when, why, and how an AI system was used, but not the specific criteria it uses to make the decision. Participatory approaches and consideration of the needs of the user allow for individual-level transparency that does not require these kinds of tradeoffs.

In 2025, the Open Algorithms Network will remain a space for building peer connections and internal capacity among those working to pair robust digital tools with the strong human systems around them. As governments increasingly turn to AI in search of efficiency and improved service delivery, OGP stands ready to help embed participation, transparency, and accountability in those initiatives.

Many thanks to the participants in the network who shared insightful perspectives and to Soizic Penicaud (Independent Researcher & Policy Specialist) for analysis of the discussion.

