Building Transparency into AI Projects

In 2018, one of the largest tech companies in the world premiered an AI that called restaurants and impersonated a human to make reservations. To “prove” it was human, the company trained the AI to insert “umms” and “ahhs” into its request: for instance, “When would I like the reservation? Ummm, 8 PM please.”

The backlash was immediate: people objected that they were being deceived into thinking they were interacting with another person, not a robot. People felt duped.

The story is both a cautionary tale and a reminder: as algorithms and AIs become ever more embedded in people’s lives, there’s a growing demand for transparency around when an AI is used and what it’s being used for. It’s easy to understand where this demand comes from. Transparency is an essential element of earning the trust of consumers and clients in any domain. And when it comes to AI, transparency is not only about informing people when they are interacting with an AI, but also about communicating with relevant stakeholders about why an AI solution was chosen, how it was designed and developed, what data it was trained on, how it’s monitored and updated, and the conditions under which it may be retired.

Seen in this light, and contrary to the assumptions about transparency made by many organizations, transparency is not something that happens at the end of deploying a model when someone asks about it. Transparency is a chain that travels from the designers to developers to executives who approve deployment to the people it impacts and everyone in between. Transparency is the systematic transference of knowledge from one stakeholder to another: the data collectors being transparent with data scientists about what data was collected and how it was collected and, in turn, data scientists being transparent with executives about why one model was chosen over another and the steps that were taken to mitigate bias, for instance.

As companies integrate and deploy AIs, they should consider how to be transparent and what additional processes they might need to introduce. Here’s where companies can start.

The Impacts of Being Transparent

While the overall goal of being transparent is to engender trust, it has at least four specific kinds of effects:

It decreases the risk of error and misuse.

AI models are highly complex systems — they are designed, developed, and deployed in complex environments by a variety of stakeholders. This means that there is a lot of room for error and misuse. Poor communication between executives and the design team can lead to an AI being optimized for the wrong variable. And if the product team doesn’t explain how to properly handle the outputs of the model, introducing AI can be counterproductive in high-stakes situations.

Consider the case of an AI designed to read x-rays in search of cancerous tumors. The x-rays the AI labelled as “positive” for tumors were then reviewed by doctors. The AI was introduced because, it was thought, a doctor can review 40 AI-flagged x-rays more efficiently than 100 unflagged ones.

Unfortunately, there was a communication breakdown. In designing the model, the data scientists reasonably judged that erroneously marking an x-ray as negative when it does in fact show a cancerous tumor could have very dangerous consequences, so they set a low tolerance for false negatives and, thus, a high tolerance for false positives. This information, however, was not communicated to the radiologists who used the AI.

The result was that the radiologists spent more time analyzing 40 AI-flagged x-rays than they did 100 non-flagged x-rays. They thought, the AI must have seen something that I’m missing, so I’ll keep looking. Had they been properly informed — had the design decision been made transparent to the end user — the radiologists may have thought, I really don’t see anything here and I know the AI is overly sensitive, so I’m going to move on.
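The data scientists’ design choice can be sketched in a few lines of code. This is a hypothetical illustration, not the actual radiology system: the scores, labels, and thresholds below are invented to show how lowering the decision threshold trades false negatives for false positives.

```python
# Hypothetical illustration of a classification threshold tradeoff.
# The scores and labels are invented; they stand in for a model's
# tumor probabilities and the ground truth (1 = tumor present).
scores = [0.05, 0.2, 0.35, 0.6, 0.8, 0.95]
labels = [0,    0,   1,    0,   1,   1]

def confusion(threshold):
    """Count false positives and false negatives at a given threshold."""
    fp = sum(1 for s, y in zip(scores, labels) if s >= threshold and y == 0)
    fn = sum(1 for s, y in zip(scores, labels) if s < threshold and y == 1)
    return fp, fn

print(confusion(0.5))  # balanced threshold -> (1, 1)
print(confusion(0.1))  # low threshold: fewer false negatives, more false positives -> (2, 0)
```

Lowering the threshold from 0.5 to 0.1 eliminates the false negative at the cost of an extra false positive — precisely the tradeoff that should have been communicated to the radiologists.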

It distributes responsibility.

Executives need to decide whether a model is sufficiently trustworthy to deploy. Users need to decide how to use the product in which the model is embedded. Regulators need to decide whether a fine should be levied due to negligent design or use. Consumers need to decide whether they want to engage with the AI. None of these decisions can be made if people aren’t properly informed, which means that if something goes wrong, blame falls on the shoulders of those who withheld important information or discouraged others from sharing it.

For example, an executive who approves use of the AI first needs to know, in broad terms, how the model was designed. That includes, for instance, how the training data was sourced, what objective function was chosen and why it was chosen, and how the model performs against relevant benchmarks. Executives and end users who are not given the knowledge they need to make informed decisions — including knowledge without which they don’t even know there are important questions they are not asking — cannot be reasonably held accountable.

Failure to communicate that information is, in some cases, a dereliction of duty. In other cases — particularly for more junior personnel — the fault lies not with the person who failed to communicate but with the person or people, especially leaders, who failed to create the conditions under which clear communication is possible. For instance, a product manager who wants to control all communication from their group to anyone outside the group may unintentionally constrain important communications because they serve as a communication bottleneck.

By being transparent from start to finish, genuine accountability can be distributed among all as they are given the knowledge they need to make responsible decisions.

It enables internal and external oversight.

AI models are built by a handful of data scientists and engineers, but the impacts of their creations can be enormous, both in terms of how it affects the bottom line and how it affects society as a whole. As with any other high-risk situation, oversight is needed both to catch errors made by the technologists and to spot potential problems that technologists may not have the training for, be they ethical, legal, or reputational risks. There are many decisions in the design and development process that simply should not be left (solely) in the hands of data scientists.

Oversight is impossible, however, if the creators of the models do not clearly communicate to internal and external stakeholders what decisions were made and the basis on which they were made. One of the largest banks in the world, for instance, was recently investigated by regulators over an allegedly discriminatory algorithm, an investigation that requires regulators to have insight into how the model was designed and developed. Similarly, internal risk managers or boards cannot fulfill their function if both the product and the process that produced it are opaque to them, which increases risk to the company and everyone affected by the AI.

It expresses respect for people.

The customers who used the reservation-making AI felt they had been tricked. In other cases, AI can be used to manipulate or unduly influence people. For instance, AI plays a crucial role in the spread of disinformation, in nudging, and in creating filter bubbles.

Consider, for instance, a financial advisor who hides the existence of some investment opportunities and emphasizes the potential upsides of others because he gets a larger commission when he sells the latter. That’s bad for clients in at least two ways: first, it may be a bad investment, and second, it’s manipulative and fails to secure the informed consent of the client. Put differently, this advisor fails to sufficiently respect his clients’ right to determine for themselves which investment is right for them.

The more general point is that AI can undermine people’s autonomy — their ability to see the options available to them and to choose among them without undue influence or manipulation. The extent to which options are quietly pushed off the menu while others are repeatedly promoted is, roughly, the extent to which people are pushed into boxes instead of given the ability to choose freely. The corollary is that transparency about whether an AI is being used, what it’s used for, and how it works expresses respect for people and their decision-making abilities.

What Good Communication Looks Like

Transparency is not an all-or-nothing proposition. Companies should find the right balance with regard to how transparent to be with which stakeholders. For instance, no organization wants to be transparent in a way that would compromise its intellectual property, and so some people should be told very little. Relatedly, it may make sense to be highly transparent in some circumstances because of severe risk: high-risk applications of AI may require going above and beyond standard levels of transparency, for instance.

Identifying all potential stakeholders — both internal and external — is a good place to start. Ask them what they need to know in order to do their job. A model risk manager in a bank, for instance, may need information related to the threshold of the model, while the Human Resources manager may need to know how the input variables are weighted in determining an “interview-worthy” score. Another stakeholder may not, strictly speaking, need the information to do their job but it would make it easier for them. That’s a good reason to share the information. However, if sharing that information also creates unnecessary risk of compromising IP, it may be best to withhold the information.
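The HR example above can be made concrete with a small sketch. The features, weights, and scoring rule here are invented for illustration; knowing how such weights combine to produce a score is exactly the kind of information that stakeholder might need.

```python
# Hypothetical sketch of an "interview-worthy" score as a weighted sum
# of input variables. The feature names and weights are invented, not
# taken from any real hiring model.
weights = {"years_experience": 0.5, "skills_match": 0.3, "referral": 0.2}

def interview_score(candidate):
    """Combine normalized candidate features (0-1) into a single score."""
    return sum(weights[k] * candidate[k] for k in weights)

candidate = {"years_experience": 0.8, "skills_match": 0.6, "referral": 1.0}
print(round(interview_score(candidate), 2))  # -> 0.78
```

A transparent process would let the HR manager see not just the final score, but these weights and how each variable contributed to it.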

Knowing why someone needs an explanation can also reveal how high a priority transparency is for each stakeholder. For instance, some information will be nice to have but not, strictly speaking, necessary, and there may be various reasons for providing or withholding that additional information.

These kinds of decisions will ultimately need to be systematized in policy and procedure.

Once you know who needs what and why, there is then the issue of providing the right kinds of explanations. A chief information officer can understand technical explanations that, say, the chief executive officer might not, let alone a regulator or the average consumer. Communications should be tailored to their audiences, and these audiences are diverse in their technical know-how, educational level, and even in the languages they speak and read. It’s crucial, then, that AI product teams work with stakeholders to determine the clearest, most efficient, and easiest method of communication, down to the details of whether email, Slack, in-person onboarding, or carrier pigeon is the most effective channel.

. . .

Implicit in our discussion has been a distinction between transparency and explainability. Explainable AI has to do with how the AI model transforms inputs into outputs: What are the rules? Why did this particular input lead to this particular output? Transparency is about everything that happens before and during the production and deployment of the model, whether or not the model has explainable outputs.

Explainable AI is or can be important for a variety of reasons that are distinct from what we’ve covered here. That said, much of what we’ve said also applies to explainable AI. After all, in some instances it will be important to communicate to various stakeholders not just what people have done to and with the AI model, but also how the AI model itself operates. Ultimately, both explainability and transparency are essential to building trust.
