It's more ethical to use AI to make decisions
We should use more algorithmic decision making, not less. Contrary to many worries, algorithms paired with "Explainable AI" and an appeal process can be an ethically superior alternative to human-only decision makers. Here's why I hold this view; I'd welcome your responses.
Imagine Susana, a working-class mother who shows up at hospital suffering palpitations. She needs to be triaged: should she be sent for specialist cardiac follow-up, or sent home? We all care that this decision process should be Accurate, Explainable, and Fair, and should include a process of Appeal. The argument here is simply that AI (algorithmic) decision making can be superior on all four counts, and better for Susana and others.
Accurate. This is an empirical matter: can an AI system better predict what kind of medical attention Susana needs? For Susana's benefit, all else being equal, we should favour the more accurate system, whichever it is. In many tasks the trend is clearly for AI to surpass human predictions - and for as long as that holds, the more accurate system will serve Susana best, and we should welcome it.
Now to the counterintuitive claims: algorithmic systems may also be more explainable, and fairer, for Susana.
Explainable. There are good reasons for Susana to favour an algorithm. First, it is more reliable and auditable: it produces the same output given the same input, and it can easily record its decisions. It can also be trivially tested against different inputs, making the input-output relationship visible. Second, with an AI algorithm, Susana can demand explainability techniques that let her understand why the decision was made. Thanks to advances in Explainable AI, an algorithm lets Susana see, for example, that her being a mother made it more likely she was sent home, while her age had no effect on the outcome. A human decision maker is neither as reliable nor as explainable.
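To make those two properties concrete, here is a minimal sketch, assuming a hypothetical triage model trained on synthetic data with scikit-learn (the features, labels, and inputs are invented for illustration): identical inputs always produce identical outputs, and a single feature can be perturbed to see whether it moved the decision.

```python
# A minimal sketch of probing a hypothetical triage model. The
# features, data, and model are illustrative, not a real clinical
# system: we train on synthetic records purely to demonstrate
# reproducibility and a simple counterfactual check.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical features: [age, is_parent, palpitation_severity]
X = rng.random((500, 3)) * [80, 1, 10]
X[:, 1] = X[:, 1].round()                        # is_parent as 0/1
y = (X[:, 2] + 0.05 * X[:, 0] > 7).astype(int)   # synthetic "needs follow-up" label

model = LogisticRegression(max_iter=1000).fit(X, y)
susana = np.array([[34.0, 1.0, 6.5]])            # illustrative input

# Reliability: the same input yields the same output, every time.
assert (model.predict_proba(susana) == model.predict_proba(susana)).all()

# Counterfactual probe: change one feature and inspect the effect.
counterfactual = susana.copy()
counterfactual[0, 1] = 0.0                       # same patient, not a parent
print("as a parent:  ", model.predict_proba(susana)[0, 1])
print("not a parent: ", model.predict_proba(counterfactual)[0, 1])
# If the two probabilities differ, parenthood influenced the score -
# exactly the kind of question Susana can put to an algorithm.
```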
Fair. Precisely because we can interrogate the process of creating the model and the resulting decisions, decision-making algorithms have the foundations to be much fairer than inscrutable human brains. Algorithms can be precisely quantified, measured, and tested for different kinds of fairness - see more in this People + AI Research explainer. Conversely, we can't look into someone's brain and know whether they will be biased in their decisions, and we can't measure their conscious or unconscious limitations in the task at hand. How can we know that a human decision maker did not look down on Susana's working-class appearance, and care less about her prognosis than they would have otherwise?
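As a sketch of what "quantified, measured, and tested" can mean in practice, here is one common criterion, demographic parity, computed over hypothetical decisions (the groups and outcomes below are invented for illustration):

```python
# Sketch: measuring one fairness criterion (demographic parity) on
# hypothetical decisions. Group labels and outcomes are invented.
import numpy as np

group = np.array([0, 0, 0, 0, 1, 1, 1, 1])      # two demographic groups
decision = np.array([1, 1, 0, 1, 1, 0, 0, 0])   # 1 = referred for follow-up

rate_a = decision[group == 0].mean()
rate_b = decision[group == 1].mean()
print(f"referral rate, group A: {rate_a:.2f}")   # 0.75
print(f"referral rate, group B: {rate_b:.2f}")   # 0.25
print(f"demographic parity gap: {abs(rate_a - rate_b):.2f}")
```

The same recorded decisions support other criteria too (equalised odds, calibration across groups); each is a precise, testable claim, and no equivalent measurement exists for an individual human decision maker.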
Appeal. It's extremely important that any subject of a decision can appeal for oversight and redress. A good, robust process of appeal can be put alongside any kind of decision making. However, if the original decision is algorithmic, it is easier to interrogate with explainability methods to check what might have happened (e.g. did someone enter some data incorrectly, or is the original decision in fact problematic?). Because an algorithmic decision is more explainable and auditable, appeals against it are easier to evaluate.
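To illustrate, here is a sketch of an appeal as a replay of a logged decision, reusing the hypothetical `model` from the earlier triage sketch (the logging format and the data-entry scenario are both assumptions):

```python
# Sketch: an appeal as a replay of a recorded algorithmic decision.
# Assumes the hypothetical `model` from the earlier triage sketch.
def appeal(model, logged_input, corrected_input, threshold=0.5):
    """Re-run a recorded decision, optionally with corrected data."""
    original = model.predict_proba([logged_input])[0, 1]
    corrected = model.predict_proba([corrected_input])[0, 1]
    return {
        "original_score": original,
        "corrected_score": corrected,
        "outcome_changed": (original >= threshold) != (corrected >= threshold),
    }

# e.g. suppose a clerk recorded severity 2.5 when Susana reported 6.5:
print(appeal(model, [34.0, 1.0, 2.5], [34.0, 1.0, 6.5]))
```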
In short: if Susana, or a regulator, worries her case was mishandled, they have more and better tools at their disposal when the decision maker is an algorithm. Algorithms are easier to optimise and scrutinise than human brains are. A human may articulate a reason for a decision, but there is vast evidence that post hoc reasons are unreliable and often merely justify existing biases and prejudices. Algorithms are not guaranteed to be fairer, but Susana is better served by a reproducible, explainable, auditable model that can be interrogated for fairness than by an ad hoc, non-explainable decision.
We should focus on the properties of the decision-making process and its outcomes, not on the technologies used to implement them. The upshot: improve decisions by improving algorithms, not by restricting them.