There’s no doubt that the best word to describe the shenanigans going on thanks to Centrelink’s automated debt collection system is a debacle. One is reminded of Better Off Ted, where the scientists argue over whether something should be called a debacle or a disaster, and settle on ‘disastacle’. There’s another word, of course, that could be used. From The Free Dictionary:
- Illegal use of one’s official position or powers to obtain property, funds, or patronage.
- The act or an instance of extorting something, as by psychological pressure.
- An excessive or exorbitant charge.
Apply a simple litmus test: if a regular company had been sending out the same volume of debt recovery notices, with the same error rates as those being reported in the media, it would be facing class action lawsuits, and extortion would be a term being bandied around. That the government can make the process ‘legal’ should not stop us, ethically, from calling a spade a spade and naming the process for what it is.
And that’s the IT Story That Dare Not Speak its Name: ethics.
Over at The Register, Richard Chirgwin wrote an article, Australia: Stop blaming Centrelink debts on its IT systems (published 6 Jan 2017). Much of what Richard wrote consists of fairly sensible statements about how this isn’t a “classic” IT problem, but rather:
it demonstrates that the government’s blind faith in big data analysis is completely misplaced.
Richard goes on to say:
It’s not an IT story, because the computer systems, as geriatric as they are, are calculating exactly what they’re asked to calculate.
This isn’t a bug in an IT system: it’s an executive giving developers instructions to implement a malicious system.
And here is where I differ from Richard. Richard wants to say this isn’t an IT problem; I say it is, and in fact is a near-perfect example of an escalating problem in IT as compute and data processing capabilities expand.
Advances in data processing mean we must ask, ever more regularly: just because we can do something, ought we to do it?
Now, the simple answer to the question above is that the developers and IT staff involved were presumably instructed to program the data analysis system in the way it has been configured (e.g., averaging someone’s fortnightly income as if they had been paid it evenly all year).
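To make the flaw concrete, here is a deliberately simplified sketch of how smearing a year’s income evenly across fortnights can manufacture a debt for someone who was paid entirely correctly. All figures, thresholds and the toy income test below are invented for illustration; the real Centrelink rules are considerably more complex.

```python
# A deliberately simplified model: all thresholds, rates and amounts below
# are invented for illustration and do not reflect actual Centrelink rules.

FORTNIGHTS = 26
PAYMENT = 500.0            # hypothetical full fortnightly welfare payment
INCOME_FREE_AREA = 100.0   # hypothetical earnings allowed before payment reduces
TAPER_RATE = 0.5           # hypothetical: 50c lost per dollar earned over threshold

def entitlement(earnings):
    """Per-fortnight entitlement under the toy income test."""
    excess = max(0.0, earnings - INCOME_FREE_AREA)
    return max(0.0, PAYMENT - TAPER_RATE * excess)

# Someone works the first 13 fortnights at $2,000/fortnight, then is
# unemployed and correctly receives the full payment for the remaining 13.
actual_income = [2000.0] * 13 + [0.0] * 13
received = [0.0] * 13 + [PAYMENT] * 13

# Correct assessment: compare each payment against that fortnight's earnings.
correct_debt = sum(max(0.0, paid - entitlement(earned))
                   for paid, earned in zip(received, actual_income))

# Flawed assessment: average annual income over every fortnight ($1,000 each),
# as if the recipient had been earning while on welfare.
averaged = sum(actual_income) / FORTNIGHTS
flawed_debt = sum(max(0.0, paid - entitlement(averaged)) for paid in received)

print(f"correct assessment: ${correct_debt:,.2f} owed")    # $0.00 owed
print(f"averaged assessment: ${flawed_debt:,.2f} owed")    # $5,850.00 owed
```

Even in this toy model, the recipient who did everything right is handed a multi-thousand-dollar “debt” purely as an artefact of the averaging.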
Without wishing to invoke Godwin’s law (for we’re not in any way approaching that situation), we do have to consider the Nuremberg defence, viz., “I was just following orders”. This is, as we know, a discredited argument; both legally and ethically it is a generally accepted principle that doing something wrong simply because a superior told you to do so is insufficient justification, and, more importantly, it does not deflect guilt from the act.
Stepping back from the precipice of that extreme comparison, we know from ordinary criminal law and ethical standards that committing a crime, or acting unethically, simply because you’ve been told to is still wrong, and often still illegal. If a bank teller were told by his manager to siphon $1 from each deposit into another account, obeying the manager’s order would be both criminal and unethical. If a clerk in a bank were told by her manager to report a customer with a pristine credit record as a bad debt risk, simply because the manager was trying to get back at the customer, this too would be unethical, and potentially illegal. In neither instance does “following orders” absolve the teller or the clerk of responsibility for the behaviour.
Some would claim this is not the case in the Centrelink debacle, since the government can change the law to suit itself, and in this case appears to have done so. But are you excused from responsibility simply because what you are doing is acceptable by the letter of the law, yet happens to be unethical? Protection from criminal prosecution may be guaranteed; freedom from ethical obligations is not so easy a line to draw. Ethics and law do not overlap 1:1. Many things have been legally permissible that fail to meet current ethical standards. Slavery was legal in much of the United States. Segregation was legal in much of the United States. The forced removal of the children of Indigenous families in Australia was legal. Chemically castrating men for being gay was legal. None of these activities was ethical.
So let us return to the original question: regardless of any government attitude towards the legality of the action, was it ethical for the IT staff and developers involved in the construction of Centrelink’s system to follow orders?
Think of the root result of the debt analysis being performed: an overturning of “innocent until proven guilty”, one of the basic tenets of modern society. Whatever the alleged offence, we expect someone to be given the benefit of the doubt until such time as it can be satisfactorily proven that an offence has been committed. In this case, the review process takes place after the statement of guilt has been issued, and people are being forced to repay debt they do not owe while attempting to prove they do not owe it. This is a significant breach of how we, as members of the public, expect even administrative cases to proceed. If A states that B owes A money, A should prove it, and B should be given the opportunity to disprove it. Only once both parties are satisfied a debt has been incurred (or a third party has ruled it to be the case) should B have to start paying it back.
As the world becomes increasingly automated, and data analysis becomes more pervasive, we must not shy away from ethical considerations of what we, as IT professionals, are asked to participate in.
In Unified Ethical Frame for Big Data Analysis (March 2015, Martin Abrams, Information Accountability Foundation), it is suggested there are five essential values for an ethical approach to big data, namely:
- Beneficial
- Progressive
- Sustainable
- Respectful
- Fair
In Beneficial, we’re told:
The act of big data analytics may create risks for some individuals and benefits for others or society as a whole. Those risks must be counter-balanced by the benefits created for individuals, organisations, political entities and society as a whole.
The risks should also be clearly defined, so that they may be evaluated.
The key proposed benefits are a reduction in welfare fraud and the return of improperly received funds. Yet Centrelink’s own internal documentation has been found to say the debt calculation should not be performed in the way it is now being performed (a reference is provided in the Register article linked above). We therefore have a situation where the risk had been previously defined, documented and cautioned against, but was ignored.

While there is a utilitarian stance to the ‘Beneficial’ clause (the counter-balancing of risk with benefit), with current estimates that at least 20% of debt recovery notices are incorrect, it is an increasingly long bow to draw to suggest that society will be better off under the recently activated debt recovery processes. The administrative work required to process the extortionate claims will be costly, and there are, being blunt, bigger fish to fry than a “guilty until proven innocent” approach to people on welfare: multinational tax loopholes, for instance, and the large number of companies that avoid paying tax.

Additionally, the medical costs of targeting people on welfare in this way are likely to grow quickly: increased stress and anxiety lead to increases in mental health problems, potentially reducing the ability of those same people to work. People unfairly forced to pay money back while proving their innocence will have to cut costs in their lives (and welfare is already frugal beyond belief, as evidenced by Even conservatives say the dole is too low, Misha Schubert, Sydney Morning Herald, 16 October 2011), and a normal starting point is fresh food. These flow-on impacts are unlikely to have been calculated, but will undoubtedly diminish any expected gains from using a calculation the department had previously cautioned its own staff against.
In Progressive, we’re told:
Organisations should not create the risks associated with big data analytics if there are other processes that will accomplish the same objectives with fewer risks.
In this case, there was already another process in place: human intervention and human analysis of suspected debt fraud cases before any communication took place between the department and welfare recipients. It could therefore be argued the department has not only failed to be Progressive, but has in fact been regressive: it replaced a functional (if slower) system with a faster one that generates higher error rates and causes greater harm.
The Respectful clause:
relates directly to the context in which the data originated and to the contractual or notice related restrictions on how the data might be applied.
In this case, if the data is correctly recorded but incorrect calculations are being executed against it, it can hardly be argued that the data, or the individuals to whom it relates, are being treated respectfully. Equally:
Fairness relates to the insights and applications that are a product of big data
Extending the above: the data has not been dealt with respectfully, nor have the resulting calculations been performed fairly.
In this, then, the Centrelink debacle does represent an IT failure: a failure of IT education, a failure of IT process, and a failure of IT professional ethics.
If we don’t start fixing IT education and developing a universal code of conduct for IT professionals, this sort of problem will recur with frightening regularity.