Privacy or Security?

Ever wondered if you were being monitored, or if your data and personal information were being stored? There is no denying that technology and data used for the surveillance of citizens can benefit the security and safety of society. However, privacy is a fundamental aspect of each individual's human rights, and "people are saying that they are just starting to understand that their personal information can be used against them" – Ed Santow. This realisation has increased society's need to question whether or not technology products have been designed to protect basic human rights.

Have data and technology been used accurately and fairly? And who is impacted by them?

Human Rights Commissioner Ed Santow takes a look at the accuracy and fairness of data and tech used in the surveillance of Australians in New South Wales. He argues that A.I. and algorithms can amplify bias, which may lead to discrimination "on the basis of something they (people) can't control" – including, but not limited to, ethnicity, gender or age. This suggests that the "deep seated risks" relate to equality, or the lack thereof.

Ed Santow points to the Identity Matching Services Bill, for instance: "a legal framework that would essentially allow mass surveillance in Australia for the first time". Yet the technology behind it is neither accurate nor fair, with a reported "98% error rate". Moreover, the errors made are not fairly, accurately or evenly distributed. The technology is highly competent at recognising and distinguishing between average white males and females, but far less accurate at distinguishing features and recognising people of colour.

The Suspect Target Management Plan (STMP) uses algorithms to place people the technology deems to be risks under increased police surveillance, in an attempt to anticipate crime. The STMP leaves room for inaccuracy because its algorithms draw on historical conviction data that has itself been shaped by prejudice. One example of this inaccuracy: of roughly 1,800 individuals placed on the STMP list by these algorithms, "56% were Aboriginal or Torres Strait Islander". This is a striking proportion considering that only around 3% of New South Wales's population is of Indigenous background. Deductions drawn from such data and algorithms therefore increase discrimination towards people of colour.
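To put that disparity into perspective, here is a rough back-of-envelope calculation using only the figures quoted above (the 1,800-person cohort, the 56% share, and the roughly 3% Indigenous share of the NSW population). It is purely illustrative, not part of the original reporting or research.

```python
# Rough back-of-envelope check of the over-representation described above,
# using only the figures quoted in this article (illustrative, not official).

cohort_size = 1800            # individuals placed on the STMP list
indigenous_share_stmp = 0.56  # "56% were Aboriginal or Torres Strait Islander"
indigenous_share_nsw = 0.03   # roughly 3% of the NSW population

indigenous_on_list = cohort_size * indigenous_share_stmp
overrepresentation = indigenous_share_stmp / indigenous_share_nsw

print(f"Indigenous people on the list: ~{indigenous_on_list:.0f} of {cohort_size}")
print(f"Over-representation: ~{overrepresentation:.0f}x their share of the NSW population")
```

On those quoted figures, Indigenous people appear on the STMP list at roughly nineteen times their share of the state's population.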

It is not only people of colour who are negatively affected, but also people with disabilities. Even though apps have been built to help, not all people with disabilities have access to them, and in some ways they remain shut out from society. As Ed Santow notes, technology is in many cases designed for able-bodied individuals aged roughly 25 to 50 who have every advantage in the community. Of all complaints made about technology and its inaccuracy and unfairness, 40% have been about disability discrimination.

What needs to be done to protect people and their human rights?

Technology has not always been designed to protect people's basic human rights, and the need for improvement remains. People want honesty and trust from police when facial recognition, identity matching or suspect target management is used against them. They wish to be notified if their identity is being monitored or if frameworks such as the STMP have placed them on a watch list. They want ethics and equality built into these frameworks, so that they are not discriminated against, judged unfairly or falsely accused. The laws governing such frameworks need to be robust, grounded in human rights, and better enforced.

Ultimately, even where data and tech are intended to be used accurately and fairly, A.I. and algorithms have proven to make frequent errors, struggle to distinguish between people reliably, lack ethics, and are not completely free of bias.

Listen to the full episode of Think: Digital Futures ‘How Will Tech Affect Our Human Rights’ here.

DATE POSTED
Thursday 15th of October, 2020
