Ethics and feature detection

Automated feature detection within images (described in my previous post) deploys “machine learning” techniques. A machine learning algorithm scans thousands of “training” images that are pre-labelled with relevant feature descriptors. The algorithm adjusts the parameters in its network data structure until it reproduces those labels when presented with the same images. It thereby “learns” to re-identify those features, and the adjusted parameters generalise: the algorithm can detect the same features in new images it has not previously scanned. This is a neural network approach to machine learning. Amazon describes its feature (object) detection algorithm as such a “deep neural network”:

“The Amazon SageMaker Object Detection algorithm detects and classifies objects in images using a single deep neural network. It is a supervised learning algorithm that takes images as input and identifies all instances of objects within the image scene. The object is categorized into one of the classes in a specified collection with a confidence score that it belongs to the class.”
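Amazon doesn’t publish the internals of that network, but the “supervised learning” it describes follows a standard pattern: score an image, compare the score with the pre-assigned label, and nudge every parameter to reduce the disagreement. Here is a minimal sketch in PyTorch; the tiny model, the ten classes and the hyperparameters are my illustrative assumptions, not SageMaker’s actual implementation.

```python
# A minimal sketch of supervised image classification training (PyTorch).
# Model, class count and hyperparameters are illustrative assumptions only.
import torch
import torch.nn as nn

# A tiny convolutional network: input image -> one score per feature class.
model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(16, 10),  # 10 feature classes, chosen arbitrarily
)

loss_fn = nn.CrossEntropyLoss()  # penalises outputs that disagree with labels
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

def training_step(images, labels):
    """One parameter adjustment: score the images, compare with the
    pre-assigned labels, and nudge every weight to reduce the disagreement."""
    scores = model(images)
    loss = loss_fn(scores, labels)
    optimizer.zero_grad()
    loss.backward()    # back propagation of the error signal
    optimizer.step()   # the "learning": every weight moves slightly
    return loss.item()
```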

The process is not entirely automated. Neural network developers have to devise the network configuration: they decide what constitutes the inputs and outputs of the network, the layers in between (hidden layers), and the sensitivity of the network parameters, that is, the weightings, probabilities, network connections (edges) and threshold values. Louise Amoore explains the ethical dimensions of this design task in her book Cloud Ethics: Algorithms and the Attributes of Ourselves and Others.

“The recognition of edges, motifs, and familiar arrangements is not designed into rules by a human engineer but is definitively generated from the exposure to data. To be clear, this spatial arrangement of probabilistic propositions is one of the places where I locate the ethicopolitics that is always already present within the algorithm. The selection of training data; the detection of edges; the decisions on hidden layers; the assigning of probability weightings; and the setting of threshold values: these are the multiple moments when humans and algorithms generate a regime of recognition” (71).
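Each item in Amoore’s list corresponds to something a developer literally writes down. A hedged sketch of where those “multiple moments” surface as ordinary configuration; every name and value below is an invented placeholder, not anyone’s production setting.

```python
# Each assignment below is one of Amoore's ethicopolitical "moments",
# made explicit as ordinary configuration. All values are illustrative.

TRAINING_SET = "images/curated_subset/"  # selection of training data:
                                         # whose images, labelled by whom?

HIDDEN_LAYERS = [128, 64]                # decisions on hidden layers: depth
                                         # and width shape what the network
                                         # can and cannot represent

CLASS_WEIGHTS = {"person": 1.0,          # assigning probability weightings:
                 "vehicle": 0.8}         # some classes count more than others

CONFIDENCE_THRESHOLD = 0.75              # setting of threshold values: below
                                         # this score a detection is discarded

def accept_detection(label: str, confidence: float) -> bool:
    """The threshold decides which detections exist at all downstream."""
    weighted = confidence * CLASS_WEIGHTS.get(label, 1.0)
    return weighted >= CONFIDENCE_THRESHOLD
```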

She’s critical of the reductive nature of feature detection. In the images in my previous post, feature detection misses the “features” that the sun is glinting off the motorcycle mirror, that it’s windy in front of the shopping arcade, and that a calf is visible between the legs of its mother.

“The tyranny of proliferating machine learning algorithms resides not in relinquishing human control but, more specifically, in reducing the multiplicity of potential futures to a single output” (80).
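That reduction is visible in the final line of most classification code, where a whole distribution of scored possibilities collapses to a single label. A toy illustration, with invented scores:

```python
# A classifier's last layer emits a score for every candidate description;
# taking the maximum keeps exactly one and discards all the rest.
scores = {"motorcycle": 0.62, "mirror": 0.21, "sun glint": 0.11, "wind": 0.06}

single_output = max(scores, key=scores.get)  # 'motorcycle'
print(single_output)  # every possibility below the winning score disappears
```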

Automated feature detection in imagery is a useful test case for the ethics of machine learning, though the application of the techniques extends to other sensory modalities and robotic operations that incorporate any kind of sensory input, including movement.

The unattributable

Amoore makes the case that machine learning highlights the conflicted nature of attribution. Who is responsible for errors and misjudgements? Mistakes in robotic surgery procedures, errors in automated drone strikes, harm to non-combatants and false matches in image analysis provide obvious examples. She asks:

“where would one locate the account of a first-person subject amid the limitless feedback loops and back propagation of the machine learning algorithms of Intuitive Surgical’s robots? When the neural networks animating autonomous weapons systems thrive on the multiplicity of training data from human associations and past human actions, who precisely is the figure of the human in the loop?” (66)

In this passage she suggests that the attribution, and even the liability, for ethically questionable outcomes constitutes a collective responsibility, involving not just the creators of the algorithms but those who contribute to the learning set.

Sometimes attributed

Counter to that sense of shared attribution, I would argue that we rarely distribute responsibility for a social outcome amongst the population we are canvassing. If the average age in a community is 45, then those who use that average as an instrument for closing down schools or aged care facilities bear the responsibility for those decisions. The ethicopolitical decision resides with the statistical advice-givers, the perpetrators of the dodgy analysis of community needs, and the decision-makers. No one would spread the responsibility to the individuals in the census.

I think it’s similar with sophisticated learning algorithms. You can’t apportion a share of responsibility to the contributors to the learning set: the myriad surgical procedures from which the algorithm putatively learns, the operators of the numerous human-controlled drone flights that contribute to the AI training set, the providers of tagged images on photo-sharing sites. The responsibility resides somewhere amongst those who design, select, adjust and use the algorithms, and those who decide on the training sets.

The argument about shared and conflicted attribution is similar to discussions about authorship, originality and copyright within the creative professions. I agree with Amoore that attribution is contextual, fraught, and ultimately resolved by human judgement.

“Ethicopolitical life is about irresolvable struggles, intransigence, duress, and opacity, and it must continue to be so if a future possibility for politics is not to be eclipsed by the output signals of algorithms” (172).

Reference

  • Amoore, Louise. 2020. Cloud Ethics: Algorithms and the Attributes of Ourselves and Others. Durham, NC: Duke University Press.
