Antonis Faras
4 min read · Jul 29, 2021


Image: Redrawing Images (https://timrodenbroeker.de/)

Thoughts On Algorithms

In this post, I focus on algorithms and how we can approach them in order to make them more accessible and transparent. The question that intrigues me is “how can algorithms be understood and examined?” rather than “what are algorithms?”, to which my answers would be trivial.

I have not discussed Big Data or AI, since they are separate topics in my mind.

I. Reflection

When we try to define an algorithm, we think of a well-defined computational procedure that takes values and data as input and produces an output according to a classification system. Algorithms appear to give us answers in an automated and consistent manner, yet we usually neglect the method, and the reasons for choosing a particular classification in relation to the inputs.
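This definition can be made concrete with a toy sketch. The function below is purely illustrative: the category names and thresholds are hypothetical design choices, not taken from any real system, which is exactly the point — the output follows mechanically from the input, but the classification scheme itself was chosen by someone.

```python
# A minimal sketch of an algorithm as a classification procedure.
# The categories and thresholds are hypothetical, invented here only
# to illustrate the input -> classification -> output pattern.

def classify_engagement(clicks_per_day: int) -> str:
    """Map a raw input value onto a category of a classification system."""
    if clicks_per_day >= 50:
        return "highly engaged"
    if clicks_per_day >= 10:
        return "engaged"
    return "passive"

# The procedure is consistent and automated...
print(classify_engagement(3))
print(classify_engagement(70))
# ...but nothing in the code explains *why* 10 and 50 are the cut-offs.
```

The mechanical consistency is visible; the rationale for the thresholds is not. That gap is where the questions of this post live.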

Based on that, algorithms appear either mechanical or, in critical approaches, are identified as black boxes that need to be studied carefully. To understand them, we have to lift the veil of opacity imposed on algorithms, which can be attributed to the following reasons:

  • intentional corporate or institutional self-protection and concealment
  • the current state of affairs where writing (and reading) code is a specialist skill
  • the mismatch between mathematical optimization and the demands of human-scale reasoning and interpretation.

By looking at all of these reasons as complementary, we are able to see that the barrier to understanding algorithms is not simply access and expertise, but the very idea of “The Algorithm”. Among other things, algorithms have been rationalized as a source of truth, neutrality and common sense through their presentation as “problem-solvers” or “innovation-enablers”, while we need to look at them as a social construction.

So, why should we criticize and study algorithms more? What is the role of algorithms related to digital technologies?

If we want to understand our digital reasoning and behaviors, and as a society to understand existing classification methods and their applications, we need to strive for transparency. This quest for transparency calls for interdisciplinary cooperation and, in my opinion, involves the following necessary steps:

  • Identification

As D. Rushkoff puts it, “in order to maintain autonomy in a programmed environment, we have to understand that there is programming going on”. Through multiple channels and means, algorithms can guide our (digital) behavior and, to some extent, reinforce in us the values and logic of the classification system on which they are based.

To identify, we need to find a causal relationship between our behavior and the algorithm, but this will not be enough until we uncover the how and the why of this relationship.

  • Testing

The how can be answered with a series of tests and experiments. If we change the data and behavior we feed the algorithm, we can notice some of its patterns. In this sense, we can reverse-engineer it: from the changes in output we can determine probable ‘classifiers’.

Our modern-day problem here is that we cannot speak of “the algorithm” (say, Google PageRank) or of individual testing, since the behavior of each user and our collective behavior results in multiple variations of algorithmic outputs.

This is why we look for classifiers, signals of the classification system, and sets of rules, rather than a single algorithmic logic.
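The testing idea above can be sketched in a few lines. The `black_box` function below is an invented stand-in for a real, opaque system (its features, categories and rules are all hypothetical); the probing loop shows the method itself — hold a baseline fixed, vary one input at a time, and record when the output flips, which hints at probable classifiers.

```python
# A sketch of black-box testing: vary one input at a time and watch
# how the output changes. `black_box` is a hypothetical stand-in for
# an opaque ranking system whose rules we pretend not to know.

def black_box(profile: dict) -> str:
    # Hidden logic (unknown to the tester).
    if profile["age"] < 25 and profile["clicks"] > 30:
        return "promoted"
    return "ordinary"

baseline = {"age": 20, "clicks": 40}

# Probe: change one feature at a time, keeping the rest at baseline,
# and record the output for each variation.
probes = {"age": [18, 30, 40], "clicks": [5, 40]}
for feature, values in probes.items():
    for v in values:
        variant = dict(baseline, **{feature: v})
        print(f"{feature}={v} -> {black_box(variant)}")
```

Where the output flips (here, around age 25 and around 30 clicks), we have found a probable classifier — without ever reading the hidden code. Real systems are harder precisely because, as noted above, collective behavior makes the “baseline” itself unstable.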

  • Acknowledgment

With the existence of machine (deep) learning and multiple data streams, we have to accept that we may not be able to fully answer the why. Our pleas and efforts for transparency must take into account the possibility that we cannot fully explain why a system, or a relationship, behaves as it does.

We generally assume that someone knows exactly what is happening, and that we can discover it or make them share their knowledge. In doing so, we (even the most radical among us) treat algorithms more like god-given laws than like dynamic, complicated social and scientific constructs. Instead, I think we should strive for fragments of knowledge that offer us a depiction of the general aim or purpose of the classification system.

  • Alignment

There is a wide spectrum of difference between literary techniques and computer technologies. In this sense, we need to align “how the machine thinks” with how we, as researchers, scientists and citizens, think. In many cases, we can still see differences between mathematical or theoretical expressions of algorithms and their implementations, and between expected and eventual outcomes.

We need to align the technical aspect with the social: to put engineers and social scientists in the same room.

The aforementioned ideas and thoughts can contribute to making algorithms more transparent. In my opinion, the goal of transparency is interconnected with the pursuit of a different social construct regarding algorithms.

Thank you for your time and have a nice summer!

PS: A nice provocative case of algorithmic bias: https://www.youtube.com/watch?v=FejjAbwUqbA


Antonis Faras

Technology Manager and Researcher, Member of sociality.coop, Ph.D. Candidate at NKUA. Interested @Digital Technology, Maintenance, Economic Alternatives