Yagmur Denizhan: Performance-based control of learning agents and self-fulfilling reductionism.

Yagmur Denizhan: Performance-based control of learning agents and self-fulfilling reductionism. Systema 2, no. 2 (2014) 61-70. ISSN 2305-6991. The article is licensed under the Attribution-NonCommercial-NoDerivatives 4.0 International License. A PDF file is here.

Abstract: This paper presents a systemic analysis made in an attempt to explain why, half a century after the prime years of cybernetics, students started behaving as the reductionist cybernetic model of the mind would predict. It reveals that the self-adaptation of human agents can constitute a longer-term feedback effect that vitiates the efficiency and operability of the performance-based control approach.

From the Introduction:

What led me to the line of thought underlying this article was a strange situation I encountered sometime in 2007 or 2008. It was a new attitude in my sophomore class that I had never observed before during my (by then) 18-year career. During the lectures, whenever I asked a conceptual question in order to check the state of comprehension of the class, many students would return rather incomprehensible bulks of concepts, not even in the form of a proper sentence; a behaviour one could expect from an inattentive schoolchild who is suddenly asked to summarise what the teacher was talking about, but with the important difference that, as I could clearly see, my students were listening to me and I was not even forcing them to answer. After observing several examples of such responses, I deciphered the underlying algorithm. Instead of trying to understand the meaning of my question, searching for a proper answer within their newly acquired body of knowledge, and then expressing the outcome in a grammatically correct sentence, they were identifying some concepts in my question as keywords, scanning my sentences of the last few minutes for other concepts with a high statistical correlation with these keywords, and then throwing the outcome back at me in a rather unordered form: a rather poorly packaged piece of Artificial Intelligence.
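
To make the deciphered "algorithm" concrete, here is a minimal, purely illustrative Python sketch of the response strategy described above. Everything in it (the stopword list, the tokeniser, the function names, the sample lecture) is hypothetical and invented for illustration; it is a toy model of the behaviour, not anything taken from the article.

    from collections import Counter

    # Hypothetical stopword list; a real one would be far larger.
    STOPWORDS = {"the", "a", "an", "is", "of", "in", "to", "and",
                 "what", "does", "do", "its", "from", "around"}

    def tokenize(text):
        """Lowercase, strip punctuation, drop stopwords."""
        words = (w.strip(".,?!;:").lower() for w in text.split())
        return [w for w in words if w and w not in STOPWORDS]

    def keyword_echo(question, recent_sentences, n=5):
        """The deciphered strategy: treat the question's content words as
        keywords, scan the lecturer's recent sentences for terms that
        co-occur with them, and return those terms as an unordered bag:
        no grasp of meaning, no grammar, just statistical correlation."""
        keywords = set(tokenize(question))
        correlated = Counter()
        for sentence in recent_sentences:
            tokens = tokenize(sentence)
            if keywords & set(tokens):  # the sentence mentions a keyword
                correlated.update(t for t in tokens if t not in keywords)
        return [term for term, _ in correlated.most_common(n)]

    # Hypothetical usage with an invented lecture fragment:
    lecture = [
        "A feedback loop feeds the output of a system back to its input.",
        "Negative feedback stabilises the system around a set point.",
        "Positive feedback amplifies deviations from the set point.",
    ]
    print(keyword_echo("What does negative feedback do?", lecture))
    # e.g. ['system', 'set', 'point', 'loop', 'feeds']

Note that nothing in the sketch parses the question or constructs an answer; it merely returns the bag of terms that co-occur with the question's keywords, which is what makes such replies read as a poorly packaged piece of Artificial Intelligence.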
It was a strange experience to witness my students as the embodied proof of the hypothesis of cognitive reductionism that "thinking is a form of computation". Stranger, though, was the question of why, all of a sudden, half a century after the prime years of cybernetic reductionism, we were seemingly seeing its central thesis actualised.