The Clash of Methodologies


Analytic philosophers think of themselves as pursuing knowledge, and they hold that knowledge must be true: anything that is not true is not knowledge.  They also believe that we are capable of direct perception and can perceive facts about our environment.  Once such facts are available as premises, deductive logic can be applied to draw factual conclusions.  I will refer to this as the DP (direct perception) methodology.

But perceptual facts are not available.  Articles like "Epistemological Problems of Perception" (https://plato.stanford.edu/entries/perception-episprob/) illustrate the difficulties encountered when perceptual facts are sought.  Yet they are still pursued, perhaps because no alternative approach seems feasible.

A representational approach cannot use a methodology that depends on direct access to facts about our environment, as it assumes that our perceptual system can deliver only approximate representations.  We have to assess these representations and decide whether they are accurate enough to be treated as true.  This approach cannot use the DP methodology and has to develop what I will refer to as the RP (representational perception) methodology.

When our perceptual system creates a representational form, this form is compared with forms stored in neural networks in our brain.  Neuroscientists discuss how this comparison is done.  ART (Adaptive Resonance Theory) mechanisms test for resonance between the networks that store the forms.  If there is resonance between the networks, the forms are similar; the degree of resonance determines the degree of similarity, and our neural systems generate a signal (that we become conscious of) when forms are similar.  We experience this signal as a "feels right" feeling.

If this feeling is strong enough, we conclude that the forms being compared represent the same object.  We can look at a banana and recognise it as a banana because the form created by our perceptual system resonates with a 'banana' form retained in memory, giving us a strong 'feels right' feeling.  We then assume that it is a fact that the object we are looking at is a banana.
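The recognition loop described above can be caricatured in code.  The sketch below is only a toy illustration, not Grossberg's actual ART dynamics: a perceived form is stood in for by a feature vector, "resonance" by cosine similarity against stored forms, and a similarity above a threshold plays the role of the "feels right" signal.  All feature names and numbers are invented for illustration.

```python
import math

def similarity(a, b):
    """Cosine similarity between two feature vectors: our toy
    stand-in for the 'degree of resonance' between networks."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def recognise(perceived, memory, threshold=0.9):
    """Return the best-matching stored label if the match is strong
    enough to 'feel right', otherwise None."""
    label, score = max(((name, similarity(perceived, form))
                        for name, form in memory.items()),
                       key=lambda pair: pair[1])
    return label if score >= threshold else None

# Hypothetical stored forms (elongation, curvature, yellowness).
memory = {"banana": [0.9, 0.6, 0.95], "apple": [0.2, 0.9, 0.1]}

print(recognise([0.85, 0.55, 0.9], memory))  # close to the stored banana form -> banana
print(recognise([1.0, 0.0, 0.0], memory))    # resonates with nothing strongly -> None
```

Note that recognition here is a matter of degree plus a threshold, not a logical deduction, which is exactly the point the RP account is making.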

This RP methodology shows that we do not perceive facts.  Instead we compare forms and 'feel' whether they are similar enough to be treated as the same.  The entire process is non-rational; it is driven by resonance between neural networks.

The ART mechanism works in conjunction with the neural abstraction mechanism to create a simplified representation of whatever situation we are considering.  We can think of these neural processes as creating a model that is suitable for the application of reason and logic: they simplify representational models so that we can apply logic to them.

The ART comparison process can create very strong 'feels right' feelings, and the conviction that we perceive directly is itself created by this mechanism.  The ART mechanism is so successful in identifying objects during perception that it generates a 'feels right' feeling strong enough to convince us that we perceive directly: we believe that we see the material object rather than the brain-created representation of it.

When, for example, we have blurred vision, we have no difficulty separating the image from what it represents and accepting that we become conscious of the image; but when we have sharp vision we assume we see the object rather than an image of it.  It cannot be that in one case we perceive directly and in the other we do not.  The RP model can account for both experiences of perception, but the DP model fails, as it assumes that there are no representations.

Based on this theory we can think of DP perception, and the DP methodology it seems to support, as simplifications: abstractions of the more detailed RP model and the methodology it supports.  While these highly abstracted DP models are accurate enough for many applications, they create significant theoretical problems for philosophy and should not be used in that discipline.


It seems appropriate at this stage to briefly introduce the methodological change that must accompany indirect perception (IP).  Several philosophers, Locke, Hume, Kant and others, proposed indirect perception, but these proposals were interpreted as philosophical arguments and were rejected.  In doing this, philosophers were applying a well-established methodology: obtain the facts, deduce consequences, and assess the feasibility of the result.  But indirect perception poses a challenge to this methodology because, if you accept indirect perception, you have to accept that facts cannot be determined as is traditionally done in philosophy.

If you can directly observe the environment, you can reasonably claim to be able to determine facts about it; but if you are limited to indirect perception, you are fundamentally restricted to becoming conscious of phenomena generated by neural activity, phenomena that, as far as we can tell, are good representations of objects, scenes, etc. in the environment.  You could possibly determine facts about these representations, but when it comes to the environment being represented you are limited to inferences.

This may seem a trivial change, but it is not, as there may be facts about the model that are not (as far as we can know) facts about the world.  For example, we normally attribute colours to the things we see in the world, but if an IP perspective is adopted we have to accept that these colours may be limited to the internal model.  We cannot address the question, "are there colours in the external environment?" because we have no means of testing for this.  We can test for and detect EMR (electromagnetic radiation) of varying frequencies, as our perceptual system internally generates colours when stimulated by part of this spectrum.  These internal colours reveal things about the EMR, not about external colours.  We can reasonably claim that the representation created by our neural system is coloured, but we should avoid inferring that the objects represented by these colours are themselves coloured.  Similarly with mechanical vibration and the internal phenomenon we refer to as sound: we can reasonably make a connection between this internal phenomenon and the external vibrations that stimulate our neural circuits to produce it, but to claim that the phenomenon "sound" is present in the external environment is to make a DP inference.  From an IP perspective we must differentiate between the phenomena we become conscious of and the stimuli that trigger those phenomena.

While we can normally be assured that the shapes produced by our visual perception are accurate (when that perception is working well), other sensory products, such as the feel or texture of a surface, colour, sound, smell and taste, are all suspect, and we are better off thinking of these as internally generated phenomena that are reliably linked to, or associated with, particular external conditions.  But although this separation between the inner phenomenal world and the external environment is essential to the IP methodology, it is only one of several differences, and it seems as if IP demands a completely different mindset.

We have to abandon the mindset that expects to gather facts and make deductions in a uniworld (a single, directly perceived world) and replace it with one that expects to build and test internal models that reliably represent the external environment.

Let me use the geocentric/heliocentric issue as an example.  It must have seemed obvious to everyone in the ancient world that the sun circles the earth.  This was once considered a fact, determined by direct observation; note the DP methodology of determining facts from intuitions.  Despite the difficulties of explaining the motions of the planets, the geocentric theory was not abandoned until a different methodology was developed.  The intuition was so strong and so obvious that it could not be challenged by intuitions that were less strong and less obvious.  The geocentric theory was locked in by the methodology used, because that methodology rested on the feasibility of the supporting intuition, and there was nothing strong enough to displace it.  A different methodology had to be adopted before a different model could be established.

Note that we now have to discuss models rather than direct observations of the environment as it is.  This shift from observing "the environment as it is" to working with "models that represent the environment" had to take place before the heliocentric model could be considered.  Once the shift takes place we can see the geocentric theory as a model; before the shift it was a direct observation.  This is the fundamental change in perspective that is being thrust on us by IP.

From this IP perspective we can say that a theoretical model was proposed that was better able to predict planetary motion, and this model eventually displaced the more intuitive geocentric theory.  This is history, but we need to understand how it was possible for a very strong, very obvious intuition to be displaced by a theoretical model, as we face a similar situation when dealing with perception.  There seem to be several forces at work: (a) the failure of the obvious intuition to account for some observations; (b) the availability of an alternative (modelling, IP) method that facilitated a shift to a more detailed model; (c) the fact that the more detailed model, though counterintuitive, produced better results.

Both methods rely on intuitions.  I don't think it is possible to avoid a dependence on intuitions, but there is a difference in the way intuitions are treated.  While the DP method emphasises the strength of an intuition, the IP method focuses on its reliability.  Note that strength suggests emotive force, while reliability avoids this.

Suppose we decide to rank intuitions on the basis of reliability rather than intuitive appeal, and that instead of deducing what we can from our intuitions, we construct what we can from those intuitions we consider sufficiently reliable.  These two moves go together: when we restrict ourselves to the most reliable intuitions we cannot deduce much from them, so constructing a model from them becomes the obvious alternative.  If we use repeatability as the test for reliability, we are essentially using the scientific method.

When neuroscientists construct their model of perception they use this methodology, and they present a model of perception that many philosophers have rejected using their own methodology.  This suggests an outright clash of methodologies, but it seems at least possible that we can combine them and benefit from doing so.  One way of combining them is to accept the neuroscience model of perception and then use the philosophical method to investigate the consequences of this move.  This is what I propose to do in this blog.

But accepting the neuroscience model of perception is a big counterintuitive leap, perhaps bigger than the change from geocentric to heliocentric.  We have to accept a complete phenomenal inner model, and this is difficult, as we are all raised on the great illusion and find it strongly intuitive.  But neuroscience tells us that this great illusion is simply not possible, and so the intuition of direct perception has to go the way of the geocentric theory.  The light striking our eyes produces neural activity.  Our becoming conscious, and what we become conscious of, is entirely dependent on neural activity.  Let me emphasise this: everything, the phenomena that we become conscious of, consciousness itself, and the intuition that we see the world as it is, all of this is created by neural activity.

It takes some intellectual effort to understand this, and relaxed, reflective thought to work through the implications.  We have to avoid being panicked by the consequences and examine each stage of the conceptual development before moving on to the next.

There is a limit to what this neural activity can achieve.  It can create representations, and it can create the intuition that the world is identical to these representations.  However, if we follow the mechanisms of perception and understand their limitations, we have to accept that the intuition "we see the world as it is" is beyond the capacity of neural activity and must be false.  It is not physically possible for this intuition to be correct.  This IP position, advanced by neuroscientists, is similar to, if not the same as, the position of several philosophers going back to Pythagoras (according to some) and including well-recognised figures, notably Locke, Hume, Kant and Schopenhauer, but it has not gained acceptance.  Why?  Certainly it is counterintuitive, but other counterintuitive positions have been established (mainly as a result of scientific research), so why not this one?

I think that philosophers reject this position for a combination of factors: because it is so counterintuitive, because of the repercussions of holding it, and because the methodology philosophers use makes their work particularly vulnerable to strong intuitions.  I want to spend some time discussing the repercussions of holding this position, but first a note on philosophical methodology.

Note that we cannot get away from intuitions; this is one of the limitations we all have to work with.  If you trust some scientific theory, this trust is based on an intuition.  You may have good reason to trust a particular theory, I am not questioning that, but the jump from having good reasons to believe a theory to claiming that the theory is true is a big step.  Karl Popper's falsification thesis argues that we should not make this jump; we should retain the separation between theory and the part of the environment the theory represents, and expect that the theory may change in the future.  This introduces a healthy bit of scepticism while retaining a functional relationship between theory and what it represents.

Philosophers (many of them) seem to accept this with regard to scientific theory, but when dealing with their own discipline they think of themselves as determining facts and deducing whatever can be deduced from them.  A typical example: all men are mortal; Socrates is a man; therefore Socrates is mortal.  Having started with premises that are facts, once the logic is applied without error the conclusion is also a fact.  The assumption here is that philosophical theory is not merely representational; it can tell us how the world is and give us facts about the world.  From this perspective, while philosophy deals with facts, science is limited to creating representations.  This difference in methodology is championed by philosophers (many of them) and held up as one of the significant factors that separate philosophy from science.  It is a feather in the cap of philosophers, as they can claim more precise and exact results for their discipline by using this methodology.

It may seem that there is something wrong with this.  How is it possible for an armchair discipline to arrive at facts, while science with all its rigorous testing cannot?  But before examining this question, let us look very briefly at the effects this philosophical methodology tends to have on its discipline.  It emphasises language and encourages a very precise interpretation of language.  This seems to follow from the underlying DP assumption and could be put as follows: since we are directly describing the world, we need to use accurate and precise language, as any other language would be at best vague and likely false.  But interpreting language in a way that renders it precise and accurate is fraught with difficulty, and attempts to do this quite often result in nit-picking examinations of the claims made.  This has a knock-on effect: it encourages philosophers to use language that appears accurate and precise but is difficult to interpret, and to make narrower and simpler claims (which are more easily defended).  The approach also "seems" to lead to inconclusive conclusions and an excessive use of terms like "seems", "roughly", "perhaps", "possibly".  Perhaps this is to be expected: when language is going to be interpreted very critically, yet has to be used to describe something that cannot be stated clearly (perhaps because it is complicated or intricate), the language has to be made intentionally vague while sounding precise.  A lay person reading philosophy could certainly get this impression.  Perhaps the assumption that language should make accurate and precise claims about the environment should itself be challenged.

Why interpret language as making factual or very precise claims?  Philosophers of language seem to rely on an intuition that language should be interpreted this way, and they brush off criticism.  Suggestions that this is not how language is used in the lay community to which they belong are interpreted as criticisms of lay language users rather than criticisms of language.  The sentiment seems to be: if lay people are not very precise in their use of language, that is to their detriment; we philosophers have to rise above this sloppy behaviour and use language precisely.  This criticism is extended to scientists, who are sometimes accused of sloppy language.

But on what basis do philosophers defend their own use and interpretation of language?  It seems as if their way of using language is grounded in an intuition: the feeling that precision is essential for scholarship.  It also seems to be assumed that this intuition is so basic that we need not examine it, and that we can take it to be a fact that this precise way of interpreting language is necessary for philosophy.  I will challenge this later.

Another intuition that seems to go unquestioned relates to the use of logic.  The assumption seems to be that we are rational thinkers and that the mark of a good philosopher is his or her use of logic.  This, combined with the determination of facts discussed above, seems to guide philosophers along a trajectory that leads to isolation from science.  Philosophers cannot use scientific findings; these are not facts.  As a result, philosophers who wish to deal with facts (most of them) are limited in what they can apply their scholarship to.  In casting about for something factual to analyse, philosophers seem to have focused on language because, interpreted as making precise statements, it appears to have the precision they are searching for.  Language may also be an attractive topic because it presents something that can be more easily analysed: the written word stays firm and steady and seems to have a structure that can be analysed using logic.  Concepts give the same appearance of exactness and certainty, and so DP philosophy tends to focus on language, logic and conceptual analysis.

This snapshot of DP philosophy is taken from an IP perspective; it steps outside the constraints imposed by DP and uses vague and imprecise language to provide an overview that is acknowledged to be incomplete and so possibly misleading.  IP accepts that language is, by its very nature, imprecise.  It explains why this is the case (see the page on two views of language), but maintains that even though language is imprecise by nature it is still a very powerful and useful tool.

In contrast to DP philosophy, IP philosophy relies heavily on similarity of form and on induced associations between forms.  Modelling and abstraction also play major roles.  Let me sketch briefly how I conceptualise this.  When we observe the world we detect forms.  Through abstraction we can group these forms into sets and families of sets.  We can, for example, abstract from the form of a particular dog to create the generalised form "dog", and anything that has this form is part of this set.  This set is part of a larger set that we tag with the label "animal", and all the objects that con-form to this form are part of that set.  Our normal perception provides models at a particular level of detail, and as we abstract away from this detail we create models that are both simpler and more general.

Language and concepts both operate at a relatively high level of abstraction (LoA), and so when I use language and concepts I am constructing models that are highly abstracted.  These are generalised models that have wide application.  On this approach, where language is thought of as constructing highly abstracted models, the precision of the language itself becomes less important and is overshadowed by the need to create models that do an adequate job of representing what we are discussing.  The focus therefore shifts from the language to what DP philosophers may sometimes refer to as the semantics of the language, and what I refer to as the models that represent whatever we are discussing.  The intent of language in IP is to construct models that adequately represent.
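The grouping of forms into nested sets can be caricatured with a few lines of code.  This is a toy sketch, not a claim about neural implementation, and all the individuals and features are hypothetical.  Each abstracted form is represented as a set of required features; rising to a higher level of abstraction simply means requiring fewer features, which is why the more abstract model is both simpler and more general.

```python
# Hypothetical individuals and their detected features.
individuals = {
    "fido":   {"barks", "four_legs", "breathes"},
    "rex":    {"barks", "four_legs", "breathes"},
    "tweety": {"flies", "two_legs", "breathes"},
}

# Abstracted forms: the features a thing must have to con-form.
forms = {
    "dog":    {"barks", "four_legs"},
    "bird":   {"flies", "two_legs"},
    "animal": {"breathes"},  # higher LoA: fewer required features
}

def conforms(thing, form):
    """A thing belongs to a set if it carries all the set's required features."""
    return forms[form] <= individuals[thing]

print([t for t in individuals if conforms(t, "dog")])     # -> ['fido', 'rex']
print([t for t in individuals if conforms(t, "animal")])  # -> ['fido', 'rex', 'tweety']
```

The higher the level of abstraction, the wider the membership: "animal" demands only one feature and so captures everything, mirroring the move from the detailed perceptual model to the simpler, more general one.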

The different methodologies discussed here, along with the different interpretations of language, put IP and DP on divergent paths, and it will be interesting to see how each deals with the issues that attract the attention of philosophers.
