Truth
The first task in developing any line of reasoning is to ensure a sound premise. Reasoning from a faulty premise simply produces a situation in which the conclusions reached are in logical conformity with the premise. But if the premise is faulty, so are the conclusions. By definition, there can be no sounder premise from which to reason than the truth. To know when we have it, there must be a standard of verification that can be applied to any claim about truth to determine whether the claim is correct. Such a standard exists - truth is objectified information. To objectify information means to achieve a proof that does not rely on any personal or cultural assumptions. Truth is something that is so, regardless of how anyone happens to “feel” about it. It is a fact independent of any particular aspiration or inclination, with a standard of verification that convinces at the level of our common humanity. How is this to be done?
Prior to the discovery of the method for objectifying information, “truth” was defined on the basis of such things as a casual consensus (which is another name for “conventional wisdom”), tradition, divine revelation, the authority of special persons, etc. The problem with “truth” determined by these standards is that objectified information has frequently shown such claims to be false, whereas the reverse has never happened and, indeed, cannot happen. That is, tradition or divine revelation can never show that a piece of objectified information is not true. Of course, all ideologies (whether religious, political, cultural, ethnic, etc.) fear the power of objectified information to invalidate their claims about truth. Realizing they can’t defuse that power, they attempt to circumscribe its application. They concede that objectified information does apply as the standard of truth for the category of “science”, but for other categories different standards apply. However, we can safely ignore this consideration for the moment, since if one truth cannot contradict another, any truth determined through the objectification of information could not be invalidated by any other method that claimed the ability for truth determination.
Information can only be objectified through what is conventionally known as the scientific method supported by mathematics. No other technique is recognized across the whole spectrum of humanity as having this ability to bring us progressively closer to the truth. It has achieved that recognition because it has successfully analyzed the process by which human beings get to “know” anything. We know that information about the world comes to us through our five senses and creates an internal perception (our understanding of what is going on as contained in our mind/brain). Although there may be a large degree of commonality in the reports of those internal perceptions for the same event, differences in reports can exist from individual to individual. How do these differences in reports of internal perceptions come about? The reasonable assumption is that since the stimulus of a common external event has the potential to produce different responses among individuals, there must be differences in the way people are receiving and/or processing that external information. Indeed, we now know this is the case. There is a dynamic in nature to produce variety and in the case of human beings that dynamic produces uniqueness: no person is identical to another.
That uniqueness applies not only to the obvious external physical differences but also to our sensory apparatus (the gateway through which all information comes) and the structure and chemistry of our brains (the place where we try to make sense of things). For example, a condition such as color blindness is an indicator of genetic differences in sensory receptors. Similarly, there is a condition known as synesthesia that blends mental perceptions which in the majority population are strictly separated. One instance of this is the inability to register the concept of number without simultaneously registering a sensation of color specific to each numeral. These examples are gross indicators of the existence of differences in sensory perception and mental processing of information, and it is reasonable to infer that there also exist subtler differences in both areas that have not been clinically verified. So some differences in reports of internal perceptions can be accounted for by differences in physiology.
In addition to these physiological reasons for the differences in internal perceptions, there are two other areas that bear on this issue. First, there is context. In order for the world to be sensible, new information has to fit in some sort of order with prior information. Since environments (which include culture) can be radically different, those differences can produce different contexts in which the new information is to be understood. Second, all brains tend to rationalize events in such a way as to make their possessors appear to be the “hero of the situation”. This is the ego problem. In consequence there is a point (not necessarily the same point, however) in everyone’s thinking at which we tend to understand things that are somewhat ambiguous in the way that is most advantageous to ourselves. Therefore, other differences in reports of internal perceptions can be accounted for by differences in prior experience and ego.
Now that the fundamentals of how we get to “know” anything are clear, there is no mystery as to why there can be differences in the reports of individuals of their perceptions of a common event. Moreover, no one is in a preferred position with regard to the credibility of their reports, since no one is exempt from the factors that produce the differences - there is no rational basis for preferring the internal perceptions of the Pope to those of the plumber. Those potentially distorting factors are part and parcel of the human condition and they apply to everyone. So at this point we are in the position of having internal perception reports that may not all agree on how the world should be explained or understood. How do we purge the effects of subjective distortions in internal reports to get objective information?
There is a clue on how to proceed. There are some areas in which internal perceptions are always in agreement. If we analyze those elements on which common agreement is reached, we find that there are some fundamental principles of thinking that are hard-wired in our neurology. That is, there are some perceptions that are so vital to our survival that individuals who lack them do not survive to propagate. In consequence, there are some ideas that are truly inconceivable for human beings. For instance, that if a string is cut in two, one of the resulting pieces will be longer than the original (for others, see Euclid). It is important that the use of the term “inconceivable” be understood in the strict sense of something that is beyond human powers of conception, and not in the conventional sense of something that is quite conceivable but vastly improbable, such as a truthful used car salesman. To the degree that these fundamental principles of thought can be catalogued and reasoning is restricted to those principles, we begin to neutralize the subjective effects that can warp our understanding of things. Further, if A is bigger than B, and B is bigger than C, A is also bigger than C. We have the capacity to make logical inferences of this kind, a capacity sharpened through literacy. Using this capacity in connection with the fundamental principles creates rational thought.
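The transitive inference just described is simple enough to check mechanically. The sketch below uses purely illustrative values (hypothetical lengths of three objects) to show that once A > B and B > C are established, A > C follows without any further comparison:

```python
# Transitivity of "bigger than": if A > B and B > C, then A > C.
# The three values are purely illustrative (hypothetical lengths).
a, b, c = 10.0, 7.0, 3.0

if a > b and b > c:
    # The conclusion follows from the ordering of numbers alone;
    # no direct comparison of a against c is needed to know it holds.
    assert a > c
```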
However, rational thought is still not enough to get to objective information. Rational thought, in and of itself, doesn’t do anything except prove consistency. It is an indispensable tool, but to get to the truth, an additional requirement applies. The conclusions of rational thought must be tested against the actual workings of the world. In other words, we must confirm by environmental feedback that what we think is happening, actually does happen.
Finally, the nature of subjective distortions is such that their effects can be so subtle that their possessors may be unaware of them. Therefore, unless others using the required techniques corroborate a report, objectivity has not been achieved.
As noted above, science must be supported by mathematics. What is so special about mathematics?
Since all thought consists of the manipulation of abstractions, the quality of thought is hostage to the precision with which abstractions are manipulated. That is, quality of thought depends on how accurately we employ logic. Now the nature of conventional language is such that its precision breaks down, as a medium for the employment of logic, as the requirement for exactitude increases. This occurs because words have connotations. Not only those connotations that are universally recognized among the speakers of a particular language, but also personal connotations that arise from particular experiences that an individual may have had. For instance, whether or not you agree with the statement that Albert Schweitzer, Genghis Khan, Adolf Hitler, and George Bush are great men depends on the connotations you attach to the word “great”. If you understand it to simply mean famous (that is, well known), then they all make the cut. On the other hand, if your connotation of the word is constrained by the idea of being estimable or noble, then perhaps the designation applies only to one person in that statement. This being the case, no speaker can be sure of the connotations a listener attributes to the speaker’s words. Therefore, the speaker can never know if the listener understands, with precision, the point the speaker is attempting to establish. This problem cannot be remedied by restricting the definition of the words since the restricted definition is also made up of words, and those words have connotations, and so on ad infinitum.
In order to achieve the maximum precision in logic, we need to transcend the limitations of conventional language. That is, we require a “language” with a system of symbols for abstractions that are universally recognized as being devoid of connotations. Mathematical symbols fulfill this requirement. For example, the number one abstracted to the mathematical symbol “1” has a single precise meaning. The value can be expressed in a variety of ways - for instance “sine squared x plus cosine squared x” - but each such expression means precisely what “1” means. It is this ability to put a logical discourse on a firm unambiguous basis that makes mathematics essential to achieving the precision in thought necessary for the determination of truth. Without mathematics there is no certainty that everyone has the same understanding of the elements substantiating a proof and, therefore, no real agreement on whether a proof has been made. It should be clear that, in and of itself, mathematics can’t prove any contention about the nature of things. The value of mathematics is limited to ensuring consistency in logical discourse. Working from a truthful premise, mathematics has the power to reveal other truths that must logically result from that premise.
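The point that “1” has many expressions but a single meaning can be illustrated numerically. This small sketch checks the identity sin²x + cos²x = 1 at a few sample points; the sample values are arbitrary:

```python
import math

# The identity sin^2(x) + cos^2(x) = 1 holds for every real x, so the
# expression "sine squared x plus cosine squared x" and the symbol "1"
# denote exactly the same value - an unambiguity natural language lacks.
def identity_value(x: float) -> float:
    return math.sin(x) ** 2 + math.cos(x) ** 2

for x in [0.0, 1.0, -2.5, math.pi, 100.0]:
    assert abs(identity_value(x) - 1.0) < 1e-12
```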
The objectification of information is possible since we have some understanding of those human functionings that produce the differences in the reports of internal perceptions. On the basis of that understanding, techniques have been invented that allow us to progressively purge those reports of subjective distortions – the methodology of science. This methodology has been summarized by the concepts of “conjecture, criticism, and experiment”. That is, on the basis of an initial review of the data, a conjecture is made as to the nature of some natural process, creating a hypothesis. The hypothesis is then subjected to scrutiny to determine if any anomalies are found in the explanation. If any are found, the hypothesis is criticized for the existence of anomalies. Experiments must then be conducted to resolve the issues disclosed in the criticism.
The methodology is trusted to produce truth because it requires that conclusions be tested against the world and corrected whenever they fail those tests. If it is possible to get to the end of this process with only one understanding that rationally accounts for all the data, some portion of the truth has been discerned. That is, a hypothesis has been transformed into a theory.
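The conjecture-criticism-experiment cycle can be caricatured in code. This is only a schematic sketch with toy data - the function and the "hypothesis" are illustrative placeholders, not a claim that the methodology is mechanical:

```python
# A schematic sketch of "conjecture, criticism, and experiment".
# Everything here is a toy stand-in for human scientific activity.

def refine(hypothesis, data):
    """One pass of the cycle: criticize the hypothesis against the data.

    Returns the anomalies - observations the hypothesis fails to account
    for - which are exactly what further experiments must target.
    """
    return [d for d in data if not hypothesis(d)]

# Toy conjecture: "all observed values are positive".
hypothesis = lambda d: d > 0
observations = [3, 1, 4, -2, 5]

unresolved = refine(hypothesis, observations)
# One anomaly (-2) survives criticism, so this hypothesis has not
# earned the status of a theory.
assert unresolved == [-2]
```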
Incidentally, the self-correcting requirement noted above implies the principle of “falsifiability”. In addition to cataloguing all of the observations that support a conclusion, we must also be able to imagine a hypothetical observation that would demonstrate the conclusion must be false, even though such an observation has not yet been made. The world is, after all, a very complicated place and it would be arrogant, indeed, to assume what you haven’t yet seen couldn’t happen. The principle of falsifiability is simply a “trip wire” in thinking that keeps us alert to that possibility. A famous example of falsifiability for evolution through natural selection is Haldane’s positing of a six-hundred-million-year-old fossil of a rabbit. If one were to be found, evolution through natural selection, as a theory, would have serious problems, since according to the theory rabbits must be a much later product of evolution. Falsifiability is an attempt to ensure that we remain as sensitive in our thinking to disconfirming as to confirming evidence. If there are no conceivable circumstances that could invalidate a conclusion, we know that conclusion does not arise from a disinterested quest for the truth, but is rather an attempt to validate an ideological assumption, and such a conclusion can be summarily dismissed.
It is important to understand that the concept of science is neither an ideology nor a category. It is a methodology concerned solely with gaining an understanding of the nature of things based on a proof that is beyond rational dispute. It does not teach the truth; it teaches what we must do in order to get to the truth. From the viewpoint of that quest, the methodology is more important than the results it obtains.
To see why, let’s consider Newton. In his era, the physics of Newton seemed to be a true explanation of the workings of the cosmos since it rested on proofs that were beyond rational dispute. Using the tools available in that era, the confirmatory feedback was overwhelming. But the tools improved over time and it became possible to examine things at scales vastly larger and smaller than was possible in Newton’s time. Using essentially the same methodology along with the improved tools, it became apparent that at these vastly larger scales the Newtonian explanation had problems. Along came Einstein with Relativity, which effectively resolved those problems. What has happened? Have we suddenly done a U-turn – Newton’s “out” and Einstein’s “in”?
If an understanding gained through this method is tentative, as apparently Newton’s was, doesn’t that show that our quest for the truth via this approach is misplaced? The point would be well taken if the replacement of Newtonian by Einsteinian physics constituted a U-turn. That is, if things we had thought we had objectified through the Newtonian route were later proved false by using Einstein’s route. But that is not the case. Things that were objectified under Newton remained objectified under Einstein, and the equations on which the Newtonian system was based are very accurate approximations of what would be produced using Einstein’s approach on the scales available to Newton. So accurate, in fact, that they continue to be used for some cosmically local events since they are simpler than the calculations of Relativity and, at a practical level, can produce equivalent results. The elements of the Newtonian explanation that were overthrown by Einstein were the seemingly reasonable assumptions that time and space were separate, absolute qualities. These un-objectified assumptions had to be discarded. But things objectified through a Newtonian analysis were still objectified, and the revised understanding achieved by Einstein had to account for them.
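The claim that Newton’s equations closely approximate Einstein’s at ordinary scales can be made concrete with the relativistic time-dilation factor, which Newtonian physics implicitly takes to be exactly 1. The speeds chosen below are illustrative:

```python
import math

C = 299_792_458.0  # speed of light in vacuum, m/s

def lorentz_gamma(v: float) -> float:
    """Relativistic time-dilation factor; Newton implicitly assumes 1."""
    return 1.0 / math.sqrt(1.0 - (v / C) ** 2)

# At everyday speeds the relativistic correction is negligible, which is
# why Newton's equations remain adequate for "cosmically local" work.
car = lorentz_gamma(30.0)  # about 108 km/h
assert abs(car - 1.0) < 1e-12

# At half the speed of light the correction is large and Newton fails.
fast = lorentz_gamma(0.5 * C)
assert fast > 1.15
```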
The methodology is more important than the results because any understanding gained from the results is always tentative, since we can never be sure that we have discerned the “ultimate” level of any issue. But although there can be no absolute certainty that any understanding is true, as more objectified data corroborates an understanding, the probability that it is true rises. So the definition of truth changes from an absolute certainty to a very high probability. The certainty that fidelity to the methodology does provide is that we have achieved the closest approximation to the truth that is possible under given circumstances and that there will be no U-turns on things that have been objectified. We will not wake up one day to find that the earth is flat after all or that evil spirits cause mental illness.
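The idea that corroboration raises probability without ever delivering absolute certainty is exactly what Bayesian updating formalizes. In this sketch the likelihood numbers are hypothetical - each independent corroborating observation is taken to be four times as likely if the hypothesis is true as if it is false:

```python
def bayes_update(prior: float, lik_true: float, lik_false: float) -> float:
    """Posterior probability of a hypothesis after one corroborating observation."""
    numerator = lik_true * prior
    return numerator / (numerator + lik_false * (1.0 - prior))

# Hypothetical likelihoods: 0.8 if the hypothesis is true, 0.2 if false.
p = 0.5
history = [p]
for _ in range(5):
    p = bayes_update(p, 0.8, 0.2)
    history.append(p)

# Each corroboration raises the probability...
assert all(later > earlier for earlier, later in zip(history, history[1:]))
# ...but certainty is approached, never reached.
assert p < 1.0
```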
To keep things clear, it is necessary to understand how the terms objectified information, facts and truth are being used in this argument. Facts are the simplest units of objectified information that can be used as a premise to reason about a process. Truth is the underlying principle that is found to govern a process; however, that underlying principle is also objectified information once it has been validated by use of scientific methodology. The point is that truth is progressively revealed, going from the simplest truths (facts) to more complicated truths (processes). However, a fact is a truth, and a process is a truth. The only difference between such truths is where they exist on a scale of complexity. So the distinction between the terms is made for ease of explanation regarding the progressive nature of the disclosure of the truth, and does not imply any qualitative difference between a fact and a process – both are objectified information. Once a truth has been determined, however complicated, it can be used as a premise (a fact) in pursuit of deeper truths.
Also, some clarifications and caveats regarding the term “science” are needed to keep our thinking clear.
First, science is concerned solely with gaining an understanding of the nature of things. That is, understanding the underlying principles that determine why a certain process produces a particular result. What is done on a practical basis with that understanding is not science; it is technology. The inventions of polio vaccine and the atomic bomb were not scientific achievements; they were technological achievements. Both achievements used existing scientific facts (objectified information) and engineered those facts to produce a desired outcome. In neither case, however, was our fundamental understanding of the nature of things changed by those accomplishments.
Science itself is not a category; it is a methodology that is applicable to any category that makes a claim to truth. “Science” only occurs with multiple confirming replications of experimental results, since it is only through repetition by others using the required methodology that we can be assured that all subjective distortions have been adequately vetted. Since an individual can’t validly vet her/his own work, by definition no single individual can fully manifest the concept of “science”. Therefore, the title of “scientist” is inappropriate for an individual. There are only people (chemists, biologists, physicists, astronomers, etc.) who are recognized as taking special pains to ensure that the methodology of science is strictly observed in the work done in their category.
The erroneous idea that science is a “category” is now a consensus, and history provides an explanation as to how this consensus arose. But you have to understand why the methodology works in order to see how such a consensus would grow.
The key to this understanding is the issue of controlling for the effects of variables. If we have a complicated process, we can only gain an understanding of what’s going on by isolating variables and seeing how each one impacts the total process. A simple example would be trying to understand how gravity works on the surface of the earth. The theory of gravity maintains that all objects fall at the same rate of acceleration. However, if I drop a feather and a ten-pound lead ball from the same height, the ball reaches the ground first. But we know that a blanket of air envelops the surface of the earth. By doing the same demonstration again, in a vacuum, we will see that both objects reach the ground at the same time. We can gain an understanding of how the principle of gravity itself functions by controlling for the effects of the variable “air” in the demonstration.
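Controlling for the variable “air” can be mimicked in a toy simulation. The linear-drag model and all parameter values below are hypothetical simplifications chosen only to make the point visible:

```python
def fall_time(height: float, mass: float, drag: float, dt: float = 1e-4) -> float:
    """Time to fall `height` metres under gravity with linear air drag.

    Setting drag=0 simulates the vacuum case - the "controlled" demonstration.
    """
    g = 9.81
    v, y, t = 0.0, height, 0.0
    while y > 0.0:
        a = g - (drag / mass) * v  # crude linear air-resistance model
        v += a * dt
        y -= v * dt
        t += dt
    return t

# Hypothetical parameters: a light "feather" vs a heavy ball, same drag.
feather = fall_time(height=2.0, mass=0.005, drag=0.05)
ball = fall_time(height=2.0, mass=4.5, drag=0.05)

# With air present, the feather lags far behind the ball...
assert feather > ball
# ...but with the variable "air" removed (drag=0), both fall in the same
# time, exposing the underlying principle of gravity.
assert abs(fall_time(2.0, 0.005, 0.0) - fall_time(2.0, 4.5, 0.0)) < 1e-6
```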
Now the first area in which this methodology was effectively applied was in physics. That was because of the happy accident that basic physics is a concentration of study that is the easiest to control for the effect of variables. As we became increasingly sophisticated in understanding the working of the world, we increased the number of categories (chemistry, biology, etc.) in which the control for the effects of variables was possible. As a consequence, we began to define a “category” of science, with “sub-categories” limited to those categories for which control for the effects of variables was easiest.
While the possibility of controlling for variables in the erroneously named “hard science” categories has become progressively more manageable, when we move into such categories as history or economics, the complexity and number of variables (psychology, culture, climate, etc.) rise to astronomical levels. As a result, we have the ability to control for variables in these categories only at very high levels; but as we move to deeper levels of understanding, the problem of controlling for variables becomes overwhelming in terms of the current technology.
But there are perceived cultural and ideological requirements for broadly applicable “truths” that “should” apply to such categories as economics, history, etc. As a consequence, assumptions are made based on the consensus of “experts” that such assumptions are “reasonable” approximations of what the effects of variables would have been had we actually been able to control for them. However, these assumptions presuppose that those “experts” know what all the variables are, and understand all the emerging complexities that arise from the variables in various combinations. But that is precisely the point at issue. Without the implementation of the methodology, you can’t have this knowledge. Assumptions made under these circumstances are simply “guesses”. And reasoning premised in a “guess” simply produces another “guess”, not a truth.
We now know that a claim of “truth”, premised only on a consensus of “reasonableness”, is not enough to confirm a truth – and the methodology tells us why – we also need environmental feedback that confirms that the “reasonable assumption” actually happens. We have a methodology that is proven, beyond reasonable doubt, to give us the closest approximation to the truth that is possible at a point in time. The fact that the methodology is difficult to implement in some categories (history, economics, etc.) is no justification to abandon it for those categories. In difficult categories, we do use the methodology, but a claim to truth is valid only to the level at which the control for variables can be managed. Beyond this level any claims about truth are unfounded.
From this history, it’s easy to see how a consensus would grow that there was a category of something called “science” in which the methodology worked. But there were other categories (history, economics, ethics, etc.) for which the methodology was inappropriate. That consensus is wrong because those distortions in thinking, which the scientific methodology compensates for, apply regardless of the category being considered. The category has no effect on how the brain functions. There are not different kinds of truth. There is only “truth” which we get by the implementation of the methodology. Science is not a category itself but rather a methodology that possesses the possibility of producing truthful results in any category to which it is applied.
However, since these misunderstandings are deeply embedded in the common culture, it is convenient to abandon the term “science” altogether and replace it with “truth-producing methodology”. This simple terminological switch keeps the proper focus; not on how a claim about the truth is labeled or the status of the person making the claim, but rather on the process by which the claim is substantiated.
In summary, any claim that lacks verification through the truth-producing methodology (regardless of category) cannot be considered as true, however it may otherwise be regarded and whatever other value may be attributed to it. This rigorous and restrictive qualification on the use of the term has been the essential element in expanding the human capacity to progressively discover the nature of things (including ourselves), and it is only our understanding of the nature of things that allows us to exploit that understanding to increase the probability of the survival and success of the human species. In short, we need the truth to survive.
We now have a standard that must be applied to a premise that is used to support any line of reasoning, across all categories, to validate a claim to truth. In the absence of that standard, we can safely ignore the reasoning.
Prior to the discovery of the method for objectifying information, “truth” was defined on the basis of such things as a casual consensus (which is another name for “conventional wisdom”), tradition, divine revelation, the authority of special persons, etc. The problem with “truth” determined by these standards is that objectified information has frequently shown such claims to be false, whereas the reverse has never happened and, indeed, cannot happen. That is, tradition or divine revelation can never show that a piece of objectified information is not true. Of course, all ideologies (whether religious, political, cultural, ethnic, etc.) fear the power of objectified information to invalidate their claims about truth. Realizing they can’t defuse that power, they attempt to circumscribe its application. They concede that objectified information does apply as the standard of truth for the category of “science”, but for other categories different standards apply. However, we can safely ignore this consideration for the moment, since if one truth cannot contradict another, any truth determined through the objectification of information could not be invalidated by any other method that claimed the ability for truth determination.
Information can only be objectified through what is conventionally known as the scientific method supported by mathematics. No other technique is recognized across the whole spectrum of humanity as having this ability to bring us progressively closer to the truth. It has achieved that recognition because it has successfully analyzed the process by which human beings get to “know” anything. We know that information about the world comes to us through our five senses and creates an internal perception (our understanding of what is going on as contained in our mind/brain). Although there may be a large degree of commonality in the reports of those internal perceptions for the same event, differences in reports can exist from individual to individual. How do these differences in reports of internal perceptions come about? The reasonable assumption is that since the stimulus of a common external event has the potential to produce different responses among individuals, there must be differences in the way people are receiving and/or processing that external information. Indeed, we now know this is the case. There is a dynamic in nature to produce variety and in the case of human beings that dynamic produces uniqueness: no person is identical to another.
That uniqueness applies not only to the obvious external physical differences but also to our sensory apparatus (the gateway through which all information comes) and the structure and chemistry of our brains (the place where we try to make sense of things). For example, a condition such as color blindness is an indicator of genetic differences in sensory receptors. Similarly, there is a condition known as synesthesia that blends mental perceptions which in the majority population are strictly separated. One instance of this is the inability to register the concept of number, without simultaneously registering a sensation of color specific to each numeral. These examples are gross indicators of the existence of differences in sensory perception and mental processing of information, and it is reasonable to infer that there also exist subtler differences in both areas that have not been clinically verified. So some differences in reports of internal perceptions can be accounted for by differences in physiology.
In addition to these physiological reasons for the differences in internal perceptions, there are two other areas that bear on this issue. First, there is context. In order for the world to be sensible, new information has to fit in some sort of order with prior information. Since environments (which include culture) can be radically different, those differences can produce different contexts in which the new information is to be understood. Second, all brains tend to rationalize events in such a way so as to make their possessors appear be the “hero of the situation”. This is the ego problem. In consequence there is a point (not necessarily the same point, however) in everyone’s thinking at which we tend to understand things, which are somewhat ambiguous, in the way that is most advantageous to ourselves. Therefore, other differences in reports of internal perceptions can be accounted for by differences in prior experience and ego.
Now that the fundamentals of how we get to “know” anything are clear, there is no mystery as to why there can be differences in the reports of individuals of their perceptions of a common event. Moreover, no one is in a preferred position with regard to the credibility of their reports, since no one is exempt from the factors that produce the differences - there is no rational basis for preferring the internal perceptions of the Pope to those of the plumber. Those potentially distorting factors are part and parcel of the human condition and they apply to everyone. So at this point we are in the position of having internal perception reports that may not all agree on how the world should be explained or understood. How do we purge the effects of subjective distortions in internal reports to get objective information?
There is a clue on how to proceed. There are some areas in which internal perceptions are always in agreement. If we analyze the process by which those elements on which common agreement is reached, we find that there are some fundamental principles of thinking that are hard-wired in our neurology. That is, there are some perceptions that are so vital to our survival that individuals who lack them do not survive to propagate. In consequence, there are some ideas that are truly inconceivable for human beings. For instance, if a string is cut in two, one of the resulting pieces will be longer than the original (for others, see Euclid). It is important that the use of the term “inconceivable” be understood in the strict sense of something that is beyond human powers of conception, and not in the conventional sense of something that is quite conceivable but vastly improbable, such as a truthful used car salesman. To the degree that these fundamental principles of thought can be catalogued and reasoning is restricted to those principles, we begin to neutralize the subjective effects that can warp our understanding of things. Further, if A is bigger than B, and B is bigger that C, A is also bigger than C. We have the capacity to make a logical inference which is attained through literacy. Using this capacity in connection with the fundamental principles creates rational thought.
However, rational thought is still not enough to get to objective information. Rational thought, in and of itself, doesn’t do anything except prove consistency. It is an indispensable tool, but to get to the truth, an additional requirement applies. The conclusions of rational thought must be tested against the actual workings of the world. In other words, we must confirm by environmental feedback that what we think is happening, actually does happen.
Finally, the nature of subjective distortions is such that their effects can be so subtle that their possessors may be unaware of them. Therefore, unless others using the required techniques corroborate a report, objectivity has not been achieved.
As noted above, science must be supported by mathematics. What is so special about mathematics?
Since all thought consists of the manipulation of abstractions, the quality of thought is hostage to the precision with which abstractions are manipulated. That is, the quality of thought depends on how accurately we employ logic. Now the precision of conventional language, as a medium for the employment of logic, breaks down as the requirement for exactitude increases. This occurs because words have connotations - not only connotations that are universally recognized among the speakers of a particular language, but also personal connotations that arise from the particular experiences an individual may have had. For instance, whether or not you agree with the statement that Albert Schweitzer, Genghis Khan, Adolf Hitler, and George Bush are great men depends on the connotations you attach to the word “great”. If you understand it to mean simply famous (that is, well known), then they all make the cut. On the other hand, if your connotation of the word is constrained by the idea of being estimable or noble, then perhaps the designation applies to only one person in that statement. This being the case, no speaker can be sure of the connotations a listener attributes to the speaker’s words. Therefore, the speaker can never know whether the listener understands, with precision, the point the speaker is attempting to establish. This problem cannot be remedied by restricting the definitions of the words, since a restricted definition is also made up of words, and those words have connotations, and so on ad infinitum.
In order to achieve the maximum precision in logic, we need to transcend the limitations of conventional language. That is, we require a “language” with a system of symbols for abstractions that are universally recognized as being devoid of connotations. Mathematical symbols fulfill this requirement. For example, the number one, abstracted to the mathematical symbol “1”, has a single precise meaning. It can be expressed in a variety of ways - for instance, as “sine squared x plus cosine squared x” - but every such expression means precisely what “1” means. It is this ability to put a logical discourse on a firm, non-ambiguous basis that makes mathematics essential to achieving the precision in thought necessary for the determination of truth. Without mathematics there is no certainty that everyone has the same understanding of the elements substantiating a proof and, therefore, no real agreement on whether a proof has been made. It should be clear that, in and of itself, mathematics can’t prove any contention about the nature of things. The value of mathematics is limited to ensuring consistency in logical discourse. Working from a truthful premise, mathematics has the power to reveal other truths that must logically result from that premise.
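The example above can be written out explicitly; both sides of the identity denote exactly the same quantity, with nothing left to connotation:

```latex
% For every real x, the expression on the left is simply another
% name for the number 1 - the two symbols are interchangeable
% in any logical discourse:
\sin^2 x + \cos^2 x = 1
```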
The objectification of information is possible since we have some understanding of those human functionings that produce the differences in the reports of internal perceptions. On the basis of that understanding, techniques have been invented that allow us to progressively purge those reports of subjective distortions - the methodology of science. This methodology has been summarized by the concepts of “conjecture, criticism, and experiment”. That is, on the basis of an initial review of the data, a conjecture is made as to the nature of some natural process, creating a hypothesis. The hypothesis is then subjected to scrutiny to determine whether any anomalies can be found in the explanation. If any are found, the hypothesis is criticized on that basis. Experiments must then be conducted to resolve the issues disclosed in the criticism.
The methodology is trusted to produce truth because:
- At its foundations, its claims are demonstrable and replicable.
- It is self-correcting through its requirement for constant corroborating environmental feedback and progressive consistency (i.e. one truth can’t contradict another).
- It understands and accommodates the need to eliminate the subjective distortions in thinking.
If it is possible to get to the end of this process with only one understanding that rationally accounts for all the data, some portion of the truth has been discerned. That is, a hypothesis has been transformed into a theory.
Incidentally, the self-correcting requirement in the second point above implies the principle of “falsifiability”. In addition to cataloguing all of the observations that support a conclusion, we must also be able to imagine a hypothetical observation that would demonstrate the conclusion to be false, even though such an observation has not yet been made. The world is, after all, a very complicated place, and it would be arrogant indeed to assume that what you haven’t yet seen couldn’t happen. The principle of falsifiability is simply a “trip wire” in thinking that keeps us alert to that possibility. A famous example of falsifiability for evolution through natural selection is Haldane’s positing of a six-hundred-million-year-old fossil rabbit. If one were found, evolution through natural selection, as a theory, would have serious problems, since according to the theory rabbits are a much later product of evolution. Falsifiability is an attempt to ensure that we remain as sensitive in our thinking to disconfirming as to confirming evidence. If there are no conceivable circumstances that could invalidate a conclusion, we know that conclusion does not arise from a disinterested quest for the truth, but is rather an attempt to validate an ideological assumption, and such a conclusion can be summarily dismissed.
It is important to understand that the concept of science is neither an ideology nor a category. It is a methodology concerned solely with gaining an understanding of the nature of things based on a proof that is beyond rational dispute. It does not teach the truth; it teaches what we must do in order to get to the truth. From the viewpoint of that quest, the methodology is more important than the results it obtains.
To see why, let’s consider Newton. In his era, the physics of Newton seemed to be a true explanation of the workings of the cosmos, since it rested on proofs that were beyond rational dispute. Using the tools available in that era, the confirmatory feedback was overwhelming. But the tools improved over time, and it became possible to examine things at scales vastly larger and smaller than was possible in Newton’s time. Using essentially the same methodology along with the improved tools, it became apparent that at these extreme scales the Newtonian explanation had problems. Along came Einstein, giving us Relativity, which effectively resolved those problems. What has happened? Have we suddenly done a U-turn - Newton’s “out” and Einstein’s “in”?
If an understanding gained through this method is tentative, as apparently Newton’s was, doesn’t that show that our quest for the truth via this approach is misplaced? The point would be well taken if the replacement of Newtonian by Einsteinian physics constituted a U turn. That is, if things we had thought we had objectified through the Newtonian route later proved to be false by using Einstein’s route. But that is not the case. Things that were objectified under Newton remained objectified under Einstein, and the equations on which the Newtonian system was based are very accurate approximations of what would be produced using Einstein’s approach on the scales available to Newton. So accurate, in fact, that they continue to be used for some cosmically local events since they are simpler than the calculations of Relativity and, at a practical level, can produce equivalent results. The elements of the Newtonian explanation that were overthrown by Einstein were the seemingly reasonable assumptions that time and space were separate, absolute qualities. These un-objectified assumptions had to be discarded. But things objectified through a Newtonian analysis were still objectified, and the revised understanding achieved by Einstein had to account for them.
The methodology is more important than the results because any understanding gained from the results is always tentative, since we can never be sure that we have discerned the “ultimate” level of any issue. But although there can be no absolute certainty that any understanding is true, as more objectified data corroborates an understanding, the probability that it is true rises. So the definition of truth shifts from absolute certainty to overwhelmingly high probability. The certainty that fidelity to the methodology does provide is that we have achieved the closest approximation to the truth that is possible under the given circumstances, and that there will be no U-turns on things that have been objectified. We will not wake up one day to find that the earth is flat after all or that evil spirits cause mental illness.
To keep things clear, it is necessary to understand how the terms objectified information, facts and truth are being used in this argument. Facts are the simplest units of objectified information that can be used as a premise to reason about a process. Truth is the underlying principle that is found to govern a process; however, that underlying principle is also objectified information once it has been validated by use of scientific methodology. The point being that truth is progressively revealed, going from the simplest truths (facts) to more complicated truths (processes). However, a fact is a truth, and a process is a truth. The only difference between such truths is where they exist on a scale of complexity. So the distinction between the terms is made for ease of explanation regarding the progressive nature of the disclosure of the truth, and does not imply any qualitative difference between a fact and a process – both are objectified information. Once a truth has been determined, however complicated, it can be used as a premise (a fact) in pursuit of deeper truths.
Also, it is necessary to append some clarifications and caveats regarding the term “science” that are needed to keep our thinking clear.
First, science is concerned solely with gaining an understanding of the nature of things. That is, understanding the underlying principles that determine why a certain process produces a particular result. What is done on a practical basis with that understanding is not science; it is technology. The inventions of the polio vaccine and the atomic bomb were not scientific achievements; they were technological achievements. Both used existing scientific facts (objectified information) and engineered those facts to produce a desired outcome. In neither case, however, was our fundamental understanding of the nature of things changed by those accomplishments.
Science itself is not a category; it is a methodology that is applicable to any category that makes a claim to truth. “Science” only occurs with multiple confirming replications of experimental results, since it is only through repetition by others using the required methodology that we can be assured that all subjective distortions have been adequately vetted. Since an individual can’t validly vet her/his own work, by definition no single individual can fully manifest the concept of “science”. Therefore, the title of “scientist” is inappropriate for an individual. There are only people (chemists, biologists, physicists, astronomers, etc.) who are recognized as taking special pains to ensure that the methodology of science is strictly observed in the work done in their category.
The erroneous idea that science is a “category” is now a consensus, and history provides an explanation as to how this consensus arose. But you have to understand why the methodology works in order to see how such a consensus would grow.
The key to this understanding is the issue of controlling for the effects of variables. If we have a complicated process, we can only gain an understanding of what’s going on by isolating variables and seeing how each one impacts the total process. A simple example would be trying to understand how gravity works on the surface of the earth. The theory of gravity maintains that, absent air resistance, all objects fall at the same rate of acceleration. However, if I drop a feather and a ten-pound lead ball from the same height, the ball reaches the ground first. But we know that a blanket of air envelops the surface of the earth. If we do the same demonstration in a vacuum, we see that both objects reach the ground at the same time. By controlling for the effects of the variable “air” in the demonstration, we gain an understanding of how the principle of gravity itself functions.
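The demonstration above can be captured in a line of algebra: under Newton’s second law the mass of the falling object cancels out, which is why the feather and the ball land together once the variable “air” has been removed:

```latex
% Weight supplies the force, F = mg, so the acceleration is
%   a = \frac{F}{m} = \frac{mg}{m} = g   (the mass cancels)
% and the time to fall a height d from rest,
%   t = \sqrt{\frac{2d}{g}},
% contains no m at all - it is identical for the feather
% and the ten-pound lead ball in a vacuum.
```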
Now the first area in which this methodology was effectively applied was physics. That was because of the happy accident that basic physics is the concentration of study in which it is easiest to control for the effects of variables. As we became increasingly sophisticated in understanding the workings of the world, we increased the number of categories (chemistry, biology, etc.) in which control for the effects of variables was possible. As a consequence, we began to define a “category” of science, with “sub-categories” limited to those fields in which control for the effects of variables was easiest.
While the possibility of controlling for variables in the erroneously named “hard science” categories has become progressively more manageable, when we move into such categories as history or economics, the complexity and number of variables (psychology, culture, climate, etc.) rise to astronomical levels. As a result, we can control for variables in these categories only at very high levels of generality; as we move to deeper levels of understanding, the problem of controlling for variables becomes overwhelming in terms of the current technology.
But there are perceived cultural and ideological requirements for broadly applicable “truths” that “should” apply to such categories as economics, history, etc. As a consequence, assumptions are made, based on the consensus of “experts”, that such assumptions are “reasonable” approximations of what the effects of variables would have been had we actually been able to control for them. However, such assumptions presuppose that those “experts” know what all the variables are and understand all the emerging complexities that arise from the variables in various combinations. But that is precisely the point at issue. Without the implementation of the methodology, you can’t have this knowledge. Assumptions made under these circumstances are simply “guesses”. And reasoning premised on a “guess” simply produces another “guess”, not a truth.
We now know that a claim of “truth”, premised only on a consensus of “reasonableness”, is not enough to confirm a truth - and the methodology tells us why: we also need environmental feedback confirming that the “reasonable assumption” actually happens. We have a methodology that is proven, beyond reasonable doubt, to give us the closest approximation to the truth that is possible at a point in time. The fact that the methodology is difficult to implement in some categories (history, economics, etc.) is no justification for abandoning it in those categories. In difficult categories we do use the methodology, but a claim to truth is valid only to the level at which the control for variables can be managed. Beyond this level, any claims about truth are unfounded.
From this history, it’s easy to see how a consensus would grow that there was a category of something called “science” in which the methodology worked. But there were other categories (history, economics, ethics etc) for which the methodology was inappropriate. That consensus is wrong because those distortions in thinking, which the scientific methodology compensates for, apply regardless of the category being considered. The category has no effect on how the brain functions. There are not different kinds of truth. There is only “truth” which we get by the implementation of the methodology. Science is not a category itself but rather a methodology that possesses the possibility of producing truthful results in any category to which it is applied.
However, since these misunderstandings are deeply embedded in the common culture, it is convenient to abandon the term “science” altogether and replace it with “truth-producing methodology”. This simple terminological switch keeps the proper focus; not on how a claim about the truth is labeled or the status of the person making the claim, but rather on the process by which the claim is substantiated.
In summary, any claim that lacks verification through the truth-producing methodology (regardless of category) cannot be considered as true, however it may otherwise be regarded and whatever other value may be attributed to it. This rigorous and restrictive qualification on the use of the term has been the essential element in expanding the human capacity to progressively discover the nature of things (including ourselves), and it is only our understanding of the nature of things that allows us to exploit that understanding to increase the probability of the survival and success of the human species. In short, we need the truth to survive.
We now have a standard that must be applied to a premise that is used to support any line of reasoning, across all categories, to validate a claim to truth. In the absence of that standard, we can safely ignore the reasoning.