What does “computing” mean?
A rejoinder to Dr. Paula Quinon

§1. Dr. Paula Quinon has kindly shared with me the abstract of her recent work, titled „What ‚computing’ means?” (see item 5 in References), and added some comments to explain her key ideas more accurately. Among the comments there was a link to S. Feferman’s paper „Turing’s ‚Oracle’: From Absolute to Relative Computability and Back” (2016).

In that paper I find a passage which seems to me crucial from an epistemological point of view, though it contains no direct references to philosophy. Before I quote and comment on it, let me express my personal attitude to some issues investigated by Dr. Quinon, and how I see their relation to the passage in question.

Frankly speaking, I am no expert in the issues of computing, and my interest in them is rather, so to say, old-fashioned, going back to Kazimierz Ajdukiewicz’s project in the early 1960s, concerned with the justification of statements and decisions. The subject was discussed at a very impressive international conference in 1961, chaired by Ajdukiewicz, and at a Polish conference meant as a general rehearsal of the Polish participants before that important international meeting.

What I remember best from those events is Andrzej Grzegorczyk’s discussion of the intuitive reasons for accepting axioms in mathematical theories. How might such intuitions be scientifically justified? This issue seemed then like a puzzle which nobody was ready to solve, given the state of the philosophy of logic and mathematics at that time. Let us call it the Ajdukiewicz-Grzegorczyk problem.

To my knowledge, only Hao Wang’s publications in the nineteen-nineties, born from his penetrating discussions with Gödel, are likely to shed some light on the methodological problem of justifying axioms. They are related to the distinction between absolute and relative solvability as considered by Feferman.

Let the fact of adding a new axiom to a theory be exemplified by the transition from arithmetic without the Axiom of Complete Induction, ACI (e.g., Robinson arithmetic), to the theory enriched with this axiom. This results in an enormous amplification of the scope of solvability. However, such a success cannot be a decisive argument for the truth of the theory so extended. One’s clear intuitive apprehension of the ACI’s truth, as witnessed by what Gödel says about his personal experience of its obviousness, cannot suffice either. However, there does exist intersubjective evidence for ACI, to wit the gigantic range of its successful applications in every domain of human activity, such as astronomy, accounting and engineering: a success experienced over tens of centuries in every civilization.
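For concreteness, the ACI at issue can be rendered as the familiar induction schema of first-order arithmetic (the notation below is mine, not the author’s): for every formula φ(n),

```latex
% First-order induction schema, one instance per formula \varphi:
\bigl(\varphi(0)\ \wedge\ \forall n\,(\varphi(n)\rightarrow\varphi(n+1))\bigr)\ \rightarrow\ \forall n\,\varphi(n)
```

It is the acceptance of every instance of this schema at once, for arbitrary φ, that the intuitive justification concerns.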

§2. Now it is in order to pay attention to Turing’s idea of the oracle as presented in the above-mentioned study by Feferman, entitled Turing’s ‚Oracle’: From Absolute to Relative Computability and Back. The respective passage (slightly abbreviated by me) runs as follows (emphasis by Feferman).

>>The subject of effective computability began with what were offered as analyses of the absolute limits of effective computability; the immediate primary aim was to establish negative results of the effective unsolvability of various problems in logic and mathematics.

From this the subject turned to refined classifications of unsolvability. […] The germinal step, conceptually, was provided by Turing’s notion of computability relative to an ‚oracle’. At the hands of Post, this provided the beginning of the subject of degrees of unsolvability. Not provided directly by Turing’s notion, but implicit in it, were notions of uniform relative computability.<< (Sec. 13.1, Introduction.)

Turing did not say much about the way in which an oracle works, but made a very significant statement: that an oracle is no mechanical device, and that a machine equipped with such a device can solve problems unsolvable for ordinary machines lacking it. In other words, an oracle is able to find the value of an uncomputable function. More is said on the issue by Turing’s commentators such as Andrew Hodges, who offers convincing arguments that the decisions of an oracle are, in fact (in Turing’s intention), acts of mathematical intuition. A similar stance has been taken by Roger Penrose. Thus, the amount of negative results diminishes with successive steps extending the scope of solvability relative to an oracle. Now, since we know that the invention of new, more powerful systems of axioms is due to intellectual intuitions, and that axioms should be credited not only for their intuitive obviousness but much more for an immense number of successful applications, we arrive at an answer to the Ajdukiewicz-Grzegorczyk problem of the intuitive justification of axioms. It is to the effect that the intuition at the bottom of the process in question proves its mettle by the fact that the axioms in which it has been expressed successfully pass the most severe exam of highly differentiated and long-lasting applications.
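Turing’s O-machine can be pictured, in a deliberately naive sketch of my own (not Turing’s formalism), as an ordinary program that may call an external membership test it could not compute by itself. The function name and the stand-in set A below are illustrative assumptions; a genuine oracle set need not be computable at all, so the Python “oracle” is computable only to make the sketch run.

```python
from typing import Callable

def relative_decider(n: int, oracle: Callable[[int], bool]) -> bool:
    """Mechanical part: ordinary computation that may consult an oracle."""
    # The machine computes as usual...
    if n % 2 == 0:
        return True
    # ...and at certain steps asks the oracle a question it cannot
    # settle by computation alone (Turing's O-machine query).
    return oracle(n)

# Stand-in oracle: membership in a fixed set A. Here A is computable
# only so the sketch runs; Turing's point is that A need not be.
A = {1, 3, 5}
print(relative_decider(4, lambda k: k in A))  # True (decided mechanically)
print(relative_decider(3, lambda k: k in A))  # True (decided by the oracle call)
```

The “degrees of unsolvability” then classify problems by which oracle sets A would suffice to decide them.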

§3. To conclude this part of the discussion, let us observe that the interest in the idea of the oracle, as involving the theory of degrees of relative computability (solvability), is not confined to the circle of speculative philosophers (to which the present author belongs). At present it is in the focus of highly technical discussions in mathematical logic, as well as in computer science; in the latter not only at the level of theoretical foundations, but also in everyday practice, as testified by Feferman in the Conclusion of his penetrating study (Section 13.6.6):

>>The case had been made here that notions of relativized (as compared to absolute) computability theory are essentially involved in actual hardware and software design.<<

I made the above comment on some statements by Feferman, encouraged by the fact that his contribution was highly appreciated by Dr. Quinon. I am grateful for her message, as in making use of it I gained a new perspective in my thinking about computability.

In this perspective, I think that her research into the extensional equivalence and intensional differences between various treatments of the Turing model of computation should embrace the issue of the gradation of relative computability; one may also say „relative solvability”, for the former is a special case of the latter: computing is one of the ways of problem-solving.

Turing’s (1939) model of problem-solving makes the notion of mathematical intuition more precise, and then we can put important questions more precisely, say: can such an intuition be attained by artificial intelligence?

Now, when considering other theoretical models of computability (such as those of Post, Church, Markov, Kleene, etc.), can we find in them an analogue of relative computability as defined in terms of an oracle? Maybe an answer in the affirmative would be trivial, but I ask this being aware of my limited expertise in that area.

Another result which may be expected from Dr. Quinon’s inquiries is related to the question that is the title of her abstract: „What ‚computing’ means?” Should the notion of computing embrace the non-algorithmic activity performed by an oracle, namely that of finding the value (does that mean computing?) of an uncomputable function? If there is a puzzle in this question, what should be done to improve the terminology?

There are more interesting issues inspired by Dr. Quinon’s project, but let those listed above suffice for the time being.

Recommended References

Item 1 refers to the present author’s article in which the concept of the oracle is discussed fairly extensively, with quotations from and comments on Turing’s (1939) text. Note that its title contains the phrase „ever higher solvability”, closely related to the notions of effective computability, effective unsolvability and relativized (as compared to absolute) computability, as employed by Feferman (quotations in §1 and §2 and the title in §1). Items 2 and 3 are concerned with the idea of the oracle. Item 4 is related to that idea as dealing with Gödel’s theorem, which can be used to exemplify Turing’s relative computability with respect to oracles.

1. „The progress of science from a computational point of view: the drive towards ever higher solvability”, Foundations of Computing and Decision Sciences, vol. 44, issue 1 (2019), pp. 11-26; the journal is published by the Poznań University of Technology.

2. A. M. Turing, „Systems of Logic Based on Ordinals”, 1939. URL (local copy): turing-1939.pdf.

3. Robert I. Soare, „Turing Oracle Machines, Online Computing, and Three Displacements in Computability Theory”, 2009. URL (local copy): soare.pdf.

4. Samuel R. Buss, „On Gödel’s Theorems on Lengths of Proofs I: Number of Lines and Speedup for Arithmetics”, 1992. URL (local copy): god-buss.pdf.

5. Paula Quinon, abstract of the paper „What ‚computing’ means?”:

The Church-Turing thesis identifies the intuitive concept of what it means ‚to compute’ with a formal model of computation. There are multiple models of computation and all of them can be proved to be extensionally equivalent (they capture the same functions, such as ‚identity’ or ‚the next element of the sequence’). However, despite the extensional equivalence, models differ intensionally (they capture different aspects of computation; for instance, computations on abstract natural numbers are intensionally different from computations performed by a machine using concrete electric signals). The main objective of this project is to characterize intensional differences between various concepts of computation.

Postscript

When browsing my notes on solvability, I found a very illuminating remark by Gregory Chaitin. His phrase „intuition and creativity” may be interpreted as an exemplification of the oracle, while the infinite chain of ever stronger axiomatic systems resembles Turing’s ordering of logics.

Gödel’s own belief was that in spite of his incompleteness theorem there is in fact no limit to what mathematicians can achieve by using their intuition and creativity instead of  depending only on logic and the axiomatic method. He believed that any important mathematical question could eventually be settled, if necessary by adding new fundamental principles to math, that is, new axioms or postulates. Note however that this implies that the concept of mathematical truth becomes something dynamic that evolves, that changes with time, as opposed to the traditional view that mathematical truth is static and eternal.  See „Chaitin interview for Simply Gödel website” (9 February 2008).

Consider the phrase „adding new fundamental principles”. Does this process belong to the processes of computation? If so, which model of computation would explain the intuitive meaning of such a phrase? This is a question which seems to be implied by Quinon’s plan of comparing the existing explications of the term „computing”.

Posted in Uncategorized

On the persuasiveness of views
due to their abundance of information

Credo quia absurdum. – Tertullianus

This draft is to hint at a certain paradoxical kind of discourse, one in which the efficiency of persuasion is, not infrequently, due to its absurdity. However, the paradox gets diminished, provided that one considers the following facts.

(i) Some persons who are not engaged in serious research happen to strive not so much for gaining truth as for the excitement caused by a surprising message.

(ii) The state of being pleasantly stimulated by a surprise may spontaneously result in the message in question seeming plausible, and thus in trusting it as if it had proved true.

(iii) The amount of surprise is relative to the knowledge the person possesses at the given moment; the greater the surprise against this background knowledge, the stronger the resulting belief in the message.

(iv) The degree of surprise carried by a statement is inversely related to the statement’s probability (relative to the knowledge in question). For instance, a miracle is unusual and surprising in the highest degree because it is most improbable with respect to our knowledge. However, quite a lot of people are attracted by the wonderful singularity of a miracle, its surprising novelty, that is, in terms of information theory, an enormous amount of information. They happen to be so fascinated that the fascination results in a strong belief.

(v) Note that it is not the case that the belief arises in spite of the improbability; rather, it comes into being because of the improbability, exactly according to the maxim credo quia absurdum: I believe because it is absurd. One may say so, since the highest degree of a statement’s improbability amounts to its being inconsistent, or, in other words, absurd.
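The inverse relation stated in (iv) is exactly Shannon’s self-information, -log2 p: the amount of information carried by a message grows as its probability (against background knowledge) shrinks. A minimal numerical illustration, with probability values invented purely for the example:

```python
import math

def surprise_bits(p: float) -> float:
    # Shannon self-information: bits of surprise carried by a message
    # whose subjective probability is p (0 < p <= 1).
    return -math.log2(p)

print(surprise_bits(0.5))    # 1.0 bit: an even-odds message
print(surprise_bits(0.001))  # about 10 bits: a near-miraculous message
```

A message judged a thousand times less probable thus carries roughly ten times the informational “surprise” of a coin flip, which is the quantitative side of the fascination described above.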

The paradox of the persuasive power of some absurd contentions challenges both theorists of argumentation and practitioners of rhetoric. Is it possible to make an honest use of this mental bias? In fact, this is obtainable, provided that a speaker manages to combine a high portion of surprise with a critical attitude to the problem in question. This may be achieved with a smart construction of the speech. The method should be patterned on the construction of a sensational crime story, where the suspense is due to the following sequence of events.

At the very start, there happens something very unusual, unexpected with respect to the observers’ knowledge, hence having a tiny subjective probability for them. This appeals to a person’s curiosity, that is, a desire for impressive informational stimuli; technically speaking, this means the need for an unusual amount of information. Being hardly expected, the event brings a riddle which arouses the desire to find a solution. Various conjectures are considered in the story as being most probable with respect to the state of knowledge of the persons in question. However, the detective involved is one who proposes a least expected solution. Therefore, this again appeals to the observers’ desire for information; but to become convincing, the solution has to be supported by strong evidence. As a result, the solution which subjectively appears as the least probable proves to be the most probable one, objectively and subjectively alike.

Such a scenario can be adapted to provide a clever argument, its efficiency being due to such an interplay of improbability and probability. After the objective probability gets finally demonstrated, this makes the argument honest, while the earlier subjective improbability appeals to the desire for something unusual; both features taken jointly should make the argument in question highly plausible for its addressees.

Posted in Rational Discourse

What are the odds
of creating a computational theory of argument?

This post is an abstract of a more extensive text to be posted later. At the moment, I wish to explain the intention of the above title, hoping that even such a concise message may become thought-provoking for those interested in the subject in question.

In the envisaged post I am to discuss some ideas of Marcin Koszowy’s paper “Argument and Computation, and the Legacy of the Lvov-Warsaw School”, as published recently in Our Pub Library. There are at least two points to be considered, both mentioned in the quoted title. Its second part hints at the need to inquire into the LWS’s contributions to the computational approach to argument. This is a challenging task for historical studies which should be undertaken, especially by Polish scholars.

Nevertheless, I am to focus on what is signalled in the first part of the title: the question of how computation theory may enhance our knowledge of arguments. There are at least two possible understandings of this relation. (1) We ask which logical theories can be used to create a computational model of logically correct argumentation; illuminating examples are to be found in Paul Lorenzen’s dialogical logic, in paraconsistent Jaśkowski-style logics, etc. (very useful remarks on this point are found in Max Urchs’ paper “Discursive Logic. Towards a Logic of Rational Discourse”, Studia Logica, vol. 54, pp. 231-249, Springer 1995). (2) While the former is a normative approach, the second one is partly descriptive and psychological. It would consist in the digital simulation of real processes of arguing, in which a proponent addresses her/his audience in order (i) to convince it to accept a view, or (ii) to motivate it to some action. In more complex issues, there appears the need to (iii) introduce concepts with which the audience has not been familiar so far.

Point (iii) is the most difficult, as it demands a creative effort both from the proponent and from the audience. Such creativity does not seem likely to be reproduced by digital simulation; I am to exemplify this point with some experiences in the field of automated theorem proving, e.g. those obtained in an attempt to mechanize Boolos’ “curious inference” as formalized in second-order logic. In the conclusion, motivated by such considerations, I am to express a doubt concerning the feasibility of an adequate digital simulation of argumentation processes. This is why the odds do not seem to be in favour of a computational theory of argument.

Posted in Rational Discourse, Rationalism in Science

On how to advance
the cognitive power of free market

This post continues the one entitled “Roubini’s conceptual apparatus […]”.

1. Computational power as a kind of cognitive power

Had Adam Smith lived in our times, instead of the “market’s invisible hand” he might have preferred the phrase “market’s invisible computer”. For it is an axiom of classical economics that the free market functions like a device for calculating an optimal state of the economy. Such an analogy has become popular due to Friedrich Hayek’s long dispute with Oskar Lange. The latter was a Polish socialist economist who believed that central socialist planning would prove efficient owing to computer simulations of the free market, while Hayek was aware that such a tricky virtual market could not match the real one in its computational power.

Following this metaphor, let us ask the question: does such a computer need a programmer? In Smith’s perspective, the answer would presumably be: “yes, it is the good Lord himself who has programmed it”. If you dislike invoking God’s name in vain, you may replace it with the phrase “self-organizing Universe”. In either case the question arises: does the execution of the program need human assistance? There appears an illuminating analogy with automated reasoning: often it succeeds when executed by the computer alone, but in the most complex cases it needs prompting or control by a human reasoner.

I resort here to the ideas of computational power and computability, though they belong to the very sophisticated concepts of mathematical logic and theoretical computer science; hence their definitions, should they be duly precise, would require many preparatory steps. However, by referring to our everyday experience with computers, we can approximate these ideas to some extent, so as to use them as useful metaphors, or models, also in the field of economics.

For instance, with successive upgradings of the software, my computer knows how to perform ever more complex tasks. In this sense its power increases; we call this power computational, for all such tasks are performed by computing. Since my computer may thereby be said to “know more”, its computational power turns out to be a kind of cognitive power.

There are other kinds of cognitive power, not computational ones. To see the difference, let us compare two kinds of mathematical cognition. If one computes the circumference of a circle according to an algorithm (capable of being rendered as a program), then one uses one’s brain’s computational power. However, when one accepts intuitively, as being obvious, the axioms of arithmetic and geometry (from which the algorithm in question follows), one makes use of a cognitive power different from the ability to compute; this power is often called intellectual intuition.
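The computational half of this contrast is trivially programmable; a minimal sketch:

```python
import math

def circumference(radius: float) -> float:
    # The purely algorithmic step: apply the formula C = 2 * pi * r.
    return 2 * math.pi * radius

print(circumference(1.0))  # 6.283185307179586
```

Accepting that the formula C = 2·pi·r follows from the axioms of geometry is the intuitive, non-computational act; executing it, as above, is mere computation.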

In economic matters, both varieties of cognitive power come into play. There are calculations, and there are intuitive acts of cognition. The latter include seeing the significance of certain problems, accepting certain assumptions, recognizing some mathematical models as useful for predictions, etc.

Cognitive power is enjoyed not only by individual market agents but also by the free market as an overall system of information processing. Its main job consists in computing prices, each price as a function of a set of variables, especially demand and supply. In this sense Smith’s dictum may be paraphrased as “invisible action of a computer”. However, let us return to the question posed above (see the second paragraph): given the fact that the complexity of some computations needs support from an intuitively reasoning agent, one may ask about the following analogy. May it be so that the complexity of social life (which involves the supply-demand relationship) needs deliberate state intervention to protect public interests, apart from the market’s mechanisms? In other words: are there any circumstances in which state control over prices would be a smart strategy to advance public interests? On this point there are divergent approaches: one that of Keynes, the other that of the Austrian School of Economics, brilliantly represented by Hayek.
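The phrase “each price as a function of demand and supply” can be given a toy numerical form. In the textbook linear model (all coefficients below are invented for illustration; no claim is made about real markets), the “invisible computer” settles on the price where demand equals supply:

```python
def equilibrium_price(a: float, b: float, c: float, d: float) -> float:
    # Toy linear model: demand D(p) = a - b*p falls with price,
    # supply S(p) = c + d*p rises with it; solving D(p) = S(p)
    # gives the market-clearing price p* = (a - c) / (b + d).
    return (a - c) / (b + d)

p = equilibrium_price(a=100.0, b=2.0, c=10.0, d=1.0)
print(p)  # 30.0: at this price, demand and supply both equal 40 units
```

The point of the metaphor is that no single agent evaluates this function; the market as a whole “computes” it through the interplay of buying and selling.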

2.  The interplay of spontaneous market and state’s economic control

To take a well-considered stance toward the Hayek-Keynes disagreement, the one concerning the market’s and the government’s impact on demand-supply relations, we should link this controversy with the idea of computability; it is the latter which plays the key role in the Hayek-Lange dispute. The free market is a reliable device for computing optimal prices, optimal strategies etc., provided that all its agents are fully rational and fully honest. Any deviations from these ideal conditions result in errors of computing. The more complex economic processes become, the less likely it is that perfect rationality will be attained. The greater the gains at stake, the more people happen to be drawn away from the virtue of honesty. To become free from such deviations, the market needs control by competent authorities, smart enough to establish reasonable rules of the market game.

Moreover, such control may be needed because of the interference of various social interests, and of multiple government strategies, such as protecting the poorest parts of the population, supporting export and reducing import, etc. These purposes may be attained with such instruments as subsidies, tariffs, loans, taxes etc. Such measures can be directed towards definite sectors, e.g. subsidies for food production, highly advantageous loans for housing, tariffs on definite kinds of goods, etc.

There is an extreme view (sometimes called libertarian) which denies governments any right to interfere with markets. To support such a contention, one should prove that market agents are always fully rational, that is, doing correct computations, and invariably respect moral claims. However, neither is the case. Market agents happen to commit various errors: sometimes they use false premisses in their predictions (e.g. some years ago they overestimated the potential of the Greek economy), yield to panic, have a propensity to engage in speculative bubbles, etc. These are intellectual errors, to wit lapses in calculation. Moreover, market agents happen to transgress moral standards when their self-interest blinds them to the public interest, when they tend towards monopolistic power, when they do selfish lobbying, etc.

In such a state of affairs there are at least three reasons for the interaction of political power with market forces. First of all, there is the absolute need of a legal system to enable the very existence of the free market in modern societies. No economic action would be possible if the state failed to create a system of economic law, including financial and banking law, company law, etc. This is an essential contribution of the state to making economic processes predictable and transparent, and thus to their obtaining the trait of computability.

Another reason amounts to the above-mentioned vices, either moral or intellectual, of some free-market agents. These diminish the computational power of free markets, and hence should be remedied in order to restore a desired level of the agents’ computational power and of the market’s computability.

Lastly, state authorities, mainly governments and central banks, become in a period of acute crisis the lenders of last resort, necessary to prevent an irreversible economic disaster, as Roubini argues in his “Crisis Economics”; see the post on Roubini’s conceptual apparatus.

3. Economic crisis: either a disaster or a harshly challenging opportunity

This section will be continued in more detail, to inquire into the ideas of Keynes and of the Austrian School, and especially of Joseph Schumpeter, as discussed by Roubini (see the link above). This issue is worth our careful attention in the debate on the European integration strategy when considered in the long run. To avoid economic disintegration, should Europe rely on the Austrian or rather on the Keynesian strategies? This appears as the question of the United Europe’s to be or not to be.

To be continued.

Posted in Rationalism in Politics

Smart politics, stupid politics,
and the Hayek-Keynes debate

By Witold Marciszewski

Once, at my lecture on European Integration, a student expressed his opinion about a European country (no matter which) that its government is undoubtedly stupid. I did not object to the use of the word, since it has become fairly common in the political idiom (with Google you may find 80,000 uses of the phrase “stupid politics” and almost half a million of “stupid politicians”). It was President Clinton who made the word popular through the famous motto of his 1992 campaign: “It’s the economy, stupid“. Hence, let us employ this expression freely, though with the obvious caution to use it for cognitive purposes, not as an angry emotional expression.

What would be the opposite to being stupid? Wise? Or rather smart? The latter proves  better in the present context, if you compare the following definitions.

wise = marked by the exercise of good judgment or common sense in practical matters; smart = showing mental alertness, calculation and resourcefulness; stupid = neither wise nor smart, even unable to understand obvious truths.

It is not enough for a full-fledged politician to be good in practical matters. He is also bound to be a person with wide theoretical perspectives, and those need alertness and a skill in calculation. Simply, he is bound to be smart.

Among the most severe tests of a politician’s intelligence is his capability of responding to extreme situations, especially managing crises. As for the present global crisis, it requires an intelligent choice between two alternative and opposite sets of measures. Each of them results from a different economic worldview: one defended by John Maynard Keynes, the other by a group of scholars called the Austrian School, whose eminent representative was Friedrich Hayek, who entered a famed debate of many years with Keynes regarding the relations between the free market and government policies.

Now the question arises: which is smarter for a politician: to follow Keynes’s advice on the economy, or that due to Hayek? The controversy between these two intellectual leaders is regarded as a central debate in 20th-century economic thought. The issue gets most urgent in a time of global economic crisis. Some economists and politicians see the way out in the Keynesian strategy, while others claim the contrary. Which choice would be smarter?

Let me encourage each of this blog’s readers to consider their own choice: which side do you take, that of Keynes, or that of Hayek? Or else both, depending on the variable economic, social and political circumstances of the time in which anti-crisis measures should be taken?

To assist your reflections, two documents may prove helpful. One of them is a film on the Keynes-Hayek controversy with some introductory comments. The other is a comment on Nouriel Roubini’s considerations regarding the circumstances to be taken into account when deciding between Keynes and Hayek.

Posted in Rationalism in Politics

Roubini’s conceptual apparatus
for addressing the crisis

The core of Nouriel Roubini’s brilliant book “Crisis Economics” (2011) may be summed up to the following effect: an important tip on how to rescue the world economy (and thereby Europe’s economic integration) may come from an unexpected source, to wit, from combining two approaches so far regarded as antithetical to each other: that of Hayek (with the whole Austrian School) and that of Keynes. Roubini is the one who recognized that they can and should be linked into a coherent system of thought.

This is convincingly expounded in Chapter 2, Section “To Austria and back” (p. 28), of the said volume. The key to such a synthesis lies in distinguishing between long-term strategy and short-term strategy. We read in that Section: “Joseph Schumpeter [a prominent Austrian economist] developed a powerful theory of entrepreneurship that is often distilled down to a pair of powerful words: creative destruction“; this is a sharper version of some views of Hayek. Roubini continues as follows.

In Schumpeter’s worldview, capitalism consists of waves of innovation in prosperous times, followed by a brutal winnowing in times of depression. This winnowing is to be neither avoided nor minimized: it is a painful but positive adjustment, whose survivors will create a new economic order.

In principle, Roubini approves such a harsh approach. This implies that no actions should be taken by governments or central banks to rescue entrepreneurs who collapse because of their recklessness; “in principle” means here “in the long run”. However, this position, claims Roubini, should be complemented by the following consideration.

The Austrian approach is misguided when it comes to short-term policies. As Keynes recognized, in the absence of government intervention, a crisis caused by financial excesses can become an outright depression, and what begins as a reasonable retreat from risk can turn into a rout. When the animal spirits [i.e., businessmen’s bold initiatives resulting in free market] of capitalism vanish, the creative destruction hailed by the Austrians can swiftly turn into a self-fulfilling collapse of private aggregate demand. As a consequence, distressed but still-solvent firms, banks, and households can no longer gain access to the credit necessary for their continued survival. It’s one thing if truly insolvent banks, firms, and individual households go under; it’s another altogether when innocent bystanders to an economic crisis are forced into bankruptcy because credit dries up.

In order to prevent this kind of collateral damage, it makes sense to follow the playbook devised by Keynes in the short term, even when the underlying fundamentals suggest that significant portions of the economy are not only illiquid but insolvent. In the short term, it is best to prevent a disorderly collapse of the entire financial system via monetary easing and the creation of bulwarks: via lender-of-last-resort support, for example, or capital injections into ailing banks. It is also best to prop up aggregate demand through stimulus spending and tax cuts. Doing so will prevent a financial crisis from turning into something comparable to the Great Depression.

To sum up, Roubini’s conceptual project consists in reconciling the two opposing theories by assigning each of them a different role in the fight against the global crisis. These theories prove to complement each other, if reasonably applied in changing circumstances. Hence, the lesson to be learnt is that both sets of proposals should be carefully considered, not by professionals alone, but by all responsible citizens as well.

One more lesson is worth considering, namely the occurrence of the word “reckoning” at the end of the section in the following context: “a necessary reckoning must take place over the longer term in order to achieve a return to prosperity”. The term “reckoning”, which means computation, is crucial for economics, and for understanding the mechanics of crises, as discussed in post III on the cognitive power of the free market.

It would be nice to sum up such a solemn discourse in a more relaxed mood. Fortunately, for this purpose we have a hilarious piece of music and verse in the best rap style, one in which Keynes’s and Hayek’s eagerness in defending their points is shown in a most amusing way. It is the video “Fear the Boom and Bust“. The title goes to the heart of crisis economics, since excessive and euphoric booms, called bubbles, do lead to depressing busts. After such a bust may come recovery, but the whole process is very costly. The Austrians, and Hayek above all, have become classics in the study of such cycles. Thus that funny film deals with serious economic issues.

A note about the book and Nouriel Roubini. The book is co-authored by Stephen Mihm. It bears the subtitle “A crash course in the future of finance”. On the cover, the name of Roubini is followed by the phrase “The seer who saw it coming”, taken from a review in “The New York Times”. In fact, Roubini was among the few who foresaw the disaster, having been much ahead of others in spotting this event. This is why his opinion and his analysis weigh so heavily now.

Posted in Rationalism in Politics

Is Christianity relevant to European integration?

§1. The impulse to include this question into the curriculum “European Integration” came from an unexpected result of the Polish parliamentary elections in October 2011. I mean the splendid performance of the anti-clerical party that wants to cut down the influence of the Catholic Church in Poland – Janusz Palikot’s movement. This fact should be seriously considered as a sign of the times. The fact that Mr. Palikot in a few months managed to win one tenth of the electorate for his programme, appealing mainly to young people, and all that in a country seen as strongly Catholic and conservative, is thought-provoking indeed.

This is all the more remarkable as it complies with the trend towards secularization within Europe as a whole. Thus there arises for Poland the dilemma: whether to join this trend or to oppose it? The latter option would accord with Polish historical tradition, but then, in turn, the question arises of how this tradition would face the challenges of modernisation. Should Polish people give up the quest for modernisation in favour of being true to their traditions (as claimed by some politico-religious circles)? Or depart from these national and religious traditions in favour of modernisation? Or else, try to reconcile both values? The third option presupposes the possibility of such a reconciliation. Is this supposition likely to be true? This is the question!

To address this question, we need, first, to get a clear insight into the idea of modernisation, and then to confront it with the Christian doctrine. I shall do the former, while the latter I expect readers to answer by themselves, owing to their own knowledge. In order to help with answering, I suggest a list of questions to encourage and guide the search for solutions.

§2. It is not by accident that in the title of this post I mention just the future. As for the past, it is by no means controversial that Christianity was among the main factors in the process of forming Europe, having remained in harmony with the other potent factors, such as the heritage of Greek and Hellenistic intellectual achievements, as well as the idea of a universal empire due to the lasting historical memory of the Roman Empire, its legal order, its power and its role in securing world peace (Pax Romana). This was a monumental historical scenario which started with the baptism of Clovis (496?), the first king of the Franks, and culminated in the crowning of Charlemagne as Holy Roman Emperor on Christmas Day 800 by Pope Leo III.

Within the thousand years following Clovis’ baptism, the Catholic Church was a leading force in the intellectual and political processes going on in Europe. The decline of this domination started because of two new factors which appeared on the European stage. These were: the religious Reformation, which was rather like a religious revolution, and the scientific revolution due to Copernicus and Galileo. The former subverted the religious foundations, the latter the intellectual foundations of the medieval order. Thus began the retreat of the Catholic Church from Europe’s historical stage. This process is going on to this day, leaving ever less room for the Church’s leadership and influence.

This is the historical context in which we are to try to understand the core of modernisation processes. Religious reformers introduced (even if not intentionally) the pluralism of theological doctrines, thus initiating the paradigm of pluralism, which evolved towards the freedom of all beliefs, including philosophical, political, artistic, and so on. Such pluralism is found among the main points of post-medieval modernity. Then modernisation means tending to pluralism wherever it is lacking. [1] Does the Catholic Church approve of such a multiple pluralism? (Bracketed numbers hint at the position of the given issue in the list of questions at the end of this post.)

The answer is to the effect that the Church does approve of pluralism after Vatican Council II, but she has done so under the pressure of secular thought, not on the basis of her own creed. This observation is backed by extensive evidence, in particular “The Syllabus of Errors Condemned by Pius IX”, published in 1864. It consists of 80 items, each of them being a view which the Pope condemns, hence the contrary view is the one demanded by the Church. Direct or indirect condemnation of pluralism is stated in items 11-17, 22, 24, 48, 57; items 77-80 express a strong rejection of any form of liberalism. Let it be added that item 55 declares the separation of state and Church to be contrary to the Catholic faith, and item 63 forbids any opposition against crowned rulers.

If the popes who presided over Vatican Council II had lived a century earlier, they would have been condemned as heretics by Pope Pius IX for being at variance with all the Syllabus points listed above, and many others. However, the Catholic Church deserves appreciation for her trying to match the standards of modernity created in the age of Enlightenment. Even if the steps toward such adjustment come late, delayed by the natural inertia of a great institution, they give hope that in the future the Church may support the modernization of society.

Posted in Rationalism in Politics

On advancing frontiers of science
A pragmatist approach

Part A is an abstract of the paper bearing the same title; its full text is in the “Our Pub” Library. Part B is added as a supplement.

A. The pragmatist approach, as stated in this essay, takes into account two features of knowledge, both having an enormous potential for growth: the scope of science, whose frontiers can be advanced infinitely, and the firmness of its propositions, which grows as once-attained frontiers are consolidated. An opposite view may be called limitativist, as it conservatively sticks to some a priori limiting principles which do not allow progress in certain directions. Some of them influence science from outside, like ideological constraints; others are found inside science itself.

The latter can be exemplified by principles like these: (1) there can be no action at a distance; (2) there are no necessary truths; (3) there are no abstract objects. The first might have limited physics by rejecting the theory of gravitation. The second entails that arithmetical propositions are either devoid of (classical) truth or are not necessary; this would limit arithmetic to the role of a mere calculating machine, without giving any insights into reality. The third principle, for instance, limits logic to the first-order level (since at the second-order level variables range over abstract sets). The history of ideas shows that such limiting principles, had they been obeyed, would have hindered some great achievements of science. This is why we should not acknowledge any such principle as necessarily true, that is, as winning in confrontation with any view contrary to it. Such principles should compete on equal terms with other propositions in obtaining as high a degree of epistemic necessity as they may prove worthy of.

To the core of the pragmatist approach there belongs treating epistemic necessity as a gradable attribute of propositions. In accordance with ordinary usage, “necessary” is a gradable adjective, having a comparative form. The degree of epistemic necessity of a scientific statement depends on how much it is needed by the rest of the field of knowledge (Quine’s metaphor). The greater the damage to knowledge that would be caused by getting rid of the point in question, the greater its epistemic necessity. At the top of such a hierarchy are the laws of logic and arithmetic. Among physical laws, at a very high level there is the law of gravitation, owing both to its universality, that is, a colossal scope of possible applications (advancement of frontiers), and to its having been empirically confirmed in innumerable cases (consolidation of frontiers). Such a success has proved possible owing to the bold transgression of limiting principle 1 (see above), and this has resulted in so high a degree of unavoidability.

A motto for pragmatism can be found in the Chinese saying: “Black cat or white cat: if it can catch mice, it’s a good cat“. Thus, for a pragmatist, either the axiom of choice or a higher-order logic is a good cat, as it enables results vital for science, not attainable otherwise.

  * * *

B. There is a computational motivation for pragmatism that can be rendered as follows. Let us not be discouraged by philosophical principialists’ aversion to certain theories, provided these theories either make possible, or dramatically speed up, some computational procedures crucial for science. In the paper, two such procedures are considered, one of them presupposing the axiom of choice, the other higher-order logics.

The former is hidden at the bottom of skolemization or (in Hilbert’s approach) of the use of the epsilon operator. Such devices are meant to algorithmise proofs in predicate logic (e.g., in the form of Beth semantic tableaux) up to the highest possible degree (given the undecidability of logic). Just owing to such algorithmisation, we become able to harness computers so that they work for us, proving theorems or assisting us in proving them.
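To make the skolemization step concrete, here is a minimal sketch in Python (a toy representation of my own, not the paper’s formalism): each existential quantifier is replaced by a Skolem function applied to the universally quantified variables in whose scope it occurs. The sketch ignores variable shadowing and other subtleties of a full implementation.

```python
from itertools import count

def substitute(formula, var, term):
    """Replace every occurrence of variable `var` by `term` in a formula
    represented as nested tuples and strings."""
    if isinstance(formula, tuple):
        return tuple(substitute(part, var, term) for part in formula)
    return term if formula == var else formula

def skolemize(formula):
    """Eliminate existential quantifiers from a prenex-like formula.
    Formulas are nested tuples: ('forall', v, body), ('exists', v, body),
    or atoms like ('P', 'x', 'y'). Skolem functions are named f0, f1, ..."""
    fresh = count()  # supply of fresh Skolem-function names
    def go(f, universals):
        if f[0] == 'forall':
            _, v, body = f
            return ('forall', v, go(body, universals + (v,)))
        if f[0] == 'exists':
            _, v, body = f
            # Skolem term: a fresh function applied to the universals in scope
            skolem_term = (f'f{next(fresh)}',) + universals
            return go(substitute(body, v, skolem_term), universals)
        return f  # an atom: no quantifiers inside
    return go(formula, ())

# "For all x there exists y such that P(x, y)"  becomes  "for all x, P(x, f0(x))"
print(skolemize(('forall', 'x', ('exists', 'y', ('P', 'x', 'y')))))
```

With the existential variable gone, a tableau or resolution procedure no longer has to guess witnesses; this is the sense in which skolemization serves the algorithmisation of proof.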

Kurt Gödel (1936) pioneered the idea that (1) some proofs which cannot be carried out in first-order logic become feasible in second-order logic, and (2) others, which at the first-order level would require time available neither to humans nor to computers, become tractable in an accessible time when performed at higher levels. In my paper I report on a very instructive exemplification of point 2, given by George Boolos with a human-made formalization (“A Curious Inference”, 1987), and hint at its continuation in current research in mechanized theorem proving.

To sum up, this case illustrates that the difference between limits and frontiers in science is in a way reflected in a crucial difference between robots and human beings. A robot equipped with algorithms based, say, on first-order logic is strictly limited, being unable to perform proofs requiring higher orders. It is up to a human researcher, who sees himself as a computational pragmatist, to create second-order logic and some problem-solving programs rooted in it; and still to face the risk of being accused of Platonism by learned colleagues. If second-order logic fails to solve algorithmically the problems arising in it, the pragmatic human mind is free to give life to an algorithmised third-order logic, and so on – potentially – up to the infinity toward which such a sequence of successively advancing frontiers tends.

Posted in Rationalism in Science

Abstract of “Mathesis Universalis Revisited”

This post completes the paper “Mathesis Universalis revisited owing to Cantor, Frege, Einstein and Gödel“, offered as a contribution to the Poznań, October 2011, Conference on the Philosophy of Mathematics and Informatics. It is both an abstract and an additional comment involving questions to initiate a discussion.

1. The paper argues that the 17th-century program of Mathesis Universalis (MU) is being accomplished in our times, though in a fairly different way; the difference is thought-provoking too, and this is the other reason to revisit that famous project.

2. In the new MU, unlike the old one, we have a clear awareness of the enormous complexity of some algorithms, of the physical universe, the human mind, and human civilization, this complexity resulting in some unsolvable problems; such critical awareness was alien to our ancestors. It was Einstein who, with general relativity, paved the way to the idea of the evolving universe, and soon people conceived that its evolution tends toward ever more complex structures, up to living cells and further. Where complexity reaches heights involving infinity (as in Turing’s diagonal argument revealing the existence of unsolvable problems), the way to its treatment has been prepared by Cantor.

3. Leibniz’s project of a universal language, which due to its precision would make it possible to solve any well-stated problem whatever, has been carried out by Frege, followed by Hilbert, as far as possible, up to the point at which Gödel (addressing Hilbert’s problems) could discover its dramatic limitations (they would have surprised Leibniz enormously, had he got a message about them).

4. However, Gödel’s results do not imply an epistemological pessimism. In his short but enormously seminal communication “Über die Länge von Beweisen” (1936) he reveals the perspective of a never-ending process, fruitful at each successive stage, of discovering ever more sophisticated mathematical truths. Those, let me add, should enable handling ever more complex phenomena, e.g. providing more and more efficient mathematical models and algorithms for natural science, economics, etc.

5. Gödel’s evolutionary vision of the mind’s growing ability to grasp ever more complex mathematical structures may be seen as parallel to the evolutionary vision of the universe as producing ever more complex physical and intellectual structures. Since the latter was initiated by Einstein’s relativity, the paper starts from a picture of them both merged in a thoughtful talk.

======================================================

Questions addressed to expert critics

A. Do you agree that in the information age the idea in the focus of our worldview is that of computational complexity? If so, how should such a worldview be named and explained? Do you think that the reference to the MU project, as made in this paper, is a useful step towards the explanation?

B. As for naming, some authors suggest using the term informational worldview; see e.g. Hector Zenil’s “Seth Lloyd’s quantum universe view“. The Polish counterpart “światopogląd informatyczny” appeared (presumably for the first time in Polish literature) in the book by Witold Marciszewski and Paweł Stacewicz “Umysł – komputer – świat. O zagadce umysłu z informatycznego punktu widzenia” (2011). Do you regard these terms, English and Polish, as relevant for the worldview focussed around the concept of computational complexity?

C. Do you share the above (item 4) interpretation of Gödel’s paper on the length of proofs?

D. Do you endorse the opinion (item 5) that Einstein has indirectly contributed to the informational worldview with his idea of the evolving universe, provided this evolution tends toward growing computational complexity?

Witold Marciszewski

Posted in Rationalism in Science

Europe’s Lead — Questions

[stextbox id=”info”]Problems for exercises to the lecture “Europe’s Lead”[/stextbox]

§1.1

a) Find a counterpart of the term “Homeland Europe” in your native language.

b) Define the difference between HE and EH options. Which option is
closer to those called “eurosceptics”?

c) Konrad Adenauer’s option: HE or EH? Mention his most important political
achievements. What is the worldview which motivated his activities called?

d) Helmut Kohl’s option: HE or EH? Mention his most important political
achievements. What is the worldview which motivated his activities called?

e) Charles de Gaulle’s option: HE or EH? Mention his motivation and his
most important political achievements.

f) Which political parties in Poland are closer to HE, which to EH?

g) State your own option and its motivation.

§1.2

a) Which projects or decisions of the EU (besides the monetary
union mentioned) are intended to carry out the HE idea?

b) State and motivate your own attitude to Poland’s joining the Eurozone.

§1.3

a) Give a definition of axiology, and of cognitive and noncognitive theories of
values. Which theory do you prefer?

b) Are there properties of your country or events in its history which you
highly appreciate or are proud of? Give some examples.

§1.4

Consider the last statement: “community’s integration is [always?
sometimes? never?] based on some belief that WE have a lead over other
communities”.

a) In which version (from among the three listed in brackets) do you regard
this proposition as true?

b) Support your answer with a suitable argument.

§1.5

a) Let “BFG” denote Britain, France and Germany jointly. Let “pop” denote
the joint population of BFG, and “con” their joint financial contribution to
the UN’s budget. How many times greater is the BFG share in con than their
share in pop?
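The intended computation can be sketched as follows; the figures below are placeholder assumptions of mine, not official statistics, and should be replaced by current data when doing the exercise.

```python
# Placeholder (assumed) figures - NOT official statistics; substitute current
# data on populations and UN regular-budget assessment shares.
world_pop = 7.0e9                  # assumed world population
bfg_pop = 62e6 + 65e6 + 82e6       # assumed populations of Britain, France, Germany
con_share = 0.066 + 0.061 + 0.080  # assumed UN budget shares of Britain, France, Germany

pop_share = bfg_pop / world_pop    # BFG share in world population ("pop")
ratio = con_share / pop_share      # how many times "con" share exceeds "pop" share
print(round(ratio, 1))
```

Whatever the exact current figures, the point of the exercise is this ratio of shares, a single division once the two shares are known.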

b) Is there any relation between the BFG wealth (which makes them such
generous payers to the UN budget) and their intellectual achievements? If you
answer in the affirmative, give some examples of great achievements in
science, technology, economy and politics in each of the countries in
question (at least one example from each field).

§2.1

a) How do you understand the metaphoric description “civilizational avant-garde”
as applied to Europe’s achievements?

b) Point to historical examples of events and processes referred to as
Europe’s failures to match moral standards. E.g., for the crusades mention some
dates, motives, results, and your own evaluation of these events (you may
express your disagreement with the author’s negative opinion, and argue for your
own).

§2.2

a) What is specific in genocide in comparison with other mass crimes?

b) Make a short report (one paragraph) on mass crimes in Rwanda in 1994 to
decide whether they fall under the concept of genocide.

c) Make a short report (1-3 paragraphs) on the history of abolishing slavery in
Europe.

d) Why is slavery inconsistent with the idea of human rights?

e) Define, at least partially (i.e. not listing all conditions), the concept
of international solidarity.

f) Give examples of international solidarity in EU and USA policies.

g) To exemplify the successes of European science, quote the Newtonian law of
gravitation (as a mathematical formula), and explain its use in the exploration
of cosmic space.
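As a reminder for this exercise, Newton’s law of universal gravitation states that two bodies of masses $m_1$ and $m_2$ at distance $r$ attract each other with the force

```latex
F = G\,\frac{m_1 m_2}{r^2}
```

where $G$ is the gravitational constant. It is this law that makes it possible to compute orbits and escape velocities, hence its role in the exploration of cosmic space.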

§2.3  

a) Give historical evidence of rising moral standards and their
accomplishment in Europe since the Middle Ages. A suggestion: start from the
Council of Constance, 1414: how its Polish participants contributed to the
theory and practice of human rights.

b) Find a definition and examples of exponential growth.
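As a starting point for this exercise, here is a minimal illustration (a simple sketch of my own, not taken from the lecture): exponential growth multiplies a quantity by a constant factor in each period, as opposed to adding a constant amount.

```python
def exponential(start, factor, periods):
    """Return the successive values of a quantity multiplied by a
    constant factor in each period - the mark of exponential growth."""
    values = [start]
    for _ in range(periods):
        values.append(values[-1] * factor)
    return values

# Doubling (factor 2) over five periods, starting from 1:
print(exponential(1, 2, 5))  # → [1, 2, 4, 8, 16, 32]
```

Classic examples include compound interest, unchecked population growth, and the grains-on-a-chessboard legend.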

[To be continued.]

Posted in Uncategorized