Peter Wegner and Dina Goldin have for more than a decade been publishing papers and books arguing, primarily, that the Church-Turing thesis is often misinterpreted within the CS theory community and elsewhere. That is, it is presented as encompassing all of computation, when in fact it applies only to the computation of functions, which is a very small subset of computation. Instead, they suggest we ought to aim to model interactive computation, where communication with the outside world happens during the computation.

The only critique of this work I have come across is in the Lambda the Ultimate forum, where someone lambasted these authors for continually publishing what is already well known. My question, then, is: is there any more critique of this line of thinking, in particular of their Persistent Turing Machines? And if not, why is it apparently studied so little (I could be mistaken)? Lastly, what does the notion of universality mean in an interactive domain?

asked August 21 '12 at 23:58

I believe Andrej and Neel have explained here that the answer is negative for the higher-type function computation problems. So basically the Church-Turing Thesis is about *number* function computation problems. The usual equivalences between models of computation don't hold over higher types. (However, as I understand it, this is more about the interaction mechanisms and how higher-type objects are represented than about the computational power of the models.) (reposting to fix a couple of typos) Kaveh August 23 '12 at 19:20

actually Wegner's first paper along these lines appears to date to 1996-1997: Why interaction is more important than algorithms, or The Paradigm Shift from Algorithms to Interaction.

later in the paper there are references to Plato's cave, the Turing tarpit (?), Kant's Critique of Pure Reason, Marx's dialectical logic, Descartes, Penrose, Searle. so perhaps it should be viewed as bordering on the philosophical and less in the vein of technical/mathematical TCS. no math, no lemmas or proofs or theorems. while perhaps a bit grandiose, he seriously seeks to understand the big picture of the CT thesis with respect to history etc. vzn August 31 '12 at 1:26

## 7 Answers

Here's my favorite example. Suppose I spent ten years publishing books and papers arguing that, contrary to theoretical computer science's dogma, the Church-Turing Thesis fails to capture all of computation, because *Turing machines can't toast bread*. Therefore, you need my revolutionary *new* model, the Toaster-Enhanced Turing Machine (TETM), which allows bread as a possible input and includes toasting it as a primitive operation.

You might say: sure, I have a "point", but it's a completely uninteresting one. No one ever claimed that a Turing machine could handle every possible interaction with the external world, without first being hooked up to suitable peripherals. If you want a TM to toast bread, you need to connect it to a toaster; then the TM can easily handle the toaster's *internal logic* (unless this particular toaster requires solving the halting problem to determine how brown the bread should be!). In exactly the same way, if you want a TM to handle interactive communication, then you need to connect it to suitable communication devices, as Neel discussed in his answer.

In neither case are we saying anything that wouldn't have been obvious to Turing himself.

So, I'd say the reason there's been no "followup" to Wegner and Goldin's diatribes is that TCS has known how to model interactivity whenever needed, and has happily done so, since the very beginning of the field.

**Update (8/30):** A related point is as follows. Does it ever give the critics pause that, here in the Elite Church-Turing Ivory Tower (the ECTIT), the major research themes of the last two decades have included interactive proofs, multiparty cryptographic protocols, codes for interactive communication, asynchronous protocols for routing, consensus, rumor-spreading, leader-election, etc., and the price of anarchy in economic systems? If putting Turing's notion of computation at the center of the field makes it so hard to discuss interaction, how is it that so few people have noticed?

**Another Update:** To the people who keep banging the drum about higher-level formalisms being vastly more intuitive than TMs, and no one thinking in terms of TMs as a practical matter, let me ask an extremely simple question. What is it that lets all those high-level languages *exist* in the first place, that ensures they can always be compiled down to machine code? Could it be... err... **THE CHURCH-TURING THESIS**, the very same one you've been ragging on? To clarify, the Church-Turing Thesis is not the claim that "TURING MACHINEZ RULE!!" Rather, it's the claim that any reasonable programming language will be equivalent in expressive power to Turing machines, and *as a consequence*, that you might as well think in terms of the higher-level languages whenever it's more convenient to do so. This, of course, was a radical new insight 60-75 years ago.

**Final Update:** I've written a blog post for further discussion of this answer.

answered August 25 '12 at 17:18

There's a substantial difference between toasters and interaction: every model of computation has some IO mechanism. Toasters feature only rarely. Some models of computation handle IO naively: for instance Turing machines deal with IO only informally. This isn't problematic where computation is understood as functional, i.e. starting with an input and ending with an output, as it is with Turing machines. However, this naivety becomes troublesome when you want to deal with genuinely concurrent phenomena, e.g. when are two interactive computations equal? (Continued below.) Martin Berger August 29 '12 at 9:50

I'd argue that interaction is more fundamental than functional computation on binary strings (as e.g. modelled by Turing machines), and that computation should be viewed as finitary, rule-governed interaction. Martin Berger August 29 '12 at 10:03

*how can we debate about a so-called Thesis named after both Turing and Church, neither of whom actually stated in their own writing the thesis as it has later been interpreted/evolved?* — See also: Euler's formula, Gaussian elimination, Euclid's formula, the Pythagorean theorem. Shaun August 30 '12 at 11:44

I think the situation is fairly simple.

All interactive formalisms can be simulated by Turing machines.

TMs are an inconvenient language for research on interactive computation (in general) because the interesting issues get drowned in the noise of encodings.

Everyone working on the mathematisation of interaction is aware of this.

Let me explain this in more detail.

Turing machines can clearly model all existing interactive forms of computing in the following sense: choose some encoding of the relevant syntax as binary strings, write a TM that takes as input two encoded interactive programs P, Q (in a chosen model of interactive computation) and returns true exactly when there is a one-step reduction from P to Q in the relevant term rewriting system (if your calculus has a ternary transition relation, proceed mutatis mutandis). So you have a TM that does a step-by-step simulation of computation in the interactive calculus. Clearly the pi-calculus, ambient calculus, CCS, CSP, Petri nets, timed pi-calculus and any other interactive model of computation that has been studied can be expressed in this sense. This is what people mean when they say interaction doesn't go beyond TMs. If you can come up with an interactive formalism that is physically implementable but not expressible in this sense, please apply for your Turing award.
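As a toy illustration of this encoding argument (the calculus, the tuple representation, and the function names here are invented for the example; they are not any standard encoding), one can sketch a decision procedure for one-step reduction in a tiny CCS-like fragment:

```python
# Toy CCS-like processes as nested tuples:
#   ("nil",)        inactive process
#   ("out", a, P)   output on channel a, then continue as P
#   ("in",  a, P)   input on channel a, then continue as P
#   ("par", P, Q)   parallel composition of P and Q

def steps(p):
    """Enumerate all processes reachable from p in one reduction step."""
    results = []
    if p[0] == "par":
        left, right = p[1], p[2]
        # communication:  out(a).P | in(a).Q  ->  P | Q  (and symmetrically)
        if left[0] == "out" and right[0] == "in" and left[1] == right[1]:
            results.append(("par", left[2], right[2]))
        if left[0] == "in" and right[0] == "out" and left[1] == right[1]:
            results.append(("par", left[2], right[2]))
        # congruence: a step inside either component is a step of the whole
        results += [("par", l, right) for l in steps(left)]
        results += [("par", left, r) for r in steps(right)]
    return results

def one_step(p, q):
    """The decidable relation described above: does p reduce to q in one step?"""
    return q in steps(p)
```

A Turing machine deciding the same relation on encoded strings is exactly this program, modulo the tedious but routine encoding of tuples as bits.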

N. Krishnaswami describes another approach to modelling interactivity, using oracle tapes. This approach differs from the interpretation of the reduction/transition relation above, because the notion of TM is changed: we move from plain TMs to TMs with oracle tapes. This approach is popular in complexity theory and cryptography, mostly because it allows researchers in these fields to transfer their tools and results from the sequential to the concurrent world.

The problem with both approaches is that the genuinely concurrency-theoretic issues are obscured. Concurrency theory seeks to understand interaction as a phenomenon sui generis. Both approaches via TMs simply replace a convenient formalism for expressing an interactive programming language with a less convenient formalism.

In neither approach do the genuinely concurrency-theoretic issues, i.e. communication and its supporting infrastructure, have a direct representation. They are there, visible to the trained eye, but encoded, hidden in the impenetrable fog of encoding complexity. So both approaches are bad at mathematising the key concerns of interactive computation. Take for example what is probably the best idea in the theory of programming languages of the last half-century, Milner et al.'s axiomatisation of scope extrusion (which is a key part of a general theory of compositionality):

$$P \mid (\nu x)Q \;\equiv\; (\nu x)(P \mid Q) \quad\text{if } x \notin \mathit{fv}(P)$$

How beautifully simple this idea is when expressed in a tailor-made language like the pi-calculus. Doing the same thing using an encoding of the pi-calculus into TMs would probably fill 20 pages.
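For concreteness, here is a hedged sketch (the toy term representation is invented for this example) of the side condition and the rewrite itself, showing how little machinery the rule needs once names, binding, and parallel composition are primitives:

```python
# Toy pi-calculus terms as tagged tuples; names are strings.
#   ("nil",)          ("par", P, Q)          ("nu", x, P)     x bound in P
#   ("out", a, P)     free use of name a     ("in", a, x, P)  x bound in P

def fv(p):
    """Free names of a term."""
    tag = p[0]
    if tag == "nil":
        return set()
    if tag == "par":
        return fv(p[1]) | fv(p[2])
    if tag == "nu":
        return fv(p[2]) - {p[1]}
    if tag == "out":
        return {p[1]} | fv(p[2])
    if tag == "in":
        return {p[1]} | (fv(p[3]) - {p[2]})

def extrude(term):
    """Apply  P | (nu x)Q  ==  (nu x)(P | Q)  when x is not free in P."""
    if term[0] == "par" and term[2][0] == "nu":
        p, (_, x, q) = term[1], term[2]
        if x not in fv(p):       # the side condition  x not in fv(P)
            return ("nu", x, ("par", p, q))
    return term                  # side condition fails: leave unchanged
```

The whole law is a few lines; the same statement about an encoding of processes as TM tape contents would have to thread through the details of that encoding at every step.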

In other words, the invention of explicit formalisms for interaction has made the following contribution to computer science: the direct axiomatisation of the key primitives of communication (e.g. input and output operators) and of the supporting mechanisms (e.g. fresh name generation, parallel composition etc.). This axiomatisation has grown into a veritable research tradition with its own conferences, schools, and terminology.

A similar situation obtains in mathematics: most concepts could be written down in the language of set theory (or topos theory), but we mostly prefer higher-level concepts like groups, rings, topological spaces and so forth.

As far as number computability goes (i.e., computing functions $\mathbb{N} \to \mathbb{N}$), all known models of computation are equivalent.

However, it's still true that Turing machines are fairly painful for modelling properties like interactivity. The reason is a bit subtle, and has to do with the kinds of questions we want to ask about interactive computations.

The usual first pass at modelling interaction with TMs is to use oracle tapes. Intuitively, you can think of the string printed on the oracle tape as a "guess" of the Turing machine's I/O interaction with the environment. However, consider the kinds of questions we want to ask about interactive programs: for example, we may want to know that a computer program won't output your financial data unless it receives your password as input, and furthermore that programs don't leak information about passwords. Talking about this kind of constraint is very painful with oracle strings, because it reflects a temporal, epistemic constraint on the trace of the interaction, and the definition of oracle tapes asks you to supply the whole string up front.
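One way to see the temporal point is to model the environment as a lazy stream that the program consumes incrementally, rather than a string supplied up front. In the following sketch (the password, messages, and function name are invented for the example), the constraint "no secret output before the password arrives" is a visible property of the interaction order:

```python
def secure_session(inputs):
    """Consume environment messages lazily; release the secret only after
    the password has actually been received earlier in the interaction."""
    authenticated = False
    for msg in inputs:
        if not authenticated:
            if msg == "hunter2":      # hypothetical password
                authenticated = True
                yield "welcome"
            else:
                yield "denied"
        elif msg == "get-data":
            yield "secret data"
```

With an oracle-tape formulation the entire input exists in advance, and stating that "secret data" may appear in the output only after the password has appeared in the input requires reasoning about prefixes of the trace; with the stream formulation the constraint is enforced by construction.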

I think getting this right is doable, and basically amounts to (1) thinking of oracle strings not as a set, but as a topological space whose open sets encode the modal logic of time and knowledge that you want to model, and (2) ensuring that the theorems you prove are continuous with respect to this topology, viewing predicates as continuous functions from oracle strings to truth values regarded as the Sierpinski space. I should emphasize that this is a guess, based on the analogy with domain theory. You'd have to work out the details (and probably submit them to LICS or something like that) to be sure.

As a result, people prefer to model interaction using things like the Dolev-Yao model, in which you explicitly model the interaction between computers and the environment, in order to explicitly characterize what the attacker knows. This makes it much easier to formulate suitable modal logics for reasoning about security, because the state of the system as well as the state of the environment are represented explicitly.

answered August 23 '12 at 15:04

reading Lance Fortnow's blog, just discovered this recent/nice/long survey article on the subject with many perspectives & refs [1] (which has not been cited here so far); it includes Wegner/Goldin's perspective (among many others). I'll just quote Fortnow's excellent/emphatic summary/declaration/assertion of the **near-official/uniform/unanimous TCS party line:**

"A few computer scientists nevertheless try to argue that the [Church-Turing] thesis fails to capture some aspects of computation. Some of these arguments have been published in prestigious venues such as Science, Communications of the ACM, and now in a whole series of papers in ACM Ubiquity. **Some people outside of computer science might think that there is a serious debate about the nature of computation. There isn't.**"

[1] Turing's Titanic Machine, by S. Barry Cooper, CACM Vol. 55

answered August 29 '12 at 21:51

I'm very much in agreement with Aaronson's comments.

I don't understand Milner's work (e.g. the pi calculus, which Milner invented to describe communicating processes). It is quite unreadable to me, like almost all papers on maths and logic, such as Lambek's theories. I suspect that Lambek's ideas are very good, but I'd need to see them translated into some kind of pidgin English before I can read them.

I'm thrown by Milner's comment that the lambda calculus is fine for "sequential processes" but that something more is needed for communicating processes.

My (perhaps naive) point of view was that this can't be so, because the pi-calculus is Turing complete, and can therefore be translated mechanically into another Turing-complete notation, i.e. the lambda calculus. So Milner's pi-calculus notation could be translated automatically into the lambda calculus.

It seems that I have identified a project: intuitively, it should be possible to mechanically translate from one Turing-complete language to another. Is there an algorithm to do this? I will have to look on Google. Maybe this is incredibly hard to do, and as hard as the halting problem.

I searched yesterday on the internet, and found papers on models of the lambda calculus. I was surprised to find that this seems to be a very deep rabbit hole.

answered February 26 at 12:11

Here's the thing: once you add (pure) interactivity, formality goes out the window. It's no longer a "closed" system. The question then is, what is the notion of computation once interactivity enters in? The answer: well, either another user/machine is substituting for part of the computation (which could be described by just another, bigger, state machine), or you're no longer in a formally-definable system and you're now playing a game... in which case there's no application of the Church-Turing thesis.

answered August 30 '12 at 2:29

Interactive models of computation like process calculi are games in the sense of game semantics. Martin Berger August 30 '12 at 6:22

Human behaviour doesn't matter. What matters is that computable interactive devices react in an algorithmic, mechanical manner to their inputs. Martin Berger Sep 3 '12 at 13:05

@MarkJ, I don't understand what you're saying. The interactive approach simply states that a system is computable if it reacts to its inputs in a mechanical way, using finite resources. Yes, if the other party in the interaction does something crazy, like inputting Chaitin's Omega, then the mechanical device can do something crazy, like computing the halting problem. So what? Martin Berger Sep 4 '12 at 9:58

skimming Wegner's paper, it's clear he's being a bit melodramatic and contrarian, but he has a point. the future of computing is arguably much more significantly centered in robotics, AI, or datamining (of vast real-world "big data"), which he doesn't seem to mention much by name, but which he is clearly alluding to with his model. these areas largely focus on the world outside of the TM's inputs and outputs.

historically this also went by the name cybernetics, as invented/formulated by Wiener. the point of cybernetics is that inputs and outputs are not just digital and without meaning, as one might conclude looking at a TM, but have real-world implications/effects/causes etc., and the machine forms a feedback loop with the environment.

so I would argue that TMs and cybernetics form a very natural synergy or symbiotic relationship, so to speak. but this is not a radical assertion, and what Wegner announces with great fanfare is, phrased in different terms, not so controversial or novel. in other words, Wegner seems to be deliberately styling himself as an intellectual or academic iconoclast. and so who is the TCS community to deny him that melodramatic framing? nonetheless see [2] for a serious rebuttal.

Wegner's example of driving a car is very relevant; other key recent breakthroughs can be cited:

- the DARPA road race challenge and Google's closing in on the technology of a self-driving car.[3]
- the case of the Deep Blue AI chess victory over Kasparov
- the recent IBM Watson Jeopardy! challenge victory
- the increasingly autonomous Mars rover
- a recently announced breakthrough in unsupervised object recognition by Google.[4]
- commercialized speech recognition

but it's true: what started out decades ago as mere theory with TMs is now a very real-world phenomenon, and segments of the ivory-tower TCS community might be in some resistance, or even denial, of this fact and the associated, fundamental [near-Kuhnian] transformation and shift "currently in play". this is somewhat ironic because Turing was very applied in many of his own perspectives and studies, for example his interest in an operational AI test (the Turing test), chemical dynamics, chess-solving computation, etc. [5]

you can also see this in microcosm on this site, in clashes over how to define the scope of the field, and heated arguments over whether a particular seemingly innocuous tag called application-of-theory is legitimate.[7]

and let's note that TCS is in fact studying many interactive models of computation, and much key research is happening in this area, in particular interactive proof systems, in terms of which all the important complexity classes can be defined.[6]