Source: https://www.quantamagazine.org/alan-turings-most-important-machine-was-never-built-20230503/#comments

Computation is a familiar concept most of us understand intuitively. Take the function f(x) = x + 3. When x is three, f(3) = 3 + 3. Six. Easy. It seems obvious that this function is computable. But some functions aren't so simple, and it's not so easy to determine whether they can be computed, meaning they may never give us a final answer.

In 1928, the German mathematicians David Hilbert and Wilhelm Ackermann proposed a question called the Entscheidungsproblem ("decision problem"). In time, their question would lead to a formal definition of computability, one that allowed mathematicians to answer a host of new problems and laid the foundation for theoretical computer science.

The definition came from a 23-year-old grad student named Alan Turing, who in 1936 wrote a seminal paper that not only formalized the concept of computation but also answered a fundamental question in mathematics and created the intellectual foundation for the invention of the electronic computer. Turing's great insight was to provide a concrete answer to the computation question in the form of an abstract machine, later named the Turing machine by his doctoral adviser, Alonzo Church. It's abstract because it doesn't (and can't) physically exist as a tangible device. Instead, it's a conceptual model of computation: If the machine can calculate a function, then the function is computable.

Here's how it works. A Turing machine can read and alter symbols on an infinitely long tape, as dictated by a table of rules. The tape is made up of "cells," each of which can store exactly one symbol, and the machine reads and rewrites the cells' contents with a tape head. Each rule in the table determines what the machine should do based on both its current state and the symbol it's reading. The machine can enter a final state ("accept" or "reject"), upon which it halts, accepting or rejecting the input. Or it falls into an infinite loop and continues reading the tape forever.

The best way to understand a Turing machine is to consider a simple example. Let's imagine one that's designed to tell us whether a given input is the number zero. We'll use the input number 0001 accompanied by blank symbols (#), so "#0001#" is the relevant part of our tape.

The machine starts in the initial state, which we'll call q0. It reads the leftmost cell on our tape and finds a blank space. The rules say, "When in state q0, if the symbol is #, leave it as is, move one cell to the right, and change the machine's state to q1." After this step, the machine is in state q1, and its head is reading the second symbol, 0.

Now we look for a rule that applies to these conditions. We find one that says, "Remain in state q1 and move the head one cell to the right." This leaves us in the same position (in state q1, reading "0"), so we keep moving to the right until the head finally reads a different number, the 1.

When we consult the table again, we find a new rule: "If we encounter a 1, transition to q2, which is a 'reject' state." The machine stops, answering "No" to the original question, "Is '0001' zero?"

If instead the input is "#0000#," the machine will encounter a # after all those zeros. When we consult the table, we find a rule saying that this means the machine enters state q3, an "accept" state. Now the machine answers "Yes" to the question "Is '0000' zero?"
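For readers who like to tinker, the walkthrough above translates almost directly into code. What follows is a minimal sketch in Python, assuming the rule table described in the article (states q0 through q3 and the symbols #, 0 and 1); the RULES dictionary and the run() helper are illustrative choices, not part of Turing's formalism.

# The rule table from the walkthrough: each (state, symbol) pair maps to
# (next state, symbol to write, head movement).
RULES = {
    ("q0", "#"): ("q1", "#", +1),  # skip the leading blank
    ("q1", "0"): ("q1", "0", +1),  # keep scanning zeros to the right
    ("q1", "1"): ("q2", "1", 0),   # found a 1, so the input is not zero: reject
    ("q1", "#"): ("q3", "#", 0),   # nothing but zeros before the blank: accept
}
ACCEPT, REJECT = "q3", "q2"

def run(tape, state="q0", head=0):
    # Apply rules until the machine halts in an accept or reject state.
    tape = list(tape)
    while state not in (ACCEPT, REJECT):
        state, tape[head], move = RULES[(state, tape[head])]
        head += move
    return state == ACCEPT

print(run("#0001#"))  # False: the machine halts in q2 and rejects
print(run("#0000#"))  # True: the machine halts in q3 and accepts

Running it on "#0001#" ends in the reject state q2, and on "#0000#" in the accept state q3, matching the walkthrough above.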
With his abstract machine, Turing established a model of computation to answer the Entscheidungsproblem, which formally asks: Given a set of mathematical axioms, is there a mechanical process (a set of instructions, which today we'd call an algorithm) that can always determine whether a given statement is true?

Say that we want to find an algorithm that can tell us whether a certain chess position is possible. Here, the axioms are the rules of chess that govern legal moves. Can we follow a finite, step-by-step sequence of procedures to arrive at that position? Though some positions may take longer than our lifetime to analyze (an algorithm might generate all possible positions and compare each of them to the input), such algorithms do exist for chess. As a result, we say that chess is "decidable."

However, in 1936, Church and Turing, using different methods, independently proved that there is no general way of solving every instance of the Entscheidungsproblem. For example, some games, such as John Conway's Game of Life, are undecidable: No algorithm can determine whether a given pattern will ever appear starting from a given initial pattern.

Turing showed that a function is computable if there exists an algorithm that can execute the desired task. At the same time, he showed that an algorithm is a process that can be defined by a Turing machine. Hence, a computable function is a function that has a Turing machine to compute it. This may seem like a circuitous way to define computability, but it's the best we've got. "It's not like you have a choice to define it some other way," said Michael Sipser, a theoretical computer scientist at the Massachusetts Institute of Technology. "I think it is commonly accepted that the Church-Turing thesis says that the informal notion of algorithm corresponds to what any 'reasonable' computational model can do." Other mathematicians have come up with different models of computation that look quite different on the surface but are actually equivalent: They can do any computation that Turing machines can do, and vice versa.

Only a few years after Kurt Gödel had proved that mathematics was incomplete, Church and Turing showed with this work that some problems in mathematics are undecidable: No algorithm, however sophisticated, can tell us whether the answer is yes or no. Both results were devastating blows to Hilbert, who had hoped mathematics would have tidy, idealized answers. But it's perhaps just as well: If a general solution to the Entscheidungsproblem existed, it would mean that all problems in mathematics could be reduced to simple mechanical calculations.

Beyond answering these fundamental questions, Turing's machine also led directly to the development of modern computers, through a variant known as the universal Turing machine. This is a special kind of Turing machine that can simulate any other Turing machine on any input. It can read a description of another Turing machine (its rules and input tape) and simulate that machine's behavior on its own tape, producing the same output the simulated machine would produce, just as today's computers can read any program and execute it. In 1945, John von Neumann proposed a computer architecture, now called the von Neumann architecture, that made the universal Turing machine concept possible in a real, physical machine.
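In code, the "universal" idea amounts to a single simulator that accepts another machine's rule table as ordinary input data. Here is a rough sketch under the same assumptions as the earlier example (the rule encoding and the simulate() helper are illustrative, not a standard); note that it caps the number of steps, because in general there is no way to tell in advance whether a given machine will ever halt.

def simulate(rules, accept, reject, tape, state, head=0, max_steps=10_000):
    # One program that can run any machine description it is given.
    tape = list(tape)
    for _ in range(max_steps):
        if state == accept:
            return "accept"
        if state == reject:
            return "reject"
        state, tape[head], move = rules[(state, tape[head])]
        head += move
    return "gave up"  # the machine may run forever; we cannot decide that here

# The zero-checker from above is now just data fed to the simulator.
zero_checker = {
    ("q0", "#"): ("q1", "#", +1),
    ("q1", "0"): ("q1", "0", +1),
    ("q1", "1"): ("q2", "1", 0),
    ("q1", "#"): ("q3", "#", 0),
}
print(simulate(zero_checker, accept="q3", reject="q2", tape="#0001#", state="q0"))  # reject

Swapping in a different rule table simulates a different machine, which is essentially what a stored-program computer does when it loads a new program.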
When Sanjeev Arora, a theoretical computer scientist at Princeton University, teaches this concept, he emphasizes a broader philosophical picture. "There's two notions of 'universal,'" he said. "One notion of the universal is that it can run any other Turing machine. But the other, bigger notion of 'universal' is that it can run any computation that you will come up with in the universe." In the world of classical physics, any physical process can be modeled or simulated using algorithms, which, in turn, can be simulated by a Turing machine.

Another notable and increasingly useful variant is the probabilistic Turing machine. Unlike a regular Turing machine, which has a well-defined reaction to every input, a probabilistic Turing machine can have multiple reactions based on probabilities. This means it can yield different results for the same input at different times. Surprisingly, this kind of probabilistic strategy can be more efficient than a purely deterministic approach for certain problems. Ideas from probabilistic Turing machines have been shown to be practically useful in areas such as cryptography, optimization and machine learning.
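To make the idea concrete, here is a toy sketch, again in Python, of a machine whose rule table allows several possible moves for the same state and symbol; the state names and probabilities are invented for illustration, and the machine is meant only to show the mechanics of choosing moves at random, not to be a useful algorithm.

import random

# Each (state, symbol) pair maps to a list of weighted moves:
# (probability, next state, symbol to write, head movement).
PROB_RULES = {
    ("q0", "#"): [(1.0, "q1", "#", +1)],
    ("q1", "0"): [(0.5, "q1", "0", +1),        # usually keep scanning...
                  (0.5, "accept", "0", 0)],    # ...but sometimes guess "zero" early
    ("q1", "1"): [(1.0, "reject", "1", 0)],
    ("q1", "#"): [(1.0, "accept", "#", 0)],
}

def run_probabilistic(tape, state="q0", head=0):
    tape = list(tape)
    while state not in ("accept", "reject"):
        moves = PROB_RULES[(state, tape[head])]
        weights = [p for p, *_ in moves]
        _, state, tape[head], move = random.choices(moves, weights=weights)[0]
        head += move
    return state

# The same input can end differently on different runs.
print(run_probabilistic("#0001#"))  # sometimes "reject", sometimes an early (wrong) "accept"

In practice, probabilistic algorithms are designed so that the chance of a wrong answer can be made arbitrarily small, for example by repeating the run.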
These abstract machines are perhaps the best evidence that asking fundamental questions can be among the most useful things a scientist can do.

The Most Important Machine That Was Never Built
2023-05-04 21:58:02