Source: https://www.quantamagazine.org/alan-turing-and-the-power-of-negative-thinking-20230905/#comments

Algorithms have become ubiquitous. They optimize our commutes, process payments and coordinate the flow of internet traffic. It seems that for every problem that can be articulated in precise mathematical terms, there’s an algorithm that can solve it, at least in principle.

But that’s not the case — some seemingly simple problems can never be solved algorithmically. The pioneering computer scientist Alan Turing proved the existence of such “uncomputable” problems nearly a century ago, in the same paper where he formulated the mathematical model of computation that launched modern computer science.

Turing proved this groundbreaking result using a counterintuitive strategy: He defined a problem that simply rejects every attempt to solve it.

“I ask you what you’re doing, and then I say, ‘No, I’m going to do something different,’” said Rahul Ilango, a graduate student at the Massachusetts Institute of Technology studying theoretical computer science.

Turing’s strategy was based on a mathematical technique called diagonalization that has a distinguished history. Here’s a simplified account of the logic behind his proof.

Diagonalization stems from a clever trick for solving a mundane problem that involves strings of bits, each of which can be either 0 or 1. Given a list of such strings, all equally long, can you generate a new string that isn’t on the list?

The most straightforward strategy is to consider each possible string in turn. Suppose you have five strings, each five bits long. Start by scanning the list for 00000.
If it’s not there, you can stop; if it is, you move on to 00001 and repeat the process. This is simple enough, but it’s slow for long lists of long strings.

Diagonalization is an alternate approach that builds up a missing string bit by bit. Start with the first bit of the first string on the list and invert it — that’ll be the first bit of your new string. Then invert the second bit of the second string and use that as the second bit of the new string, and repeat until you get to the end of the list. The bits you flip ensure that the new string differs from every string on the original list in at least one place. (They also form a diagonal line through the list of strings, giving the technique its name.)

Diagonalization only needs to examine one bit from each string on the list, so it’s often much faster than other methods. But its true power lies in how well it plays with infinity.

“The strings can now be infinite; the list can be infinite — it still works,” said Ryan Williams, a theoretical computer scientist at MIT.

The first person to harness this power was Georg Cantor, the founder of the mathematical subfield of set theory. In 1873, Cantor used diagonalization to prove that some infinities are larger than others. Six decades later, Turing adapted Cantor’s version of diagonalization to the theory of computation, giving it a distinctly contrarian flavor.

Turing wanted to prove the existence of mathematical problems that no algorithm can solve — that is, problems with well-defined inputs and outputs but no foolproof procedure for getting from input to output.
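Before following Turing’s adaptation, the finite version of the diagonal trick is worth seeing spelled out. Here is a minimal Python sketch of the bit-flipping procedure described above (an illustration only; the function name is my own, not from the article):

```python
def diagonal_string(strings):
    """Return a bit string that differs from every string in the list.

    Flip the i-th bit of the i-th string: the result disagrees with
    strings[i] at position i, so it cannot equal any string on the list.
    This works whenever there are no more strings than bits per string.
    """
    return "".join("1" if s[i] == "0" else "0" for i, s in enumerate(strings))

strings = ["00000", "11111", "01010", "10101", "00110"]
missing = diagonal_string(strings)  # reads just one bit per string
assert missing not in strings
```

Brute-force search may have to compare every candidate string against the whole list; the diagonal construction inspects a single bit per string, and, as the article notes next, the same idea carries over unchanged when both the list and the strings are infinite.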
He made this vague task more manageable by focusing exclusively on decision problems, where the input can be any string of 0s and 1s and the output is either 0 or 1.

Determining whether a number is prime (divisible only by 1 and itself) is one example of a decision problem — given an input string representing a number, the correct output is 1 if the number is prime and 0 if it isn’t. Another example is checking computer programs for syntax errors (the equivalent of grammatical mistakes). There, input strings represent code for different programs — all programs can be represented this way, since that’s how they’re stored and executed on computers — and the goal is to output 1 if the code contains a syntax error and 0 if it doesn’t.

An algorithm solves a problem only if it produces the correct output for every possible input — if it fails even once, it’s not a general-purpose algorithm for that problem. Ordinarily, you’d first specify the problem you want to solve and then try to find an algorithm that solves it. Turing, in search of unsolvable problems, turned this logic on its head — he imagined an infinite list of all possible algorithms and used diagonalization to construct an obstinate problem that would thwart every algorithm on the list.

Imagine a rigged game of 20 questions, where rather than starting with a particular object in mind, the answerer invents an excuse to say no to each question.
By the end of the game, they’ve described an object defined entirely by the qualities it lacks.

Turing’s diagonalization proof is a version of this game where the questions run through the infinite list of possible algorithms, repeatedly asking, “Can this algorithm solve the problem we’d like to prove uncomputable?”

“It’s sort of ‘infinity questions,’” Williams said.

To win the game, Turing needed to craft a problem where the answer is no for every algorithm. That meant identifying a particular input that makes the first algorithm output the wrong answer, another input that makes the second one fail, and so on. He found those special inputs using a trick similar to one Kurt Gödel had recently used to prove that self-referential assertions like “this statement is unprovable” spelled trouble for the foundations of mathematics.

The key insight was that every algorithm (or program) can be represented as a string of 0s and 1s. That means, as in the example of the error-checking program, that an algorithm can take the code of another algorithm as an input. In principle, an algorithm can even take its own code as an input.

With this insight, we can define an uncomputable problem like the one in Turing’s proof: “Given an input string representing the code of an algorithm, output 1 if that algorithm outputs 0 when its own code is the input; otherwise, output 0.” Every algorithm that tries to solve this problem will produce the wrong output on at least one input — namely, the input corresponding to its own code. That means this perverse problem can’t be solved by any algorithm whatsoever.

Computer scientists weren’t yet through with diagonalization.
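The self-referential problem defined above can be made concrete in a toy model. In this Python sketch, “programs” are functions from strings to 0 or 1, looked up by name in a table; the names and the particular example solver are hypothetical, chosen only to illustrate the argument:

```python
# Toy model: a "program" is a function from strings to 0 or 1, and
# run(p, x) simulates the program whose code (here, its name) is p on input x.
PROGRAMS = {}

def run(p, x):
    return PROGRAMS[p](x)

def diagonal_problem(p):
    """Turing-style problem: output 1 if the program coded by p
    outputs 0 on its own code; otherwise output 0."""
    return 1 if run(p, p) == 0 else 0

# Suppose someone proposes a solver for this problem.
def proposed_solver(x):
    return 1 if x.startswith("0") else 0  # any fixed rule would do

PROGRAMS["solver"] = proposed_solver

# On its own code, the solver's answer is flipped by the problem's
# very definition, so it is necessarily wrong there.
assert proposed_solver("solver") != diagonal_problem("solver")
```

Whatever rule `proposed_solver` uses, the correct answer on the input `"solver"` is by construction the opposite of whatever the solver itself outputs there, so no solver in the table can be right on its own code. This finite toy captures the shape of Turing’s argument, though his proof handles the genuinely infinite list of all algorithms.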
In 1965, Juris Hartmanis and Richard Stearns adapted Turing’s argument to prove that not all computable problems are created equal — some are intrinsically harder than others. That result launched the field of computational complexity theory, which studies the difficulty of computational problems.

But complexity theory also revealed the limits of Turing’s contrary method. In 1975, Theodore Baker, John Gill and Robert Solovay proved that many open questions in complexity theory can never be resolved by diagonalization alone. Chief among these is the famous P versus NP problem, which asks whether all problems with easily checkable solutions are also easy to solve with the right ingenious algorithm.

Diagonalization’s blind spots are a direct consequence of the high level of abstraction that makes it so powerful. Turing’s proof didn’t involve any uncomputable problem that might arise in practice — instead, it concocted such a problem on the fly. Other diagonalization proofs are similarly aloof from the real world, so they can’t resolve questions where real-world details matter.

“They handle computation at a distance,” Williams said. “I imagine a guy who is dealing with viruses and accesses them through some glove box.”

The failure of diagonalization was an early indication that solving the P versus NP problem would be a long journey. But despite its limitations, diagonalization remains one of the key tools in complexity theorists’ arsenal. In 2011, Williams used it together with a raft of other techniques to prove that a certain restricted model of computation couldn’t solve some extraordinarily hard problems — a result that had eluded researchers for 25 years.
It was a far cry from resolving P versus NP, but it still represented major progress.

If you want to prove that something’s not possible, don’t underestimate the power of just saying no.
Alan Turing and the Power of Negative Thinking
2023-09-06 21:58:20

String Theory
The Limitation Game
What Negation Can’t Do