A New Approach to Computation Reimagines Artificial Intelligence
2023-04-17 21:58:04
Source: https://www.quantamagazine.org/a-new-approach-to-computation-reimagines-artificial-intelligence-20230413/#comments

action or later. Please see Debugging in WordPress for more information. (This message was added in version 6.7.0.) in /home4/scienrds/scienceandnerds/wp-includes/functions.php on line 6114Source:https:\/\/www.quantamagazine.org\/a-new-approach-to-computation-reimagines-artificial-intelligence-20230413\/#comments<\/a><\/br> The paper built upon work done in the mid-1990s by Kanerva and Tony Plate, at the time a doctoral student with Geoff Hinton at the University of Toronto. The two independently developed the algebra for manipulating hypervectors and hinted at its usefulness for high-dimensional computing.<\/p>\n Given our hypervectors for shapes and colors, the system developed by Kanerva and Plate shows us how to manipulate them using certain mathematical operations. Those actions correspond to ways of symbolically manipulating concepts.<\/p>\n The first operation is multiplication. This is a way of combining ideas. For example, multiplying the vector SHAPE with the vector CIRCLE binds the two into a representation of the idea \u201cSHAPE is CIRCLE.\u201d This new \u201cbound\u201d vector is nearly orthogonal to both SHAPE and CIRCLE. And the individual components are recoverable \u2014 an important feature if you want to extract information from bound vectors. Given a bound vector that represents your Volkswagen, you can unbind and retrieve the vector for its color: PURPLE.<\/p>\n The second operation, addition, creates a new vector that represents what\u2019s called a superposition of concepts. For example, you can take two bound vectors, \u201cSHAPE is CIRCLE\u201d and \u201cCOLOR is RED,\u201d and add them together to create a vector that represents a circular shape that is red in color. Again, the superposed vector can be decomposed into its constituents.<\/p>\n The third operation is permutation; it involves rearranging the individual elements of the vectors. For example, if you have a three-dimensional vector with values labeled x<\/em>, y<\/em> and z<\/em>, permutation might move the value of x<\/em> to y<\/em>, y<\/em> to z<\/em>, and z<\/em> to x<\/em>. \u201cPermutation allows you to build structure,\u201d Kanerva said. \u201cIt allows you to deal with sequences, things that happen one after another.\u201d Consider two events, represented by the hypervectors A and B. We can superpose them into one vector, but that would destroy information about the order of events. Combining addition with permutation preserves the order; the events can be retrieved in order by reversing the operations.<\/p>\n Together, these three operations proved enough to create a formal algebra of hypervectors that allowed for symbolic reasoning. But many researchers were slow to grasp the potential of hyperdimensional computing, including Olshausen. \u201cIt just didn\u2019t sink in,\u201d he said.<\/p>\n In 2015, a student of Olshausen\u2019s named Eric Weiss demonstrated one aspect of hyperdimensional computing\u2019s unique abilities. Weiss figured out how to represent a complex image as a single hyperdimensional vector that contains information about all the objects in the image, including their properties, such as colors, positions and sizes.<\/p>\n \u201cI practically fell out of my chair,\u201d Olshausen said. 
Soon more teams began developing hyperdimensional algorithms to replicate simple tasks that deep neural networks had begun tackling about two decades before, such as classifying images.

Consider an annotated data set that consists of images of handwritten digits. An algorithm analyzes the features of each image using some predetermined scheme. It then creates a hypervector for each image. Next, the algorithm adds the hypervectors for all images of zero to create a hypervector for the idea of zero. It then does the same for all digits, creating 10 “class” hypervectors, one for each digit.

Now the algorithm is given an unlabeled image. It creates a hypervector for this new image, then compares the hypervector against the stored class hypervectors. This comparison determines the digit that the new image is most similar to.
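As a concrete toy version of the scheme just described: in the sketch below, the article’s unspecified “predetermined scheme” is stood in for by a fixed random projection followed by a sign function — that encoder is an assumption for illustration only, not the method of any particular study. The steps that matter are the ones in the text: add up the hypervectors of all training images of each digit to form ten class hypervectors, then label a new image by the class hypervector it most resembles.

```python
import numpy as np

rng = np.random.default_rng(1)
D = 10_000          # hypervector dimensionality
N_PIXELS = 28 * 28  # flattened image size (MNIST-like), assumed for this sketch

# Fixed random projection standing in for the article's feature-analysis scheme.
PROJECTION = rng.standard_normal((D, N_PIXELS))

def encode(image):
    """Map a flattened image (length N_PIXELS) to a bipolar hypervector."""
    return np.sign(PROJECTION @ image)

def train(images, labels):
    """Bundle (add) the hypervectors of all training images of each digit 0-9."""
    classes = {digit: np.zeros(D) for digit in range(10)}
    for image, label in zip(images, labels):
        classes[label] += encode(image)
    return classes

def classify(image, classes):
    """Return the digit whose class hypervector is most similar to the query image."""
    query = encode(image)
    def cosine(digit):
        vec = classes[digit]
        return float(vec @ query) / (np.linalg.norm(vec) * np.linalg.norm(query) + 1e-12)
    return max(classes, key=cosine)
```

Cosine similarity is used for the comparison so that digits with more training examples, and therefore longer class vectors, do not get an unfair advantage.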
Yet this is just the beginning. The strengths of hyperdimensional computing lie in the ability to compose and decompose hypervectors for reasoning. The latest demonstration of this came in March, when Abbas Rahimi and colleagues at IBM Research in Zurich used hyperdimensional computing with neural networks to solve a classic problem in abstract visual reasoning — a significant challenge for typical ANNs, and even some humans. Known as Raven’s progressive matrices, the problem presents images of geometric objects in, say, a 3-by-3 grid. One position in the grid is blank. The subject must choose, from a set of candidate images, the image that best fits the blank.

“We said, ‘This is really … the killer example for visual abstract reasoning, let’s jump in,’” Rahimi said.

To solve the problem using hyperdimensional computing, the team first created a dictionary of hypervectors to represent the objects in each image; each hypervector in the dictionary represents an object and some combination of its attributes. The team then trained a neural network to examine an image and generate a bipolar hypervector — an element can be +1 or −1 — that’s as close as possible to some superposition of hypervectors in the dictionary; the generated hypervector thus contains information about all the objects and their attributes in the image. “You guide the neural network to a meaningful conceptual space,” Rahimi said.

Once the network has generated hypervectors for each of the context images and for each candidate for the blank slot, another algorithm analyzes the hypervectors to create probability distributions for the number of objects in each image, their size, and other characteristics. These probability distributions, which speak to the likely characteristics of both the context and candidate images, can be transformed into hypervectors, allowing the use of algebra to predict the most likely candidate image to fill the vacant slot.

Their approach was nearly 88% accurate on one set of problems, whereas neural network–only solutions were less than 61% accurate. The team also showed that, for 3-by-3 grids, their system was almost 250 times faster than a traditional method that uses rules of symbolic logic to reason, since that method must search through an enormous rulebook to determine the correct next step.

Not only does hyperdimensional computing give us the power to solve problems symbolically, it also addresses some niggling issues of traditional computing.

The performance of today’s computers degrades rapidly if errors caused by, say, a random bit flip (a 0 becomes 1 or vice versa) cannot be corrected by built-in error-correcting mechanisms. Moreover, these error-correcting mechanisms can impose a penalty on performance of up to 25%, said Xun Jiao, a computer scientist at Villanova University.

Hyperdimensional computing tolerates errors better, because even if a hypervector suffers significant numbers of random bit flips, it is still close to the original vector. This implies that any reasoning using these vectors is not meaningfully impacted in the face of errors. Jiao’s team has shown that these systems are at least 10 times more tolerant of hardware faults than traditional ANNs, which themselves are orders of magnitude more resilient than traditional computing architectures. “We can leverage all [that] resilience to design some efficient hardware,” Jiao said.

Another advantage of hyperdimensional computing is transparency: The algebra clearly tells you why the system chose the answer it did. The same is not true for traditional neural networks. Olshausen, Rahimi and others are developing hybrid systems in which neural networks map things in the physical world to hypervectors, and then hyperdimensional algebra takes over. “Things like analogical reasoning just fall in your lap,” Olshausen said. “This is what we should expect of any AI system. We should be able to understand it just like we understand an airplane or a television set.”

All of these benefits over traditional computing suggest that hyperdimensional computing is well suited for a new generation of extremely sturdy, low-power hardware. It’s also compatible with “in-memory computing systems,” which perform the computing on the same hardware that stores data (unlike existing von Neumann computers that inefficiently shuttle data between memory and the central processing unit). Some of these new devices can be analog, operating at very low voltages, making them energy-efficient but also prone to random noise. For von Neumann computing, this randomness is “the wall that you can’t go beyond,” Olshausen said. But with hyperdimensional computing, “you can just punch through it.”

Despite such advantages, hyperdimensional computing is still in its infancy. “There’s real potential here,” Fermüller said. But she points out that it still needs to be tested against real-world problems and at bigger scales, closer to the size of modern neural networks.

“For problems at scale, this needs very efficient hardware,” Rahimi said. “For example, how [do you] efficiently search over 1 billion items?”

All of this should come with time, Kanerva said. “There are other secrets [that] high-dimensional spaces hold,” he said. “I see this as the very beginning of time for computing with vectors.”
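To close, here is a small check of the fault tolerance discussed above: flip a tenth of the entries of a bipolar hypervector at random and see how similar the corrupted copy stays to the original. The 10% corruption rate and the resulting similarities are illustrative only, not figures from Jiao’s work.

```python
import numpy as np

rng = np.random.default_rng(2)
D = 10_000

original = rng.choice([-1, 1], size=D)

# Simulate random hardware faults by flipping 10% of the entries.
corrupted = original.copy()
flipped = rng.choice(D, size=D // 10, replace=False)
corrupted[flipped] *= -1

unrelated = rng.choice([-1, 1], size=D)

cosine = lambda a, b: float(a @ b) / D
print(cosine(original, corrupted))  # about 0.8: still unmistakably the same vector
print(cosine(original, unrelated))  # about 0.0: unrelated vectors are near-orthogonal
```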