Source: https://www.quantamagazine.org/scientists-find-optimal-balance-of-data-storage-and-time-20240208/

The team built their hash table in two parts. They had a primary data structure, in which the items are stored without any wasted bits at all, and a secondary data structure, which helps a query request find the item it's looking for. While the group did not invent the notion of a secondary data structure, they did make a crucial discovery that made their hyperefficient hash table possible: The structure's overall memory efficiency depends on how the primary structure arranges its stored items.

The basic idea is that every item in the primary structure has preferred storage locations: a best location, a second-best one, a third best and so on. If an item is in its best spot, the number 1 is affixed to it, and that number is stored in the secondary data structure. In response to a query, the secondary structure provides just the number 1, which spells out the item's exact location in the primary structure.

If the item is in its 100th-best spot, the secondary data structure attaches the number 100. And because the system uses binary, it represents the number 100 as 1100100. It takes more memory, of course, to store the number 1100100 than 1, the number assigned to an item when it's in the best spot.
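The two-part scheme can be sketched in miniature. This is an illustrative toy, not the paper's actual construction: the name `preferred_slot`, the use of SHA-256 to generate the preference order, and the tiny capacity are all assumptions for the demo. It shows the division of labor, with items living only in the primary array while the secondary structure records just a rank, and why a rank of 100 costs seven bits while a rank of 1 costs one.

```python
import hashlib

CAPACITY = 16

def preferred_slot(item: str, rank: int) -> int:
    """The item's rank-th preferred location in the primary array
    (rank 1 = best spot, rank 2 = second best, and so on).
    Hypothetical helper; SHA-256 just gives a repeatable ordering."""
    digest = hashlib.sha256(f"{item}:{rank}".encode()).digest()
    return int.from_bytes(digest[:4], "big") % CAPACITY

primary = [None] * CAPACITY   # the items themselves, nothing else
secondary = {}                # item -> rank of the slot it occupies

def insert(item: str) -> None:
    rank = 1
    while primary[preferred_slot(item, rank)] is not None:
        rank += 1             # best spot taken: try the next-best one
    primary[preferred_slot(item, rank)] = item
    secondary[item] = rank    # a small rank needs only a few bits

def lookup(item: str):
    """A query reads only the rank, which pins down the exact location."""
    rank = secondary.get(item)
    return None if rank is None else primary[preferred_slot(item, rank)]

insert("apple")
print(lookup("apple"))                      # the item comes straight back
print(bin(100)[2:])                         # rank 100 in binary: 1100100
print(len(bin(100)[2:]), len(bin(1)[2:]))   # 7 bits versus 1 bit
```

In this toy, the rank an item ends up with is whatever the insertion order dictates; the article's point is that keeping those ranks small is what keeps the secondary structure small.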
Differences like that become significant if you're storing, say, a million items.

So the team realized that if you continually shift items in the primary data structure into their more preferred locations, you could significantly reduce the memory consumed by the secondary structure without having to increase query times.

"Before this work, no one had realized you could further compress the data structure by moving information around," Pagh said. "That was the big insight of the Bender paper."

The authors showed that their invention established a new upper bound for the most efficient hash tables, meaning that it was the best data structure yet devised in terms of both time and space efficiency. But the possibility remained that someone else might do even better.

The next year, a team led by Huacheng Yu, a computer scientist at Princeton University, tried to improve the Bender team's hash table. "We worked really hard and couldn't do it," said Renfei Zhou, a student at Tsinghua University in Beijing and a member of Yu's team. "That's when we suspected that their upper bound was [also] a lower bound," that is, the best that can possibly be achieved. "When the upper bound equals the lower bound, the game is over, and you have your answer." No matter how clever you are, no hash table can do any better.

Yu's team employed a novel strategy to find out if that hunch was correct: calculating a lower bound from first principles. First, they reasoned that to perform an insertion or a deletion, a hash table, or really any data structure, must access the computer's memory some number of times.
If they could figure out the minimum number of times needed for a space-efficient hash table, they could multiply that by the time required per access (a constant), giving them a lower bound on the runtime.

But if they didn't know anything about the hash table (except that it was space-efficient), how could the researchers figure out the minimum number of times required to access the memory? They derived it purely from theory, using a seemingly unrelated field called the theory of communication complexity, which studies how many bits are required to convey information between two parties. Eventually, the team succeeded: They figured out how many times a data structure must access its memory per operation.
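The arithmetic of that argument is simple to state: once you know the minimum number of memory accesses any space-efficient hash table must make per operation, multiplying by the constant cost of one access gives the runtime lower bound. A schematic sketch, with placeholder numbers that are not from the paper:

```python
def runtime_lower_bound(min_accesses: int, access_cost: float) -> float:
    """If every operation must touch memory at least `min_accesses` times,
    and each access costs `access_cost`, no implementation can run faster
    than this per operation."""
    return min_accesses * access_cost

# Purely illustrative values; the real bound comes from the
# communication-complexity argument described above.
print(runtime_lower_bound(3, 1.0))  # -> 3.0
```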
Scientists Find Optimal Balance of Data Storage and Time
2024-02-12 21:58:53

Bound to Succeed