Source: https://techcrunch.com/2023/05/18/meta-built-a-code-generating-ai-model-similar-to-copilot/

Meta says it's created a generative AI tool for coding similar to GitHub's Copilot.

The company made the announcement at an event focused on its AI infrastructure efforts, including custom chips Meta's building to accelerate the training of generative AI models. The coding tool, called CodeCompose, isn't available publicly, at least not yet. But Meta says its teams use it internally to get code suggestions for Python and other languages as they type in IDEs like VS Code.

"The underlying model is built on top of public research from [Meta] that we have tuned for our internal use cases and codebases," Michael Bolin, a software engineer at Meta, said in a prerecorded video. "On the product side, we're able to integrate CodeCompose into any surface where our developers or data scientists work with code."

The largest of several CodeCompose models Meta trained has 6.7 billion parameters, a little over half the number of parameters in the model on which Copilot is based. Parameters are the parts of the model learned from historical training data, and they essentially define the skill of the model on a problem, such as generating text.

CodeCompose was fine-tuned on Meta's first-party code, including internal libraries and frameworks written in Hack, a Meta-developed programming language, so it can incorporate those into its programming suggestions.
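For context on what a parameter count measures, here is a toy sketch (purely illustrative, not CodeCompose's real architecture): every learned weight and bias in a network's layers is one parameter, and model sizes are just these counts summed across all layers.

```python
# Toy sketch: counting learnable parameters in a tiny feed-forward network.
# Purely illustrative; it does not reflect CodeCompose's actual architecture.

def linear_param_count(in_features: int, out_features: int) -> int:
    # A dense layer learns one weight per input-output pair,
    # plus one bias per output unit.
    return in_features * out_features + out_features

# A miniature two-layer network:
hidden = linear_param_count(512, 2048)   # 512*2048 + 2048 = 1,050,624
output = linear_param_count(2048, 512)   # 2048*512 + 512  = 1,049,088
total = hidden + output

print(total)  # 2099712 -- models like CodeCompose scale this same idea
              # up to billions of learned values
```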
And its base training data set was filtered of poor coding practices and errors, like deprecated APIs, to reduce the chance that the model recommends a problematic slice of code.

In practice, CodeCompose makes suggestions like annotations and import statements as a user types. The system can complete single lines of code or multiple lines, optionally filling in entire large chunks of code.

"CodeCompose can take advantage of the surrounding code to provide better suggestions," Bolin continued. "It can also use code comments as a signal in generating code."

Meta claims that thousands of employees are accepting suggestions from CodeCompose every week and that the acceptance rate is over 20%.

The company didn't address, however, the controversies around code-generating AI.

Microsoft, GitHub and OpenAI are being sued in a class action lawsuit that accuses them of violating copyright law by allowing Copilot to regurgitate sections of licensed code without providing credit. Liability aside, some legal experts have suggested that AI like Copilot could put companies at risk if they were to unwittingly incorporate copyrighted suggestions from the tool into their production software.

It's unclear whether CodeCompose, too, was trained on licensed or copyrighted code, even accidentally. When reached for comment, a Meta spokesperson had this to say:

"CodeCompose was trained on InCoder, which was released by Meta's AI research division. In a paper detailing InCoder, we note that, to train InCoder, 'We collect a corpus of (1) public code with permissive, non-copyleft, open source licenses from GitHub and GitLab and (2) StackOverflow questions, answers and comments.' The only additional training we do for CodeCompose is on Meta's internal code."

Generative coding tools can also introduce insecure code.
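Meta doesn't say how the training-data filtering mentioned above was implemented. As a purely hypothetical sketch (the pattern list and corpus below are invented for illustration and are not Meta's), dropping training samples that reference deprecated APIs might look something like this:

```python
import re

# Hypothetical illustration of scrubbing deprecated-API usage from a
# training corpus; the patterns and samples are invented, not Meta's.
DEPRECATED_PATTERNS = [
    re.compile(r"\bos\.popen2\b"),      # removed in Python 3
    re.compile(r"\basyncio\.async\b"),  # renamed to asyncio.ensure_future
]

def is_clean(sample: str) -> bool:
    # Keep a sample only if no deprecated pattern appears in it.
    return not any(p.search(sample) for p in DEPRECATED_PATTERNS)

corpus = [
    "stdin, stdout = os.popen2('ls')",     # deprecated API -> dropped
    "task = asyncio.ensure_future(coro)",  # current API -> kept
]
clean_corpus = [s for s in corpus if is_clean(s)]
print(clean_corpus)  # ['task = asyncio.ensure_future(coro)']
```

A real pipeline would presumably work on parsed code rather than regexes, but the effect is the same: problematic idioms never reach the model, so it is less likely to suggest them.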
According to a recent study out of Stanford, software engineers who use code-generating AI systems are more likely to cause security vulnerabilities in the apps they develop. While the study didn't look at CodeCompose specifically, it stands to reason that developers who use it would face the same risks.

Bolin stressed that developers needn't follow CodeCompose's suggestions and that security was a "major consideration" in creating the model. "We are extremely excited with our progress on CodeCompose to date, and we believe that our developers are best served by bringing this work in house," he added.
Meta built a code-generating AI model similar to Copilot
2023-05-18 21:58:18