Source: https://techcrunch.com/2023/06/07/contextual-ai-launches-from-stealth-to-build-enterprise-focused-language-models/

Large language models (LLMs) like OpenAI's GPT-4 are powerful, paradigm-shifting tools that promise to upend industries. But they suffer from limitations that make them less attractive to enterprise organizations with strict compliance and governance requirements. For example, LLMs have a tendency to make up information with high confidence, and they're architected in a way that makes it difficult to remove, or even revise, their knowledge base.

To solve for these and other roadblocks, Douwe Kiela co-founded Contextual AI, which today launched out of stealth with $20 million in seed funding. Backed by investors including Bain Capital Ventures (which led the seed), Lightspeed, Greycroft and SV Angel, Contextual AI ambitiously aims to build the "next generation" of LLMs for the enterprise.

"We created the company to address the needs of enterprises in the burgeoning area of generative AI, which has thus far largely focused on consumers," Kiela told TechCrunch via email. "Contextual AI is solving for several obstacles that exist today in getting enterprises to adopt generative AI."

Kiela and Contextual AI's other co-founder, Amanpreet Singh, worked together at the AI startup Hugging Face and at Meta before deciding to strike out on their own in early February. While at Meta, Kiela led research into a technique called retrieval augmented generation (RAG), which forms the basis of Contextual AI's text-generating AI technology.

So what's RAG?
In a nutshell, RAG, which Google's DeepMind R&D division has also explored, augments LLMs with external sources, like files and webpages, to improve their performance. Given a prompt (e.g. "Who's the president of the U.S.?"), RAG looks for data within the sources that might be relevant. Then, it packages the results with the original prompt and feeds them to an LLM, generating a "context-aware" response (e.g. "The current president is Joe Biden, according to the official White House website").

By contrast, in response to a question like "What's Nepal's GDP by year?", a typical LLM (e.g. ChatGPT) might only return the GDP up to a certain date and fail to cite the source of the information.

Kiela asserts that RAG can solve the other outstanding issues with today's LLMs, like those around attribution and customization. With conventional LLMs, it can be tough to know why the models respond the way they do, and adding data sources to LLMs often requires retraining or fine-tuning, steps that RAG (usually) avoids.

"RAG language models can be smaller than equivalent language models and still achieve the same performance. This makes them a lot faster, meaning lower latency and lower cost," Kiela said. "Our solution addresses the shortcomings and inherited issues of existing approaches. We believe that integrating and jointly optimizing different modules for data integration, reasoning, speech and even seeing and listening will unlock the true potential of language models for enterprise use cases."

My colleague Ron Miller has mused about how generative AI's future in the enterprise could be smaller, more focused language models. I don't dispute that.
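The retrieve-then-generate loop described above can be sketched in a few lines. This is a toy illustration only: the keyword-overlap retriever and the sample sources are stand-ins (real RAG systems use learned embeddings and a vector index), and the final LLM call is left out:

```python
# Minimal RAG sketch: score external sources against the prompt,
# keep the best matches, and prepend them as context for an LLM.

def retrieve(prompt, sources, k=2):
    """Rank sources by word overlap with the prompt (toy retriever)."""
    words = set(prompt.lower().split())
    return sorted(
        sources,
        key=lambda s: len(words & set(s.lower().split())),
        reverse=True,
    )[:k]

def build_rag_prompt(prompt, sources):
    """Package retrieved passages with the original prompt."""
    context = retrieve(prompt, sources)
    bullets = "\n".join(f"- {c}" for c in context)
    return f"Context:\n{bullets}\n\nQuestion: {prompt}"

# Illustrative external sources (files, webpages, etc.).
sources = [
    "The official White House website lists Joe Biden as the current president of the U.S.",
    "Nepal's GDP was roughly $40 billion in 2022, per World Bank data.",
    "Hugging Face hosts open-source machine learning models.",
]

rag_prompt = build_rag_prompt("Who's the president of the U.S.?", sources)
# rag_prompt now carries the White House passage, so the model can
# produce an attributable, context-aware answer instead of guessing.
```

Because the knowledge lives in the retrieved sources rather than the model weights, updating or removing information means editing the source store, not retraining the model, which is the customization advantage Kiela describes.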
But perhaps instead of exclusively fine-tuned, enterprise-focused LLMs, it'll be a combination of "smaller" models and existing LLMs augmented with troves of company-specific documents.

Contextual AI isn't the first to explore this idea. OpenAI and its close partner, Microsoft, recently launched a plug-ins framework that allows third parties to add sources of information to LLMs like GPT-4. Other startups, like LlamaIndex, are experimenting with ways to inject personal or private data, including enterprise data, into LLMs.

But Contextual AI claims to have inroads in the enterprise. While the company is pre-revenue at present, Kiela claims that Contextual AI is in talks with Fortune 500 companies to pilot its technology.

"Enterprises need to be certain that the answers they're getting from generative AI are accurate, reliable and traceable," Kiela said. "Contextual AI will make it easy for employers and their valuable knowledge workers to gain the efficiency benefits that generative AI can provide, while doing so safely and accurately … Several generative AI companies have stated they will pursue the enterprise market, but Contextual AI will take a different approach by building a much more integrated solution geared specifically for enterprise use cases."

Contextual AI, which has around eight employees, plans to spend the bulk of its seed funding on product development, including investing in a compute cluster to train LLMs. The company plans to grow its workforce to close to 20 people by the end of 2023.
Contextual AI launches from stealth to build enterprise-focused language models
2023-06-07 21:56:43