Source: https://techcrunch.com/2023/06/13/openai-intros-new-generative-text-features-while-reducing-pricing/

As the competition in the generative AI space grows fiercer, OpenAI is upgrading its text-generating models while reducing pricing.

Today, OpenAI announced the release of new versions of GPT-3.5-turbo and GPT-4, the latter being its latest text-generating AI, with a capability called function calling. As OpenAI explains in a blog post, function calling allows developers to describe programming functions to GPT-3.5-turbo and GPT-4 and have the models create code to execute those functions.

For example, function calling can help to create chatbots that answer questions by calling external tools, convert natural language into database queries and extract structured data from text. “These models have been fine-tuned to both detect when a function needs to be called … and to respond with JSON that adheres to the function signature,” OpenAI writes. “Function calling allows developers to more reliably get structured data back from the model.”

Beyond function calling, OpenAI is introducing a flavor of GPT-3.5-turbo with a greatly expanded context window. The context window, measured in tokens, or raw bits of text, refers to the text the model considers before generating any additional text. Models with small context windows tend to “forget” the content of even very recent conversations, leading them to veer off topic, often in problematic ways.

The new GPT-3.5-turbo offers four times the context length (16,000 tokens) of the vanilla GPT-3.5-turbo at twice the price: $0.003 per 1,000 input tokens (i.e. tokens fed into the model) and $0.004 per 1,000 output tokens (tokens the model generates). OpenAI says that it can ingest around 20 pages of text in a single go, notably short of the hundreds of pages that AI startup Anthropic’s flagship model can process. (OpenAI is testing a version of GPT-4 with a 32,000-token context window, but only in limited release.)

On the plus side, OpenAI says that it’s reducing pricing for GPT-3.5-turbo (the original, not the version with the expanded context window) by 25%. Developers can now use the model for $0.0015 per 1,000 input tokens and $0.002 per 1,000 output tokens, which equates to roughly 700 pages per dollar.

Pricing is also being reduced for text-embedding-ada-002, one of OpenAI’s more popular text embedding models. Text embeddings measure the relatedness of text strings and are commonly used for search (where results are ranked by relevance to a query string) and recommendations (where items with related text strings are recommended).

Text-embedding-ada-002 now costs $0.0001 per 1,000 tokens, a 75% reduction from the previous price. OpenAI says the reduction was made possible by increased efficiency in its systems, a key area of focus for the startup, no doubt, as it spends hundreds of millions of dollars on R&D and infrastructure.
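To make the function-calling flow described earlier concrete, the snippet below is a minimal sketch against the openai Python library as it worked around this announcement (the ChatCompletion interface). The get_current_weather function, its JSON schema, and the gpt-3.5-turbo-0613 snapshot name are illustrative assumptions rather than details from the article; the model never runs code itself, it only returns JSON arguments that match the declared signature.

```python
import json
import openai  # assumes OPENAI_API_KEY is set in the environment

# Describe a hypothetical function the model is allowed to "call".
functions = [{
    "name": "get_current_weather",
    "description": "Get the current weather for a city",
    "parameters": {
        "type": "object",
        "properties": {
            "location": {"type": "string", "description": "City name, e.g. Boston"},
            "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]},
        },
        "required": ["location"],
    },
}]

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo-0613",
    messages=[{"role": "user", "content": "What's the weather in Boston?"}],
    functions=functions,
    function_call="auto",  # let the model decide whether a call is needed
)

message = response["choices"][0]["message"]
if message.get("function_call"):
    # The reply is structured JSON matching the schema above, not free text.
    args = json.loads(message["function_call"]["arguments"])
    print(message["function_call"]["name"], args)
```

In a full loop, the application would execute the real function and send its result back to the model in a follow-up message so it can compose a final answer for the user.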
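The per-token prices quoted above turn into request costs with simple arithmetic. The helper below is only an illustrative sketch of that math; the gpt-3.5-turbo-16k name for the expanded-context model is an assumption here, and any pages-per-dollar figure depends on how many tokens a page contains.

```python
# USD per 1,000 tokens, as quoted in the article.
PRICES = {
    "gpt-3.5-turbo":     {"input": 0.0015, "output": 0.002},  # after the 25% cut
    "gpt-3.5-turbo-16k": {"input": 0.003,  "output": 0.004},  # expanded context window
}

def estimate_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Rough cost of one request, in dollars."""
    p = PRICES[model]
    return input_tokens / 1000 * p["input"] + output_tokens / 1000 * p["output"]

# Example: a 3,000-token prompt with a 500-token reply on the cheaper model
# costs estimate_cost("gpt-3.5-turbo", 3000, 500) == 0.0055, about half a cent.
# At $0.0015 per 1,000 input tokens, one dollar buys roughly 666,000 tokens of prompt text.
```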
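As a sketch of the search and recommendation use cases described above, the snippet below embeds two strings with text-embedding-ada-002 and scores their relatedness with cosine similarity. The example strings are placeholders, and the call uses the Embedding interface from the same era of the openai library.

```python
import math
import openai  # assumes OPENAI_API_KEY is set in the environment

texts = ["How do I reset my password?", "Steps to recover a forgotten password"]

resp = openai.Embedding.create(model="text-embedding-ada-002", input=texts)
vec_a, vec_b = (item["embedding"] for item in resp["data"])

# Cosine similarity: closer to 1.0 means the strings are more closely related.
dot = sum(x * y for x, y in zip(vec_a, vec_b))
norm = math.sqrt(sum(x * x for x in vec_a)) * math.sqrt(sum(y * y for y in vec_b))
print(dot / norm)
```

In a search setting, the query’s embedding would be compared against precomputed embeddings for every document and results ranked by this score.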
OpenAI has signaled that incremental updates to existing models, not massive new from-scratch models, are its MO following the release of GPT-4 in early March. At a recent conference hosted by Economic Times, CEO Sam Altman reaffirmed that OpenAI hasn’t begun training the successor to GPT-4, indicating that the company “has a lot of work to do” before it starts that model.
OpenAI intros new generative text features while reducing pricing
2023-06-13 21:34:58