Source: https://techcrunch.com/2023/05/23/wellen-ai-chatbot/

What are AI chatbots good for? Lovers of sci-fi novels may recall the "librarian," a character in Neal Stephenson's 1992 classic Snow Crash: not a person but an AI program and virtual library capable of interacting with users in a conversational manner. The fictional concept suggested an elegant and accessible solution to the knowledge discovery problem, so long as the answer to whatever query it was asked lurked in its training data.

Fast forward to today and AI chatbots are popping up everywhere. But there's one major drawback: these general-purpose tools are failing to achieve the high level of response accuracy envisaged in science fiction. Snow Crash's version of conversational AI was almost unfailingly helpful and certainly did not routinely "hallucinate" (wrong) answers. When asked something it did not explicitly have information on, it would 'fess up to a knowledge gap rather than resort to making stuff up. So it turns out the reality of cutting-edge AI tools is a lot wonkier than some of our finest fictional predictions.

While we're still far from the strong knowledge dissemination game of the Snow Crash librarian, we are seeing custom chatbots being honed for utility in a narrower context, where they essentially function as a less tedious website search. Foundational large language models (LLMs), like OpenAI's GPT, are being customized by other businesses via the API: trained on specialist data sets for the purpose of being applied in a specific (i.e. not general-purpose) context.

And, in the best examples, these custom chatbots are instructed to keep their responses concise (no waffling, please!), as well as being mandated to show some basic workings (by including links to reference material) as a backstop against inadvertently misleading information-hungry human interlocutors (who may themselves be prone to hallucinating, or seeing what they want to see).

Wellen, a New York-based, bone health-focused fitness startup which launched earlier this year with a subscription service aimed at middle-aged women, touting science-backed "personalized" strength training programs designed to help with osteopenia and osteoporosis, has just launched one such AI chatbot built on OpenAI's LLM.

Testing the chatbot ahead of its launch today brought to mind a little of the utility of the Snow Crash librarian. (It is clearly labelled an "experiment," and before you even start interacting with it you have to acknowledge an additional disclaimer emphasizing that its output is "not medical advice.") Or, well, so long as you stay in its expertise lane of all things bone health.

Ask it questions like "can osteoporosis be reversed?" or "is jumping good for bone health?" and you'll get concise, coherent (and seemingly accurate) answers that link out to content the startup hosts on its website (written by its in-house experts) for further reading on your query. On first launch, it also helpfully offers some examples of pertinent questions you can ask to get the chatter flowing.

But if you ask it irrelevant (off-topic) stuff, like "who is the US president?" or "should I get a new haircut?", you'll get random responses which don't address what you've asked. Here it tends to serve up unrelated (but still potentially useful) tidbits of info on core topics, as if it has totally misunderstood the question and/or is trying to pattern-match a least irrelevant response from the corpus of content it's comfortable discussing. But it will still be answering something you never asked. (This can include serving unasked-for intel on how to pay for its personalized fitness programs. Which is certainly one way to deflect junk asks.)

Ask the bot dubious stuff that's nonetheless related to its area of expertise, such as medical conspiracy theories about bone health or fantastical cures for osteoporosis, and we found it capable of refuting the nonsense outright, pointing the user back to verified information debunking the junk, or both.

The bot also survived our (fairly crude) attempts to persuade it to abandon its guardrails and role-play as something else in an effort to get it to dish out unhelpful or even harmful advice. And it played a very straight bat to obviously ridiculous asks (like whether eating human bones is good for bone health), albeit its response to that was perhaps a little too dry and cautious, with the bot telling us: "There is no mention of eating human bones being good for bone health in the provided context." But, well, it's not wrong.

Early impressions of the tool are that it's extremely easy to use (and a better experience than the average underpowered site search function). It also looks likely to help Wellen's users source useful resources related to bone health, or just find something they previously read on the website and can't remember exactly where they saw it. (We managed to get it to list links to all the blog posts Wellen had written on diet and bone health, for instance.)

In this bounded context it looks like a reasonable use of generative AI, having been designed with safety mechanisms in place to guard against conversations straying off topic or swerving into other misleading pitfalls, and with strict respect for sourcing. (Note there's a cap of six free queries per day. We're assuming paying Wellen members are not capped.)

Although you do kinda wonder if using an LLM is overkill for this use case, when a simpler decision-tree chatbot might have sufficed (at least for mainstream/predictable queries).

"We are using OpenAI's API to create embeddings that produce a vector store of our content," explains CEO and founder Priya Patel. "We are leveraging a popular open-source framework called LangChain to facilitate the search and retrieval of information within our embeddings."

On training data, she says they embedded content from their Well Guide, as well as other content from the website, noting: "All of our Well Guide content is written and peer-reviewed by experts in the field, and includes references to peer-reviewed research, medical societies and governmental agencies."
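Patel's description matches a now-common retrieval pattern: embed each content snippet as a vector, store the vectors, then at query time embed the question and return the most similar snippets for the chatbot to cite. The sketch below illustrates that pattern only; it is not Wellen's code. It substitutes a toy bag-of-words embedding for OpenAI's embedding API (which returns dense float vectors) and a plain list for a real vector store, and the document snippets are invented for illustration.

```python
import math
import re
from collections import Counter

def embed(text: str) -> Counter:
    # Toy stand-in for an embedding model: a sparse bag-of-words vector.
    # A real pipeline would call an embedding API here instead.
    return Counter(re.findall(r"[a-z']+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two sparse vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# "Vector store": precompute an embedding for each content snippet.
# (Invented example snippets, standing in for Well Guide content.)
docs = [
    "Weight-bearing exercise such as jumping can help build bone density.",
    "Osteoporosis is diagnosed with a bone density scan called a DEXA scan.",
    "Calcium and vitamin D in your diet support bone health.",
]
store = [(embed(d), d) for d in docs]

def retrieve(query: str, k: int = 1) -> list[str]:
    # Embed the query and return the k most similar snippets, which a
    # chatbot would then summarize and link to in its answer.
    q = embed(query)
    ranked = sorted(store, key=lambda item: cosine(q, item[0]), reverse=True)
    return [doc for _, doc in ranked[:k]]

print(retrieve("how is osteoporosis diagnosed?"))
```

Grounding answers in the retrieved snippets, rather than in the model's general training data, is also what produces refusals like the bot's "no mention ... in the provided context" reply: if nothing similar enough is retrieved, there is nothing to answer from.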
Wellen taps OpenAI's GPT for a chatbot that dishes advice on bone health
2023-05-23 21:58:20