


In context: Large language models are dominating the news cycle with no signs of slowing. Everybody wants in on the ground floor of the technology, so there is currently a gold rush to release the next great AI chatbot. Unfortunately, models like ChatGPT are prohibitively expensive to build and train. Smaller models are much cheaper but seem more inclined to devolve into a mess akin to Microsoft's Tay from 2016.

Last week, Stanford University researchers released their version of a chatbot based on Meta's LLaMA AI called "Alpaca" but quickly took it offline after it started having "hallucinations." Some in the large language model (LLM) industry have decided that hallucination is a good euphemism for when an AI spouts false information as if it were factual.

"The original goal of releasing a demo was to disseminate our research in an accessible way," a spokesperson for Stanford University's Human-Centered Artificial Intelligence Institute told The Register. "We feel that we have mostly achieved this goal, and given the hosting costs and the inadequacies of our content filters, we decided to bring down the demo." The university added that rising hosting costs and safety concerns were also factors in its removal.
