From The Register:
Companies that advocate for local LLMs also cite technological democracy as a driver. “AI is one of the greatest sources of leverage humanity has ever had,” says Emre Can Kartal, growth engineer and lead at Jan, a project from Menlo Research to build locally run models and the tools that manage them. “Our mission is to make sure it remains open and in people’s hands, not concentrated [among] a few tech giants.”
Cost is also a factor. AI companies selling compute power at a loss tend to rate-limit users. Anyone who pays more than $100 a month to one of the foundational model vendors only to get cut off during a marathon AI-powered coding session will understand the issue.
[Clip]
Foundational AI model-as-a-service companies charge for insights by the token, and they’re doing it at a loss. The profits will have to come eventually, whether directly from your pocket or from your data. If that prospect concerns you, you might be interested in other ways to get the benefits of AI without being beholden to a corporation.
Increasingly, people are experimenting with running those models themselves. Thanks to developments in hardware and software, it’s more realistic than you might think.
[Clip]
“I was experimenting extensively with GPT-3 (before ChatGPT), and was building programs you might call ‘agents’ today,” says Yagil Burowski, founder of LM Studio, a tool that allows users to download and run LLMs. “It was a real bummer to remember that, every time my code runs it cost money, because there was just so much to explore.”
[Clip]
Ollama, one of the most popular CLI platforms for running your own LLMs, is a developer layer built atop llama.cpp. It offers one-line installation of more than 200 pre-configured LLMs, making it easy for developers to get up and running with local generative AI.
Read the complete article at The Register (about 2,000 words).
