Amazingly easy to run LLMs with Ollama

There is a new tool called Ollama that makes it really easy to try out different LLMs on your local machine. Here are several YouTube videos about it:

- "Ollama - Local Models on your machine" by Sam Witteveen
- "Running Mistral AI on your machine with Ollama" by Learn Data with Mark
- "Ollama: The Easiest Way to RUN LLMs Locally" by Prompt Engineering
- "Ollama on Linux: Easily Install Any LLM on Your Server" by Ian Wootten

It is so easy to run models with Ollama! To install Ollama on our Windows machine, we opened a Linux terminal (under WSL, since the install script targets Linux) and entered:

    curl https://ollama.ai/install.sh | sh

Then to run a model we can do things like:

    ollama run llama2:70b

At https://ollama.ai/library you can see what models are available. After you click on a model, you can click "tags" to see all the different versions, then click to copy the command to run that version. I look for the largest one that fits in my 48 GB GPU.

At https://ollama.ai/library/llama2/tags there are different model tags for each size (7b, 13b, 70b) and quantization level.
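As a rough back-of-the-envelope check before downloading: the library's default tags are typically 4-bit quantized, which works out to about 0.5 bytes per parameter. A 70B model is then roughly 70 × 10^9 × 0.5 bytes ≈ 35 GB, so it fits in 48 GB of GPU memory with some room left over for the context cache.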
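Ollama also has subcommands for managing the models you have downloaded. A quick sketch of that workflow (the model names here are just examples):

    # Download a model without starting a chat session
    ollama pull mistral

    # Show which models are on disk and how large they are
    ollama list

    # Delete a model you no longer need
    ollama rm llama2:70b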
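Under the hood, "ollama run" talks to a local server that you can also call directly over HTTP. A minimal sketch with curl, assuming the server is running on its default port 11434 and that you have already pulled the model:

    # Send a prompt to the local Ollama server; the response streams
    # back as a series of JSON objects.
    curl http://localhost:11434/api/generate -d '{
      "model": "llama2:70b",
      "prompt": "Why is the sky blue?"
    }'

This makes it easy to script against a local model instead of typing into the interactive prompt.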