# Install Ollama, e.g. with curl or a package manager
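## On Linux, the install script published on ollama.com is typically run like this:
curl -fsSL https://ollama.com/install.sh | sh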
# Run and chat with the Llama 3.2 model
## Exit using C-d or /bye
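## For example (model tag from the Ollama library, pulled automatically on first run):
ollama run llama3.2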
# Runs the mistral:instruct model, pulling it first if not already present
ollama run mistral:instruct '<instruction>'
# Run a GGUF model directly from Hugging Face
ollama run hf.co/{username}/{repository}:{quantization}
# Pull a model, e.g. mistral
## See the library of models: https://ollama.com/library
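## For example:
ollama pull mistral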
# Run the Ollama server without the desktop app
OLLAMA_CONTEXT_LENGTH=8192 ollama serve
# By default, Ollama uses a context window of 2048 tokens.
# This can be overridden with the OLLAMA_CONTEXT_LENGTH environment variable;
# the example above raises the default context window to 8K (8192 tokens).
# Set up a Modelfile to pre-prompt a model
## Add parameters and a system prompt, for example: respond like you are ...
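## A minimal sketch; the base model, temperature, and system prompt below are illustrative:
cat > Modelfile <<'EOF'
FROM llama3.2
PARAMETER temperature 0.7
SYSTEM """
Respond like you are a helpful pirate.
"""
EOF
ollama create my-model -f Modelfile
ollama run my-model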