
Artificial Intelligence (AI) Models

| Model Name | Use Cases | Strengths | Notes |
| --- | --- | --- | --- |
| Dolphin-mistral | General | Uncensored; use when other models refuse answers | 1 |
| Codebooga | Programming | Python and JavaScript | 2 |
| CodeGeeX | Programming | Cross-language translation, plugins for many IDEs | 3 |
| Codeqwen | Programming | | 4 |
| DolphinMixtral | Programming | Uncensored | 5 |
| Deepseek-R1 | General | Reasoning | |
| Gemma2 | General | 2B good for low hardware | 6 |
| Gemma3 | General, RAG | Low hardware/1 GPU, long context, multilingual, multimodal | |
| Llama3 | General | | 7 |
| Medllama2 | Medical | Medical questions, trained on open-source medical data | 8 |
| Meditron | Medical | Medical questions, diagnosis, information | |
| Mistral | General, Programming | 7B OK for low hardware | 9 |
| Moondream | Vision | Small enough for edge devices | |
| Nemotron-mini | Role-play, RAG, Function calling | 4B for low hardware | 10 |
| Phi3 | General, RAG | Low hardware, long context | |
| Phi4-mini | General, RAG | Low hardware, multilingual, long context | |
| Qwen2.5-7B-Instruct-1M | General | Long-context tasks (1 million token context window) | |
| Qwen2.5 | General | 3B for low hardware | 11 |
| Qwen2 | General, Programming | Chat; small to large models | 12 |
| StarCoder | Programming | Trained on 80+ languages; small to large models | 13 |
| WizardCoder | Programming | | 14 |
| Zephyr | Assistant | Fine-tuned from Mistral and Mixtral to act as a help assistant | 15 |

Multimodal means the model can handle both text and images.
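
Models like these are typically run locally through Ollama (the source of the RAM guidance below) and queried over its REST API, which listens on http://localhost:11434 by default. Below is a minimal Python sketch, assuming Ollama is running and the chosen model (llama3 here, as one example from the table) has already been pulled with `ollama pull llama3`.

```python
# Minimal sketch: query a locally pulled Ollama model via its REST API.
# Assumes Ollama is running on the default port (11434) and the model
# has already been pulled, e.g. with `ollama pull llama3`.
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # default Ollama endpoint


def ask(model: str, prompt: str) -> str:
    """Send a single prompt to a local model and return its full response."""
    payload = json.dumps({
        "model": model,
        "prompt": prompt,
        "stream": False,  # return one JSON object instead of a token stream
    }).encode("utf-8")
    request = urllib.request.Request(
        OLLAMA_URL, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(request) as response:
        return json.loads(response.read())["response"]


if __name__ == "__main__":
    print(ask("llama3", "Summarize what a context window is in one sentence."))
```

The same call works for any model name from the table, provided that model has been pulled first and fits in the RAM listed below.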

| Model Size | RAM |
| --- | --- |
| 7B | 8 GB |
| 13B | 16 GB |
| 33B | 32 GB |

From the Ollama README guidance. 16
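
As a rough rule of thumb implied by the table, the default quantized builds need a little over 1 GB of RAM per billion parameters. The short Python sketch below encodes that check; the helper names and the 1.2 GB-per-billion fallback ratio are illustrative assumptions derived from the table, not official Ollama figures.

```python
# Rough sketch: check whether a model size from the table above should fit
# in the machine's RAM. The ~1.2 GB-per-billion-parameters fallback is an
# approximation derived from the 7B/8GB, 13B/16GB, 33B/32GB guidance, not
# an exact Ollama requirement.
GUIDANCE = {7: 8, 13: 16, 33: 32}  # billions of parameters -> recommended GB of RAM


def recommended_ram_gb(billions_of_params: float) -> float:
    """Return the recommended RAM for a model size, per the README guidance."""
    if billions_of_params in GUIDANCE:
        return GUIDANCE[billions_of_params]
    # Fall back to a linear estimate of ~1.2 GB per billion parameters.
    return round(billions_of_params * 1.2, 1)


def fits(billions_of_params: float, available_ram_gb: float) -> bool:
    """True if the available RAM meets the recommended figure."""
    return available_ram_gb >= recommended_ram_gb(billions_of_params)


if __name__ == "__main__":
    for size in (2, 7, 13, 33):
        print(f"{size}B model -> ~{recommended_ram_gb(size)} GB RAM recommended")
    print("13B fits in 16 GB:", fits(13, 16))
```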

  1. Local LLMs on Linux with Ollama

  2. Coding LLMs Copilot Alternatives

  3. Coding LLMs Copilot Alternatives

  4. Local LLMs on Linux with Ollama

  5. Coding LLMs Copilot Alternatives

  6. I Ran 9 Popular LLMs on Raspberry Pi 5; Here’s What I Found

  7. Local LLMs on Linux with Ollama

  8. 5 easy ways to run an LLM locally

  9. Coding LLMs Copilot Alternatives

  10. I Ran 9 Popular LLMs on Raspberry Pi 5; Here’s What I Found

  11. I Ran 9 Popular LLMs on Raspberry Pi 5; Here’s What I Found

  12. Tabby ML Windows Install

  13. Tabby ML Windows Install

  14. Coding LLMs Copilot Alternatives

  15. 5 easy ways to run an LLM locally

  16. Ollama README Guidance on models and RAM