Artificial Intelligence (AI) Models
Model and Use Cases
Model Name | Use Cases | Strengths | Notes |
---|---|---|---|
Dolphin-mistral | General | Uncensored, use when other models refuse answers | 1 |
Codebooga | Programming | Python and JavaScript | 2 |
CodeGeeX | Programming | Cross language translation, plugins for many IDEs | 3 |
Codeqwen | Programming | | 4 |
DolphinMixtral | Programming | Uncensored | 5 |
Deepseek-R1 | General | Reasoning | |
Gemma2 | General | 2B good for low hardware | 6 |
Gemma3 | General, RAG | low hardware/1 GPU, context length, multilingual, multimodal | |
Llama3 | General | | 7 |
Medllama2 | Medical | Medical questions, trained on open source medical data | 8 |
Meditron | Medical | Medical questions, diagnosis, information | |
Mistral | General, Programming | 7B ok for low hardware | 9 |
Moondream | Vision | Small for edge devices | |
Nemotron-mini | Role-play, RAG, Function | 4B for low hardware | 10 |
Phi3 | General, RAG | low hardware, context length | |
Phi4-mini | General, RAG | low hardware, multilingual, context length | |
Qwen2.5-7B-Instruct-1M | General | Long-context tasks (1 million token context window) | |
Qwen2.5 | General | 3B for low hardware | 11 |
Qwen2 | General, Programming | Chat, Small to Large models | 12 |
StarCoder | Programming | Trained on 80+ languages, Small to large models | 13 |
WizardCoder | Programming | | 14 |
Zephyr | Assistant | Fine-tuned versions of Mistral and Mixtral for assistant use | 15 |
Multimodal means the model can process both text and images.
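The models above can be served locally with Ollama and queried over its HTTP API. Below is a minimal sketch assuming Ollama's default endpoint (`http://localhost:11434`) and that the chosen model (here `gemma2:2b`, used purely as an example) has already been pulled with `ollama pull`:

```python
import json
import urllib.request

# Default URL for a locally running Ollama server (an assumption;
# adjust if your server listens elsewhere).
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_request(model: str, prompt: str) -> bytes:
    """Build the JSON body for Ollama's /api/generate endpoint.

    stream=False asks for a single JSON response instead of a stream
    of partial chunks.
    """
    return json.dumps({"model": model, "prompt": prompt, "stream": False}).encode()

def generate(model: str, prompt: str) -> str:
    """Send one non-streaming generation request and return the model's text."""
    req = urllib.request.Request(
        OLLAMA_URL,
        data=build_request(model, prompt),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

if __name__ == "__main__":
    # Requires a running Ollama server with the model pulled.
    print(generate("gemma2:2b", "Why is the sky blue?"))
```

Swapping in any other model name from the table (e.g. `phi3` or `qwen2.5`) is just a change to the `model` argument.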
Model and RAM Recommendations
Model Size | RAM |
---|---|
7B | 8 GB |
13B | 16 GB |
33B | 32 GB |
From the Ollama README guidance 16
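The table above can be encoded as a small lookup helper. This is a sketch: the thresholds are exactly the README figures (7B → 8 GB, 13B → 16 GB, 33B → 32 GB), and rounding intermediate sizes up to the next listed tier is my own conservative assumption, not official guidance:

```python
# (billions of parameters, recommended GB of RAM) from the table above
RAM_GUIDANCE = [(7, 8), (13, 16), (33, 32)]

def min_ram_gb(model_size_b: float) -> int:
    """Return the recommended minimum RAM for a model of the given size.

    Sizes between listed tiers are rounded up to the next tier
    (a conservative assumption, not part of the original guidance).
    """
    for params, ram in RAM_GUIDANCE:
        if model_size_b <= params:
            return ram
    raise ValueError(f"No guidance for models above 33B ({model_size_b}B requested)")

print(min_ram_gb(7))   # 8
print(min_ram_gb(13))  # 16
```

For example, an 8B model falls between the 7B and 13B tiers, so the helper recommends 16 GB.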
Footnotes and Sources
See also
- Deepseek R1 Locally, Open-Source Ai Tools, ollama, automation, RAG