Artificial Intelligence (AI) Models
Model and Use Cases
| Model Name | Use Cases | Strengths | Notes |
|---|---|---|---|
| Codebooga | Programming | Python and JavaScript | 1 |
| CodeGeeX | Programming | Cross-language translation, plugins for many IDEs | 2 |
| CodeQwen | Programming | | 3 |
| DeepSeek-R1 | General | Reasoning | |
| Dolphin-mistral | General | Uncensored; use when other models refuse answers | 4 |
| Dolphin-mixtral | Programming | Uncensored | 5 |
| Gemma2 | General | 2B good for low hardware | 6 |
| Gemma3 | General, RAG | Low hardware/1 GPU, long context, multilingual, multimodal | |
| GPT-oss-20b | General, Reasoning, Agentic | Configurable reasoning | |
| Granite | General | FOSS, SLM, 4 series for low to medium hardware, multimodal | |
| Kimi-K2-Instruct | General, Agentic | Chat, agentic | |
| Llama3 | General | | 7 |
| Meditron | Medical | Medical questions, diagnosis, information | |
| Medllama2 | Medical | Medical questions, trained on open-source medical data | 8 |
| MedGemma series | Medical | Medical text and image comprehension | 9 |
| MedImageInsight | Medical | Medical image embeddings (radiology, pathology, etc.) | 10 |
| MedImageParse | Medical | Medical image segmentation | 11 |
| CXRReportGen | Medical | Chest X-ray report generation | 12 |
| Mistral | General, Programming | 7B OK for low hardware | 13 |
| Moondream | Vision | Small, for edge devices | |
| Nemotron-mini | Role-play, RAG, Function calling | 4B for low hardware | 14 |
| Phi3 | General, RAG | Low hardware, long context | |
| Phi-4 | General | Low hardware, reasoning | |
| Phi4-mini | General, RAG | Low hardware, multilingual, long context | |
| Qwen3 series | General | Multiple models depending on use | |
| Qwen-Image | Image generation, editing | Good at rendering text, especially Chinese | |
| StarCoder | Programming | Trained on 80+ languages; small to large models | 15 |
| WizardCoder | Programming | | 16 |
| Zephyr | Assistant | Fine-tuned version of Mistral and Mixtral as a help assistant | 17 |
Multimodal means the model can handle both text and images.
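The table above can be encoded as a simple lookup for scripting. This is an illustrative sketch: the `MODELS_BY_USE_CASE` dictionary and `suggest_models` helper are hypothetical names, and the subset of models listed per category is taken from the table, not an exhaustive mapping.

```python
# Illustrative lookup of local models by use case, drawn from the table above.
# The selection per category is a subset; adjust to your hardware and needs.
MODELS_BY_USE_CASE = {
    "programming": ["codebooga", "codegeex", "codeqwen", "starcoder", "wizardcoder"],
    "general": ["deepseek-r1", "gemma3", "llama3", "mistral", "phi4-mini"],
    "medical": ["meditron", "medllama2"],
    "vision": ["moondream"],
}

def suggest_models(use_case: str) -> list[str]:
    """Return candidate local models for a use case, or an empty list if unknown."""
    return MODELS_BY_USE_CASE.get(use_case.lower(), [])
```

A plain dictionary keeps the mapping easy to extend as new models are added to the table.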
Model and RAM Recommendations
| Model Size | RAM |
|---|---|
| 7B | 8 GB |
| 13B | 16 GB |
| 33B | 32 GB |
Source: Ollama README guidance 18
Models for Programming, Computer Language Development and Use Cases
Summary
Source: Choosing the right model in GitHub Copilot - Microsoft Tech Community
- General Development Tasks: new functions, creating tests/documentation, improving code
  - GPT-4.1
  - GPT-5-mini
  - Claude Sonnet
  - Big Pickle (OpenCode Zen)
- Light Tasks: quick explanations, JSON/YAML transformations, small refactors, regex creation, short Q&A
  - Claude Haiku 4.5
  - MiMo V2 Omni (multi-modal)
  - Nemotron 3 Super Free (text only)
- Complex Debugging, Deep Reasoning: analyzing code, debugging hard issues, architecture decisions, multi-step reasoning, performance analysis
  - GPT-5-mini
  - GPT-5.1
  - GPT-5.2
  - Claude Opus
- Multi-step Agentic Development: entire repository changes, migrations, creating features, multiple-file planning, automated workflows (plan > run > change)
  - GPT-5.1-Codex-Max
  - GPT-5.2-Codex
  - MiMo V2 Pro
Specific Use Cases
Source: FY26—Advanced-GitHub-Copilot-Workshop/08-models-context/docs - GitHub, with some personal notes
| Task | Recommended Model | Why |
|---|---|---|
| General completion, quick question | GPT-4o / GPT-4.1 | Free, general purpose |
| Refactoring a single file | Claude Haiku 4.5 (0.33×) | Fast, inexpensive for focused edits |
| Multi-file feature implementation | Claude Sonnet 4.6 (1×) | Follows instructions |
| Analyze 100k+ token codebase | Gemini 3.1 Pro (1×) | Large 1M-token context |
| Architecture design | Claude Opus 4.6 (3×) | Reasoning |
| Test generation | Claude Sonnet 4.6 (1×) | Reliable naming + Arrange/Act/Assert structure |
| SQL query generation | GPT-4o (0×) or Sonnet | Both handle SQL well; 0× for routine queries |
| Legacy code explanation | Claude Sonnet 4.6 (1×) | Superior contextual narrative explanation |
| Change, pull request summary | GPT-5 mini (0×) | Fast, accurate, free |
| Documentation | GPT-5 mini (0×) | Fast, accurate, free |
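The task-to-model recommendations above can likewise be expressed as a lookup. This is a sketch only: the task keys are shorthand paraphrases of the table's rows, `pick_model` is a hypothetical helper, and the numbers are the premium-request multipliers shown in parentheses, not an official Copilot API.

```python
# Sketch of the "Specific Use Cases" table as a lookup. Model names and
# multipliers are copied from the table above; task keys are paraphrased.
RECOMMENDED = {
    "general completion": ("GPT-4o / GPT-4.1", 0.0),
    "single-file refactor": ("Claude Haiku 4.5", 0.33),
    "multi-file feature": ("Claude Sonnet 4.6", 1.0),
    "large codebase analysis": ("Gemini 3.1 Pro", 1.0),
    "architecture design": ("Claude Opus 4.6", 3.0),
    "documentation": ("GPT-5 mini", 0.0),
}

def pick_model(task: str) -> tuple[str, float]:
    """Return (model, premium-request multiplier), defaulting to the free general tier."""
    return RECOMMENDED.get(task, ("GPT-4o / GPT-4.1", 0.0))
```

Defaulting unknown tasks to the free 0× tier mirrors the table's advice to reserve higher-multiplier models for work that needs them.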
Older Models
| Model Name | Use Cases | Strengths | Notes |
|---|---|---|---|
| Qwen2.5-Instruct | General | Long-context tasks (1 million token context window) | |
| Qwen2.5 | General | 3B for low hardware | 19 |
| Qwen2 | General, Programming | Chat; small to large models | 20 |
See also
- Deepseek R1 Locally, Open-Source AI Tools, ollama, automation, RAG
Resources
Footnotes
- I Ran 9 Popular LLMs on Raspberry Pi 5; Here’s What I Found
- MedGemma supports image understanding across radiology, pathology, dermatology, and others. Available on Google Vertex AI and MedGemma - Ollama
- Healthcare AI foundation models (classic) - Microsoft Foundry (classic) portal | Microsoft Learn
- Healthcare AI foundation models (classic) - Microsoft Foundry (classic) portal | Microsoft Learn
- Healthcare AI foundation models (classic) - Microsoft Foundry (classic) portal | Microsoft Learn
- I Ran 9 Popular LLMs on Raspberry Pi 5; Here’s What I Found
- I Ran 9 Popular LLMs on Raspberry Pi 5; Here’s What I Found