Small LLM Comparison in 2024
Drawing on major LLM developments from 2019 to 2024, here are the top three small Large Language Models (LLMs) in several domains, with their key specifications and performance suitability:
| Domain | Model Name | Developer |
|---|---|---|
| General Q&A (Text Generation) | Mistral 7B | Mistral AI (2023) |
| | LLaMA 2 (7B) | Meta (2023) |
| | Phi-3-Mini | Microsoft (2024) |
| Coding | Phi-3-Small | Microsoft (2024) |
| | DeciCoder-1B | Deci (2023) |
| | GPT-2 Small | OpenAI (2019) |
| Math | Dolly-v2-3B | Databricks (2023) |
| | Phi-1.5 | Microsoft (2023) |
| | LLaMA 2 (7B) | Meta (2023) |
| Image Understanding | Dolly-v2-3B | Databricks (2023) |
| | LLaMA 2 (7B) | Meta (2023) |
| | Phi-3-Mini | Microsoft (2024) |
| Translation | Mistral 7B | Mistral AI (2023) |
| | Phi-3-Small | Microsoft (2024) |
| | LLaMA 2 (7B) | Meta (2023) |
1. General Q&A (Text Generation)
Mistral 7B (2023 by Mistral AI)
- Technical Specs and Hardware Requirements:
- Parameters: 7 billion
- RAM: 4GB (smartphones), 8GB (PCs)
- CPU: Snapdragon 855 or higher
- Storage: 20GB
- Benchmark: Reported to outperform Llama 2 13B across benchmarks; long inputs are constrained by its 8,000-token sliding attention window.
Phi-3-Mini (2024 by Microsoft)
- Technical Specs and Hardware Requirements:
- Parameters: 3.8 billion
- RAM: 4GB (smartphones), 6GB (PCs)
- CPU: Snapdragon 855 or higher
- Storage: 10GB
- Benchmark: Fast response times, limited factual recall.
LLaMA 2 (7B) (2023 by Meta)
- Technical Specs and Hardware Requirements:
- Parameters: 7 billion
- RAM: 6GB (smartphones)
- CPU: Snapdragon 855 or higher
- Storage: 8GB
- Benchmark: Slower on-device performance compared to Mistral 7B, efficient for basic tasks.
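The smartphone RAM figures above only make sense with aggressive quantization: a 7B model at 16-bit precision needs far more than 4GB just for its weights. The following back-of-the-envelope estimator illustrates why; the formula (parameters × bytes per parameter, plus a fixed runtime overhead) is a common rule of thumb, and the 0.5GB overhead figure is an assumption, not a measured value.

```python
# Rough memory-footprint estimator for on-device LLM weights.
# bytes = n_params * bits_per_param / 8; overhead_gb is an assumed
# allowance for the runtime, KV cache, and activations.

def estimate_ram_gb(n_params: float, bits_per_param: int,
                    overhead_gb: float = 0.5) -> float:
    """Approximate RAM needed to hold model weights at a given precision."""
    weight_bytes = n_params * bits_per_param / 8
    return weight_bytes / (1024 ** 3) + overhead_gb

# Mistral 7B at 4-bit quantization: ~3.8 GB, so a 4 GB smartphone
# budget is plausible only when heavily quantized.
print(round(estimate_ram_gb(7e9, 4), 1))
# The same model at fp16: ~13.5 GB, well beyond phone RAM.
print(round(estimate_ram_gb(7e9, 16), 1))
```

The same arithmetic explains why the 1B-3B models in this post fit comfortably on mid-range phones while 7B models sit right at the edge.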
2. Coding
DeciCoder-1B (2023 by Deci)
- Technical Specs and Hardware Requirements:
- Parameters: 1 billion
- RAM: 2GB (smartphones), 4GB (PCs)
- CPU: Snapdragon 855 or higher
- Storage: 2GB
- Benchmark: Great for basic coding tasks, runs well on low resources.
Phi-3-Small (2024 by Microsoft)
- Technical Specs and Hardware Requirements:
- Parameters: 7 billion
- RAM: 4GB (smartphones), 8GB (PCs)
- CPU: Snapdragon 855 or higher
- Storage: 20GB
- Benchmark: Fast and efficient in code understanding.
GPT-2 Small (2019 by OpenAI)
- Technical Specs and Hardware Requirements:
- Parameters: 117 million
- RAM: 2GB (smartphones), 4GB (PCs)
- CPU: Snapdragon 660 or higher
- Storage: 1GB
- Benchmark: Only rudimentary code completion (it was not trained specifically on code), but its resource requirements are very low.
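Because these small code models have short context windows, an on-device coding assistant typically sends only the tail of the current file as the prompt. A minimal sketch of that truncation step, using the rough four-characters-per-token heuristic (an assumption; real deployments should count tokens with the model's own tokenizer):

```python
# Keep only the most recent portion of a source file that fits a
# model's context budget, estimated at ~4 characters per token.

def build_completion_prompt(file_text: str, max_tokens: int = 2048) -> str:
    """Return the tail of file_text that fits the rough token budget."""
    max_chars = max_tokens * 4  # crude chars-per-token estimate
    if len(file_text) <= max_chars:
        return file_text
    return file_text[-max_chars:]

# A 30,000-character file trimmed to a 100-token (~400-character) budget:
prompt = build_completion_prompt("x = 1\n" * 5000, max_tokens=100)
print(len(prompt))  # 400
```

A production assistant would also cut at a line boundary rather than mid-token, but the budget logic is the part that matters for fitting a 1B-7B model's window.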
3. Math
Phi-1.5 (2023 by Microsoft)
- Technical Specs and Hardware Requirements:
- Parameters: 1.5 billion
- RAM: 3GB (smartphones), 6GB (PCs)
- CPU: Snapdragon 845 or higher
- Storage: 15GB
- Benchmark: Fast performance, struggles with complex math problems.
Dolly-v2-3B (2023 by Databricks)
- Technical Specs and Hardware Requirements:
- Parameters: 3 billion
- RAM: 4GB (smartphones), 8GB (PCs)
- CPU: Snapdragon 855 or higher
- Storage: 25GB
- Benchmark: High accuracy but higher RAM usage.
LLaMA 2 (7B) (2023 by Meta)
- Technical Specs and Hardware Requirements:
- Parameters: 7 billion
- RAM: 6GB (smartphones)
- CPU: Snapdragon 855 or higher
- Storage: 8GB
- Benchmark: Efficient for basic math, struggles with advanced calculations.
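When checking how well a small model handles math, a common evaluation trick is to extract the final number from its free-form answer and compare it against the expected value. A minimal sketch; the regex and tolerance here are assumptions rather than any benchmark's official scoring script.

```python
# Extract the last number in a model's free-form answer and compare it
# to the expected result, as a simple automatic math grader.
import re

def extract_final_number(answer: str):
    """Return the last number mentioned in the model output, or None."""
    matches = re.findall(r"-?\d+(?:\.\d+)?", answer.replace(",", ""))
    return float(matches[-1]) if matches else None

def is_correct(answer: str, expected: float, tol: float = 1e-6) -> bool:
    value = extract_final_number(answer)
    return value is not None and abs(value - expected) <= tol

print(is_correct("The total is 1,234 apples, so the answer is 42.", 42))  # True
```

This kind of grader is how claims like "struggles with advanced calculations" get quantified: run a problem set through the model and score the extracted answers.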
4. Image Understanding
Note: the base models listed below are text-only; using them for image tasks requires pairing them with a separate vision encoder or switching to a multimodal variant (e.g., Phi-3-Vision).
LLaMA 2 (7B) (2023 by Meta)
- Technical Specs and Hardware Requirements:
- Parameters: 7 billion
- RAM: 6GB (smartphones), 8GB (PCs)
- CPU: Snapdragon 855 or higher
- Storage: 10GB
- Benchmark: Basic image understanding tasks, works on mid-tier smartphones.
Dolly-v2-3B (2023 by Databricks)
- Technical Specs and Hardware Requirements:
- Parameters: 3 billion
- RAM: 4GB (smartphones), 8GB (PCs)
- CPU: Snapdragon 855 or higher
- Storage: 20GB
- Benchmark: Decent performance in image-to-text tasks but requires more RAM.
Phi-3-Mini (2024 by Microsoft)
- Technical Specs and Hardware Requirements:
- Parameters: 3 billion
- RAM: 4GB (smartphones), 6GB (PCs)
- CPU: Snapdragon 855 or higher
- Storage: 10GB
- Benchmark: Works well with basic image recognition but limited context handling.
5. Translation (Chinese, Japanese)
LLaMA 2 (7B) (2023 by Meta)
- Technical Specs and Hardware Requirements:
- Parameters: 7 billion
- RAM: 6GB (smartphones)
- CPU: Snapdragon 855 or higher
- Storage: 10GB
- Benchmark: Efficient for on-device translations but limited language precision.
Phi-3-Small (2024 by Microsoft)
- Technical Specs and Hardware Requirements:
- Parameters: 7 billion
- RAM: 4GB (smartphones), 6GB (PCs)
- CPU: Snapdragon 855 or higher
- Storage: 8GB
- Benchmark: Good for basic translations but slower in complex phrases.
Mistral 7B (2023 by Mistral AI)
- Technical Specs and Hardware Requirements:
- Parameters: 7 billion
- RAM: 4GB (smartphones), 8GB (PCs)
- CPU: Snapdragon 855 or higher
- Storage: 20GB
- Benchmark: Reported to outperform Llama 2 13B across benchmarks; its 8,000-token sliding attention window limits long inputs. Good for basic translation tasks and handles multiple languages decently.
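Small instruction-tuned models translate most reliably when the prompt states the language pair explicitly and asks for the translation alone. A hedged sketch of such a prompt builder; the template wording is an assumption, not any model's official chat format.

```python
# Build an explicit, minimal translation prompt for a small
# instruction-tuned model. The phrasing is illustrative only.

def translation_prompt(text: str, source: str, target: str) -> str:
    """Return a plain instruction prompt for a source->target translation."""
    return (
        f"Translate the following {source} text into {target}. "
        f"Reply with the translation only.\n\n{text}"
    )

p = translation_prompt("こんにちは", "Japanese", "English")
print(p)
```

In practice this string would be wrapped in the model's own chat template (e.g., LLaMA 2's `[INST]` markers) before generation; the instruction content is what keeps a 7B model from drifting into commentary instead of translating.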
This post is licensed under CC BY 4.0 by the author.