
Small LLM Comparison in 2024

Drawing on the major LLM releases from 2019 to 2024, here are the top three small Large Language Models (LLMs) in each domain, with their key specifications and performance suitability:


| Domain | Model Name | Developer |
|---|---|---|
| General Q&A (Text Generation) | Mistral 7B | Mistral AI (2023) |
| | LLaMA 2 (7B) | Meta (2023) |
| | Phi-3-Mini | Microsoft (2024) |
| Coding | Phi-3-Small | Microsoft (2024) |
| | DeciCoder-1B | Deci (2023) |
| | GPT-2 Small | OpenAI (2019) |
| Math | Dolly-v2-3B | Databricks (2023) |
| | Phi-1.5 | Microsoft (2023) |
| | LLaMA 2 (7B) | Meta (2023) |
| Image Understanding | Dolly-v2-3B | Databricks (2023) |
| | LLaMA 2 (7B) | Meta (2023) |
| | Phi-3-Mini | Microsoft (2024) |
| Translation | Mistral 7B | Mistral AI (2023) |
| | Phi-3-Small | Microsoft (2024) |
| | LLaMA 2 (7B) | Meta (2023) |
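The RAM figures below are easiest to sanity-check from the parameter counts: a model's weight footprint is roughly parameters × bytes per weight, so a 4-bit-quantized 7B model needs about 3.5 GB for weights alone. A rough sketch (the function name, quantization levels, and the decision to ignore KV-cache overhead are my illustrative assumptions, not vendor figures):

```python
def estimate_weight_ram_gb(num_params: float, bits_per_weight: int) -> float:
    """Rough weight-only RAM estimate: parameters * bytes per weight.

    Ignores the KV cache and runtime overhead, which add more on top.
    """
    bytes_total = num_params * bits_per_weight / 8
    return bytes_total / 1e9  # decimal GB

# A 7B model at common quantization levels (illustrative):
for bits in (16, 8, 4):
    gb = estimate_weight_ram_gb(7e9, bits)
    print(f"7B @ {bits}-bit: ~{gb:.1f} GB")
```

This is why a 7B model can fit in a 4 GB smartphone RAM budget at 4-bit quantization, while the same model at 16-bit needs a desktop-class machine.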

1. General Q&A (Text Generation)

Mistral 7B (2023 by Mistral AI)

  1. Technical Specs and Hardware Requirements:
    • Parameters: 7 billion
    • RAM: 4GB (smartphones), 8GB (PCs)
    • CPU: Snapdragon 855 or higher
    • Storage: 20GB
  2. Benchmark: Mistral AI reports that it outperforms Llama 2 13B on all benchmarks; context window is limited to 8,000 tokens.

Phi-3-Mini (2024 by Microsoft)

  1. Technical Specs and Hardware Requirements:
    • Parameters: 3.8 billion
    • RAM: 4GB (smartphones), 6GB (PCs)
    • CPU: Snapdragon 855 or higher
    • Storage: 10GB
  2. Benchmark: Fast response times, limited factual recall.

LLaMA 2 (7B) (2023 by Meta)

  1. Technical Specs and Hardware Requirements:
    • Parameters: 7 billion
    • RAM: 6GB (smartphones)
    • CPU: Snapdragon 855 or higher
    • Storage: 8GB
  2. Benchmark: Slower on-device than Mistral 7B, but efficient for basic tasks.
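For Q&A, the prompt template matters as much as the hardware: Llama 2's chat variants expect the `[INST] ... [/INST]` wrapper with an optional `<<SYS>>` system block. A minimal formatter for a single turn (the helper name is mine; the tag strings follow Meta's published chat template, and the BOS token is left to the tokenizer, which normally adds it):

```python
def format_llama2_prompt(user_msg: str, system_msg: str = "") -> str:
    """Wrap a single-turn prompt in Llama 2's chat template.

    The BOS token (<s>) is intentionally omitted; the tokenizer adds it.
    """
    if system_msg:
        return f"[INST] <<SYS>>\n{system_msg}\n<</SYS>>\n\n{user_msg} [/INST]"
    return f"[INST] {user_msg} [/INST]"

print(format_llama2_prompt("What is the capital of France?"))
```

Feeding a raw question without this wrapper is a common cause of rambling or off-format answers from the chat-tuned weights.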

2. Coding

DeciCoder-1B (2023 by Deci)

  1. Technical Specs and Hardware Requirements:
    • Parameters: 1 billion
    • RAM: 2GB (smartphones), 4GB (PCs)
    • CPU: Snapdragon 855 or higher
    • Storage: 2GB
  2. Benchmark: Great for basic coding tasks, runs well on low resources.

Phi-3-Small (2024 by Microsoft)

  1. Technical Specs and Hardware Requirements:
    • Parameters: 7 billion
    • RAM: 4GB (smartphones), 8GB (PCs)
    • CPU: Snapdragon 855 or higher
    • Storage: 20GB
  2. Benchmark: Fast and efficient in code understanding.

GPT-2 Small (2019 by OpenAI)

  1. Technical Specs and Hardware Requirements:
    • Parameters: 117 million
    • RAM: 2GB (smartphones), 4GB (PCs)
    • CPU: Snapdragon 660 or higher
    • Storage: 1GB
  2. Benchmark: Only rudimentary code completion by modern standards, but very low resource requirements.
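Whether a code model feels responsive on-device is largely a memory-bandwidth question: each generated token must stream the full (quantized) weight set from RAM, so decode speed is bounded by bandwidth ÷ model size in bytes. A back-of-the-envelope sketch (the bandwidth figure is an illustrative assumption, not a device spec):

```python
def est_tokens_per_sec(num_params: float, bits_per_weight: int,
                       mem_bandwidth_gbps: float) -> float:
    """Upper-bound decode speed: every token reads all weights once."""
    model_bytes = num_params * bits_per_weight / 8
    return mem_bandwidth_gbps * 1e9 / model_bytes

# Illustrative: a 1B vs a 7B model, both 4-bit, on ~40 GB/s mobile RAM
print(f"1B: ~{est_tokens_per_sec(1e9, 4, 40):.0f} tok/s")
print(f"7B: ~{est_tokens_per_sec(7e9, 4, 40):.0f} tok/s")
```

This is why a 1B model like DeciCoder feels snappy on hardware where a 7B model only manages around a dozen tokens per second.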

3. Math

Phi-1.5 (2023 by Microsoft)

  1. Technical Specs and Hardware Requirements:
    • Parameters: 1.5 billion
    • RAM: 3GB (smartphones), 6GB (PCs)
    • CPU: Snapdragon 845 or higher
    • Storage: 15GB
  2. Benchmark: Fast performance, struggles with complex math problems.

Dolly-v2-3B (2023 by Databricks)

  1. Technical Specs and Hardware Requirements:
    • Parameters: 3 billion
    • RAM: 4GB (smartphones), 8GB (PCs)
    • CPU: Snapdragon 855 or higher
    • Storage: 25GB
  2. Benchmark: Comparatively high accuracy for its size, at the cost of higher RAM usage.

LLaMA 2 (7B) (2023 by Meta)

  1. Technical Specs and Hardware Requirements:
    • Parameters: 7 billion
    • RAM: 6GB (smartphones)
    • CPU: Snapdragon 855 or higher
    • Storage: 8GB
  2. Benchmark: Efficient for basic math, struggles with advanced calculations.
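Math suitability for small models is usually scored by extracting the final number from a generated answer and comparing it to the reference, as GSM8K-style harnesses do. A minimal sketch of that scoring step (both helper names are mine):

```python
import re

def extract_last_number(text: str):
    """Pull the last numeric token out of a model's answer, if any."""
    matches = re.findall(r"-?\d+(?:\.\d+)?", text.replace(",", ""))
    return float(matches[-1]) if matches else None

def is_correct(answer_text: str, reference: float, tol: float = 1e-6) -> bool:
    """Score an answer by its final number, within a small tolerance."""
    value = extract_last_number(answer_text)
    return value is not None and abs(value - reference) <= tol

print(is_correct("So 12 * 4 = 48. The answer is 48.", 48))  # → True
```

Keying on the last number is deliberate: small models often show intermediate work, and the final figure is the one that counts.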

4. Image Understanding

Note: none of the models below is natively multimodal. They are text-only LLMs, so on-device image understanding requires pairing them with a separate vision encoder (as in LLaVA-style pipelines).

LLaMA 2 (7B) (2023 by Meta)

  1. Technical Specs and Hardware Requirements:
    • Parameters: 7 billion
    • RAM: 6GB (smartphones), 8GB (PCs)
    • CPU: Snapdragon 855 or higher
    • Storage: 10GB
  2. Benchmark: Basic image understanding tasks, works on mid-tier smartphones.

Dolly-v2-3B (2023 by Databricks)

  1. Technical Specs and Hardware Requirements:
    • Parameters: 3 billion
    • RAM: 4GB (smartphones), 8GB (PCs)
    • CPU: Snapdragon 855 or higher
    • Storage: 20GB
  2. Benchmark: Decent performance in image-to-text tasks but requires more RAM.

Phi-3-Mini (2024 by Microsoft)

  1. Technical Specs and Hardware Requirements:
    • Parameters: 3.8 billion
    • RAM: 4GB (smartphones), 6GB (PCs)
    • CPU: Snapdragon 855 or higher
    • Storage: 10GB
  2. Benchmark: Works well with basic image recognition but limited context handling.

5. Translation (Chinese, Japanese)

LLaMA 2 (7B) (2023 by Meta)

  1. Technical Specs and Hardware Requirements:
    • Parameters: 7 billion
    • RAM: 6GB (smartphones)
    • CPU: Snapdragon 855 or higher
    • Storage: 10GB
  2. Benchmark: Efficient for on-device translation, but with limited precision in Chinese and Japanese.

Phi-3-Small (2024 by Microsoft)

  1. Technical Specs and Hardware Requirements:
    • Parameters: 7 billion
    • RAM: 4GB (smartphones), 8GB (PCs)
    • CPU: Snapdragon 855 or higher
    • Storage: 20GB
  2. Benchmark: Good for basic translations, but slower on complex phrases.

Mistral 7B (2023 by Mistral AI)

  1. Technical Specs and Hardware Requirements:
    • Parameters: 7 billion
    • RAM: 4GB (smartphones), 8GB (PCs)
    • CPU: Snapdragon 855 or higher
    • Storage: 20GB
  2. Benchmark: Good for basic translation tasks and handles multiple languages decently; the same 8,000-token context limit noted above applies.
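With completion-style small models, a short few-shot prompt usually beats a bare "translate this" instruction. A minimal prompt builder (the helper name and the example sentence pair are placeholders; swap in real parallel sentences for the target language):

```python
def build_translation_prompt(src_lang: str, tgt_lang: str,
                             examples, text: str) -> str:
    """Assemble a few-shot translation prompt for a completion-style model.

    `examples` is a list of (source, target) sentence pairs.
    """
    lines = [f"Translate {src_lang} to {tgt_lang}."]
    for src, tgt in examples:
        lines.append(f"{src_lang}: {src}")
        lines.append(f"{tgt_lang}: {tgt}")
    # End with an open target line for the model to complete.
    lines.append(f"{src_lang}: {text}")
    lines.append(f"{tgt_lang}:")
    return "\n".join(lines)

prompt = build_translation_prompt(
    "English", "Japanese",
    [("Good morning.", "おはようございます。")],
    "Thank you.",
)
print(prompt)
```

Ending the prompt on the open `Japanese:` line steers the model to emit only the translation rather than commentary.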

This post is licensed under CC BY 4.0 by the author.