Run SLM (phi3, gemma2, mathstral, llama3.1) on SBC (LattePanda Sigma)

Introduction

In today's era of intelligent computing, Single Board Computers (SBCs) have gained increasing popularity among developers thanks to their compact design and strong computing performance. At the same time, Small Language Models (SLMs) play a growing role in diverse application scenarios because of their efficiency and ease of deployment. This article takes an in-depth look at how various SLMs perform on the LattePanda Sigma, an x86 SBC running Ubuntu 22.04. We compare mathstral, phi3, llama3.1, mamba-codestral, deepseek-v2, gemma2-2b, and gemma2-9b in terms of execution speed, model size, open-source license, and runtime framework, with the goal of giving developers practical data and insights.

 

mathstral-7B-v0.1

Model size: 4.1GB

Speed: 7.33 tokens/s

Open-source license: Apache 2.0

Runtime framework: ollama

 

Mathstral is built on Mistral 7B and supports a 32k context window. It is a specialized model for mathematical reasoning and scientific tasks; it should not be confused with mamba-codestral, Mistral AI's Mamba2-based code model.

Scores of different models on the MATH task

 

Install ollama and run the command:

CODE
curl -fsSL https://ollama.com/install.sh | sh
sudo ollama run mathstral
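
One easy way to gauge token throughput on the device is ollama's --verbose flag, which prints timing statistics after each response, including the eval rate in tokens/s; a minimal sketch:

CODE
# print timing statistics (prompt eval rate, eval rate, etc.) after each reply
sudo ollama run mathstral --verbose
# example of the relevant output line: eval rate: 7.33 tokens/s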

Token speed:

Token speed of mathstral-7B-v0.1 running on LattePanda Sigma

 

Q&A:

Prove that the sum of any two primes is always even, except for the number 2

1. Understanding Prime Numbers:

A prime number is defined as a natural number greater than 1 that has no positive divisors other than 1 and itself. The only even prime number is 2 because all other even numbers are divisible by 2, which means they have more than two divisors (they are composite).

 

2. Considering the Sum of Two Prime Numbers:

We need to examine the sum of any two prime numbers and show that this sum is always even except when one or both of the primes are 2.

 

3. Case Analysis on Even Prime Numbers:

· The only even prime number is 2.

· If we have a sum involving 2, say \(2 + p\) where \(p\) is another prime:

\[
2 + p = \text{even} + p
\]

Since 2 is even and the addition of an even number and any other integer results in an even number, we get an even sum.

 

4. Case Analysis on Odd Prime Numbers:

Any odd prime number \(p\) can be expressed as \(p = 2k + 1\), where \(k \in \mathbb{Z}\).

Consider the sum of two such odd primes:

\[

p_1 + p_2 = (2k_1 + 1) + (2k_2 + 1)

\]

Simplifying, we get:

\[

p_1 + p_2 = 2k_1 + 1 + 2k_2 + 1 = 2(k_1 + k_2) + 2 = 2(k_1 + k_2 + 1)

\]

Since \(k_1\) and \(k_2\) are integers, \(k_1 + k_2 + 1\) is also an integer. Thus, the sum of two odd primes is even.

 

5. Special Case Analysis:

· If both primes are the number 2:

  \[

  2 + 2 = 4 \quad \text{(which is even)}

  \]

· If one prime is 2 and the other is an odd prime \(p\):

  \[

  2 + p = \text{even} + p = \text{even}

  \]

 

6. Conclusion:

From the above steps, we conclude that the sum of any two prime numbers is always even except for the number 2.

 

\[
\boxed{\text{The sum of any two primes is always even except for the number 2}}
\]

 

From 5 different books, how many different ways can you choose 3 books?

1. Understanding the Problem:

We need to select 3 books out of 5 different books and determine the number of different ways this selection can be made.

 

2. Applying Combinatorial Formula:

The number of ways to choose \(k\) items from a set of \(n\) items without regard to order is given by the binomial coefficient, which is denoted as \(\binom{n}{k}\). This can be calculated using the formula:

\[

\binom{n}{k} = \frac{n!}{k!(n-k)!}

\]

Here, \(n = 5\) and \(k = 3\).

 

3. Substitute Values:

Substituting \(n = 5\) and \(k = 3\) into the binomial coefficient formula, we get:

\[

\binom{5}{3} = \frac{5!}{3!(5-3)!}

\]

Simplifying further, since \(5-3 = 2\), we have:

\[

\binom{5}{3} = \frac{5!}{3!2!}

\]

 

4. Calculate Factorials:

The factorial of a number \(n\) is the product of all positive integers up to \(n\). Thus, we calculate:

\[

5! = 5 \times 4 \times 3 \times 2 \times 1 = 120

\]

\[

3! = 3 \times 2 \times 1 = 6

\]

\[

2! = 2 \times 1 = 2

\]

 

5. Simplify the Expression:

Now, we substitute these factorial values back into our binomial coefficient expression:

\[

\binom{5}{3} = \frac{120}{6 \times 2} = \frac{120}{12} = 10

\]

 

6. Conclusion:

Therefore, the number of different ways to choose 3 books from 5 different books is:

\[

\boxed{10}

\]

 

Solve the inequality: 2x - 5 > 7

1. Starting with the Inequality:

\[

2x - 5 > 7

\]

 

2. Isolate the \(x\) Term:

To isolate the term involving \(x\), add 5 to both sides of the inequality:

\[

2x - 5 + 5 > 7 + 5 \\

2x > 12

\]

 

3. Solve for \(x\):

Divide both sides by 2 to solve for \(x\):

\[

x > \frac{12}{2} \\

x > 6

\]

 

4. Conclusion:

The solution to the inequality \(2x - 5 > 7\) is:

\[

\boxed{x > 6}

\]

 

Given the vertices of a triangle A(1,2), B(4,6), and C(5,2), find the area of the triangle

1. Understanding Coordinate Geometry:

The vertices of the triangle are given as \(A(1, 2)\), \(B(4, 6)\), and \(C(5, 2)\).

 

2. Using Determinant Formula for Area of Triangle:

The area of a triangle whose vertices are \((x_1, y_1)\), \((x_2, y_2)\), and \((x_3, y_3)\) is given by:

\[
\text{Area} = \frac{1}{2} \left| x_1(y_2 - y_3) + x_2(y_3 - y_1) + x_3(y_1 - y_2) \right|
\]

 

3. Substitute Given Vertices:

Substitute the coordinates of \(A\), \(B\), and \(C\) into the determinant formula:

\[

A = (1, 2), \, B = (4, 6), \, C = (5, 2)

\]

This gives us:

\[

\text{Area} = \frac{1}{2} \left| 1(6 - 2) + 4(2 - 2) + 5(2 - 6) \right|

\]

 

4. Simplify the Expression:

Simplify inside the absolute value:

\[

= \frac{1}{2} \left| 1 \cdot 4 + 4 \cdot 0 + 5 \cdot (-4) \right| \\

= \frac{1}{2} \left| 4 - 20 \right| \\

= \frac{1}{2} \left| -16 \right| \\

= \frac{1}{2} \times 16 = 8

\]

 

5. Conclusion:

The area of the triangle with vertices \(A(1, 2)\), \(B(4, 6)\), and \(C(5, 2)\) is:

\[

\boxed{8}

\]

 

phi3-3.8b

Model size: 2.2GB

Speed: 12 tokens/s

Open-source license: MIT

Runtime framework: ollama

 

Install ollama and run the command:

CODE
sudo ollama run phi3

Phi-3 is a family of open AI models developed by Microsoft. Phi-3 models are among the most capable and cost-effective small language models (SLMs) available, outperforming models of the same size and the next size up across a variety of language, reasoning, coding, and math benchmarks. The Phi-3 family includes mini, small, medium, and vision versions, trained at different parameter counts to serve various application scenarios.
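
Besides the interactive CLI, the ollama service exposes a local REST API (on port 11434 by default), which makes it easy to script identical prompts across all the models in this article; a minimal sketch with curl, where the prompt text is purely illustrative:

CODE
# one-shot generation; the JSON reply includes eval_count and eval_duration (in nanoseconds),
# so tokens/s can be computed as eval_count / (eval_duration / 1e9)
curl http://localhost:11434/api/generate -d '{
  "model": "phi3",
  "prompt": "Explain in two sentences what a single board computer is.",
  "stream": false
}'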

Token speed:

Token speed of phi3-3.8b running on LattePanda Sigma

 

llama3.1-8b

Model size: 4.7GB

Speed: 6.83 tokens/s

Open-source license: llama3.1

Runtime framework: ollama

 

Install ollama and run the command:

CODE
sudo ollama run llama3.1

Meta's Llama 3.1 is the latest release in the open-access Llama series, available in 8B, 70B, and 405B versions. The 8B version tested here is suited to efficient deployment and development on consumer-grade hardware. Compared to Llama 2, the biggest change is the greatly expanded vocabulary; in addition, the 8B model uses Grouped Query Attention (GQA), which helps it handle longer contexts.
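
To actually benefit from the longer context, the context window ollama allocates can be raised above its default inside an interactive session; a minimal sketch, where the 8192 value is only an example and larger windows consume more RAM:

CODE
sudo ollama run llama3.1
# then, at the ollama prompt, raise the context window for this session
>>> /set parameter num_ctx 8192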

Token speed:

Token speed of llama3.1-8b running on LattePanda Sigma

 

deepseek-v2-16b-q4

Model size: 8.9 GB

Speed: 17 tokens/s

Open-source license: Deepseek license

Runtime framework: ollama

 

Install ollama and run the command:

CODE
sudo ollama run deepseek-v2

DeepSeek-V2 is a powerful Mixture of Experts (MoE) language model known for its economical training and efficient inference. The full model has 236 billion total parameters, with 21 billion activated per token. Compared to DeepSeek 67B, DeepSeek-V2 offers stronger performance while saving 42.5% in training costs, reducing KV cache usage by 93.3%, and improving maximum generation throughput by 5.76 times. The build run here through ollama is the 16B DeepSeek-V2 Lite variant quantized to 4 bits, which activates only about 2.4B parameters per token; this is why its token rate is the highest in this comparison despite the 8.9 GB download.
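
The download sizes quoted in each section, as well as which models are currently loaded into memory, can be checked directly on the device; a minimal sketch:

CODE
# list downloaded models and their on-disk sizes
sudo ollama list
# show which models are currently loaded and how much memory they occupy
sudo ollama ps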

 

Token speed:

Token speed of deepseek-v2-16b-q4 running on LattePanda Sigma

 

gemma2-2b-q4

Model size: 1.6 GB

Speed: 15 tokens/s

Open-source license: gemma license

Runtime framework: ollama

 

Install ollama and run the command:

CODE
sudo ollama run gemma2:2b

Google's Gemma 2 is a high-performing and efficient model family, now available in three sizes: 2B, 9B, and 27B.

Features

- Content Creation and Distribution
  - Text Generation: These models can generate creative text formats such as poetry, scripts, code, marketing copy, and email drafts.
  - Chatbots and Conversational AI: Provide conversational interfaces for customer service, virtual assistants, or interactive applications.
  - Text Summarization: Generate concise summaries of text corpora, research papers, or reports.

- Research and Education
  - Natural Language Processing (NLP) Research: These models serve as a foundation for researchers to experiment with NLP techniques, develop algorithms, and contribute to the advancement of the field.
  - Language Learning Tools: Support interactive language learning experiences, help with grammar correction, or provide writing exercises.
  - Knowledge Exploration: Assist researchers in exploring large volumes of text by generating summaries or answering questions about specific topics.
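
As a quick, non-interactive way to exercise the summarization use case above, ollama run also accepts the prompt as a command-line argument; a minimal sketch, with the prompt text purely illustrative:

CODE
sudo ollama run gemma2:2b "Summarize in one sentence: single board computers combine CPU, memory, and I/O on one small board, which makes them popular for edge AI projects."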

 

gemma2-9b-q4

Model size: 5.4 GB

Speed: 5.13 tokens/s

Open-source license: gemma license

Runtime framework: ollama

 

Install ollama and run the command:

CODE
sudo ollama run gemma2

Token speed:

Token speed of gemma2-9b-q4 running on LattePanda Sigma

mamba-codestral-7B

The remaining model from the introduction, mamba-codestral, was not run through ollama; the steps attempted with llama.cpp were:

1. Convert and quantize the model:

CODE
python3 convert.py models/7B/ --ctx 4096
./quantize ./models/7B/ggml-model-f16.gguf ./models/7B/ggml-model-q4_0.gguf q4_0

2. Use the llama.cpp framework. The relevant llama.cpp branch is still being updated; wait for subsequent support from the author.
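
Once a converted and quantized GGUF file exists, inference with llama.cpp is typically launched as below; a minimal sketch assuming an older llama.cpp build whose example binaries (such as ./main used here) sit in the repository root, with the prompt and token count purely illustrative:

CODE
# run a short generation with the quantized model produced by the steps above
./main -m ./models/7B/ggml-model-q4_0.gguf -p "Write a quicksort function in Python." -n 256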

 

Summary

Comparison of Different SLMs on LattePanda Sigma:

 

This article summarizes the performance of various small language models on LattePanda Sigma hardware. By comparing models such as mathstral, phi3, llama3.1, mamba-codestral, deepseek-v2, and gemma2 in terms of model size, speed, open-source license, and runtime framework, we find the following characteristics:

 

1. mathstral-7B: With a speed of 7.33 tokens/s, this model focuses on mathematical reasoning. Its speed is adequate for interactive use, making it a good fit for math-heavy scenarios.

 

2. phi3-3.8b: At 2.2 GB and 12 tokens/s, this model is compact and fast, making it a good general-purpose default on the Sigma.

 

3. llama3.1-8b: With a speed of 6.83 tokens/s it is slower, but its expanded vocabulary and use of GQA make it well suited to handling longer contexts.

 

4. deepseek-v2-16b-q4: As a Mixture of Experts (MoE) model that activates only a fraction of its parameters per token, it delivers the highest inference speed in this comparison despite being the largest download.

 

5. gemma2: These models have broad applications in content creation, research, and education. gemma2-2b, the smallest model tested, runs roughly three times faster than gemma2-9b, while the 9b version trades speed for stronger output quality.

 

In summary, developers should choose the most suitable model based on their specific needs to achieve the best performance and results.