
[Free & Easy] How to Run Local LLMs (Mistral, DeepSeek R1 Lightweight, etc.) with Jan | Beginner-Friendly AI Setup Steps

Here’s how to use Jan to run a local LLM (Large Language Model).


What is Jan?

Jan is an open-source LLM execution environment that runs completely offline. Because it is operated entirely through a GUI, you can enjoy ChatGPT-style AI chat locally even without technical knowledge.

Jan supports local LLMs in GGUF format (.gguf).
In addition, with API integration you can also use cloud models such as OpenAI's GPT series, Anthropic's Claude, and Google's Gemini, but then Jan is no longer fully offline, and API usage may incur fees.

Jan Installation Instructions (Windows/Mac/Linux)

1. [Download from the official website]

  1. Visit the official Jan website (https://jan.ai)
  2. Click on the installer that suits your OS (Windows/macOS/Linux)

2. [Launch and install the installer]

  • Double-click the downloaded file (e.g., jan-xxx.exe) to start the installer
  • Follow the instructions to complete the installation

Local LLM execution steps

3. [Launch Jan]

  • When you launch the Jan app, you will see the default screen.

4. [Download Models]

Select “Hub” from the left menu

Search for the model you want to use.
Since we are running a local LLM this time, select “On-device” and type “Mistral”. Candidate versions appear in the suggestions; select “mistral-nemo”.


Click the “Download” button to the right of the displayed model.
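Before downloading, it helps to sanity-check whether a model will fit in your machine's memory. As a rough rule of thumb (a common back-of-envelope estimate, not something from Jan's documentation), a quantized model file needs about parameter-count × bits-per-weight ÷ 8 bytes, plus roughly 10% overhead:

```python
def estimate_gguf_size_gb(params_billion: float, bits_per_weight: int,
                          overhead: float = 1.1) -> float:
    """Back-of-envelope size of a quantized model file in GB.

    params_billion: model size in billions of parameters
    (e.g. 12 for Mistral-Nemo); bits_per_weight: quantization
    level (e.g. 4 for Q4). The 10% overhead factor is a rough
    assumption covering metadata and non-quantized tensors.
    """
    bytes_total = params_billion * 1e9 * bits_per_weight / 8
    return round(bytes_total * overhead / 1e9, 1)

# A 12B model quantized to 4 bits is on the order of:
print(estimate_gguf_size_gb(12, 4))  # 6.6 (GB)
```

If the estimate is close to or above your free RAM, pick a smaller model or a lower-bit quantization in the Hub.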

5. [Start Chat]

  1. Create a new thread from “Threads” in the left menu
  2. Enter a prompt in the chat window
  3. You’ll see the AI-generated response on the right
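Beyond the GUI, Jan can also expose an OpenAI-compatible local API server (enabled from Jan's settings; the port and endpoint below are assumptions based on the OpenAI-compatible convention, so check your own settings screen). A minimal sketch of building a chat request for such a server:

```python
import json

def build_chat_request(model: str, prompt: str) -> dict:
    """Build an OpenAI-style chat completion payload."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "stream": False,
    }

payload = build_chat_request("mistral-nemo", "Hello!")
print(json.dumps(payload, indent=2))

# Sending it (only works while Jan's local API server is running;
# the URL is an assumption -- verify host/port in Jan's settings):
# import urllib.request
# req = urllib.request.Request(
#     "http://localhost:1337/v1/chat/completions",
#     data=json.dumps(payload).encode(),
#     headers={"Content-Type": "application/json"},
# )
# print(urllib.request.urlopen(req).read().decode())
```

Because the payload shape matches the OpenAI API, existing OpenAI client code can usually be pointed at the local server by changing only the base URL.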

Recommended environment for running Jan

  • macOS: 13 or higher
  • Windows:
    • Windows 10 or later
    • To enable GPU support:
      • Nvidia GPU with CUDA Toolkit 11.7 or later
      • Nvidia driver 470.63.01 or higher
  • Linux:
    • glibc 2.27 or higher (check with ldd --version)
    • GCC 11, G++ 11, CPP 11 or higher.
    • To enable GPU support:
      • Nvidia GPU with CUDA Toolkit 11.7 or later
      • Nvidia driver 470.63.01 or higher
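The version requirements above are easy to check programmatically. A small sketch for comparing an installed version string (e.g., taken from nvidia-smi or ldd --version output) against the minimums listed:

```python
def version_tuple(v: str) -> tuple:
    """Turn a dotted version like '470.63.01' into (470, 63, 1)."""
    return tuple(int(part) for part in v.split("."))

def meets_minimum(installed: str, minimum: str) -> bool:
    """True if the installed version is at least the required minimum."""
    return version_tuple(installed) >= version_tuple(minimum)

print(meets_minimum("535.104.05", "470.63.01"))  # True  (Nvidia driver)
print(meets_minimum("2.27", "2.27"))             # True  (glibc minimum)
```

Numeric tuple comparison avoids the classic string-comparison pitfall where "9" would sort above "10".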

Source: Jan GitHub README: https://github.com/menloresearch/jan?tab=readme-ov-file#readme

Model format (GGUF format)

Jan uses quantized models in GGUF format, distributed on Hugging Face and elsewhere. The format is optimized for local execution, enabling lightweight, fast operation.
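For reference, the GGUF container itself is easy to identify: every .gguf file begins with the four-byte magic "GGUF" followed by a little-endian uint32 format version. A minimal sketch, using an in-memory stub in place of a real model file:

```python
import io
import struct

GGUF_MAGIC = b"GGUF"  # first four bytes of every GGUF file

def read_gguf_header(f) -> dict:
    """Read the magic and format version from a GGUF file-like object."""
    magic = f.read(4)
    if magic != GGUF_MAGIC:
        raise ValueError("not a GGUF file")
    (version,) = struct.unpack("<I", f.read(4))  # little-endian uint32
    return {"magic": magic, "version": version}

# Demo with an in-memory stub; for a real model use open("model.gguf", "rb")
stub = io.BytesIO(GGUF_MAGIC + struct.pack("<I", 3))
print(read_gguf_header(stub))  # {'magic': b'GGUF', 'version': 3}
```

This check is handy when a download fails partway: a truncated or mislabeled file will not pass the magic test.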

Notes

  • Jan can also integrate with some cloud AI services (ChatGPT, Claude, Gemini, etc.), but this requires a separate API key, and API usage may incur fees.

Official documentation

Jan official website
https://jan.ai

Jan GitHub Repository
https://github.com/menloresearch/jan

Windows Installation Guide
https://jan.ai/docs/desktop/windows

Linux Installation Guide
https://jan.ai/docs/desktop/linux


Author of this article

AI artist | Engineer | Writer | I write about a wide range of topics, from the latest AI technologies and trends to explanations of noteworthy models and practical hands-on resources. Follow me! ヾ(^^)ノ

