Run AI Models on Your Computer with Just a Few Clicks
LLM in Your Hands: How to Run AI Models Locally, Quickly, and Easily
Continuing our series on six ways to run AI models locally without relying on external APIs, today's featured tool is Jan!
Have you ever imagined running AI models directly on your computer with ease, in just a few clicks?
With Jan, this possibility becomes a reality. It’s an amazing tool that’s transforming the way we interact with technology.
Introduction
Artificial intelligence is rapidly becoming an essential part of many sectors, from healthcare to industrial automation, particularly with large language models (LLMs) and generative models. However, running these large models is still a challenging task, often requiring substantial infrastructure and technical expertise. That’s where Jan steps in, simplifying the process and making AI accessible to everyone, regardless of experience level.
Advantages of Jan
Ease of Use: With Jan, you can run AI models with just a few clicks, eliminating the need for complex installations or in-depth technical knowledge.
Accessibility: Designed to be intuitive, Jan allows both beginners and experts to explore AI’s potential without hassle.
Versatility: Ideal for developers, researchers, and enthusiasts, Jan lets you run various AI models across applications, from data analysis to text generation.
Resources and API: Jan offers access to a rich library of ready-to-use models, so you can jump into your projects right away. It also lets you start a local API server that serves your downloaded models to other applications.
Support and Community: By using Jan, you join a growing community of users where you can share experiences, tips, and get support.
Open Source: As an open-source tool, Jan is fully customizable to meet your needs (available on GitHub).
Installation and Setup
To get started, visit the homepage and select your platform to download the installer. Follow the installation instructions, and you’re ready to go.
After installation, Jan will ask which LLM model you’d like to use. For this guide, we’ll select Llama 3.2 3B Instruct Q8, a model tuned for instruction-following tasks that, thanks to 8-bit quantization, runs well in resource-limited environments.
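The “Q8” suffix refers to that 8-bit quantization: instead of 16- or 32-bit floats, each weight is stored as an 8-bit integer plus a shared scale factor, shrinking the model file and its memory footprint considerably. The toy sketch below illustrates the basic idea with symmetric quantization; it is my own simplified illustration, not the actual scheme Jan’s model files use.

```python
def quantize_int8(weights):
    """Symmetric 8-bit quantization: map floats to integers in [-127, 127]."""
    # One scale factor for the whole group of weights (fall back to 1.0 if all zero).
    scale = max(abs(w) for w in weights) / 127 or 1.0
    return [round(w / scale) for w in weights], scale

def dequantize(quantized, scale):
    """Recover approximate float weights from the 8-bit integers."""
    return [q * scale for q in quantized]

weights = [0.5, -1.27, 0.03]
quantized, scale = quantize_int8(weights)
restored = dequantize(quantized, scale)
# `restored` is close to `weights`: a small precision loss is the price of
# storing each value in a single byte.
```

Real quantization schemes work on blocks of weights and keep extra metadata, but the trade-off is the same: a little accuracy in exchange for a model that fits on ordinary hardware.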
Now, you’re ready to interact with your model! All interactions occur locally on your computer.
You can also adjust model parameters, such as temperature, top-p, maximum token count, and more. Simply click on “Model” in the upper right corner.
To give the model standing instructions, go to “Assistant / Instructions” in your prompt options. The model will then follow those instructions across all your conversations.
Adding New Models
If you’d like to add new models, click on the “Hub” icon below the chat icon in the upper left.
The platform will list available models that you can download and run locally.
You can also import a model you’ve downloaded from other platforms like GPT4All or LM Studio. Click on “Import Model” and specify the path where the models are stored. For reference, on Windows it might look like:
GPT4All:
C:/Users/<user_name>/AppData/Local/nomic.ai/GPT4All/
LM Studio:
C:/Users/<user_name>/.cache/lm-studio/models
Local Server
As with LM Studio and GPT4All, Jan can also be used as a local API server with ease. Click on the menu icon on the left, above “Settings,” and select “Start Server.”
Once activated, your local server is running, and you can visit http://localhost:1337 (Jan’s default port) to view the available endpoints.
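Jan’s local server exposes an OpenAI-compatible API, so any client that speaks that protocol can talk to your local model. The sketch below builds a chat-completion request in Python; the port assumes Jan’s default (1337), and the model id is illustrative — use the id shown in Jan for the model you actually downloaded.

```python
JAN_SERVER = "http://localhost:1337/v1/chat/completions"  # default port; adjust if changed

def build_chat_request(prompt, model="llama3.2-3b-instruct",
                       temperature=0.7, max_tokens=256):
    """Build an OpenAI-compatible chat-completion payload for the local server."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": temperature,
        "max_tokens": max_tokens,
    }

def send(prompt):
    """POST the payload to Jan (requires `requests` and a running server)."""
    import requests
    resp = requests.post(JAN_SERVER, json=build_chat_request(prompt), timeout=60)
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]
```

With the server running, `send("Hello!")` returns the model’s reply; the same payload also works with any OpenAI-compatible client pointed at the local base URL.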
Conclusion
Running LLMs locally on your computer can be an exciting experience, allowing you to use and evaluate open-source models with full security and privacy. Best of all, this process doesn’t need to be complicated!
With Jan, you can transform your computer into a true AI lab in minutes.
Stay tuned for upcoming posts as we continue exploring more AI models! I’d appreciate your support in sharing this article and the blog. 💞