
Connect to a Local Ollama AI Instance From Within Your LAN

Tired of Ollama AI hogging all the resources on your personal computer? Install it on another machine in your network and tap into the service via GUI.
Sep 6th, 2025 7:00am
Feature image via Unsplash and Ollama logo. 

I’ve become a big fan of using a locally installed instance of Ollama AI, a tool for running large language models (LLMs) on your own computer. Part of the reason is how much energy AI consumes when it’s used via the standard cloud-based services.

For a while, I was using Ollama on my desktop machine, but I discovered a couple of reasons why that wasn’t optimal. First, Ollama was consuming too many resources, which led to slowdowns on my desktop. Second, I was limited to using Ollama only on my desktop, unless I wanted to SSH into the desktop and start the AI from there.

Then, I discovered a better method: I could install Ollama on a server and then connect to it from any machine on my network.

I want to show you two different methods for doing this, one from the command line and the other via a graphical user interface (GUI). It’s much easier than you might think and only requires a minimal configuration on the server end.

With that said, let’s make this happen.

Install Ollama

The first thing you must do is install Ollama on the server. I would suggest deploying an instance of Ubuntu Server for this because I’ve had very good luck running Ollama on Ubuntu.

With that said, log in to your Ubuntu Server instance and run the following command to install Ollama:
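If you’re going with Ollama’s standard install script for Linux, that command looks like this:

    curl -fsSL https://ollama.com/install.sh | sh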


Once the installation completes, you can pull an LLM to the server with a command like:
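For example, to pull a small general-purpose model (llama3.2 is just one option from the Ollama library; substitute whichever model you prefer):

    ollama pull llama3.2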


Or, if you want the gpt-oss model:
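Assuming the 20B variant is the one you’re after, that would be something like:

    ollama pull gpt-oss:20b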


Once you’ve taken care of that, you’re ready to configure Ollama to accept remote connections.

Configure Ollama for Remote Connections

Open the Ollama systemd configuration file with the command:
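With a script-based install on Ubuntu, the unit file typically lives at /etc/systemd/system/ollama.service, so something like this should do it:

    sudo nano /etc/systemd/system/ollama.service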


Under the [Service] section, add the following:
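The directive typically used for this, which binds Ollama to all network interfaces, is:

    Environment="OLLAMA_HOST=0.0.0.0"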


Save and close the file.

The above line opens Ollama to connections from any address. Do keep in mind that you’ll want to make sure your LAN is secure; otherwise, a bad actor who gets onto your LAN could access your Ollama instance.

If you want to be able to access your Ollama instance from outside the LAN, you would need to configure your router to direct incoming traffic on port 11434 to the hosting server.

Reload the systemd daemon with the command:
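That’s the standard systemd reload:

    sudo systemctl daemon-reload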


Restart Ollama with the command:
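Assuming the service is named ollama (the default for a script-based install), that’s:

    sudo systemctl restart ollama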


You’re now ready to connect from your LAN.

Connecting via the Terminal

Open a terminal window on the local machine from which you want to connect to the Ollama server. On that machine, enter the following command:
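One way to do this, assuming the ollama CLI is also installed on the local machine, is to point it at the remote server via the OLLAMA_HOST environment variable (llama3.2 is a placeholder for whichever model you pulled):

    OLLAMA_HOST=http://IP_ADDRESS:11434 ollama run llama3.2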


Where IP_ADDRESS is the IP address of the Ollama server.

You should be greeted by the Ollama text prompt, where you can start running your own queries. When you’re finished, exit the Ollama prompt with:
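The prompt’s exit command is:

    /bye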


This will end not only your Ollama session, but also the remote connection.

Connecting to Your Remote Ollama Instance via a GUI

We’ll now connect to our remote Ollama instance via a GUI. In theory, you should be able to do this with the official Ollama GUI, but I’ve yet to succeed at making that work.

Instead, we’ll use Msty, which makes connecting to a remote instance very easy and is a superior GUI anyway. Msty has tons of features, runs on all three of the major platforms (Linux, macOS, and Windows) and is free to use.

If you’ve not already installed Msty, head to the official site and download the version for your operating system. Installing Msty is fairly straightforward, so you shouldn’t have any problems with it.

With Msty installed, open the GUI app. From the main window, click the Remote Model Providers icon in the left sidebar and then click Add Remote Model Provider. In the resulting window (Figure 1), fill out the information as follows:

  • Provider: Select Ollama Remote from the drop-down.
  • Give the remote a memorable name.
  • Enter the server endpoint in the form of http://SERVER_IP:PORT, where SERVER_IP is the IP address of the Ollama hosting server and PORT is the port you’ve configured (default is 11434).


Figure 1: Configuring a remote Ollama server is fairly straightforward.

When prompted, click Fetch Models, select the model(s) you want (from the Model drop-down) and then click Add.

Using the Remote Instance With Msty

Back at the Msty main window, start a new chat. From the Model drop-down (Figure 2), you should now see a Remote section with the model(s) you added.


Figure 2: Selecting the newly added remote Ollama instance.

Select that entry and type your query. This time, the query will be answered by the remote instance of Ollama. Because you’re running Ollama on a server, your query responses should be faster than they are when running them directly from your desktop.

I’ve now started using Ollama strictly with this type of setup to avoid CPU/RAM bottlenecks on my desktop PCs. I’ve found using Ollama remotely to be faster and more reliable.
