How to Connect LM Studio on Your Laptop to OpenWebUI on a Mac Mini
**Use OpenWebUI as the always-on interface, but let your laptop provide the model only when LM Studio is actually running.**
The Short Version
If OpenWebUI lives on a Mac mini that stays on all the time, and LM Studio only runs sometimes on your laptop, the cleanest setup is usually OpenWebUI Direct Connections, not a global admin-side provider.
That is the key distinction:
- Admin / global OpenAI-compatible connections are resolved by the OpenWebUI server on the Mac mini.
- Direct Connections are resolved by the browser, which means they can point at services running on the same machine you are using to open OpenWebUI.

In practice, this can work beautifully even when OpenWebUI is hosted on your Mac mini at home and you access it remotely from your laptop through Twingate:
http://127.0.0.1:1234/v1

If you add that as a Direct Connection, 127.0.0.1 refers to your laptop browser session, not the Mac mini backend.
💡 Recommended setup: Keep OpenWebUI always running on the Mac mini. Only start LM Studio when you need it. When LM Studio is running, use a user-level Direct Connection from your laptop browser to http://127.0.0.1:1234/v1.
Why This Setup Works Better Than a Global Admin Connection
OpenWebUI supports any backend that implements the OpenAI-compatible API (see OpenWebUI's OpenAI-Compatible Providers docs).
LM Studio exposes exactly that API, including /v1/models and /v1/chat/completions (see LM Studio's OpenAI Compatibility docs).
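To make that concrete, here is a minimal sketch of calling the chat endpoint from Python using only the standard library. It assumes LM Studio's default port 1234; the model name in the usage comment is a placeholder for whatever you have loaded.

```python
import json
import urllib.request


def chat(base_url: str, model: str, prompt: str) -> str:
    """Send one chat turn to an OpenAI-compatible server and return the reply text."""
    payload = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }).encode("utf-8")
    req = urllib.request.Request(
        f"{base_url}/chat/completions",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]


# Example, assuming LM Studio is serving on the default port with a model loaded
# (the model name is a placeholder):
# print(chat("http://127.0.0.1:1234/v1", "qwen2.5-7b-instruct", "Say hello"))
```

This is the same request shape any OpenAI-compatible client sends, which is exactly why OpenWebUI can talk to LM Studio without a dedicated integration.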
The reason people get tripped up is that OpenWebUI has two different connection paths:
- Global server-side connections under the admin OpenAI connection screen.
- Per-user Direct Connections under user settings.

If you add your laptop LM Studio URL in the admin screen, the Mac mini has to reach the laptop every time OpenWebUI refreshes models. That is fine only if the laptop is reliably reachable and usually online.
If you add the LM Studio server as a Direct Connection, the browser talks to LM Studio directly. That makes it ideal for a laptop that is only sometimes serving.
Step 1: Enable Direct Connections in OpenWebUI
On the Mac mini, open:
Admin Settings -> Connections -> Direct Connections

Turn it on.
Once enabled, you can add your own API endpoint from the regular user settings panel.
Step 2: Start LM Studio on the Laptop and Configure the Server Correctly
Open LM Studio on the laptop and go to the Developer tab.
Enable or confirm these settings:
- Start Server: on
- Server Port: 1234, unless you changed it
- Enable CORS: on
- Serve on Local Network: on only if you want other devices on your LAN to reach it
- Require Authentication: optional, but recommended if you expose the server beyond localhost

LM Studio documents these options directly in its server settings guide.
If you prefer CLI, LM Studio also supports:
lms server start --cors

That starts the server and enables CORS for browser-based access.
If you want other machines to reach the laptop server, enable Serve on Local Network. LM Studio says this makes the server bind to your LAN IP instead of localhost.
If you enable authentication, create an API token in LM Studio and use it as a Bearer token from OpenWebUI.
Step 3: Add the Connection in User Settings, Not the Admin OpenAI Screen
From the normal OpenWebUI interface, open:
User Settings -> Connections

Click + and add a new Direct Connection.
Use this URL if you are opening OpenWebUI from the same laptop that runs LM Studio:

http://127.0.0.1:1234/v1

Use this URL only if you need the LM Studio server reachable from another device on your LAN:

http://<your-laptop-ip>:1234/v1

Fill the remaining fields like this:

- Auth: None if LM Studio authentication is off
- API Key: leave blank unless LM Studio authentication is on
- Prefix ID: optional, useful only if you want model names prefixed like laptop/qwen
- Model IDs: usually leave empty so OpenWebUI can auto-discover from /v1/models

The important detail is the trailing /v1.
If you enter only:
http://192.168.x.x:1234

OpenWebUI may try to probe /models instead of /v1/models, which breaks model discovery. LM Studio's OpenAI-compatible base URL examples explicitly use http://localhost:1234/v1.
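If you build these base URLs in your own scripts, a tiny guard avoids the missing-/v1 mistake. `ensure_v1` is a hypothetical helper name, not part of OpenWebUI or LM Studio:

```python
def ensure_v1(base_url: str) -> str:
    """Normalize an OpenAI-compatible base URL so it always ends with /v1."""
    url = base_url.rstrip("/")          # drop any trailing slash first
    if not url.endswith("/v1"):
        url += "/v1"                    # append the suffix only when missing
    return url


# ensure_v1("http://192.168.1.50:1234")   -> "http://192.168.1.50:1234/v1"
# ensure_v1("http://127.0.0.1:1234/v1/")  -> "http://127.0.0.1:1234/v1"
```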
Step 4: Verify the Endpoint Before Blaming OpenWebUI
Before troubleshooting the UI, test the API directly.
Open this in a browser on the laptop:
http://127.0.0.1:1234/v1/models

Or, if you are testing LAN reachability:

http://<your-laptop-ip>:1234/v1/models

If that returns JSON, the LM Studio side is configured correctly.
If it does not, check:
- LM Studio server is actually running
- a model is loaded or available
- CORS is enabled
- you did not forget /v1
- authentication is either disabled or matched with the right token
Step 5: Select the Model in OpenWebUI and Chat Normally
Once the connection is saved, refresh it from the connection dialog and check the model selector.
OpenWebUI should now show your LM Studio-served model as one of the available models. At that point, you still get the OpenWebUI interface, chats, and the rest of your OpenWebUI setup while inference goes directly from the browser to LM Studio.
Because Direct Connections are still marked experimental, test the specific OpenWebUI flows you care about most before depending on them heavily in production.
When You Should Use the Admin / Global Connection Instead
Use the admin-side OpenAI-compatible connection only if you want any device that reaches OpenWebUI to also be able to use the laptop-hosted LM Studio model.
That setup can work, but only if all of the following are true:
- the Mac mini can reliably reach the laptop
- the laptop is usually online when OpenWebUI refreshes models
- you use the laptop's reachable IP or hostname, not localhost
- you accept that an offline laptop can slow model loading
OpenWebUI documents that slow or unreachable endpoints can delay model list loading, and it exposes AIOHTTP_CLIENT_TIMEOUT_MODEL_LIST to shorten the wait.
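For example, if you run OpenWebUI in Docker on the Mac mini, you could cap that wait when starting the container. The 5-second value is just an illustration; tune it to taste.

```shell
# Cap how long OpenWebUI waits per connection when building the model list.
docker run -d \
  -p 3000:8080 \
  -e AIOHTTP_CLIENT_TIMEOUT_MODEL_LIST=5 \
  -v open-webui:/app/backend/data \
  --name open-webui \
  ghcr.io/open-webui/open-webui:main
```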
For an occasionally-off laptop, this is usually the wrong default.
Where Twingate Fits In
Twingate is still useful here, just not in the way most people first imagine.
Good use of Twingate in this setup
Use Twingate so your laptop can securely reach the Mac mini and open OpenWebUI from anywhere.
Less ideal use of Twingate here
Trying to make your roaming, sometimes-off laptop act like a stable backend resource for the Mac mini is awkward.
Twingate's resource model assumes the target resource is reachable from the Connector in the relevant remote network, and Twingate Connectors run as Linux containers or Linux systemd services.
That makes Twingate excellent for reaching your home lab, but not the cleanest mechanism for making a transient laptop LM Studio server behave like a permanent backend for your Mac mini.
If you later want the laptop model available from any device, a peer VPN or reverse tunnel is usually a better fit than trying to force that workflow through your home-lab Twingate topology.
Troubleshooting Checklist
Symptom: OpenWebUI is hitting /models instead of /v1/models
Cause: Your URL is missing /v1.
Fix:
http://127.0.0.1:1234/v1

not:

http://127.0.0.1:1234

Symptom: The admin connection to localhost does not find the laptop server
Cause: On the admin screen, localhost means the Mac mini or the OpenWebUI container, not your laptop.
Fix: Use a Direct Connection or the laptop's real IP/hostname.
Symptom: Browser shows CORS errors
Cause: Direct Connections are browser-side, so LM Studio must allow cross-origin requests.
Fix: Turn on Enable CORS in LM Studio server settings.
Symptom: The connection saves but no model appears
Cause: /v1/models is not reachable, authentication is mismatched, or the connection needs refresh.
Fix:
- verify the /v1/models URL directly
- refresh the connection in OpenWebUI
- leave Model IDs empty unless the provider requires manual allowlisting
Symptom: It works on the laptop but not on your phone or another computer
Cause: 127.0.0.1 always points to the device making the request.
Fix: Use the laptop LAN IP and enable Serve on Local Network in LM Studio.
The Best Default for This Exact Use Case
For a Mac mini that hosts OpenWebUI full-time and a laptop that only sometimes runs LM Studio:
- Keep OpenWebUI always on at home.
- Use Twingate only to reach OpenWebUI remotely.
- Enable Direct Connections in OpenWebUI.
- When LM Studio is running on the laptop, add or enable:
http://127.0.0.1:1234/v1

- Use the model from OpenWebUI like any other chat model.

That gives you the cleanest operational behavior with the fewest moving parts.