Artificial intelligence has become a mainstream tool, but most advanced models still come with restrictions, paywalls, or API limits. This changed dramatically when OpenAI finally embraced the “open” in its name and released ChatGPT-OSS — a truly open-source, free, and customizable large language model.
In this guide, you’ll learn how to:
- Install ChatGPT-OSS on your local machine.
- Run it offline using LM Studio and Ollama.
- Connect it with Visual Studio Code for coding assistance.
- Expose your local model to the internet using ngrok.
- Integrate it with n8n to achieve unlimited automation.
- Build your own personal AI chatbot with Open Web UI.
Let’s dive in.
Contents
- 1 What Is ChatGPT-OSS?
- 2 How to Install ChatGPT-OSS Locally
- 3 Running ChatGPT-OSS in Visual Studio Code
- 4 Make Your Local Model Accessible Remotely with ngrok
- 5 Unlimited Automation with n8n
- 6 Build Your Own Custom Chatbot with Open Web UI
- 7 Key Takeaways
- 8 FAQ
- 9 Conclusion
What Is ChatGPT-OSS?
ChatGPT-OSS (Open Source Software) was announced on August 5, 2025, on OpenAI’s official blog. It comes in two versions:
- 120B model – 120 billion parameters (requires ~80GB RAM).
- 20B model – 20 billion parameters (runs on machines with ~16GB RAM).
Key facts about ChatGPT-OSS:
- Licensed under Apache 2.0, meaning it’s free for personal and commercial use.
- You can train, fine-tune, distill, share, and distribute without restrictions.
- Trained with reinforcement learning from human feedback (RLHF), similar to earlier OpenAI models.
- Benchmarks show strong performance in coding, reasoning, and healthcare tasks.
Unlike earlier models locked behind proprietary APIs, ChatGPT-OSS empowers developers and individuals to run world-class AI on their own machines. Just like our first impressions of GPT-5’s speed and accuracy, benchmark results show that open-source models are catching up fast.
How to Install ChatGPT-OSS Locally
You can install and run ChatGPT-OSS using two popular tools: LM Studio and Ollama. Both simplify downloading, running, and managing large language models.
Step 1: Download LM Studio
- Visit lmstudio.ai.
- Choose your operating system (Windows, macOS, Linux).
- Download and install the application.
LM Studio will automatically filter compatible models for your system and even warn you if your hardware cannot handle a specific version.
Step 2: Download Ollama
- Go to ollama.com.
- Download for your OS (macOS, Windows, Linux).
- Install and launch the app.
Ollama now comes with a built-in chat interface and optional web search integration.
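Beyond the chat interface, Ollama also exposes a local REST API (by default on port 11434), so you can script against your model. Here is a minimal sketch using only the standard library; it assumes you have already pulled a gpt-oss model, and the `gpt-oss:20b` tag may differ depending on which build you downloaded:

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default local endpoint


def build_request(prompt: str, model: str = "gpt-oss:20b") -> dict:
    """Build the JSON body for a non-streaming Ollama generate call."""
    return {"model": model, "prompt": prompt, "stream": False}


def ask_ollama(prompt: str) -> str:
    """Send the prompt to the local Ollama server and return the response text."""
    body = json.dumps(build_request(prompt)).encode("utf-8")
    req = urllib.request.Request(
        OLLAMA_URL, data=body, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]


# usage, with the Ollama server running:
# print(ask_ollama("Explain what an open-weight model is in one sentence."))
```

Because the payload is plain JSON, the same pattern works from any language, which is what makes these local servers so easy to wire into other tools.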
Step 3: Install a Model
Inside LM Studio or Ollama:
- Search for ChatGPT-OSS 20B (lighter version).
- Click Download.
- Once installed, start a new chat and test prompts locally.
⚠️ Note: The 120B model requires enterprise-level GPUs (e.g., NVIDIA H100), making it impractical for most personal setups.
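Once a model is loaded, LM Studio can also serve it over an OpenAI-compatible local API (by default at localhost:1234). The sketch below shows the request shape; the model identifier is an assumption and should match whatever model name LM Studio reports for your download:

```python
import json
import urllib.request

LMSTUDIO_BASE = "http://localhost:1234/v1"  # LM Studio's default local server


def build_chat_body(user_message: str, model: str = "openai/gpt-oss-20b") -> dict:
    """Build an OpenAI-style chat completions request body."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": user_message}],
        "temperature": 0.7,
    }


def chat(user_message: str) -> str:
    """POST the message to the local server and return the assistant's reply."""
    body = json.dumps(build_chat_body(user_message)).encode("utf-8")
    req = urllib.request.Request(
        f"{LMSTUDIO_BASE}/chat/completions",
        data=body,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        data = json.loads(resp.read())
    return data["choices"][0]["message"]["content"]


# usage, with LM Studio's server running and a model loaded:
# print(chat("Write a haiku about local AI."))
```

This OpenAI-compatible shape is the key to everything that follows: any tool that can talk to the ChatGPT API can talk to your local model instead.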
Running ChatGPT-OSS in Visual Studio Code
To use ChatGPT-OSS as a coding assistant, integrate it with Visual Studio Code:
- Install Visual Studio Code.
- Open the Extensions tab.
- Search for Cline and install it.
- Configure Cline to use LM Studio as the backend provider.
- Load ChatGPT-OSS in LM Studio’s “Developer” section.
- Start coding!
Example: Prompt the model to build a football game in JavaScript. The model will generate the entire game locally, even without an internet connection. By exposing your model with ngrok and integrating it into workflows, you can unlock powerful agent-like behaviors — similar to what we covered in our ChatGPT Agent Mode guide.
Make Your Local Model Accessible Remotely with ngrok
Running a model locally is great, but what if you want to access it from anywhere or connect external apps? That’s where ngrok comes in.
Steps:
- Sign up at ngrok.com.
- Install ngrok on your machine.
- Link your ngrok account with your authtoken.
- Expose your LM Studio local server (e.g., localhost:1234) to the internet.
- Get a public URL for your local model.
Now, any app that supports OpenAI-compatible APIs can use your local ChatGPT-OSS.
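In practice, switching from local to remote access is just a base-URL swap. A minimal sketch, assuming the same LM Studio server as before behind the tunnel (the ngrok URL below is hypothetical, and yours changes every time the tunnel restarts):

```python
import json
import urllib.request


def chat_endpoint(public_url: str) -> str:
    """Build the OpenAI-compatible chat endpoint behind an ngrok tunnel."""
    return public_url.rstrip("/") + "/v1/chat/completions"


def remote_chat(public_url: str, user_message: str,
                model: str = "openai/gpt-oss-20b",
                api_key: str = "not-needed") -> str:
    """Call the tunneled LM Studio server; LM Studio ignores the API key."""
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": user_message}],
    }).encode("utf-8")
    req = urllib.request.Request(
        chat_endpoint(public_url),
        data=body,
        headers={"Content-Type": "application/json",
                 "Authorization": f"Bearer {api_key}"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["choices"][0]["message"]["content"]


# usage, with an active tunnel (the URL below is hypothetical):
# print(remote_chat("https://abc123.ngrok-free.app", "Hello from anywhere!"))
```

One caution worth the extra sentence: a public tunnel means anyone with the URL can reach your model, so keep the URL private or add ngrok's access controls.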
Unlimited Automation with n8n
n8n is a powerful open-source automation platform. By connecting ChatGPT-OSS to n8n, you can run unlimited workflows without worrying about API costs.
Setup:
- Deploy n8n on a cloud server or locally.
- Create a new workflow.
- Add a “Chat Model” node.
- Use the ngrok public URL as your OpenAI API endpoint.
- Test the connection – success!
Now you can automate tasks like email replies, report generation, or even customer support — all powered by your self-hosted AI model. Developers can also extend these models with orchestration frameworks, as discussed in our LangChain vs LlamaIndex 2025 guide.
Build Your Own Custom Chatbot with Open Web UI
To make ChatGPT-OSS more user-friendly, deploy Open Web UI:
- Search “Open Web UI” on your hosting platform (e.g., Replit, Railway, Render).
- Deploy the app.
- Enter your ngrok public URL as the API base.
- Add any key (since LM Studio doesn’t validate it).
- Launch the chatbot UI.
Now you have your own private AI chatbot accessible from any device.
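Under the hood, the connection boils down to two settings, which most hosting platforms let you supply as environment variables (commonly `OPENAI_API_BASE_URL` and `OPENAI_API_KEY` in Open Web UI; check your version's docs). A sketch of the configuration, with a hypothetical tunnel URL:

```python
# Environment variables pointing Open Web UI at an OpenAI-compatible backend.
# The tunnel URL is hypothetical; substitute your own ngrok address.
OPEN_WEBUI_ENV = {
    "OPENAI_API_BASE_URL": "https://abc123.ngrok-free.app/v1",
    "OPENAI_API_KEY": "any-placeholder-key",  # LM Studio does not validate it
}

# A quick sanity check before deploying: the base URL should end in /v1,
# since Open Web UI appends paths like /chat/completions to it.
assert OPEN_WEBUI_ENV["OPENAI_API_BASE_URL"].endswith("/v1")
```

If the chatbot fails to respond after deployment, the stale tunnel URL is the usual culprit: free ngrok URLs change on every restart, so this value must be updated each time.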
Key Takeaways
- Free & Unlimited: ChatGPT-OSS can be used without subscription limits.
- Local Power: Run advanced AI models directly on your machine.
- Remote Access: With ngrok, your local AI can be accessed anywhere.
- Automation: Integrate with n8n to replace costly SaaS AI tools.
- Custom Chatbots: Deploy Open Web UI for personal or business chat solutions.
FAQ
What is ChatGPT-OSS?
It’s an open-source model family from OpenAI, available in 120B and 20B parameter versions under the Apache 2.0 license.
Can I use ChatGPT-OSS commercially?
Yes. Both personal and commercial use are allowed without restrictions.
Do I need a powerful PC?
The 20B version runs on consumer PCs with ~16GB RAM. The 120B version requires high-end GPUs.
How do I access my local model from anywhere?
Use ngrok to expose your LM Studio server and get a public URL.
What makes this better than ChatGPT API?
No token limits, no API bills — complete freedom to run and customize.
Conclusion
The release of ChatGPT-OSS marks a turning point in AI accessibility. What once required supercomputers can now run on personal laptops. By combining LM Studio, Ollama, ngrok, n8n, and Open Web UI, anyone can build a self-hosted AI ecosystem that is free, private, and unlimited.
The AI revolution is no longer just for big tech companies — it’s for everyone.
Would you install ChatGPT-OSS on your computer, or do you prefer cloud-based AI tools? Share your thoughts in the comments below!