Local LLM for Private Companies

TL;DR

A Hacker News discussion on whether to host a local LLM for internal business systems that handle sensitive data. The poster already has MS365 Copilot for general use but needs an air-gapped solution for specialized departments. GPU costs are prohibitive: an RTX 6000 Pro (96 GB) costs roughly $12,000 to run a 36B model, raising the question of whether local models can realistically compete with cloud-hosted services like GPT.
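To see why a 96 GB card is roughly the entry point for a 36B model, a back-of-envelope VRAM estimate helps. The sketch below is an assumption-laden rule of thumb (weights = parameters × bytes per parameter, plus ~20% overhead for KV cache and activations), not a vendor sizing guide; real usage varies with context length and serving stack.

```python
# Rough VRAM estimate for serving a dense LLM.
# Assumption: memory ≈ weights * (1 + overhead), where overhead (~20%)
# covers KV cache and activations. Actual needs vary by workload.

def vram_gb(params_billions: float, bytes_per_param: float,
            overhead: float = 0.2) -> float:
    weights_gb = params_billions * bytes_per_param  # 1B params at 1 byte ≈ 1 GB
    return weights_gb * (1 + overhead)

for label, bpp in [("fp16", 2.0), ("int8", 1.0), ("int4", 0.5)]:
    print(f"36B @ {label}: ~{vram_gb(36, bpp):.0f} GB")
```

Under these assumptions, a 36B model needs on the order of 86 GB at fp16, which just fits a single 96 GB card; quantizing to int8 or int4 brings it within reach of much cheaper hardware, which is part of why the cost question in the thread is contested.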

Nauti's Take

Local LLMs offer a real opportunity for companies handling sensitive data: data sovereignty is a genuine advantage, and hardware costs keep dropping. The catch is that current open-source models in the 36B class still can't match frontier cloud services in capability.

Teams serious about data protection should watch this space closely; hybrid approaches may become cost-effective sooner than expected.

Sources