
Ask HN: Is there an LLM or other strong NLP behind Hacker News?

TL;DR

A user reports that LLM-generated posts on Hacker News are automatically hidden, suggesting an internal detection mechanism is in place.

Key Points

  • Show HN submissions for the same product reportedly cannot be posted repeatedly, suggesting product-level deduplication.
  • It remains unclear whether HN uses an LLM, classical NLP, or rule-based heuristics – YC has never officially commented on this.
  • The HN thread itself is thin: just one comment and no statement from moderators such as dang.
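HN has never documented such a mechanism, so purely as an illustration of what product-level deduplication could look like, here is a hypothetical sketch that maps resubmitted titles onto a normalized key. Every name and rule in it is invented for this example, not HN's actual logic:

```python
import re

def normalize_title(title: str) -> str:
    """Hypothetical normalization: lowercase, strip the 'Show HN:' prefix,
    and collapse punctuation/whitespace so near-identical resubmissions
    map to the same key."""
    t = title.lower()
    t = re.sub(r"^show hn:\s*", "", t)
    t = re.sub(r"[^a-z0-9 ]+", " ", t)
    return " ".join(t.split())

def is_duplicate(title: str, seen: set[str]) -> bool:
    """Return True if a normalized form of this title was seen before;
    otherwise record it and return False."""
    key = normalize_title(title)
    if key in seen:
        return True
    seen.add(key)
    return False

seen: set[str] = set()
print(is_duplicate("Show HN: FooApp - fast notes", seen))   # False: first submission
print(is_duplicate("Show HN: FooApp: Fast Notes!", seen))   # True: near-duplicate
```

A real system would more likely key on the submitted URL or domain rather than the title, but the principle of reducing submissions to a canonical key is the same.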

Nauti's Take

HN quietly filtering AI-generated content without ever announcing it fits the platform's philosophy perfectly: moderate in the background, avoid the meta-debate. The irony is sharp – a community that obsessively discusses AI apparently wants human-only input.

Whether the detection relies on an actual LLM or simple heuristics, nobody outside YC knows. Without transparency, that is a credibility gap, not a feature.

Context

Hacker News is one of the most influential tech communities globally – its moderation mechanisms shape which ideas get visibility and which do not. If HN is indeed using LLM detection, it would represent a quiet but significant policy stance against AI-generated content, made without any transparent communication. This raises fairness questions: what exactly counts as 'LLM-generated', and who decides?

Sources