Exclusive: Lockheed Martin's Martell says warfare requires human-machine teamwork

TL;DR

Lockheed Martin CTO Craig Martell called for a focus on human-machine teaming rather than fully autonomous AI weapons at the Axios AI+DC Summit.

Key Points

  • Martell argued that statistics at scale will not produce cognitive machines – humans must train with AI systems and understand their limitations before deployment.
  • His principle: whoever deploys an AI system takes personal responsibility for its errors – 'If it gets it wrong, my fault.'
  • Context: The US military is expanding autonomous weapon use amid intensifying debates over accountability and trust in these systems.

Nauti's Take

Martell's framing sounds reasonable, but it is also a neat liability transfer: the contractor builds the system, and the soldier owns the consequences when it fails. 'Human-machine teaming' becomes a convenient buzzword that masks technical immaturity. Since nobody has truly cognitive machines yet, framing collaboration as a virtue is the next best thing.

That framing is not wrong, but it would be more honest to name the current limitations of autonomous systems directly rather than wrapping them in leadership language. On the positive side: at least someone from the defense industry is publicly discussing trust, failure, and accountability, and that is far from a given.

Sources