💀 doomscrolling.ai

Show HN: Duplicate 3 layers in a 24B LLM, logical deduction .22→.76. No training

github.com·3 days ago

A researcher found that duplicating just 3 layers in a 24-billion-parameter AI model dramatically improves its logical reasoning without any training: accuracy on deduction tasks jumps from 22% to 76%. The finding suggests AI models contain discrete 'reasoning circuits' that can be replicated to make the model 'think longer,' raising unsettling questions about the modular nature of machine cognition and how easily AI capabilities can be enhanced through architectural manipulation alone.
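The mechanics of the trick can be sketched in a few lines: the model's stack of transformer blocks is an ordered list, and "duplicating a layer" just means running the same block (same weights, no retraining) twice in the forward pass. A minimal illustration follows; the layer count and the specific indices are made up here, since the post does not say which 3 layers were duplicated.

```python
# Hypothetical sketch of layer duplication in a transformer stack.
# `layers` stands in for the model's ordered blocks; duplicating a block
# means the forward pass executes it twice with the *same* weights.

def duplicate_layers(layers, indices):
    """Return a new layer sequence in which each layer whose index is in
    `indices` is repeated immediately after itself (weights shared)."""
    out = []
    for i, layer in enumerate(layers):
        out.append(layer)
        if i in indices:
            out.append(layer)  # same object: the copy shares weights
    return out

# Toy example: a 40-block stack with blocks 18, 19, 20 duplicated
# (illustrative indices, not the ones from the actual experiment).
blocks = [f"block_{i}" for i in range(40)]
expanded = duplicate_layers(blocks, {18, 19, 20})
print(len(expanded))  # 43 blocks now run per forward pass
```

Because the duplicates share weights, the model's parameter count is unchanged; only inference-time compute grows, which is why this counts as making the model "think longer" rather than making it bigger.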

reasoning · model architecture · cognitive enhancement · emergent behavior · capability gains
