Show HN: AutoThink – Boosts local LLM performance by 43% with adaptive reasoning

May 28, 2025 - 04:30
The motivation for AutoThink came from watching how current reasoning models waste computation - they spend the same amount of "thinking time" on "what's 2+2?" as they do on complex mathematical proofs. This seemed obviously inefficient.

The breakthrough was combining two techniques I'd been working on separately: adaptive classification (which can learn new categories without retraining) and an open source implementation of Pivotal Token Search from Microsoft's Phi-4 paper. When I put them together with dynamic token budgeting, the performance gains were much better than expected.
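
To make the dynamic token budgeting concrete, here is a minimal sketch of the idea: classify the query, then cap the reasoning tokens by class. The labels, budget numbers, and the `classifier.predict()` interface are illustrative assumptions, not AutoThink's exact settings:

```python
# Sketch of adaptive token budgeting: classify query complexity, then
# cap the model's "thinking" tokens for that query. Labels, budgets,
# and the classifier interface are illustrative assumptions.
COMPLEXITY_BUDGETS = {
    "simple": 256,      # e.g. "what's 2+2?"
    "moderate": 1024,
    "complex": 4096,    # e.g. multi-step proofs
}

def thinking_budget(query: str, classifier) -> int:
    """Return the max token count for this query's reasoning phase."""
    label = classifier.predict(query)           # adaptive classifier from the post
    return COMPLEXITY_BUDGETS.get(label, 1024)  # fall back to a middle budget

# The budget is then used as the cap (e.g. max_new_tokens or a stop
# condition) for the thinking segment, so trivial queries stop early.
```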

What surprised me most was that the technique actually uses fewer tokens on average while improving performance. The adaptive allocation means simple queries finish faster, offsetting the extra computation on complex ones.

A few technical notes:

- The steering vectors are small (typically <1MB per pattern) and add minimal memory overhead

- Classification adds about 10ms latency, which is negligible

- Target layer selection matters: I found middle layers (15-20) work best for most models (see the steering sketch after this list)
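
For concreteness, here is roughly what applying one of those steering vectors at a target layer looks like with a forward hook. This is a minimal sketch assuming a Hugging Face transformers decoder model; the model name, vector file, layer index, and scale are placeholders rather than AutoThink's actual values:

```python
import torch
from transformers import AutoModelForCausalLM

# Minimal sketch of layer-level steering via a forward hook.
# Model name, vector file, layer index, and scale are illustrative.
model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2.5-7B-Instruct")
steering_vector = torch.load("reasoning_pattern.pt")  # shape: (hidden_size,)
target_layer = 17   # a middle layer, per the note above
scale = 4.0

def apply_steering(module, inputs, output):
    # Decoder layers usually return a tuple with hidden states first.
    hidden = output[0] if isinstance(output, tuple) else output
    steered = hidden + scale * steering_vector.to(hidden.device, hidden.dtype)
    return (steered,) + output[1:] if isinstance(output, tuple) else steered

hook = model.model.layers[target_layer].register_forward_hook(apply_steering)
# ... generate as usual; call hook.remove() afterwards to restore the model.
```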

I'd love feedback on:

- Have you tried similar adaptive approaches with your models?

- What other reasoning patterns would be useful to steer toward?

- Ideas for automatically detecting the optimal target layer?

Thanks for checking it out! Happy to answer any questions about the implementation or results.
