June 26, 2024, 13:09, Source: Toms Hardware

AI researchers found a way to run LLMs at a lightbulb-esque 13 watts with no loss in performance

Eliminating matrix multiplication from LLM processing can massively increase performance per watt with the right optimizations, researchers from UC Santa Cruz demonstrate. It remains to be seen how applicable this approach is to AI in general.
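To illustrate the core idea, here is a minimal sketch of how matrix multiplication can be replaced by pure additions and subtractions when weights are constrained to ternary values {-1, 0, +1}, which is the general technique behind MatMul-free language models. The function name and shapes are hypothetical, chosen only for this example; the actual research implementation differs in detail.

```python
import numpy as np

def matmul_free_layer(W_ternary, x):
    """Compute the equivalent of W @ x using only adds/subtracts.

    Assumes every entry of W_ternary is in {-1, 0, +1}, so each
    output element is just a sum of selected inputs minus another
    sum -- no multiplications are needed.
    """
    out = np.zeros(W_ternary.shape[0], dtype=x.dtype)
    for i in range(W_ternary.shape[0]):
        row = W_ternary[i]
        # +1 weights contribute x[j], -1 weights contribute -x[j],
        # 0 weights contribute nothing.
        out[i] = x[row == 1].sum() - x[row == -1].sum()
    return out

# Sanity check against an ordinary matrix multiply.
rng = np.random.default_rng(0)
W = rng.integers(-1, 2, size=(4, 8)).astype(np.float64)
x = rng.standard_normal(8)
assert np.allclose(matmul_free_layer(W, x), W @ x)
```

Because additions cost far less energy than multiplications in hardware, this substitution is one plausible route to the kind of per-watt gains the article describes, especially on custom accelerators such as FPGAs.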

Read more at Toms Hardware


JustMac.info © Thomas Lohner - Imprint - Privacy Policy