New Deepseek model drastically reduces resource usage by converting text and documents into images — 'vision-text compression' uses up to 20 times fewer tokens
From Latest from Tom's Hardware
Developers of the Chinese artificial intelligence model Deepseek have found a novel way to reduce the number of tokens it consumes, particularly when accessing stored memories. By converting blocks of text into images and processing them visually rather than parsing them as text, the model can cut the number of tokens required by a factor of roughly seven to 20.
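To get a feel for why rendering text as images can shrink the token budget, here is a rough, illustrative sketch: it paginates a long string into fixed-size "page" images (the conversion step) and compares an estimated text-token count against an assumed per-page vision-token budget. The page resolution, the ~4-characters-per-token heuristic, and the 256-vision-tokens-per-page budget are assumptions chosen for illustration; they are not Deepseek's published pipeline.

```python
# Illustrative sketch only: back-of-envelope comparison of text tokens vs.
# an assumed vision-token budget when long text is rendered as page images.
import textwrap
from PIL import Image, ImageDraw, ImageFont

PAGE_SIZE = (1024, 1024)      # assumed rendered page resolution
LINE_CHARS = 160              # assumed characters per rendered line
LINE_HEIGHT = 14              # assumed pixel height per line
VISION_TOKENS_PER_PAGE = 256  # assumed encoder budget per page image

def paginate(text: str) -> list[list[str]]:
    """Wrap text into lines and group them into page-sized chunks."""
    lines = textwrap.wrap(text, width=LINE_CHARS)
    lines_per_page = (PAGE_SIZE[1] - 8) // LINE_HEIGHT
    return [lines[i:i + lines_per_page] for i in range(0, len(lines), lines_per_page)]

def render_page(lines: list[str]) -> Image.Image:
    """Draw one page of text onto a white canvas."""
    img = Image.new("RGB", PAGE_SIZE, "white")
    draw = ImageDraw.Draw(img)
    font = ImageFont.load_default()
    for i, line in enumerate(lines):
        draw.text((4, 4 + i * LINE_HEIGHT), line, fill="black", font=font)
    return img

def estimated_text_tokens(text: str, chars_per_token: float = 4.0) -> int:
    """Crude BPE-style estimate: roughly one token per ~4 characters."""
    return max(1, round(len(text) / chars_per_token))

if __name__ == "__main__":
    sample = ("Deepseek's vision-text compression stores long context as images "
              "instead of raw token sequences. ") * 400
    pages = [render_page(p) for p in paginate(sample)]
    text_tokens = estimated_text_tokens(sample)
    vision_tokens = len(pages) * VISION_TOKENS_PER_PAGE
    print(f"pages rendered       : {len(pages)}")
    print(f"estimated text tokens: {text_tokens}")
    print(f"assumed vision tokens: {vision_tokens}")
    print(f"compression ratio    : {text_tokens / vision_tokens:.1f}x")
```

With these made-up numbers the sketch lands at roughly a 10x reduction, which sits inside the seven-to-20x range the article describes; the real ratio depends on how aggressively the vision encoder compresses each page and on how dense the rendered text is.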