AI users have to choose between accuracy and sustainability
Briefly

Recent research highlights the environmental impact of large language models (LLMs) as access to these technologies broadens. A study of 14 models from various developers found that many consume far more power than routine questions require, and that their carbon footprints vary dramatically with size and function. Notably, prompts that demand deeper reasoning are more taxing on the environment. The findings raise questions about the necessity of using oversized models for straightforward tasks, which generate excessive emissions compared to smaller alternatives.
The researchers discovered that many LLMs are overpowered for typical queries, with smaller models able to achieve similar accuracy at significantly lower emissions.
The study highlighted a trade-off between model accuracy and environmental impact, with some models producing excessive carbon emissions even for simple factual inquiries.
Read at Fast Company