
"Tokens are the billable metric for LLM usage simply because they are easy to count - even if this seems to challenge those who count them. Throw a prompt into an LLM, and it recognizes its lexemes, a linguistic term dating from the 1930s for the units of meaning and modification."
"Basing costs on token consumption, whether it's for code suggestion, generation, or AI debugging, makes about as much sense - less, even - as paying programmers per keystroke in and character out."
"There is no concept of usable work actually done, no sense that inefficiencies are rewarded, and no easy way to relate the price paid to the actual value delivered."
Fans of the creative arts like to eavesdrop on creators, and the conversation often turns to mundane topics like money. In the world of LLMs, that conversation is about tokens, specifically what the author calls token incremental burn syndrome (TIBS). Tokens are the billable metric for LLM usage simply because they are easy to count. But pricing by token consumption bears no meaningful relationship to the work actually done: it rewards inefficiency and gives users no way to relate the price paid to the value delivered.
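The critique above is easy to see in arithmetic: a token-billed invoice depends only on volume, not on usefulness. A minimal sketch, using made-up per-token rates (not any real vendor's prices) and a hypothetical `billed_cost` helper:

```python
# Illustrative only: rates below are assumptions, not real vendor pricing.
PRICE_PER_1K_INPUT = 0.003   # assumed dollars per 1,000 input tokens
PRICE_PER_1K_OUTPUT = 0.015  # assumed dollars per 1,000 output tokens

def billed_cost(input_tokens: int, output_tokens: int) -> float:
    """Cost is a pure function of token volume - nothing in the formula
    asks whether the output was correct, concise, or useful."""
    return (input_tokens / 1000) * PRICE_PER_1K_INPUT + \
           (output_tokens / 1000) * PRICE_PER_1K_OUTPUT

# A verbose answer bills ten times the output cost of a terse one,
# even if the terse one solved the problem and the verbose one did not.
concise = billed_cost(500, 200)
verbose = billed_cost(500, 2000)
```

Under this model, `verbose > concise` always holds, which is exactly the inversion the article objects to: the metered quantity rises with inefficiency, not with value.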
Read at The Register