This may end up being a big deal: usually LLMs predict the next token in a sequence one at a time, but if you have them predict the next several tokens at once, you get significantly better performance, faster, and with no added cost. The gains are larger for bigger models.

Ethan Mollick (@emollick)
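The post describes multi-token prediction: instead of a single next-token head, the model is trained to predict several upcoming tokens in parallel. Below is a minimal sketch of that idea in PyTorch, assuming the common formulation of a shared transformer trunk feeding several independent output heads, where head i predicts the token i+1 positions ahead. Every class name and hyperparameter here is illustrative, not taken from the post or from any specific paper:

```python
# Toy sketch of multi-token prediction (illustrative, not any paper's exact
# architecture): a shared causal transformer trunk feeds n_future independent
# output heads, and head i is trained to predict the token i+1 steps ahead.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiTokenPredictor(nn.Module):
    def __init__(self, vocab_size=1000, d_model=128, n_future=4):
        super().__init__()
        self.n_future = n_future
        self.embed = nn.Embedding(vocab_size, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.trunk = nn.TransformerEncoder(layer, num_layers=2)
        # One unembedding head per future offset: head i predicts token t+i+1.
        self.heads = nn.ModuleList(
            nn.Linear(d_model, vocab_size) for _ in range(n_future)
        )

    def forward(self, tokens):
        # Causal mask so position t only attends to positions <= t.
        T = tokens.size(1)
        mask = nn.Transformer.generate_square_subsequent_mask(T)
        h = self.trunk(self.embed(tokens), mask=mask, is_causal=True)
        # Returns a list: logits[i] is (batch, T, vocab) for offset i+1.
        return [head(h) for head in self.heads]

def multi_token_loss(model, tokens):
    # Sum each head's cross-entropy against its shifted target. At inference
    # you can keep only head 0 (ordinary next-token prediction) or use the
    # extra heads to draft several tokens per step.
    logits = model(tokens)
    loss = 0.0
    for i, l in enumerate(logits):
        offset = i + 1
        pred = l[:, :-offset, :]        # positions that have a valid target
        target = tokens[:, offset:]     # the token offset steps ahead
        loss = loss + F.cross_entropy(
            pred.reshape(-1, pred.size(-1)), target.reshape(-1)
        )
    return loss

batch = torch.randint(0, 1000, (2, 16))   # toy batch of token ids
model = MultiTokenPredictor()
print(multi_token_loss(model, batch))     # scalar training loss
```

The "no added cost" framing in the post fits this shape: the extra heads are small relative to the shared trunk, so training cost barely changes, while the additional heads give a denser learning signal and can be used to emit several tokens per forward pass at inference.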
