Intro
Custom prompt
Theoretical requirements for the full 65B-parameter model
Writing code with 7B model
Start of Tutorial
Download quantized model
How to run the 7B model once downloaded
How to run the 13B model
RAM requirements for the 13B model
Comparing 7B to 13B
Unexpected results with 13B model
Pros/Cons to ChatGPT
Conclusion to 7B vs 13B
Running models above 13B parameters
If you have problems
Intro
How can the LLaMA and Alpaca models be fine-tuned (and what is the input/output token limit)?
Can the models generate code (WATCH THIS if you are disappointed with the results of the Alpaca model)?
Do these models use the GPU?
What are the RAM requirements to run larger LLaMA variants?
Do the models work for languages other than English?
Can we use this to query a database with natural language?
More questions answered in my Medium article
Outro