Updated Dec 10 · 32,768 context
$0.30 / 1M input tokens · $0.30 / 1M output tokens
A pretrained generative Sparse Mixture-of-Experts model by Mistral AI, for chat and instruction use. Each layer incorporates 8 experts (feed-forward networks), for a total of 47B parameters, with only 2 experts active per token.
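As a rough illustration of what top-2 routing over 8 expert feed-forward networks looks like, here is a minimal NumPy sketch. This is not Mistral's implementation; `gate_w`, `experts`, and the one-layer ReLU expert bodies are hypothetical stand-ins.

```python
import numpy as np

def moe_forward(x, gate_w, experts, top_k=2):
    """Sparse MoE layer sketch: route a token to its top-k expert FFNs."""
    logits = x @ gate_w                        # router score per expert
    top = np.argsort(logits)[-top_k:]          # indices of the top-k experts
    weights = np.exp(logits[top] - logits[top].max())
    weights /= weights.sum()                   # softmax over selected experts only
    return sum(w * experts[i](x) for w, i in zip(weights, top))

# Toy usage: 8 hypothetical one-layer ReLU "experts" over a 16-dim token.
rng = np.random.default_rng(0)
d = 16
experts = [(lambda W: (lambda v: np.maximum(v @ W, 0.0)))(rng.standard_normal((d, d)))
           for _ in range(8)]
gate_w = rng.standard_normal((d, 8))
y = moe_forward(rng.standard_normal(d), gate_w, experts)
print(y.shape)  # (16,) — each token only pays for 2 of the 8 experts
```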
Instruct model fine-tuned by Mistral. If you send raw prompts, you should wrap instructions in [INST] and [/INST] tokens. Otherwise, with chat messages, the prompt will be formatted automatically for you.
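For example, a raw single-turn prompt looks like `<s>[INST] ... [/INST]`. The helper below is a sketch of that template, assuming the standard Mistral instruct format; the function name and message shape are illustrative, not part of this API.

```python
def format_raw_prompt(messages):
    """Render chat messages into Mistral's raw [INST] instruct template (sketch)."""
    prompt = "<s>"
    for msg in messages:
        if msg["role"] == "user":
            prompt += f"[INST] {msg['content']} [/INST]"
        else:  # assistant turn: plain text, closed with an end-of-sequence token
            prompt += f" {msg['content']}</s>"
    return prompt

print(format_raw_prompt([{"role": "user", "content": "Name three prime numbers."}]))
# <s>[INST] Name three prime numbers. [/INST]
```

Note that many clients and tokenizers add the `<s>` beginning-of-sequence token themselves; check whether yours does before prepending it here.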
#moe