Meta will reportedly release smaller versions of its Llama language model as companies look to offer more cost-effective AI models to the public. 

The Information reports that the company plans to launch two small Llama 3 versions this month before putting out the flagship model this summer. The Verge reached out to Meta for comment. 

These smaller models typically cannot handle long strings of instructions from users, but they are faster, more flexible, and, most importantly, cheaper to run than a full-sized model. They are still capable AI models, able to summarize PDFs and conversations and write code. Larger models are usually reserved for more complicated tasks, like generating images or requests that take several steps to complete. Because small models work with fewer parameters (the values a model learns during training), they require less computing power and are therefore more cost-effective to run.

Lightweight models tend to attract users who don’t necessarily need the full breadth of a large language model for their applications. Smaller models are most often deployed for specific tasks, like code assistance, or on devices that can’t handle the power demands of a bigger AI model, such as phones and laptops. 
