Meta and Microsoft have announced support for the Llama 2 family of large language models (LLMs) on Azure and Windows. Llama 2, the second version of Meta’s large language model, is now available to the public free of charge for commercial use. Although Llama 2 itself is openly licensed and costs nothing to use for commercial or research purposes, you will have to pay for Microsoft’s enterprise hosting service.
Chatbots powered by generative artificial intelligence, such as OpenAI’s ChatGPT and Google’s Bard, rely on large language models for their functionality. At the beginning of this year, Microsoft introduced an AI-powered search option for Bing that makes use of ChatGPT.
Through this partnership with Meta, Microsoft will make Llama 2 accessible through Azure AI and Windows. The model will also be offered through other providers, such as Amazon Web Services.
“We believe that an open approach is the right one for the development of today’s AI models, especially those in the generative space,” stated Meta. “By making AI models available openly, they can be of benefit to everyone.”
Meta asserted that this approach is “safer” because more researchers and developers can stress test the models, identify issues, and find solutions to problems more quickly.
Azure customers can now simply and securely deploy the 7B-, 13B-, and 70B-parameter Llama 2 models, as well as fine-tune them. Llama 2 is also being optimized to run more efficiently locally on Windows: Windows developers can use it by targeting the DirectML execution provider through the ONNX Runtime.
In the meantime, Qualcomm is collaborating with Meta to bring Llama 2-based AI implementations to mobile devices and personal computers beginning in 2024.