LM Studio is a desktop application for running large language models (LLMs) such as Llama, Mistral, and Phi-3 locally on a user's computer. The platform focuses on privacy and offline functionality, allowing users to run AI models without an internet connection and without sharing data externally.
LM Studio also provides SDKs for Python (`lmstudio-python`) and JavaScript (`lmstudio-js`), simplifying the integration of LLMs into applications. The Python SDK can be installed via pip:

```shell
pip install lmstudio
```

A minimal streaming example:

```python
import lmstudio as lms

llm = lms.llm()  # Load any available LLM
prediction = llm.respond_stream("What is a Capybara?")
for token in prediction:
    print(token, end="", flush=True)
```
LM Studio does not collect user data, ensuring that all personal data remains private and solely on the user's device. This feature is a cornerstone of the platform, appealing particularly to those concerned with data privacy.
Users can run any compatible LLM from Hugging Face in GGUF and MLX formats. However, some models may be too large for a given machine's memory or otherwise unsupported, and image generation models are currently not supported.
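Beyond loading whichever model happens to be active, the Python SDK also accepts a specific model identifier. A minimal sketch, assuming the `lmstudio` package is installed and LM Studio is running locally; the model key below is a placeholder for any GGUF or MLX model you have downloaded:

```python
import lmstudio as lms

# Load a specific downloaded model by its identifier (example key --
# substitute the key of a model present on your machine).
model = lms.llm("llama-3.2-1b-instruct")

# Non-streaming request: returns the full response at once.
result = model.respond("What is a Capybara?")
print(result)
```

This block requires a running LM Studio instance with the named model downloaded, so it cannot run standalone.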
While the LM Studio GUI app is proprietary, the CLI tool `lms`, the Core SDK, and the MLX inference engine are open-source under the MIT license. This setup encourages community contributions and the use of open-source libraries.
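As a sketch of how the open-source `lms` CLI is typically used (assuming LM Studio and the CLI are installed; exact flags may vary between versions, and `<model-key>` is a placeholder):

```shell
lms ls                 # list models downloaded to this machine
lms load <model-key>   # load a model into memory
lms server start       # start the local API server
```

These commands require a local LM Studio installation, so they are illustrative rather than runnable in isolation.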
Organizations interested in using LM Studio for business purposes can fill out a request form to explore corporate use cases and support.
In summary, LM Studio offers a robust toolkit for anyone looking to leverage the power of large language models locally, with an emphasis on privacy, ease of use, and extensive support for developers through SDKs and open-source tools.