ollama + ollama web + fastapi app (langchain) demo
https://github.com/fanqingsong/ollama-docker
Welcome to the Ollama Docker Compose Setup! This project simplifies the deployment of Ollama using Docker Compose, making it easy to run Ollama with all its dependencies in a containerized environment.
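To make the idea concrete, a Docker Compose file for such a setup might look roughly like the sketch below. The service names, image tags, and volume name here are assumptions for illustration and are not taken from the linked repository (the official `ollama/ollama` image listens on port 11434 by default).

```yaml
# Hypothetical sketch — service and volume names are assumptions,
# not copied from the linked repository.
services:
  ollama:
    image: ollama/ollama
    ports:
      - "11434:11434"
    volumes:
      - ollama_data:/root/.ollama   # persist downloaded model weights

volumes:
  ollama_data:
```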
ollama
https://python.langchain.com/docs/integrations/llms/ollama/

Ollama allows you to run open-source large language models, such as Llama 2, locally.
Ollama bundles model weights, configuration, and data into a single package, defined by a Modelfile.
It optimizes setup and configuration details, including GPU usage.
For a complete list of supported models and model variants, see the Ollama model library.
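Under the hood, a running Ollama server exposes a small REST API (by default at http://localhost:11434), whose /api/generate endpoint streams newline-delimited JSON chunks. The sketch below shows the shape of a request body and how a client might reassemble a streamed reply; `build_request` and `parse_stream` are illustrative helper names, not part of any library.

```python
import json

def build_request(model: str, prompt: str) -> str:
    """Build the JSON body for Ollama's /api/generate endpoint."""
    return json.dumps({"model": model, "prompt": prompt})

def parse_stream(lines):
    """Ollama streams newline-delimited JSON; join the 'response' chunks."""
    return "".join(json.loads(line)["response"] for line in lines if line.strip())

# Example: what a short streamed reply might look like.
sample = [
    '{"model":"llama2","response":"Hello","done":false}',
    '{"model":"llama2","response":" world","done":true}',
]
print(parse_stream(sample))  # Hello world
```

In a real client you would POST `build_request(...)` to the server and feed the response lines to `parse_stream` as they arrive.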
Alibaba Tongyi (Qwen) models
https://ollama.com/library/qwen
Meta models
https://ollama.com/library/llama2
Ollama Docker image with preloaded models:
https://github.com/FultonBrowne/ollama-docker
ollama + ollama web containerized deployment
https://github.com/lgdd/chatollama
LangChain
https://python.langchain.com/docs/get_started/introduction
LangChain is a framework for developing applications powered by large language models (LLMs).
LangChain simplifies every stage of the LLM application lifecycle:
- Development: Build your applications using LangChain's open-source building blocks and components. Hit the ground running using third-party integrations and Templates.
- Productionization: Use LangSmith to inspect, monitor and evaluate your chains, so that you can continuously optimize and deploy with confidence.
- Deployment: Turn any chain into an API with LangServe.
Concretely, the framework consists of the following open-source libraries:
- langchain-core: Base abstractions and LangChain Expression Language.
- langchain-community: Third-party integrations.
- Partner packages (e.g. langchain-openai, langchain-anthropic, etc.): Some integrations have been further split into their own lightweight packages that only depend on langchain-core.
- langchain: Chains, agents, and retrieval strategies that make up an application's cognitive architecture.
- langgraph: Build robust and stateful multi-actor applications with LLMs by modeling steps as edges and nodes in a graph.
- langserve: Deploy LangChain chains as REST APIs.
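The LangChain Expression Language mentioned above lets you compose components into chains with the `|` operator. The toy sketch below is not the real LangChain API; it only illustrates the pipe-composition idea with a stand-in `Runnable` class and a fake model.

```python
# Toy illustration of LCEL-style pipe composition — not the real LangChain API.
class Runnable:
    def __init__(self, fn):
        self.fn = fn

    def invoke(self, value):
        return self.fn(value)

    def __or__(self, other):
        # Chain two steps: the output of self feeds the input of other.
        return Runnable(lambda value: other.invoke(self.invoke(value)))

# A "prompt template" and a "model" as plain functions, for illustration only.
prompt = Runnable(lambda topic: f"Tell me a joke about {topic}")
fake_llm = Runnable(lambda text: f"[model output for: {text}]")

chain = prompt | fake_llm
print(chain.invoke("bears"))  # [model output for: Tell me a joke about bears]
```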
The broader ecosystem includes:
- LangSmith: A developer platform that lets you debug, test, evaluate, and monitor LLM applications and seamlessly integrates with LangChain.
API
https://api.python.langchain.com/en/latest/langchain_api_reference.html#
integration of ollama and langchain
https://python.langchain.com/docs/integrations/llms/ollama/#via-langchain

from langchain_community.llms import Ollama

llm = Ollama(model="llama3")
llm.invoke("Tell me a joke")
RAG
https://zhuanlan.zhihu.com/p/695140853
https://github.com/fanqingsong/ollama-docker/blob/main/src/rag.py
https://github.com/fanqingsong/DocQA/blob/main/app.py
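A RAG pipeline has three steps: retrieve relevant documents, augment the prompt with them, and generate an answer. The linked rag.py would use embeddings and a vector store for retrieval; the sketch below is a dependency-free illustration of the same flow, with simple word overlap standing in for vector similarity.

```python
# Minimal retrieval-augmented generation (RAG) sketch in plain Python.
# Word overlap stands in for embedding similarity, for illustration only.

def retrieve(query: str, docs: list[str], k: int = 1) -> list[str]:
    """Rank documents by how many query words they share."""
    q = set(query.lower().split())
    scored = sorted(docs, key=lambda d: len(q & set(d.lower().split())), reverse=True)
    return scored[:k]

def build_prompt(query: str, context: list[str]) -> str:
    """Augment the user question with the retrieved context."""
    joined = "\n".join(context)
    return f"Answer using only this context:\n{joined}\n\nQuestion: {query}"

docs = [
    "Ollama runs open-source LLMs such as Llama 2 locally.",
    "LangChain is a framework for building LLM applications.",
]
question = "How do I run Llama 2 locally?"
context = retrieve(question, docs)
print(build_prompt(question, context))
```

In a real pipeline, the resulting prompt would then be passed to the LLM (e.g. via `llm.invoke(...)` as in the integration example above).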
From: https://www.cnblogs.com/lightsong/p/18172964