Tutorials
New to LangChain or LLM app development in general? Read this material to quickly get up and running building your first applications.
If you're looking to get up and running quickly with chat models, vector stores, or other LangChain components from a specific provider, check out our supported integrations.
Refer to the how-to guides for more detail on using common LangChain components.
See the conceptual documentation for high level explanations of all LangChain concepts.
Basics
- LLM applications: Build and deploy a simple LLM application.
- Chatbots: Build a chatbot that incorporates memory.
- Vector stores: Build vector stores and use them to retrieve data.
- Agents: Build an agent that interacts with external tools.
Working with external knowledge
- Retrieval Augmented Generation (RAG): Build an application that uses your own documents to inform its responses.
- Conversational RAG: Build a RAG application that incorporates a memory of its user interactions.
- Question-Answering with SQL: Build a question-answering system that executes SQL queries to inform its responses.
- Query Analysis: Build a RAG application that analyzes questions to generate filters and other structured queries.
- Local RAG: Build a RAG application using LLMs running locally on your machine.
- Question-Answering with Graph Databases: Build a question-answering system that queries a graph database to inform its responses.
- Question-Answering with PDFs: Build a question-answering system that ingests PDFs and uses them to inform its responses.
Specialized tasks
- Extraction: Extract structured data from text and other unstructured media.
- Synthetic data: Generate synthetic data using LLMs.
- Classification: Classify text into categories or labels.
- Summarization: Generate summaries of (potentially long) texts.
LangGraph
LangGraph is an extension of LangChain aimed at building robust and stateful multi-actor applications with LLMs by modeling steps as edges and nodes in a graph.
LangGraph documentation is currently hosted on a separate site. You can peruse LangGraph tutorials here.
LangSmith
LangSmith allows you to closely trace, monitor, and evaluate your LLM application. It integrates seamlessly with LangChain, and you can use it to inspect and debug individual steps of your chains as you build.
LangSmith documentation is hosted on a separate site. You can peruse LangSmith tutorials here.
Evaluation
LangSmith also helps you evaluate the performance of your LLM applications; its evaluation tutorials are a great way to get started.
More
For more tutorials, see our cookbook section.