About this course
Learn how to rapidly build future-proof generative AI apps, locally or in the cloud, using AI orchestration frameworks like LangChain and LlamaIndex.
Building local AI apps with LangChain and LlamaIndex (54s)
What you should know (1m 19s)
Setting up your environment for building AI apps (1m 23s)
AI orchestration concepts (3m 3s)
Building an app with the OpenAI API (4m 33s)
Running local LLMs (8m 54s)
Your first LangChain app (3m 26s)
Your first LlamaIndex app (4m 22s)
Debugging AI apps (4m 3s)
AI over local documents: Retrieval-augmented generation (5m 14s)
Choosing an embedding (3m 56s)
RAG with LlamaIndex (5m)
RAG with LangChain (4m 29s)
Challenge: Document summarization (2m 39s)
Solution: Document summarization (1m 26s)
App concepts for chaining and more complex workflows (3m 23s)
Getting JSON out of the LLM (6m 21s)
LLM function calling (5m 25s)
Challenge: Local LLM task offloading (2m 53s)
Solution: Local LLM task offloading (1m 41s)
Introduction to the ReAct agent framework (3m 41s)
Implementing a ReAct agent (5m 4s)
Challenge: LangChain and LlamaIndex strengths and weaknesses (1m 12s)
Solution: LangChain and LlamaIndex strengths and weaknesses (1m 8s)
Next steps for AI app engineers (2m 20s)