Build a Streamlit App with LangChain for Summarization

LangChain is an open-source framework created to aid the development of applications leveraging the power of large language models (LLMs). It can be used for chatbots, text summarisation, data generation, code understanding, question answering, evaluation, and more. Streamlit, on the other hand, is an open-source Python library that allows you to create and share interactive web apps and data visualisations with ease. Together, LangChain and Streamlit are a simple yet powerful combination for getting started with LLM web applications. In this post, we'll create a simple Streamlit application that summarizes text input from the user with LangChain and OpenAI.

To summarize text, we'll use the LangChain Chains module, which allows us to combine multiple components (e.g. prompts, LLMs, and even other chains) into a single application.

Summarization of a large body of text or multiple documents generally runs into context window limitations, i.e. you can only send a specific amount of text (or tokens) per request. While this is fine for question-answering or chatbot use cases, a summary naturally requires access to the entire input. To deal with this, we'll use the concept of "chunking". Since the input text in this tutorial is relatively small, we do not need additional vector stores or databases to store and retrieve the input. In a subsequent post, I'll discuss how vector stores like Chroma or Pinecone can be used to deal with large documents.

Alright, so first we split the text input into smaller chunks ("documents"), and then call the load_summarize_chain method to perform text summarization over the input. This method supports three types of chains - map_reduce, stuff, and refine - with map_reduce being the easiest chain to get started with. Depending on your needs, you can also use prompt templates to augment the response.

Here's an excerpt from the streamlit_app.py file - you can find the complete source code on GitHub. Shoutout to the official LangChain docs - much of the code is borrowed from or influenced by them.

```python
import os, streamlit as st
from langchain.text_splitter import CharacterTextSplitter
from langchain.docstore.document import Document
from langchain.llms import OpenAI
from langchain.chains.summarize import load_summarize_chain

# Get OpenAI API key and source text input
openai_api_key = st.text_input("OpenAI API Key", type="password")
source_text = st.text_area("Source Text", height=200)

if not openai_api_key.strip() or not source_text.strip():
    st.error("Please provide the missing fields.")
else:
    # Split the source text into smaller chunks
    text_splitter = CharacterTextSplitter()
    texts = text_splitter.split_text(source_text)

    # Create Document objects for the texts (max 3 pages)
    docs = [Document(page_content=t) for t in texts[:3]]

    # Initialize the OpenAI module, load and run the summarize chain
    llm = OpenAI(temperature=0, openai_api_key=openai_api_key)
    chain = load_summarize_chain(llm, chain_type="map_reduce")
    summary = chain.run(docs)
```
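To build intuition for what the map_reduce chain does under the hood, here is a minimal, LangChain-free sketch of the idea: chunk the input, summarize each chunk independently ("map"), then summarize the combined partial summaries ("reduce"). The helpers `chunk_text`, `map_reduce_summarize`, and `first_words` are hypothetical names for illustration only, and the toy "summarizer" just keeps the first few words - a real app would call an LLM at that step instead.

```python
def chunk_text(text, chunk_size=100):
    """Split text into fixed-size character chunks."""
    return [text[i:i + chunk_size] for i in range(0, len(text), chunk_size)]

def map_reduce_summarize(text, summarize, chunk_size=100):
    """Map: summarize each chunk; Reduce: summarize the joined results."""
    partial_summaries = [summarize(chunk) for chunk in chunk_text(text, chunk_size)]
    return summarize(" ".join(partial_summaries))

# Toy stand-in for an LLM call: keep the first five words of the input.
def first_words(text, n=5):
    return " ".join(text.split()[:n])

summary = map_reduce_summarize("some long body of text " * 50, first_words)
```

Because each chunk is summarized independently, no single request ever exceeds the model's context window, at the cost of extra LLM calls per document.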