Make Reader
We show how LlamaIndex can fit into your Make.com workflow by sending the LlamaIndex query response to a scenario webhook.
If you're opening this Notebook on Colab, you will probably need to install LlamaIndex 🦙.
```
%pip install llama-index-readers-make-com
!pip install llama-index
```

```python
import logging
import sys

logging.basicConfig(stream=sys.stdout, level=logging.INFO)
logging.getLogger().addHandler(logging.StreamHandler(stream=sys.stdout))

from llama_index.core import VectorStoreIndex, SimpleDirectoryReader
from llama_index.readers.make_com import MakeWrapper
```

Download Data
```
!mkdir -p 'data/paul_graham/'
!wget 'https://raw.githubusercontent.com/run-llama/llama_index/main/docs/examples/data/paul_graham/paul_graham_essay.txt' -O 'data/paul_graham/paul_graham_essay.txt'
```

```python
documents = SimpleDirectoryReader("./data/paul_graham/").load_data()
index = VectorStoreIndex.from_documents(documents=documents)
```

```python
# set logging to DEBUG for more detailed outputs

# query the index
query_str = "What did the author do growing up?"
query_engine = index.as_query_engine()
response = query_engine.query(query_str)

# send response to Make.com webhook
wrapper = MakeWrapper()
wrapper.pass_response_to_webhook("<webhook_url>", response, query_str)
```
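For context, a Make.com scenario webhook is just an HTTP endpoint, so passing a response to it amounts to an HTTP POST with a JSON body. Below is a minimal hand-rolled sketch using `requests`; the payload field names (`response`, `query`) and the helper `send_to_webhook` are illustrative assumptions, not necessarily the exact schema `MakeWrapper` sends.

```python
import requests


def send_to_webhook(webhook_url: str, response, query_str: str) -> None:
    # Assumed payload shape; MakeWrapper's actual schema may differ.
    payload = {
        "response": str(response),  # Response objects stringify to the answer text
        "query": query_str,
    }
    resp = requests.post(webhook_url, json=payload, timeout=10)
    resp.raise_for_status()  # raise on non-2xx replies from the webhook
```

On the Make.com side, a scenario started by the "Custom webhook" trigger receives these fields and can route them to downstream modules.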