Add Slack Summarization Plugin as a use case #57

Open · wants to merge 7 commits into base: main
LangChain Cookbook Part 1 - Fundamentals.ipynb (2 changes: 1 addition & 1 deletion)
@@ -1573,7 +1573,7 @@
"\n",
"The language model that drives decision making.\n",
"\n",
"More specifically, an agent takes in an input and returns a response corresponding to an action to take along with an action input. You can see different types of agents (which are better for different use cases) [here](https://python.langchain.com/en/latest/modules/agents/agents/agent_types.html)."
"More specifically, an agent takes in an input and returns a response corresponding to an action to take along with an action input. You can see different types of agents (which are better for different use cases) [here](https://python.langchain.com/en/latest/modules/agents/agent_types.html)."
]
},
{
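The cell this hunk touches describes LangChain agents: a model that, at each step, picks an action to take and the input for that action. For reference, here is a minimal sketch of what wiring up such an agent looked like with the legacy `initialize_agent` API; the SerpAPI tool, model, and question are illustrative assumptions and are not part of this diff.

```python
# Minimal sketch of a LangChain agent (assumes the legacy initialize_agent API
# and a SERPAPI_API_KEY in the environment; not taken from the notebook cell above).
from langchain.llms import OpenAI
from langchain.agents import initialize_agent, load_tools

llm = OpenAI(temperature=0)

# Tools the agent can choose as its "action"; each chosen action also gets an "action input".
tools = load_tools(["serpapi"], llm=llm)

# "zero-shot-react-description" is one of the agent types linked in the cell above.
agent = initialize_agent(tools, llm, agent="zero-shot-react-description", verbose=True)

print(agent.run("Who won the most recent FIFA World Cup?"))
```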
README.md (3 changes: 2 additions & 1 deletion)
@@ -31,6 +31,7 @@ Prompt Engineering (my favorite resources):
| Project | Contact | Difficulty | Open Sourced? | Notes |
| - | ----------- | ---------- | :-: | ---------- |
| [SummarizePaper.com](https://www.summarizepaper.com/) | Quentin Kral | 🐒 Intermediate | ✅ [Code](https://github.com/summarizepaper/summarizepaper) | Summarize arXiv papers |
+| SlackSummarizerPlugin | Matias Sandacz | 🐒 Intermediate | ✅ [Code](https://github.com/matisandacz/SlackSummarization) | Summarize Slack Conversations |

<br>

@@ -101,4 +102,4 @@ As an open-source project in a rapidly developing field, we are extremely open t

Submit a PR with notes.

-This repo and series is provided by [DataIndependent](https://dataindependent.com/) and run by [Greg Kamradt](https://twitter.com/GregKamradt)
\ No newline at end of file
+This repo and series is provided by [DataIndependent](https://dataindependent.com/) and run by [Greg Kamradt](https://twitter.com/GregKamradt)
data_generation/Expert Structured Output (Using Kor).ipynb (16 changes: 8 additions & 8 deletions)
@@ -187,7 +187,7 @@
" My sister's name is Rachel.\n",
" My brother's name Joe. My dog's name is Spot\n",
"\"\"\"\n",
"output = chain.predict_and_parse(text=(text))[\"data\"]\n",
"output = chain.run(text=(text))[\"data\"]\n",
"\n",
"printOutput(output)\n",
"# Notice how there isn't \"spot\" in the results list because it's the name of a dog, not a person."
@@ -218,7 +218,7 @@
}
],
"source": [
"output = chain.predict_and_parse(text=(\"The dog went to the park\"))[\"data\"]\n",
"output = chain.run(text=(\"The dog went to the park\"))[\"data\"]\n",
"printOutput(output)"
]
},
@@ -300,7 +300,7 @@
"text=\"Palm trees are brown with a 6 rating. Sequoia trees are green\"\n",
"\n",
"chain = create_extraction_chain(llm, plant_schema)\n",
"output = chain.predict_and_parse(text=text)['data']\n",
"output = chain.run(text=text)['data']\n",
"\n",
"printOutput(output)"
]
@@ -402,7 +402,7 @@
"\n",
"# Changed the encoder to json\n",
"chain = create_extraction_chain(llm, cars_schema, encoder_or_encoder_class=\"json\")\n",
"output = chain.predict_and_parse(text=text)['data']\n",
"output = chain.run(text=text)['data']\n",
"\n",
"printOutput(output)"
]
@@ -529,7 +529,7 @@
],
"source": [
"chain = create_extraction_chain(llm, schema, encoder_or_encoder_class='json')\n",
"output = chain.predict_and_parse(text=\"please add 15 more units sold to 2023\")['data']\n",
"output = chain.run(text=\"please add 15 more units sold to 2023\")['data']\n",
"\n",
"printOutput(output)"
]
@@ -891,7 +891,7 @@
}
],
"source": [
"output = chain.predict_and_parse(text=text)[\"data\"]\n",
"output = chain.run(text=text)[\"data\"]\n",
"\n",
"printOutput(output)"
]
@@ -1027,7 +1027,7 @@
],
"source": [
"chain = create_extraction_chain(llm, salary_range)\n",
"output = chain.predict_and_parse(text=text)[\"data\"]\n",
"output = chain.run(text=text)[\"data\"]\n",
"\n",
"printOutput(output)"
]
@@ -1070,7 +1070,7 @@
],
"source": [
"with get_openai_callback() as cb:\n",
" result = chain.predict_and_parse(text=text)\n",
" result = chain.run(text=text)\n",
" print(f\"Total Tokens: {cb.total_tokens}\")\n",
" print(f\"Prompt Tokens: {cb.prompt_tokens}\")\n",
" print(f\"Completion Tokens: {cb.completion_tokens}\")\n",
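Every hunk in this notebook replaces Kor's `chain.predict_and_parse(...)` call with LangChain's generic `chain.run(...)`, keeping the `["data"]` lookup on the result. A minimal, self-contained sketch of the post-change pattern is below; the schema, model choice, and sample text are illustrative assumptions, since the notebook's own `llm` and schemas are defined in cells outside this diff.

```python
# Sketch of the post-change Kor usage: create_extraction_chain + chain.run.
# Schema, model, and sample text are assumptions for illustration only.
from kor.extraction import create_extraction_chain
from kor.nodes import Object, Text
from langchain.chat_models import ChatOpenAI

llm = ChatOpenAI(model_name="gpt-3.5-turbo", temperature=0)

person_schema = Object(
    id="person",
    description="Personal information about a person",
    attributes=[
        Text(id="first_name", description="The first name of a person"),
    ],
    examples=[
        ("Alice and Bob are friends", [{"first_name": "Alice"}, {"first_name": "Bob"}]),
    ],
)

chain = create_extraction_chain(llm, person_schema)

# Old: chain.predict_and_parse(text=text)["data"]
# New: chain.run(text=text)["data"]
output = chain.run(text="My name is Bobby. My sister's name is Rachel.")["data"]
print(output)  # e.g. {'person': [{'first_name': 'Bobby'}, {'first_name': 'Rachel'}]}
```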