16 Comments

I'm trying to follow along with this tutorial, but the formatting is off, and Python is picky about formatting. I've read your comment that the "full code can be found in my repo," but it's not obvious where in the repo this code is. Care to provide a link to the .py file for this tutorial, please?

I like your tutorial and have subscribed to your newsletter! I'm new to Medium. Can I get your quick feedback on an even simpler agent, https://the-pocket.github.io/PocketFlow/design_pattern/agent.html, built on a 100-line framework?

I made https://substack.com/home/post/p-158349453

Hello Nir, thanks for the excellent blog post. I'd like a few inputs from you, especially on the way we use the term "AI agent." The example you've shown in this blog is more of an AI system, a workflow. According to Anthropic's blog "Building Effective Agents," an AI agent is one that makes decisions autonomously. It would be helpful if you could write a blog on building a true AI agent, and on how to distinguish when to use an AI workflow versus an AI agent.

Thanks for your input, Ravi.

The use case in this blog post is intentionally simple, as its goal is to provide an introduction and help readers understand what agents are and how to create one.

AttributeError                            Traceback (most recent call last)
in <cell line: 0>()
      5
      6 state_input = {"text": sample_text}
----> 7 result = app.invoke(state_input)
      8
      9 print("Classification:", result["classification"])

6 frames

in classification_node(state)
      6     )
      7     message = HumanMessage(content=prompt.format(text=state["text"]))
----> 8     classification = llm.invoke([message]).content.strip()
      9     return {"classification": classification}
     10

AttributeError: 'tuple' object has no attribute 'invoke'

Could anyone help identify why this is happening? Thanks!
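One common cause of this exact error is a stray trailing comma after the `llm = ChatOpenAI(...)` assignment, which turns `llm` into a one-element tuple. That's only a guess without seeing the full notebook, but the mechanism is easy to reproduce with a hypothetical stand-in class:

```python
class FakeLLM:
    """Hypothetical stand-in for ChatOpenAI, just to show the tuple trap."""
    def invoke(self, messages):
        return "ok"

llm = FakeLLM(),  # trailing comma: llm is now the tuple (FakeLLM(),)

try:
    llm.invoke(["hello"])
except AttributeError as err:
    print(err)  # 'tuple' object has no attribute 'invoke'

llm = FakeLLM()  # drop the comma and the call works
print(llm.invoke(["hello"]))  # ok
```

Worth checking the cell where the model is created for a comma at the end of the line.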

I saw that you opened an issue on the repo. I'll take care of it (or someone else will) soon.

Thanks! 🙂

Very good post. Can I translate part of this article into Spanish with links to you and a description of your newsletter?

Hmm, okay.

Please put this info at the beginning of your post as a disclaimer, with a link to the original post and the newsletter, as you said.

Thanks for sharing this; I managed to run the whole code successfully and get the intended output.

That said, I'm struggling to see how this code (setting aside the use of the LangGraph framework) differs from the good old hard-coded, predictable series of function calls, where the output of function A serves as the input of function B. What specifically is "agentic" in this code? How can this agent "dynamically adjust its focus based on what it discovers"?

Me, I feel like this needs a stronger use case. It's not clear how AI chaining did something that a series of well-reasoned prompts couldn't.

Thanks for your input, Dave.

The use case in this blog post is intentionally simple, as its goal is to provide an introduction and help readers understand what agents are and how to create one.

It seems that the app should be changed to:

import os
from typing import TypedDict, List

from langgraph.graph import StateGraph, END
from langchain.prompts import PromptTemplate
from langchain_openai import ChatOpenAI
from langchain.schema import HumanMessage

class State(TypedDict):
    text: str
    classification: str
    entities: List[str]
    summary: str

llm = ChatOpenAI(model="gpt-4-0613", temperature=0)

def classification_node(state: State):
    prompt = PromptTemplate(
        input_variables=["text"],
        template="Classify the following text into one of the categories: News, Blog, Research, or Other.\n\nText:{text}\n\nCategory:"
    )
    message = HumanMessage(content=prompt.format(text=state["text"]))
    classification = llm.invoke([message]).content.strip()
    # Spread state first so the new value isn't overwritten by the old one
    return {**state, "classification": classification}

def entity_extraction_node(state: State):
    prompt = PromptTemplate(
        input_variables=["text"],
        template="Extract all the entities (Person, Organization, Location) from the following text. Provide the result as a comma-separated list.\n\nText:{text}\n\nEntities:"
    )
    message = HumanMessage(content=prompt.format(text=state["text"]))
    entities_str = llm.invoke([message]).content.strip()
    entities = [e.strip() for e in entities_str.split(",")] if entities_str else []
    return {**state, "entities": entities}

def summarization_node(state: State):
    prompt = PromptTemplate(
        input_variables=["text"],
        template="Summarize the following text in one short sentence.\n\nText:{text}\n\nSummary:"
    )
    message = HumanMessage(content=prompt.format(text=state["text"]))
    summary = llm.invoke([message]).content.strip()
    return {**state, "summary": summary}

workflow = StateGraph(State)

# Add nodes to the graph
workflow.add_node("classification_node", classification_node)
workflow.add_node("entity_extraction", entity_extraction_node)
workflow.add_node("summarization", summarization_node)

# Add edges to the graph
workflow.set_entry_point("classification_node")
workflow.add_edge("classification_node", "entity_extraction")
workflow.add_edge("entity_extraction", "summarization")
workflow.add_edge("summarization", END)

# Compile the graph
app = workflow.compile()  # This line is crucial

sample_text = """
OpenAI has announced the GPT-4 model, which is a large multimodal model that exhibits human-level performance on various professional benchmarks. It is developed to improve the alignment and safety of AI systems.
Additionally, the model is designed to be more efficient and scalable than its predecessor, GPT-3. The GPT-4 model is expected to be released in the coming months and will be available to the public for research and development purposes.
"""

state_input = {"text": sample_text, "classification": "", "entities": [], "summary": ""}
result = app.invoke(state_input)

print("Classification:", result["classification"])
print("\nEntities:", result["entities"])
print("\nSummary:", result["summary"])

But in the end, I had no access to gpt-4-0613.

The full code can be found in my repo, and it works (I tested it yesterday).

If you choose to work with OpenAI, you'll need to configure a private API key on their website first, but you can use any LLM you want.

And where is your repo please?

Found the repo, I'll check, thx
