Neynar AI: The Lifecycle of a Response

Sep 16, 2025

by

Neynar

Over the last couple of months, we’ve been building out Neynar AI, a class of AI-first products designed to enhance users’ experience on Farcaster. We began with the Neynar AI mini app launched earlier this summer, and recently released the Neynar AI in-feed agent that acts as a “Grok for Farcaster”. 

We’ve been thrilled to see the community’s response to both the mini app and the in-feed agent, with Neynar AI sending thousands of messages in the weeks it's been live. 

Many users have been curious about how the agent works and what’s happening behind the scenes, so we wanted to share a closer look into some of the technical details and workflows that make the agent possible. Let’s walk through the lifecycle of a response from the Neynar agent:

  1. Triggering a response

The first step is when a user triggers a response by the agent. This trigger either occurs via our mini app server when a user sends a message in the mini app, or via a Neynar webhook when a user tags @neynar in a cast. 

In the mini app, we run no checks - any user is able to send a message to the mini app at any time and will receive a response regardless of their Neynar Score or the nature of the request. 

The feed agent, however, is a bit more restrictive, and runs a few checks before deciding to move forward with generating a response:

  • We are within internal rate limits

  • The user’s Neynar score is > 0.85

  • The user is actually trying to talk to the Neynar bot, not just mentioning the account. For this, we use an internal LLM check with a custom prompt to validate that it is a query we should respond to. 
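As an illustration, the checks above could be sketched like this (the names, the rate-limit value, and the classifier stub are all assumptions for illustration, not Neynar's actual implementation):

```typescript
// Hypothetical pre-response gate for the in-feed agent.
type Mention = { authorScore: number; text: string };

const RATE_LIMIT_PER_MINUTE = 30; // assumed internal limit
const MIN_NEYNAR_SCORE = 0.85;    // score threshold from the post

let recentResponses = 0; // would be tracked per-window in a real system

async function isDirectQuery(text: string): Promise<boolean> {
  // Stand-in for the internal LLM check with a custom prompt;
  // a real implementation would call a model here.
  return text.trim().length > 0;
}

async function shouldRespond(mention: Mention): Promise<boolean> {
  if (recentResponses >= RATE_LIMIT_PER_MINUTE) return false; // rate limit
  if (mention.authorScore <= MIN_NEYNAR_SCORE) return false;  // Neynar Score gate
  return isDirectQuery(mention.text);                          // intent check
}
```

Only mentions that clear all three gates proceed to context-building; everything else is silently dropped.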

  2. Establishing context

Before generating a response, we need to establish the context for that response, which consists of a system prompt (a base-level prompt applied to all interactions to inform how the agent responds) and request-specific context. 

The system prompt differs between the mini app and the in-feed agent. One major difference is the personality we give to each version of the agent: the mini app is more of a formal, information-focused assistant, whereas the in-feed agent is designed to have more personality and a distinct tone as a social account on the network. Request-specific context includes information about the requesting user and the current time; for the in-feed agent, it also includes context about the thread the request is in, including previous replies and embeds. 
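A minimal sketch of how the two kinds of context might be combined, assuming hypothetical prompt strings and field names (the real structure is not public):

```typescript
// Assumed shape of the context handed to the LLM.
interface RequestContext {
  systemPrompt: string;
  user: string;
  timestamp: string;
  thread?: { replies: string[]; embeds: string[] }; // in-feed agent only
}

// Illustrative placeholders, not the actual prompts.
const MINI_APP_PROMPT = "You are a formal, information-focused assistant...";
const FEED_AGENT_PROMPT = "You are a social account with a distinct tone...";

function buildContext(
  surface: "miniapp" | "feed",
  user: string,
  thread?: { replies: string[]; embeds: string[] }
): RequestContext {
  return {
    systemPrompt: surface === "miniapp" ? MINI_APP_PROMPT : FEED_AGENT_PROMPT,
    user,
    timestamp: new Date().toISOString(),
    // Thread context (prior replies and embeds) only applies to the in-feed agent.
    ...(surface === "feed" && thread ? { thread } : {}),
  };
}
```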

  3. Generating a response

During the actual response generation, we leverage custom-built tools that the LLM can interact with to inform its response - these tools include looking up a user, running SQL queries on Farcaster data, analyzing trades, searching for casts, and more. Each tool contains a set of instructions that tells the LLM how to use it, along with input and output schemas so the tool is used properly.
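A tool bundling instructions with input and output schemas might look like the following sketch; the `Tool` shape, the `search_casts` name, and the schema fields are illustrative assumptions, and the `run` body is stubbed rather than a real search:

```typescript
// Hypothetical shape of a custom tool exposed to the LLM.
interface Tool<I, O> {
  name: string;
  instructions: string;                 // tells the LLM when and how to use it
  inputSchema: Record<string, string>;  // simplified schema for illustration
  outputSchema: Record<string, string>;
  run: (input: I) => Promise<O>;
}

const searchCasts: Tool<{ query: string }, { casts: string[] }> = {
  name: "search_casts",
  instructions: "Search Farcaster casts matching a text query.",
  inputSchema: { query: "string" },
  outputSchema: { casts: "string[]" },
  // Stubbed; a real tool would call a search backend or API.
  run: async ({ query }) => ({ casts: [`stub result for "${query}"`] }),
};
```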

The LLM then begins a message stream where it goes back and forth with itself, deciding what tools to use and what steps to take, while also sharing this stream with us. In the mini app, this enables us to show what it’s doing at any given time - e.g. “Searching for casts…” so the user is aware of what’s currently happening.
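The status updates shown in the mini app could be derived from that stream roughly as follows; the event shape and label strings here are assumed for illustration:

```typescript
// Assumed stream event shape: either a tool invocation or a text delta.
type StreamEvent =
  | { kind: "tool_start"; tool: string }
  | { kind: "text"; delta: string };

// Map tool-start events to user-facing labels like "Searching for casts…".
function statusLabel(event: StreamEvent): string | null {
  if (event.kind !== "tool_start") return null; // text deltas carry no status
  const labels: Record<string, string> = {
    search_casts: "Searching for casts…",
    run_sql: "Querying Farcaster data…",
  };
  return labels[event.tool] ?? `Running ${event.tool}…`;
}
```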

The mini app response also adds details such as linking the source casts that the response originated from.

  4. Final checks and publishing

After the LLM is finished forming a response, we run another set of checks to ensure the response is approved (i.e. it is not leaking anything sensitive, it meets criteria for response length, it is not launching a clanker token, etc.). Once the response passes these checks, we save it to our database and either send the message in the mini app or publish it as a cast response to the user. 
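The approval step could be sketched as a simple predicate over the finished text, with the criteria paraphrased from above; the length limit, regexes, and function name are assumptions, and a production check would be far more thorough:

```typescript
const MAX_RESPONSE_LENGTH = 1024; // assumed length criterion

function approveResponse(text: string): boolean {
  // Length criteria: reject empty or oversized responses.
  if (text.length === 0 || text.length > MAX_RESPONSE_LENGTH) return false;
  // Guard against launching a clanker token (crude pattern for illustration).
  if (/clanker/i.test(text) && /deploy|launch/i.test(text)) return false;
  // Crude sensitive-data guard; a real check would be more sophisticated.
  if (/api[_-]?key|secret/i.test(text)) return false;
  return true;
}
```

Only responses for which this predicate returns `true` are saved and published.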

This process describes the overall lifecycle of a Neynar AI response. Most of the magic that makes Neynar AI more than a thin LLM wrapper is in the infrastructure we’ve built to enable custom tools around Farcaster and onchain data, tools that also leverage our own APIs. Tool usage enables a unique “composability” in what we’re able to achieve because we can continue to plug in new tools as user behaviors and developments reveal emergent use cases.

If you’re interested in using Neynar AI, you can use the mini app at ai.neynar.com or tag @neynar with a request on Farcaster. We're continuously working on new ways to improve Neynar AI and build it into a best-in-class agent on the Farcaster network.
