AGENT

Query Consistency

Ask about the current data consistency of subgraphs in real time. As you use the tool, it "learns" more information that it can leverage in future investigations.

All of the Dify workflows the agent uses are shared below.

Subgraph POI Cross-Check ← downloadable

This workflow is designed to validate and cross-check query results from multiple indexers on a given subgraph deployment. It begins by pulling a list of active indexers using an HTTP request. The workflow then iterates through each indexer, sending queries to retrieve Proof of Indexing (POI) data. A consensus calculation is performed to determine the most common POI among the indexers. Additionally, it extracts subgraph features such as API version, spec version, and network details for comprehensive analysis. The final step involves writing this data into a Supabase database for storage and further analysis. This process ensures that discrepancies in indexing are identified and documented efficiently.
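The consensus step can be pictured with a short Python sketch: ask each indexer's status endpoint for its POI and keep the most common answer. This is a minimal sketch, assuming a `publicProofsOfIndexing` query along the lines of graph-node's index-node API; argument and field names may differ from what the workflow actually sends.

```python
# Minimal sketch of the POI consensus step (assumed query/field names).
from collections import Counter
import requests

POI_QUERY = """
query ($deployment: String!, $block: Int!) {
  publicProofsOfIndexing(requests: [{ deployment: $deployment, blockNumber: $block }]) {
    deployment
    proofOfIndexing
  }
}
"""

def fetch_poi(status_url: str, deployment: str, block: int) -> str | None:
    # Ask one indexer's status endpoint for its POI at the given block.
    resp = requests.post(
        status_url,
        json={"query": POI_QUERY,
              "variables": {"deployment": deployment, "block": block}},
        timeout=30,
    )
    resp.raise_for_status()
    results = resp.json().get("data", {}).get("publicProofsOfIndexing", [])
    return results[0]["proofOfIndexing"] if results else None

def poi_consensus(status_urls: list[str], deployment: str, block: int):
    # Collect every indexer's POI and return the most common one plus the raw map.
    pois = {url: fetch_poi(url, deployment, block) for url in status_urls}
    counts = Counter(p for p in pois.values() if p)
    consensus = counts.most_common(1)[0][0] if counts else None
    return consensus, pois
```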

Indexer Divergence Block ← downloadable

This workflow is designed to identify the block where two indexers diverge in their live Proof of Indexing (POI). It begins by taking input parameters such as the IPFS subgraph deployment hash and the status URLs of the two indexers. The process involves fetching content from IPFS to determine the minimum start block value using a Python script. An HTTP request is made to retrieve the end block number from a GraphQL response. Another HTTP request identifies the divergence block by querying an external API server. The response is processed to extract divergence details such as the divergent block, queries, and duration. Finally, these results are written into a Supabase database for record-keeping.
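The external API server does the actual search, but the underlying idea can be sketched as a bisection over the block range, reusing the hypothetical `fetch_poi` helper from the sketch above:

```python
# Sketch of the bisection behind locating the divergence block. Assumes POIs
# agree for every block before the divergence point and differ from it onward.
def find_divergence_block(indexer_a: str, indexer_b: str, deployment: str,
                          start_block: int, end_block: int) -> int | None:
    """Return the first block where the two indexers' POIs differ, or None."""
    if fetch_poi(indexer_a, deployment, end_block) == fetch_poi(indexer_b, deployment, end_block):
        return None  # no divergence anywhere in the range
    lo, hi = start_block, end_block
    while lo < hi:
        mid = (lo + hi) // 2
        if fetch_poi(indexer_a, deployment, mid) == fetch_poi(indexer_b, deployment, mid):
            lo = mid + 1  # still in agreement at mid; divergence is later
        else:
            hi = mid      # already diverged at mid; look earlier
    return lo
```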

Execute Subgraph Queries ← downloadable

This workflow is designed to execute GraphQL queries on a specified subgraph deployment. It begins by receiving a GraphQL query and a subgraph deployment ID as input. The 'Sanitize String' node processes the query string to remove unnecessary whitespace and escape quotes for JSON compatibility. The sanitized query is then sent via an HTTP request to the specified subgraph using the 'GraphQL Request' node. Once the response is received, the 'Check Long Length' node evaluates whether the response exceeds 10,000 characters. If it does, the workflow branches to use an LLM (GPT-4o-mini) in the 'Summarize Response' node to generate a concise summary of the lengthy response. The workflow concludes by returning either the original or the summarized results, depending on this evaluation.
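A rough Python equivalent of the sanitize-and-query path (the LLM summarization branch is left out); the gateway URL below is a placeholder, not the endpoint the workflow is configured with:

```python
# Sketch of Sanitize String -> GraphQL Request -> Check Long Length.
import json
import requests

GATEWAY_URL = "https://example-gateway/subgraphs/id/{deployment_id}"  # placeholder
SUMMARY_THRESHOLD = 10_000  # characters

def sanitize(query: str) -> str:
    # Collapse whitespace; quote escaping is handled by the JSON encoder below.
    return " ".join(query.split())

def execute_subgraph_query(query: str, deployment_id: str) -> tuple[str, bool]:
    resp = requests.post(
        GATEWAY_URL.format(deployment_id=deployment_id),
        json={"query": sanitize(query)},
        timeout=30,
    )
    resp.raise_for_status()
    body = json.dumps(resp.json())
    return body, len(body) > SUMMARY_THRESHOLD  # True means "hand off to the summarizer"
```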

Identify Contracts Transactions ← downloadable

This workflow is designed to identify transactions associated with a specific smart contract on a given blockchain block. The process begins by extracting the indexed chain from an IPFS subgraph deployment hash using the 'pull_indexed_chain_from_deployment_hash' tool. It then converts the blockchain format for compatibility with Flipside's naming conventions through the 'Convert Blockchain Format' node. If the network is supported, it queries Flipside's API for transaction data using Python code in the 'Flipside Data' node. If successful, it writes this data to a Supabase database via another Python script in the 'Write to Database' node. In cases where no data is returned or if the network isn't supported, appropriate responses are generated.
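The conversion and lookup steps might look roughly like the sketch below, assuming the `flipside` Python SDK's `Flipside(api_key, api_url).query(sql)` interface; the chain mapping and the table/column names are illustrative rather than copied from the workflow:

```python
# Illustrative chain-name conversion plus a Flipside lookup (assumed SDK usage).
from flipside import Flipside

# Map graph-node network names to Flipside schema names (illustrative subset).
CHAIN_TO_FLIPSIDE = {
    "mainnet": "ethereum",
    "matic": "polygon",
    "arbitrum-one": "arbitrum",
    "optimism": "optimism",
}

def contract_transactions(api_key: str, chain: str, contract: str, block: int):
    schema = CHAIN_TO_FLIPSIDE.get(chain)
    if schema is None:
        return None  # network not supported by the workflow
    sql = f"""
        SELECT tx_hash, from_address, to_address, block_number
        FROM {schema}.core.fact_transactions
        WHERE block_number = {block}
          AND to_address = LOWER('{contract}')
    """
    client = Flipside(api_key, "https://api-v2.flipsidecrypto.xyz")
    return client.query(sql).records
```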

Supabase Query Executor ← downloadable

Indexer Entity Change Lookup ← downloadable

This workflow is designed to look up entity changes in a specific block for a given subgraph using an indexer. It starts by receiving input parameters such as the subgraph deployment ID, indexer status URL, and block number. An HTTP request node sends these parameters to retrieve the entity changes from the specified block. The response is then processed by a Python code node that checks whether the length of the query results exceeds 10,000 characters. If it does, an LLM node summarizes the long response using OpenAI's GPT-4o-mini model. Depending on whether the response needs summarization, the workflow ends with either the original (short) results or the summarized results.
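A minimal sketch of the lookup request, assuming the status endpoint supports an `entityChangesInBlock` query along the lines of graph-node's index-node API (field names may vary by version):

```python
# Sketch of the entity-change lookup and the 10,000-character check.
import json
import requests

ENTITY_CHANGES_QUERY = """
query ($id: String!, $block: Int!) {
  entityChangesInBlock(subgraphId: $id, blockNumber: $block) {
    updates { type entities }
    deletions { type entities }
  }
}
"""

def entity_changes(status_url: str, deployment_id: str, block: int) -> tuple[str, bool]:
    resp = requests.post(
        status_url,
        json={"query": ENTITY_CHANGES_QUERY,
              "variables": {"id": deployment_id, "block": block}},
        timeout=30,
    )
    resp.raise_for_status()
    body = json.dumps(resp.json())
    return body, len(body) > 10_000  # True flags the summarization branch
```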

Indexer Entity Change Diff ← downloadable

This workflow is designed to compare entity changes between two indexers for a specified subgraph deployment and block number. It begins with an HTTP request to each indexer's status URL to retrieve entity change data. The responses are then aggregated and passed through a Python script that checks for differences between the two sets of data. If differences are found, they are further analyzed to determine if they exceed a certain length. If so, the differences are summarized using an LLM model. Finally, all relevant information including whether the blocks are identical or not, any differences found, and potential errors are logged into a Supabase database for record-keeping.
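The comparison itself can be sketched as a straight equality check, producing a unified diff only when the payloads differ; the payload shape is assumed to match the lookup sketch above:

```python
# Sketch of the diff step between two indexers' entity-change payloads.
import difflib
import json

def diff_entity_changes(changes_a: dict, changes_b: dict) -> dict:
    identical = changes_a == changes_b
    differences = ""
    if not identical:
        a_text = json.dumps(changes_a, sort_keys=True, indent=2).splitlines()
        b_text = json.dumps(changes_b, sort_keys=True, indent=2).splitlines()
        differences = "\n".join(difflib.unified_diff(
            a_text, b_text, fromfile="indexer_a", tofile="indexer_b", lineterm=""))
    return {
        "blocks_identical": identical,
        "differences": differences,  # summarized by the LLM node if too long
    }
```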

Visualize Metric by Day ← downloadable

This workflow is designed to process JSON data containing time series information and visualize it using ECharts. It begins with a 'Start' node where the user inputs JSON data. The 'Create Chart' node, implemented in Python, parses this data to extract dates and various metrics. It then constructs an ECharts configuration object that defines how the data should be visualized as a line chart. The chart includes features like axis tooltips, legends, and color-coded lines for each metric. Finally, the 'End' node outputs the generated ECharts configuration as a formatted string ready for rendering.
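A condensed version of that chart-building logic: rows of date-keyed metrics become an ECharts line-chart option. The input shape is an assumption; the real node adapts to whatever JSON the agent passes in:

```python
# Build an ECharts line-chart option from time-series rows (assumed input shape).
import json

def build_echarts_option(rows: list[dict]) -> str:
    dates = [row["date"] for row in rows]
    metrics = [key for key in rows[0] if key != "date"]
    option = {
        "tooltip": {"trigger": "axis"},
        "legend": {"data": metrics},
        "xAxis": {"type": "category", "data": dates},
        "yAxis": {"type": "value"},
        "series": [
            {"name": metric, "type": "line", "data": [row.get(metric) for row in rows]}
            for metric in metrics
        ],
    }
    return json.dumps(option, indent=2)  # string output, ready for rendering

# Example:
# build_echarts_option([{"date": "2024-01-01", "queries": 120},
#                       {"date": "2024-01-02", "queries": 95}])
```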

IPFS content pull ← downloadable

This workflow is designed to fetch content from IPFS (InterPlanetary File System) by converting various forms of IPFS URLs into valid API requests. It starts with a 'Start' node that requires an input of an IPFS hash. The 'Extract Content' node processes this input, handling different cases such as full ipfs:// URLs or bare IPFS hashes starting with 'Qm'. It constructs a proper URL for the API request using The Graph's IPFS API endpoint. The node then sends a GET request to fetch the content, handling JSON responses or raw text if JSON parsing fails. Finally, the 'Return IPFS Content' end node outputs the fetched content, completing the process.
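A minimal sketch of the normalization and fetch, using The Graph's public IPFS endpoint as the description implies; note that some IPFS nodes only accept POST on `/api/v0` routes, so the method may need adjusting:

```python
# Sketch of the Extract Content logic: normalize the reference, then fetch it.
import requests

IPFS_API = "https://api.thegraph.com/ipfs/api/v0/cat?arg={hash}"

def pull_ipfs_content(ipfs_ref: str):
    # Accept "ipfs://Qm...", ".../ipfs/Qm...", or a bare "Qm..." hash.
    ref = ipfs_ref.strip()
    if ref.startswith("ipfs://"):
        ref = ref[len("ipfs://"):]
    elif "/ipfs/" in ref:
        ref = ref.split("/ipfs/", 1)[1]
    resp = requests.get(IPFS_API.format(hash=ref), timeout=30)
    resp.raise_for_status()
    try:
        return resp.json()   # structured content when it parses as JSON
    except ValueError:
        return resp.text     # otherwise return raw text (e.g. YAML manifests)
```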