diff --git a/website/pages/ar/cookbook/base-testnet.mdx b/website/pages/ar/cookbook/base-testnet.mdx
deleted file mode 100644
index a32276dd1875..000000000000
--- a/website/pages/ar/cookbook/base-testnet.mdx
+++ /dev/null
@@ -1,111 +0,0 @@
----
-title: Building Subgraphs on Base
----
-
-This guide will quickly take you through how to initialize, create, and deploy your subgraph on the Base testnet.
-
-What you'll need:
-
-- A Base Sepolia testnet contract address
-- A crypto wallet (e.g. MetaMask or Coinbase Wallet)
-
-## Subgraph Studio
-
-### 1. Install the Graph CLI
-
-The Graph CLI (>=v0.41.0) is written in JavaScript and you will need to have either `npm` or `yarn` installed to use it.
-
-```sh
-# NPM
-npm install -g @graphprotocol/graph-cli
-
-# Yarn
-yarn global add @graphprotocol/graph-cli
-```
-
-### 2. Create your subgraph in Subgraph Studio
-
-Go to [Subgraph Studio](https://thegraph.com/studio/) and connect your crypto wallet.
-
-Once connected, click "Create a Subgraph", enter a name for your subgraph, and click Create a Subgraph.
-
-### 3. Initialize your Subgraph
-
-> You can find specific commands for your subgraph in Subgraph Studio.
-
-Make sure that graph-cli is updated to the latest version (above 0.41.0):
-
-```sh
-graph --version
-```
-
-Initialize your subgraph from an existing contract.
-
-```sh
-graph init --studio
-```
-
-Your subgraph slug is an identifier for your subgraph. The CLI tool will walk you through the steps for creating a subgraph, including:
-
-- Protocol: ethereum
-- Subgraph slug: ``
-- Directory to create the subgraph in: ``
-- Ethereum network: base-sepolia
-- Contract address: ``
-- Start block (optional)
-- Contract name: ``
-- Yes/no to indexing events (yes means your subgraph will be bootstrapped with entities in the schema and simple mappings for emitted events)
-
-### 4. Write your Subgraph
-
-> If emitted events are the only thing you want to index, then no additional work is required, and you can skip to the next step.
-
-The previous command creates a scaffold subgraph that you can use as a starting point for building your subgraph. When making changes to the subgraph, you will mainly work with three files:
-
-- Manifest (subgraph.yaml) - The manifest defines what data sources your subgraph will index. Make sure to add `base-sepolia` as the network name in the manifest file to deploy your subgraph on Base Sepolia.
-- Schema (schema.graphql) - The GraphQL schema defines what data you wish to retrieve from the subgraph.
-- AssemblyScript Mappings (mapping.ts) - This is the code that translates data from your data sources to the entities defined in the schema.
-
-If you want to index additional data, you will need to extend the manifest, schema, and mappings.
-
-For more information on how to write your subgraph, see [Creating a Subgraph](/developing/creating-a-subgraph).
-
-### 5. Deploy to Subgraph Studio
-
-Before you can deploy your subgraph, you will need to authenticate with Subgraph Studio. You can do this by running the following command:
-
-```sh
-graph auth --studio
-```
-
-Next, enter your subgraph's directory.
-
-```sh
-cd
-```
-
-Build your subgraph with the following command:
-
-```sh
-graph codegen && graph build
-```
-
-Finally, you can deploy your subgraph using this command:
-
-```sh
-graph deploy --studio
-```
-
-### 6. Query your subgraph
-
-Once your subgraph is deployed, you can query it from your dapp using the `Development Query URL` in Subgraph Studio.
-
-Note: The Studio API is rate-limited, so it should preferably be used for development and testing.
-
-To learn more about querying data from your subgraph, see the [Querying a Subgraph](/querying/querying-the-graph) page.
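To sketch what querying from a dapp can look like, here is a minimal Python example using only the standard library. The URL and entity name below are placeholders for illustration, not values from this guide; substitute your own Development Query URL from Subgraph Studio.

```python
import json
from urllib import request

def build_graphql_request(url: str, query: str) -> request.Request:
    # Wraps a GraphQL query in the JSON body shape the endpoint expects.
    payload = json.dumps({"query": query}).encode("utf-8")
    return request.Request(url, data=payload, headers={"Content-Type": "application/json"})

# Hypothetical Development Query URL copied from Subgraph Studio.
url = "https://api.studio.thegraph.com/query/0000/example-subgraph/v0.0.1"
query = "{ exampleEntities(first: 5) { id } }"

req = build_graphql_request(url, query)
# response = json.load(request.urlopen(req))  # run against a real deployment
```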
diff --git a/website/pages/ar/cookbook/grafting-hotfix.mdx b/website/pages/ar/cookbook/grafting-hotfix.mdx
new file mode 100644
index 000000000000..4be0a0b07790
--- /dev/null
+++ b/website/pages/ar/cookbook/grafting-hotfix.mdx
@@ -0,0 +1,186 @@
+---
+title: Subgraph Best Practice 6 - Use Grafting for Quick Hotfix Deployment
+---
+
+## TLDR
+
+Grafting is a powerful feature in subgraph development that allows you to build and deploy new subgraphs while reusing the indexed data from existing ones.
+
+### Overview
+
+This feature enables quick deployment of hotfixes for critical issues, eliminating the need to re-index the entire subgraph from scratch. By preserving historical data, grafting minimizes downtime and ensures continuity in data services.
+
+## Benefits of Grafting for Hotfixes
+
+1. **Rapid Deployment**
+
+   - **Minimize Downtime**: When a subgraph encounters a critical error and stops indexing, grafting enables you to deploy a fix immediately without waiting for re-indexing.
+   - **Immediate Recovery**: The new subgraph continues from the last indexed block, ensuring that data services remain uninterrupted.
+
+2. **Data Preservation**
+
+   - **Reuse Historical Data**: Grafting copies the existing data from the base subgraph, so you don’t lose valuable historical records.
+   - **Consistency**: Maintains data continuity, which is crucial for applications relying on consistent historical data.
+
+3. **Efficiency**
+   - **Save Time and Resources**: Avoids the computational overhead of re-indexing large datasets.
+   - **Focus on Fixes**: Allows developers to concentrate on resolving issues rather than managing data recovery.
+
+## Best Practices When Using Grafting for Hotfixes
+
+1. **Initial Deployment Without Grafting**
+
+   - **Start Clean**: Always deploy your initial subgraph without grafting to ensure that it’s stable and functions as expected.
+   - **Test Thoroughly**: Validate the subgraph’s performance to minimize the need for future hotfixes.
+
+2. 
**Implementing the Hotfix with Grafting** + + - **Identify the Issue**: When a critical error occurs, determine the block number of the last successfully indexed event. + - **Create a New Subgraph**: Develop a new subgraph that includes the hotfix. + - **Configure Grafting**: Use grafting to copy data up to the identified block number from the failed subgraph. + - **Deploy Quickly**: Publish the grafted subgraph to restore service as soon as possible. + +3. **Post-Hotfix Actions** + + - **Monitor Performance**: Ensure the grafted subgraph is indexing correctly and the hotfix resolves the issue. + - **Republish Without Grafting**: Once stable, deploy a new version of the subgraph without grafting for long-term maintenance. + > Note: Relying on grafting indefinitely is not recommended as it can complicate future updates and maintenance. + - **Update References**: Redirect any services or applications to use the new, non-grafted subgraph. + +4. **Important Considerations** + - **Careful Block Selection**: Choose the graft block number carefully to prevent data loss. + - **Tip**: Use the block number of the last correctly processed event. + - **Use Deployment ID**: Ensure you reference the Deployment ID of the base subgraph, not the Subgraph ID. + - **Note**: The Deployment ID is the unique identifier for a specific subgraph deployment. + - **Feature Declaration**: Remember to declare grafting in the subgraph manifest under features. + +## Example: Deploying a Hotfix with Grafting + +Suppose you have a subgraph tracking a smart contract that has stopped indexing due to a critical error. Here’s how you can use grafting to deploy a hotfix. + +1. 
**Failed Subgraph Manifest (subgraph.yaml)** + + ```yaml + specVersion: 1.0.0 + schema: + file: ./schema.graphql + dataSources: + - kind: ethereum/contract + name: OldSmartContract + network: sepolia + source: + address: '0xOldContractAddress' + abi: Lock + startBlock: 5000000 + mapping: + kind: ethereum/events + apiVersion: 0.0.7 + language: wasm/assemblyscript + entities: + - Withdrawal + abis: + - name: Lock + file: ./abis/OldLock.json + eventHandlers: + - event: Withdrawal(uint256,uint256) + handler: handleOldWithdrawal + file: ./src/old-lock.ts + ``` + +2. **New Grafted Subgraph Manifest (subgraph.yaml)** + ```yaml + specVersion: 1.0.0 + schema: + file: ./schema.graphql + dataSources: + - kind: ethereum/contract + name: NewSmartContract + network: sepolia + source: + address: '0xNewContractAddress' + abi: Lock + startBlock: 6000001 # Block after the last indexed block + mapping: + kind: ethereum/events + apiVersion: 0.0.7 + language: wasm/assemblyscript + entities: + - Withdrawal + abis: + - name: Lock + file: ./abis/Lock.json + eventHandlers: + - event: Withdrawal(uint256,uint256) + handler: handleWithdrawal + file: ./src/lock.ts + features: + - grafting + graft: + base: QmBaseDeploymentID # Deployment ID of the failed subgraph + block: 6000000 # Last successfully indexed block + ``` + +**Explanation:** + +- **Data Source Update**: The new subgraph points to 0xNewContractAddress, which may be a fixed version of the smart contract. +- **Start Block**: Set to one block after the last successfully indexed block to avoid reprocessing the error. +- **Grafting Configuration**: + - **base**: Deployment ID of the failed subgraph. + - **block**: Block number where grafting should begin. + +3. **Deployment Steps** + + - **Update the Code**: Implement the hotfix in your mapping scripts (e.g., handleWithdrawal). + - **Adjust the Manifest**: As shown above, update the `subgraph.yaml` with grafting configurations. 
+ - **Deploy the Subgraph**: + - Authenticate with the Graph CLI. + - Deploy the new subgraph using `graph deploy`. + +4. **Post-Deployment** + - **Verify Indexing**: Check that the subgraph is indexing correctly from the graft point. + - **Monitor Data**: Ensure that new data is being captured and the hotfix is effective. + - **Plan for Republish**: Schedule the deployment of a non-grafted version for long-term stability. + +## Warnings and Cautions + +While grafting is a powerful tool for deploying hotfixes quickly, there are specific scenarios where it should be avoided to maintain data integrity and ensure optimal performance. + +- **Incompatible Schema Changes**: If your hotfix requires altering the type of existing fields or removing fields from your schema, grafting is not suitable. Grafting expects the new subgraph’s schema to be compatible with the base subgraph’s schema. Incompatible changes can lead to data inconsistencies and errors because the existing data won’t align with the new schema. +- **Significant Mapping Logic Overhauls**: When the hotfix involves substantial modifications to your mapping logic—such as changing how events are processed or altering handler functions—grafting may not function correctly. The new logic might not be compatible with the data processed under the old logic, leading to incorrect data or failed indexing. +- **Deployments to The Graph Network**: Grafting is not recommended for subgraphs intended for The Graph’s decentralized network (mainnet). It can complicate indexing and may not be fully supported by all Indexers, potentially causing unexpected behavior or increased costs. For mainnet deployments, it’s safer to re-index the subgraph from scratch to ensure full compatibility and reliability. + +### Risk Management + +- **Data Integrity**: Incorrect block numbers can lead to data loss or duplication. +- **Testing**: Always test grafting in a development environment before deploying to production. 
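As a quick sanity check of the block-selection rules above, the relationship between the graft block and the new `startBlock` can be scripted before deploying. This is only a sketch using the example manifest values; the `Qm` prefix check merely reflects the usual IPFS-style form of Deployment IDs and is an assumption, not a protocol guarantee.

```python
def check_graft_config(base: str, graft_block: int, start_block: int) -> list:
    # Collects likely misconfigurations; an empty list means no problems found.
    problems = []
    if not base.startswith("Qm"):
        problems.append("graft.base should be a Deployment ID (IPFS-style, usually 'Qm...'), not a Subgraph ID")
    if start_block != graft_block + 1:
        problems.append("startBlock should be one block after the graft block to avoid reprocessing")
    return problems

# Values mirroring the example manifests above.
problems = check_graft_config("QmBaseDeploymentID", 6_000_000, 6_000_001)
```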
+ +## Conclusion + +Grafting is an effective strategy for deploying hotfixes in subgraph development, enabling you to: + +- **Quickly Recover** from critical errors without re-indexing. +- **Preserve Historical Data**, maintaining continuity for applications and users. +- **Ensure Service Availability** by minimizing downtime during critical fixes. + +However, it’s important to use grafting judiciously and follow best practices to mitigate risks. After stabilizing your subgraph with the hotfix, plan to deploy a non-grafted version to ensure long-term maintainability. + +## Additional Resources + +- **[Grafting Documentation](/cookbook/grafting/)**: Replace a Contract and Keep its History With Grafting +- **[Understanding Deployment IDs](/querying/querying-by-subgraph-id-vs-deployment-id/)**: Learn the difference between Deployment ID and Subgraph ID. + +By incorporating grafting into your subgraph development workflow, you can enhance your ability to respond to issues swiftly, ensuring that your data services remain robust and reliable. + +## Subgraph Best Practices 1-6 + +1. [Improve Query Speed with Subgraph Pruning](/cookbook/pruning/) + +2. [Improve Indexing and Query Responsiveness by Using @derivedFrom](/cookbook/derivedfrom/) + +3. [Improve Indexing and Query Performance by Using Immutable Entities and Bytes as IDs](/cookbook/immutable-entities-bytes-as-ids/) + +4. [Improve Indexing Speed by Avoiding `eth_calls`](/cookbook/avoid-eth-calls/) + +5. [Simplify and Optimize with Timeseries and Aggregations](/cookbook/timeseries/) + +6. 
[Use Grafting for Quick Hotfix Deployment](/cookbook/grafting-hotfix/) diff --git a/website/pages/ar/cookbook/timeseries.mdx b/website/pages/ar/cookbook/timeseries.mdx new file mode 100644 index 000000000000..88ee70005a6e --- /dev/null +++ b/website/pages/ar/cookbook/timeseries.mdx @@ -0,0 +1,194 @@ +--- +title: Subgraph Best Practice 5 - Simplify and Optimize with Timeseries and Aggregations +--- + +## TLDR + +Leveraging the new time-series and aggregations feature in subgraphs can significantly enhance both indexing speed and query performance. + +## Overview + +Timeseries and aggregations reduce data processing overhead and accelerate queries by offloading aggregation computations to the database and simplifying mapping code. This approach is particularly effective when handling large volumes of time-based data. + +## Benefits of Timeseries and Aggregations + +1. Improved Indexing Time + +- Less Data to Load: Mappings handle less data since raw data points are stored as immutable timeseries entities. +- Database-Managed Aggregations: Aggregations are automatically computed by the database, reducing the workload on the mappings. + +2. Simplified Mapping Code + +- No Manual Calculations: Developers no longer need to write complex aggregation logic in mappings. +- Reduced Complexity: Simplifies code maintenance and minimizes the potential for errors. + +3. Dramatically Faster Queries + +- Immutable Data: All timeseries data is immutable, enabling efficient storage and retrieval. +- Efficient Data Separation: Aggregates are stored separately from raw timeseries data, allowing queries to process significantly less data—often several orders of magnitude less. + +### Important Considerations + +- Immutable Data: Timeseries data cannot be altered once written, ensuring data integrity and simplifying indexing. +- Automatic ID and Timestamp Management: id and timestamp fields are automatically managed by graph-node, reducing potential errors. 
+- Efficient Data Storage: By separating raw data from aggregates, storage is optimized, and queries run faster.
+
+## How to Implement Timeseries and Aggregations
+
+### Defining Timeseries Entities
+
+A timeseries entity represents raw data points collected over time. It is defined with the `@entity(timeseries: true)` annotation. Key requirements:
+
+- Immutable: Timeseries entities are always immutable.
+- Mandatory Fields:
+  - `id`: Must be of type `Int8!` and is auto-incremented.
+  - `timestamp`: Must be of type `Timestamp!` and is automatically set to the block timestamp.
+
+Example:
+
+```graphql
+type Data @entity(timeseries: true) {
+  id: Int8!
+  timestamp: Timestamp!
+  price: BigDecimal!
+}
+```
+
+### Defining Aggregation Entities
+
+An aggregation entity computes aggregated values from a timeseries source. It is defined with the `@aggregation` annotation. Key components:
+
+- Annotation Arguments:
+  - `intervals`: Specifies time intervals (e.g., `["hour", "day"]`).
+  - `source`: The timeseries entity the aggregates are computed from (e.g., `"Data"`).
+
+Example:
+
+```graphql
+type Stats @aggregation(intervals: ["hour", "day"], source: "Data") {
+  id: Int8!
+  timestamp: Timestamp!
+  sum: BigDecimal! @aggregate(fn: "sum", arg: "price")
+}
+```
+
+In this example, Stats aggregates the price field from Data over hourly and daily intervals, computing the sum.
+
+### Querying Aggregated Data
+
+Aggregations are exposed via query fields that allow filtering and retrieval based on dimensions and time intervals.
+
+Example:
+
+```graphql
+{
+  tokenStats(
+    interval: "hour"
+    where: { token: "0x1234567890abcdef", timestamp_gte: "1704164640000000", timestamp_lt: "1704251040000000" }
+  ) {
+    id
+    timestamp
+    token {
+      id
+    }
+    totalVolume
+    priceUSD
+    count
+  }
+}
+```
+
+### Using Dimensions in Aggregations
+
+Dimensions are non-aggregated fields used to group data points. They enable aggregations based on specific criteria, such as a token in a financial application.
+
+Example:
+
+### Timeseries Entity
+
+```graphql
+type TokenData @entity(timeseries: true) {
+  id: Int8!
+  timestamp: Timestamp!
+  token: Token!
+  amount: BigDecimal!
+  priceUSD: BigDecimal!
+}
+```
+
+### Aggregation Entity with Dimension
+
+```graphql
+type TokenStats @aggregation(intervals: ["hour", "day"], source: "TokenData") {
+  id: Int8!
+  timestamp: Timestamp!
+  token: Token!
+  totalVolume: BigDecimal! @aggregate(fn: "sum", arg: "amount")
+  priceUSD: BigDecimal! @aggregate(fn: "last", arg: "priceUSD")
+  count: Int8! @aggregate(fn: "count", cumulative: true)
+}
+```
+
+- Dimension Field: token groups the data, so aggregates are computed per token.
+- Aggregates:
+  - totalVolume: Sum of amount.
+  - priceUSD: Last recorded priceUSD.
+  - count: Cumulative count of records.
+
+### Aggregation Functions and Expressions
+
+Supported aggregation functions:
+
+- sum
+- count
+- min
+- max
+- first
+- last
+
+The `arg` in `@aggregate` can be:
+
+- A field name from the timeseries entity.
+- An expression using fields and constants.
+
+### Examples of Aggregation Expressions
+
+- Sum Token Value: `@aggregate(fn: "sum", arg: "priceUSD * amount")`
+- Maximum Positive Amount: `@aggregate(fn: "max", arg: "greatest(amount0, amount1, 0)")`
+- Conditional Sum: `@aggregate(fn: "sum", arg: "case when amount0 > amount1 then amount0 else 0 end")`
+
+Supported operators and functions include basic arithmetic (+, -, *, /), comparison operators, logical operators (and, or, not), and SQL functions such as greatest, least, coalesce, etc.
+
+### Query Parameters
+
+- interval: Specifies the time interval (e.g., "hour").
+- where: Filters based on dimensions and timestamp ranges.
+- timestamp_gte / timestamp_lt: Filters for start and end times (microseconds since epoch).
+
+### Notes
+
+- Sorting: Results are automatically sorted by timestamp and id in descending order.
+- Current Data: An optional current argument can include the current, partially filled interval.
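Because `timestamp_gte` and `timestamp_lt` take microseconds since the Unix epoch, it can be convenient to derive them from wall-clock times. A minimal sketch, assuming the bounds used in the earlier query example are UTC timestamps:

```python
from datetime import datetime, timezone

def to_micros(dt: datetime) -> int:
    # Converts an aware datetime to microseconds since the Unix epoch.
    return int(dt.timestamp() * 1_000_000)

start = to_micros(datetime(2024, 1, 2, 3, 4, tzinfo=timezone.utc))  # 1704164640000000
end = to_micros(datetime(2024, 1, 3, 3, 4, tzinfo=timezone.utc))    # 1704251040000000
where = {"timestamp_gte": str(start), "timestamp_lt": str(end)}
```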
+
+## Conclusion
+
+Implementing timeseries and aggregations in subgraphs is a best practice for projects dealing with time-based data. This approach:
+
+- Enhances Performance: Speeds up indexing and querying by reducing data processing overhead.
+- Simplifies Development: Eliminates the need for manual aggregation logic in mappings.
+- Scales Efficiently: Handles large volumes of data without compromising on speed or responsiveness.
+
+By adopting this pattern, developers can build more efficient and scalable subgraphs, providing faster and more reliable data access to end-users. To learn more about implementing timeseries and aggregations, refer to the [Timeseries and Aggregations Readme](https://github.com/graphprotocol/graph-node/blob/master/docs/aggregations.md) and consider experimenting with this feature in your subgraphs.
+
+## Subgraph Best Practices 1-6
+
+1. [Improve Query Speed with Subgraph Pruning](/cookbook/pruning/)
+
+2. [Improve Indexing and Query Responsiveness by Using @derivedFrom](/cookbook/derivedfrom/)
+
+3. [Improve Indexing and Query Performance by Using Immutable Entities and Bytes as IDs](/cookbook/immutable-entities-bytes-as-ids/)
+
+4. [Improve Indexing Speed by Avoiding `eth_calls`](/cookbook/avoid-eth-calls/)
+
+5. [Simplify and Optimize with Timeseries and Aggregations](/cookbook/timeseries/)
+
+6. [Use Grafting for Quick Hotfix Deployment](/cookbook/grafting-hotfix/)
diff --git a/website/pages/ar/cookbook/upgrading-a-subgraph.mdx b/website/pages/ar/cookbook/upgrading-a-subgraph.mdx
deleted file mode 100644
index b69433a19c5e..000000000000
--- a/website/pages/ar/cookbook/upgrading-a-subgraph.mdx
+++ /dev/null
@@ -1,156 +0,0 @@
----
-title: Upgrading an Existing Subgraph to The Graph Network
----
-
-## Introduction
-
-This is a guide on how to upgrade your subgraph from the hosted service to The Graph's decentralized network.
Over 1,000 subgraphs have successfully upgraded to The Graph Network, including projects like Snapshot, Loopring, Audius, Premia, Livepeer, Uma, Curve, Lido, and many more!
-
-The process of upgrading is quick and your subgraphs will forever benefit from the reliability and performance that you can only get on The Graph Network.
-
-### Prerequisites
-
-- You have a subgraph deployed on the hosted service.
-
-## Upgrading an Existing Subgraph to The Graph Network
-
-
-
-If you are logged in to the hosted service, you can access a simple flow to upgrade your subgraphs from [your dashboard](https://thegraph.com/hosted-service/dashboard), or from an individual subgraph page.
-
-> This process typically takes less than five minutes.
-
-1. Select the subgraph(s) you want to upgrade.
-2. Connect or enter the receiving wallet (the wallet that will become the owner of the subgraph).
-3. Click the "Upgrade" button.
-
-That's it! Your subgraphs will be deployed to Subgraph Studio and published on The Graph Network. You can access [Subgraph Studio](https://thegraph.com/studio/) to manage your subgraphs, logging in with the wallet specified during the upgrade process.
-
-You'll be able to view your subgraphs live on the decentralized network via [Graph Explorer](https://thegraph.com/explorer).
-
-### What next?
-
-When your subgraph is upgraded, it will automatically be indexed by the upgrade indexer. If the indexed chain is [fully supported by The Graph Network](/developing/supported-networks), you can add some GRT as "signal" to attract more Indexers. It is recommended to curate your subgraph with at least 3,000 GRT to attract 2-3 Indexers for a higher quality of service.
-
-You can start to query your subgraph right away on The Graph Network, once you have generated an API key.
-
-### Create an API key
-
-You can generate an API key in Subgraph Studio [here](https://thegraph.com/studio/apikeys/).
-
-![API key creation page](/img/api-image.png)
-
-You can use this API key to query subgraphs on The Graph Network. All users start on the Free Plan, which includes 100,000 free queries per month. Developers can sign up for the Growth Plan by connecting a credit or debit card, or by depositing GRT to the Subgraph Studio billing system.
-
-> Note: see the [billing documentation](../billing) for more information on plans, and on managing your billing on Subgraph Studio.
-
-### Securing your API key
-
-It is recommended that you secure your API key by limiting its usage in two ways:
-
-1. Authorized Subgraphs
-2. Authorized Domain
-
-You can secure your API key [here](https://thegraph.com/studio/apikeys/).
-
-![Subgraph lockdown page](/img/subgraph-lockdown.png)
-
-### Querying your subgraph on the decentralized network
-
-Now you can check the indexing status of the Indexers on the network in Graph Explorer (example [here](https://thegraph.com/explorer/subgraphs/Dtj2HicXKpoUjNB7ffdBkMwt3L9Sz3cbENd67AdHu6Vb?view=Indexers&chain=arbitrum-one)). The green line at the top indicates that, at the time of posting, 7 Indexers had successfully indexed that subgraph. You can also see in the Indexer tab which Indexers picked up your subgraph.
-
-![Rocket Pool subgraph](/img/rocket-pool-subgraph.png)
-
-As soon as the first Indexer has fully indexed your subgraph, you can start to query it on the decentralized network. To retrieve the query URL for your subgraph, you can copy/paste it by clicking on the symbol next to the query URL. You will see something like this:
-
-`https://gateway.thegraph.com/api/[api-key]/subgraphs/id/S9ihna8D733WTEShJ1KctSTCvY1VJ7gdVwhUujq4Ejo`
-
-Important: Make sure to replace `[api-key]` with an actual API key generated in the section above.
-
-You can now use that Query URL in your dapp to send your GraphQL requests.
-
-Congratulations! You are now a pioneer of decentralization!
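Substituting the API key into the gateway URL can be done with a small helper; the key below is a placeholder for illustration, not a real credential.

```python
def gateway_url(api_key: str, subgraph_id: str) -> str:
    # Builds the query URL shape shown above, with the API key filled in.
    return f"https://gateway.thegraph.com/api/{api_key}/subgraphs/id/{subgraph_id}"

url = gateway_url("my-api-key", "S9ihna8D733WTEShJ1KctSTCvY1VJ7gdVwhUujq4Ejo")
```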
-
-> Note: Due to the distributed nature of the network, it might be the case that different Indexers have indexed up to different blocks. In order to only receive fresh data, you can specify the minimum block an Indexer has to have indexed in order to serve your query with the `block: { number_gte: $minBlock }` field argument, as shown in the example below:
-
-```graphql
-{
-  stakers(block: { number_gte: 14486109 }) {
-    id
-  }
-}
-```
-
-More information about the nature of the network and how to handle re-orgs is described in the documentation article [Distributed Systems](/querying/distributed-systems/).
-
-## Updating a Subgraph on the Network
-
-If you would like to update an existing subgraph on the network, you can do this by deploying a new version of your subgraph to Subgraph Studio using the Graph CLI.
-
-1. Make changes to your current subgraph.
-2. Deploy the following and specify the new version in the command (e.g. v0.0.1, v0.0.2, etc.):
-
-```sh
-graph deploy --studio --version
-```
-
-3. Test the new version in Subgraph Studio by querying in the playground.
-4. Publish the new version on The Graph Network. Remember that this requires gas (as described in the section above).
-
-### Owner Update Fee: Deep Dive
-
-> Note: Curation on Arbitrum has a flat bonding curve. Learn more about Arbitrum [here](/arbitrum/arbitrum-faq/).
-
-An update requires GRT to be migrated from the old version of the subgraph to the new version. This means that for every update, a new bonding curve will be created (more on bonding curves [here](/network/curating#bonding-curve-101)).
-
-The new bonding curve charges a 2.5% curation tax on all GRT being migrated to the new version. The owner must pay 50% of this, or 1.25%. The other 1.25% is absorbed by all the curators as a fee. This incentive design is in place to prevent an owner of a subgraph from being able to drain all their curators' funds with recursive update calls.
If there is no curation activity, you will have to pay a minimum of 100 GRT in order to signal your own subgraph.
-
-Consider the following example (this only applies if your subgraph is being actively curated on):
-
-- 100,000 GRT is signaled using auto-migrate on v1 of a subgraph
-- The owner updates to v2. 100,000 GRT is migrated to a new bonding curve, where 97,500 GRT is put into the new curve and 2,500 GRT is burned
-- The owner then has 1,250 GRT burned to pay half of the fee. The owner must have this in their wallet before the update; otherwise, the update will not succeed. This happens in the same transaction as the update.
-
-_While this mechanism is currently live on the network, the community is currently discussing ways to reduce the cost of updates for subgraph developers._
-
-### Maintaining a Stable Version of a Subgraph
-
-If you're making a lot of changes to your subgraph, it is not a good idea to continually update it and front the update costs. Maintaining a stable and consistent version of your subgraph is critical, not only from a cost perspective but also so that Indexers can feel confident in their syncing times. Indexers should be notified when you plan an update so that their syncing times are not impacted. Feel free to leverage the [#Indexers channel](https://discord.gg/JexvtHa7dq) on Discord to let Indexers know when you're versioning your subgraphs.
-
-Subgraphs are open APIs that external developers are leveraging. Open APIs need to follow strict standards so that they do not break external developers' applications. In The Graph Network, a subgraph developer must consider Indexers and how long it takes them to sync a new subgraph **as well as** other developers who are using their subgraphs.
-
-### Updating the Metadata of a Subgraph
-
-You can update the metadata of your subgraphs without having to publish a new version. The metadata includes the subgraph name, image, description, website URL, source code URL, and categories.
Developers can do this by updating their subgraph details in Subgraph Studio where you can edit all applicable fields. - -Make sure **Update Subgraph Details in Explorer** is checked and click on **Save**. If this is checked, an on-chain transaction will be generated that updates subgraph details in the Explorer without having to publish a new version with a new deployment. - -## Best Practices for Deploying a Subgraph to The Graph Network - -1. Leveraging an ENS name for Subgraph Development: - -- Set up your ENS [here](https://app.ens.domains/) -- Add your ENS name to your settings [here](https://thegraph.com/explorer/settings?view=display-name). - -2. The more filled out your profiles are, the better the chances for your subgraphs to be indexed and curated. - -## Deprecating a Subgraph on The Graph Network - -Follow the steps [here](/managing/transfer-and-deprecate-a-subgraph) to deprecate your subgraph and remove it from The Graph Network. - -## Querying a Subgraph + Billing on The Graph Network - -The hosted service was set up to allow developers to deploy their subgraphs without any restrictions. - -On The Graph Network, query fees have to be paid as a core part of the protocol's incentives. For more information on subscribing to APIs and paying the query fees, check out billing documentation [here](/billing/). - -## مصادر إضافية - -If you're still confused, fear not! 
Check out the following resources or watch our video guide on upgrading subgraphs to the decentralized network below:
-
-
-
-- [The Graph Network Contracts](https://github.com/graphprotocol/contracts)
-- [Curation Contract](https://github.com/graphprotocol/contracts/blob/dev/contracts/curation/Curation.sol) - the underlying contract that the GNS wraps around
-  - Address - `0x8fe00a685bcb3b2cc296ff6ffeab10aca4ce1538`
-- [Subgraph Studio documentation](/deploying/subgraph-studio)
diff --git a/website/pages/cs/cookbook/base-testnet.mdx b/website/pages/cs/cookbook/base-testnet.mdx
deleted file mode 100644
index c38c8030cc27..000000000000
--- a/website/pages/cs/cookbook/base-testnet.mdx
+++ /dev/null
@@ -1,111 +0,0 @@
----
-title: Building Subgraphs on Base
----
-
-This guide will quickly take you through how to initialize, create, and deploy your subgraph on the Base testnet.
-
-What you'll need:
-
-- A Base Sepolia testnet contract address
-- A crypto wallet (e.g. MetaMask or Coinbase Wallet)
-
-## Subgraph Studio
-
-### 1. Install the Graph CLI
-
-The Graph CLI (>=v0.41.0) is written in JavaScript and you will need to have either `npm` or `yarn` installed to use it.
-
-```sh
-# NPM
-npm install -g @graphprotocol/graph-cli

-# Yarn
-yarn global add @graphprotocol/graph-cli
-```
-
-### 2. Create your subgraph in Subgraph Studio
-
-Go to [Subgraph Studio](https://thegraph.com/studio/) and connect your crypto wallet.
-
-Once connected, click "Create a Subgraph", enter a name for your subgraph, and click Create a Subgraph.
-
-### 3. Initialize your subgraph
-
-> You can find specific commands for your subgraph in Subgraph Studio.
-
-Make sure that graph-cli is updated to the latest version (above 0.41.0):
-
-```sh
-graph --version
-```
-
-Initialize your subgraph from an existing contract.
-
-```sh
-graph init --studio
-```
-
-Your subgraph slug is an identifier for your subgraph.
The CLI tool will walk you through the steps for creating a subgraph, including:
-
-- Protocol: ethereum
-- Subgraph slug: ``
-- Directory to create the subgraph in: ``
-- Ethereum network: base-sepolia
-- Contract address: ``
-- Start block (optional)
-- Contract name: ``
-- Yes/no to indexing events (yes means your subgraph will be bootstrapped with entities in the schema and simple mappings for emitted events)
-
-### 4. Write your subgraph
-
-> If emitted events are the only thing you want to index, then no additional work is required, and you can skip to the next step.
-
-The previous command creates a scaffold subgraph that you can use as a starting point for building your subgraph. When making changes to the subgraph, you will mainly work with three files:
-
-- Manifest (subgraph.yaml) - The manifest defines what data sources your subgraph will index. Make sure to add `base-sepolia` as the network name in the manifest file to deploy your subgraph on Base Sepolia.
-- Schema (schema.graphql) - The GraphQL schema defines what data you wish to retrieve from the subgraph.
-- AssemblyScript Mappings (mapping.ts) - This is the code that translates data from your data sources to the entities defined in the schema.
-
-If you want to index additional data, you will need to extend the manifest, schema, and mappings.
-
-For more information on how to write your subgraph, see [Creating a Subgraph](/developing/creating-a-subgraph).
-
-### 5. Deploy to Subgraph Studio
-
-Before you can deploy your subgraph, you will need to authenticate with Subgraph Studio. You can do this by running the following command:
-
-```sh
-graph auth --studio
-```
-
-Next, enter your subgraph's directory.
-
-```sh
-cd
-```
-
-Build your subgraph with the following command:
-
-```sh
-graph codegen && graph build
-```
-
-Finally, you can deploy your subgraph using this command:
-
-```sh
-graph deploy --studio
-```
-
-### 6. Query your subgraph
Query your subgraph - -Once your subgraph is deployed, you can query it from your dapp using the `Development Query URL` in Subgraph Studio. - -Note - the Studio API is rate-limited. Hence, it should preferably be used for development and testing. - -To learn more about querying data from your subgraph, see the [Querying a Subgraph](/querying/querying-the-graph) page. diff --git a/website/pages/cs/cookbook/grafting-hotfix.mdx b/website/pages/cs/cookbook/grafting-hotfix.mdx new file mode 100644 index 000000000000..4be0a0b07790 --- /dev/null +++ b/website/pages/cs/cookbook/grafting-hotfix.mdx @@ -0,0 +1,186 @@ +--- +title: Subgraph Best Practice 6 - Use Grafting for Quick Hotfix Deployment +--- + +## TLDR + +Grafting is a powerful feature in subgraph development that allows you to build and deploy new subgraphs while reusing the indexed data from existing ones. + +### Overview + +This feature enables quick deployment of hotfixes for critical issues, eliminating the need to re-index the entire subgraph from scratch. By preserving historical data, grafting minimizes downtime and ensures continuity in data services. + +## Benefits of Grafting for Hotfixes + +1. **Rapid Deployment** + + - **Minimize Downtime**: When a subgraph encounters a critical error and stops indexing, grafting enables you to deploy a fix immediately without waiting for re-indexing. + - **Immediate Recovery**: The new subgraph continues from the last indexed block, ensuring that data services remain uninterrupted. + +2. **Data Preservation** + + - **Reuse Historical Data**: Grafting copies the existing data from the base subgraph, so you don’t lose valuable historical records. + - **Consistency**: Maintains data continuity, which is crucial for applications relying on consistent historical data. + +3. **Efficiency** + - **Save Time and Resources**: Avoids the computational overhead of re-indexing large datasets. 
+ - **Focus on Fixes**: Allows developers to concentrate on resolving issues rather than managing data recovery. + +## Best Practices When Using Grafting for Hotfixes + +1. **Initial Deployment Without Grafting** + + - **Start Clean**: Always deploy your initial subgraph without grafting to ensure that it’s stable and functions as expected. + - **Test Thoroughly**: Validate the subgraph’s performance to minimize the need for future hotfixes. + +2. **Implementing the Hotfix with Grafting** + + - **Identify the Issue**: When a critical error occurs, determine the block number of the last successfully indexed event. + - **Create a New Subgraph**: Develop a new subgraph that includes the hotfix. + - **Configure Grafting**: Use grafting to copy data up to the identified block number from the failed subgraph. + - **Deploy Quickly**: Publish the grafted subgraph to restore service as soon as possible. + +3. **Post-Hotfix Actions** + + - **Monitor Performance**: Ensure the grafted subgraph is indexing correctly and the hotfix resolves the issue. + - **Republish Without Grafting**: Once stable, deploy a new version of the subgraph without grafting for long-term maintenance. + > Note: Relying on grafting indefinitely is not recommended as it can complicate future updates and maintenance. + - **Update References**: Redirect any services or applications to use the new, non-grafted subgraph. + +4. **Important Considerations** + - **Careful Block Selection**: Choose the graft block number carefully to prevent data loss. + - **Tip**: Use the block number of the last correctly processed event. + - **Use Deployment ID**: Ensure you reference the Deployment ID of the base subgraph, not the Subgraph ID. + - **Note**: The Deployment ID is the unique identifier for a specific subgraph deployment. + - **Feature Declaration**: Remember to declare grafting in the subgraph manifest under features. 
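Taken together, the considerations above come down to a few lines in the manifest; a minimal sketch, where the deployment ID and block number are placeholders:

```yaml
features:
  - grafting # grafting must be declared as a feature
graft:
  base: QmBaseDeploymentID # Deployment ID (not Subgraph ID) of the base subgraph
  block: 6000000 # last block correctly processed by the base subgraph
```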
+ +## Example: Deploying a Hotfix with Grafting + +Suppose you have a subgraph tracking a smart contract that has stopped indexing due to a critical error. Here’s how you can use grafting to deploy a hotfix. + +1. **Failed Subgraph Manifest (subgraph.yaml)** + + ```yaml + specVersion: 1.0.0 + schema: + file: ./schema.graphql + dataSources: + - kind: ethereum/contract + name: OldSmartContract + network: sepolia + source: + address: '0xOldContractAddress' + abi: Lock + startBlock: 5000000 + mapping: + kind: ethereum/events + apiVersion: 0.0.7 + language: wasm/assemblyscript + entities: + - Withdrawal + abis: + - name: Lock + file: ./abis/OldLock.json + eventHandlers: + - event: Withdrawal(uint256,uint256) + handler: handleOldWithdrawal + file: ./src/old-lock.ts + ``` + +2. **New Grafted Subgraph Manifest (subgraph.yaml)** + ```yaml + specVersion: 1.0.0 + schema: + file: ./schema.graphql + dataSources: + - kind: ethereum/contract + name: NewSmartContract + network: sepolia + source: + address: '0xNewContractAddress' + abi: Lock + startBlock: 6000001 # Block after the last indexed block + mapping: + kind: ethereum/events + apiVersion: 0.0.7 + language: wasm/assemblyscript + entities: + - Withdrawal + abis: + - name: Lock + file: ./abis/Lock.json + eventHandlers: + - event: Withdrawal(uint256,uint256) + handler: handleWithdrawal + file: ./src/lock.ts + features: + - grafting + graft: + base: QmBaseDeploymentID # Deployment ID of the failed subgraph + block: 6000000 # Last successfully indexed block + ``` + +**Explanation:** + +- **Data Source Update**: The new subgraph points to 0xNewContractAddress, which may be a fixed version of the smart contract. +- **Start Block**: Set to one block after the last successfully indexed block to avoid reprocessing the error. +- **Grafting Configuration**: + - **base**: Deployment ID of the failed subgraph. + - **block**: Block number where grafting should begin. + +3. 
**Deployment Steps** + + - **Update the Code**: Implement the hotfix in your mapping scripts (e.g., handleWithdrawal). + - **Adjust the Manifest**: As shown above, update the `subgraph.yaml` with grafting configurations. + - **Deploy the Subgraph**: + - Authenticate with the Graph CLI. + - Deploy the new subgraph using `graph deploy`. + +4. **Post-Deployment** + - **Verify Indexing**: Check that the subgraph is indexing correctly from the graft point. + - **Monitor Data**: Ensure that new data is being captured and the hotfix is effective. + - **Plan for Republish**: Schedule the deployment of a non-grafted version for long-term stability. + +## Warnings and Cautions + +While grafting is a powerful tool for deploying hotfixes quickly, there are specific scenarios where it should be avoided to maintain data integrity and ensure optimal performance. + +- **Incompatible Schema Changes**: If your hotfix requires altering the type of existing fields or removing fields from your schema, grafting is not suitable. Grafting expects the new subgraph’s schema to be compatible with the base subgraph’s schema. Incompatible changes can lead to data inconsistencies and errors because the existing data won’t align with the new schema. +- **Significant Mapping Logic Overhauls**: When the hotfix involves substantial modifications to your mapping logic—such as changing how events are processed or altering handler functions—grafting may not function correctly. The new logic might not be compatible with the data processed under the old logic, leading to incorrect data or failed indexing. +- **Deployments to The Graph Network**: Grafting is not recommended for subgraphs intended for The Graph’s decentralized network (mainnet). It can complicate indexing and may not be fully supported by all Indexers, potentially causing unexpected behavior or increased costs. For mainnet deployments, it’s safer to re-index the subgraph from scratch to ensure full compatibility and reliability. 
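The careful block selection called out above hinges on one invariant: the new data source's `startBlock` should be exactly one block after the graft block. A tiny sketch using the block numbers from the example:

```python
# Graft point: the last block the failed subgraph processed correctly
# (6,000,000 in the example manifest above).
graft_block = 6_000_000

# The new data source resumes one block later, so the block that
# triggered the error is not reprocessed.
start_block = graft_block + 1

print(start_block)  # 6000001
```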
+ +### Risk Management + +- **Data Integrity**: Incorrect block numbers can lead to data loss or duplication. +- **Testing**: Always test grafting in a development environment before deploying to production. + +## Conclusion + +Grafting is an effective strategy for deploying hotfixes in subgraph development, enabling you to: + +- **Quickly Recover** from critical errors without re-indexing. +- **Preserve Historical Data**, maintaining continuity for applications and users. +- **Ensure Service Availability** by minimizing downtime during critical fixes. + +However, it’s important to use grafting judiciously and follow best practices to mitigate risks. After stabilizing your subgraph with the hotfix, plan to deploy a non-grafted version to ensure long-term maintainability. + +## Additional Resources + +- **[Grafting Documentation](/cookbook/grafting/)**: Replace a Contract and Keep its History With Grafting +- **[Understanding Deployment IDs](/querying/querying-by-subgraph-id-vs-deployment-id/)**: Learn the difference between Deployment ID and Subgraph ID. + +By incorporating grafting into your subgraph development workflow, you can enhance your ability to respond to issues swiftly, ensuring that your data services remain robust and reliable. + +## Subgraph Best Practices 1-6 + +1. [Improve Query Speed with Subgraph Pruning](/cookbook/pruning/) + +2. [Improve Indexing and Query Responsiveness by Using @derivedFrom](/cookbook/derivedfrom/) + +3. [Improve Indexing and Query Performance by Using Immutable Entities and Bytes as IDs](/cookbook/immutable-entities-bytes-as-ids/) + +4. [Improve Indexing Speed by Avoiding `eth_calls`](/cookbook/avoid-eth-calls/) + +5. [Simplify and Optimize with Timeseries and Aggregations](/cookbook/timeseries/) + +6. 
[Use Grafting for Quick Hotfix Deployment](/cookbook/grafting-hotfix/) diff --git a/website/pages/cs/cookbook/timeseries.mdx b/website/pages/cs/cookbook/timeseries.mdx new file mode 100644 index 000000000000..88ee70005a6e --- /dev/null +++ b/website/pages/cs/cookbook/timeseries.mdx @@ -0,0 +1,194 @@ +--- +title: Subgraph Best Practice 5 - Simplify and Optimize with Timeseries and Aggregations +--- + +## TLDR + +Leveraging the new time-series and aggregations feature in subgraphs can significantly enhance both indexing speed and query performance. + +## Overview + +Timeseries and aggregations reduce data processing overhead and accelerate queries by offloading aggregation computations to the database and simplifying mapping code. This approach is particularly effective when handling large volumes of time-based data. + +## Benefits of Timeseries and Aggregations + +1. Improved Indexing Time + +- Less Data to Load: Mappings handle less data since raw data points are stored as immutable timeseries entities. +- Database-Managed Aggregations: Aggregations are automatically computed by the database, reducing the workload on the mappings. + +2. Simplified Mapping Code + +- No Manual Calculations: Developers no longer need to write complex aggregation logic in mappings. +- Reduced Complexity: Simplifies code maintenance and minimizes the potential for errors. + +3. Dramatically Faster Queries + +- Immutable Data: All timeseries data is immutable, enabling efficient storage and retrieval. +- Efficient Data Separation: Aggregates are stored separately from raw timeseries data, allowing queries to process significantly less data—often several orders of magnitude less. + +### Important Considerations + +- Immutable Data: Timeseries data cannot be altered once written, ensuring data integrity and simplifying indexing. +- Automatic ID and Timestamp Management: id and timestamp fields are automatically managed by graph-node, reducing potential errors. 
+- Efficient Data Storage: By separating raw data from aggregates, storage is optimized, and queries run faster. + +## How to Implement Timeseries and Aggregations + +### Defining Timeseries Entities + +A timeseries entity represents raw data points collected over time. It is defined with the `@entity(timeseries: true)` annotation. Key requirements: + +- Immutable: Timeseries entities are always immutable. +- Mandatory Fields: + - `id`: Must be of type `Int8!` and is auto-incremented. + - `timestamp`: Must be of type `Timestamp!` and is automatically set to the block timestamp. + +Example: + +```graphql +type Data @entity(timeseries: true) { + id: Int8! + timestamp: Timestamp! + price: BigDecimal! +} +``` + +### Defining Aggregation Entities + +An aggregation entity computes aggregated values from a timeseries source. It is defined with the `@aggregation` annotation. Key components: + +- Annotation Arguments: + - `intervals`: Specifies time intervals (e.g., `["hour", "day"]`). + +Example: + +```graphql +type Stats @aggregation(intervals: ["hour", "day"], source: "Data") { + id: Int8! + timestamp: Timestamp! + sum: BigDecimal! @aggregate(fn: "sum", arg: "price") +} +``` + +In this example, Stats aggregates the price field from Data over hourly and daily intervals, computing the sum. + +### Querying Aggregated Data + +Aggregations are exposed via query fields that allow filtering and retrieval based on dimensions and time intervals. + +Example: + +```graphql +{ + tokenStats( + interval: "hour" + where: { token: "0x1234567890abcdef", timestamp_gte: "1704164640000000", timestamp_lt: "1704251040000000" } + ) { + id + timestamp + token { + id + } + totalVolume + priceUSD + count + } +} +``` + +### Using Dimensions in Aggregations + +Dimensions are non-aggregated fields used to group data points. They enable aggregations based on specific criteria, such as a token in a financial application. 
+ +Example: + +### Timeseries Entity + +```graphql +type TokenData @entity(timeseries: true) { + id: Int8! + timestamp: Timestamp! + token: Token! + amount: BigDecimal! + priceUSD: BigDecimal! +} +``` + +### Aggregation Entity with Dimension + +```graphql +type TokenStats @aggregation(intervals: ["hour", "day"], source: "TokenData") { + id: Int8! + timestamp: Timestamp! + token: Token! + totalVolume: BigDecimal! @aggregate(fn: "sum", arg: "amount") + priceUSD: BigDecimal! @aggregate(fn: "last", arg: "priceUSD") + count: Int8! @aggregate(fn: "count", cumulative: true) +} +``` + +- Dimension Field: token groups the data, so aggregates are computed per token. +- Aggregates: + - totalVolume: Sum of amount. + - priceUSD: Last recorded priceUSD. + - count: Cumulative count of records. + +### Aggregation Functions and Expressions + +Supported aggregation functions: + +- sum +- count +- min +- max +- first +- last + +### The arg in @aggregate can be + +- A field name from the timeseries entity. +- An expression using fields and constants. + +### Examples of Aggregation Expressions + +- Sum Token Value: @aggregate(fn: "sum", arg: "priceUSD \* amount") +- Maximum Positive Amount: @aggregate(fn: "max", arg: "greatest(amount0, amount1, 0)") +- Conditional Sum: @aggregate(fn: "sum", arg: "case when amount0 > amount1 then amount0 else 0 end") + +Supported operators and functions include basic arithmetic (+, -, \*, /), comparison operators, logical operators (and, or, not), and SQL functions like greatest, least, coalesce, etc. + +### Query Parameters + +- interval: Specifies the time interval (e.g., "hour"). +- where: Filters based on dimensions and timestamp ranges. +- timestamp_gte / timestamp_lt: Filters for start and end times (microseconds since epoch). + +### Notes + +- Sorting: Results are automatically sorted by timestamp and id in descending order. +- Current Data: An optional current argument can include the current, partially filled interval. 
+ +### Conclusion + +Implementing timeseries and aggregations in subgraphs is a best practice for projects dealing with time-based data. This approach: + +- Enhances Performance: Speeds up indexing and querying by reducing data processing overhead. +- Simplifies Development: Eliminates the need for manual aggregation logic in mappings. +- Scales Efficiently: Handles large volumes of data without compromising on speed or responsiveness. + +By adopting this pattern, developers can build more efficient and scalable subgraphs, providing faster and more reliable data access to end-users. To learn more about implementing timeseries and aggregations, refer to the [Timeseries and Aggregations Readme](https://github.com/graphprotocol/graph-node/blob/master/docs/aggregations.md) and consider experimenting with this feature in your subgraphs. + +## Subgraph Best Practices 1-6 + +1. [Improve Query Speed with Subgraph Pruning](/cookbook/pruning/) + +2. [Improve Indexing and Query Responsiveness by Using @derivedFrom](/cookbook/derivedfrom/) + +3. [Improve Indexing and Query Performance by Using Immutable Entities and Bytes as IDs](/cookbook/immutable-entities-bytes-as-ids/) + +4. [Improve Indexing Speed by Avoiding `eth_calls`](/cookbook/avoid-eth-calls/) + +5. [Simplify and Optimize with Timeseries and Aggregations](/cookbook/timeseries/) + +6. [Use Grafting for Quick Hotfix Deployment](/cookbook/grafting-hotfix/) diff --git a/website/pages/cs/cookbook/upgrading-a-subgraph.mdx b/website/pages/cs/cookbook/upgrading-a-subgraph.mdx deleted file mode 100644 index c0dff9e649fc..000000000000 --- a/website/pages/cs/cookbook/upgrading-a-subgraph.mdx +++ /dev/null @@ -1,156 +0,0 @@ ---- -title: Upgrading an Existing Subgraph to The Graph Network ---- - -## Introduction - -This is a guide on how to upgrade your subgraph from the hosted service to The Graph's decentralized network. 
More than 1,000 subgraphs have been successfully upgraded to The Graph Network, including projects like Snapshot, Loopring, Audius, Premia, Livepeer, Uma, Curve, Lido and many more! - -The upgrade process is quick and your subgraphs will forever benefit from the reliability and performance you can only get on The Graph Network. - -### Prerequisites - -- You have a subgraph deployed on the hosted service. - -## Upgrading an Existing Subgraph to The Graph Network - - - -If you are logged in to the hosted service, you can access a simple flow to upgrade your subgraphs from [your dashboard](https://thegraph.com/hosted-service/dashboard) or from an individual subgraph page. - -> This process typically takes less than five minutes. - -1. Select the subgraph(s) you want to upgrade. -2. Connect or enter the receiving wallet (the wallet that will become the owner of the subgraph). -3. Click the "Upgrade" button. - -That's it! Your subgraphs will be deployed to Subgraph Studio, and published on The Graph Network. You can access the [Subgraph Studio](https://thegraph.com/studio/) to manage your subgraphs, logging in with the wallet specified during the upgrade process. - -You will be able to view your subgraphs live on the decentralized network via [Graph Explorer](https://thegraph.com/explorer). - -### What's next? - -When your subgraph is upgraded, it will automatically be indexed by the upgrade Indexer. If the indexed chain is [fully supported by The Graph Network](/developing/supported-networks), you can add some GRT as "signal" to attract more Indexers. It is recommended to curate your subgraph with at least 3,000 GRT to attract 2-3 Indexers for higher quality of service. - -You can start querying your subgraph right away on The Graph Network once you have generated an API key. - -### Create an API key - -You can generate an API key in Subgraph Studio [here](https://thegraph.com/studio/apikeys/). - -![API key creation page](/img/api-image.png) - -You can use this API key to query subgraphs on The Graph Network. 
All users start on the Free Plan, which includes 100,000 free queries per month. Developers can sign up for the Growth Plan with a credit or debit card, or by depositing GRT into the Subgraph Studio billing system. - -> Note: see the [billing documentation](../billing) for more information on plans and on managing your billing in Subgraph Studio. - -### Securing your API key - -It is recommended that you secure the API by limiting its usage in two ways: - -1. Authorized Subgraphs -2. Authorized Domain - -You can secure your API key [here](https://thegraph.com/studio/apikeys/). - -![Subgraph lockdown page](/img/subgraph-lockdown.png) - -### Querying your subgraph on the decentralized network - -Now you can check the indexing status of the Indexers on the network in Graph Explorer (example [here](https://thegraph.com/explorer/subgraphs/Dtj2HicXKpoUjNB7ffdBkMwt3L9Sz3cbENd67AdHu6Vb?view=Indexers&chain=arbitrum-one)). The green line at the top indicates that at the time of posting 7 Indexers successfully indexed that subgraph. Also in the Indexer tab you can see which Indexers picked up your subgraph. - -![Rocket Pool subgraph](/img/rocket-pool-subgraph.png) - -As soon as the first Indexer has fully indexed your subgraph, you can start to query the subgraph on the decentralized network. In order to retrieve the query URL for your subgraph, you can copy/paste it by clicking on the symbol next to the query URL. You will see something like this: - -`https://gateway.thegraph.com/api/[api-key]/subgraphs/id/S9ihna8D733WTEShJ1KctSTCvY1VJ7gdVwhUujq4Ejo` - -Important: Make sure to replace `[api-key]` with an actual API key generated in the section above. - -You can now use that Query URL in your dapp to send your GraphQL requests to. - -Congratulations! You are now a pioneer of decentralization! - -> Note: Due to the distributed nature of the network it might be the case that different Indexers have indexed up to different blocks. 
In order to only receive fresh data, you can specify the minimum block an Indexer must have indexed in order to serve your query with the block: `{ number_gte: $minBlock }` field argument, as shown in the example below: - -```graphql -{ - stakers(block: { number_gte: 14486109 }) { - id - } -} -``` - -More information about the nature of the network and how to handle re-orgs is described in the documentation article [Distributed Systems](/querying/distributed-systems/). - -## Updating a Subgraph on the Network - -If you would like to update an existing subgraph on the network, you can do this by deploying a new version of your subgraph to Subgraph Studio using the Graph CLI. - -1. Make changes to your current subgraph. -2. Deploy the following and specify the new version in the command (e.g. v0.0.1, v0.0.2, etc.): - -```sh -graph deploy --studio --version -``` - -3. Test the new version in Subgraph Studio by querying in the playground -4. Publish the new version on The Graph Network. Remember that this requires gas (as described in the section above). - -### Owner Update Fee: Deep Dive - -> Note: Curation on Arbitrum has a flat bonding curve. Learn more about Arbitrum [here](/arbitrum/arbitrum-faq/). - -An update requires GRT to be migrated from the old version of the subgraph to the new version. This means that for every update, a new bonding curve will be created (more on bonding curves [here](/network/curating#bonding-curve-101)). - -The new bonding curve charges a 2.5% curation tax on all GRT being migrated to the new version. The owner must pay 50% of this, or 1.25%. The other 1.25% is absorbed by all the curators as a fee. This incentive design is in place to prevent an owner of a subgraph from being able to drain all their curators' funds with recursive update calls. If there is no curation activity, you will have to pay a minimum of 100 GRT in order to signal your own subgraph. 
Let's make an example; this is only the case if your subgraph is being actively curated on: - -- 100,000 GRT is signaled using auto-migrate on v1 of a subgraph -- The owner updates to v2. 100,000 GRT is migrated to a new bonding curve, where 97,500 GRT get put into the new curve and 2,500 GRT is burned -- The owner then has 1,250 GRT burned to pay for half the fee. The owner must have this in their wallet before the update, otherwise the update will not succeed. This happens in the same transaction as the update. - -_While this mechanism is currently live on the network, the community is currently discussing ways to reduce the cost of updates for subgraph developers._ - -### Maintaining a Stable Version of a Subgraph - -If you're making a lot of changes to your subgraph, it is not a good idea to continually update it and front the update costs. Maintaining a stable and consistent version of your subgraph is critical, not only from the cost perspective but also so that Indexers can feel confident in their syncing times. Indexers should be flagged when you plan for an update so that Indexer syncing times do not get impacted. Feel free to leverage the [#Indexers channel](https://discord.gg/JexvtHa7dq) on Discord to let Indexers know when you're versioning your subgraphs. - -Subgraphs are open APIs that external developers are leveraging. Open APIs need to follow strict standards so that they do not break external developers' applications. On The Graph Network, a subgraph developer must consider Indexers and how long it takes them to sync a new subgraph **as well as** other developers who are using their subgraphs. - -### Updating the Metadata of a Subgraph - -You can update the metadata of your subgraphs without having to publish a new version. The metadata includes the subgraph name, image, description, website URL, source code URL, and categories. Developers can do this by updating their subgraph details in Subgraph Studio, where all applicable fields can be edited. 
Make sure **Update Subgraph Details in Explorer** is checked and click **Save**. If this is checked, an on-chain transaction will be generated that updates subgraph details in the Explorer without having to publish a new version with a new deployment. - -## Best Practices for Deploying a Subgraph to The Graph Network - -1. Leveraging an ENS name for Subgraph Development: - -- Set up your ENS [here](https://app.ens.domains/) -- Add your ENS name to your settings [here](https://thegraph.com/explorer/settings?view=display-name). - -2. The more filled out your profiles are, the better the chances for your subgraphs to be indexed and curated. - -## Deprecating a Subgraph on The Graph Network - -Follow the steps [here](/managing/transfer-and-deprecate-a-subgraph) to deprecate your subgraph and remove it from The Graph Network. - -## Querying a Subgraph + Billing on The Graph Network - -The hosted service was set up to allow developers to deploy their subgraphs without any restrictions. - -On The Graph Network, query fees are a core part of the protocol's incentives. For more information on subscribing to APIs and paying the query fees, check out the billing documentation [here](/billing/). - -## Additional Resources - -If you're still confused, fear not! 
Check out the following resources or watch our video guide on upgrading subgraphs to the decentralized network below: - - - -- [The Graph Network Contracts](https://github.com/graphprotocol/contracts) -- [Curation Contract](https://github.com/graphprotocol/contracts/blob/dev/contracts/curation/Curation.sol) - the underlying contract that the GNS wraps around - - Address - `0x8fe00a685bcb3b2cc296ff6ffeab10aca4ce1538` -- [Subgraph Studio documentation](/deploying/subgraph-studio) diff --git a/website/pages/de/cookbook/base-testnet.mdx b/website/pages/de/cookbook/base-testnet.mdx deleted file mode 100644 index cd96026d2596..000000000000 --- a/website/pages/de/cookbook/base-testnet.mdx +++ /dev/null @@ -1,111 +0,0 @@ ---- -title: Building Subgraphs on Base ---- - -This guide will quickly take you through how to initialize, create, and deploy your subgraph on Base testnet. - -What you'll need: - -- A Base Sepolia testnet contract address -- A crypto wallet (e.g. MetaMask or Coinbase Wallet) - -## Subgraph Studio - -### 1. Install the Graph CLI - -The Graph CLI (>=v0.41.0) is written in JavaScript and you will need to have either `npm` or `yarn` installed to use it. - -```sh -# NPM -npm install -g @graphprotocol/graph-cli - -# Yarn -yarn global add @graphprotocol/graph-cli -``` - -### 2. Create your subgraph in Subgraph Studio - -Go to [Subgraph Studio](https://thegraph.com/studio/) and connect your crypto wallet. - -Once connected, click "Create a Subgraph", enter a name for your subgraph and click Create a Subgraph. - -### 3. Initialize your Subgraph - -> You can find specific commands for your subgraph in Subgraph Studio. - -Make sure that the graph-cli is updated to the latest version (above 0.41.0) - -```sh -graph --version -``` - -Initialize your subgraph from an existing contract. - -```sh -graph init --studio -``` - -Your subgraph slug is an identifier for your subgraph. 
The CLI tool will walk you through the steps for creating a subgraph, including: - -- Protocol: ethereum -- Subgraph slug: `` -- Directory to create the subgraph in: `` -- Ethereum network: base-sepolia -- Contract address: `` -- Start block (optional) -- Contract name: `` -- Yes/no to indexing events (yes means your subgraph will be bootstrapped with entities in the schema and simple mappings for emitted events) - -### 4. Write your Subgraph - -> If emitted events are the only thing you want to index, then no additional work is required, and you can skip to the next step. - -The previous command creates a scaffold subgraph that you can use as a starting point for building your subgraph. When making changes to the subgraph, you will mainly work with three files: - -- Manifest (subgraph.yaml) - The manifest defines what datasources your subgraphs will index. Make sure to add `base-sepolia` as the network name in the manifest file to deploy your subgraph on Base Sepolia. -- Schema (schema.graphql) - The GraphQL schema defines what data you wish to retrieve from the subgraph. -- AssemblyScript Mappings (mapping.ts) - This is the code that translates data from your datasources to the entities defined in the schema. - -If you want to index additional data, you will need to extend the manifest, schema and mappings. - -For more information on how to write your subgraph, see [Creating a Subgraph](/developing/creating-a-subgraph). - -### 5. Deploy to Subgraph Studio - -Before you can deploy your subgraph, you will need to authenticate with Subgraph Studio. You can do this by running the following command: - -Authenticate the subgraph on Studio - -``` -graph auth --studio -``` - -Next, enter your subgraph's directory. - -``` - cd -``` - -Build your subgraph with the following command: - -``` -graph codegen && graph build -``` - -Finally, you can deploy your subgraph using this command: - -``` -graph deploy --studio -``` - -### 6. 
Query your subgraph - -Once your subgraph is deployed, you can query it from your dapp using the `Development Query URL` in Subgraph Studio. - -Note - the Studio API is rate-limited. Hence, it should preferably be used for development and testing. - -To learn more about querying data from your subgraph, see the [Querying a Subgraph](/querying/querying-the-graph) page. diff --git a/website/pages/de/cookbook/grafting-hotfix.mdx b/website/pages/de/cookbook/grafting-hotfix.mdx new file mode 100644 index 000000000000..4be0a0b07790 --- /dev/null +++ b/website/pages/de/cookbook/grafting-hotfix.mdx @@ -0,0 +1,186 @@ +--- +title: Subgraph Best Practice 6 - Use Grafting for Quick Hotfix Deployment +--- + +## TLDR + +Grafting is a powerful feature in subgraph development that allows you to build and deploy new subgraphs while reusing the indexed data from existing ones. + +### Overview + +This feature enables quick deployment of hotfixes for critical issues, eliminating the need to re-index the entire subgraph from scratch. By preserving historical data, grafting minimizes downtime and ensures continuity in data services. + +## Benefits of Grafting for Hotfixes + +1. **Rapid Deployment** + + - **Minimize Downtime**: When a subgraph encounters a critical error and stops indexing, grafting enables you to deploy a fix immediately without waiting for re-indexing. + - **Immediate Recovery**: The new subgraph continues from the last indexed block, ensuring that data services remain uninterrupted. + +2. **Data Preservation** + + - **Reuse Historical Data**: Grafting copies the existing data from the base subgraph, so you don’t lose valuable historical records. + - **Consistency**: Maintains data continuity, which is crucial for applications relying on consistent historical data. + +3. **Efficiency** + - **Save Time and Resources**: Avoids the computational overhead of re-indexing large datasets. 
+ - **Focus on Fixes**: Allows developers to concentrate on resolving issues rather than managing data recovery. + +## Best Practices When Using Grafting for Hotfixes + +1. **Initial Deployment Without Grafting** + + - **Start Clean**: Always deploy your initial subgraph without grafting to ensure that it’s stable and functions as expected. + - **Test Thoroughly**: Validate the subgraph’s performance to minimize the need for future hotfixes. + +2. **Implementing the Hotfix with Grafting** + + - **Identify the Issue**: When a critical error occurs, determine the block number of the last successfully indexed event. + - **Create a New Subgraph**: Develop a new subgraph that includes the hotfix. + - **Configure Grafting**: Use grafting to copy data up to the identified block number from the failed subgraph. + - **Deploy Quickly**: Publish the grafted subgraph to restore service as soon as possible. + +3. **Post-Hotfix Actions** + + - **Monitor Performance**: Ensure the grafted subgraph is indexing correctly and the hotfix resolves the issue. + - **Republish Without Grafting**: Once stable, deploy a new version of the subgraph without grafting for long-term maintenance. + > Note: Relying on grafting indefinitely is not recommended as it can complicate future updates and maintenance. + - **Update References**: Redirect any services or applications to use the new, non-grafted subgraph. + +4. **Important Considerations** + - **Careful Block Selection**: Choose the graft block number carefully to prevent data loss. + - **Tip**: Use the block number of the last correctly processed event. + - **Use Deployment ID**: Ensure you reference the Deployment ID of the base subgraph, not the Subgraph ID. + - **Note**: The Deployment ID is the unique identifier for a specific subgraph deployment. + - **Feature Declaration**: Remember to declare grafting in the subgraph manifest under features. 
+ +## Example: Deploying a Hotfix with Grafting + +Suppose you have a subgraph tracking a smart contract that has stopped indexing due to a critical error. Here’s how you can use grafting to deploy a hotfix. + +1. **Failed Subgraph Manifest (subgraph.yaml)** + + ```yaml + specVersion: 1.0.0 + schema: + file: ./schema.graphql + dataSources: + - kind: ethereum/contract + name: OldSmartContract + network: sepolia + source: + address: '0xOldContractAddress' + abi: Lock + startBlock: 5000000 + mapping: + kind: ethereum/events + apiVersion: 0.0.7 + language: wasm/assemblyscript + entities: + - Withdrawal + abis: + - name: Lock + file: ./abis/OldLock.json + eventHandlers: + - event: Withdrawal(uint256,uint256) + handler: handleOldWithdrawal + file: ./src/old-lock.ts + ``` + +2. **New Grafted Subgraph Manifest (subgraph.yaml)** + ```yaml + specVersion: 1.0.0 + schema: + file: ./schema.graphql + dataSources: + - kind: ethereum/contract + name: NewSmartContract + network: sepolia + source: + address: '0xNewContractAddress' + abi: Lock + startBlock: 6000001 # Block after the last indexed block + mapping: + kind: ethereum/events + apiVersion: 0.0.7 + language: wasm/assemblyscript + entities: + - Withdrawal + abis: + - name: Lock + file: ./abis/Lock.json + eventHandlers: + - event: Withdrawal(uint256,uint256) + handler: handleWithdrawal + file: ./src/lock.ts + features: + - grafting + graft: + base: QmBaseDeploymentID # Deployment ID of the failed subgraph + block: 6000000 # Last successfully indexed block + ``` + +**Explanation:** + +- **Data Source Update**: The new subgraph points to 0xNewContractAddress, which may be a fixed version of the smart contract. +- **Start Block**: Set to one block after the last successfully indexed block to avoid reprocessing the error. +- **Grafting Configuration**: + - **base**: Deployment ID of the failed subgraph. + - **block**: Block number where grafting should begin. + +3. 
**Deployment Steps** + + - **Update the Code**: Implement the hotfix in your mapping scripts (e.g., handleWithdrawal). + - **Adjust the Manifest**: As shown above, update the `subgraph.yaml` with grafting configurations. + - **Deploy the Subgraph**: + - Authenticate with the Graph CLI. + - Deploy the new subgraph using `graph deploy`. + +4. **Post-Deployment** + - **Verify Indexing**: Check that the subgraph is indexing correctly from the graft point. + - **Monitor Data**: Ensure that new data is being captured and the hotfix is effective. + - **Plan for Republish**: Schedule the deployment of a non-grafted version for long-term stability. + +## Warnings and Cautions + +While grafting is a powerful tool for deploying hotfixes quickly, there are specific scenarios where it should be avoided to maintain data integrity and ensure optimal performance. + +- **Incompatible Schema Changes**: If your hotfix requires altering the type of existing fields or removing fields from your schema, grafting is not suitable. Grafting expects the new subgraph’s schema to be compatible with the base subgraph’s schema. Incompatible changes can lead to data inconsistencies and errors because the existing data won’t align with the new schema. +- **Significant Mapping Logic Overhauls**: When the hotfix involves substantial modifications to your mapping logic—such as changing how events are processed or altering handler functions—grafting may not function correctly. The new logic might not be compatible with the data processed under the old logic, leading to incorrect data or failed indexing. +- **Deployments to The Graph Network**: Grafting is not recommended for subgraphs intended for The Graph’s decentralized network (mainnet). It can complicate indexing and may not be fully supported by all Indexers, potentially causing unexpected behavior or increased costs. For mainnet deployments, it’s safer to re-index the subgraph from scratch to ensure full compatibility and reliability. 
+ +### Risk Management + +- **Data Integrity**: Incorrect block numbers can lead to data loss or duplication. +- **Testing**: Always test grafting in a development environment before deploying to production. + +## Conclusion + +Grafting is an effective strategy for deploying hotfixes in subgraph development, enabling you to: + +- **Quickly Recover** from critical errors without re-indexing. +- **Preserve Historical Data**, maintaining continuity for applications and users. +- **Ensure Service Availability** by minimizing downtime during critical fixes. + +However, it’s important to use grafting judiciously and follow best practices to mitigate risks. After stabilizing your subgraph with the hotfix, plan to deploy a non-grafted version to ensure long-term maintainability. + +## Additional Resources + +- **[Grafting Documentation](/cookbook/grafting/)**: Replace a Contract and Keep its History With Grafting +- **[Understanding Deployment IDs](/querying/querying-by-subgraph-id-vs-deployment-id/)**: Learn the difference between Deployment ID and Subgraph ID. + +By incorporating grafting into your subgraph development workflow, you can enhance your ability to respond to issues swiftly, ensuring that your data services remain robust and reliable. + +## Subgraph Best Practices 1-6 + +1. [Improve Query Speed with Subgraph Pruning](/cookbook/pruning/) + +2. [Improve Indexing and Query Responsiveness by Using @derivedFrom](/cookbook/derivedfrom/) + +3. [Improve Indexing and Query Performance by Using Immutable Entities and Bytes as IDs](/cookbook/immutable-entities-bytes-as-ids/) + +4. [Improve Indexing Speed by Avoiding `eth_calls`](/cookbook/avoid-eth-calls/) + +5. [Simplify and Optimize with Timeseries and Aggregations](/cookbook/timeseries/) + +6. 
[Use Grafting for Quick Hotfix Deployment](/cookbook/grafting-hotfix/) diff --git a/website/pages/de/cookbook/timeseries.mdx b/website/pages/de/cookbook/timeseries.mdx new file mode 100644 index 000000000000..88ee70005a6e --- /dev/null +++ b/website/pages/de/cookbook/timeseries.mdx @@ -0,0 +1,194 @@ +--- +title: Subgraph Best Practice 5 - Simplify and Optimize with Timeseries and Aggregations +--- + +## TLDR + +Leveraging the new time-series and aggregations feature in subgraphs can significantly enhance both indexing speed and query performance. + +## Overview + +Timeseries and aggregations reduce data processing overhead and accelerate queries by offloading aggregation computations to the database and simplifying mapping code. This approach is particularly effective when handling large volumes of time-based data. + +## Benefits of Timeseries and Aggregations + +1. Improved Indexing Time + +- Less Data to Load: Mappings handle less data since raw data points are stored as immutable timeseries entities. +- Database-Managed Aggregations: Aggregations are automatically computed by the database, reducing the workload on the mappings. + +2. Simplified Mapping Code + +- No Manual Calculations: Developers no longer need to write complex aggregation logic in mappings. +- Reduced Complexity: Simplifies code maintenance and minimizes the potential for errors. + +3. Dramatically Faster Queries + +- Immutable Data: All timeseries data is immutable, enabling efficient storage and retrieval. +- Efficient Data Separation: Aggregates are stored separately from raw timeseries data, allowing queries to process significantly less data—often several orders of magnitude less. + +### Important Considerations + +- Immutable Data: Timeseries data cannot be altered once written, ensuring data integrity and simplifying indexing. +- Automatic ID and Timestamp Management: id and timestamp fields are automatically managed by graph-node, reducing potential errors. 
+- Efficient Data Storage: By separating raw data from aggregates, storage is optimized, and queries run faster. + +## How to Implement Timeseries and Aggregations + +### Defining Timeseries Entities + +A timeseries entity represents raw data points collected over time. It is defined with the `@entity(timeseries: true)` annotation. Key requirements: + +- Immutable: Timeseries entities are always immutable. +- Mandatory Fields: + - `id`: Must be of type `Int8!` and is auto-incremented. + - `timestamp`: Must be of type `Timestamp!` and is automatically set to the block timestamp. + +Example: + +```graphql +type Data @entity(timeseries: true) { + id: Int8! + timestamp: Timestamp! + price: BigDecimal! +} +``` + +### Defining Aggregation Entities + +An aggregation entity computes aggregated values from a timeseries source. It is defined with the `@aggregation` annotation. Key components: + +- Annotation Arguments: + - `intervals`: Specifies time intervals (e.g., `["hour", "day"]`). + +Example: + +```graphql +type Stats @aggregation(intervals: ["hour", "day"], source: "Data") { + id: Int8! + timestamp: Timestamp! + sum: BigDecimal! @aggregate(fn: "sum", arg: "price") +} +``` + +In this example, Stats aggregates the price field from Data over hourly and daily intervals, computing the sum. + +### Querying Aggregated Data + +Aggregations are exposed via query fields that allow filtering and retrieval based on dimensions and time intervals. + +Example: + +```graphql +{ + tokenStats( + interval: "hour" + where: { token: "0x1234567890abcdef", timestamp_gte: "1704164640000000", timestamp_lt: "1704251040000000" } + ) { + id + timestamp + token { + id + } + totalVolume + priceUSD + count + } +} +``` + +### Using Dimensions in Aggregations + +Dimensions are non-aggregated fields used to group data points. They enable aggregations based on specific criteria, such as a token in a financial application. 
+ +Example: + +### Timeseries Entity + +```graphql +type TokenData @entity(timeseries: true) { + id: Int8! + timestamp: Timestamp! + token: Token! + amount: BigDecimal! + priceUSD: BigDecimal! +} +``` + +### Aggregation Entity with Dimension + +```graphql +type TokenStats @aggregation(intervals: ["hour", "day"], source: "TokenData") { + id: Int8! + timestamp: Timestamp! + token: Token! + totalVolume: BigDecimal! @aggregate(fn: "sum", arg: "amount") + priceUSD: BigDecimal! @aggregate(fn: "last", arg: "priceUSD") + count: Int8! @aggregate(fn: "count", cumulative: true) +} +``` + +- Dimension Field: token groups the data, so aggregates are computed per token. +- Aggregates: + - totalVolume: Sum of amount. + - priceUSD: Last recorded priceUSD. + - count: Cumulative count of records. + +### Aggregation Functions and Expressions + +Supported aggregation functions: + +- sum +- count +- min +- max +- first +- last + +### The `arg` in `@aggregate` can be + +- A field name from the timeseries entity. +- An expression using fields and constants. + +### Examples of Aggregation Expressions + +- Sum Token Value: @aggregate(fn: "sum", arg: "priceUSD \* amount") +- Maximum Positive Amount: @aggregate(fn: "max", arg: "greatest(amount0, amount1, 0)") +- Conditional Sum: @aggregate(fn: "sum", arg: "case when amount0 > amount1 then amount0 else 0 end") + +Supported operators and functions include basic arithmetic (+, -, \*, /), comparison operators, logical operators (and, or, not), and SQL functions like greatest, least, coalesce, etc. + +### Query Parameters + +- interval: Specifies the time interval (e.g., "hour"). +- where: Filters based on dimensions and timestamp ranges. +- timestamp_gte / timestamp_lt: Filters for start and end times (microseconds since epoch). + +### Notes + +- Sorting: Results are automatically sorted by timestamp and id in descending order. +- Current Data: An optional current argument can include the current, partially filled interval.
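The microsecond timestamps used in the query filters above are easy to get wrong by a factor of 1,000. A small sketch for deriving them from a UTC datetime (`to_micros` is a hypothetical helper, not part of any Graph tooling):

```python
from datetime import datetime, timezone

# timestamp_gte / timestamp_lt filters take microseconds since the Unix epoch.
# to_micros is a hypothetical convenience helper, not a Graph CLI utility.
def to_micros(dt: datetime) -> int:
    return int(dt.replace(tzinfo=timezone.utc).timestamp() * 1_000_000)

# The bounds used in the earlier tokenStats query example:
start = to_micros(datetime(2024, 1, 2, 3, 4))  # 1704164640000000
end = to_micros(datetime(2024, 1, 3, 3, 4))    # 1704251040000000
print(start, end)
```

Note that the datetime is pinned to UTC before conversion; a naive local-time datetime would silently shift the window by your UTC offset.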
+ +### Conclusion + +Implementing timeseries and aggregations in subgraphs is a best practice for projects dealing with time-based data. This approach: + +- Enhances Performance: Speeds up indexing and querying by reducing data processing overhead. +- Simplifies Development: Eliminates the need for manual aggregation logic in mappings. +- Scales Efficiently: Handles large volumes of data without compromising on speed or responsiveness. + +By adopting this pattern, developers can build more efficient and scalable subgraphs, providing faster and more reliable data access to end-users. To learn more about implementing timeseries and aggregations, refer to the [Timeseries and Aggregations Readme](https://github.com/graphprotocol/graph-node/blob/master/docs/aggregations.md) and consider experimenting with this feature in your subgraphs. + +## Subgraph Best Practices 1-6 + +1. [Improve Query Speed with Subgraph Pruning](/cookbook/pruning/) + +2. [Improve Indexing and Query Responsiveness by Using @derivedFrom](/cookbook/derivedfrom/) + +3. [Improve Indexing and Query Performance by Using Immutable Entities and Bytes as IDs](/cookbook/immutable-entities-bytes-as-ids/) + +4. [Improve Indexing Speed by Avoiding `eth_calls`](/cookbook/avoid-eth-calls/) + +5. [Simplify and Optimize with Timeseries and Aggregations](/cookbook/timeseries/) + +6. [Use Grafting for Quick Hotfix Deployment](/cookbook/grafting-hotfix/) diff --git a/website/pages/de/cookbook/upgrading-a-subgraph.mdx b/website/pages/de/cookbook/upgrading-a-subgraph.mdx deleted file mode 100644 index 1aac0794b687..000000000000 --- a/website/pages/de/cookbook/upgrading-a-subgraph.mdx +++ /dev/null @@ -1,156 +0,0 @@ ---- -title: Upgrading an Existing Subgraph to The Graph Network ---- - -## Introduction - -This is a guide on how to upgrade your subgraph from the hosted service to The Graph's decentralized network. 
Over 1,000 subgraphs have successfully upgraded to The Graph Network including projects like Snapshot, Loopring, Audius, Premia, Livepeer, Uma, Curve, Lido, and many more! - -The process of upgrading is quick and your subgraphs will forever benefit from the reliability and performance that you can only get on The Graph Network. - -### Prerequisites - -- You have a subgraph deployed on the hosted service. - -## Upgrading an Existing Subgraph to The Graph Network - - - -If you are logged in to the hosted service, you can access a simple flow to upgrade your subgraphs from [your dashboard](https://thegraph.com/hosted-service/dashboard), or from an individual subgraph page. - -> This process typically takes less than five minutes. - -1. Select the subgraph(s) you want to upgrade. -2. Connect or enter the receiving wallet (the wallet that will become the owner of the subgraph). -3. Click the "Upgrade" button. - -That's it! Your subgraphs will be deployed to Subgraph Studio, and published on The Graph Network. You can access the [Subgraph Studio](https://thegraph.com/studio/) to manage your subgraphs, logging in with the wallet specified during the upgrade process. - -You'll be able to view your subgraphs live on the decentralized network via [Graph Explorer](https://thegraph.com/explorer). - -### What next? - -When your subgraph is upgraded, it will automatically be indexed by the upgrade indexer. If the indexed chain is [fully supported by The Graph Network](/developing/supported-networks), you can add some GRT as "signal", to attract more indexers. It is recommended to curate your subgraph with at least 3,000 GRT to attract 2-3 Indexers for higher quality of service. - -You can start to query your subgraph right away on The Graph Network, once you have generated an API key. - -### Erstellen Sie einen API-Schlüssel - -You can generate an API key in Subgraph Studio [here](https://thegraph.com/studio/apikeys/). 
- -![API key creation page](/img/api-image.png) - -You can use this API key to query subgraphs on The Graph Network. All users start on the Free Plan, which includes 100,000 free queries per month. Developers can sign up for the Growth Plan by connecting a credit or debit card, or by depositing GRT to Subgraph Studio billing system. - -> Note: see the [billing documentation](../billing) for more information on plans, and on managing your billing on Subgraph Studio. - -### Securing your API key - -Es wird empfohlen, die API zu sichern, indem Sie ihre Verwendung auf zwei Arten einschränken: - -1. Autorisierte Subgrafen -2. Autorisierte Domäne - -You can secure your API key [here](https://thegraph.com/studio/apikeys/). - -![Subgraph lockdown page](/img/subgraph-lockdown.png) - -### Abfragen Ihres Subgrafen im dezentralen Netzwerk - -Now you can check the indexing status of the Indexers on the network in Graph Explorer (example [here](https://thegraph.com/explorer/subgraphs/Dtj2HicXKpoUjNB7ffdBkMwt3L9Sz3cbENd67AdHu6Vb?view=Indexers&chain=arbitrum-one)). The green line at the top indicates that at the time of posting 7 Indexers successfully indexed that subgraph. Also in the Indexer tab you can see which Indexers picked up your subgraph. - -![Rocket Pool subgraph](/img/rocket-pool-subgraph.png) - -Sobald der erste Indexer Ihren Subgrafen vollständig indiziert hat, können Sie mit der Abfrage des Subgrafen im dezentralen Netzwerk beginnen. Um die Abfrage-URL für Ihren Subgrafen abzurufen, können Sie sie kopieren/einfügen, indem Sie auf das Symbol neben der Abfrage-URL klicken. Sie werden so etwas sehen: - -`https://gateway.thegraph.com/api/[api-key]/subgraphs/id/S9ihna8D733WTEShJ1KctSTCvY1VJ7gdVwhUujq4Ejo` - -Important: Make sure to replace `[api-key]` with an actual API key generated in the section above. - -Sie können diese Abfrage-URL jetzt in Ihrer Dapp verwenden, um Ihre GraphQL-Anfragen an sie zu senden. - -Herzliche Glückwünsche! 
Sie sind jetzt ein Pionier der Dezentralisierung! - -> Note: Due to the distributed nature of the network it might be the case that different Indexers have indexed up to different blocks. In order to only receive fresh data you can specify the minimum block an Indexer has to have indexed in order to serve your query with the block: `{ number_gte: $minBlock }` field argument as shown in the example below: - -```graphql -{ - stakers(block: { number_gte: 14486109 }) { - id - } -} -``` - -More information about the nature of the network and how to handle re-orgs are described in the documentation article [Distributed Systems](/querying/distributed-systems/). - -## Updating a Subgraph on the Network - -If you would like to update an existing subgraph on the network, you can do this by deploying a new version of your subgraph to Subgraph Studio using the Graph CLI. - -1. Make changes to your current subgraph. -2. Deploy the following and specify the new version in the command (eg. v0.0.1, v0.0.2, etc): - -```sh -graph deploy --studio --version -``` - -3. Test the new version in Subgraph Studio by querying in the playground -4. Publish the new version on The Graph Network. Remember that this requires gas (as described in the section above). - -### Owner Update Fee: Deep Dive - -> Note: Curation on Arbitrum has a flat bonding curve. Learn more about Arbitrum [here](/arbitrum/arbitrum-faq/). - -An update requires GRT to be migrated from the old version of the subgraph to the new version. This means that for every update, a new bonding curve will be created (more on bonding curves [here](/network/curating#bonding-curve-101)). - -The new bonding curve charges the 1% curation tax on all GRT being migrated to the new version. The owner must pay 50% of this or 1.25%. The other 1.25% is absorbed by all the curators as a fee. This incentive design is in place to prevent an owner of a subgraph from being able to drain all their curator's funds with recursive update calls. 
If there is no curation activity, you will have to pay a minimum of 100 GRT in order to signal your own subgraph. - -Let's make an example, this is only the case if your subgraph is being actively curated on: - -- 100,000 GRT is signaled using auto-migrate on v1 of a subgraph -- Owner updates to v2. 100,000 GRT is migrated to a new bonding curve, where 97,500 GRT get put into the new curve and 2,500 GRT is burned -- The owner then has 1250 GRT burned to pay for half the fee. The owner must have this in their wallet before the update, otherwise, the update will not succeed. This happens in the same transaction as the update. - -_While this mechanism is currently live on the network, the community is currently discussing ways to reduce the cost of updates for subgraph developers._ - -### Maintaining a Stable Version of a Subgraph - -If you're making a lot of changes to your subgraph, it is not a good idea to continually update it and front the update costs. Maintaining a stable and consistent version of your subgraph is critical, not only from the cost perspective but also so that Indexers can feel confident in their syncing times. Indexers should be flagged when you plan for an update so that Indexer syncing times do not get impacted. Feel free to leverage the [#Indexers channel](https://discord.gg/JexvtHa7dq) on Discord to let Indexers know when you're versioning your subgraphs. - -Subgraphs are open APIs that external developers are leveraging. Open APIs need to follow strict standards so that they do not break external developers' applications. In The Graph Network, a subgraph developer must consider Indexers and how long it takes them to sync a new subgraph **as well as** other developers who are using their subgraphs. - -### Updating the Metadata of a Subgraph - -You can update the metadata of your subgraphs without having to publish a new version. The metadata includes the subgraph name, image, description, website URL, source code URL, and categories. 
Developers can do this by updating their subgraph details in Subgraph Studio where you can edit all applicable fields. - -Make sure **Update Subgraph Details in Explorer** is checked and click on **Save**. If this is checked, an on-chain transaction will be generated that updates subgraph details in the Explorer without having to publish a new version with a new deployment. - -## Best Practices for Deploying a Subgraph to The Graph Network - -1. Leveraging an ENS name for Subgraph Development: - -- Set up your ENS [here](https://app.ens.domains/) -- Add your ENS name to your settings [here](https://thegraph.com/explorer/settings?view=display-name). - -2. The more filled out your profiles are, the better the chances for your subgraphs to be indexed and curated. - -## Deprecating a Subgraph on The Graph Network - -Follow the steps [here](/managing/transfer-and-deprecate-a-subgraph) to deprecate your subgraph and remove it from The Graph Network. - -## Querying a Subgraph + Billing on The Graph Network - -The hosted service was set up to allow developers to deploy their subgraphs without any restrictions. - -On The Graph Network, query fees have to be paid as a core part of the protocol's incentives. For more information on subscribing to APIs and paying the query fees, check out billing documentation [here](/billing/). - -## Additional Resources - -If you're still confused, fear not! 
Check out the following resources or watch our video guide on upgrading subgraphs to the decentralized network below: - - - -- [The Graph Network Contracts](https://github.com/graphprotocol/contracts) -- [Curation Contract](https://github.com/graphprotocol/contracts/blob/dev/contracts/curation/Curation.sol) - the underlying contract that the GNS wraps around - - Address - `0x8fe00a685bcb3b2cc296ff6ffeab10aca4ce1538` -- [Subgraph Studio documentation](/deploying/subgraph-studio) diff --git a/website/pages/en/cookbook/_meta.js b/website/pages/en/cookbook/_meta.js index f8b5035ddb55..87b6b11ebd1c 100644 --- a/website/pages/en/cookbook/_meta.js +++ b/website/pages/en/cookbook/_meta.js @@ -6,10 +6,12 @@ export default { grafting: '', 'subgraph-uncrashable': '', 'substreams-powered-subgraphs': '', + 'transfer-to-the-graph': '', pruning: 'Subgraph Best Practice 1: Pruning with indexerHints', derivedfrom: 'Subgraph Best Practice 2: Manage Arrays with @derivedFrom', 'immutable-entities-bytes-as-ids': 'Subgraph Best Practice 3: Using Immutable Entities and Bytes as IDs', 'avoid-eth-calls': 'Subgraph Best Practice 4: Avoid eth_calls', - 'transfer-to-the-graph': '', + timeseries: 'Subgraph Best Practice 5: Simplify and Optimize with Timeseries and Aggregations', + 'grafting-hotfix': 'Subgraph Best Practice 6: Grafting and Hotfixing', enums: '', } diff --git a/website/pages/en/cookbook/avoid-eth-calls.mdx b/website/pages/en/cookbook/avoid-eth-calls.mdx index 446b0e8ecd17..25fcb8b0db9d 100644 --- a/website/pages/en/cookbook/avoid-eth-calls.mdx +++ b/website/pages/en/cookbook/avoid-eth-calls.mdx @@ -99,4 +99,18 @@ Note: Declared eth_calls can only be made in subgraphs with specVersion >= 1.2.0 ## Conclusion -We can significantly improve indexing performance by minimizing or eliminating `eth_calls` in our subgraphs. +You can significantly improve indexing performance by minimizing or eliminating `eth_calls` in your subgraphs. + +## Subgraph Best Practices 1-6 + +1. 
[Improve Query Speed with Subgraph Pruning](/cookbook/pruning/) + +2. [Improve Indexing and Query Responsiveness by Using @derivedFrom](/cookbook/derivedfrom/) + +3. [Improve Indexing and Query Performance by Using Immutable Entities and Bytes as IDs](/cookbook/immutable-entities-bytes-as-ids/) + +4. [Improve Indexing Speed by Avoiding `eth_calls`](/cookbook/avoid-eth-calls/) + +5. [Simplify and Optimize with Timeseries and Aggregations](/cookbook/timeseries/) + +6. [Use Grafting for Quick Hotfix Deployment](/cookbook/grafting-hotfix/) diff --git a/website/pages/en/cookbook/derivedfrom.mdx b/website/pages/en/cookbook/derivedfrom.mdx index 9eed89704b60..75827a185a6b 100644 --- a/website/pages/en/cookbook/derivedfrom.mdx +++ b/website/pages/en/cookbook/derivedfrom.mdx @@ -68,6 +68,20 @@ This will not only make our subgraph more efficient, but it will also unlock thr ## Conclusion -Adopting the `@derivedFrom` directive in subgraphs effectively handles dynamically growing arrays, enhancing indexing efficiency and data retrieval. +Use the `@derivedFrom` directive in subgraphs to effectively manage dynamically growing arrays, enhancing indexing efficiency and data retrieval. -To learn more detailed strategies to avoid large arrays, read this blog from Kevin Jones: [Best Practices in Subgraph Development: Avoiding Large Arrays](https://thegraph.com/blog/improve-subgraph-performance-avoiding-large-arrays/). +For a more detailed explanation of strategies to avoid large arrays, check out Kevin Jones' blog: [Best Practices in Subgraph Development: Avoiding Large Arrays](https://thegraph.com/blog/improve-subgraph-performance-avoiding-large-arrays/). + +## Subgraph Best Practices 1-6 + +1. [Improve Query Speed with Subgraph Pruning](/cookbook/pruning/) + +2. [Improve Indexing and Query Responsiveness by Using @derivedFrom](/cookbook/derivedfrom/) + +3. 
[Improve Indexing and Query Performance by Using Immutable Entities and Bytes as IDs](/cookbook/immutable-entities-bytes-as-ids/) + +4. [Improve Indexing Speed by Avoiding `eth_calls`](/cookbook/avoid-eth-calls/) + +5. [Simplify and Optimize with Timeseries and Aggregations](/cookbook/timeseries/) + +6. [Use Grafting for Quick Hotfix Deployment](/cookbook/grafting-hotfix/) diff --git a/website/pages/en/cookbook/grafting-hotfix.mdx b/website/pages/en/cookbook/grafting-hotfix.mdx new file mode 100644 index 000000000000..4be0a0b07790 --- /dev/null +++ b/website/pages/en/cookbook/grafting-hotfix.mdx @@ -0,0 +1,186 @@ +--- +title: Subgraph Best Practice 6 - Use Grafting for Quick Hotfix Deployment +--- + +## TLDR + +Grafting is a powerful feature in subgraph development that allows you to build and deploy new subgraphs while reusing the indexed data from existing ones. + +### Overview + +This feature enables quick deployment of hotfixes for critical issues, eliminating the need to re-index the entire subgraph from scratch. By preserving historical data, grafting minimizes downtime and ensures continuity in data services. + +## Benefits of Grafting for Hotfixes + +1. **Rapid Deployment** + + - **Minimize Downtime**: When a subgraph encounters a critical error and stops indexing, grafting enables you to deploy a fix immediately without waiting for re-indexing. + - **Immediate Recovery**: The new subgraph continues from the last indexed block, ensuring that data services remain uninterrupted. + +2. **Data Preservation** + + - **Reuse Historical Data**: Grafting copies the existing data from the base subgraph, so you don’t lose valuable historical records. + - **Consistency**: Maintains data continuity, which is crucial for applications relying on consistent historical data. + +3. **Efficiency** + - **Save Time and Resources**: Avoids the computational overhead of re-indexing large datasets.
+ - **Focus on Fixes**: Allows developers to concentrate on resolving issues rather than managing data recovery. + +## Best Practices When Using Grafting for Hotfixes + +1. **Initial Deployment Without Grafting** + + - **Start Clean**: Always deploy your initial subgraph without grafting to ensure that it’s stable and functions as expected. + - **Test Thoroughly**: Validate the subgraph’s performance to minimize the need for future hotfixes. + +2. **Implementing the Hotfix with Grafting** + + - **Identify the Issue**: When a critical error occurs, determine the block number of the last successfully indexed event. + - **Create a New Subgraph**: Develop a new subgraph that includes the hotfix. + - **Configure Grafting**: Use grafting to copy data up to the identified block number from the failed subgraph. + - **Deploy Quickly**: Publish the grafted subgraph to restore service as soon as possible. + +3. **Post-Hotfix Actions** + + - **Monitor Performance**: Ensure the grafted subgraph is indexing correctly and the hotfix resolves the issue. + - **Republish Without Grafting**: Once stable, deploy a new version of the subgraph without grafting for long-term maintenance. + > Note: Relying on grafting indefinitely is not recommended as it can complicate future updates and maintenance. + - **Update References**: Redirect any services or applications to use the new, non-grafted subgraph. + +4. **Important Considerations** + - **Careful Block Selection**: Choose the graft block number carefully to prevent data loss. + - **Tip**: Use the block number of the last correctly processed event. + - **Use Deployment ID**: Ensure you reference the Deployment ID of the base subgraph, not the Subgraph ID. + - **Note**: The Deployment ID is the unique identifier for a specific subgraph deployment. + - **Feature Declaration**: Remember to declare grafting in the subgraph manifest under features. 
+ +## Example: Deploying a Hotfix with Grafting + +Suppose you have a subgraph tracking a smart contract that has stopped indexing due to a critical error. Here’s how you can use grafting to deploy a hotfix. + +1. **Failed Subgraph Manifest (subgraph.yaml)** + + ```yaml + specVersion: 1.0.0 + schema: + file: ./schema.graphql + dataSources: + - kind: ethereum/contract + name: OldSmartContract + network: sepolia + source: + address: '0xOldContractAddress' + abi: Lock + startBlock: 5000000 + mapping: + kind: ethereum/events + apiVersion: 0.0.7 + language: wasm/assemblyscript + entities: + - Withdrawal + abis: + - name: Lock + file: ./abis/OldLock.json + eventHandlers: + - event: Withdrawal(uint256,uint256) + handler: handleOldWithdrawal + file: ./src/old-lock.ts + ``` + +2. **New Grafted Subgraph Manifest (subgraph.yaml)** + ```yaml + specVersion: 1.0.0 + schema: + file: ./schema.graphql + dataSources: + - kind: ethereum/contract + name: NewSmartContract + network: sepolia + source: + address: '0xNewContractAddress' + abi: Lock + startBlock: 6000001 # Block after the last indexed block + mapping: + kind: ethereum/events + apiVersion: 0.0.7 + language: wasm/assemblyscript + entities: + - Withdrawal + abis: + - name: Lock + file: ./abis/Lock.json + eventHandlers: + - event: Withdrawal(uint256,uint256) + handler: handleWithdrawal + file: ./src/lock.ts + features: + - grafting + graft: + base: QmBaseDeploymentID # Deployment ID of the failed subgraph + block: 6000000 # Last successfully indexed block + ``` + +**Explanation:** + +- **Data Source Update**: The new subgraph points to 0xNewContractAddress, which may be a fixed version of the smart contract. +- **Start Block**: Set to one block after the last successfully indexed block to avoid reprocessing the error. +- **Grafting Configuration**: + - **base**: Deployment ID of the failed subgraph. + - **block**: Block number where grafting should begin. + +3. 
**Deployment Steps** + + - **Update the Code**: Implement the hotfix in your mapping scripts (e.g., handleWithdrawal). + - **Adjust the Manifest**: As shown above, update the `subgraph.yaml` with grafting configurations. + - **Deploy the Subgraph**: + - Authenticate with the Graph CLI. + - Deploy the new subgraph using `graph deploy`. + +4. **Post-Deployment** + - **Verify Indexing**: Check that the subgraph is indexing correctly from the graft point. + - **Monitor Data**: Ensure that new data is being captured and the hotfix is effective. + - **Plan for Republish**: Schedule the deployment of a non-grafted version for long-term stability. + +## Warnings and Cautions + +While grafting is a powerful tool for deploying hotfixes quickly, there are specific scenarios where it should be avoided to maintain data integrity and ensure optimal performance. + +- **Incompatible Schema Changes**: If your hotfix requires altering the type of existing fields or removing fields from your schema, grafting is not suitable. Grafting expects the new subgraph’s schema to be compatible with the base subgraph’s schema. Incompatible changes can lead to data inconsistencies and errors because the existing data won’t align with the new schema. +- **Significant Mapping Logic Overhauls**: When the hotfix involves substantial modifications to your mapping logic—such as changing how events are processed or altering handler functions—grafting may not function correctly. The new logic might not be compatible with the data processed under the old logic, leading to incorrect data or failed indexing. +- **Deployments to The Graph Network**: Grafting is not recommended for subgraphs intended for The Graph’s decentralized network (mainnet). It can complicate indexing and may not be fully supported by all Indexers, potentially causing unexpected behavior or increased costs. For mainnet deployments, it’s safer to re-index the subgraph from scratch to ensure full compatibility and reliability. 
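Assuming the standard Graph CLI workflow, the deployment steps above boil down to a few commands; `<DEPLOY_KEY>` and `<SUBGRAPH_SLUG>` below are placeholders for your own values:

```sh
# Authenticate once with your Subgraph Studio deploy key
graph auth --studio <DEPLOY_KEY>

# Regenerate types and compile the fixed mappings
graph codegen && graph build

# Publish the grafted version
graph deploy --studio <SUBGRAPH_SLUG>
```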
+ +### Risk Management + +- **Data Integrity**: Incorrect block numbers can lead to data loss or duplication. +- **Testing**: Always test grafting in a development environment before deploying to production. + +## Conclusion + +Grafting is an effective strategy for deploying hotfixes in subgraph development, enabling you to: + +- **Quickly Recover** from critical errors without re-indexing. +- **Preserve Historical Data**, maintaining continuity for applications and users. +- **Ensure Service Availability** by minimizing downtime during critical fixes. + +However, it’s important to use grafting judiciously and follow best practices to mitigate risks. After stabilizing your subgraph with the hotfix, plan to deploy a non-grafted version to ensure long-term maintainability. + +## Additional Resources + +- **[Grafting Documentation](/cookbook/grafting/)**: Replace a Contract and Keep its History With Grafting +- **[Understanding Deployment IDs](/querying/querying-by-subgraph-id-vs-deployment-id/)**: Learn the difference between Deployment ID and Subgraph ID. + +By incorporating grafting into your subgraph development workflow, you can enhance your ability to respond to issues swiftly, ensuring that your data services remain robust and reliable. + +## Subgraph Best Practices 1-6 + +1. [Improve Query Speed with Subgraph Pruning](/cookbook/pruning/) + +2. [Improve Indexing and Query Responsiveness by Using @derivedFrom](/cookbook/derivedfrom/) + +3. [Improve Indexing and Query Performance by Using Immutable Entities and Bytes as IDs](/cookbook/immutable-entities-bytes-as-ids/) + +4. [Improve Indexing Speed by Avoiding `eth_calls`](/cookbook/avoid-eth-calls/) + +5. [Simplify and Optimize with Timeseries and Aggregations](/cookbook/timeseries/) + +6. 
[Use Grafting for Quick Hotfix Deployment](/cookbook/grafting-hotfix/) diff --git a/website/pages/en/cookbook/immutable-entities-bytes-as-ids.mdx b/website/pages/en/cookbook/immutable-entities-bytes-as-ids.mdx index f38c33385604..725e53d1cf53 100644 --- a/website/pages/en/cookbook/immutable-entities-bytes-as-ids.mdx +++ b/website/pages/en/cookbook/immutable-entities-bytes-as-ids.mdx @@ -174,3 +174,17 @@ Query Response: Using both Immutable Entities and Bytes as IDs has been shown to markedly improve subgraph efficiency. Specifically, tests have highlighted up to a 28% increase in query performance and up to a 48% acceleration in indexing speeds. Read more about using Immutable Entities and Bytes as IDs in this blog post by David Lutterkort, a Software Engineer at Edge & Node: [Two Simple Subgraph Performance Improvements](https://thegraph.com/blog/two-simple-subgraph-performance-improvements/). + +## Subgraph Best Practices 1-6 + +1. [Improve Query Speed with Subgraph Pruning](/cookbook/pruning/) + +2. [Improve Indexing and Query Responsiveness by Using @derivedFrom](/cookbook/derivedfrom/) + +3. [Improve Indexing and Query Performance by Using Immutable Entities and Bytes as IDs](/cookbook/immutable-entities-bytes-as-ids/) + +4. [Improve Indexing Speed by Avoiding `eth_calls`](/cookbook/avoid-eth-calls/) + +5. [Simplify and Optimize with Timeseries and Aggregations](/cookbook/timeseries/) + +6. [Use Grafting for Quick Hotfix Deployment](/cookbook/grafting-hotfix/) diff --git a/website/pages/en/cookbook/pruning.mdx b/website/pages/en/cookbook/pruning.mdx index f22a2899f1de..d79d5b8911f9 100644 --- a/website/pages/en/cookbook/pruning.mdx +++ b/website/pages/en/cookbook/pruning.mdx @@ -39,3 +39,17 @@ dataSources: ## Conclusion Pruning using `indexerHints` is a best practice for subgraph development, offering significant query performance improvements. + +## Subgraph Best Practices 1-6 + +1. [Improve Query Speed with Subgraph Pruning](/cookbook/pruning/) + +2. 
[Improve Indexing and Query Responsiveness by Using @derivedFrom](/cookbook/derivedfrom/) + +3. [Improve Indexing and Query Performance by Using Immutable Entities and Bytes as IDs](/cookbook/immutable-entities-bytes-as-ids/) + +4. [Improve Indexing Speed by Avoiding `eth_calls`](/cookbook/avoid-eth-calls/) + +5. [Simplify and Optimize with Timeseries and Aggregations](/cookbook/timeseries/) + +6. [Use Grafting for Quick Hotfix Deployment](/cookbook/grafting-hotfix/) diff --git a/website/pages/en/cookbook/timeseries.mdx b/website/pages/en/cookbook/timeseries.mdx new file mode 100644 index 000000000000..88ee70005a6e --- /dev/null +++ b/website/pages/en/cookbook/timeseries.mdx @@ -0,0 +1,194 @@ +--- +title: Subgraph Best Practice 5 - Simplify and Optimize with Timeseries and Aggregations +--- + +## TLDR + +Leveraging the new time-series and aggregations feature in subgraphs can significantly enhance both indexing speed and query performance. + +## Overview + +Timeseries and aggregations reduce data processing overhead and accelerate queries by offloading aggregation computations to the database and simplifying mapping code. This approach is particularly effective when handling large volumes of time-based data. + +## Benefits of Timeseries and Aggregations + +1. Improved Indexing Time + +- Less Data to Load: Mappings handle less data since raw data points are stored as immutable timeseries entities. +- Database-Managed Aggregations: Aggregations are automatically computed by the database, reducing the workload on the mappings. + +2. Simplified Mapping Code + +- No Manual Calculations: Developers no longer need to write complex aggregation logic in mappings. +- Reduced Complexity: Simplifies code maintenance and minimizes the potential for errors. + +3. Dramatically Faster Queries + +- Immutable Data: All timeseries data is immutable, enabling efficient storage and retrieval. 
+- Efficient Data Separation: Aggregates are stored separately from raw timeseries data, allowing queries to process significantly less data—often several orders of magnitude less. + +### Important Considerations + +- Immutable Data: Timeseries data cannot be altered once written, ensuring data integrity and simplifying indexing. +- Automatic ID and Timestamp Management: id and timestamp fields are automatically managed by graph-node, reducing potential errors. +- Efficient Data Storage: By separating raw data from aggregates, storage is optimized, and queries run faster. + +## How to Implement Timeseries and Aggregations + +### Defining Timeseries Entities + +A timeseries entity represents raw data points collected over time. It is defined with the `@entity(timeseries: true)` annotation. Key requirements: + +- Immutable: Timeseries entities are always immutable. +- Mandatory Fields: + - `id`: Must be of type `Int8!` and is auto-incremented. + - `timestamp`: Must be of type `Timestamp!` and is automatically set to the block timestamp. + +Example: + +```graphql +type Data @entity(timeseries: true) { + id: Int8! + timestamp: Timestamp! + price: BigDecimal! +} +``` + +### Defining Aggregation Entities + +An aggregation entity computes aggregated values from a timeseries source. It is defined with the `@aggregation` annotation. Key components: + +- Annotation Arguments: + - `intervals`: Specifies time intervals (e.g., `["hour", "day"]`). + +Example: + +```graphql +type Stats @aggregation(intervals: ["hour", "day"], source: "Data") { + id: Int8! + timestamp: Timestamp! + sum: BigDecimal! @aggregate(fn: "sum", arg: "price") +} +``` + +In this example, Stats aggregates the price field from Data over hourly and daily intervals, computing the sum. + +### Querying Aggregated Data + +Aggregations are exposed via query fields that allow filtering and retrieval based on dimensions and time intervals. 
+ +Example: + +```graphql +{ + tokenStats( + interval: "hour" + where: { token: "0x1234567890abcdef", timestamp_gte: "1704164640000000", timestamp_lt: "1704251040000000" } + ) { + id + timestamp + token { + id + } + totalVolume + priceUSD + count + } +} +``` + +### Using Dimensions in Aggregations + +Dimensions are non-aggregated fields used to group data points. They enable aggregations based on specific criteria, such as a token in a financial application. + +Example: + +### Timeseries Entity + +```graphql +type TokenData @entity(timeseries: true) { + id: Int8! + timestamp: Timestamp! + token: Token! + amount: BigDecimal! + priceUSD: BigDecimal! +} +``` + +### Aggregation Entity with Dimension + +```graphql +type TokenStats @aggregation(intervals: ["hour", "day"], source: "TokenData") { + id: Int8! + timestamp: Timestamp! + token: Token! + totalVolume: BigDecimal! @aggregate(fn: "sum", arg: "amount") + priceUSD: BigDecimal! @aggregate(fn: "last", arg: "priceUSD") + count: Int8! @aggregate(fn: "count", cumulative: true) +} +``` + +- Dimension Field: token groups the data, so aggregates are computed per token. +- Aggregates: + - totalVolume: Sum of amount. + - priceUSD: Last recorded priceUSD. + - count: Cumulative count of records. + +### Aggregation Functions and Expressions + +Supported aggregation functions: + +- sum +- count +- min +- max +- first +- last + +### The arg in @aggregate can be + +- A field name from the timeseries entity. +- An expression using fields and constants. 
+ +### Examples of Aggregation Expressions + +- Sum Token Value: @aggregate(fn: "sum", arg: "priceUSD \* amount") +- Maximum Positive Amount: @aggregate(fn: "max", arg: "greatest(amount0, amount1, 0)") +- Conditional Sum: @aggregate(fn: "sum", arg: "case when amount0 > amount1 then amount0 else 0 end") + +Supported operators and functions include basic arithmetic (+, -, \*, /), comparison operators, logical operators (and, or, not), and SQL functions like greatest, least, coalesce, etc. + +### Query Parameters + +- interval: Specifies the time interval (e.g., "hour"). +- where: Filters based on dimensions and timestamp ranges. +- timestamp_gte / timestamp_lt: Filters for start and end times (microseconds since epoch). + +### Notes + +- Sorting: Results are automatically sorted by timestamp and id in descending order. +- Current Data: An optional current argument can include the current, partially filled interval. + +### Conclusion + +Implementing timeseries and aggregations in subgraphs is a best practice for projects dealing with time-based data. This approach: + +- Enhances Performance: Speeds up indexing and querying by reducing data processing overhead. +- Simplifies Development: Eliminates the need for manual aggregation logic in mappings. +- Scales Efficiently: Handles large volumes of data without compromising on speed or responsiveness. + +By adopting this pattern, developers can build more efficient and scalable subgraphs, providing faster and more reliable data access to end-users. To learn more about implementing timeseries and aggregations, refer to the [Timeseries and Aggregations Readme](https://github.com/graphprotocol/graph-node/blob/master/docs/aggregations.md) and consider experimenting with this feature in your subgraphs. + +## Subgraph Best Practices 1-6 + +1. [Improve Query Speed with Subgraph Pruning](/cookbook/pruning/) + +2. [Improve Indexing and Query Responsiveness by Using @derivedFrom](/cookbook/derivedfrom/) + +3.
[Improve Indexing and Query Performance by Using Immutable Entities and Bytes as IDs](/cookbook/immutable-entities-bytes-as-ids/) + +4. [Improve Indexing Speed by Avoiding `eth_calls`](/cookbook/avoid-eth-calls/) + +5. [Simplify and Optimize with Timeseries and Aggregations](/cookbook/timeseries/) + +6. [Use Grafting for Quick Hotfix Deployment](/cookbook/grafting-hotfix/) diff --git a/website/pages/es/cookbook/base-testnet.mdx b/website/pages/es/cookbook/base-testnet.mdx deleted file mode 100644 index d22f9310155e..000000000000 --- a/website/pages/es/cookbook/base-testnet.mdx +++ /dev/null @@ -1,111 +0,0 @@ ---- -title: Construcción de subgrafos en Base ---- - -Esta guía te conducirá rápidamente a través de cómo iniciar, crear e deployar tu subgrafo en Base testnet. - -Lo que necesitarás: - -- A Base Sepolia testnet contract address -- Una wallet cripto (por ejemplo, MetaMask o Coinbase Wallet) - -## Subgraph Studio - -### 1. Instala The Graph CLI - -La CLI de The Graph (>=v0.41.0) está escrita en JavaScript y necesitarás tener instalado `npm` o `yarn` para usarlo. - -```sh -# NPM -npm install -g @graphprotocol/graph-cli - -# Yarn -yarn global add @graphprotocol/graph-cli -``` - -### 2. Create your subgraph in Subgraph Studio - -Go to [Subgraph Studio](https://thegraph.com/studio/) and connect your crypto wallet. - -Once connected, click "Create a Subgraph", enter a name for your subgraph and click Create a Subgraph. - -### 3. Inicia tu subgrafo - -> You can find specific commands for your subgraph in Subgraph Studio. - -Asegúrate de que el graph-cli esté actualizado a la última versión (superior a 0.41.0) - -```sh -graph --versión -``` - -Inicia tu subgrafo a partir de un contrato existente. - -```sh -graph init --studio -``` - -Tu slug de subgrafo es un identificador para tu subgrafo. 
La herramienta CLI te guiará a través de los pasos para crear un subgrafo, que incluyen: - -- Protocolo: ethereum -- Slug de subgrafo: `` -- Directorio para crear el subgrafo en: `` -- Ethereum network: base-sepolia -- Contract address: `` -- Start block (optional) -- Contract name: `` -- Yes/no to indexing events (yes means your subgraph will be bootstrapped with entities in the schema and simple mappings for emitted events) - -### 3. Write your Subgraph - -> If emitted events are the only thing you want to index, then no additional work is required, and you can skip to the next step. - -The previous command creates a scaffold subgraph that you can use as a starting point for building your subgraph. When making changes to the subgraph, you will mainly work with three files: - -- Manifest (subgraph.yaml) - The manifest defines what datasources your subgraphs will index. Make sure to add `base-sepolia` as the network name in manifest file to deploy your subgraph on Base Sepolia. -- Schema (schema.graphql) - The GraphQL schema defines what data you wish to retreive from the subgraph. -- AssemblyScript Mappings (mapping.ts) - Este es el código que traduce los datos de tus fuentes de datos a las entidades definidas en el esquema. - -If you want to index additional data, you will need extend the manifest, schema and mappings. - -For more information on how to write your subgraph, see [Creating a Subgraph](/developing/creating-a-subgraph). - -### 4. Deploy to Subgraph Studio - -Before you can deploy your subgraph, you will need to authenticate with Subgraph Studio. You can do this by running the following command: - -Authenticate the subgraph on studio - -``` -graph auth --studio -``` - -Next, enter your subgraph's directory. - -``` - cd -``` - -Build your subgraph with the following command: - -```` -``` -graph codegen && graph build -``` -```` - -Finally, you can deploy your subgraph using this command: - -```` -``` -graph deploy --studio -``` -```` - -### 5. 
Query your subgraph - -Once your subgraph is deployed, you can query it from your dapp using the `Development Query URL` in Subgraph Studio. - -Note - Studio API is rate-limited. Hence should preferably be used for development and testing. - -To learn more about querying data from your subgraph, see the [Querying a Subgraph](/querying/querying-the-graph) page. diff --git a/website/pages/es/cookbook/grafting-hotfix.mdx b/website/pages/es/cookbook/grafting-hotfix.mdx new file mode 100644 index 000000000000..4be0a0b07790 --- /dev/null +++ b/website/pages/es/cookbook/grafting-hotfix.mdx @@ -0,0 +1,186 @@ +--- +title: Subgraph Best Practice 6 - Use Grafting for Quick Hotfix Deployment +--- + +## TLDR + +Grafting is a powerful feature in subgraph development that allows you to build and deploy new subgraphs while reusing the indexed data from existing ones. + +### Overview + +This feature enables quick deployment of hotfixes for critical issues, eliminating the need to re-index the entire subgraph from scratch. By preserving historical data, grafting minimizes downtime and ensures continuity in data services. + +## Benefits of Grafting for Hotfixes + +1. **Rapid Deployment** + + - **Minimize Downtime**: When a subgraph encounters a critical error and stops indexing, grafting enables you to deploy a fix immediately without waiting for re-indexing. + - **Immediate Recovery**: The new subgraph continues from the last indexed block, ensuring that data services remain uninterrupted. + +2. **Data Preservation** + + - **Reuse Historical Data**: Grafting copies the existing data from the base subgraph, so you don’t lose valuable historical records. + - **Consistency**: Maintains data continuity, which is crucial for applications relying on consistent historical data. + +3. **Efficiency** + - **Save Time and Resources**: Avoids the computational overhead of re-indexing large datasets.
+ - **Focus on Fixes**: Allows developers to concentrate on resolving issues rather than managing data recovery. + +## Best Practices When Using Grafting for Hotfixes + +1. **Initial Deployment Without Grafting** + + - **Start Clean**: Always deploy your initial subgraph without grafting to ensure that it’s stable and functions as expected. + - **Test Thoroughly**: Validate the subgraph’s performance to minimize the need for future hotfixes. + +2. **Implementing the Hotfix with Grafting** + + - **Identify the Issue**: When a critical error occurs, determine the block number of the last successfully indexed event. + - **Create a New Subgraph**: Develop a new subgraph that includes the hotfix. + - **Configure Grafting**: Use grafting to copy data up to the identified block number from the failed subgraph. + - **Deploy Quickly**: Publish the grafted subgraph to restore service as soon as possible. + +3. **Post-Hotfix Actions** + + - **Monitor Performance**: Ensure the grafted subgraph is indexing correctly and the hotfix resolves the issue. + - **Republish Without Grafting**: Once stable, deploy a new version of the subgraph without grafting for long-term maintenance. + > Note: Relying on grafting indefinitely is not recommended as it can complicate future updates and maintenance. + - **Update References**: Redirect any services or applications to use the new, non-grafted subgraph. + +4. **Important Considerations** + - **Careful Block Selection**: Choose the graft block number carefully to prevent data loss. + - **Tip**: Use the block number of the last correctly processed event. + - **Use Deployment ID**: Ensure you reference the Deployment ID of the base subgraph, not the Subgraph ID. + - **Note**: The Deployment ID is the unique identifier for a specific subgraph deployment. + - **Feature Declaration**: Remember to declare grafting in the subgraph manifest under features. 
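When identifying the graft block, one low-effort check is the subgraph's built-in `_meta` field, which graph-node exposes on every subgraph's GraphQL endpoint. A sketch of such a query, run against the failed deployment's query URL:

```graphql
{
  _meta {
    deployment
    hasIndexingErrors
    block {
      number
      hash
    }
  }
}
```

The returned `block.number` is the last block the deployment processed; for the `graft.block` value it is safer to step back to the last block whose events you have verified were handled correctly.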
+ +## Example: Deploying a Hotfix with Grafting + +Suppose you have a subgraph tracking a smart contract that has stopped indexing due to a critical error. Here’s how you can use grafting to deploy a hotfix. + +1. **Failed Subgraph Manifest (subgraph.yaml)** + + ```yaml + specVersion: 1.0.0 + schema: + file: ./schema.graphql + dataSources: + - kind: ethereum/contract + name: OldSmartContract + network: sepolia + source: + address: '0xOldContractAddress' + abi: Lock + startBlock: 5000000 + mapping: + kind: ethereum/events + apiVersion: 0.0.7 + language: wasm/assemblyscript + entities: + - Withdrawal + abis: + - name: Lock + file: ./abis/OldLock.json + eventHandlers: + - event: Withdrawal(uint256,uint256) + handler: handleOldWithdrawal + file: ./src/old-lock.ts + ``` + +2. **New Grafted Subgraph Manifest (subgraph.yaml)** + ```yaml + specVersion: 1.0.0 + schema: + file: ./schema.graphql + dataSources: + - kind: ethereum/contract + name: NewSmartContract + network: sepolia + source: + address: '0xNewContractAddress' + abi: Lock + startBlock: 6000001 # Block after the last indexed block + mapping: + kind: ethereum/events + apiVersion: 0.0.7 + language: wasm/assemblyscript + entities: + - Withdrawal + abis: + - name: Lock + file: ./abis/Lock.json + eventHandlers: + - event: Withdrawal(uint256,uint256) + handler: handleWithdrawal + file: ./src/lock.ts + features: + - grafting + graft: + base: QmBaseDeploymentID # Deployment ID of the failed subgraph + block: 6000000 # Last successfully indexed block + ``` + +**Explanation:** + +- **Data Source Update**: The new subgraph points to 0xNewContractAddress, which may be a fixed version of the smart contract. +- **Start Block**: Set to one block after the last successfully indexed block to avoid reprocessing the error. +- **Grafting Configuration**: + - **base**: Deployment ID of the failed subgraph. + - **block**: Block number where grafting should begin. + +3. 
**Deployment Steps** + + - **Update the Code**: Implement the hotfix in your mapping scripts (e.g., handleWithdrawal). + - **Adjust the Manifest**: As shown above, update the `subgraph.yaml` with grafting configurations. + - **Deploy the Subgraph**: + - Authenticate with the Graph CLI. + - Deploy the new subgraph using `graph deploy`. + +4. **Post-Deployment** + - **Verify Indexing**: Check that the subgraph is indexing correctly from the graft point. + - **Monitor Data**: Ensure that new data is being captured and the hotfix is effective. + - **Plan for Republish**: Schedule the deployment of a non-grafted version for long-term stability. + +## Warnings and Cautions + +While grafting is a powerful tool for deploying hotfixes quickly, there are specific scenarios where it should be avoided to maintain data integrity and ensure optimal performance. + +- **Incompatible Schema Changes**: If your hotfix requires altering the type of existing fields or removing fields from your schema, grafting is not suitable. Grafting expects the new subgraph’s schema to be compatible with the base subgraph’s schema. Incompatible changes can lead to data inconsistencies and errors because the existing data won’t align with the new schema. +- **Significant Mapping Logic Overhauls**: When the hotfix involves substantial modifications to your mapping logic—such as changing how events are processed or altering handler functions—grafting may not function correctly. The new logic might not be compatible with the data processed under the old logic, leading to incorrect data or failed indexing. +- **Deployments to The Graph Network**: Grafting is not recommended for subgraphs intended for The Graph’s decentralized network (mainnet). It can complicate indexing and may not be fully supported by all Indexers, potentially causing unexpected behavior or increased costs. For mainnet deployments, it’s safer to re-index the subgraph from scratch to ensure full compatibility and reliability. 
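Assuming the standard Graph CLI workflow, the deployment steps above boil down to a few commands; `<DEPLOY_KEY>` and `<SUBGRAPH_SLUG>` below are placeholders for your own values:

```sh
# Authenticate once with your Subgraph Studio deploy key
graph auth --studio <DEPLOY_KEY>

# Regenerate types and compile the fixed mappings
graph codegen && graph build

# Publish the grafted version
graph deploy --studio <SUBGRAPH_SLUG>
```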
+ +### Risk Management + +- **Data Integrity**: Incorrect block numbers can lead to data loss or duplication. +- **Testing**: Always test grafting in a development environment before deploying to production. + +## Conclusion + +Grafting is an effective strategy for deploying hotfixes in subgraph development, enabling you to: + +- **Quickly Recover** from critical errors without re-indexing. +- **Preserve Historical Data**, maintaining continuity for applications and users. +- **Ensure Service Availability** by minimizing downtime during critical fixes. + +However, it’s important to use grafting judiciously and follow best practices to mitigate risks. After stabilizing your subgraph with the hotfix, plan to deploy a non-grafted version to ensure long-term maintainability. + +## Additional Resources + +- **[Grafting Documentation](/cookbook/grafting/)**: Replace a Contract and Keep its History With Grafting +- **[Understanding Deployment IDs](/querying/querying-by-subgraph-id-vs-deployment-id/)**: Learn the difference between Deployment ID and Subgraph ID. + +By incorporating grafting into your subgraph development workflow, you can enhance your ability to respond to issues swiftly, ensuring that your data services remain robust and reliable. + +## Subgraph Best Practices 1-6 + +1. [Improve Query Speed with Subgraph Pruning](/cookbook/pruning/) + +2. [Improve Indexing and Query Responsiveness by Using @derivedFrom](/cookbook/derivedfrom/) + +3. [Improve Indexing and Query Performance by Using Immutable Entities and Bytes as IDs](/cookbook/immutable-entities-bytes-as-ids/) + +4. [Improve Indexing Speed by Avoiding `eth_calls`](/cookbook/avoid-eth-calls/) + +5. [Simplify and Optimize with Timeseries and Aggregations](/cookbook/timeseries/) + +6. 
[Use Grafting for Quick Hotfix Deployment](/cookbook/grafting-hotfix/) diff --git a/website/pages/es/cookbook/timeseries.mdx b/website/pages/es/cookbook/timeseries.mdx new file mode 100644 index 000000000000..88ee70005a6e --- /dev/null +++ b/website/pages/es/cookbook/timeseries.mdx @@ -0,0 +1,194 @@ +--- +title: Subgraph Best Practice 5 - Simplify and Optimize with Timeseries and Aggregations +--- + +## TLDR + +Leveraging the new time-series and aggregations feature in subgraphs can significantly enhance both indexing speed and query performance. + +## Overview + +Timeseries and aggregations reduce data processing overhead and accelerate queries by offloading aggregation computations to the database and simplifying mapping code. This approach is particularly effective when handling large volumes of time-based data. + +## Benefits of Timeseries and Aggregations + +1. Improved Indexing Time + +- Less Data to Load: Mappings handle less data since raw data points are stored as immutable timeseries entities. +- Database-Managed Aggregations: Aggregations are automatically computed by the database, reducing the workload on the mappings. + +2. Simplified Mapping Code + +- No Manual Calculations: Developers no longer need to write complex aggregation logic in mappings. +- Reduced Complexity: Simplifies code maintenance and minimizes the potential for errors. + +3. Dramatically Faster Queries + +- Immutable Data: All timeseries data is immutable, enabling efficient storage and retrieval. +- Efficient Data Separation: Aggregates are stored separately from raw timeseries data, allowing queries to process significantly less data—often several orders of magnitude less. + +### Important Considerations + +- Immutable Data: Timeseries data cannot be altered once written, ensuring data integrity and simplifying indexing. +- Automatic ID and Timestamp Management: id and timestamp fields are automatically managed by graph-node, reducing potential errors. 
+- Efficient Data Storage: By separating raw data from aggregates, storage is optimized, and queries run faster. + +## How to Implement Timeseries and Aggregations + +### Defining Timeseries Entities + +A timeseries entity represents raw data points collected over time. It is defined with the `@entity(timeseries: true)` annotation. Key requirements: + +- Immutable: Timeseries entities are always immutable. +- Mandatory Fields: + - `id`: Must be of type `Int8!` and is auto-incremented. + - `timestamp`: Must be of type `Timestamp!` and is automatically set to the block timestamp. + +Example: + +```graphql +type Data @entity(timeseries: true) { + id: Int8! + timestamp: Timestamp! + price: BigDecimal! +} +``` + +### Defining Aggregation Entities + +An aggregation entity computes aggregated values from a timeseries source. It is defined with the `@aggregation` annotation. Key components: + +- Annotation Arguments: + - `intervals`: Specifies time intervals (e.g., `["hour", "day"]`). + +Example: + +```graphql +type Stats @aggregation(intervals: ["hour", "day"], source: "Data") { + id: Int8! + timestamp: Timestamp! + sum: BigDecimal! @aggregate(fn: "sum", arg: "price") +} +``` + +In this example, Stats aggregates the price field from Data over hourly and daily intervals, computing the sum. + +### Querying Aggregated Data + +Aggregations are exposed via query fields that allow filtering and retrieval based on dimensions and time intervals. + +Example: + +```graphql +{ + tokenStats( + interval: "hour" + where: { token: "0x1234567890abcdef", timestamp_gte: "1704164640000000", timestamp_lt: "1704251040000000" } + ) { + id + timestamp + token { + id + } + totalVolume + priceUSD + count + } +} +``` + +### Using Dimensions in Aggregations + +Dimensions are non-aggregated fields used to group data points. They enable aggregations based on specific criteria, such as a token in a financial application. 
+ +Example: + +### Timeseries Entity + +```graphql +type TokenData @entity(timeseries: true) { + id: Int8! + timestamp: Timestamp! + token: Token! + amount: BigDecimal! + priceUSD: BigDecimal! +} +``` + +### Aggregation Entity with Dimension + +```graphql +type TokenStats @aggregation(intervals: ["hour", "day"], source: "TokenData") { + id: Int8! + timestamp: Timestamp! + token: Token! + totalVolume: BigDecimal! @aggregate(fn: "sum", arg: "amount") + priceUSD: BigDecimal! @aggregate(fn: "last", arg: "priceUSD") + count: Int8! @aggregate(fn: "count", cumulative: true) +} +``` + +- Dimension Field: token groups the data, so aggregates are computed per token. +- Aggregates: + - totalVolume: Sum of amount. + - priceUSD: Last recorded priceUSD. + - count: Cumulative count of records. + +### Aggregation Functions and Expressions + +Supported aggregation functions: + +- sum +- count +- min +- max +- first +- last + +### The arg in @aggregate can be + +- A field name from the timeseries entity. +- An expression using fields and constants. + +### Examples of Aggregation Expressions + +- Sum Token Value: @aggregate(fn: "sum", arg: "priceUSD \* amount") +- Maximum Positive Amount: @aggregate(fn: "max", arg: "greatest(amount0, amount1, 0)") +- Conditional Sum: @aggregate(fn: "sum", arg: "case when amount0 > amount1 then amount0 else 0 end") + +Supported operators and functions include basic arithmetic (+, -, \*, /), comparison operators, logical operators (and, or, not), and SQL functions like greatest, least, coalesce, etc. + +### Query Parameters + +- interval: Specifies the time interval (e.g., "hour"). +- where: Filters based on dimensions and timestamp ranges. +- timestamp_gte / timestamp_lt: Filters for start and end times (microseconds since epoch). + +### Notes + +- Sorting: Results are automatically sorted by timestamp and id in descending order. +- Current Data: An optional current argument can include the current, partially filled interval.
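Pulling the expression examples together, here is a sketch of an aggregation that uses them in a schema; the `SwapData` source and its fields are hypothetical:

```graphql
# Hypothetical timeseries source for the expression examples
type SwapData @entity(timeseries: true) {
  id: Int8!
  timestamp: Timestamp!
  amount: BigDecimal!
  amount0: BigDecimal!
  amount1: BigDecimal!
  priceUSD: BigDecimal!
}

type SwapStats @aggregation(intervals: ["hour", "day"], source: "SwapData") {
  id: Int8!
  timestamp: Timestamp!
  # Sum of token value: price multiplied by amount
  totalValueUSD: BigDecimal! @aggregate(fn: "sum", arg: "priceUSD * amount")
  # Largest non-negative amount observed in the interval
  maxAmount: BigDecimal! @aggregate(fn: "max", arg: "greatest(amount0, amount1, 0)")
}
```

Because these expressions are evaluated by the database, no mapping code is needed for the aggregated fields.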
+ +### Conclusion + +Implementing timeseries and aggregations in subgraphs is a best practice for projects dealing with time-based data. This approach: + +- Enhances Performance: Speeds up indexing and querying by reducing data processing overhead. +- Simplifies Development: Eliminates the need for manual aggregation logic in mappings. +- Scales Efficiently: Handles large volumes of data without compromising on speed or responsiveness. + +By adopting this pattern, developers can build more efficient and scalable subgraphs, providing faster and more reliable data access to end-users. To learn more about implementing timeseries and aggregations, refer to the [Timeseries and Aggregations Readme](https://github.com/graphprotocol/graph-node/blob/master/docs/aggregations.md) and consider experimenting with this feature in your subgraphs. + +## Subgraph Best Practices 1-6 + +1. [Improve Query Speed with Subgraph Pruning](/cookbook/pruning/) + +2. [Improve Indexing and Query Responsiveness by Using @derivedFrom](/cookbook/derivedfrom/) + +3. [Improve Indexing and Query Performance by Using Immutable Entities and Bytes as IDs](/cookbook/immutable-entities-bytes-as-ids/) + +4. [Improve Indexing Speed by Avoiding `eth_calls`](/cookbook/avoid-eth-calls/) + +5. [Simplify and Optimize with Timeseries and Aggregations](/cookbook/timeseries/) + +6. [Use Grafting for Quick Hotfix Deployment](/cookbook/grafting-hotfix/) diff --git a/website/pages/es/cookbook/upgrading-a-subgraph.mdx b/website/pages/es/cookbook/upgrading-a-subgraph.mdx deleted file mode 100644 index 9c3c0de77f65..000000000000 --- a/website/pages/es/cookbook/upgrading-a-subgraph.mdx +++ /dev/null @@ -1,156 +0,0 @@ ---- -title: Actualizando un Subgrafo Existente a la Red de The Graph ---- - -## Introducción - -Esta es una guía sobre cómo actualizar tu subgrafo desde el servicio alojado a la red descentralizada de The Graph. 
¡Más de 1,000 subgrafos se han actualizado con éxito a la red de The Graph, incluyendo proyectos como Snapshot, Loopring, Audius, Premia, Livepeer, Uma, Curve, Lido, y muchos más! - -El proceso de actualización es rápido y tus subgrafos se beneficiarán para siempre de la confiabilidad y rendimiento que solo puedes obtener en la red de The Graph. - -### Prerrequisitos - -- You have a subgraph deployed on the hosted service. - -## Actualizando un Subgrafo Existente a la Red de The Graph - - - -If you are logged in to the hosted service, you can access a simple flow to upgrade your subgraphs from [your dashboard](https://thegraph.com/hosted-service/dashboard), or from an individual subgraph page. - -> This process typically takes less than five minutes. - -1. Select the subgraph(s) you want to upgrade. -2. Connect or enter the receiving wallet (the wallet that will become the owner of the subgraph). -3. Click the "Upgrade" button. - -That's it! Your subgraphs will be deployed to Subgraph Studio, and published on The Graph Network. You can access the [Subgraph Studio](https://thegraph.com/studio/) to manage your subgraphs, logging in with the wallet specified during the upgrade process. - -You'll be able to view your subgraphs live on the decentralized network via [Graph Explorer](https://thegraph.com/explorer). - -### What next? - -When your subgraph is upgraded, it will automatically be indexed by the upgrade indexer. If the indexed chain is [fully supported by The Graph Network](/developing/supported-networks), you can add some GRT as "signal", to attract more indexers. It is recommended to curate your subgraph with at least 3,000 GRT to attract 2-3 Indexers for higher quality of service. - -You can start to query your subgraph right away on The Graph Network, once you have generated an API key. - -### Crear una clave API - -Puedes generar una clave API en Subgraph Studio [aquí](https://thegraph.com/studio/apikeys/).
- -![API key creation page](/img/api-image.png) - -You can use this API key to query subgraphs on The Graph Network. All users start on the Free Plan, which includes 100,000 free queries per month. Developers can sign up for the Growth Plan by connecting a credit or debit card, or by depositing GRT to Subgraph Studio billing system. - -> Note: see the [billing documentation](../billing) for more information on plans, and on managing your billing on Subgraph Studio. - -### Asegurando tu clave API - -Se recomienda asegurar la API limitando su uso de dos maneras: - -1. Subgrafos autorizados -2. Dominio autorizado - -You can secure your API key [here](https://thegraph.com/studio/apikeys/). - -![Subgraph lockdown page](/img/subgraph-lockdown.png) - -### Consulta de su subgrafo en la red descentralizada - -Now you can check the indexing status of the Indexers on the network in Graph Explorer (example [here](https://thegraph.com/explorer/subgraphs/Dtj2HicXKpoUjNB7ffdBkMwt3L9Sz3cbENd67AdHu6Vb?view=Indexers&chain=arbitrum-one)). The green line at the top indicates that at the time of posting 7 Indexers successfully indexed that subgraph. Also in the Indexer tab you can see which Indexers picked up your subgraph. - -![Rocket Pool subgraph](/img/rocket-pool-subgraph.png) - -Tan pronto como el primer Indexador haya indexado completamente tu subgrafo, podrás empezar a consultarlo en la red descentralizada. Para recuperar la URL de consulta de tu subgrafo, puedes copiar/pegar haciendo clic en el símbolo que aparece junto a la URL de consulta. Verás algo así: - -`https://gateway.thegraph.com/api/[api-key]/subgraphs/id/S9ihna8D733WTEShJ1KctSTCvY1VJ7gdVwhUujq4Ejo` - -Importante: Asegúrate de reemplazar `[api-key]` con una clave API real generada en la sección anterior. - -Ahora puedes usar esa URL de consulta en tu dapp para enviar tus solicitudes de GraphQL. - -¡Felicitaciones! ¡Ahora eres un pionero de la descentralización! 
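Como referencia, este es un boceto mínimo de cómo una dapp en JavaScript podría construir esa solicitud GraphQL; la clave API y el ID del subgrafo son valores de ejemplo que debes sustituir por los tuyos:

```javascript
// Valores de ejemplo: sustitúyelos por tu clave API real y el ID de tu subgrafo.
const API_KEY = "tu-clave-api";
const SUBGRAPH_ID = "S9ihna8D733WTEShJ1KctSTCvY1VJ7gdVwhUujq4Ejo";

// URL de consulta del gateway, como la copiada desde Subgraph Studio.
const url = `https://gateway.thegraph.com/api/${API_KEY}/subgraphs/id/${SUBGRAPH_ID}`;

// Las solicitudes GraphQL se envían como POST con un cuerpo JSON.
const body = JSON.stringify({
  query: "{ stakers(first: 5) { id } }",
});

// En una dapp real, la consulta se enviaría así:
// fetch(url, {
//   method: "POST",
//   headers: { "Content-Type": "application/json" },
//   body,
// })
//   .then((res) => res.json())
//   .then(({ data }) => console.log(data.stakers));
```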
- -> Nota: Debido a la naturaleza distribuida de la red, podría ser el caso que diferentes Indexadores hayan indexado hasta bloques diferentes. Para recibir solo datos actualizados, puedes especificar el bloque mínimo que un Indexador debe haber indexado para atender tu consulta con el argumento de campo: `{ number_gte: $minBlock }`, como se muestra en el ejemplo a continuación: - -```graphql -{ - stakers(block: { number_gte: 14486109 }) { - id - } -} -``` - -Más información sobre la naturaleza de la red y cómo manejar reorganizaciones se describe en el artículo de documentación [Sistemas Distribuidos](/querying/distributed-systems/). - -## Actualizando un subgrafo en la red - -If you would like to update an existing subgraph on the network, you can do this by deploying a new version of your subgraph to Subgraph Studio using the Graph CLI. - -1. Make changes to your current subgraph. -2. Deploya lo siguiente y especifica la nueva versión en el comando (por ejemplo, v0.0.1, v0.0.2, etc.): - -```sh -graph deploy --studio --version -``` - -3. Test the new version in Subgraph Studio by querying in the playground -4. Publica la nueva versión en The Graph Network. Recuerda que esto requiere gas (como se describe en la sección anterior). - -### Tarifa de Actualización del Propietario: Análisis Profundo - -> Nota: La curación en Arbitrum tiene una curva de vinculación plana. Obtén más información sobre Arbitrum [aquí](/arbitrum/arbitrum-faq/). - -Una actualización requiere que GRT se migre de la antigua versión del subgrafo a la nueva versión. Esto significa que para cada actualización, se creará una nueva curva de vinculación (más información sobre las curvas de vinculación [aquí](/network/curating#bonding-curve-101)). - -La nueva curva de vinculación cobra el impuesto de curación del 2.5% sobre todo el GRT que se está migrando a la nueva versión. El propietario debe pagar el 50% de esto, es decir, el 1.25%.
El otro 1.25% es absorbido por todos los curadores como una tarifa. Este diseño de incentivos está en su lugar para evitar que el propietario de un subgrafo pueda agotar todos los fondos de sus curadores con llamadas de actualización recursivas. Si no hay actividad de curación, tendrás que pagar un mínimo de 100 GRT para señalizar tu propio subgrafo. - -Hagamos un ejemplo, esto es solo el caso si tu subgrafo está siendo curado activamente en: - -- Se señalizan 100.000 GRT mediante la auto-migración en v1 de un subgrafo -- Actualizaciones del propietario a la versión 2. Se migra 100,000 GRT a una nueva curva de vinculación, donde 97,500 GRT se colocan en la nueva curva y 2,500 GRT se queman -- Luego, al propietario se le queman 1250 GRT para pagar la mitad de la tarifa. El propietario debe tener esto en su billetera antes de la actualización; de lo contrario, la actualización no se llevará a cabo. Esto sucede en la misma transacción que la actualización. - -_Aunque este mecanismo está actualmente activo en la red, la comunidad está discutiendo formas de reducir el costo de las actualizaciones para los desarrolladores de subgrafos._ - -### Mantener una Versión Estable de un Subgrafo - -Si estás realizando muchos cambios en tu subgrafo, no es una buena idea actualizarlo continuamente y asumir los costos de la actualización. Mantener una versión estable y consistente de tu subgrafo es crucial, no solo desde la perspectiva de costos, sino también para que los Indexers puedan confiar en sus tiempos de sincronización. Deberías informar a los Indexers cuando planeas una actualización para que los tiempos de sincronización de los Indexers no se vean afectados. Siéntete libre de utilizar el canal [#Indexers](https://discord.gg/JexvtHa7dq) en Discord para notificar a los Indexers cuando estás versionando tus subgrafos. - -Los subgrafos son API abiertas que los desarrolladores externos están aprovechando. 
Las API abiertas deben seguir estándares estrictos para que no rompan las aplicaciones de los desarrolladores externos. En la red de The Graph, un desarrollador de subgrafos debe tener en cuenta a los Indexadores y el tiempo que les lleva sincronizar un nuevo subgrafo, **así como también** a otros desarrolladores que están utilizando sus subgrafos. - -### Actualización de los Metadatos de un subgrafo - -You can update the metadata of your subgraphs without having to publish a new version. The metadata includes the subgraph name, image, description, website URL, source code URL, and categories. Developers can do this by updating their subgraph details in Subgraph Studio where you can edit all applicable fields. - -Asegúrate de que la opción **Actualizar Detalles del Subgrafo en el Explorador** esté marcada y haz clic en **Guardar**. Si esto está marcado, se generará una transacción en cadena que actualiza los detalles del subgrafo en el Explorador sin necesidad de publicar una nueva versión con una implementación diferente. - -## Mejores Prácticas para Deployar un Subgrafo en The Graph Network - -1. Aprovechar un nombre ENS para el Desarrollo de Subgrafos: - -- Configura tu ENS [aqui](https://app.ens.domains/) -- Añade tu nombre ENS a tu configuración [aquí](https://thegraph.com/explorer/settings?view=display-name). - -2. Cuanto más completos estén tus perfiles, más posibilidades tendrás de que tus subgrafos sean indexados y curados. - -## Deprecar un Subgrafo en The Graph Network - -Sigue los pasos [aquí](/managing/transfer-and-deprecate-a-subgraph) para retirar tu subgrafo y eliminarlo de la red de The Graph. - -## Consulta de un Subgrafo + Facturación en The Graph Network - -El servicio alojado se estableció para permitir a los desarrolladores desplegar sus subgrafos sin restricciones. - -On The Graph Network, query fees have to be paid as a core part of the protocol's incentives. 
For more information on subscribing to APIs and paying the query fees, check out billing documentation [here](/billing/). - -## Recursos Adicionales - -¡Si aún estás confundido, no te preocupes! Consulta los siguientes recursos o mira nuestra guía en video sobre cómo actualizar subgrafos a la red descentralizada a continuación: - - - -- [Contratos de la red de the graph](https://github.com/graphprotocol/contracts) -- [Contrato de Curación](https://github.com/graphprotocol/contracts/blob/dev/contracts/curation/Curation.sol) - el contrato subyacente que envuelve el GNS - - Direccion - `0x8fe00a685bcb3b2cc296ff6ffeab10aca4ce1538` -- [Documentación de Subgraph Studio](/deploying/subgraph-studio) diff --git a/website/pages/fr/cookbook/base-testnet.mdx b/website/pages/fr/cookbook/base-testnet.mdx deleted file mode 100644 index b06bcbd77f05..000000000000 --- a/website/pages/fr/cookbook/base-testnet.mdx +++ /dev/null @@ -1,111 +0,0 @@ ---- -title: Construction de subgraphs sur Base ---- - -Ce guide vous montrera rapidement comment initialiser, créer et déployer votre subgraph sur le réseau de test de la base. - -Ce dont vous avez besoin : - -- A Base Sepolia testnet contract address -- Un portefeuille cryptographique (par exemple MetaMask ou Coinbase Wallet) - -## Subgraph Studio - -### 1. Installez la CLI de The Graph - -La CLI Graph (>=v0.41.0) est écrite en JavaScript et vous devrez avoir installé « npm » ou « yarn » pour l'utiliser. - -```sh -# NPM -npm install -g @graphprotocol/graph-cli - -# Yarn -yarn global add @graphprotocol/graph-cli -``` - -### 2. Create your subgraph in Subgraph Studio - -Go to [Subgraph Studio](https://thegraph.com/studio/) and connect your crypto wallet. - -Once connected, click "Create a Subgraph", enter a name for your subgraph and click Create a Subgraph. - -### 3. Initialiser votre subgraph - -> You can find specific commands for your subgraph in Subgraph Studio. 
- -Assurez-vous que le graph-cli est mis à jour vers la dernière version (supérieure à 0.41.0) - -```sh -graph --version -``` - -Initialiser votre subgraph à partir d'un contrat existant. - -```sh -graph init --studio -``` - -Votre nom de subgraph est un identifiant pour votre subgraph. L'outil CLI vous guidera à travers les étapes de la création d'un subgraph, y compris : - -- Protocol: Ethereum -- Subgraph slug: `` -- Répertoire dans lequel créer le subgraph : `` -- Ethereum network: base-sepolia -- Contract address: `` -- Start block (optional) -- Contract name: `` -- Yes/no to indexing events (yes means your subgraph will be bootstrapped with entities in the schema and simple mappings for emitted events) - -### 3. Write your Subgraph - -> If emitted events are the only thing you want to index, then no additional work is required, and you can skip to the next step. - -The previous command creates a scaffold subgraph that you can use as a starting point for building your subgraph. When making changes to the subgraph, you will mainly work with three files: - -- Manifest (subgraph.yaml) - The manifest defines what datasources your subgraphs will index. Make sure to add `base-sepolia` as the network name in the manifest file to deploy your subgraph on Base Sepolia. -- Schema (schema.graphql) - The GraphQL schema defines what data you wish to retrieve from the subgraph. -- Mappages AssemblyScript (mapping.ts) - Il s'agit du code qui traduit les données de vos sources de données vers les entités définies dans le schéma. - -If you want to index additional data, you will need to extend the manifest, schema and mappings. - -For more information on how to write your subgraph, see [Creating a Subgraph](/developing/creating-a-subgraph). - -### 4. Deploy to Subgraph Studio - -Before you can deploy your subgraph, you will need to authenticate with Subgraph Studio.
You can do this by running the following command: - -Authenticate the subgraph on studio - -``` -graph auth --studio -``` - -Next, enter your subgraph's directory. - -``` - cd -``` - -Build your subgraph with the following command: - -``` -graph codegen && graph build -``` - -Finally, you can deploy your subgraph using this command: - -``` -graph deploy --studio -``` - -### 5. Query your subgraph - -Once your subgraph is deployed, you can query it from your dapp using the `Development Query URL` in Subgraph Studio. - -Note - The Studio API is rate-limited, so it should preferably be used for development and testing. - -To learn more about querying data from your subgraph, see the [Querying a Subgraph](/querying/querying-the-graph) page. diff --git a/website/pages/fr/cookbook/grafting-hotfix.mdx b/website/pages/fr/cookbook/grafting-hotfix.mdx new file mode 100644 index 000000000000..4be0a0b07790 --- /dev/null +++ b/website/pages/fr/cookbook/grafting-hotfix.mdx @@ -0,0 +1,186 @@ +--- +title: Subgraph Best Practice 6 - Use Grafting for Quick Hotfix Deployment +--- + +## TLDR + +Grafting is a powerful feature in subgraph development that allows you to build and deploy new subgraphs while reusing the indexed data from existing ones. + +### Overview + +This feature enables quick deployment of hotfixes for critical issues, eliminating the need to re-index the entire subgraph from scratch. By preserving historical data, grafting minimizes downtime and ensures continuity in data services. + +## Benefits of Grafting for Hotfixes + +1. **Rapid Deployment** + + - **Minimize Downtime**: When a subgraph encounters a critical error and stops indexing, grafting enables you to deploy a fix immediately without waiting for re-indexing. + - **Immediate Recovery**: The new subgraph continues from the last indexed block, ensuring that data services remain uninterrupted. + +2.
**Data Preservation** + + - **Reuse Historical Data**: Grafting copies the existing data from the base subgraph, so you don’t lose valuable historical records. + - **Consistency**: Maintains data continuity, which is crucial for applications relying on consistent historical data. + +3. **Efficiency** + - **Save Time and Resources**: Avoids the computational overhead of re-indexing large datasets. + - **Focus on Fixes**: Allows developers to concentrate on resolving issues rather than managing data recovery. + +## Best Practices When Using Grafting for Hotfixes + +1. **Initial Deployment Without Grafting** + + - **Start Clean**: Always deploy your initial subgraph without grafting to ensure that it’s stable and functions as expected. + - **Test Thoroughly**: Validate the subgraph’s performance to minimize the need for future hotfixes. + +2. **Implementing the Hotfix with Grafting** + + - **Identify the Issue**: When a critical error occurs, determine the block number of the last successfully indexed event. + - **Create a New Subgraph**: Develop a new subgraph that includes the hotfix. + - **Configure Grafting**: Use grafting to copy data up to the identified block number from the failed subgraph. + - **Deploy Quickly**: Publish the grafted subgraph to restore service as soon as possible. + +3. **Post-Hotfix Actions** + + - **Monitor Performance**: Ensure the grafted subgraph is indexing correctly and the hotfix resolves the issue. + - **Republish Without Grafting**: Once stable, deploy a new version of the subgraph without grafting for long-term maintenance. + > Note: Relying on grafting indefinitely is not recommended as it can complicate future updates and maintenance. + - **Update References**: Redirect any services or applications to use the new, non-grafted subgraph. + +4. **Important Considerations** + - **Careful Block Selection**: Choose the graft block number carefully to prevent data loss. 
+ - **Tip**: Use the block number of the last correctly processed event. + - **Use Deployment ID**: Ensure you reference the Deployment ID of the base subgraph, not the Subgraph ID. + - **Note**: The Deployment ID is the unique identifier for a specific subgraph deployment. + - **Feature Declaration**: Remember to declare grafting in the subgraph manifest under features. + +## Example: Deploying a Hotfix with Grafting + +Suppose you have a subgraph tracking a smart contract that has stopped indexing due to a critical error. Here’s how you can use grafting to deploy a hotfix. + +1. **Failed Subgraph Manifest (subgraph.yaml)** + + ```yaml + specVersion: 1.0.0 + schema: + file: ./schema.graphql + dataSources: + - kind: ethereum/contract + name: OldSmartContract + network: sepolia + source: + address: '0xOldContractAddress' + abi: Lock + startBlock: 5000000 + mapping: + kind: ethereum/events + apiVersion: 0.0.7 + language: wasm/assemblyscript + entities: + - Withdrawal + abis: + - name: Lock + file: ./abis/OldLock.json + eventHandlers: + - event: Withdrawal(uint256,uint256) + handler: handleOldWithdrawal + file: ./src/old-lock.ts + ``` + +2. 
**New Grafted Subgraph Manifest (subgraph.yaml)** + ```yaml + specVersion: 1.0.0 + schema: + file: ./schema.graphql + dataSources: + - kind: ethereum/contract + name: NewSmartContract + network: sepolia + source: + address: '0xNewContractAddress' + abi: Lock + startBlock: 6000001 # Block after the last indexed block + mapping: + kind: ethereum/events + apiVersion: 0.0.7 + language: wasm/assemblyscript + entities: + - Withdrawal + abis: + - name: Lock + file: ./abis/Lock.json + eventHandlers: + - event: Withdrawal(uint256,uint256) + handler: handleWithdrawal + file: ./src/lock.ts + features: + - grafting + graft: + base: QmBaseDeploymentID # Deployment ID of the failed subgraph + block: 6000000 # Last successfully indexed block + ``` + +**Explanation:** + +- **Data Source Update**: The new subgraph points to 0xNewContractAddress, which may be a fixed version of the smart contract. +- **Start Block**: Set to one block after the last successfully indexed block to avoid reprocessing the error. +- **Grafting Configuration**: + - **base**: Deployment ID of the failed subgraph. + - **block**: Block number where grafting should begin. + +3. **Deployment Steps** + + - **Update the Code**: Implement the hotfix in your mapping scripts (e.g., handleWithdrawal). + - **Adjust the Manifest**: As shown above, update the `subgraph.yaml` with grafting configurations. + - **Deploy the Subgraph**: + - Authenticate with the Graph CLI. + - Deploy the new subgraph using `graph deploy`. + +4. **Post-Deployment** + - **Verify Indexing**: Check that the subgraph is indexing correctly from the graft point. + - **Monitor Data**: Ensure that new data is being captured and the hotfix is effective. + - **Plan for Republish**: Schedule the deployment of a non-grafted version for long-term stability. 
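In shell form, the deployment steps above might look like the following sketch; the deploy key, slug, and exact flags are placeholders and can vary with your Graph CLI version:

```sh
# Authenticate with Subgraph Studio (deploy key is a placeholder)
graph auth --studio <DEPLOY_KEY>

# Regenerate types and build the subgraph with the grafting configuration
graph codegen && graph build

# Deploy the grafted hotfix version
graph deploy --studio <SUBGRAPH_SLUG>
```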
+ +## Warnings and Cautions + +While grafting is a powerful tool for deploying hotfixes quickly, there are specific scenarios where it should be avoided to maintain data integrity and ensure optimal performance. + +- **Incompatible Schema Changes**: If your hotfix requires altering the type of existing fields or removing fields from your schema, grafting is not suitable. Grafting expects the new subgraph’s schema to be compatible with the base subgraph’s schema. Incompatible changes can lead to data inconsistencies and errors because the existing data won’t align with the new schema. +- **Significant Mapping Logic Overhauls**: When the hotfix involves substantial modifications to your mapping logic—such as changing how events are processed or altering handler functions—grafting may not function correctly. The new logic might not be compatible with the data processed under the old logic, leading to incorrect data or failed indexing. +- **Deployments to The Graph Network**: Grafting is not recommended for subgraphs intended for The Graph’s decentralized network (mainnet). It can complicate indexing and may not be fully supported by all Indexers, potentially causing unexpected behavior or increased costs. For mainnet deployments, it’s safer to re-index the subgraph from scratch to ensure full compatibility and reliability. + +### Risk Management + +- **Data Integrity**: Incorrect block numbers can lead to data loss or duplication. +- **Testing**: Always test grafting in a development environment before deploying to production. + +## Conclusion + +Grafting is an effective strategy for deploying hotfixes in subgraph development, enabling you to: + +- **Quickly Recover** from critical errors without re-indexing. +- **Preserve Historical Data**, maintaining continuity for applications and users. +- **Ensure Service Availability** by minimizing downtime during critical fixes. 
+ +However, it’s important to use grafting judiciously and follow best practices to mitigate risks. After stabilizing your subgraph with the hotfix, plan to deploy a non-grafted version to ensure long-term maintainability. + +## Additional Resources + +- **[Grafting Documentation](/cookbook/grafting/)**: Replace a Contract and Keep its History With Grafting +- **[Understanding Deployment IDs](/querying/querying-by-subgraph-id-vs-deployment-id/)**: Learn the difference between Deployment ID and Subgraph ID. + +By incorporating grafting into your subgraph development workflow, you can enhance your ability to respond to issues swiftly, ensuring that your data services remain robust and reliable. + +## Subgraph Best Practices 1-6 + +1. [Improve Query Speed with Subgraph Pruning](/cookbook/pruning/) + +2. [Improve Indexing and Query Responsiveness by Using @derivedFrom](/cookbook/derivedfrom/) + +3. [Improve Indexing and Query Performance by Using Immutable Entities and Bytes as IDs](/cookbook/immutable-entities-bytes-as-ids/) + +4. [Improve Indexing Speed by Avoiding `eth_calls`](/cookbook/avoid-eth-calls/) + +5. [Simplify and Optimize with Timeseries and Aggregations](/cookbook/timeseries/) + +6. [Use Grafting for Quick Hotfix Deployment](/cookbook/grafting-hotfix/) diff --git a/website/pages/fr/cookbook/timeseries.mdx b/website/pages/fr/cookbook/timeseries.mdx new file mode 100644 index 000000000000..88ee70005a6e --- /dev/null +++ b/website/pages/fr/cookbook/timeseries.mdx @@ -0,0 +1,194 @@ +--- +title: Subgraph Best Practice 5 - Simplify and Optimize with Timeseries and Aggregations +--- + +## TLDR + +Leveraging the new time-series and aggregations feature in subgraphs can significantly enhance both indexing speed and query performance. + +## Overview + +Timeseries and aggregations reduce data processing overhead and accelerate queries by offloading aggregation computations to the database and simplifying mapping code. 
This approach is particularly effective when handling large volumes of time-based data. + +## Benefits of Timeseries and Aggregations + +1. Improved Indexing Time + +- Less Data to Load: Mappings handle less data since raw data points are stored as immutable timeseries entities. +- Database-Managed Aggregations: Aggregations are automatically computed by the database, reducing the workload on the mappings. + +2. Simplified Mapping Code + +- No Manual Calculations: Developers no longer need to write complex aggregation logic in mappings. +- Reduced Complexity: Simplifies code maintenance and minimizes the potential for errors. + +3. Dramatically Faster Queries + +- Immutable Data: All timeseries data is immutable, enabling efficient storage and retrieval. +- Efficient Data Separation: Aggregates are stored separately from raw timeseries data, allowing queries to process significantly less data—often several orders of magnitude less. + +### Important Considerations + +- Immutable Data: Timeseries data cannot be altered once written, ensuring data integrity and simplifying indexing. +- Automatic ID and Timestamp Management: id and timestamp fields are automatically managed by graph-node, reducing potential errors. +- Efficient Data Storage: By separating raw data from aggregates, storage is optimized, and queries run faster. + +## How to Implement Timeseries and Aggregations + +### Defining Timeseries Entities + +A timeseries entity represents raw data points collected over time. It is defined with the `@entity(timeseries: true)` annotation. Key requirements: + +- Immutable: Timeseries entities are always immutable. +- Mandatory Fields: + - `id`: Must be of type `Int8!` and is auto-incremented. + - `timestamp`: Must be of type `Timestamp!` and is automatically set to the block timestamp. + +Example: + +```graphql +type Data @entity(timeseries: true) { + id: Int8! + timestamp: Timestamp! + price: BigDecimal! 
+} +``` + +### Defining Aggregation Entities + +An aggregation entity computes aggregated values from a timeseries source. It is defined with the `@aggregation` annotation. Key components: + +- Annotation Arguments: + - `intervals`: Specifies time intervals (e.g., `["hour", "day"]`). + +Example: + +```graphql +type Stats @aggregation(intervals: ["hour", "day"], source: "Data") { + id: Int8! + timestamp: Timestamp! + sum: BigDecimal! @aggregate(fn: "sum", arg: "price") +} +``` + +In this example, Stats aggregates the price field from Data over hourly and daily intervals, computing the sum. + +### Querying Aggregated Data + +Aggregations are exposed via query fields that allow filtering and retrieval based on dimensions and time intervals. + +Example: + +```graphql +{ + tokenStats( + interval: "hour" + where: { token: "0x1234567890abcdef", timestamp_gte: "1704164640000000", timestamp_lt: "1704251040000000" } + ) { + id + timestamp + token { + id + } + totalVolume + priceUSD + count + } +} +``` + +### Using Dimensions in Aggregations + +Dimensions are non-aggregated fields used to group data points. They enable aggregations based on specific criteria, such as a token in a financial application. + +Example: + +### Timeseries Entity + +```graphql +type TokenData @entity(timeseries: true) { + id: Int8! + timestamp: Timestamp! + token: Token! + amount: BigDecimal! + priceUSD: BigDecimal! +} +``` + +### Aggregation Entity with Dimension + +```graphql +type TokenStats @aggregation(intervals: ["hour", "day"], source: "TokenData") { + id: Int8! + timestamp: Timestamp! + token: Token! + totalVolume: BigDecimal! @aggregate(fn: "sum", arg: "amount") + priceUSD: BigDecimal! @aggregate(fn: "last", arg: "priceUSD") + count: Int8! @aggregate(fn: "count", cumulative: true) +} +``` + +- Dimension Field: token groups the data, so aggregates are computed per token. +- Aggregates: + - totalVolume: Sum of amount. + - priceUSD: Last recorded priceUSD. 
+ - count: Cumulative count of records. + +### Aggregation Functions and Expressions + +Supported aggregation functions: + +- sum +- count +- min +- max +- first +- last + +### The arg in @aggregate can be + +- A field name from the timeseries entity. +- An expression using fields and constants. + +### Examples of Aggregation Expressions + +- Sum Token Value: @aggregate(fn: "sum", arg: "priceUSD \* amount") +- Maximum Positive Amount: @aggregate(fn: "max", arg: "greatest(amount0, amount1, 0)") +- Conditional Sum: @aggregate(fn: "sum", arg: "case when amount0 > amount1 then amount0 else 0 end") + +Supported operators and functions include basic arithmetic (+, -, \*, /), comparison operators, logical operators (and, or, not), and SQL functions like greatest, least, coalesce, etc. + +### Query Parameters + +- interval: Specifies the time interval (e.g., "hour"). +- where: Filters based on dimensions and timestamp ranges. +- timestamp_gte / timestamp_lt: Filters for start and end times (microseconds since epoch). + +### Notes + +- Sorting: Results are automatically sorted by timestamp and id in descending order. +- Current Data: An optional current argument can include the current, partially filled interval. + +### Conclusion + +Implementing timeseries and aggregations in subgraphs is a best practice for projects dealing with time-based data. This approach: + +- Enhances Performance: Speeds up indexing and querying by reducing data processing overhead. +- Simplifies Development: Eliminates the need for manual aggregation logic in mappings. +- Scales Efficiently: Handles large volumes of data without compromising on speed or responsiveness. + +By adopting this pattern, developers can build more efficient and scalable subgraphs, providing faster and more reliable data access to end-users.
To learn more about implementing timeseries and aggregations, refer to the [Timeseries and Aggregations Readme](https://github.com/graphprotocol/graph-node/blob/master/docs/aggregations.md) and consider experimenting with this feature in your subgraphs.
+
+## Subgraph Best Practices 1-6
+
+1. [Improve Query Speed with Subgraph Pruning](/cookbook/pruning/)
+
+2. [Improve Indexing and Query Responsiveness by Using @derivedFrom](/cookbook/derivedfrom/)
+
+3. [Improve Indexing and Query Performance by Using Immutable Entities and Bytes as IDs](/cookbook/immutable-entities-bytes-as-ids/)
+
+4. [Improve Indexing Speed by Avoiding `eth_calls`](/cookbook/avoid-eth-calls/)
+
+5. [Simplify and Optimize with Timeseries and Aggregations](/cookbook/timeseries/)
+
+6. [Use Grafting for Quick Hotfix Deployment](/cookbook/grafting-hotfix/) diff --git a/website/pages/fr/cookbook/upgrading-a-subgraph.mdx b/website/pages/fr/cookbook/upgrading-a-subgraph.mdx deleted file mode 100644 index b275c5920354..000000000000 --- a/website/pages/fr/cookbook/upgrading-a-subgraph.mdx +++ /dev/null @@ -1,156 +0,0 @@
----
-title: Upgrading an Existing Subgraph to The Graph Network
----
-
-## Introduction
-
-This is a guide on how to upgrade your subgraph from the hosted service to The Graph's decentralized network. More than 1,000 subgraphs have successfully upgraded to The Graph Network, including projects such as Snapshot, Loopring, Audius, Premia, Livepeer, Uma, Curve, Lido, and many more!
-
-The upgrade process is quick, and your subgraphs will forever benefit from the reliability and performance that you can only get on The Graph Network.
-
-### Prerequisites
-
-- You have a subgraph deployed on the hosted service.
-
-## Upgrading an Existing Subgraph to The Graph Network
-
-
-
-If you are signed in to the hosted service, you can access a simple flow to upgrade your subgraphs from [your dashboard](https://thegraph.com/hosted-service/dashboard) or from an individual subgraph page.
-
-> This process typically takes less than five minutes.
-
-1. Select the subgraph(s) you want to upgrade.
-2. Connect or enter the receiving wallet (the wallet that will become the owner of the subgraph).
-3. Click the "Upgrade" button.
-
-That's it! Your subgraphs will be deployed to Subgraph Studio, and published on The Graph Network. You can access the [Subgraph Studio](https://thegraph.com/studio/) to manage your subgraphs, logging in with the wallet specified during the upgrade process.
-
-You will be able to view your subgraphs live on the decentralized network via [Graph Explorer](https://thegraph.com/explorer).
-
-### What's next?
-
-When your subgraph is upgraded, it will automatically be indexed by the upgrade Indexer. If the indexed chain is [fully supported by The Graph Network](/developing/supported-networks), you can add some GRT as "signal" to attract more Indexers. It is recommended to curate your subgraph with at least 3,000 GRT to attract 2-3 Indexers for a higher quality of service.
-
-You can start querying your subgraph right away on The Graph Network, once you have generated an API key.
-
-### Create an API key
-
-You can generate an API key in Subgraph Studio [here](https://thegraph.com/studio/apikeys/).
-
-![API key creation page](/img/api-image.png)
-
-You can use this API key to query subgraphs on The Graph Network. All users start on the Free Plan, which includes 100,000 free queries per month.
Developers can sign up for the Growth Plan by connecting a credit or debit card, or by depositing GRT into the Subgraph Studio billing system.
-
-> Note: see the [billing documentation](../facturation) for more information on plans, and on managing your billing in Subgraph Studio.
-
-### Securing your API key
-
-It is recommended that you secure the API by limiting its usage in two ways:
-
-1. Authorized Subgraphs
-2. Authorized Domain
-
-You can secure your API key [here](https://thegraph.com/studio/apikeys/).
-
-![Subgraph lockdown page](/img/subgraph-lockdown.png)
-
-### Querying your subgraph on the decentralized network
-
-Now you can check the indexing status of the Indexers on the network in Graph Explorer (example [here](https://thegraph.com/explorer/subgraphs/Dtj2HicXKpoUjNB7ffdBkMwt3L9Sz3cbENd67AdHu6Vb?view=Indexers&chain=arbitrum-one)). The green line at the top indicates that at the time of posting 7 Indexers successfully indexed that subgraph. Also in the Indexer tab you can see which Indexers picked up your subgraph.
-
-![Rocket Pool subgraph](/img/rocket-pool-subgraph.png)
-
-As soon as the first Indexer has fully indexed your subgraph, you can start querying the subgraph on the decentralized network. To retrieve the query URL for your subgraph, you can copy/paste it by clicking on the symbol next to the query URL. You will see something like this:
-
-`https://gateway.thegraph.com/api/[api-key]/subgraphs/id/S9ihna8D733WTEShJ1KctSTCvY1VJ7gdVwhUujq4Ejo`
-
-Important: Make sure to replace `[api-key]` with an actual API key generated in the section above.
-
-You can now use this query URL in your dapp to send your GraphQL queries.
-
-Congratulations! You are now a pioneer of decentralization!
-
-> Note: Due to the distributed nature of the network, it may be that different Indexers have indexed different blocks. In order to only receive fresh data, you can specify the minimum block an Indexer must have indexed in order to serve your query with the block field argument: `{ number_gte: $minBlock }`, as shown in the example below:
-
-```graphql
-{
-  stakers(block: { number_gte: 14486109 }) {
-    id
-  }
-}
-```
-
-More information about the nature of the network and how to handle re-orgs is described in the [Distributed Systems](/querying/distributed-systems/) documentation article.
-
-## Updating a subgraph on the network
-
-If you would like to update an existing subgraph on the network, you can do this by deploying a new version of your subgraph to Subgraph Studio using the Graph CLI.
-
-1. Make changes to your current subgraph.
-2. Deploy the following and specify the new version in the command (e.g. v0.0.1, v0.0.2, etc.):
-
-```sh
-graph deploy --studio --version
-```
-
-3. Test the new version in Subgraph Studio by querying in the playground.
-4. Publish the new version on The Graph Network. Remember that this requires gas (as described in the section above).
-
-### Owner Update Fee: Deep Dive
-
-> Note: Curation on Arbitrum has a flat bonding curve. Learn more about Arbitrum [here](/arbitrum/arbitrum-faq/).
-
-An update requires GRT to be migrated from the old version of the subgraph to the new version. This means that for every update, a new bonding curve will be created (more on bonding curves [here](/network/curating#bonding-curve-101)).
-
-The new bonding curve charges the 2.5% curation tax on all GRT being migrated to the new version. The owner must pay 50% of this, i.e. 1.25%.
The other 1.25% is absorbed by all the curators as a fee. This incentive design is in place to prevent an owner of a subgraph from being able to drain all their curators' funds with recursive update calls. If there is no curation activity, you will have to pay a minimum of 100 GRT in order to signal your own subgraph.
-
-Let's make an example — this is only the case if your subgraph is being actively curated on:
-
-- 100,000 GRT is signaled using auto-migrate on v1 of a subgraph
-- The owner updates to v2. 100,000 GRT is migrated to a new bonding curve, where 97,500 GRT get put into the new curve and 2,500 GRT is burned
-- The owner then has 1,250 GRT burned to pay for half the fee. The owner must have this in their wallet before the update, otherwise the update will not succeed. This happens in the same transaction as the update.
-
-_While this mechanism is currently live on the network, the community is currently discussing ways to reduce the cost of updates for subgraph developers._
-
-### Maintaining a Stable Version of a Subgraph
-
-If you're making a lot of changes to your subgraph, it is not a good idea to continuously update it and front the update costs. Maintaining a stable and consistent version of your subgraph is critical, not only from the cost perspective but also so that Indexers can feel confident in their syncing times. Indexers should be flagged when you plan for an update so that Indexer syncing times do not get impacted. Feel free to leverage the [#Indexers channel](https://discord.gg/JexvtHa7dq) on Discord to let Indexers know when you're versioning your subgraphs.
-
-Subgraphs are open APIs that external developers are leveraging.
Open APIs need to follow strict standards so that they do not break external developers' applications. In The Graph Network, a subgraph developer must consider Indexers and how long it takes them to sync a new subgraph **as well as** other developers who are using their subgraphs.
-
-### Updating the Metadata of a Subgraph
-
-You can update the metadata of your subgraphs without having to publish a new version. The metadata includes the subgraph name, image, description, website URL, source code URL, and categories. Developers can do this by updating their subgraph details in Subgraph Studio, where you can edit all applicable fields.
-
-Make sure **Update Subgraph Details in Explorer** is checked and click on **Save**. If this is checked, an on-chain transaction will be generated that updates subgraph details in the Explorer without having to publish a new version with a new deployment.
-
-## Best Practices for Deploying a Subgraph to The Graph Network
-
-1. Leveraging an ENS name for subgraph development:
-
-- Set up your ENS [here](https://app.ens.domains/)
-- Add your ENS name to your settings [here](https://thegraph.com/explorer/settings?view=display-name).
-
-2. The more filled out your profiles are, the better the chances for your subgraphs to be indexed and curated.
-
-## Deprecating a Subgraph on The Graph Network
-
-Follow the steps [here](/managing/transfer-and-deprecate-a-subgraph) to deprecate your subgraph and remove it from The Graph Network.
-
-## Querying a Subgraph + Billing on The Graph Network
-
-The hosted service was set up to allow developers to deploy their subgraphs without any restrictions.
-
-On The Graph Network, query fees have to be paid as a core part of the protocol's incentives. For more information on subscribing to APIs and paying the query fees, check out the billing documentation [here](/billing/).
-
-## Additional Resources
-
-If you're still confused, fear not! Check out the following resources or watch our video guide on upgrading subgraphs to the decentralized network below:
-
-
-
-- [The Graph Network Contracts](https://github.com/graphprotocol/contracts)
-- [Curation Contract](https://github.com/graphprotocol/contracts/blob/dev/contracts/curation/Curation.sol) - the underlying contract that the GNS wraps around
-  - Address - `0x8fe00a685bcb3b2cc296ff6ffeab10aca4ce1538`
-- [Subgraph Studio documentation](/deploying/subgraph-studio) diff --git a/website/pages/ha/cookbook/base-testnet.mdx b/website/pages/ha/cookbook/base-testnet.mdx deleted file mode 100644 index b1e3a4fc7c6d..000000000000 --- a/website/pages/ha/cookbook/base-testnet.mdx +++ /dev/null @@ -1,112 +0,0 @@
----
-title: Building Subgraphs on Base
----
-
-This guide will quickly take you through how to initialize, create, and deploy your subgraph on Base testnet.
-
-What you'll need:
-
-- A Base testnet contract address
-- A crypto wallet (e.g. MetaMask or Coinbase Wallet)
-
-## Subgraph Studio
-
-### 1. Install the Graph CLI
-
-The Graph CLI (>=v0.41.0) is written in JavaScript and you will need to have either `npm` or `yarn` installed to use it.
-
-```sh
-# NPM
-npm install -g @graphprotocol/graph-cli
-
-# Yarn
-yarn global add @graphprotocol/graph-cli
-```
-
-### 2. Create your subgraph in the Subgraph Studio
-
-Go to the [Subgraph Studio](https://thegraph.com/studio/) and connect your crypto wallet.
-
-Once connected, click "Create a Subgraph" and enter a name for your subgraph.
-
-Select "Base (testnet)" as the indexed blockchain and click Create Subgraph.
-
-### 3. 
Initialize your Subgraph
-
-> You can find specific commands for your subgraph in the Subgraph Studio.
-
-Make sure that graph-cli is updated to the latest version (above 0.41.0):
-
-```sh
-graph --version
-```
-
-Initialize your subgraph from an existing contract.
-
-```sh
-graph init --studio
-```
-
-Your subgraph slug is an identifier for your subgraph. The CLI tool will walk you through the steps for creating a subgraph, including:
-
-- Protocol: ethereum
-- Subgraph slug: ``
-- Directory to create the subgraph in: ``
-- Ethereum network: base-testnet
-- Contract address: ``
-- Start block (optional)
-- Contract name: ``
-- Yes/no to indexing events (yes means your subgraph will be bootstrapped with entities in the schema and simple mappings for emitted events)
-
-### 3. Write your Subgraph
-
-> If emitted events are the only thing you want to index, then no additional work is required, and you can skip to the next step.
-
-The previous command creates a scaffold subgraph that you can use as a starting point for building your subgraph. When making changes to the subgraph, you will mainly work with three files:
-
-- Manifest (subgraph.yaml) - The manifest defines what data sources your subgraphs will index. Make sure to add `base-testnet` as the network name in the manifest file to deploy your subgraph on Base testnet.
-- Schema (schema.graphql) - The GraphQL schema defines what data you wish to retrieve from the subgraph.
-- AssemblyScript Mappings (mapping.ts) - This is the code that translates data from your data sources to the entities defined in the schema.
-
-If you want to index additional data, you will need to extend the manifest, schema, and mappings.
-
-For more information on how to write your subgraph, see [Creating a Subgraph](/developing/creating-a-subgraph).
-
-### 4. Deploy to the Subgraph Studio
-
-Before you can deploy your subgraph, you will need to authenticate with the Subgraph Studio.
You can do this by running the following command:
-
-Authenticate the subgraph in Studio:
-
-```
-graph auth --studio
-```
-
-Next, enter your subgraph's directory.
-
-```
-cd
-```
-
-Build your subgraph with the following command:
-
-```
-graph codegen && graph build
-```
-
-Finally, you can deploy your subgraph using this command:
-
-```
-graph deploy --studio
-```
-
-### 5. Query your subgraph
-
-Once your subgraph is deployed, you can query it from your dapp using the `Development Query URL` in the Subgraph Studio.
-
-Note: The Studio API is rate-limited, so it should preferably be used for development and testing.
-
-To learn more about querying data from your subgraph, see the [Querying a Subgraph](/querying/querying-the-graph) page. diff --git a/website/pages/ha/cookbook/grafting-hotfix.mdx b/website/pages/ha/cookbook/grafting-hotfix.mdx new file mode 100644 index 000000000000..4be0a0b07790 --- /dev/null +++ b/website/pages/ha/cookbook/grafting-hotfix.mdx @@ -0,0 +1,186 @@
+---
+title: Subgraph Best Practice 6 - Use Grafting for Quick Hotfix Deployment
+---
+
+## TLDR
+
+Grafting is a powerful feature in subgraph development that allows you to build and deploy new subgraphs while reusing the indexed data from existing ones.
+
+### Overview
+
+This feature enables quick deployment of hotfixes for critical issues, eliminating the need to re-index the entire subgraph from scratch. By preserving historical data, grafting minimizes downtime and ensures continuity in data services.
+
+## Benefits of Grafting for Hotfixes
+
+1. **Rapid Deployment**
+
+   - **Minimize Downtime**: When a subgraph encounters a critical error and stops indexing, grafting enables you to deploy a fix immediately without waiting for re-indexing.
+   - **Immediate Recovery**: The new subgraph continues from the last indexed block, ensuring that data services remain uninterrupted.
+
+2. 
**Data Preservation** + + - **Reuse Historical Data**: Grafting copies the existing data from the base subgraph, so you don’t lose valuable historical records. + - **Consistency**: Maintains data continuity, which is crucial for applications relying on consistent historical data. + +3. **Efficiency** + - **Save Time and Resources**: Avoids the computational overhead of re-indexing large datasets. + - **Focus on Fixes**: Allows developers to concentrate on resolving issues rather than managing data recovery. + +## Best Practices When Using Grafting for Hotfixes + +1. **Initial Deployment Without Grafting** + + - **Start Clean**: Always deploy your initial subgraph without grafting to ensure that it’s stable and functions as expected. + - **Test Thoroughly**: Validate the subgraph’s performance to minimize the need for future hotfixes. + +2. **Implementing the Hotfix with Grafting** + + - **Identify the Issue**: When a critical error occurs, determine the block number of the last successfully indexed event. + - **Create a New Subgraph**: Develop a new subgraph that includes the hotfix. + - **Configure Grafting**: Use grafting to copy data up to the identified block number from the failed subgraph. + - **Deploy Quickly**: Publish the grafted subgraph to restore service as soon as possible. + +3. **Post-Hotfix Actions** + + - **Monitor Performance**: Ensure the grafted subgraph is indexing correctly and the hotfix resolves the issue. + - **Republish Without Grafting**: Once stable, deploy a new version of the subgraph without grafting for long-term maintenance. + > Note: Relying on grafting indefinitely is not recommended as it can complicate future updates and maintenance. + - **Update References**: Redirect any services or applications to use the new, non-grafted subgraph. + +4. **Important Considerations** + - **Careful Block Selection**: Choose the graft block number carefully to prevent data loss. 
+ - **Tip**: Use the block number of the last correctly processed event. + - **Use Deployment ID**: Ensure you reference the Deployment ID of the base subgraph, not the Subgraph ID. + - **Note**: The Deployment ID is the unique identifier for a specific subgraph deployment. + - **Feature Declaration**: Remember to declare grafting in the subgraph manifest under features. + +## Example: Deploying a Hotfix with Grafting + +Suppose you have a subgraph tracking a smart contract that has stopped indexing due to a critical error. Here’s how you can use grafting to deploy a hotfix. + +1. **Failed Subgraph Manifest (subgraph.yaml)** + + ```yaml + specVersion: 1.0.0 + schema: + file: ./schema.graphql + dataSources: + - kind: ethereum/contract + name: OldSmartContract + network: sepolia + source: + address: '0xOldContractAddress' + abi: Lock + startBlock: 5000000 + mapping: + kind: ethereum/events + apiVersion: 0.0.7 + language: wasm/assemblyscript + entities: + - Withdrawal + abis: + - name: Lock + file: ./abis/OldLock.json + eventHandlers: + - event: Withdrawal(uint256,uint256) + handler: handleOldWithdrawal + file: ./src/old-lock.ts + ``` + +2. 
**New Grafted Subgraph Manifest (subgraph.yaml)** + ```yaml + specVersion: 1.0.0 + schema: + file: ./schema.graphql + dataSources: + - kind: ethereum/contract + name: NewSmartContract + network: sepolia + source: + address: '0xNewContractAddress' + abi: Lock + startBlock: 6000001 # Block after the last indexed block + mapping: + kind: ethereum/events + apiVersion: 0.0.7 + language: wasm/assemblyscript + entities: + - Withdrawal + abis: + - name: Lock + file: ./abis/Lock.json + eventHandlers: + - event: Withdrawal(uint256,uint256) + handler: handleWithdrawal + file: ./src/lock.ts + features: + - grafting + graft: + base: QmBaseDeploymentID # Deployment ID of the failed subgraph + block: 6000000 # Last successfully indexed block + ``` + +**Explanation:** + +- **Data Source Update**: The new subgraph points to 0xNewContractAddress, which may be a fixed version of the smart contract. +- **Start Block**: Set to one block after the last successfully indexed block to avoid reprocessing the error. +- **Grafting Configuration**: + - **base**: Deployment ID of the failed subgraph. + - **block**: Block number where grafting should begin. + +3. **Deployment Steps** + + - **Update the Code**: Implement the hotfix in your mapping scripts (e.g., handleWithdrawal). + - **Adjust the Manifest**: As shown above, update the `subgraph.yaml` with grafting configurations. + - **Deploy the Subgraph**: + - Authenticate with the Graph CLI. + - Deploy the new subgraph using `graph deploy`. + +4. **Post-Deployment** + - **Verify Indexing**: Check that the subgraph is indexing correctly from the graft point. + - **Monitor Data**: Ensure that new data is being captured and the hotfix is effective. + - **Plan for Republish**: Schedule the deployment of a non-grafted version for long-term stability. 
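Both manifests above reference a `Withdrawal` entity. For grafting to work, the new subgraph's `schema.graphql` must remain compatible with the base subgraph's, so a minimal schema shared by both versions might look like the following sketch (the field names are illustrative — they are not given in the original example):

```graphql
type Withdrawal @entity(immutable: true) {
  id: Bytes!
  # The two uint256 arguments of Withdrawal(uint256,uint256)
  amount: BigInt!
  when: BigInt!
  blockNumber: BigInt!
  transactionHash: Bytes!
}
```

Keeping this file identical between the failed and grafted versions avoids the incompatible-schema pitfalls described below under Warnings and Cautions.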
+ +## Warnings and Cautions + +While grafting is a powerful tool for deploying hotfixes quickly, there are specific scenarios where it should be avoided to maintain data integrity and ensure optimal performance. + +- **Incompatible Schema Changes**: If your hotfix requires altering the type of existing fields or removing fields from your schema, grafting is not suitable. Grafting expects the new subgraph’s schema to be compatible with the base subgraph’s schema. Incompatible changes can lead to data inconsistencies and errors because the existing data won’t align with the new schema. +- **Significant Mapping Logic Overhauls**: When the hotfix involves substantial modifications to your mapping logic—such as changing how events are processed or altering handler functions—grafting may not function correctly. The new logic might not be compatible with the data processed under the old logic, leading to incorrect data or failed indexing. +- **Deployments to The Graph Network**: Grafting is not recommended for subgraphs intended for The Graph’s decentralized network (mainnet). It can complicate indexing and may not be fully supported by all Indexers, potentially causing unexpected behavior or increased costs. For mainnet deployments, it’s safer to re-index the subgraph from scratch to ensure full compatibility and reliability. + +### Risk Management + +- **Data Integrity**: Incorrect block numbers can lead to data loss or duplication. +- **Testing**: Always test grafting in a development environment before deploying to production. + +## Conclusion + +Grafting is an effective strategy for deploying hotfixes in subgraph development, enabling you to: + +- **Quickly Recover** from critical errors without re-indexing. +- **Preserve Historical Data**, maintaining continuity for applications and users. +- **Ensure Service Availability** by minimizing downtime during critical fixes. 
+ +However, it’s important to use grafting judiciously and follow best practices to mitigate risks. After stabilizing your subgraph with the hotfix, plan to deploy a non-grafted version to ensure long-term maintainability. + +## Additional Resources + +- **[Grafting Documentation](/cookbook/grafting/)**: Replace a Contract and Keep its History With Grafting +- **[Understanding Deployment IDs](/querying/querying-by-subgraph-id-vs-deployment-id/)**: Learn the difference between Deployment ID and Subgraph ID. + +By incorporating grafting into your subgraph development workflow, you can enhance your ability to respond to issues swiftly, ensuring that your data services remain robust and reliable. + +## Subgraph Best Practices 1-6 + +1. [Improve Query Speed with Subgraph Pruning](/cookbook/pruning/) + +2. [Improve Indexing and Query Responsiveness by Using @derivedFrom](/cookbook/derivedfrom/) + +3. [Improve Indexing and Query Performance by Using Immutable Entities and Bytes as IDs](/cookbook/immutable-entities-bytes-as-ids/) + +4. [Improve Indexing Speed by Avoiding `eth_calls`](/cookbook/avoid-eth-calls/) + +5. [Simplify and Optimize with Timeseries and Aggregations](/cookbook/timeseries/) + +6. [Use Grafting for Quick Hotfix Deployment](/cookbook/grafting-hotfix/) diff --git a/website/pages/ha/cookbook/timeseries.mdx b/website/pages/ha/cookbook/timeseries.mdx new file mode 100644 index 000000000000..88ee70005a6e --- /dev/null +++ b/website/pages/ha/cookbook/timeseries.mdx @@ -0,0 +1,194 @@ +--- +title: Subgraph Best Practice 5 - Simplify and Optimize with Timeseries and Aggregations +--- + +## TLDR + +Leveraging the new time-series and aggregations feature in subgraphs can significantly enhance both indexing speed and query performance. + +## Overview + +Timeseries and aggregations reduce data processing overhead and accelerate queries by offloading aggregation computations to the database and simplifying mapping code. 
This approach is particularly effective when handling large volumes of time-based data. + +## Benefits of Timeseries and Aggregations + +1. Improved Indexing Time + +- Less Data to Load: Mappings handle less data since raw data points are stored as immutable timeseries entities. +- Database-Managed Aggregations: Aggregations are automatically computed by the database, reducing the workload on the mappings. + +2. Simplified Mapping Code + +- No Manual Calculations: Developers no longer need to write complex aggregation logic in mappings. +- Reduced Complexity: Simplifies code maintenance and minimizes the potential for errors. + +3. Dramatically Faster Queries + +- Immutable Data: All timeseries data is immutable, enabling efficient storage and retrieval. +- Efficient Data Separation: Aggregates are stored separately from raw timeseries data, allowing queries to process significantly less data—often several orders of magnitude less. + +### Important Considerations + +- Immutable Data: Timeseries data cannot be altered once written, ensuring data integrity and simplifying indexing. +- Automatic ID and Timestamp Management: id and timestamp fields are automatically managed by graph-node, reducing potential errors. +- Efficient Data Storage: By separating raw data from aggregates, storage is optimized, and queries run faster. + +## How to Implement Timeseries and Aggregations + +### Defining Timeseries Entities + +A timeseries entity represents raw data points collected over time. It is defined with the `@entity(timeseries: true)` annotation. Key requirements: + +- Immutable: Timeseries entities are always immutable. +- Mandatory Fields: + - `id`: Must be of type `Int8!` and is auto-incremented. + - `timestamp`: Must be of type `Timestamp!` and is automatically set to the block timestamp. + +Example: + +```graphql +type Data @entity(timeseries: true) { + id: Int8! + timestamp: Timestamp! + price: BigDecimal! 
+} +``` + +### Defining Aggregation Entities + +An aggregation entity computes aggregated values from a timeseries source. It is defined with the `@aggregation` annotation. Key components: + +- Annotation Arguments: + - `intervals`: Specifies time intervals (e.g., `["hour", "day"]`). + +Example: + +```graphql +type Stats @aggregation(intervals: ["hour", "day"], source: "Data") { + id: Int8! + timestamp: Timestamp! + sum: BigDecimal! @aggregate(fn: "sum", arg: "price") +} +``` + +In this example, Stats aggregates the price field from Data over hourly and daily intervals, computing the sum. + +### Querying Aggregated Data + +Aggregations are exposed via query fields that allow filtering and retrieval based on dimensions and time intervals. + +Example: + +```graphql +{ + tokenStats( + interval: "hour" + where: { token: "0x1234567890abcdef", timestamp_gte: "1704164640000000", timestamp_lt: "1704251040000000" } + ) { + id + timestamp + token { + id + } + totalVolume + priceUSD + count + } +} +``` + +### Using Dimensions in Aggregations + +Dimensions are non-aggregated fields used to group data points. They enable aggregations based on specific criteria, such as a token in a financial application. + +Example: + +### Timeseries Entity + +```graphql +type TokenData @entity(timeseries: true) { + id: Int8! + timestamp: Timestamp! + token: Token! + amount: BigDecimal! + priceUSD: BigDecimal! +} +``` + +### Aggregation Entity with Dimension + +```graphql +type TokenStats @aggregation(intervals: ["hour", "day"], source: "TokenData") { + id: Int8! + timestamp: Timestamp! + token: Token! + totalVolume: BigDecimal! @aggregate(fn: "sum", arg: "amount") + priceUSD: BigDecimal! @aggregate(fn: "last", arg: "priceUSD") + count: Int8! @aggregate(fn: "count", cumulative: true) +} +``` + +- Dimension Field: token groups the data, so aggregates are computed per token. +- Aggregates: + - totalVolume: Sum of amount. + - priceUSD: Last recorded priceUSD. 
+ - count: Cumulative count of records.
+
+### Aggregation Functions and Expressions
+
+Supported aggregation functions:
+
+- `sum`
+- `count`
+- `min`
+- `max`
+- `first`
+- `last`
+
+The `arg` in `@aggregate` can be:
+
+- A field name from the timeseries entity.
+- An expression using fields and constants.
+
+### Examples of Aggregation Expressions
+
+- Sum Token Value: `@aggregate(fn: "sum", arg: "priceUSD * amount")`
+- Maximum Positive Amount: `@aggregate(fn: "max", arg: "greatest(amount0, amount1, 0)")`
+- Conditional Sum: `@aggregate(fn: "sum", arg: "case when amount0 > amount1 then amount0 else 0 end")`
+
+Supported operators and functions include basic arithmetic (`+`, `-`, `*`, `/`), comparison operators, logical operators (`and`, `or`, `not`), and SQL functions such as `greatest`, `least`, `coalesce`, etc.
+
+### Query Parameters
+
+- `interval`: Specifies the time interval (e.g., `"hour"`).
+- `where`: Filters based on dimensions and timestamp ranges.
+- `timestamp_gte` / `timestamp_lt`: Filters for start and end times (microseconds since epoch).
+
+### Notes
+
+- Sorting: Results are automatically sorted by timestamp and id in descending order.
+- Current Data: An optional `current` argument can include the current, partially filled interval.
+
+### Conclusion
+
+Implementing timeseries and aggregations in subgraphs is a best practice for projects dealing with time-based data. This approach:
+
+- Enhances Performance: Speeds up indexing and querying by reducing data processing overhead.
+- Simplifies Development: Eliminates the need for manual aggregation logic in mappings.
+- Scales Efficiently: Handles large volumes of data without compromising on speed or responsiveness.
+
+By adopting this pattern, developers can build more efficient and scalable subgraphs, providing faster and more reliable data access to end-users.
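As a worked sketch, an expression argument can be combined with a dimension in one aggregation entity. This reuses the `TokenData` timeseries defined above; the `TokenVolume` entity and field names here are illustrative assumptions:

```graphql
type TokenVolume @aggregation(intervals: ["hour", "day"], source: "TokenData") {
  id: Int8!
  timestamp: Timestamp!
  # Dimension: one row per token per interval
  token: Token!
  # arg is an expression, not a plain field: price times amount, then summed
  volumeUSD: BigDecimal! @aggregate(fn: "sum", arg: "priceUSD * amount")
}
```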
To learn more about implementing timeseries and aggregations, refer to the [Timeseries and Aggregations Readme](https://github.com/graphprotocol/graph-node/blob/master/docs/aggregations.md) and consider experimenting with this feature in your subgraphs. + +## Subgraph Best Practices 1-6 + +1. [Improve Query Speed with Subgraph Pruning](/cookbook/pruning/) + +2. [Improve Indexing and Query Responsiveness by Using @derivedFrom](/cookbook/derivedfrom/) + +3. [Improve Indexing and Query Performance by Using Immutable Entities and Bytes as IDs](/cookbook/immutable-entities-bytes-as-ids/) + +4. [Improve Indexing Speed by Avoiding `eth_calls`](/cookbook/avoid-eth-calls/) + +5. [Simplify and Optimize with Timeseries and Aggregations](/cookbook/timeseries/) + +6. [Use Grafting for Quick Hotfix Deployment](/cookbook/grafting-hotfix/) diff --git a/website/pages/ha/cookbook/upgrading-a-subgraph.mdx b/website/pages/ha/cookbook/upgrading-a-subgraph.mdx deleted file mode 100644 index e8497f9f6be5..000000000000 --- a/website/pages/ha/cookbook/upgrading-a-subgraph.mdx +++ /dev/null @@ -1,225 +0,0 @@ ---- -title: Upgrading an Existing Subgraph to The Graph Network ---- - -## Introduction - -This is a guide on how to upgrade your subgraph from the hosted service to The Graph's decentralized network. Over 1,000 subgraphs have successfully upgraded to The Graph Network including projects like Snapshot, Loopring, Audius, Premia, Livepeer, Uma, Curve, Lido, and many more! - -The process of upgrading is quick and your subgraphs will forever benefit from the reliability and performance that you can only get on The Graph Network. - -### Prerequisites - -- You have already deployed a subgraph on the hosted service. -- The subgraph is indexing a chain available on The Graph Network. -- You have a wallet with ETH to publish your subgraph on-chain. -- You have ~10,000 GRT to curate your subgraph so Indexers can begin indexing it. 
- -## Upgrading an Existing Subgraph to The Graph Network - -> You can find specific commands for your subgraph in the [Subgraph Studio](https://thegraph.com/studio/). - -1. Get the latest version of the graph-cli installed: - -```sh -npm install -g @graphprotocol/graph-cli -``` - -```sh -yarn global add @graphprotocol/graph-cli -``` - -Make sure your `apiVersion` in subgraph.yaml is `0.0.5` or greater. - -2. Inside the subgraph's main project repository, authenticate the subgraph to deploy and build on the studio: - -```sh -graph auth --studio -``` - -3. Generate files and build the subgraph: - -```sh -graph codegen && graph build -``` - -If your subgraph has build errors, refer to the [AssemblyScript Migration Guide](/release-notes/assemblyscript-migration-guide/). - -4. Sign into [Subgraph Studio](https://thegraph.com/studio/) with your wallet and deploy the subgraph. You can find your `` in the Studio UI, which is based on the name of your subgraph. - -```sh -graph deploy --studio -``` - -5. Test queries on the Studio's playground. Here are some examples for the [Sushi - Mainnet Exchange Subgraph](https://thegraph.com/explorer/subgraph?id=0x4bb4c1b0745ef7b4642feeccd0740dec417ca0a0-0&view=Playground): - -```sh -{ - users(first: 5) { - id - liquidityPositions { - id - } - } - bundles(first: 5) { - id - ethPrice - } -} -``` - -6. At this point, your subgraph is now deployed on Subgraph Studio, but not yet published to the decentralized network. You can now test the subgraph to make sure it is working as intended using the temporary query URL as seen on top of the right column above. As this name already suggests, this is a temporary URL and should not be used in production. - -- Updating is just publishing another version of your existing subgraph on-chain. -- Because this incurs a cost, it is highly recommended to deploy and test your subgraph in the Subgraph Studio, using the "Development Query URL" before publishing. 
See an example transaction [here](https://etherscan.io/tx/0xd0c3fa0bc035703c9ba1ce40c1862559b9c5b6ea1198b3320871d535aa0de87b). Gas costs are roughly 0.0425 ETH at 100 gwei. -- Any time you need to update your subgraph, you will be charged an update fee. Because this incurs a cost, it is highly recommended to deploy and test your subgraph on Sepolia before deploying to mainnet. It can, in some cases, also require some GRT if there is no signal on that subgraph. If there is signal/curation on that subgraph version (using auto-migrate), the taxes will be split. - -7. Publish the subgraph on The Graph's decentralized network by hitting the "Publish" button. - -You should curate your subgraph with GRT to ensure that it is indexed by Indexers. To save on gas costs, you can curate your subgraph in the same transaction that you publish it to the network. It is recommended to curate your subgraph with at least 10,000 GRT for high quality of service. - -And that's it! After you are done publishing, you'll be able to view your subgraphs live on the decentralized network via [The Graph Explorer](https://thegraph.com/explorer). - -Feel free to leverage the [#Curators channel](https://discord.gg/s5HfGMXmbW) on Discord to let Curators know that your subgraph is ready to be signaled. It would also be helpful to share your expected query volume with them, so they can estimate how much GRT they should signal on your subgraph. - -### Create an API key - -You can generate an API key in Subgraph Studio [here](https://thegraph.com/studio/apikeys/). - -![API key creation page](/img/api-image.png) - -At the end of each week, an invoice will be generated based on the query fees that have been incurred during this period. This invoice will be paid automatically using the GRT available in your balance. Your balance will be updated after the cost of your query fees is withdrawn. Query fees are paid in GRT via the Arbitrum network.
You will need to add GRT to the Arbitrum billing contract to enable your API key via the following steps: - -- Purchase GRT on an exchange of your choice. -- Send the GRT to your wallet. -- On the Billing page in Studio, click on Add GRT. - -![Add GRT in billing](/img/Add-GRT-New-Page.png) - -- Follow the steps to add your GRT to your billing balance. -- Your GRT will be automatically bridged to the Arbitrum network and added to your billing balance. - -![Billing pane](/img/New-Billing-Pane.png) - -> Note: see the [official billing page](../billing.mdx) for full instructions on adding GRT to your billing balance. - -### Securing your API key - -It is recommended that you secure the API by limiting its usage in two ways: - -1. Authorized Subgraphs -2. Authorized Domain - -You can secure your API key [here](https://thegraph.com/studio/apikeys/test/). - -![Subgraph lockdown page](/img/subgraph-lockdown.png) - -### Querying your subgraph on the decentralized network - -Now you can check the indexing status of the Indexers on the network in Graph Explorer (example [here](https://thegraph.com/explorer/subgraph?id=S9ihna8D733WTEShJ1KctSTCvY1VJ7gdVwhUujq4Ejo&view=Indexers)). The green line at the top indicates that at the time of posting 8 Indexers successfully indexed that subgraph. Also in the Indexer tab you can see which Indexers picked up your subgraph. - -![Rocket Pool subgraph](/img/rocket-pool-subgraph.png) - -As soon as the first Indexer has fully indexed your subgraph you can start to query the subgraph on the decentralized network. In order to retrieve the query URL for your subgraph, you can copy/paste it by clicking on the symbol next to the query URL. You will see something like this: - -`https://gateway.thegraph.com/api/[api-key]/subgraphs/id/S9ihna8D733WTEShJ1KctSTCvY1VJ7gdVwhUujq4Ejo` - -Important: Make sure to replace `[api-key]` with an actual API key generated in the section above. 
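As an illustration of using the query URL from a client, here is a minimal sketch (assuming Python and only the standard library; the API key and the GraphQL document are placeholders you substitute with your own) that builds the gateway URL and sends a request:

```python
import json
from urllib.request import Request, urlopen

GATEWAY = "https://gateway.thegraph.com/api/{api_key}/subgraphs/id/{deployment_id}"

def gateway_url(api_key: str, deployment_id: str) -> str:
    # Substitute your own API key for the [api-key] placeholder shown above.
    return GATEWAY.format(api_key=api_key, deployment_id=deployment_id)

def run_query(url: str, graphql: str) -> dict:
    # POST a GraphQL document to the gateway; the response body is JSON.
    req = Request(
        url,
        data=json.dumps({"query": graphql}).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urlopen(req) as resp:
        return json.load(resp)

url = gateway_url("YOUR-API-KEY", "S9ihna8D733WTEShJ1KctSTCvY1VJ7gdVwhUujq4Ejo")
# run_query(url, "{ stakers(first: 5) { id } }")  # requires a valid API key
```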
- -You can now use that Query URL in your dapp to send your GraphQL requests. - -Congratulations! You are now a pioneer of decentralization! - -> Note: Due to the distributed nature of the network it might be the case that different Indexers have indexed up to different blocks. In order to only receive fresh data you can specify the minimum block an Indexer has to have indexed in order to serve your query with the block: `{ number_gte: $minBlock }` field argument as shown in the example below: - -```graphql -{ - stakers(block: { number_gte: 14486109 }) { - id - } -} -``` - -More information about the nature of the network and how to handle re-orgs is described in the documentation article [Distributed Systems](/querying/distributed-systems/). - -## Updating a Subgraph on the Network - -If you would like to update an existing subgraph on the network, you can do this by deploying a new version of your subgraph to the Subgraph Studio using the Graph CLI. - -1. Make changes to your current subgraph. A good idea is to test small fixes on the Subgraph Studio by publishing to Sepolia. -2. Deploy the following and specify the new version in the command (e.g. v0.0.1, v0.0.2, etc.): - -```sh -graph deploy --studio -``` - -3. Test the new version in the Subgraph Studio by querying in the playground. -4. Publish the new version on The Graph Network. Remember that this requires gas (as described in the section above). - -### Owner Update Fee: Deep Dive - -> Note: Curation on Arbitrum does not use bonding curves. Learn more about Arbitrum [here](/arbitrum/arbitrum-faq/). - -An update requires GRT to be migrated from the old version of the subgraph to the new version. This means that for every update, a new bonding curve will be created (more on bonding curves [here](/network/curating#bonding-curve-101)). - -The new bonding curve charges the 2.5% curation tax on all GRT being migrated to the new version. The owner must pay 50% of this, or 1.25%.
The other 1.25% is absorbed by all the curators as a fee. This incentive design is in place to prevent an owner of a subgraph from being able to drain all their curators' funds with recursive update calls. If there is no curation activity, you will have to pay a minimum of 100 GRT in order to signal your own subgraph. - -Here is an example (this only applies if your subgraph is being actively curated): - -- 100,000 GRT is signaled using auto-migrate on v1 of a subgraph -- Owner updates to v2. 100,000 GRT is migrated to a new bonding curve, where 97,500 GRT are put into the new curve and 2,500 GRT are burned -- The owner then has 1,250 GRT burned to pay for half the fee. The owner must have this in their wallet before the update, otherwise, the update will not succeed. This happens in the same transaction as the update. - -_While this mechanism is currently live on the network, the community is discussing ways to reduce the cost of updates for subgraph developers._ - -### Maintaining a Stable Version of a Subgraph - -If you're making a lot of changes to your subgraph, it is not a good idea to continually update it and front the update costs. Maintaining a stable and consistent version of your subgraph is critical, not only from the cost perspective but also so that Indexers can feel confident in their syncing times. Indexers should be notified when you plan an update so that Indexer syncing times do not get impacted. Feel free to leverage the [#Indexers channel](https://discord.gg/JexvtHa7dq) on Discord to let Indexers know when you're versioning your subgraphs. - -Subgraphs are open APIs that external developers are leveraging. Open APIs need to follow strict standards so that they do not break external developers' applications. In The Graph Network, a subgraph developer must consider Indexers and how long it takes them to sync a new subgraph **as well as** other developers who are using their subgraphs.
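As a sanity check on the figures in the Owner Update Fee example above, the arithmetic can be sketched as follows (Python is used only for the arithmetic; the 2.5% rate is the one implied by the example's own numbers):

```python
signal_grt = 100_000  # GRT signaled with auto-migrate on v1 of the subgraph
tax_bps = 250         # curation tax in basis points (2.5%), per the example figures

burned = signal_grt * tax_bps // 10_000   # GRT burned by the curation tax
migrated = signal_grt - burned            # GRT placed into the new bonding curve
owner_share = burned // 2                 # half of the tax is paid by the owner

print(burned, migrated, owner_share)  # 2500 97500 1250
```

Integer basis-point math is used here deliberately, since token amounts are usually computed without floating-point rounding.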
- -### Updating the Metadata of a Subgraph - -You can update the metadata of your subgraphs without having to publish a new version. The metadata includes the subgraph name, image, description, website URL, source code URL, and categories. Developers can do this by updating their subgraph details in the Subgraph Studio where you can edit all applicable fields. - -Make sure **Update Subgraph Details in Explorer** is checked and click on **Save**. If this is checked, an on-chain transaction will be generated that updates subgraph details in the Explorer without having to publish a new version with a new deployment. - -## Best Practices for Deploying a Subgraph to The Graph Network - -1. Leveraging an ENS name for Subgraph Development: - -- Set up your ENS [here](https://app.ens.domains/) -- Add your ENS name to your settings [here](https://thegraph.com/explorer/settings?view=display-name). - -2. The more filled out your profiles are, the better the chances for your subgraphs to be indexed and curated. - -## Deprecating a Subgraph on The Graph Network - -Follow the steps [here](/managing/transfer-and-deprecate-a-subgraph) to deprecate your subgraph and remove it from The Graph Network. - -## Querying a Subgraph + Billing on The Graph Network - -The hosted service was set up to allow developers to deploy their subgraphs without any restrictions. - -In order for The Graph Network to truly be decentralized, query fees have to be paid as a core part of the protocol's incentives. For more information on subscribing to APIs and paying the query fees, check out billing documentation [here](/billing/). - -### Estimate Query Fees on the Network - -While this is not a live feature in the product UI, you can set your maximum budget per query by taking the amount you're willing to pay per month and dividing it by your expected query volume. - -While you get to decide on your query budget, there is no guarantee that an Indexer will be willing to serve queries at that price. 
If a Gateway can match you to an Indexer willing to serve a query at, or lower than, the price you are willing to pay, you will pay the difference between your budget **and** their price. As a consequence, a lower query price reduces the pool of Indexers available to you, which may affect the quality of service you receive. It's beneficial to have high query fees, as that may attract curation and big-name Indexers to your subgraph. - -Remember that it's a dynamic and growing market, but how you interact with it is in your control. There is no maximum or minimum price specified in the protocol or the Gateways. For example, you can look at the price paid by a few of the dapps on the network (on a per-week basis), below. See the last column, which shows query fees in GRT. - -![QueryFee](/img/QueryFee.png) - -## Additional Resources - -If you're still confused, fear not! Check out the following resources or watch our video guide on upgrading subgraphs to the decentralized network below: - - - -- [The Graph Network Contracts](https://github.com/graphprotocol/contracts) -- [Curation Contract](https://github.com/graphprotocol/contracts/blob/dev/contracts/curation/Curation.sol) - the underlying contract that the GNS wraps around - - Address - `0x8fe00a685bcb3b2cc296ff6ffeab10aca4ce1538` -- [Subgraph Studio documentation](/deploying/subgraph-studio) diff --git a/website/pages/hi/cookbook/base-testnet.mdx b/website/pages/hi/cookbook/base-testnet.mdx deleted file mode 100644 index 186dd9481c97..000000000000 --- a/website/pages/hi/cookbook/base-testnet.mdx +++ /dev/null @@ -1,111 +0,0 @@ ---- -title: Building Subgraphs on Base ---- - -This guide will quickly take you through how to initialize, create, and deploy your subgraph on Base testnet. - -What you'll need: - -- A Base Sepolia testnet contract address -- A crypto wallet (e.g. MetaMask or Coinbase Wallet) - -## Subgraph Studio - -### 1.
Install the Graph CLI - -The Graph CLI (>=v0.41.0) is written in JavaScript and you will need to have either `npm` or `yarn` installed to use it. - -```sh -# NPM -npm install -g @graphprotocol/graph-cli - -# Yarn -yarn global add @graphprotocol/graph-cli -``` - -### 2. Create your subgraph in Subgraph Studio - -Go to [Subgraph Studio](https://thegraph.com/studio/) and connect your crypto wallet. - -Once connected, click "Create a Subgraph", enter a name for your subgraph and click Create a Subgraph. - -### 3. Initialize your Subgraph - -> You can find specific commands for your subgraph in Subgraph Studio. - -Make sure that the graph-cli is updated to latest (above 0.41.0) - -```sh -graph --version -``` - -Initialize your subgraph from an existing contract. - -```sh -graph init --studio -``` - -Your subgraph slug is an identifier for your subgraph. The CLI tool will walk you through the steps for creating a subgraph, including: - -- Protocol: ethereum -- Subgraph slug: `` -- Directory to create the subgraph in: `` -- Ethereum network: base-sepolia -- Contract address: `` -- Start block (optional) -- Contract name: `` -- Yes/no to indexing events (yes means your subgraph will be bootstrapped with entities in the schema and simple mappings for emitted events) - -### 4. Write your Subgraph - -> If emitted events are the only thing you want to index, then no additional work is required, and you can skip to the next step. - -The previous command creates a scaffold subgraph that you can use as a starting point for building your subgraph. When making changes to the subgraph, you will mainly work with three files: - -- Manifest (subgraph.yaml) - The manifest defines what datasources your subgraphs will index. Make sure to add `base-sepolia` as the network name in the manifest file to deploy your subgraph on Base Sepolia. -- Schema (schema.graphql) - The GraphQL schema defines what data you wish to retrieve from the subgraph.
-- AssemblyScript Mappings (mapping.ts) - This is the code that translates data from your datasources to the entities defined in the schema. - -If you want to index additional data, you will need to extend the manifest, schema and mappings. - -For more information on how to write your subgraph, see [Creating a Subgraph](/developing/creating-a-subgraph). - -### 5. Deploy to Subgraph Studio - -Before you can deploy your subgraph, you will need to authenticate with Subgraph Studio. You can do this by running the following command: - -Authenticate the subgraph on Studio: - -``` -graph auth --studio -``` - -Next, enter your subgraph's directory. - -``` - cd -``` - -Build your subgraph with the following command: - -``` -graph codegen && graph build -``` - -Finally, you can deploy your subgraph using this command: - -``` -graph deploy --studio -``` - -### 6. Query your subgraph - -Once your subgraph is deployed, you can query it from your dapp using the `Development Query URL` in Subgraph Studio. - -Note: The Studio API is rate-limited, so it should preferably be used for development and testing. - -To learn more about querying data from your subgraph, see the [Querying a Subgraph](/querying/querying-the-graph) page. diff --git a/website/pages/hi/cookbook/grafting-hotfix.mdx new file mode 100644 index 000000000000..4be0a0b07790 --- /dev/null +++ b/website/pages/hi/cookbook/grafting-hotfix.mdx @@ -0,0 +1,186 @@ +--- +title: Subgraph Best Practice 6 - Use Grafting for Quick Hotfix Deployment +--- + +## TLDR + +Grafting is a powerful feature in subgraph development that allows you to build and deploy new subgraphs while reusing the indexed data from existing ones. + +### Overview + +This feature enables quick deployment of hotfixes for critical issues, eliminating the need to re-index the entire subgraph from scratch. By preserving historical data, grafting minimizes downtime and ensures continuity in data services.
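In manifest terms, the grafting described above boils down to a feature declaration plus a `graft` section (a minimal sketch; the deployment ID and block number are placeholders):

```yaml
features:
  - grafting
graft:
  base: QmYourBaseDeploymentID # Deployment ID (not Subgraph ID) of the base subgraph
  block: 6000000 # data up to and including this block is reused from the base
```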
+ +## Benefits of Grafting for Hotfixes + +1. **Rapid Deployment** + + - **Minimize Downtime**: When a subgraph encounters a critical error and stops indexing, grafting enables you to deploy a fix immediately without waiting for re-indexing. + - **Immediate Recovery**: The new subgraph continues from the last indexed block, ensuring that data services remain uninterrupted. + +2. **Data Preservation** + + - **Reuse Historical Data**: Grafting copies the existing data from the base subgraph, so you don’t lose valuable historical records. + - **Consistency**: Maintains data continuity, which is crucial for applications relying on consistent historical data. + +3. **Efficiency** + - **Save Time and Resources**: Avoids the computational overhead of re-indexing large datasets. + - **Focus on Fixes**: Allows developers to concentrate on resolving issues rather than managing data recovery. + +## Best Practices When Using Grafting for Hotfixes + +1. **Initial Deployment Without Grafting** + + - **Start Clean**: Always deploy your initial subgraph without grafting to ensure that it’s stable and functions as expected. + - **Test Thoroughly**: Validate the subgraph’s performance to minimize the need for future hotfixes. + +2. **Implementing the Hotfix with Grafting** + + - **Identify the Issue**: When a critical error occurs, determine the block number of the last successfully indexed event. + - **Create a New Subgraph**: Develop a new subgraph that includes the hotfix. + - **Configure Grafting**: Use grafting to copy data up to the identified block number from the failed subgraph. + - **Deploy Quickly**: Publish the grafted subgraph to restore service as soon as possible. + +3. **Post-Hotfix Actions** + + - **Monitor Performance**: Ensure the grafted subgraph is indexing correctly and the hotfix resolves the issue. + - **Republish Without Grafting**: Once stable, deploy a new version of the subgraph without grafting for long-term maintenance. 
+ > Note: Relying on grafting indefinitely is not recommended as it can complicate future updates and maintenance. + - **Update References**: Redirect any services or applications to use the new, non-grafted subgraph. + +4. **Important Considerations** + - **Careful Block Selection**: Choose the graft block number carefully to prevent data loss. + - **Tip**: Use the block number of the last correctly processed event. + - **Use Deployment ID**: Ensure you reference the Deployment ID of the base subgraph, not the Subgraph ID. + - **Note**: The Deployment ID is the unique identifier for a specific subgraph deployment. + - **Feature Declaration**: Remember to declare grafting in the subgraph manifest under features. + +## Example: Deploying a Hotfix with Grafting + +Suppose you have a subgraph tracking a smart contract that has stopped indexing due to a critical error. Here’s how you can use grafting to deploy a hotfix. + +1. **Failed Subgraph Manifest (subgraph.yaml)** + + ```yaml + specVersion: 1.0.0 + schema: + file: ./schema.graphql + dataSources: + - kind: ethereum/contract + name: OldSmartContract + network: sepolia + source: + address: '0xOldContractAddress' + abi: Lock + startBlock: 5000000 + mapping: + kind: ethereum/events + apiVersion: 0.0.7 + language: wasm/assemblyscript + entities: + - Withdrawal + abis: + - name: Lock + file: ./abis/OldLock.json + eventHandlers: + - event: Withdrawal(uint256,uint256) + handler: handleOldWithdrawal + file: ./src/old-lock.ts + ``` + +2. 
**New Grafted Subgraph Manifest (subgraph.yaml)** + ```yaml + specVersion: 1.0.0 + schema: + file: ./schema.graphql + dataSources: + - kind: ethereum/contract + name: NewSmartContract + network: sepolia + source: + address: '0xNewContractAddress' + abi: Lock + startBlock: 6000001 # Block after the last indexed block + mapping: + kind: ethereum/events + apiVersion: 0.0.7 + language: wasm/assemblyscript + entities: + - Withdrawal + abis: + - name: Lock + file: ./abis/Lock.json + eventHandlers: + - event: Withdrawal(uint256,uint256) + handler: handleWithdrawal + file: ./src/lock.ts + features: + - grafting + graft: + base: QmBaseDeploymentID # Deployment ID of the failed subgraph + block: 6000000 # Last successfully indexed block + ``` + +**Explanation:** + +- **Data Source Update**: The new subgraph points to 0xNewContractAddress, which may be a fixed version of the smart contract. +- **Start Block**: Set to one block after the last successfully indexed block to avoid reprocessing the error. +- **Grafting Configuration**: + - **base**: Deployment ID of the failed subgraph. + - **block**: Block number where grafting should begin. + +3. **Deployment Steps** + + - **Update the Code**: Implement the hotfix in your mapping scripts (e.g., handleWithdrawal). + - **Adjust the Manifest**: As shown above, update the `subgraph.yaml` with grafting configurations. + - **Deploy the Subgraph**: + - Authenticate with the Graph CLI. + - Deploy the new subgraph using `graph deploy`. + +4. **Post-Deployment** + - **Verify Indexing**: Check that the subgraph is indexing correctly from the graft point. + - **Monitor Data**: Ensure that new data is being captured and the hotfix is effective. + - **Plan for Republish**: Schedule the deployment of a non-grafted version for long-term stability. 
+ +## Warnings and Cautions + +While grafting is a powerful tool for deploying hotfixes quickly, there are specific scenarios where it should be avoided to maintain data integrity and ensure optimal performance. + +- **Incompatible Schema Changes**: If your hotfix requires altering the type of existing fields or removing fields from your schema, grafting is not suitable. Grafting expects the new subgraph’s schema to be compatible with the base subgraph’s schema. Incompatible changes can lead to data inconsistencies and errors because the existing data won’t align with the new schema. +- **Significant Mapping Logic Overhauls**: When the hotfix involves substantial modifications to your mapping logic—such as changing how events are processed or altering handler functions—grafting may not function correctly. The new logic might not be compatible with the data processed under the old logic, leading to incorrect data or failed indexing. +- **Deployments to The Graph Network**: Grafting is not recommended for subgraphs intended for The Graph’s decentralized network (mainnet). It can complicate indexing and may not be fully supported by all Indexers, potentially causing unexpected behavior or increased costs. For mainnet deployments, it’s safer to re-index the subgraph from scratch to ensure full compatibility and reliability. + +### Risk Management + +- **Data Integrity**: Incorrect block numbers can lead to data loss or duplication. +- **Testing**: Always test grafting in a development environment before deploying to production. + +## Conclusion + +Grafting is an effective strategy for deploying hotfixes in subgraph development, enabling you to: + +- **Quickly Recover** from critical errors without re-indexing. +- **Preserve Historical Data**, maintaining continuity for applications and users. +- **Ensure Service Availability** by minimizing downtime during critical fixes. 
+ +However, it’s important to use grafting judiciously and follow best practices to mitigate risks. After stabilizing your subgraph with the hotfix, plan to deploy a non-grafted version to ensure long-term maintainability. + +## Additional Resources + +- **[Grafting Documentation](/cookbook/grafting/)**: Replace a Contract and Keep its History With Grafting +- **[Understanding Deployment IDs](/querying/querying-by-subgraph-id-vs-deployment-id/)**: Learn the difference between Deployment ID and Subgraph ID. + +By incorporating grafting into your subgraph development workflow, you can enhance your ability to respond to issues swiftly, ensuring that your data services remain robust and reliable. + +## Subgraph Best Practices 1-6 + +1. [Improve Query Speed with Subgraph Pruning](/cookbook/pruning/) + +2. [Improve Indexing and Query Responsiveness by Using @derivedFrom](/cookbook/derivedfrom/) + +3. [Improve Indexing and Query Performance by Using Immutable Entities and Bytes as IDs](/cookbook/immutable-entities-bytes-as-ids/) + +4. [Improve Indexing Speed by Avoiding `eth_calls`](/cookbook/avoid-eth-calls/) + +5. [Simplify and Optimize with Timeseries and Aggregations](/cookbook/timeseries/) + +6. [Use Grafting for Quick Hotfix Deployment](/cookbook/grafting-hotfix/) diff --git a/website/pages/hi/cookbook/timeseries.mdx b/website/pages/hi/cookbook/timeseries.mdx new file mode 100644 index 000000000000..88ee70005a6e --- /dev/null +++ b/website/pages/hi/cookbook/timeseries.mdx @@ -0,0 +1,194 @@ +--- +title: Subgraph Best Practice 5 - Simplify and Optimize with Timeseries and Aggregations +--- + +## TLDR + +Leveraging the new time-series and aggregations feature in subgraphs can significantly enhance both indexing speed and query performance. + +## Overview + +Timeseries and aggregations reduce data processing overhead and accelerate queries by offloading aggregation computations to the database and simplifying mapping code. 
This approach is particularly effective when handling large volumes of time-based data. + +## Benefits of Timeseries and Aggregations + +1. Improved Indexing Time + +- Less Data to Load: Mappings handle less data since raw data points are stored as immutable timeseries entities. +- Database-Managed Aggregations: Aggregations are automatically computed by the database, reducing the workload on the mappings. + +2. Simplified Mapping Code + +- No Manual Calculations: Developers no longer need to write complex aggregation logic in mappings. +- Reduced Complexity: Simplifies code maintenance and minimizes the potential for errors. + +3. Dramatically Faster Queries + +- Immutable Data: All timeseries data is immutable, enabling efficient storage and retrieval. +- Efficient Data Separation: Aggregates are stored separately from raw timeseries data, allowing queries to process significantly less data—often several orders of magnitude less. + +### Important Considerations + +- Immutable Data: Timeseries data cannot be altered once written, ensuring data integrity and simplifying indexing. +- Automatic ID and Timestamp Management: id and timestamp fields are automatically managed by graph-node, reducing potential errors. +- Efficient Data Storage: By separating raw data from aggregates, storage is optimized, and queries run faster. + +## How to Implement Timeseries and Aggregations + +### Defining Timeseries Entities + +A timeseries entity represents raw data points collected over time. It is defined with the `@entity(timeseries: true)` annotation. Key requirements: + +- Immutable: Timeseries entities are always immutable. +- Mandatory Fields: + - `id`: Must be of type `Int8!` and is auto-incremented. + - `timestamp`: Must be of type `Timestamp!` and is automatically set to the block timestamp. + +Example: + +```graphql +type Data @entity(timeseries: true) { + id: Int8! + timestamp: Timestamp! + price: BigDecimal! 
+} +``` + +### Defining Aggregation Entities + +An aggregation entity computes aggregated values from a timeseries source. It is defined with the `@aggregation` annotation. Key components: + +- Annotation Arguments: + - `intervals`: Specifies time intervals (e.g., `["hour", "day"]`). + +Example: + +```graphql +type Stats @aggregation(intervals: ["hour", "day"], source: "Data") { + id: Int8! + timestamp: Timestamp! + sum: BigDecimal! @aggregate(fn: "sum", arg: "price") +} +``` + +In this example, Stats aggregates the price field from Data over hourly and daily intervals, computing the sum. + +### Querying Aggregated Data + +Aggregations are exposed via query fields that allow filtering and retrieval based on dimensions and time intervals. + +Example: + +```graphql +{ + tokenStats( + interval: "hour" + where: { token: "0x1234567890abcdef", timestamp_gte: "1704164640000000", timestamp_lt: "1704251040000000" } + ) { + id + timestamp + token { + id + } + totalVolume + priceUSD + count + } +} +``` + +### Using Dimensions in Aggregations + +Dimensions are non-aggregated fields used to group data points. They enable aggregations based on specific criteria, such as a token in a financial application. + +Example: + +### Timeseries Entity + +```graphql +type TokenData @entity(timeseries: true) { + id: Int8! + timestamp: Timestamp! + token: Token! + amount: BigDecimal! + priceUSD: BigDecimal! +} +``` + +### Aggregation Entity with Dimension + +```graphql +type TokenStats @aggregation(intervals: ["hour", "day"], source: "TokenData") { + id: Int8! + timestamp: Timestamp! + token: Token! + totalVolume: BigDecimal! @aggregate(fn: "sum", arg: "amount") + priceUSD: BigDecimal! @aggregate(fn: "last", arg: "priceUSD") + count: Int8! @aggregate(fn: "count", cumulative: true) +} +``` + +- Dimension Field: token groups the data, so aggregates are computed per token. +- Aggregates: + - totalVolume: Sum of amount. + - priceUSD: Last recorded priceUSD. 
+ - count: Cumulative count of records. + +### Aggregation Functions and Expressions + +Supported aggregation functions: + +- sum +- count +- min +- max +- first +- last + +### The arg in @aggregate can be + +- A field name from the timeseries entity. +- An expression using fields and constants. + +### Examples of Aggregation Expressions + +- Sum Token Value: @aggregate(fn: "sum", arg: "priceUSD * amount") +- Maximum Positive Amount: @aggregate(fn: "max", arg: "greatest(amount0, amount1, 0)") +- Conditional Sum: @aggregate(fn: "sum", arg: "case when amount0 > amount1 then amount0 else 0 end") + +Supported operators and functions include basic arithmetic (+, -, *, /), comparison operators, logical operators (and, or, not), and SQL functions like greatest, least, coalesce, etc. + +### Query Parameters + +- interval: Specifies the time interval (e.g., "hour"). +- where: Filters based on dimensions and timestamp ranges. +- timestamp_gte / timestamp_lt: Filters for start and end times (microseconds since epoch). + +### Notes + +- Sorting: Results are automatically sorted by timestamp and id in descending order. +- Current Data: An optional current argument can include the current, partially filled interval. + +### Conclusion + +Implementing timeseries and aggregations in subgraphs is a best practice for projects dealing with time-based data. This approach: + +- Enhances Performance: Speeds up indexing and querying by reducing data processing overhead. +- Simplifies Development: Eliminates the need for manual aggregation logic in mappings. +- Scales Efficiently: Handles large volumes of data without compromising on speed or responsiveness. + +By adopting this pattern, developers can build more efficient and scalable subgraphs, providing faster and more reliable data access to end-users.
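To make the expression semantics concrete, the SQL-style aggregation expressions listed above can be mirrored in ordinary code (a sketch only; Python stands in for the evaluation the database performs per interval):

```python
# Sample timeseries rows within one interval (illustrative values only)
rows = [
    {"priceUSD": 2.0, "amount": 10.0, "amount0": 5.0, "amount1": 3.0},
    {"priceUSD": 1.5, "amount": 4.0,  "amount0": 1.0, "amount1": 6.0},
]

# sum over "priceUSD * amount"
token_value = sum(r["priceUSD"] * r["amount"] for r in rows)

# max over "greatest(amount0, amount1, 0)"
max_positive = max(max(r["amount0"], r["amount1"], 0) for r in rows)

# sum over "case when amount0 > amount1 then amount0 else 0 end"
conditional = sum(r["amount0"] if r["amount0"] > r["amount1"] else 0 for r in rows)

print(token_value, max_positive, conditional)  # 26.0 6.0 5.0
```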
To learn more about implementing timeseries and aggregations, refer to the [Timeseries and Aggregations Readme](https://github.com/graphprotocol/graph-node/blob/master/docs/aggregations.md) and consider experimenting with this feature in your subgraphs. + +## Subgraph Best Practices 1-6 + +1. [Improve Query Speed with Subgraph Pruning](/cookbook/pruning/) + +2. [Improve Indexing and Query Responsiveness by Using @derivedFrom](/cookbook/derivedfrom/) + +3. [Improve Indexing and Query Performance by Using Immutable Entities and Bytes as IDs](/cookbook/immutable-entities-bytes-as-ids/) + +4. [Improve Indexing Speed by Avoiding `eth_calls`](/cookbook/avoid-eth-calls/) + +5. [Simplify and Optimize with Timeseries and Aggregations](/cookbook/timeseries/) + +6. [Use Grafting for Quick Hotfix Deployment](/cookbook/grafting-hotfix/) diff --git a/website/pages/hi/cookbook/upgrading-a-subgraph.mdx b/website/pages/hi/cookbook/upgrading-a-subgraph.mdx deleted file mode 100644 index 0152c7c54e84..000000000000 --- a/website/pages/hi/cookbook/upgrading-a-subgraph.mdx +++ /dev/null @@ -1,156 +0,0 @@ ---- -title: Upgrading an Existing Subgraph to The Graph Network ---- - -## परिचय - -This is a guide on how to upgrade your subgraph from the hosted service to The Graph's decentralized network. Over 1,000 subgraphs have successfully upgraded to The Graph Network including projects like Snapshot, Loopring, Audius, Premia, Livepeer, Uma, Curve, Lido, and many more! - -The process of upgrading is quick and your subgraphs will forever benefit from the reliability and performance that you can only get on The Graph Network. - -### आवश्यक शर्तें - -- You have a subgraph deployed on the hosted service. - -## Upgrading an Existing Subgraph to The Graph Network - - - -If you are logged in to the hosted service, you can access a simple flow to upgrade your subgraphs from [your dashboard](https://thegraph.com/hosted-service/dashboard), or from an individual subgraph page. 
- -> This process typically takes less than five minutes. - -1. Select the subgraph(s) you want to upgrade. -2. Connect or enter the receiving wallet (the wallet that will become the owner of the subgraph). -3. Click the "Upgrade" button. - -That's it! Your subgraphs will be deployed to Subgraph Studio, and published on The Graph Network. You can access the [Subgraph Studio](https://thegraph.com/studio/) to manage your subgraphs, logging in with the wallet specified during the upgrade process. - -You'll be able to view your subgraphs live on the decentralized network via [Graph Explorer](https://thegraph.com/explorer). - -### What next? - -When your subgraph is upgraded, it will automatically be indexed by the upgrade indexer. If the indexed chain is [fully supported by The Graph Network](/developing/supported-networks), you can add some GRT as "signal", to attract more indexers. It is recommended to curate your subgraph with at least 3,000 GRT to attract 2-3 Indexers for higher quality of service. - -You can start to query your subgraph right away on The Graph Network, once you have generated an API key. - -### एक एपीआई key बनाएँ - -You can generate an API key in Subgraph Studio [here](https://thegraph.com/studio/apikeys/). - -![API key creation page](/img/api-image.png) - -You can use this API key to query subgraphs on The Graph Network. All users start on the Free Plan, which includes 100,000 free queries per month. Developers can sign up for the Growth Plan by connecting a credit or debit card, or by depositing GRT to Subgraph Studio billing system. - -> Note: see the [billing documentation](../billing) for more information on plans, and on managing your billing on Subgraph Studio. - -### अपनी एपीआई key सुरक्षित करना - -यह अनुशंसा की जाती है कि आप एपीआई के उपयोग को दो तरीकों से सीमित करके सुरक्षित करें: - -1. अधिकृत सबग्राफ -2. अधिकृत डोमेन - -You can secure your API key [here](https://thegraph.com/studio/apikeys/). 
- -![Subgraph lockdown page](/img/subgraph-lockdown.png) - -### विकेंद्रीकृत नेटवर्क पर अपने सबग्राफ को क्वेरी करना - -Now you can check the indexing status of the Indexers on the network in Graph Explorer (example [here](https://thegraph.com/explorer/subgraphs/Dtj2HicXKpoUjNB7ffdBkMwt3L9Sz3cbENd67AdHu6Vb?view=Indexers&chain=arbitrum-one)). The green line at the top indicates that at the time of posting 7 Indexers successfully indexed that subgraph. Also in the Indexer tab you can see which Indexers picked up your subgraph. - -![Rocket Pool subgraph](/img/rocket-pool-subgraph.png) - -जैसे ही पहले इंडेक्सर ने आपके सबग्राफ को पूरी तरह से इंडेक्स कर लिया, आप सबग्राफ को विकेंद्रीकृत नेटवर्क पर क्वेरी करना शुरू कर सकते हैं। अपने सबग्राफ के लिए क्वेरी URL को पुनः प्राप्त करने के लिए, आप क्वेरी URL के बगल में स्थित प्रतीक पर क्लिक करके इसे कॉपी/पेस्ट कर सकते हैं। आपको कुछ ऐसा दिखाई देगा: - -`https://gateway.thegraph.com/api/[api-key]/subgraphs/id/S9ihna8D733WTEShJ1KctSTCvY1VJ7gdVwhUujq4Ejo` - -Important: Make sure to replace `[api-key]` with an actual API key generated in the section above. - -अब आप अपने ग्राफ़िकल अनुरोधों को भेजने के लिए अपने डैप में उस क्वेरी URL का उपयोग कर सकते हैं। - -बधाई हो! अब आप विकेंद्रीकरण के अग्रणी हैं! - -> Note: Due to the distributed nature of the network it might be the case that different Indexers have indexed up to different blocks. In order to only receive fresh data you can specify the minimum block an Indexer has to have indexed in order to serve your query with the block: `{ number_gte: $minBlock }` field argument as shown in the example below: - -```graphql -{ - stakers(block: { number_gte: 14486109 }) { - id - } -} -``` - -More information about the nature of the network and how to handle re-orgs are described in the documentation article [Distributed Systems](/querying/distributed-systems/). 
- -## Updating a Subgraph on the Network - -If you would like to update an existing subgraph on the network, you can do this by deploying a new version of your subgraph to Subgraph Studio using the Graph CLI. - -1. Make changes to your current subgraph. -2. निम्नलिखित को तैनात करें और कमांड में नया संस्करण निर्दिष्ट करें (जैसे। v0.0.1, v0.0.2, आदि): - -```sh -graph deploy --studio --version -``` - -3. Test the new version in Subgraph Studio by querying in the playground -4. ग्राफ़ नेटवर्क पर नया संस्करण प्रकाशित करें। याद रखें कि इसके लिए गैस की आवश्यकता होती है (जैसा कि ऊपर अनुभाग में बताया गया है)। - -### Owner Update Fee: Deep Dive - -> Note: Curation on Arbitrum has a flat bonding curve. Learn more about Arbitrum [here](/arbitrum/arbitrum-faq/). - -An update requires GRT to be migrated from the old version of the subgraph to the new version. This means that for every update, a new bonding curve will be created (more on bonding curves [here](/network/curating#bonding-curve-101)). - -The new bonding curve charges the 1% curation tax on all GRT being migrated to the new version. The owner must pay 50% of this or 1.25%. The other 1.25% is absorbed by all the curators as a fee. This incentive design is in place to prevent an owner of a subgraph from being able to drain all their curator's funds with recursive update calls. If there is no curation activity, you will have to pay a minimum of 100 GRT in order to signal your own subgraph. - -आइए एक उदाहरण बनाते हैं, यह केवल तभी होता है जब आपका सबग्राफ सक्रिय रूप से क्यूरेट किया जा रहा हो: - -- एक सबग्राफ के v1 पर ऑटो-माइग्रेट का उपयोग करके 100,000 GRT का संकेत दिया जाता है -- Owner updates to v2. 100,000 GRT is migrated to a new bonding curve, where 97,500 GRT get put into the new curve and 2,500 GRT is burned -- The owner then has 1250 GRT burned to pay for half the fee. The owner must have this in their wallet before the update, otherwise, the update will not succeed. 
This happens in the same transaction as the update. - -_While this mechanism is currently live on the network, the community is currently discussing ways to reduce the cost of updates for subgraph developers._ - -### एक सबग्राफ का एक स्थिर संस्करण बनाए रखना - -If you're making a lot of changes to your subgraph, it is not a good idea to continually update it and front the update costs. Maintaining a stable and consistent version of your subgraph is critical, not only from the cost perspective but also so that Indexers can feel confident in their syncing times. Indexers should be flagged when you plan for an update so that Indexer syncing times do not get impacted. Feel free to leverage the [#Indexers channel](https://discord.gg/JexvtHa7dq) on Discord to let Indexers know when you're versioning your subgraphs. - -Subgraphs are open APIs that external developers are leveraging. Open APIs need to follow strict standards so that they do not break external developers' applications. In The Graph Network, a subgraph developer must consider Indexers and how long it takes them to sync a new subgraph **as well as** other developers who are using their subgraphs. - -### सबग्राफ के मेटाडेटा को अपडेट करना - -You can update the metadata of your subgraphs without having to publish a new version. The metadata includes the subgraph name, image, description, website URL, source code URL, and categories. Developers can do this by updating their subgraph details in Subgraph Studio where you can edit all applicable fields. - -Make sure **Update Subgraph Details in Explorer** is checked and click on **Save**. If this is checked, an on-chain transaction will be generated that updates subgraph details in the Explorer without having to publish a new version with a new deployment. - -## ग्राफ़ नेटवर्क में एक सबग्राफ को तैनात करने के लिए सर्वोत्तम अभ्यास - -1. 
सबग्राफ डेवलपमेंट के लिए ENS नाम का लाभ उठाना: - -- Set up your ENS [here](https://app.ens.domains/) -- Add your ENS name to your settings [here](https://thegraph.com/explorer/settings?view=display-name). - -2. आपके प्रोफाइल जितने अधिक भरे हुए हैं, आपके सबग्राफ के अनुक्रमित और क्यूरेट होने की संभावना उतनी ही बेहतर है। - -## ग्राफ़ नेटवर्क पर एक सबग्राफ का बहिष्कार करना - -Follow the steps [here](/managing/transfer-and-deprecate-a-subgraph) to deprecate your subgraph and remove it from The Graph Network. - -## ग्राफ़ नेटवर्क पर एक सबग्राफ + बिलिंग को क्वेरी करना - -The hosted service was set up to allow developers to deploy their subgraphs without any restrictions. - -On The Graph Network, query fees have to be paid as a core part of the protocol's incentives. For more information on subscribing to APIs and paying the query fees, check out billing documentation [here](/billing/). - -## अतिरिक्त संसाधन - -If you're still confused, fear not! Check out the following resources or watch our video guide on upgrading subgraphs to the decentralized network below: - - - -- [The Graph Network Contracts](https://github.com/graphprotocol/contracts) -- [Curation Contract](https://github.com/graphprotocol/contracts/blob/dev/contracts/curation/Curation.sol) - the underlying contract that the GNS wraps around - - Address - `0x8fe00a685bcb3b2cc296ff6ffeab10aca4ce1538` -- [Subgraph Studio documentation](/deploying/subgraph-studio) diff --git a/website/pages/it/cookbook/base-testnet.mdx b/website/pages/it/cookbook/base-testnet.mdx deleted file mode 100644 index 3a1d98a44103..000000000000 --- a/website/pages/it/cookbook/base-testnet.mdx +++ /dev/null @@ -1,111 +0,0 @@ ---- -title: Building Subgraphs on Base ---- - -This guide will quickly take you through how to initialize, create, and deploy your subgraph on Base testnet. - -What you'll need: - -- A Base Sepolia testnet contract address -- A crypto wallet (e.g. MetaMask or Coinbase Wallet) - -## Subgraph Studio - -### 1. 
Install the Graph CLI - -The Graph CLI (>=v0.41.0) is written in JavaScript and you will need to have either `npm` or `yarn` installed to use it. - -```sh -# NPM -npm install -g @graphprotocol/graph-cli - -# Yarn -yarn global add @graphprotocol/graph-cli -``` - -### 2. Create your subgraph in Subgraph Studio - -Go to [Subgraph Studio](https://thegraph.com/studio/) and connect your crypto wallet. - -Once connected, click "Create a Subgraph", enter a name for your subgraph and click Create a Subgraph. - -### 3. Initialize your Subgraph - -> You can find specific commands for your subgraph in Subgraph Studio. - -Make sure that the graph-cli is updated to latest (above 0.41.0) - -```sh -graph --version -``` - -Initialize your subgraph from an existing contract. - -```sh -graph init --studio -``` - -Your subgraph slug is an identifier for your subgraph. The CLI tool will walk you through the steps for creating a subgraph, including: - -- Protocol: ethereum -- Subgraph slug: `` -- Directory to create the subgraph in: `` -- Ethereum network: base-sepolia -- Contract address: `` -- Start block (optional) -- Contract name: `` -- Yes/no to indexing events (yes means your subgraph will be bootstrapped with entities in the schema and simple mappings for emitted events) - -### 3. Write your Subgraph - -> If emitted events are the only thing you want to index, then no additional work is required, and you can skip to the next step. - -The previous command creates a scaffold subgraph that you can use as a starting point for building your subgraph. When making changes to the subgraph, you will mainly work with three files: - -- Manifest (subgraph.yaml) - The manifest defines what datasources your subgraphs will index. Make sure to add `base-sepolia` as the network name in manifest file to deploy your subgraph on Base Sepolia. -- Schema (schema.graphql) - The GraphQL schema defines what data you wish to retreive from the subgraph. 
-- AssemblyScript Mappings (mapping.ts) - This is the code that translates data from your datasources to the entities defined in the schema.
-
-If you want to index additional data, you will need extend the manifest, schema and mappings.
-
-For more information on how to write your subgraph, see [Creating a Subgraph](/developing/creating-a-subgraph).
-
-### 4. Deploy to Subgraph Studio
-
-Before you can deploy your subgraph, you will need to authenticate with Subgraph Studio. You can do this by running the following command:
-
-Authenticate the subgraph on studio
-
-```
-graph auth --studio
-```
-
-Next, enter your subgraph's directory.
-
-```
- cd
-```
-
-Build your subgraph with the following command:
-
-````
-```
-graph codegen && graph build
-```
-````
-
-Finally, you can deploy your subgraph using this command:
-
-````
-```
-graph deploy --studio
-```
-````
-
-### 5. Query your subgraph
-
-Once your subgraph is deployed, you can query it from your dapp using the `Development Query URL` in Subgraph Studio.
-
-Note - Studio API is rate-limited. Hence should preferably be used for development and testing.
-
-To learn more about querying data from your subgraph, see the [Querying a Subgraph](/querying/querying-the-graph) page.
diff --git a/website/pages/it/cookbook/grafting-hotfix.mdx b/website/pages/it/cookbook/grafting-hotfix.mdx
new file mode 100644
index 000000000000..4be0a0b07790
--- /dev/null
+++ b/website/pages/it/cookbook/grafting-hotfix.mdx
@@ -0,0 +1,186 @@
+---
+title: Subgraph Best Practice 6 - Use Grafting for Quick Hotfix Deployment
+---
+
+## TLDR
+
+Grafting is a powerful feature in subgraph development that allows you to build and deploy new subgraphs while reusing the indexed data from existing ones.
+
+### Overview
+
+This feature enables quick deployment of hotfixes for critical issues, eliminating the need to re-index the entire subgraph from scratch. By preserving historical data, grafting minimizes downtime and ensures continuity in data services.
+ +## Benefits of Grafting for Hotfixes + +1. **Rapid Deployment** + + - **Minimize Downtime**: When a subgraph encounters a critical error and stops indexing, grafting enables you to deploy a fix immediately without waiting for re-indexing. + - **Immediate Recovery**: The new subgraph continues from the last indexed block, ensuring that data services remain uninterrupted. + +2. **Data Preservation** + + - **Reuse Historical Data**: Grafting copies the existing data from the base subgraph, so you don’t lose valuable historical records. + - **Consistency**: Maintains data continuity, which is crucial for applications relying on consistent historical data. + +3. **Efficiency** + - **Save Time and Resources**: Avoids the computational overhead of re-indexing large datasets. + - **Focus on Fixes**: Allows developers to concentrate on resolving issues rather than managing data recovery. + +## Best Practices When Using Grafting for Hotfixes + +1. **Initial Deployment Without Grafting** + + - **Start Clean**: Always deploy your initial subgraph without grafting to ensure that it’s stable and functions as expected. + - **Test Thoroughly**: Validate the subgraph’s performance to minimize the need for future hotfixes. + +2. **Implementing the Hotfix with Grafting** + + - **Identify the Issue**: When a critical error occurs, determine the block number of the last successfully indexed event. + - **Create a New Subgraph**: Develop a new subgraph that includes the hotfix. + - **Configure Grafting**: Use grafting to copy data up to the identified block number from the failed subgraph. + - **Deploy Quickly**: Publish the grafted subgraph to restore service as soon as possible. + +3. **Post-Hotfix Actions** + + - **Monitor Performance**: Ensure the grafted subgraph is indexing correctly and the hotfix resolves the issue. + - **Republish Without Grafting**: Once stable, deploy a new version of the subgraph without grafting for long-term maintenance. 
+ > Note: Relying on grafting indefinitely is not recommended as it can complicate future updates and maintenance. + - **Update References**: Redirect any services or applications to use the new, non-grafted subgraph. + +4. **Important Considerations** + - **Careful Block Selection**: Choose the graft block number carefully to prevent data loss. + - **Tip**: Use the block number of the last correctly processed event. + - **Use Deployment ID**: Ensure you reference the Deployment ID of the base subgraph, not the Subgraph ID. + - **Note**: The Deployment ID is the unique identifier for a specific subgraph deployment. + - **Feature Declaration**: Remember to declare grafting in the subgraph manifest under features. + +## Example: Deploying a Hotfix with Grafting + +Suppose you have a subgraph tracking a smart contract that has stopped indexing due to a critical error. Here’s how you can use grafting to deploy a hotfix. + +1. **Failed Subgraph Manifest (subgraph.yaml)** + + ```yaml + specVersion: 1.0.0 + schema: + file: ./schema.graphql + dataSources: + - kind: ethereum/contract + name: OldSmartContract + network: sepolia + source: + address: '0xOldContractAddress' + abi: Lock + startBlock: 5000000 + mapping: + kind: ethereum/events + apiVersion: 0.0.7 + language: wasm/assemblyscript + entities: + - Withdrawal + abis: + - name: Lock + file: ./abis/OldLock.json + eventHandlers: + - event: Withdrawal(uint256,uint256) + handler: handleOldWithdrawal + file: ./src/old-lock.ts + ``` + +2. 
**New Grafted Subgraph Manifest (subgraph.yaml)** + ```yaml + specVersion: 1.0.0 + schema: + file: ./schema.graphql + dataSources: + - kind: ethereum/contract + name: NewSmartContract + network: sepolia + source: + address: '0xNewContractAddress' + abi: Lock + startBlock: 6000001 # Block after the last indexed block + mapping: + kind: ethereum/events + apiVersion: 0.0.7 + language: wasm/assemblyscript + entities: + - Withdrawal + abis: + - name: Lock + file: ./abis/Lock.json + eventHandlers: + - event: Withdrawal(uint256,uint256) + handler: handleWithdrawal + file: ./src/lock.ts + features: + - grafting + graft: + base: QmBaseDeploymentID # Deployment ID of the failed subgraph + block: 6000000 # Last successfully indexed block + ``` + +**Explanation:** + +- **Data Source Update**: The new subgraph points to 0xNewContractAddress, which may be a fixed version of the smart contract. +- **Start Block**: Set to one block after the last successfully indexed block to avoid reprocessing the error. +- **Grafting Configuration**: + - **base**: Deployment ID of the failed subgraph. + - **block**: Block number where grafting should begin. + +3. **Deployment Steps** + + - **Update the Code**: Implement the hotfix in your mapping scripts (e.g., handleWithdrawal). + - **Adjust the Manifest**: As shown above, update the `subgraph.yaml` with grafting configurations. + - **Deploy the Subgraph**: + - Authenticate with the Graph CLI. + - Deploy the new subgraph using `graph deploy`. + +4. **Post-Deployment** + - **Verify Indexing**: Check that the subgraph is indexing correctly from the graft point. + - **Monitor Data**: Ensure that new data is being captured and the hotfix is effective. + - **Plan for Republish**: Schedule the deployment of a non-grafted version for long-term stability. 
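
A subtle detail in the example manifest is the off-by-one relationship: `startBlock` is set to exactly one block after `graft.block`, so no block is skipped or reprocessed. A minimal sketch of that invariant (hypothetical helper types for a pre-deploy sanity check, not a Graph CLI API):

```typescript
// Hypothetical representation of the graft section of subgraph.yaml.
interface GraftConfig {
  base: string; // Deployment ID of the failed subgraph
  block: number; // last successfully indexed block
}

// The new data source should start exactly one block after the graft point.
function expectedStartBlock(graft: GraftConfig): number {
  return graft.block + 1;
}

// Matches the example manifest: graft at block 6000000, startBlock 6000001.
const graft: GraftConfig = { base: "QmBaseDeploymentID", block: 6000000 };
const startBlock = expectedStartBlock(graft); // 6000001
```

A check like this, run before `graph deploy`, can catch a mis-typed `startBlock` before indexing begins.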
+ +## Warnings and Cautions + +While grafting is a powerful tool for deploying hotfixes quickly, there are specific scenarios where it should be avoided to maintain data integrity and ensure optimal performance. + +- **Incompatible Schema Changes**: If your hotfix requires altering the type of existing fields or removing fields from your schema, grafting is not suitable. Grafting expects the new subgraph’s schema to be compatible with the base subgraph’s schema. Incompatible changes can lead to data inconsistencies and errors because the existing data won’t align with the new schema. +- **Significant Mapping Logic Overhauls**: When the hotfix involves substantial modifications to your mapping logic—such as changing how events are processed or altering handler functions—grafting may not function correctly. The new logic might not be compatible with the data processed under the old logic, leading to incorrect data or failed indexing. +- **Deployments to The Graph Network**: Grafting is not recommended for subgraphs intended for The Graph’s decentralized network (mainnet). It can complicate indexing and may not be fully supported by all Indexers, potentially causing unexpected behavior or increased costs. For mainnet deployments, it’s safer to re-index the subgraph from scratch to ensure full compatibility and reliability. + +### Risk Management + +- **Data Integrity**: Incorrect block numbers can lead to data loss or duplication. +- **Testing**: Always test grafting in a development environment before deploying to production. + +## Conclusion + +Grafting is an effective strategy for deploying hotfixes in subgraph development, enabling you to: + +- **Quickly Recover** from critical errors without re-indexing. +- **Preserve Historical Data**, maintaining continuity for applications and users. +- **Ensure Service Availability** by minimizing downtime during critical fixes. 
+ +However, it’s important to use grafting judiciously and follow best practices to mitigate risks. After stabilizing your subgraph with the hotfix, plan to deploy a non-grafted version to ensure long-term maintainability. + +## Additional Resources + +- **[Grafting Documentation](/cookbook/grafting/)**: Replace a Contract and Keep its History With Grafting +- **[Understanding Deployment IDs](/querying/querying-by-subgraph-id-vs-deployment-id/)**: Learn the difference between Deployment ID and Subgraph ID. + +By incorporating grafting into your subgraph development workflow, you can enhance your ability to respond to issues swiftly, ensuring that your data services remain robust and reliable. + +## Subgraph Best Practices 1-6 + +1. [Improve Query Speed with Subgraph Pruning](/cookbook/pruning/) + +2. [Improve Indexing and Query Responsiveness by Using @derivedFrom](/cookbook/derivedfrom/) + +3. [Improve Indexing and Query Performance by Using Immutable Entities and Bytes as IDs](/cookbook/immutable-entities-bytes-as-ids/) + +4. [Improve Indexing Speed by Avoiding `eth_calls`](/cookbook/avoid-eth-calls/) + +5. [Simplify and Optimize with Timeseries and Aggregations](/cookbook/timeseries/) + +6. [Use Grafting for Quick Hotfix Deployment](/cookbook/grafting-hotfix/) diff --git a/website/pages/it/cookbook/timeseries.mdx b/website/pages/it/cookbook/timeseries.mdx new file mode 100644 index 000000000000..88ee70005a6e --- /dev/null +++ b/website/pages/it/cookbook/timeseries.mdx @@ -0,0 +1,194 @@ +--- +title: Subgraph Best Practice 5 - Simplify and Optimize with Timeseries and Aggregations +--- + +## TLDR + +Leveraging the new time-series and aggregations feature in subgraphs can significantly enhance both indexing speed and query performance. + +## Overview + +Timeseries and aggregations reduce data processing overhead and accelerate queries by offloading aggregation computations to the database and simplifying mapping code. 
This approach is particularly effective when handling large volumes of time-based data. + +## Benefits of Timeseries and Aggregations + +1. Improved Indexing Time + +- Less Data to Load: Mappings handle less data since raw data points are stored as immutable timeseries entities. +- Database-Managed Aggregations: Aggregations are automatically computed by the database, reducing the workload on the mappings. + +2. Simplified Mapping Code + +- No Manual Calculations: Developers no longer need to write complex aggregation logic in mappings. +- Reduced Complexity: Simplifies code maintenance and minimizes the potential for errors. + +3. Dramatically Faster Queries + +- Immutable Data: All timeseries data is immutable, enabling efficient storage and retrieval. +- Efficient Data Separation: Aggregates are stored separately from raw timeseries data, allowing queries to process significantly less data—often several orders of magnitude less. + +### Important Considerations + +- Immutable Data: Timeseries data cannot be altered once written, ensuring data integrity and simplifying indexing. +- Automatic ID and Timestamp Management: id and timestamp fields are automatically managed by graph-node, reducing potential errors. +- Efficient Data Storage: By separating raw data from aggregates, storage is optimized, and queries run faster. + +## How to Implement Timeseries and Aggregations + +### Defining Timeseries Entities + +A timeseries entity represents raw data points collected over time. It is defined with the `@entity(timeseries: true)` annotation. Key requirements: + +- Immutable: Timeseries entities are always immutable. +- Mandatory Fields: + - `id`: Must be of type `Int8!` and is auto-incremented. + - `timestamp`: Must be of type `Timestamp!` and is automatically set to the block timestamp. + +Example: + +```graphql +type Data @entity(timeseries: true) { + id: Int8! + timestamp: Timestamp! + price: BigDecimal! 
+} +``` + +### Defining Aggregation Entities + +An aggregation entity computes aggregated values from a timeseries source. It is defined with the `@aggregation` annotation. Key components: + +- Annotation Arguments: + - `intervals`: Specifies time intervals (e.g., `["hour", "day"]`). + +Example: + +```graphql +type Stats @aggregation(intervals: ["hour", "day"], source: "Data") { + id: Int8! + timestamp: Timestamp! + sum: BigDecimal! @aggregate(fn: "sum", arg: "price") +} +``` + +In this example, Stats aggregates the price field from Data over hourly and daily intervals, computing the sum. + +### Querying Aggregated Data + +Aggregations are exposed via query fields that allow filtering and retrieval based on dimensions and time intervals. + +Example: + +```graphql +{ + tokenStats( + interval: "hour" + where: { token: "0x1234567890abcdef", timestamp_gte: "1704164640000000", timestamp_lt: "1704251040000000" } + ) { + id + timestamp + token { + id + } + totalVolume + priceUSD + count + } +} +``` + +### Using Dimensions in Aggregations + +Dimensions are non-aggregated fields used to group data points. They enable aggregations based on specific criteria, such as a token in a financial application. + +Example: + +### Timeseries Entity + +```graphql +type TokenData @entity(timeseries: true) { + id: Int8! + timestamp: Timestamp! + token: Token! + amount: BigDecimal! + priceUSD: BigDecimal! +} +``` + +### Aggregation Entity with Dimension + +```graphql +type TokenStats @aggregation(intervals: ["hour", "day"], source: "TokenData") { + id: Int8! + timestamp: Timestamp! + token: Token! + totalVolume: BigDecimal! @aggregate(fn: "sum", arg: "amount") + priceUSD: BigDecimal! @aggregate(fn: "last", arg: "priceUSD") + count: Int8! @aggregate(fn: "count", cumulative: true) +} +``` + +- Dimension Field: token groups the data, so aggregates are computed per token. +- Aggregates: + - totalVolume: Sum of amount. + - priceUSD: Last recorded priceUSD. 
+ - count: Cumulative count of records.
+
+### Aggregation Functions and Expressions
+
+Supported aggregation functions:
+
+- sum
+- count
+- min
+- max
+- first
+- last
+
+### The `arg` in `@aggregate` can be
+
+- A field name from the timeseries entity.
+- An expression using fields and constants.
+
+### Examples of Aggregation Expressions
+
+- Sum Token Value: `@aggregate(fn: "sum", arg: "priceUSD * amount")`
+- Maximum Positive Amount: `@aggregate(fn: "max", arg: "greatest(amount0, amount1, 0)")`
+- Conditional Sum: `@aggregate(fn: "sum", arg: "case when amount0 > amount1 then amount0 else 0 end")`
+
+Supported operators and functions include basic arithmetic (`+`, `-`, `*`, `/`), comparison operators, logical operators (`and`, `or`, `not`), and SQL functions such as `greatest`, `least`, and `coalesce`.
+
+### Query Parameters
+
+- interval: Specifies the time interval (e.g., "hour").
+- where: Filters based on dimensions and timestamp ranges.
+- timestamp_gte / timestamp_lt: Filters for start and end times (microseconds since epoch).
+
+### Notes
+
+- Sorting: Results are automatically sorted by timestamp and id in descending order.
+- Current Data: An optional current argument can include the current, partially filled interval.
+
+### Conclusion
+
+Implementing timeseries and aggregations in subgraphs is a best practice for projects dealing with time-based data. This approach:
+
+- Enhances Performance: Speeds up indexing and querying by reducing data processing overhead.
+- Simplifies Development: Eliminates the need for manual aggregation logic in mappings.
+- Scales Efficiently: Handles large volumes of data without compromising on speed or responsiveness.
+
+By adopting this pattern, developers can build more efficient and scalable subgraphs, providing faster and more reliable data access to end-users.
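
The `timestamp_gte` and `timestamp_lt` filters take microseconds since the Unix epoch, which are awkward to produce by hand. As a rough sketch (a hypothetical client-side helper, not part of the subgraph or of any Graph library), the conversion might look like:

```typescript
// Hypothetical helper: convert an ISO-8601 date string into the
// microseconds-since-epoch string used by timestamp_gte / timestamp_lt.
function toMicros(iso: string): string {
  // Date.parse returns milliseconds; the aggregation filters take microseconds.
  return (Date.parse(iso) * 1000).toString();
}

// For example, the bounds of 2 January 2024 (UTC):
const gte = toMicros("2024-01-02T00:00:00Z"); // "1704153600000000"
const lt = toMicros("2024-01-03T00:00:00Z"); // "1704240000000000"
```

These strings can then be dropped into the `where` clause shown in the aggregation query example above.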
To learn more about implementing timeseries and aggregations, refer to the [Timeseries and Aggregations Readme](https://github.com/graphprotocol/graph-node/blob/master/docs/aggregations.md) and consider experimenting with this feature in your subgraphs. + +## Subgraph Best Practices 1-6 + +1. [Improve Query Speed with Subgraph Pruning](/cookbook/pruning/) + +2. [Improve Indexing and Query Responsiveness by Using @derivedFrom](/cookbook/derivedfrom/) + +3. [Improve Indexing and Query Performance by Using Immutable Entities and Bytes as IDs](/cookbook/immutable-entities-bytes-as-ids/) + +4. [Improve Indexing Speed by Avoiding `eth_calls`](/cookbook/avoid-eth-calls/) + +5. [Simplify and Optimize with Timeseries and Aggregations](/cookbook/timeseries/) + +6. [Use Grafting for Quick Hotfix Deployment](/cookbook/grafting-hotfix/) diff --git a/website/pages/it/cookbook/upgrading-a-subgraph.mdx b/website/pages/it/cookbook/upgrading-a-subgraph.mdx deleted file mode 100644 index c7ff2b1213f0..000000000000 --- a/website/pages/it/cookbook/upgrading-a-subgraph.mdx +++ /dev/null @@ -1,156 +0,0 @@ ---- -title: Upgrading an Existing Subgraph to The Graph Network ---- - -## Introduzione - -This is a guide on how to upgrade your subgraph from the hosted service to The Graph's decentralized network. Over 1,000 subgraphs have successfully upgraded to The Graph Network including projects like Snapshot, Loopring, Audius, Premia, Livepeer, Uma, Curve, Lido, and many more! - -The process of upgrading is quick and your subgraphs will forever benefit from the reliability and performance that you can only get on The Graph Network. - -### Prerequisites - -- You have a subgraph deployed on the hosted service. - -## Upgrading an Existing Subgraph to The Graph Network - - - -If you are logged in to the hosted service, you can access a simple flow to upgrade your subgraphs from [your dashboard](https://thegraph.com/hosted-service/dashboard), or from an individual subgraph page. 
- -> This process typically takes less than five minutes. - -1. Select the subgraph(s) you want to upgrade. -2. Connect or enter the receiving wallet (the wallet that will become the owner of the subgraph). -3. Click the "Upgrade" button. - -That's it! Your subgraphs will be deployed to Subgraph Studio, and published on The Graph Network. You can access the [Subgraph Studio](https://thegraph.com/studio/) to manage your subgraphs, logging in with the wallet specified during the upgrade process. - -You'll be able to view your subgraphs live on the decentralized network via [Graph Explorer](https://thegraph.com/explorer). - -### What next? - -When your subgraph is upgraded, it will automatically be indexed by the upgrade indexer. If the indexed chain is [fully supported by The Graph Network](/developing/supported-networks), you can add some GRT as "signal", to attract more indexers. It is recommended to curate your subgraph with at least 3,000 GRT to attract 2-3 Indexers for higher quality of service. - -You can start to query your subgraph right away on The Graph Network, once you have generated an API key. - -### Create an API key - -You can generate an API key in Subgraph Studio [here](https://thegraph.com/studio/apikeys/). - -![API key creation page](/img/api-image.png) - -You can use this API key to query subgraphs on The Graph Network. All users start on the Free Plan, which includes 100,000 free queries per month. Developers can sign up for the Growth Plan by connecting a credit or debit card, or by depositing GRT to Subgraph Studio billing system. - -> Note: see the [billing documentation](../billing) for more information on plans, and on managing your billing on Subgraph Studio. - -### Securing your API key - -It is recommended that you secure the API by limiting its usage in two ways: - -1. Authorized Subgraphs -2. Authorized Domain - -You can secure your API key [here](https://thegraph.com/studio/apikeys/). 
- -![Subgraph lockdown page](/img/subgraph-lockdown.png) - -### Querying your subgraph on the decentralized network - -Now you can check the indexing status of the Indexers on the network in Graph Explorer (example [here](https://thegraph.com/explorer/subgraphs/Dtj2HicXKpoUjNB7ffdBkMwt3L9Sz3cbENd67AdHu6Vb?view=Indexers&chain=arbitrum-one)). The green line at the top indicates that at the time of posting 7 Indexers successfully indexed that subgraph. Also in the Indexer tab you can see which Indexers picked up your subgraph. - -![Rocket Pool subgraph](/img/rocket-pool-subgraph.png) - -As soon as the first Indexer has fully indexed your subgraph you can start to query the subgraph on the decentralized network. In order to retrieve the query URL for your subgraph, you can copy/paste it by clicking on the symbol next to the query URL. You will see something like this: - -`https://gateway.thegraph.com/api/[api-key]/subgraphs/id/S9ihna8D733WTEShJ1KctSTCvY1VJ7gdVwhUujq4Ejo` - -Important: Make sure to replace `[api-key]` with an actual API key generated in the section above. - -You can now use that Query URL in your dapp to send your GraphQL requests to. - -Congratulations! You are now a pioneer of decentralization! - -> Note: Due to the distributed nature of the network it might be the case that different Indexers have indexed up to different blocks. In order to only receive fresh data you can specify the minimum block an Indexer has to have indexed in order to serve your query with the block: `{ number_gte: $minBlock }` field argument as shown in the example below: - -```graphql -{ - stakers(block: { number_gte: 14486109 }) { - id - } -} -``` - -More information about the nature of the network and how to handle re-orgs are described in the documentation article [Distributed Systems](/querying/distributed-systems/). 
- -## Updating a Subgraph on the Network - -If you would like to update an existing subgraph on the network, you can do this by deploying a new version of your subgraph to Subgraph Studio using the Graph CLI. - -1. Make changes to your current subgraph. -2. Deploy the following and specify the new version in the command (eg. v0.0.1, v0.0.2, etc): - -```sh -graph deploy --studio --version -``` - -3. Test the new version in Subgraph Studio by querying in the playground -4. Publish the new version on The Graph Network. Remember that this requires gas (as described in the section above). - -### Owner Update Fee: Deep Dive - -> Note: Curation on Arbitrum has a flat bonding curve. Learn more about Arbitrum [here](/arbitrum/arbitrum-faq/). - -An update requires GRT to be migrated from the old version of the subgraph to the new version. This means that for every update, a new bonding curve will be created (more on bonding curves [here](/network/curating#bonding-curve-101)). - -The new bonding curve charges the 1% curation tax on all GRT being migrated to the new version. The owner must pay 50% of this or 1.25%. The other 1.25% is absorbed by all the curators as a fee. This incentive design is in place to prevent an owner of a subgraph from being able to drain all their curator's funds with recursive update calls. If there is no curation activity, you will have to pay a minimum of 100 GRT in order to signal your own subgraph. - -Let's make an example, this is only the case if your subgraph is being actively curated on: - -- 100,000 GRT is signaled using auto-migrate on v1 of a subgraph -- Owner updates to v2. 100,000 GRT is migrated to a new bonding curve, where 97,500 GRT get put into the new curve and 2,500 GRT is burned -- The owner then has 1250 GRT burned to pay for half the fee. The owner must have this in their wallet before the update, otherwise, the update will not succeed. This happens in the same transaction as the update. 
- -_While this mechanism is currently live on the network, the community is currently discussing ways to reduce the cost of updates for subgraph developers._ - -### Maintaining a Stable Version of a Subgraph - -If you're making a lot of changes to your subgraph, it is not a good idea to continually update it and front the update costs. Maintaining a stable and consistent version of your subgraph is critical, not only from the cost perspective but also so that Indexers can feel confident in their syncing times. Indexers should be flagged when you plan for an update so that Indexer syncing times do not get impacted. Feel free to leverage the [#Indexers channel](https://discord.gg/JexvtHa7dq) on Discord to let Indexers know when you're versioning your subgraphs. - -Subgraphs are open APIs that external developers are leveraging. Open APIs need to follow strict standards so that they do not break external developers' applications. In The Graph Network, a subgraph developer must consider Indexers and how long it takes them to sync a new subgraph **as well as** other developers who are using their subgraphs. - -### Updating the Metadata of a Subgraph - -You can update the metadata of your subgraphs without having to publish a new version. The metadata includes the subgraph name, image, description, website URL, source code URL, and categories. Developers can do this by updating their subgraph details in Subgraph Studio where you can edit all applicable fields. - -Make sure **Update Subgraph Details in Explorer** is checked and click on **Save**. If this is checked, an on-chain transaction will be generated that updates subgraph details in the Explorer without having to publish a new version with a new deployment. - -## Best Practices for Deploying a Subgraph to The Graph Network - -1. 
Leveraging an ENS name for Subgraph Development: - -- Set up your ENS [here](https://app.ens.domains/) -- Add your ENS name to your settings [here](https://thegraph.com/explorer/settings?view=display-name). - -2. The more filled out your profiles are, the better the chances for your subgraphs to be indexed and curated. - -## Deprecating a Subgraph on The Graph Network - -Follow the steps [here](/managing/transfer-and-deprecate-a-subgraph) to deprecate your subgraph and remove it from The Graph Network. - -## Querying a Subgraph + Billing on The Graph Network - -The hosted service was set up to allow developers to deploy their subgraphs without any restrictions. - -On The Graph Network, query fees have to be paid as a core part of the protocol's incentives. For more information on subscribing to APIs and paying the query fees, check out billing documentation [here](/billing/). - -## Additional Resources - -If you're still confused, fear not! Check out the following resources or watch our video guide on upgrading subgraphs to the decentralized network below: - - - -- [The Graph Network Contracts](https://github.com/graphprotocol/contracts) -- [Curation Contract](https://github.com/graphprotocol/contracts/blob/dev/contracts/curation/Curation.sol) - the underlying contract that the GNS wraps around - - Address - `0x8fe00a685bcb3b2cc296ff6ffeab10aca4ce1538` -- [Subgraph Studio documentation](/deploying/subgraph-studio) diff --git a/website/pages/ja/cookbook/base-testnet.mdx b/website/pages/ja/cookbook/base-testnet.mdx deleted file mode 100644 index be8f071f5c51..000000000000 --- a/website/pages/ja/cookbook/base-testnet.mdx +++ /dev/null @@ -1,111 +0,0 @@ ---- -title: Baseでのサブグラフ構築 ---- - -このガイドでは、Baseテストネットでのサブグラフの初期化、作成、デプロイの方法を素早く説明します. - -必要なもの: - -- A Base Sepolia testnet contract address -- 暗号ウォレット(例:MetaMaskまたはCoinbase Wallet) - -## サブグラフスタジオ - -### 1. 
Graph CLI のインストール - -The Graph CLI (>=v0.41.0) はJavaScriptで書かれており、使用するには `npm` または `yarn` のいずれかをインストールする必要があります。 - -```sh -# NPM -npm install -g @graphprotocol/graph-cli - -# Yarn -yarn global add @graphprotocol/graph-cli -``` - -### 2. Create your subgraph in Subgraph Studio - -Go to [Subgraph Studio](https://thegraph.com/studio/) and connect your crypto wallet. - -Once connected, click "Create a Subgraph", enter a name for your subgraph and click Create a Subgraph. - -### 3. サブグラフの初期化 - -> You can find specific commands for your subgraph in Subgraph Studio. - -Graph-cliが最新版(0.41.0以上)に更新されていることを確認します。 - -```sh -グラフ --バージョン -``` - -既存のコントラクトからサブグラフを初期化します。 - -```sh -graph init --studio -``` - -サブグラフのスラッグは、サブグラフの識別子となるものです。CLIツールは、サブグラフを作成するためのステップを説明します。 - -- Protocol: ethereum -- Subgraph slug: `` -- Directory to create the subgraph in: `` -- Ethereum network: base-sepolia -- Contract address: `` -- Start block (optional) -- Contract name: `` -- Yes/no to indexing events (yes means your subgraph will be bootstrapped with entities in the schema and simple mappings for emitted events) - -### 3. Write your Subgraph - -> If emitted events are the only thing you want to index, then no additional work is required, and you can skip to the next step. - -The previous command creates a scaffold subgraph that you can use as a starting point for building your subgraph. When making changes to the subgraph, you will mainly work with three files: - -- Manifest (subgraph.yaml) - The manifest defines what datasources your subgraphs will index. Make sure to add `base-sepolia` as the network name in manifest file to deploy your subgraph on Base Sepolia. -- Schema (schema.graphql) - The GraphQL schema defines what data you wish to retreive from the subgraph. -- AssemblyScript Mappings (mapping.ts) - データソースからのデータを、スキーマで定義されたエンティティに変換するコードです。 - -If you want to index additional data, you will need extend the manifest, schema and mappings. 
- -For more information on how to write your subgraph, see [Creating a Subgraph](/developing/creating-a-subgraph). - -### 4. Deploy to Subgraph Studio - -Before you can deploy your subgraph, you will need to authenticate with Subgraph Studio. You can do this by running the following command: - -Authenticate the subgraph on studio - -``` -graph auth --studio -``` - -Next, enter your subgraph's directory. - -``` - cd -``` - -Build your subgraph with the following command: - -```` -``` -graph codegen && graph build -``` -```` - -Finally, you can deploy your subgraph using this command: - -```` -``` -graph deploy --studio -``` -```` - -### 5. Query your subgraph - -Once your subgraph is deployed, you can query it from your dapp using the `Development Query URL` in Subgraph Studio. - -Note - Studio API is rate-limited. Hence should preferably be used for development and testing. - -To learn more about querying data from your subgraph, see the [Querying a Subgraph](/querying/querying-the-graph) page. diff --git a/website/pages/ja/cookbook/grafting-hotfix.mdx b/website/pages/ja/cookbook/grafting-hotfix.mdx new file mode 100644 index 000000000000..4be0a0b07790 --- /dev/null +++ b/website/pages/ja/cookbook/grafting-hotfix.mdx @@ -0,0 +1,186 @@ +--- +title: Subgraph Best Practice 6 - Use Grafting for Quick Hotfix Deployment +--- + +## TLDR + +Grafting is a powerful feature in subgraph development that allows you to build and deploy new subgraphs while reusing the indexed data from existing ones. + +### Overview + +This feature enables quick deployment of hotfixes for critical issues, eliminating the need to re-index the entire subgraph from scratch. By preserving historical data, grafting minimizes downtime and ensures continuity in data services. + +## Benefits of Grafting for Hotfixes + +1. 
**Rapid Deployment** + + - **Minimize Downtime**: When a subgraph encounters a critical error and stops indexing, grafting enables you to deploy a fix immediately without waiting for re-indexing. + - **Immediate Recovery**: The new subgraph continues from the last indexed block, ensuring that data services remain uninterrupted. + +2. **Data Preservation** + + - **Reuse Historical Data**: Grafting copies the existing data from the base subgraph, so you don’t lose valuable historical records. + - **Consistency**: Maintains data continuity, which is crucial for applications relying on consistent historical data. + +3. **Efficiency** + - **Save Time and Resources**: Avoids the computational overhead of re-indexing large datasets. + - **Focus on Fixes**: Allows developers to concentrate on resolving issues rather than managing data recovery. + +## Best Practices When Using Grafting for Hotfixes + +1. **Initial Deployment Without Grafting** + + - **Start Clean**: Always deploy your initial subgraph without grafting to ensure that it’s stable and functions as expected. + - **Test Thoroughly**: Validate the subgraph’s performance to minimize the need for future hotfixes. + +2. **Implementing the Hotfix with Grafting** + + - **Identify the Issue**: When a critical error occurs, determine the block number of the last successfully indexed event. + - **Create a New Subgraph**: Develop a new subgraph that includes the hotfix. + - **Configure Grafting**: Use grafting to copy data up to the identified block number from the failed subgraph. + - **Deploy Quickly**: Publish the grafted subgraph to restore service as soon as possible. + +3. **Post-Hotfix Actions** + + - **Monitor Performance**: Ensure the grafted subgraph is indexing correctly and the hotfix resolves the issue. + - **Republish Without Grafting**: Once stable, deploy a new version of the subgraph without grafting for long-term maintenance. 
+ > Note: Relying on grafting indefinitely is not recommended as it can complicate future updates and maintenance. + - **Update References**: Redirect any services or applications to use the new, non-grafted subgraph. + +4. **Important Considerations** + - **Careful Block Selection**: Choose the graft block number carefully to prevent data loss. + - **Tip**: Use the block number of the last correctly processed event. + - **Use Deployment ID**: Ensure you reference the Deployment ID of the base subgraph, not the Subgraph ID. + - **Note**: The Deployment ID is the unique identifier for a specific subgraph deployment. + - **Feature Declaration**: Remember to declare grafting in the subgraph manifest under features. + +## Example: Deploying a Hotfix with Grafting + +Suppose you have a subgraph tracking a smart contract that has stopped indexing due to a critical error. Here’s how you can use grafting to deploy a hotfix. + +1. **Failed Subgraph Manifest (subgraph.yaml)** + + ```yaml + specVersion: 1.0.0 + schema: + file: ./schema.graphql + dataSources: + - kind: ethereum/contract + name: OldSmartContract + network: sepolia + source: + address: '0xOldContractAddress' + abi: Lock + startBlock: 5000000 + mapping: + kind: ethereum/events + apiVersion: 0.0.7 + language: wasm/assemblyscript + entities: + - Withdrawal + abis: + - name: Lock + file: ./abis/OldLock.json + eventHandlers: + - event: Withdrawal(uint256,uint256) + handler: handleOldWithdrawal + file: ./src/old-lock.ts + ``` + +2. 
**New Grafted Subgraph Manifest (subgraph.yaml)** + ```yaml + specVersion: 1.0.0 + schema: + file: ./schema.graphql + dataSources: + - kind: ethereum/contract + name: NewSmartContract + network: sepolia + source: + address: '0xNewContractAddress' + abi: Lock + startBlock: 6000001 # Block after the last indexed block + mapping: + kind: ethereum/events + apiVersion: 0.0.7 + language: wasm/assemblyscript + entities: + - Withdrawal + abis: + - name: Lock + file: ./abis/Lock.json + eventHandlers: + - event: Withdrawal(uint256,uint256) + handler: handleWithdrawal + file: ./src/lock.ts + features: + - grafting + graft: + base: QmBaseDeploymentID # Deployment ID of the failed subgraph + block: 6000000 # Last successfully indexed block + ``` + +**Explanation:** + +- **Data Source Update**: The new subgraph points to 0xNewContractAddress, which may be a fixed version of the smart contract. +- **Start Block**: Set to one block after the last successfully indexed block to avoid reprocessing the error. +- **Grafting Configuration**: + - **base**: Deployment ID of the failed subgraph. + - **block**: Block number where grafting should begin. + +3. **Deployment Steps** + + - **Update the Code**: Implement the hotfix in your mapping scripts (e.g., handleWithdrawal). + - **Adjust the Manifest**: As shown above, update the `subgraph.yaml` with grafting configurations. + - **Deploy the Subgraph**: + - Authenticate with the Graph CLI. + - Deploy the new subgraph using `graph deploy`. + +4. **Post-Deployment** + - **Verify Indexing**: Check that the subgraph is indexing correctly from the graft point. + - **Monitor Data**: Ensure that new data is being captured and the hotfix is effective. + - **Plan for Republish**: Schedule the deployment of a non-grafted version for long-term stability. 
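+
+To confirm that the grafted deployment has picked up at the intended point, one quick sanity check is to query the subgraph's built-in `_meta` field (exposed by graph-node) on the new deployment's query endpoint:
+
+```graphql
+{
+  _meta {
+    block {
+      number
+    }
+    hasIndexingErrors
+  }
+}
+```
+
+If `block.number` advances past the graft block (6000000 in the example above) and `hasIndexingErrors` stays false, the hotfix is indexing as intended.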
+ +## Warnings and Cautions + +While grafting is a powerful tool for deploying hotfixes quickly, there are specific scenarios where it should be avoided to maintain data integrity and ensure optimal performance. + +- **Incompatible Schema Changes**: If your hotfix requires altering the type of existing fields or removing fields from your schema, grafting is not suitable. Grafting expects the new subgraph’s schema to be compatible with the base subgraph’s schema. Incompatible changes can lead to data inconsistencies and errors because the existing data won’t align with the new schema. +- **Significant Mapping Logic Overhauls**: When the hotfix involves substantial modifications to your mapping logic—such as changing how events are processed or altering handler functions—grafting may not function correctly. The new logic might not be compatible with the data processed under the old logic, leading to incorrect data or failed indexing. +- **Deployments to The Graph Network**: Grafting is not recommended for subgraphs intended for The Graph’s decentralized network (mainnet). It can complicate indexing and may not be fully supported by all Indexers, potentially causing unexpected behavior or increased costs. For mainnet deployments, it’s safer to re-index the subgraph from scratch to ensure full compatibility and reliability. + +### Risk Management + +- **Data Integrity**: Incorrect block numbers can lead to data loss or duplication. +- **Testing**: Always test grafting in a development environment before deploying to production. + +## Conclusion + +Grafting is an effective strategy for deploying hotfixes in subgraph development, enabling you to: + +- **Quickly Recover** from critical errors without re-indexing. +- **Preserve Historical Data**, maintaining continuity for applications and users. +- **Ensure Service Availability** by minimizing downtime during critical fixes. 
+ +However, it’s important to use grafting judiciously and follow best practices to mitigate risks. After stabilizing your subgraph with the hotfix, plan to deploy a non-grafted version to ensure long-term maintainability. + +## Additional Resources + +- **[Grafting Documentation](/cookbook/grafting/)**: Replace a Contract and Keep its History With Grafting +- **[Understanding Deployment IDs](/querying/querying-by-subgraph-id-vs-deployment-id/)**: Learn the difference between Deployment ID and Subgraph ID. + +By incorporating grafting into your subgraph development workflow, you can enhance your ability to respond to issues swiftly, ensuring that your data services remain robust and reliable. + +## Subgraph Best Practices 1-6 + +1. [Improve Query Speed with Subgraph Pruning](/cookbook/pruning/) + +2. [Improve Indexing and Query Responsiveness by Using @derivedFrom](/cookbook/derivedfrom/) + +3. [Improve Indexing and Query Performance by Using Immutable Entities and Bytes as IDs](/cookbook/immutable-entities-bytes-as-ids/) + +4. [Improve Indexing Speed by Avoiding `eth_calls`](/cookbook/avoid-eth-calls/) + +5. [Simplify and Optimize with Timeseries and Aggregations](/cookbook/timeseries/) + +6. [Use Grafting for Quick Hotfix Deployment](/cookbook/grafting-hotfix/) diff --git a/website/pages/ja/cookbook/timeseries.mdx b/website/pages/ja/cookbook/timeseries.mdx new file mode 100644 index 000000000000..88ee70005a6e --- /dev/null +++ b/website/pages/ja/cookbook/timeseries.mdx @@ -0,0 +1,194 @@ +--- +title: Subgraph Best Practice 5 - Simplify and Optimize with Timeseries and Aggregations +--- + +## TLDR + +Leveraging the new time-series and aggregations feature in subgraphs can significantly enhance both indexing speed and query performance. + +## Overview + +Timeseries and aggregations reduce data processing overhead and accelerate queries by offloading aggregation computations to the database and simplifying mapping code. 
This approach is particularly effective when handling large volumes of time-based data. + +## Benefits of Timeseries and Aggregations + +1. Improved Indexing Time + +- Less Data to Load: Mappings handle less data since raw data points are stored as immutable timeseries entities. +- Database-Managed Aggregations: Aggregations are automatically computed by the database, reducing the workload on the mappings. + +2. Simplified Mapping Code + +- No Manual Calculations: Developers no longer need to write complex aggregation logic in mappings. +- Reduced Complexity: Simplifies code maintenance and minimizes the potential for errors. + +3. Dramatically Faster Queries + +- Immutable Data: All timeseries data is immutable, enabling efficient storage and retrieval. +- Efficient Data Separation: Aggregates are stored separately from raw timeseries data, allowing queries to process significantly less data—often several orders of magnitude less. + +### Important Considerations + +- Immutable Data: Timeseries data cannot be altered once written, ensuring data integrity and simplifying indexing. +- Automatic ID and Timestamp Management: id and timestamp fields are automatically managed by graph-node, reducing potential errors. +- Efficient Data Storage: By separating raw data from aggregates, storage is optimized, and queries run faster. + +## How to Implement Timeseries and Aggregations + +### Defining Timeseries Entities + +A timeseries entity represents raw data points collected over time. It is defined with the `@entity(timeseries: true)` annotation. Key requirements: + +- Immutable: Timeseries entities are always immutable. +- Mandatory Fields: + - `id`: Must be of type `Int8!` and is auto-incremented. + - `timestamp`: Must be of type `Timestamp!` and is automatically set to the block timestamp. + +Example: + +```graphql +type Data @entity(timeseries: true) { + id: Int8! + timestamp: Timestamp! + price: BigDecimal! 
+} +``` + +### Defining Aggregation Entities + +An aggregation entity computes aggregated values from a timeseries source. It is defined with the `@aggregation` annotation. Key components: + +- Annotation Arguments: + - `intervals`: Specifies time intervals (e.g., `["hour", "day"]`). + +Example: + +```graphql +type Stats @aggregation(intervals: ["hour", "day"], source: "Data") { + id: Int8! + timestamp: Timestamp! + sum: BigDecimal! @aggregate(fn: "sum", arg: "price") +} +``` + +In this example, Stats aggregates the price field from Data over hourly and daily intervals, computing the sum. + +### Querying Aggregated Data + +Aggregations are exposed via query fields that allow filtering and retrieval based on dimensions and time intervals. + +Example: + +```graphql +{ + tokenStats( + interval: "hour" + where: { token: "0x1234567890abcdef", timestamp_gte: "1704164640000000", timestamp_lt: "1704251040000000" } + ) { + id + timestamp + token { + id + } + totalVolume + priceUSD + count + } +} +``` + +### Using Dimensions in Aggregations + +Dimensions are non-aggregated fields used to group data points. They enable aggregations based on specific criteria, such as a token in a financial application. + +Example: + +### Timeseries Entity + +```graphql +type TokenData @entity(timeseries: true) { + id: Int8! + timestamp: Timestamp! + token: Token! + amount: BigDecimal! + priceUSD: BigDecimal! +} +``` + +### Aggregation Entity with Dimension + +```graphql +type TokenStats @aggregation(intervals: ["hour", "day"], source: "TokenData") { + id: Int8! + timestamp: Timestamp! + token: Token! + totalVolume: BigDecimal! @aggregate(fn: "sum", arg: "amount") + priceUSD: BigDecimal! @aggregate(fn: "last", arg: "priceUSD") + count: Int8! @aggregate(fn: "count", cumulative: true) +} +``` + +- Dimension Field: token groups the data, so aggregates are computed per token. +- Aggregates: + - totalVolume: Sum of amount. + - priceUSD: Last recorded priceUSD. 
+  - count: Cumulative count of records. + +### Aggregation Functions and Expressions + +Supported aggregation functions: + +- sum +- count +- min +- max +- first +- last + +### The arg in @aggregate can be + +- A field name from the timeseries entity. +- An expression using fields and constants. + +### Examples of Aggregation Expressions + +- Sum Token Value: @aggregate(fn: "sum", arg: "priceUSD \* amount") +- Maximum Positive Amount: @aggregate(fn: "max", arg: "greatest(amount0, amount1, 0)") +- Conditional Sum: @aggregate(fn: "sum", arg: "case when amount0 > amount1 then amount0 else 0 end") + +Supported operators and functions include basic arithmetic (+, -, \*, /), comparison operators, logical operators (and, or, not), and SQL functions like greatest, least, coalesce, etc. + +### Query Parameters + +- interval: Specifies the time interval (e.g., "hour"). +- where: Filters based on dimensions and timestamp ranges. +- timestamp_gte / timestamp_lt: Filters for start and end times (microseconds since epoch). + +### Notes + +- Sorting: Results are automatically sorted by timestamp and id in descending order. +- Current Data: An optional current argument can include the current, partially filled interval. + +### Conclusion + +Implementing timeseries and aggregations in subgraphs is a best practice for projects dealing with time-based data. This approach: + +- Enhances Performance: Speeds up indexing and querying by reducing data processing overhead. +- Simplifies Development: Eliminates the need for manual aggregation logic in mappings. +- Scales Efficiently: Handles large volumes of data without compromising on speed or responsiveness. + +By adopting this pattern, developers can build more efficient and scalable subgraphs, providing faster and more reliable data access to end-users. 
To learn more about implementing timeseries and aggregations, refer to the [Timeseries and Aggregations Readme](https://github.com/graphprotocol/graph-node/blob/master/docs/aggregations.md) and consider experimenting with this feature in your subgraphs. + +## Subgraph Best Practices 1-6 + +1. [Improve Query Speed with Subgraph Pruning](/cookbook/pruning/) + +2. [Improve Indexing and Query Responsiveness by Using @derivedFrom](/cookbook/derivedfrom/) + +3. [Improve Indexing and Query Performance by Using Immutable Entities and Bytes as IDs](/cookbook/immutable-entities-bytes-as-ids/) + +4. [Improve Indexing Speed by Avoiding `eth_calls`](/cookbook/avoid-eth-calls/) + +5. [Simplify and Optimize with Timeseries and Aggregations](/cookbook/timeseries/) + +6. [Use Grafting for Quick Hotfix Deployment](/cookbook/grafting-hotfix/) diff --git a/website/pages/ja/cookbook/upgrading-a-subgraph.mdx b/website/pages/ja/cookbook/upgrading-a-subgraph.mdx deleted file mode 100644 index 632111950134..000000000000 --- a/website/pages/ja/cookbook/upgrading-a-subgraph.mdx +++ /dev/null @@ -1,156 +0,0 @@ ---- -title: 既存のサブグラフをThe Graph Networkにアップグレードする方法 ---- - -## イントロダクション - -これは、ホストされているサービスからThe Graphの分散型ネットワークへのサブグラフのアップグレード方法に関するガイドです。Snapshot、Loopring、Audius、Premia、Livepeer、Uma、Curve、Lidoなどのプロジェクトを含む1,000以上のサブグラフがThe Graph Networkに成功してアップグレードされました! - -アップグレードのプロセスは迅速であり、あなたのサブグラフは永久にThe Graph Networkでしか得られない信頼性とパフォーマンスの恩恵を受けることができます。 - -### 前提条件 - -- You have a subgraph deployed on the hosted service. - -## 既存のサブグラフをThe Graph Networkにアップグレードする方法 - - - -If you are logged in to the hosted service, you can access a simple flow to upgrade your subgraphs from [your dashboard](https://thegraph.com/hosted-service/dashboard), or from an individual subgraph page. - -> This process typically takes less than five minutes. - -1. Select the subgraph(s) you want to upgrade. -2. Connect or enter the receiving wallet (the wallet that will become the owner of the subgraph). -3. 
Click the "Upgrade" button. - -That's it! Your subgraphs will be deployed to Subgraph Studio, and published on The Graph Network. You can access the [Subgraph Studio](https://thegraph.com/studio/) to manage your subgraphs, logging in with the wallet specified during the upgrade process. - -You'll be able to view your subgraphs live on the decentralized network via [Graph Explorer](https://thegraph.com/explorer). - -### What next? - -When your subgraph is upgraded, it will automatically be indexed by the upgrade indexer. If the indexed chain is [fully supported by The Graph Network](/developing/supported-networks), you can add some GRT as "signal", to attract more indexers. It is recommended to curate your subgraph with at least 3,000 GRT to attract 2-3 Indexers for higher quality of service. - -You can start to query your subgraph right away on The Graph Network, once you have generated an API key. - -### APIキーの作成 - -Subgraph StudioでAPIキーを生成するには、[here](https://thegraph.com/studio/apikeys/)をクリックしてください。 - -![API key creation page](/img/api-image.png) - -You can use this API key to query subgraphs on The Graph Network. All users start on the Free Plan, which includes 100,000 free queries per month. Developers can sign up for the Growth Plan by connecting a credit or debit card, or by depositing GRT to Subgraph Studio billing system. - -> Note: see the [billing documentation](../billing) for more information on plans, and on managing your billing on Subgraph Studio. - -### APIキーの確保 - -APIは2つの方法で利用を制限し、セキュリティを確保することが推奨されます。 - -1. オーソライズド・サブグラフ -2. オーソライズド・ドメイン - -You can secure your API key [here](https://thegraph.com/studio/apikeys/). - -![Subgraph lockdown page](/img/subgraph-lockdown.png) - -### 分散ネットワーク上の自分のサブグラフをクエリ - -Now you can check the indexing status of the Indexers on the network in Graph Explorer (example [here](https://thegraph.com/explorer/subgraphs/Dtj2HicXKpoUjNB7ffdBkMwt3L9Sz3cbENd67AdHu6Vb?view=Indexers&chain=arbitrum-one)). 
The green line at the top indicates that at the time of posting 7 Indexers successfully indexed that subgraph. Also in the Indexer tab you can see which Indexers picked up your subgraph. - -![Rocket Pool subgraph](/img/rocket-pool-subgraph.png) - -最初のインデクサーがあなたのサブグラフを完全にインデックス化したら、分散ネットワーク上でサブグラフのクエリを開始することができます。クエリURLを取得するためには、クエリURLの横にある記号をクリックしてコピー&ペーストしてください。以下のような画面が表示されます。 - -`https://gateway.thegraph.com/api/[api-key]/subgraphs/id/S9ihna8D733WTEShJ1KctSTCvY1VJ7gdVwhUujq4Ejo` - -重要: [api-key] を前述のセクションで生成した実際のAPIキーで置き換えてください。 - -このQuery URLをダップ内で使用して、GraphQLリクエストを送信することができます。 - -おめでとうございます。あなたは今、分散化のパイオニアです! - -> 注意: ネットワークの分散性のため、異なるインデクサーが異なるブロックまでインデックスを行っている可能性があります。新鮮なデータのみを受け取るために、次のようにしてクエリを提供するためにインデクサーがインデックスを行った最小ブロックを指定することができます。ブロック引数: `{ number_gte: $minBlock }`といった形です。以下の例をご覧ください。 - -```graphql -{ - stakers(block: { number_gte: 14486109 }) { - id - } -} -``` - -ネットワークの性質や再編成の処理方法に関する詳細な情報は、ドキュメント記事[Distributed Systems](/querying/distributed-systems/) に記載されています。 - -## ネットワーク上のサブグラフの更新 - -If you would like to update an existing subgraph on the network, you can do this by deploying a new version of your subgraph to Subgraph Studio using the Graph CLI. - -1. Make changes to your current subgraph. -2. 以下のようにデプロイし、コマンドに新しいバージョンを指定します(例:v0.0.1、v0.0.2 など)。 - -```sh -graph deploy --studio --version -``` - -3. Test the new version in Subgraph Studio by querying in the playground -4. 新しいバージョンを The Graph Network で公開します。これにはガスが必要であることを忘れてはなりません(上のセクションで説明)。 - -### 所有者更新料金: 詳細 - -> Note: Curation on Arbitrum has a flat bonding curve. Learn more about Arbitrum [here](/arbitrum/arbitrum-faq/). 
 - -An update requires GRT to be migrated from the old version of the subgraph to the new version. This means that for every update, a new bonding curve is created (more on bonding curves [here](/network/curating#bonding-curve-101)). - -The new bonding curve charges a 2.5% curation tax on all GRT being migrated to the new version. The owner must pay 50% of this, i.e. 1.25%; the other 1.25% is absorbed by all the curators as a fee. This incentive design is in place to prevent the owner of a subgraph from draining all of their curators' funds with recursive update calls. If there is no curation activity, you will have to pay a minimum of 100 GRT in order to signal your own subgraph. - -Let's look at an example (this only applies if your subgraph is being actively curated): - -- 100,000 GRT is signaled using auto-migrate on v1 of a subgraph -- The owner updates to v2. 100,000 GRT is migrated to a new bonding curve, where 97,500 GRT is put into the new curve and 2,500 GRT is burned -- The owner then has 1,250 GRT burned to pay for half the fee. The owner must have this in their wallet before the update, otherwise the update will not succeed. This happens in the same transaction as the update. - -_While this mechanism is currently live on the network, the community is currently discussing ways to reduce the cost of updates for subgraph developers._ - -### Maintaining a Stable Version of a Subgraph - -If you're making a lot of changes to your subgraph, it is not a good idea to continually update it and front the update costs. Maintaining a stable and consistent version of your subgraph is critical, not only from the cost perspective but also so that Indexers can feel confident in their syncing times. Indexers should be notified when you plan an update so that their syncing times are not impacted. Feel free to leverage the [#Indexers channel](https://discord.gg/JexvtHa7dq) on Discord to let Indexers know when you're versioning your subgraphs. - -Subgraphs are open APIs that external developers are leveraging. Open APIs need to follow strict standards so that they do not break external developers' applications. In The Graph Network, a subgraph developer must consider Indexers and how long it takes them to sync a new subgraph **as well as** other developers who are using their subgraphs. - -### Updating the Metadata of a Subgraph - -You can update the metadata of your subgraphs without having to publish a new version. The metadata includes the subgraph name, image, description, website URL, source code URL, and categories. Developers can do this by updating their subgraph details in Subgraph Studio where you can edit all applicable fields. - -Make sure **Update Subgraph Details in Explorer** is checked and click on **Save**. If this is checked, an on-chain transaction will be generated that updates subgraph details in the Explorer without having to publish a new version with a new deployment. - -## Best Practices for Deploying a Subgraph to The Graph Network - -1. Leveraging an ENS name for Subgraph Development: - -- Set up your ENS [here](https://app.ens.domains/) -- Add your ENS name to your settings [here](https://thegraph.com/explorer/settings?view=display-name). - -2. 
The more filled out your profiles are, the better the chances for your subgraphs to be indexed and curated. - -## Deprecating a Subgraph on The Graph Network - -Follow the steps [here](/managing/transfer-and-deprecate-a-subgraph) to deprecate your subgraph and remove it from The Graph Network. - -## Querying a Subgraph + Billing on The Graph Network - -The hosted service was set up to allow developers to deploy their subgraphs without any restrictions. - -On The Graph Network, query fees have to be paid as a core part of the protocol's incentives. For more information on subscribing to APIs and paying the query fees, check out billing documentation [here](/billing/). - -## Additional Resources - -If you're still confused, fear not! Check out the following resources or watch our video guide on upgrading subgraphs to the decentralized network below: - - - -- [The Graph Network Contracts](https://github.com/graphprotocol/contracts) -- [Curation Contract](https://github.com/graphprotocol/contracts/blob/dev/contracts/curation/Curation.sol) - the underlying contract that the GNS wraps around - - Address - `0x8fe00a685bcb3b2cc296ff6ffeab10aca4ce1538` -- [Subgraph Studio documentation](/deploying/subgraph-studio) diff --git a/website/pages/ko/cookbook/base-testnet.mdx b/website/pages/ko/cookbook/base-testnet.mdx deleted file mode 100644 index 3a1d98a44103..000000000000 --- a/website/pages/ko/cookbook/base-testnet.mdx +++ /dev/null @@ -1,111 +0,0 @@ ---- -title: Building Subgraphs on Base --- - -This guide will quickly take you through how to initialize, create, and deploy your subgraph on Base testnet. - -What you'll need: - -- A Base Sepolia testnet contract address -- A crypto wallet (e.g. MetaMask or Coinbase Wallet) - -## Subgraph Studio - -### 1. Install the Graph CLI - -The Graph CLI (>=v0.41.0) is written in JavaScript and you will need to have either `npm` or `yarn` installed to use it. - -```sh -# NPM -npm install -g @graphprotocol/graph-cli - -# Yarn -yarn global add @graphprotocol/graph-cli -``` - -### 2. Create your subgraph in Subgraph Studio - -Go to [Subgraph Studio](https://thegraph.com/studio/) and connect your crypto wallet. - -Once connected, click "Create a Subgraph", enter a name for your subgraph and click Create a Subgraph. - -### 3. 
Initialize your Subgraph - -> You can find specific commands for your subgraph in Subgraph Studio. - -Make sure the Graph CLI is up to date (version 0.41.0 or above): - -```sh -graph --version -``` - -Initialize your subgraph from an existing contract. - -```sh -graph init --studio -``` - -Your subgraph slug is an identifier for your subgraph. The CLI tool will walk you through the steps for creating a subgraph, including: - -- Protocol: ethereum -- Subgraph slug: `` -- Directory to create the subgraph in: `` -- Ethereum network: base-sepolia -- Contract address: `` -- Start block (optional) -- Contract name: `` -- Yes/no to indexing events (yes means your subgraph will be bootstrapped with entities in the schema and simple mappings for emitted events) - -### 3. Write your Subgraph - -> If emitted events are the only thing you want to index, then no additional work is required, and you can skip to the next step. - -The previous command creates a scaffold subgraph that you can use as a starting point for building your subgraph. When making changes to the subgraph, you will mainly work with three files: - -- Manifest (subgraph.yaml) - The manifest defines what datasources your subgraphs will index. Make sure to add `base-sepolia` as the network name in the manifest file to deploy your subgraph on Base Sepolia. -- Schema (schema.graphql) - The GraphQL schema defines what data you wish to retrieve from the subgraph. -- AssemblyScript Mappings (mapping.ts) - This is the code that translates data from your datasources to the entities defined in the schema. - -If you want to index additional data, you will need to extend the manifest, schema and mappings. - -For more information on how to write your subgraph, see [Creating a Subgraph](/developing/creating-a-subgraph). - -### 4. Deploy to Subgraph Studio - -Before you can deploy your subgraph, you will need to authenticate with Subgraph Studio.
You can do this by running the following command: - -Authenticate the subgraph in Studio: - -``` -graph auth --studio -``` - -Next, enter your subgraph's directory. - -``` -cd -``` - -Build your subgraph with the following command: - -``` -graph codegen && graph build -``` - -Finally, you can deploy your subgraph using this command: - -``` -graph deploy --studio -``` - -### 5. Query your subgraph - -Once your subgraph is deployed, you can query it from your dapp using the `Development Query URL` in Subgraph Studio. - -Note: the Studio API is rate-limited, so it should preferably be used for development and testing. - -To learn more about querying data from your subgraph, see the [Querying a Subgraph](/querying/querying-the-graph) page. diff --git a/website/pages/ko/cookbook/grafting-hotfix.mdx b/website/pages/ko/cookbook/grafting-hotfix.mdx new file mode 100644 index 000000000000..4be0a0b07790 --- /dev/null +++ b/website/pages/ko/cookbook/grafting-hotfix.mdx @@ -0,0 +1,186 @@ +--- +title: Subgraph Best Practice 6 - Use Grafting for Quick Hotfix Deployment +--- + +## TLDR + +Grafting is a powerful feature in subgraph development that allows you to build and deploy new subgraphs while reusing the indexed data from existing ones. + +### Overview + +This feature enables quick deployment of hotfixes for critical issues, eliminating the need to re-index the entire subgraph from scratch. By preserving historical data, grafting minimizes downtime and ensures continuity in data services. + +## Benefits of Grafting for Hotfixes + +1. **Rapid Deployment** + + - **Minimize Downtime**: When a subgraph encounters a critical error and stops indexing, grafting enables you to deploy a fix immediately without waiting for re-indexing. + - **Immediate Recovery**: The new subgraph continues from the last indexed block, ensuring that data services remain uninterrupted. + +2. 
**Data Preservation** + + - **Reuse Historical Data**: Grafting copies the existing data from the base subgraph, so you don’t lose valuable historical records. + - **Consistency**: Maintains data continuity, which is crucial for applications relying on consistent historical data. + +3. **Efficiency** + - **Save Time and Resources**: Avoids the computational overhead of re-indexing large datasets. + - **Focus on Fixes**: Allows developers to concentrate on resolving issues rather than managing data recovery. + +## Best Practices When Using Grafting for Hotfixes + +1. **Initial Deployment Without Grafting** + + - **Start Clean**: Always deploy your initial subgraph without grafting to ensure that it’s stable and functions as expected. + - **Test Thoroughly**: Validate the subgraph’s performance to minimize the need for future hotfixes. + +2. **Implementing the Hotfix with Grafting** + + - **Identify the Issue**: When a critical error occurs, determine the block number of the last successfully indexed event. + - **Create a New Subgraph**: Develop a new subgraph that includes the hotfix. + - **Configure Grafting**: Use grafting to copy data up to the identified block number from the failed subgraph. + - **Deploy Quickly**: Publish the grafted subgraph to restore service as soon as possible. + +3. **Post-Hotfix Actions** + + - **Monitor Performance**: Ensure the grafted subgraph is indexing correctly and the hotfix resolves the issue. + - **Republish Without Grafting**: Once stable, deploy a new version of the subgraph without grafting for long-term maintenance. + > Note: Relying on grafting indefinitely is not recommended as it can complicate future updates and maintenance. + - **Update References**: Redirect any services or applications to use the new, non-grafted subgraph. + +4. **Important Considerations** + - **Careful Block Selection**: Choose the graft block number carefully to prevent data loss. 
+ - **Tip**: Use the block number of the last correctly processed event. + - **Use Deployment ID**: Ensure you reference the Deployment ID of the base subgraph, not the Subgraph ID. + - **Note**: The Deployment ID is the unique identifier for a specific subgraph deployment. + - **Feature Declaration**: Remember to declare grafting in the subgraph manifest under features. + +## Example: Deploying a Hotfix with Grafting + +Suppose you have a subgraph tracking a smart contract that has stopped indexing due to a critical error. Here’s how you can use grafting to deploy a hotfix. + +1. **Failed Subgraph Manifest (subgraph.yaml)** + + ```yaml + specVersion: 1.0.0 + schema: + file: ./schema.graphql + dataSources: + - kind: ethereum/contract + name: OldSmartContract + network: sepolia + source: + address: '0xOldContractAddress' + abi: Lock + startBlock: 5000000 + mapping: + kind: ethereum/events + apiVersion: 0.0.7 + language: wasm/assemblyscript + entities: + - Withdrawal + abis: + - name: Lock + file: ./abis/OldLock.json + eventHandlers: + - event: Withdrawal(uint256,uint256) + handler: handleOldWithdrawal + file: ./src/old-lock.ts + ``` + +2. 
**New Grafted Subgraph Manifest (subgraph.yaml)** + ```yaml + specVersion: 1.0.0 + schema: + file: ./schema.graphql + dataSources: + - kind: ethereum/contract + name: NewSmartContract + network: sepolia + source: + address: '0xNewContractAddress' + abi: Lock + startBlock: 6000001 # Block after the last indexed block + mapping: + kind: ethereum/events + apiVersion: 0.0.7 + language: wasm/assemblyscript + entities: + - Withdrawal + abis: + - name: Lock + file: ./abis/Lock.json + eventHandlers: + - event: Withdrawal(uint256,uint256) + handler: handleWithdrawal + file: ./src/lock.ts + features: + - grafting + graft: + base: QmBaseDeploymentID # Deployment ID of the failed subgraph + block: 6000000 # Last successfully indexed block + ``` + +**Explanation:** + +- **Data Source Update**: The new subgraph points to 0xNewContractAddress, which may be a fixed version of the smart contract. +- **Start Block**: Set to one block after the last successfully indexed block to avoid reprocessing the error. +- **Grafting Configuration**: + - **base**: Deployment ID of the failed subgraph. + - **block**: Block number where grafting should begin. + +3. **Deployment Steps** + + - **Update the Code**: Implement the hotfix in your mapping scripts (e.g., handleWithdrawal). + - **Adjust the Manifest**: As shown above, update the `subgraph.yaml` with grafting configurations. + - **Deploy the Subgraph**: + - Authenticate with the Graph CLI. + - Deploy the new subgraph using `graph deploy`. + +4. **Post-Deployment** + - **Verify Indexing**: Check that the subgraph is indexing correctly from the graft point. + - **Monitor Data**: Ensure that new data is being captured and the hotfix is effective. + - **Plan for Republish**: Schedule the deployment of a non-grafted version for long-term stability. 
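To support the "Verify Indexing" step above, one way to check progress is the subgraph's `_meta` field, which graph-node exposes on every deployment. This is a sketch; the block number in the comment is the graft block from this example:

```graphql
{
  _meta {
    hasIndexingErrors
    block {
      number # should advance past 6000000, the graft block in this example
    }
  }
}
```

If `hasIndexingErrors` stays `false` and the block number keeps climbing past the graft point, the hotfix is indexing as expected.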
+ +## Warnings and Cautions + +While grafting is a powerful tool for deploying hotfixes quickly, there are specific scenarios where it should be avoided to maintain data integrity and ensure optimal performance. + +- **Incompatible Schema Changes**: If your hotfix requires altering the type of existing fields or removing fields from your schema, grafting is not suitable. Grafting expects the new subgraph’s schema to be compatible with the base subgraph’s schema. Incompatible changes can lead to data inconsistencies and errors because the existing data won’t align with the new schema. +- **Significant Mapping Logic Overhauls**: When the hotfix involves substantial modifications to your mapping logic—such as changing how events are processed or altering handler functions—grafting may not function correctly. The new logic might not be compatible with the data processed under the old logic, leading to incorrect data or failed indexing. +- **Deployments to The Graph Network**: Grafting is not recommended for subgraphs intended for The Graph’s decentralized network (mainnet). It can complicate indexing and may not be fully supported by all Indexers, potentially causing unexpected behavior or increased costs. For mainnet deployments, it’s safer to re-index the subgraph from scratch to ensure full compatibility and reliability. + +### Risk Management + +- **Data Integrity**: Incorrect block numbers can lead to data loss or duplication. +- **Testing**: Always test grafting in a development environment before deploying to production. + +## Conclusion + +Grafting is an effective strategy for deploying hotfixes in subgraph development, enabling you to: + +- **Quickly Recover** from critical errors without re-indexing. +- **Preserve Historical Data**, maintaining continuity for applications and users. +- **Ensure Service Availability** by minimizing downtime during critical fixes. 
+ +However, it’s important to use grafting judiciously and follow best practices to mitigate risks. After stabilizing your subgraph with the hotfix, plan to deploy a non-grafted version to ensure long-term maintainability. + +## Additional Resources + +- **[Grafting Documentation](/cookbook/grafting/)**: Replace a Contract and Keep its History With Grafting +- **[Understanding Deployment IDs](/querying/querying-by-subgraph-id-vs-deployment-id/)**: Learn the difference between Deployment ID and Subgraph ID. + +By incorporating grafting into your subgraph development workflow, you can enhance your ability to respond to issues swiftly, ensuring that your data services remain robust and reliable. + +## Subgraph Best Practices 1-6 + +1. [Improve Query Speed with Subgraph Pruning](/cookbook/pruning/) + +2. [Improve Indexing and Query Responsiveness by Using @derivedFrom](/cookbook/derivedfrom/) + +3. [Improve Indexing and Query Performance by Using Immutable Entities and Bytes as IDs](/cookbook/immutable-entities-bytes-as-ids/) + +4. [Improve Indexing Speed by Avoiding `eth_calls`](/cookbook/avoid-eth-calls/) + +5. [Simplify and Optimize with Timeseries and Aggregations](/cookbook/timeseries/) + +6. [Use Grafting for Quick Hotfix Deployment](/cookbook/grafting-hotfix/) diff --git a/website/pages/ko/cookbook/timeseries.mdx b/website/pages/ko/cookbook/timeseries.mdx new file mode 100644 index 000000000000..88ee70005a6e --- /dev/null +++ b/website/pages/ko/cookbook/timeseries.mdx @@ -0,0 +1,194 @@ +--- +title: Subgraph Best Practice 5 - Simplify and Optimize with Timeseries and Aggregations +--- + +## TLDR + +Leveraging the new time-series and aggregations feature in subgraphs can significantly enhance both indexing speed and query performance. + +## Overview + +Timeseries and aggregations reduce data processing overhead and accelerate queries by offloading aggregation computations to the database and simplifying mapping code. 
This approach is particularly effective when handling large volumes of time-based data. + +## Benefits of Timeseries and Aggregations + +1. Improved Indexing Time + +- Less Data to Load: Mappings handle less data since raw data points are stored as immutable timeseries entities. +- Database-Managed Aggregations: Aggregations are automatically computed by the database, reducing the workload on the mappings. + +2. Simplified Mapping Code + +- No Manual Calculations: Developers no longer need to write complex aggregation logic in mappings. +- Reduced Complexity: Simplifies code maintenance and minimizes the potential for errors. + +3. Dramatically Faster Queries + +- Immutable Data: All timeseries data is immutable, enabling efficient storage and retrieval. +- Efficient Data Separation: Aggregates are stored separately from raw timeseries data, allowing queries to process significantly less data—often several orders of magnitude less. + +### Important Considerations + +- Immutable Data: Timeseries data cannot be altered once written, ensuring data integrity and simplifying indexing. +- Automatic ID and Timestamp Management: id and timestamp fields are automatically managed by graph-node, reducing potential errors. +- Efficient Data Storage: By separating raw data from aggregates, storage is optimized, and queries run faster. + +## How to Implement Timeseries and Aggregations + +### Defining Timeseries Entities + +A timeseries entity represents raw data points collected over time. It is defined with the `@entity(timeseries: true)` annotation. Key requirements: + +- Immutable: Timeseries entities are always immutable. +- Mandatory Fields: + - `id`: Must be of type `Int8!` and is auto-incremented. + - `timestamp`: Must be of type `Timestamp!` and is automatically set to the block timestamp. + +Example: + +```graphql +type Data @entity(timeseries: true) { + id: Int8! + timestamp: Timestamp! + price: BigDecimal! 
+} +``` + +### Defining Aggregation Entities + +An aggregation entity computes aggregated values from a timeseries source. It is defined with the `@aggregation` annotation. Key components: + +- Annotation Arguments: + - `intervals`: Specifies time intervals (e.g., `["hour", "day"]`). + +Example: + +```graphql +type Stats @aggregation(intervals: ["hour", "day"], source: "Data") { + id: Int8! + timestamp: Timestamp! + sum: BigDecimal! @aggregate(fn: "sum", arg: "price") +} +``` + +In this example, Stats aggregates the price field from Data over hourly and daily intervals, computing the sum. + +### Querying Aggregated Data + +Aggregations are exposed via query fields that allow filtering and retrieval based on dimensions and time intervals. + +Example: + +```graphql +{ + tokenStats( + interval: "hour" + where: { token: "0x1234567890abcdef", timestamp_gte: "1704164640000000", timestamp_lt: "1704251040000000" } + ) { + id + timestamp + token { + id + } + totalVolume + priceUSD + count + } +} +``` + +### Using Dimensions in Aggregations + +Dimensions are non-aggregated fields used to group data points. They enable aggregations based on specific criteria, such as a token in a financial application. + +Example: + +### Timeseries Entity + +```graphql +type TokenData @entity(timeseries: true) { + id: Int8! + timestamp: Timestamp! + token: Token! + amount: BigDecimal! + priceUSD: BigDecimal! +} +``` + +### Aggregation Entity with Dimension + +```graphql +type TokenStats @aggregation(intervals: ["hour", "day"], source: "TokenData") { + id: Int8! + timestamp: Timestamp! + token: Token! + totalVolume: BigDecimal! @aggregate(fn: "sum", arg: "amount") + priceUSD: BigDecimal! @aggregate(fn: "last", arg: "priceUSD") + count: Int8! @aggregate(fn: "count", cumulative: true) +} +``` + +- Dimension Field: token groups the data, so aggregates are computed per token. +- Aggregates: + - totalVolume: Sum of amount. + - priceUSD: Last recorded priceUSD. 
 + - count: Cumulative count of records. + +### Aggregation Functions and Expressions + +Supported aggregation functions: + +- sum +- count +- min +- max +- first +- last + +### The `arg` in `@aggregate` can be + +- A field name from the timeseries entity. +- An expression using fields and constants. + +### Examples of Aggregation Expressions + +- Sum Token Value: @aggregate(fn: "sum", arg: "priceUSD \* amount") +- Maximum Positive Amount: @aggregate(fn: "max", arg: "greatest(amount0, amount1, 0)") +- Conditional Sum: @aggregate(fn: "sum", arg: "case when amount0 > amount1 then amount0 else 0 end") + +Supported operators and functions include basic arithmetic (+, -, \*, /), comparison operators, logical operators (and, or, not), and SQL functions like greatest, least, coalesce, etc. + +### Query Parameters + +- interval: Specifies the time interval (e.g., "hour"). +- where: Filters based on dimensions and timestamp ranges. +- timestamp_gte / timestamp_lt: Filters for start and end times (microseconds since epoch). + +### Notes - -- Sorting: Results are automatically sorted by timestamp and id in descending order. -- Current Data: An optional current argument can include the current, partially filled interval. - -### Conclusion - -Implementing timeseries and aggregations in subgraphs is a best practice for projects dealing with time-based data. This approach: - -- Enhances Performance: Speeds up indexing and querying by reducing data processing overhead. -- Simplifies Development: Eliminates the need for manual aggregation logic in mappings. -- Scales Efficiently: Handles large volumes of data without compromising on speed or responsiveness. - -By adopting this pattern, developers can build more efficient and scalable subgraphs, providing faster and more reliable data access to end-users.
To learn more about implementing timeseries and aggregations, refer to the [Timeseries and Aggregations Readme](https://github.com/graphprotocol/graph-node/blob/master/docs/aggregations.md) and consider experimenting with this feature in your subgraphs. + +## Subgraph Best Practices 1-6 + +1. [Improve Query Speed with Subgraph Pruning](/cookbook/pruning/) + +2. [Improve Indexing and Query Responsiveness by Using @derivedFrom](/cookbook/derivedfrom/) + +3. [Improve Indexing and Query Performance by Using Immutable Entities and Bytes as IDs](/cookbook/immutable-entities-bytes-as-ids/) + +4. [Improve Indexing Speed by Avoiding `eth_calls`](/cookbook/avoid-eth-calls/) + +5. [Simplify and Optimize with Timeseries and Aggregations](/cookbook/timeseries/) + +6. [Use Grafting for Quick Hotfix Deployment](/cookbook/grafting-hotfix/) diff --git a/website/pages/ko/cookbook/upgrading-a-subgraph.mdx b/website/pages/ko/cookbook/upgrading-a-subgraph.mdx deleted file mode 100644 index a546f02c0800..000000000000 --- a/website/pages/ko/cookbook/upgrading-a-subgraph.mdx +++ /dev/null @@ -1,156 +0,0 @@ ---- -title: Upgrading an Existing Subgraph to The Graph Network ---- - -## Introduction - -This is a guide on how to upgrade your subgraph from the hosted service to The Graph's decentralized network. Over 1,000 subgraphs have successfully upgraded to The Graph Network including projects like Snapshot, Loopring, Audius, Premia, Livepeer, Uma, Curve, Lido, and many more! - -The process of upgrading is quick and your subgraphs will forever benefit from the reliability and performance that you can only get on The Graph Network. - -### Prerequisites - -- You have a subgraph deployed on the hosted service. - -## Upgrading an Existing Subgraph to The Graph Network - - - -If you are logged in to the hosted service, you can access a simple flow to upgrade your subgraphs from [your dashboard](https://thegraph.com/hosted-service/dashboard), or from an individual subgraph page. 
 - -> This process typically takes less than five minutes. - -1. Select the subgraph(s) you want to upgrade. -2. Connect or enter the receiving wallet (the wallet that will become the owner of the subgraph). -3. Click the "Upgrade" button. - -That's it! Your subgraphs will be deployed to Subgraph Studio, and published on The Graph Network. You can access the [Subgraph Studio](https://thegraph.com/studio/) to manage your subgraphs, logging in with the wallet specified during the upgrade process. - -You'll be able to view your subgraphs live on the decentralized network via [Graph Explorer](https://thegraph.com/explorer). - -### What next? - -When your subgraph is upgraded, it will automatically be indexed by the upgrade indexer. If the indexed chain is [fully supported by The Graph Network](/developing/supported-networks), you can add some GRT as "signal", to attract more indexers. It is recommended to curate your subgraph with at least 3,000 GRT to attract 2-3 Indexers for higher quality of service. - -You can start to query your subgraph right away on The Graph Network, once you have generated an API key. - -### Create an API key - -You can generate an API key in Subgraph Studio [here](https://thegraph.com/studio/apikeys/). - -![API key creation page](/img/api-image.png) - -You can use this API key to query subgraphs on The Graph Network. All users start on the Free Plan, which includes 100,000 free queries per month. Developers can sign up for the Growth Plan by connecting a credit or debit card, or by depositing GRT into the Subgraph Studio billing system. - -> Note: see the [billing documentation](../billing) for more information on plans, and on managing your billing on Subgraph Studio. - -### Securing your API key - -It is recommended that you secure the API key by limiting its usage in two ways: - -1. Authorized Subgraphs -2. Authorized Domains - -You can secure your API key [here](https://thegraph.com/studio/apikeys/).
 - -![Subgraph lockdown page](/img/subgraph-lockdown.png) - -### Querying your subgraph on the decentralized network - -Now you can check the indexing status of the Indexers on the network in Graph Explorer (example [here](https://thegraph.com/explorer/subgraphs/Dtj2HicXKpoUjNB7ffdBkMwt3L9Sz3cbENd67AdHu6Vb?view=Indexers&chain=arbitrum-one)). The green line at the top indicates that at the time of posting 7 Indexers successfully indexed that subgraph. Also in the Indexer tab you can see which Indexers picked up your subgraph. - -![Rocket Pool subgraph](/img/rocket-pool-subgraph.png) - -As soon as the first Indexer has fully indexed your subgraph, you can start to query it on the decentralized network. To retrieve the query URL for your subgraph, click the symbol next to the query URL to copy it. You will see something like this: - -`https://gateway.thegraph.com/api/[api-key]/subgraphs/id/S9ihna8D733WTEShJ1KctSTCvY1VJ7gdVwhUujq4Ejo` - -Important: Make sure to replace `[api-key]` with an actual API key generated in the section above. - -You can now use that Query URL in your dapp to send your GraphQL requests to. - -Congratulations! You are now a pioneer of decentralization! - -> Note: Due to the distributed nature of the network it might be the case that different Indexers have indexed up to different blocks. In order to only receive fresh data you can specify the minimum block an Indexer has to have indexed in order to serve your query with the block: `{ number_gte: $minBlock }` field argument as shown in the example below: - -```graphql -{ - stakers(block: { number_gte: 14486109 }) { - id - } -} -``` - -More information about the nature of the network and how to handle re-orgs is described in the documentation article [Distributed Systems](/querying/distributed-systems/).
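To try the query above from a terminal rather than a dapp, you can POST it to the gateway. This is a sketch: the API key value is a placeholder you must replace with a key generated in Subgraph Studio, and the subgraph ID is the one from the example URL.

```sh
# Placeholders: substitute your own API key; the subgraph ID is from the example above.
API_KEY="your-api-key"
SUBGRAPH_ID="S9ihna8D733WTEShJ1KctSTCvY1VJ7gdVwhUujq4Ejo"
URL="https://gateway.thegraph.com/api/$API_KEY/subgraphs/id/$SUBGRAPH_ID"
QUERY='{"query":"{ stakers(block: { number_gte: 14486109 }) { id } }"}'

echo "$URL"
# Send it with a real API key in place:
# curl -X POST "$URL" -H 'Content-Type: application/json' -d "$QUERY"
```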
 - -## Updating a Subgraph on the Network - -If you would like to update an existing subgraph on the network, you can do this by deploying a new version of your subgraph to Subgraph Studio using the Graph CLI. - -1. Make changes to your current subgraph. -2. Deploy the new version and specify it in the command (e.g. v0.0.1, v0.0.2, etc.): - -```sh -graph deploy --studio --version -``` - -3. Test the new version in Subgraph Studio by querying in the playground. -4. Publish the new version on The Graph Network. Remember that this requires gas (as described in the section above). - -### Owner Update Fee: Deep Dive - -> Note: Curation on Arbitrum has a flat bonding curve. Learn more about Arbitrum [here](/arbitrum/arbitrum-faq/). - -An update requires GRT to be migrated from the old version of the subgraph to the new version. This means that for every update, a new bonding curve will be created (more on bonding curves [here](/network/curating#bonding-curve-101)). - -The new bonding curve charges a 2.5% curation tax on all GRT being migrated to the new version. The owner must pay 50% of this, i.e. 1.25%; the other 1.25% is absorbed by all the curators as a fee. This incentive design is in place to prevent the owner of a subgraph from draining all of their curators' funds with recursive update calls. If there is no curation activity, you will have to pay a minimum of 100 GRT in order to signal your own subgraph. - -Let's look at an example (this only applies if your subgraph is being actively curated): - -- 100,000 GRT is signaled using auto-migrate on v1 of a subgraph -- The owner updates to v2. 100,000 GRT is migrated to a new bonding curve, where 97,500 GRT is put into the new curve and 2,500 GRT is burned -- The owner then has 1,250 GRT burned to pay for half the fee. The owner must have this in their wallet before the update, otherwise the update will not succeed. This happens in the same transaction as the update.
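The arithmetic in the example above can be sketched in a few lines of shell. These figures come straight from the example (a total tax burned on migration, half of which the owner covers from their wallet); they are illustrative, not protocol constants.

```sh
# Worked example using the figures above (illustrative only).
SIGNALED=100000                # GRT signaled on v1 via auto-migrate
TAX=$((SIGNALED * 25 / 1000))  # 2.5% curation tax burned on migration
OWNER_SHARE=$((TAX / 2))       # half the tax, burned from the owner's wallet
NEW_CURVE=$((SIGNALED - TAX))  # GRT that lands on the new bonding curve
echo "burned=$TAX owner_pays=$OWNER_SHARE new_curve=$NEW_CURVE"
# prints: burned=2500 owner_pays=1250 new_curve=97500
```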
- -_While this mechanism is currently live on the network, the community is currently discussing ways to reduce the cost of updates for subgraph developers._ - -### Maintaining a Stable Version of a Subgraph - -If you're making a lot of changes to your subgraph, it is not a good idea to continually update it and front the update costs. Maintaining a stable and consistent version of your subgraph is critical, not only from the cost perspective but also so that Indexers can feel confident in their syncing times. Indexers should be flagged when you plan for an update so that Indexer syncing times do not get impacted. Feel free to leverage the [#Indexers channel](https://discord.gg/JexvtHa7dq) on Discord to let Indexers know when you're versioning your subgraphs. - -Subgraphs are open APIs that external developers are leveraging. Open APIs need to follow strict standards so that they do not break external developers' applications. In The Graph Network, a subgraph developer must consider Indexers and how long it takes them to sync a new subgraph **as well as** other developers who are using their subgraphs. - -### Updating the Metadata of a Subgraph - -You can update the metadata of your subgraphs without having to publish a new version. The metadata includes the subgraph name, image, description, website URL, source code URL, and categories. Developers can do this by updating their subgraph details in Subgraph Studio where you can edit all applicable fields. - -Make sure **Update Subgraph Details in Explorer** is checked and click on **Save**. If this is checked, an on-chain transaction will be generated that updates subgraph details in the Explorer without having to publish a new version with a new deployment. - -## Best Practices for Deploying a Subgraph to The Graph Network - -1. 
Leveraging an ENS name for Subgraph Development: - -- Set up your ENS [here](https://app.ens.domains/) -- Add your ENS name to your settings [here](https://thegraph.com/explorer/settings?view=display-name). - -2. The more filled out your profiles are, the better the chances for your subgraphs to be indexed and curated. - -## Deprecating a Subgraph on The Graph Network - -Follow the steps [here](/managing/transfer-and-deprecate-a-subgraph) to deprecate your subgraph and remove it from The Graph Network. - -## Querying a Subgraph + Billing on The Graph Network - -The hosted service was set up to allow developers to deploy their subgraphs without any restrictions. - -On The Graph Network, query fees have to be paid as a core part of the protocol's incentives. For more information on subscribing to APIs and paying the query fees, check out billing documentation [here](/billing/). - -## Additional Resources - -If you're still confused, fear not! Check out the following resources or watch our video guide on upgrading subgraphs to the decentralized network below: - - - -- [The Graph Network Contracts](https://github.com/graphprotocol/contracts) -- [Curation Contract](https://github.com/graphprotocol/contracts/blob/dev/contracts/curation/Curation.sol) - the underlying contract that the GNS wraps around - - Address - `0x8fe00a685bcb3b2cc296ff6ffeab10aca4ce1538` -- [Subgraph Studio documentation](/deploying/subgraph-studio) diff --git a/website/pages/mr/cookbook/base-testnet.mdx b/website/pages/mr/cookbook/base-testnet.mdx deleted file mode 100644 index d62bf749c571..000000000000 --- a/website/pages/mr/cookbook/base-testnet.mdx +++ /dev/null @@ -1,111 +0,0 @@ ---- -title: Building Subgraphs on Base --- - -This guide will quickly take you through how to initialize, create, and deploy your subgraph on Base testnet. - -What you'll need: - -- A Base Sepolia testnet contract address -- A crypto wallet (e.g. 
मेटामास्क किंवा कॉइनबेस वॉलेट)
-
-## सबग्राफ स्टुडिओ
-
-### 1. आलेख CLI स्थापित करा
-
-आलेख CLI (>=v0.41.0) JavaScript मध्ये लिहिलेले आहे आणि ते वापरण्यासाठी तुम्हाला `npm` किंवा `yarn` स्थापित करणे आवश्यक आहे.
-
-```sh
-# NPM
-npm install -g @graphprotocol/graph-cli
-
-# Yarn
-yarn global add @graphprotocol/graph-cli
-```
-
-### 2. Create your subgraph in Subgraph Studio
-
-Go to [Subgraph Studio](https://thegraph.com/studio/) and connect your crypto wallet.
-
-Once connected, click "Create a Subgraph", enter a name for your subgraph and click Create a Subgraph.
-
-### 3. तुमचा सबग्राफ सुरू करा
-
-> You can find specific commands for your subgraph in Subgraph Studio.
-
-graph-cli नवीनतम आवृत्तीवर (०.४१.० च्या वर) अद्यतनित केल्याची खात्री करा.
-
-```sh
-graph --version
-```
-
-विद्यमान करारातून तुमचा subgraph सुरू करा.
-
-```sh
-graph init --studio 
-```
-
-तुमचा subgraph slug तुमच्या subgraph साठी एक ओळखकर्ता आहे. CLI टूल तुम्हाला subgraph तयार करण्याच्या पायऱ्यांमधून मार्गदर्शन करेल, ज्यात समाविष्ट आहे:
-
-- प्रोटोकॉल: इथरियम
-- सबग्राफ स्लग: ``
-- Directory to create the subgraph in: ``
-- Ethereum network: base-sepolia
-- Contract address: ``
-- Start block (optional)
-- Contract name: ``
-- Yes/no to indexing events (yes means your subgraph will be bootstrapped with entities in the schema and simple mappings for emitted events)
-
-### 3. Write your Subgraph
-
-> If emitted events are the only thing you want to index, then no additional work is required, and you can skip to the next step.
-
-The previous command creates a scaffold subgraph that you can use as a starting point for building your subgraph. When making changes to the subgraph, you will mainly work with three files:
-
-- Manifest (subgraph.yaml) - The manifest defines what data sources your subgraph will index. Make sure to add `base-sepolia` as the network name in the manifest file to deploy your subgraph on Base Sepolia.
-- Schema (schema.graphql) - The GraphQL schema defines what data you wish to retrieve from the subgraph.
-- असेंबलीस्क्रिप्ट मॅपिंग (mapping.ts) - हा असा कोड आहे जो तुमच्या डेटास्रोतमधील डेटाचे स्कीमामध्ये परिभाषित केलेल्या घटकांमध्ये भाषांतर करतो.
-
-If you want to index additional data, you will need to extend the manifest, schema, and mappings.
-
-For more information on how to write your subgraph, see [Creating a Subgraph](/developing/creating-a-subgraph).
-
-### 4. Deploy to Subgraph Studio
-
-Before you can deploy your subgraph, you will need to authenticate with Subgraph Studio. You can do this by running the following command:
-
-Authenticate the subgraph in Studio:
-
-```sh
-graph auth --studio 
-```
-
-Next, enter your subgraph's directory.
-
-```sh
-cd 
-```
-
-Build your subgraph with the following command:
-
-```sh
-graph codegen && graph build
-```
-
-Finally, you can deploy your subgraph using this command:
-
-```sh
-graph deploy --studio 
-```
-
-### 5. Query your subgraph
-
-Once your subgraph is deployed, you can query it from your dapp using the `Development Query URL` in Subgraph Studio.
-
-Note: the Studio API is rate-limited, so it should preferably be used for development and testing.
-
-To learn more about querying data from your subgraph, see the [Querying a Subgraph](/querying/querying-the-graph) page.
diff --git a/website/pages/mr/cookbook/grafting-hotfix.mdx b/website/pages/mr/cookbook/grafting-hotfix.mdx
new file mode 100644
index 000000000000..4be0a0b07790
--- /dev/null
+++ b/website/pages/mr/cookbook/grafting-hotfix.mdx
@@ -0,0 +1,186 @@
+---
+title: Subgraph Best Practice 6 - Use Grafting for Quick Hotfix Deployment
+---
+
+## TLDR
+
+Grafting is a powerful feature in subgraph development that allows you to build and deploy new subgraphs while reusing the indexed data from existing ones.
+ +### Overview + +This feature enables quick deployment of hotfixes for critical issues, eliminating the need to re-index the entire subgraph from scratch. By preserving historical data, grafting minimizes downtime and ensures continuity in data services. + +## Benefits of Grafting for Hotfixes + +1. **Rapid Deployment** + + - **Minimize Downtime**: When a subgraph encounters a critical error and stops indexing, grafting enables you to deploy a fix immediately without waiting for re-indexing. + - **Immediate Recovery**: The new subgraph continues from the last indexed block, ensuring that data services remain uninterrupted. + +2. **Data Preservation** + + - **Reuse Historical Data**: Grafting copies the existing data from the base subgraph, so you don’t lose valuable historical records. + - **Consistency**: Maintains data continuity, which is crucial for applications relying on consistent historical data. + +3. **Efficiency** + - **Save Time and Resources**: Avoids the computational overhead of re-indexing large datasets. + - **Focus on Fixes**: Allows developers to concentrate on resolving issues rather than managing data recovery. + +## Best Practices When Using Grafting for Hotfixes + +1. **Initial Deployment Without Grafting** + + - **Start Clean**: Always deploy your initial subgraph without grafting to ensure that it’s stable and functions as expected. + - **Test Thoroughly**: Validate the subgraph’s performance to minimize the need for future hotfixes. + +2. **Implementing the Hotfix with Grafting** + + - **Identify the Issue**: When a critical error occurs, determine the block number of the last successfully indexed event. + - **Create a New Subgraph**: Develop a new subgraph that includes the hotfix. + - **Configure Grafting**: Use grafting to copy data up to the identified block number from the failed subgraph. + - **Deploy Quickly**: Publish the grafted subgraph to restore service as soon as possible. + +3. 
**Post-Hotfix Actions** + + - **Monitor Performance**: Ensure the grafted subgraph is indexing correctly and the hotfix resolves the issue. + - **Republish Without Grafting**: Once stable, deploy a new version of the subgraph without grafting for long-term maintenance. + > Note: Relying on grafting indefinitely is not recommended as it can complicate future updates and maintenance. + - **Update References**: Redirect any services or applications to use the new, non-grafted subgraph. + +4. **Important Considerations** + - **Careful Block Selection**: Choose the graft block number carefully to prevent data loss. + - **Tip**: Use the block number of the last correctly processed event. + - **Use Deployment ID**: Ensure you reference the Deployment ID of the base subgraph, not the Subgraph ID. + - **Note**: The Deployment ID is the unique identifier for a specific subgraph deployment. + - **Feature Declaration**: Remember to declare grafting in the subgraph manifest under features. + +## Example: Deploying a Hotfix with Grafting + +Suppose you have a subgraph tracking a smart contract that has stopped indexing due to a critical error. Here’s how you can use grafting to deploy a hotfix. + +1. **Failed Subgraph Manifest (subgraph.yaml)** + + ```yaml + specVersion: 1.0.0 + schema: + file: ./schema.graphql + dataSources: + - kind: ethereum/contract + name: OldSmartContract + network: sepolia + source: + address: '0xOldContractAddress' + abi: Lock + startBlock: 5000000 + mapping: + kind: ethereum/events + apiVersion: 0.0.7 + language: wasm/assemblyscript + entities: + - Withdrawal + abis: + - name: Lock + file: ./abis/OldLock.json + eventHandlers: + - event: Withdrawal(uint256,uint256) + handler: handleOldWithdrawal + file: ./src/old-lock.ts + ``` + +2. 
**New Grafted Subgraph Manifest (subgraph.yaml)** + ```yaml + specVersion: 1.0.0 + schema: + file: ./schema.graphql + dataSources: + - kind: ethereum/contract + name: NewSmartContract + network: sepolia + source: + address: '0xNewContractAddress' + abi: Lock + startBlock: 6000001 # Block after the last indexed block + mapping: + kind: ethereum/events + apiVersion: 0.0.7 + language: wasm/assemblyscript + entities: + - Withdrawal + abis: + - name: Lock + file: ./abis/Lock.json + eventHandlers: + - event: Withdrawal(uint256,uint256) + handler: handleWithdrawal + file: ./src/lock.ts + features: + - grafting + graft: + base: QmBaseDeploymentID # Deployment ID of the failed subgraph + block: 6000000 # Last successfully indexed block + ``` + +**Explanation:** + +- **Data Source Update**: The new subgraph points to 0xNewContractAddress, which may be a fixed version of the smart contract. +- **Start Block**: Set to one block after the last successfully indexed block to avoid reprocessing the error. +- **Grafting Configuration**: + - **base**: Deployment ID of the failed subgraph. + - **block**: Block number where grafting should begin. + +3. **Deployment Steps** + + - **Update the Code**: Implement the hotfix in your mapping scripts (e.g., handleWithdrawal). + - **Adjust the Manifest**: As shown above, update the `subgraph.yaml` with grafting configurations. + - **Deploy the Subgraph**: + - Authenticate with the Graph CLI. + - Deploy the new subgraph using `graph deploy`. + +4. **Post-Deployment** + - **Verify Indexing**: Check that the subgraph is indexing correctly from the graft point. + - **Monitor Data**: Ensure that new data is being captured and the hotfix is effective. + - **Plan for Republish**: Schedule the deployment of a non-grafted version for long-term stability. 
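As a sanity check, the relationship between the graft point and the new data source's `startBlock` in the manifests above can be expressed in a few lines. This is a hypothetical helper for illustration only; it is not part of the Graph CLI or any Graph tooling.

```python
# Hypothetical helper: given the last successfully indexed block of the
# failed subgraph, derive the graft point and the new startBlock.
def graft_points(last_good_block: int) -> dict:
    return {
        # Grafting copies data up to and including this block.
        "graft_block": last_good_block,
        # The new subgraph begins indexing on the next block, so the
        # block that triggered the error is not reprocessed.
        "start_block": last_good_block + 1,
    }

points = graft_points(6_000_000)
print(points)  # {'graft_block': 6000000, 'start_block': 6000001}
```

With the example manifest's last good block of 6,000,000, this yields a graft `block` of 6000000 and a `startBlock` of 6000001, matching the values shown above.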
+ +## Warnings and Cautions + +While grafting is a powerful tool for deploying hotfixes quickly, there are specific scenarios where it should be avoided to maintain data integrity and ensure optimal performance. + +- **Incompatible Schema Changes**: If your hotfix requires altering the type of existing fields or removing fields from your schema, grafting is not suitable. Grafting expects the new subgraph’s schema to be compatible with the base subgraph’s schema. Incompatible changes can lead to data inconsistencies and errors because the existing data won’t align with the new schema. +- **Significant Mapping Logic Overhauls**: When the hotfix involves substantial modifications to your mapping logic—such as changing how events are processed or altering handler functions—grafting may not function correctly. The new logic might not be compatible with the data processed under the old logic, leading to incorrect data or failed indexing. +- **Deployments to The Graph Network**: Grafting is not recommended for subgraphs intended for The Graph’s decentralized network (mainnet). It can complicate indexing and may not be fully supported by all Indexers, potentially causing unexpected behavior or increased costs. For mainnet deployments, it’s safer to re-index the subgraph from scratch to ensure full compatibility and reliability. + +### Risk Management + +- **Data Integrity**: Incorrect block numbers can lead to data loss or duplication. +- **Testing**: Always test grafting in a development environment before deploying to production. + +## Conclusion + +Grafting is an effective strategy for deploying hotfixes in subgraph development, enabling you to: + +- **Quickly Recover** from critical errors without re-indexing. +- **Preserve Historical Data**, maintaining continuity for applications and users. +- **Ensure Service Availability** by minimizing downtime during critical fixes. 
+ +However, it’s important to use grafting judiciously and follow best practices to mitigate risks. After stabilizing your subgraph with the hotfix, plan to deploy a non-grafted version to ensure long-term maintainability. + +## Additional Resources + +- **[Grafting Documentation](/cookbook/grafting/)**: Replace a Contract and Keep its History With Grafting +- **[Understanding Deployment IDs](/querying/querying-by-subgraph-id-vs-deployment-id/)**: Learn the difference between Deployment ID and Subgraph ID. + +By incorporating grafting into your subgraph development workflow, you can enhance your ability to respond to issues swiftly, ensuring that your data services remain robust and reliable. + +## Subgraph Best Practices 1-6 + +1. [Improve Query Speed with Subgraph Pruning](/cookbook/pruning/) + +2. [Improve Indexing and Query Responsiveness by Using @derivedFrom](/cookbook/derivedfrom/) + +3. [Improve Indexing and Query Performance by Using Immutable Entities and Bytes as IDs](/cookbook/immutable-entities-bytes-as-ids/) + +4. [Improve Indexing Speed by Avoiding `eth_calls`](/cookbook/avoid-eth-calls/) + +5. [Simplify and Optimize with Timeseries and Aggregations](/cookbook/timeseries/) + +6. [Use Grafting for Quick Hotfix Deployment](/cookbook/grafting-hotfix/) diff --git a/website/pages/mr/cookbook/timeseries.mdx b/website/pages/mr/cookbook/timeseries.mdx new file mode 100644 index 000000000000..88ee70005a6e --- /dev/null +++ b/website/pages/mr/cookbook/timeseries.mdx @@ -0,0 +1,194 @@ +--- +title: Subgraph Best Practice 5 - Simplify and Optimize with Timeseries and Aggregations +--- + +## TLDR + +Leveraging the new time-series and aggregations feature in subgraphs can significantly enhance both indexing speed and query performance. + +## Overview + +Timeseries and aggregations reduce data processing overhead and accelerate queries by offloading aggregation computations to the database and simplifying mapping code. 
This approach is particularly effective when handling large volumes of time-based data. + +## Benefits of Timeseries and Aggregations + +1. Improved Indexing Time + +- Less Data to Load: Mappings handle less data since raw data points are stored as immutable timeseries entities. +- Database-Managed Aggregations: Aggregations are automatically computed by the database, reducing the workload on the mappings. + +2. Simplified Mapping Code + +- No Manual Calculations: Developers no longer need to write complex aggregation logic in mappings. +- Reduced Complexity: Simplifies code maintenance and minimizes the potential for errors. + +3. Dramatically Faster Queries + +- Immutable Data: All timeseries data is immutable, enabling efficient storage and retrieval. +- Efficient Data Separation: Aggregates are stored separately from raw timeseries data, allowing queries to process significantly less data—often several orders of magnitude less. + +### Important Considerations + +- Immutable Data: Timeseries data cannot be altered once written, ensuring data integrity and simplifying indexing. +- Automatic ID and Timestamp Management: id and timestamp fields are automatically managed by graph-node, reducing potential errors. +- Efficient Data Storage: By separating raw data from aggregates, storage is optimized, and queries run faster. + +## How to Implement Timeseries and Aggregations + +### Defining Timeseries Entities + +A timeseries entity represents raw data points collected over time. It is defined with the `@entity(timeseries: true)` annotation. Key requirements: + +- Immutable: Timeseries entities are always immutable. +- Mandatory Fields: + - `id`: Must be of type `Int8!` and is auto-incremented. + - `timestamp`: Must be of type `Timestamp!` and is automatically set to the block timestamp. + +Example: + +```graphql +type Data @entity(timeseries: true) { + id: Int8! + timestamp: Timestamp! + price: BigDecimal! 
+} +``` + +### Defining Aggregation Entities + +An aggregation entity computes aggregated values from a timeseries source. It is defined with the `@aggregation` annotation. Key components: + +- Annotation Arguments: + - `intervals`: Specifies time intervals (e.g., `["hour", "day"]`). + +Example: + +```graphql +type Stats @aggregation(intervals: ["hour", "day"], source: "Data") { + id: Int8! + timestamp: Timestamp! + sum: BigDecimal! @aggregate(fn: "sum", arg: "price") +} +``` + +In this example, Stats aggregates the price field from Data over hourly and daily intervals, computing the sum. + +### Querying Aggregated Data + +Aggregations are exposed via query fields that allow filtering and retrieval based on dimensions and time intervals. + +Example: + +```graphql +{ + tokenStats( + interval: "hour" + where: { token: "0x1234567890abcdef", timestamp_gte: "1704164640000000", timestamp_lt: "1704251040000000" } + ) { + id + timestamp + token { + id + } + totalVolume + priceUSD + count + } +} +``` + +### Using Dimensions in Aggregations + +Dimensions are non-aggregated fields used to group data points. They enable aggregations based on specific criteria, such as a token in a financial application. + +Example: + +### Timeseries Entity + +```graphql +type TokenData @entity(timeseries: true) { + id: Int8! + timestamp: Timestamp! + token: Token! + amount: BigDecimal! + priceUSD: BigDecimal! +} +``` + +### Aggregation Entity with Dimension + +```graphql +type TokenStats @aggregation(intervals: ["hour", "day"], source: "TokenData") { + id: Int8! + timestamp: Timestamp! + token: Token! + totalVolume: BigDecimal! @aggregate(fn: "sum", arg: "amount") + priceUSD: BigDecimal! @aggregate(fn: "last", arg: "priceUSD") + count: Int8! @aggregate(fn: "count", cumulative: true) +} +``` + +- Dimension Field: token groups the data, so aggregates are computed per token. +- Aggregates: + - totalVolume: Sum of amount. + - priceUSD: Last recorded priceUSD. 
+  - count: Cumulative count of records.
+
+### Aggregation Functions and Expressions
+
+Supported aggregation functions:
+
+- sum
+- count
+- min
+- max
+- first
+- last
+
+### The arg in @aggregate can be
+
+- A field name from the timeseries entity.
+- An expression using fields and constants.
+
+### Examples of Aggregation Expressions
+
+- Sum Token Value: @aggregate(fn: "sum", arg: "priceUSD * amount")
+- Maximum Positive Amount: @aggregate(fn: "max", arg: "greatest(amount0, amount1, 0)")
+- Conditional Sum: @aggregate(fn: "sum", arg: "case when amount0 > amount1 then amount0 else 0 end")
+
+Supported operators and functions include basic arithmetic (+, -, *, /), comparison operators, logical operators (and, or, not), and SQL functions like greatest, least, coalesce, etc.
+
+### Query Parameters
+
+- interval: Specifies the time interval (e.g., "hour").
+- where: Filters based on dimensions and timestamp ranges.
+- timestamp_gte / timestamp_lt: Filters for start and end times (microseconds since epoch).
+
+### Notes
+
+- Sorting: Results are automatically sorted by timestamp and id in descending order.
+- Current Data: An optional current argument can include the current, partially filled interval.
+
+### Conclusion
+
+Implementing timeseries and aggregations in subgraphs is a best practice for projects dealing with time-based data. This approach:
+
+- Enhances Performance: Speeds up indexing and querying by reducing data processing overhead.
+- Simplifies Development: Eliminates the need for manual aggregation logic in mappings.
+- Scales Efficiently: Handles large volumes of data without compromising on speed or responsiveness.
+
+By adopting this pattern, developers can build more efficient and scalable subgraphs, providing faster and more reliable data access to end-users. 
To learn more about implementing timeseries and aggregations, refer to the [Timeseries and Aggregations Readme](https://github.com/graphprotocol/graph-node/blob/master/docs/aggregations.md) and consider experimenting with this feature in your subgraphs. + +## Subgraph Best Practices 1-6 + +1. [Improve Query Speed with Subgraph Pruning](/cookbook/pruning/) + +2. [Improve Indexing and Query Responsiveness by Using @derivedFrom](/cookbook/derivedfrom/) + +3. [Improve Indexing and Query Performance by Using Immutable Entities and Bytes as IDs](/cookbook/immutable-entities-bytes-as-ids/) + +4. [Improve Indexing Speed by Avoiding `eth_calls`](/cookbook/avoid-eth-calls/) + +5. [Simplify and Optimize with Timeseries and Aggregations](/cookbook/timeseries/) + +6. [Use Grafting for Quick Hotfix Deployment](/cookbook/grafting-hotfix/) diff --git a/website/pages/mr/cookbook/upgrading-a-subgraph.mdx b/website/pages/mr/cookbook/upgrading-a-subgraph.mdx deleted file mode 100644 index c6877638a7ab..000000000000 --- a/website/pages/mr/cookbook/upgrading-a-subgraph.mdx +++ /dev/null @@ -1,156 +0,0 @@ ---- -title: Upgrading an Existing Subgraph to The Graph Network ---- - -## Introduction - -This is a guide on how to upgrade your subgraph from the hosted service to The Graph's decentralized network. Over 1,000 subgraphs have successfully upgraded to The Graph Network including projects like Snapshot, Loopring, Audius, Premia, Livepeer, Uma, Curve, Lido, and many more! - -The process of upgrading is quick and your subgraphs will forever benefit from the reliability and performance that you can only get on The Graph Network. - -### पूर्वतयारी - -- You have a subgraph deployed on the hosted service. - -## Upgrading an Existing Subgraph to The Graph Network - - - -If you are logged in to the hosted service, you can access a simple flow to upgrade your subgraphs from [your dashboard](https://thegraph.com/hosted-service/dashboard), or from an individual subgraph page. 
-
-> This process typically takes less than five minutes.
-
-1. Select the subgraph(s) you want to upgrade.
-2. Connect or enter the receiving wallet (the wallet that will become the owner of the subgraph).
-3. Click the "Upgrade" button.
-
-That's it! Your subgraphs will be deployed to Subgraph Studio, and published on The Graph Network. You can access the [Subgraph Studio](https://thegraph.com/studio/) to manage your subgraphs, logging in with the wallet specified during the upgrade process.
-
-You'll be able to view your subgraphs live on the decentralized network via [Graph Explorer](https://thegraph.com/explorer).
-
-### What next?
-
-When your subgraph is upgraded, it will automatically be indexed by the upgrade indexer. If the indexed chain is [fully supported by The Graph Network](/developing/supported-networks), you can add some GRT as "signal" to attract more Indexers. It is recommended to curate your subgraph with at least 3,000 GRT to attract 2-3 Indexers for higher quality of service.
-
-You can start to query your subgraph right away on The Graph Network, once you have generated an API key.
-
-### Create an API key
-
-You can generate an API key in Subgraph Studio [here](https://thegraph.com/studio/apikeys/).
-
-![API key creation page](/img/api-image.png)
-
-You can use this API key to query subgraphs on The Graph Network. All users start on the Free Plan, which includes 100,000 free queries per month. Developers can sign up for the Growth Plan by connecting a credit or debit card, or by depositing GRT into the Subgraph Studio billing system.
-
-> Note: see the [billing documentation](../billing) for more information on plans, and on managing your billing on Subgraph Studio.
-
-### Securing your API key
-
-It is recommended that you secure the API key by limiting its usage in two ways:
-
-1. Authorized Subgraphs
-2. Authorized Domain
-
-You can secure your API key [here](https://thegraph.com/studio/apikeys/). 
-
-![Subgraph lockdown page](/img/subgraph-lockdown.png)
-
-### Querying your subgraph on the decentralized network
-
-Now you can check the indexing status of the Indexers on the network in Graph Explorer (example [here](https://thegraph.com/explorer/subgraphs/Dtj2HicXKpoUjNB7ffdBkMwt3L9Sz3cbENd67AdHu6Vb?view=Indexers&chain=arbitrum-one)). The green line at the top indicates that at the time of posting 7 Indexers had successfully indexed that subgraph. In the Indexer tab you can also see which Indexers picked up your subgraph.
-
-![Rocket Pool subgraph](/img/rocket-pool-subgraph.png)
-
-As soon as the first Indexer has fully indexed your subgraph, you can start to query it on the decentralized network. To retrieve the query URL for your subgraph, copy it by clicking the symbol next to the query URL. You will see something like this:
-
-`https://gateway.thegraph.com/api/[api-key]/subgraphs/id/S9ihna8D733WTEShJ1KctSTCvY1VJ7gdVwhUujq4Ejo`
-
-Important: Make sure to replace `[api-key]` with an actual API key generated in the section above.
-
-You can now use that query URL in your dapp to send your GraphQL requests.
-
-Congratulations! You are now a pioneer of decentralization!
-
-> Note: Due to the distributed nature of the network it might be the case that different Indexers have indexed up to different blocks. In order to only receive fresh data you can specify the minimum block an Indexer has to have indexed in order to serve your query with the block: `{ number_gte: $minBlock }` field argument as shown in the example below:
-
-```graphql
-{
-  stakers(block: { number_gte: 14486109 }) {
-    id
-  }
-}
-```
-
-More information about the nature of the network and how to handle re-orgs is described in the documentation article [Distributed Systems](/querying/distributed-systems/).
-
-## Updating a Subgraph on the Network
-
-If you would like to update an existing subgraph on the network, you can do this by deploying a new version of your subgraph to Subgraph Studio using the Graph CLI.
-
-1. Make changes to your current subgraph.
-2. Run the following command, specifying the new version (e.g. v0.0.1, v0.0.2, etc.):
-
-```sh
-graph deploy --studio --version 
-```
-
-3. Test the new version in Subgraph Studio by querying in the playground.
-4. Publish the new version on The Graph Network. Remember that this requires gas (as described in the section above).
-
-### Owner Update Fee: Deep Dive
-
-> Note: Curation on Arbitrum has a flat bonding curve. Learn more about Arbitrum [here](/arbitrum/arbitrum-faq/).
-
-An update requires GRT to be migrated from the old version of the subgraph to the new version. This means that for every update, a new bonding curve will be created (more on bonding curves [here](/network/curating#bonding-curve-101)).
-
-The new bonding curve charges the 2.5% curation tax on all GRT being migrated to the new version. The owner must pay 50% of this, or 1.25%. The other 1.25% is absorbed by all the curators as a fee. This incentive design is in place to prevent an owner of a subgraph from being able to drain all their curators' funds with recursive update calls. If there is no curation activity, you will have to pay a minimum of 100 GRT in order to signal your own subgraph.
-
-Let's walk through an example (this only applies if your subgraph is being actively curated on):
-
-- 100,000 GRT is signaled using auto-migrate on v1 of a subgraph
-- The owner updates to v2. 100,000 GRT is migrated to a new bonding curve, where 97,500 GRT get put into the new curve and 2,500 GRT is burned
-- The owner then has 1,250 GRT burned to pay for half the fee. The owner must have this in their wallet before the update, otherwise, the update will not succeed. This happens in the same transaction as the update.
-
-_While this mechanism is currently live on the network, the community is discussing ways to reduce the cost of updates for subgraph developers._
-
-### Maintaining a Stable Version of a Subgraph
-
-If you're making a lot of changes to your subgraph, it is not a good idea to continually update it and front the update costs. Maintaining a stable and consistent version of your subgraph is critical, not only from the cost perspective but also so that Indexers can feel confident in their syncing times. Indexers should be notified when you plan an update so that Indexer syncing times are not impacted. Feel free to leverage the [#Indexers channel](https://discord.gg/JexvtHa7dq) on Discord to let Indexers know when you're versioning your subgraphs.
-
-Subgraphs are open APIs that external developers are leveraging. Open APIs need to follow strict standards so that they do not break external developers' applications. In The Graph Network, a subgraph developer must consider Indexers and how long it takes them to sync a new subgraph **as well as** other developers who are using their subgraphs.
-
-### Updating the Metadata of a Subgraph
-
-You can update the metadata of your subgraphs without having to publish a new version. The metadata includes the subgraph name, image, description, website URL, source code URL, and categories. Developers can do this by updating their subgraph details in Subgraph Studio, where all applicable fields can be edited.
-
-Make sure **Update Subgraph Details in Explorer** is checked and click on **Save**. If this is checked, an on-chain transaction will be generated that updates subgraph details in the Explorer without having to publish a new version with a new deployment.
-
-## Best Practices for Deploying a Subgraph to The Graph Network
-
-1. 
Leveraging an ENS name for Subgraph Development:
-
-- Set up your ENS [here](https://app.ens.domains/)
-- Add your ENS name to your settings [here](https://thegraph.com/explorer/settings?view=display-name).
-
-2. The more filled out your profiles are, the better the chances for your subgraphs to be indexed and curated.
-
-## Deprecating a Subgraph on The Graph Network
-
-Follow the steps [here](/managing/transfer-and-deprecate-a-subgraph) to deprecate your subgraph and remove it from The Graph Network.
-
-## Querying a Subgraph + Billing on The Graph Network
-
-The hosted service was set up to allow developers to deploy their subgraphs without any restrictions.
-
-On The Graph Network, query fees have to be paid as a core part of the protocol's incentives. For more information on subscribing to APIs and paying the query fees, check out the billing documentation [here](/billing/).
-
-## Additional Resources
-
-If you're still confused, fear not! Check out the following resources or watch our video guide on upgrading subgraphs to the decentralized network below:
-
-
-
-- [The Graph Network Contracts](https://github.com/graphprotocol/contracts)
-- [Curation Contract](https://github.com/graphprotocol/contracts/blob/dev/contracts/curation/Curation.sol) - the underlying contract that the GNS wraps around
-  - Address - `0x8fe00a685bcb3b2cc296ff6ffeab10aca4ce1538`
-- [Subgraph Studio documentation](/deploying/subgraph-studio)
diff --git a/website/pages/nl/cookbook/base-testnet.mdx b/website/pages/nl/cookbook/base-testnet.mdx
deleted file mode 100644
index 3516b2551106..000000000000
--- a/website/pages/nl/cookbook/base-testnet.mdx
+++ /dev/null
@@ -1,111 +0,0 @@
----
-title: Bouwen van Subgraphs op Base
----
-
-Deze handleiding leidt je snel door hoe je jouw subgraph op Base testnet kunt initialiseren, creëren en implementeren. 
- -Wat je nodig hebt: - -- A Base Sepolia testnet contract address -- Een crypto wallet (bijvoorbeeld MetaMask of Coinbase Wallet) - -## Subgraph Studio - -### 1. Installeer de Graph CLI - -De Graph CLI (>=v0.41.0) is geschreven in JavaScript en je zult npm of yarn geïnstalleerd moeten hebben om het te gebruiken. - -```sh -# NPM -npm install -g @graphprotocol/graph-cli - -# Yarn -yarn global add @graphprotocol/graph-cli -``` - -### 2. Create your subgraph in Subgraph Studio - -Go to [Subgraph Studio](https://thegraph.com/studio/) and connect your crypto wallet. - -Once connected, click "Create a Subgraph", enter a name for your subgraph and click Create a Subgraph. - -### 3. Initialiseer je Subgraph - -> You can find specific commands for your subgraph in Subgraph Studio. - -Zorg ervoor dat de graph-cli is bijgewerkt naar de nieuwste versie (boven 0.41.0) - -```sh -graph --version -``` - -Initialiseer je subgraph vanuit een bestaand contract. - -```sh -graph init --studio -``` - -Je subgraph slug is een identificator voor je subgraph. De CLI tool zal je door de stappen leiden om een subgraph te creëren, inclusief: - -- Protocol: ethereum -- Subgraph slug: `` -- Directory om de subgraph in te creëren: `` -- Ethereum network: base-sepolia -- Contract address: `` -- Start block (optional) -- Contract name: `` -- Yes/no to indexing events (yes means your subgraph will be bootstrapped with entities in the schema and simple mappings for emitted events) - -### 3. Write your Subgraph - -> If emitted events are the only thing you want to index, then no additional work is required, and you can skip to the next step. - -The previous command creates a scaffold subgraph that you can use as a starting point for building your subgraph. When making changes to the subgraph, you will mainly work with three files: - -- Manifest (subgraph.yaml) - The manifest defines what datasources your subgraphs will index. 
Make sure to add `base-sepolia` as the network name in the manifest file to deploy your subgraph on Base Sepolia.
-- Schema (schema.graphql) - The GraphQL schema defines what data you wish to retrieve from the subgraph.
-- AssemblyScript Mappings (mapping.ts) - This is the code that translates data from your datasources to the entities defined in the schema.
-
-If you want to index additional data, you will need to extend the manifest, schema and mappings.
-
-For more information on how to write your subgraph, see [Creating a Subgraph](/developing/creating-a-subgraph).
-
-### 4. Deploy to Subgraph Studio
-
-Before you can deploy your subgraph, you will need to authenticate with Subgraph Studio. You can do this by running the following command:
-
-Authenticate the subgraph in Studio:
-
-```
-graph auth --studio
-```
-
-Next, enter your subgraph's directory.
-
-```
-cd
-```
-
-Build your subgraph with the following command:
-
-```
-graph codegen && graph build
-```
-
-Finally, you can deploy your subgraph using this command:
-
-```
-graph deploy --studio
-```
-
-### 5. Query your subgraph
-
-Once your subgraph is deployed, you can query it from your dapp using the `Development Query URL` in Subgraph Studio.
-
-Note: The Studio API is rate-limited, so it should preferably be used for development and testing.
-
-To learn more about querying data from your subgraph, see the [Querying a Subgraph](/querying/querying-the-graph) page.
diff --git a/website/pages/nl/cookbook/grafting-hotfix.mdx b/website/pages/nl/cookbook/grafting-hotfix.mdx
new file mode 100644
index 000000000000..4be0a0b07790
--- /dev/null
+++ b/website/pages/nl/cookbook/grafting-hotfix.mdx
@@ -0,0 +1,186 @@
+---
+title: Subgraph Best Practice 6 - Use Grafting for Quick Hotfix Deployment
+---
+
+## TLDR
+
+Grafting is a powerful feature in subgraph development that allows you to build and deploy new subgraphs while reusing the indexed data from existing ones.
+ +### Overview + +This feature enables quick deployment of hotfixes for critical issues, eliminating the need to re-index the entire subgraph from scratch. By preserving historical data, grafting minimizes downtime and ensures continuity in data services. + +## Benefits of Grafting for Hotfixes + +1. **Rapid Deployment** + + - **Minimize Downtime**: When a subgraph encounters a critical error and stops indexing, grafting enables you to deploy a fix immediately without waiting for re-indexing. + - **Immediate Recovery**: The new subgraph continues from the last indexed block, ensuring that data services remain uninterrupted. + +2. **Data Preservation** + + - **Reuse Historical Data**: Grafting copies the existing data from the base subgraph, so you don’t lose valuable historical records. + - **Consistency**: Maintains data continuity, which is crucial for applications relying on consistent historical data. + +3. **Efficiency** + - **Save Time and Resources**: Avoids the computational overhead of re-indexing large datasets. + - **Focus on Fixes**: Allows developers to concentrate on resolving issues rather than managing data recovery. + +## Best Practices When Using Grafting for Hotfixes + +1. **Initial Deployment Without Grafting** + + - **Start Clean**: Always deploy your initial subgraph without grafting to ensure that it’s stable and functions as expected. + - **Test Thoroughly**: Validate the subgraph’s performance to minimize the need for future hotfixes. + +2. **Implementing the Hotfix with Grafting** + + - **Identify the Issue**: When a critical error occurs, determine the block number of the last successfully indexed event. + - **Create a New Subgraph**: Develop a new subgraph that includes the hotfix. + - **Configure Grafting**: Use grafting to copy data up to the identified block number from the failed subgraph. + - **Deploy Quickly**: Publish the grafted subgraph to restore service as soon as possible. + +3. 
**Post-Hotfix Actions** + + - **Monitor Performance**: Ensure the grafted subgraph is indexing correctly and the hotfix resolves the issue. + - **Republish Without Grafting**: Once stable, deploy a new version of the subgraph without grafting for long-term maintenance. + > Note: Relying on grafting indefinitely is not recommended as it can complicate future updates and maintenance. + - **Update References**: Redirect any services or applications to use the new, non-grafted subgraph. + +4. **Important Considerations** + - **Careful Block Selection**: Choose the graft block number carefully to prevent data loss. + - **Tip**: Use the block number of the last correctly processed event. + - **Use Deployment ID**: Ensure you reference the Deployment ID of the base subgraph, not the Subgraph ID. + - **Note**: The Deployment ID is the unique identifier for a specific subgraph deployment. + - **Feature Declaration**: Remember to declare grafting in the subgraph manifest under features. + +## Example: Deploying a Hotfix with Grafting + +Suppose you have a subgraph tracking a smart contract that has stopped indexing due to a critical error. Here’s how you can use grafting to deploy a hotfix. + +1. **Failed Subgraph Manifest (subgraph.yaml)** + + ```yaml + specVersion: 1.0.0 + schema: + file: ./schema.graphql + dataSources: + - kind: ethereum/contract + name: OldSmartContract + network: sepolia + source: + address: '0xOldContractAddress' + abi: Lock + startBlock: 5000000 + mapping: + kind: ethereum/events + apiVersion: 0.0.7 + language: wasm/assemblyscript + entities: + - Withdrawal + abis: + - name: Lock + file: ./abis/OldLock.json + eventHandlers: + - event: Withdrawal(uint256,uint256) + handler: handleOldWithdrawal + file: ./src/old-lock.ts + ``` + +2. 
**New Grafted Subgraph Manifest (subgraph.yaml)** + ```yaml + specVersion: 1.0.0 + schema: + file: ./schema.graphql + dataSources: + - kind: ethereum/contract + name: NewSmartContract + network: sepolia + source: + address: '0xNewContractAddress' + abi: Lock + startBlock: 6000001 # Block after the last indexed block + mapping: + kind: ethereum/events + apiVersion: 0.0.7 + language: wasm/assemblyscript + entities: + - Withdrawal + abis: + - name: Lock + file: ./abis/Lock.json + eventHandlers: + - event: Withdrawal(uint256,uint256) + handler: handleWithdrawal + file: ./src/lock.ts + features: + - grafting + graft: + base: QmBaseDeploymentID # Deployment ID of the failed subgraph + block: 6000000 # Last successfully indexed block + ``` + +**Explanation:** + +- **Data Source Update**: The new subgraph points to 0xNewContractAddress, which may be a fixed version of the smart contract. +- **Start Block**: Set to one block after the last successfully indexed block to avoid reprocessing the error. +- **Grafting Configuration**: + - **base**: Deployment ID of the failed subgraph. + - **block**: Block number where grafting should begin. + +3. **Deployment Steps** + + - **Update the Code**: Implement the hotfix in your mapping scripts (e.g., handleWithdrawal). + - **Adjust the Manifest**: As shown above, update the `subgraph.yaml` with grafting configurations. + - **Deploy the Subgraph**: + - Authenticate with the Graph CLI. + - Deploy the new subgraph using `graph deploy`. + +4. **Post-Deployment** + - **Verify Indexing**: Check that the subgraph is indexing correctly from the graft point. + - **Monitor Data**: Ensure that new data is being captured and the hotfix is effective. + - **Plan for Republish**: Schedule the deployment of a non-grafted version for long-term stability. 
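The manifest relationships described above (graft onto the Deployment ID, start one block after the graft block, declare `grafting` under `features`) can be sanity-checked before deploying. The following sketch is illustrative only — the helper and its input shape are not part of the Graph CLI; the field names simply mirror the `subgraph.yaml` example above, assumed already parsed into a plain dictionary:

```python
def check_graft_config(manifest: dict) -> list:
    """Return a list of problems with a grafted manifest (empty list = looks OK)."""
    problems = []
    graft = manifest.get("graft")
    if graft is None:
        return ["missing graft section"]
    # Graft onto the Deployment ID (Qm...), not the Subgraph ID.
    if not str(graft.get("base", "")).startswith("Qm"):
        problems.append("graft.base should be a Deployment ID (starts with 'Qm')")
    # Start one block after the last successfully indexed block.
    start_block = manifest["dataSources"][0]["source"]["startBlock"]
    if start_block != graft["block"] + 1:
        problems.append("startBlock should be graft.block + 1")
    # Grafting must be declared under features.
    if "grafting" not in manifest.get("features", []):
        problems.append("declare 'grafting' under features")
    return problems

# Values from the example manifest above.
manifest = {
    "features": ["grafting"],
    "graft": {"base": "QmBaseDeploymentID", "block": 6000000},
    "dataSources": [{"source": {"startBlock": 6000001}}],
}
print(check_graft_config(manifest))  # prints []
```

An empty list means the three rules hold; anything else is a problem to fix before running `graph deploy`.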
+ +## Warnings and Cautions + +While grafting is a powerful tool for deploying hotfixes quickly, there are specific scenarios where it should be avoided to maintain data integrity and ensure optimal performance. + +- **Incompatible Schema Changes**: If your hotfix requires altering the type of existing fields or removing fields from your schema, grafting is not suitable. Grafting expects the new subgraph’s schema to be compatible with the base subgraph’s schema. Incompatible changes can lead to data inconsistencies and errors because the existing data won’t align with the new schema. +- **Significant Mapping Logic Overhauls**: When the hotfix involves substantial modifications to your mapping logic—such as changing how events are processed or altering handler functions—grafting may not function correctly. The new logic might not be compatible with the data processed under the old logic, leading to incorrect data or failed indexing. +- **Deployments to The Graph Network**: Grafting is not recommended for subgraphs intended for The Graph’s decentralized network (mainnet). It can complicate indexing and may not be fully supported by all Indexers, potentially causing unexpected behavior or increased costs. For mainnet deployments, it’s safer to re-index the subgraph from scratch to ensure full compatibility and reliability. + +### Risk Management + +- **Data Integrity**: Incorrect block numbers can lead to data loss or duplication. +- **Testing**: Always test grafting in a development environment before deploying to production. + +## Conclusion + +Grafting is an effective strategy for deploying hotfixes in subgraph development, enabling you to: + +- **Quickly Recover** from critical errors without re-indexing. +- **Preserve Historical Data**, maintaining continuity for applications and users. +- **Ensure Service Availability** by minimizing downtime during critical fixes. 
+ +However, it’s important to use grafting judiciously and follow best practices to mitigate risks. After stabilizing your subgraph with the hotfix, plan to deploy a non-grafted version to ensure long-term maintainability. + +## Additional Resources + +- **[Grafting Documentation](/cookbook/grafting/)**: Replace a Contract and Keep its History With Grafting +- **[Understanding Deployment IDs](/querying/querying-by-subgraph-id-vs-deployment-id/)**: Learn the difference between Deployment ID and Subgraph ID. + +By incorporating grafting into your subgraph development workflow, you can enhance your ability to respond to issues swiftly, ensuring that your data services remain robust and reliable. + +## Subgraph Best Practices 1-6 + +1. [Improve Query Speed with Subgraph Pruning](/cookbook/pruning/) + +2. [Improve Indexing and Query Responsiveness by Using @derivedFrom](/cookbook/derivedfrom/) + +3. [Improve Indexing and Query Performance by Using Immutable Entities and Bytes as IDs](/cookbook/immutable-entities-bytes-as-ids/) + +4. [Improve Indexing Speed by Avoiding `eth_calls`](/cookbook/avoid-eth-calls/) + +5. [Simplify and Optimize with Timeseries and Aggregations](/cookbook/timeseries/) + +6. [Use Grafting for Quick Hotfix Deployment](/cookbook/grafting-hotfix/) diff --git a/website/pages/nl/cookbook/timeseries.mdx b/website/pages/nl/cookbook/timeseries.mdx new file mode 100644 index 000000000000..88ee70005a6e --- /dev/null +++ b/website/pages/nl/cookbook/timeseries.mdx @@ -0,0 +1,194 @@ +--- +title: Subgraph Best Practice 5 - Simplify and Optimize with Timeseries and Aggregations +--- + +## TLDR + +Leveraging the new time-series and aggregations feature in subgraphs can significantly enhance both indexing speed and query performance. + +## Overview + +Timeseries and aggregations reduce data processing overhead and accelerate queries by offloading aggregation computations to the database and simplifying mapping code. 
This approach is particularly effective when handling large volumes of time-based data. + +## Benefits of Timeseries and Aggregations + +1. Improved Indexing Time + +- Less Data to Load: Mappings handle less data since raw data points are stored as immutable timeseries entities. +- Database-Managed Aggregations: Aggregations are automatically computed by the database, reducing the workload on the mappings. + +2. Simplified Mapping Code + +- No Manual Calculations: Developers no longer need to write complex aggregation logic in mappings. +- Reduced Complexity: Simplifies code maintenance and minimizes the potential for errors. + +3. Dramatically Faster Queries + +- Immutable Data: All timeseries data is immutable, enabling efficient storage and retrieval. +- Efficient Data Separation: Aggregates are stored separately from raw timeseries data, allowing queries to process significantly less data—often several orders of magnitude less. + +### Important Considerations + +- Immutable Data: Timeseries data cannot be altered once written, ensuring data integrity and simplifying indexing. +- Automatic ID and Timestamp Management: id and timestamp fields are automatically managed by graph-node, reducing potential errors. +- Efficient Data Storage: By separating raw data from aggregates, storage is optimized, and queries run faster. + +## How to Implement Timeseries and Aggregations + +### Defining Timeseries Entities + +A timeseries entity represents raw data points collected over time. It is defined with the `@entity(timeseries: true)` annotation. Key requirements: + +- Immutable: Timeseries entities are always immutable. +- Mandatory Fields: + - `id`: Must be of type `Int8!` and is auto-incremented. + - `timestamp`: Must be of type `Timestamp!` and is automatically set to the block timestamp. + +Example: + +```graphql +type Data @entity(timeseries: true) { + id: Int8! + timestamp: Timestamp! + price: BigDecimal! 
+} +``` + +### Defining Aggregation Entities + +An aggregation entity computes aggregated values from a timeseries source. It is defined with the `@aggregation` annotation. Key components: + +- Annotation Arguments: + - `intervals`: Specifies time intervals (e.g., `["hour", "day"]`). + +Example: + +```graphql +type Stats @aggregation(intervals: ["hour", "day"], source: "Data") { + id: Int8! + timestamp: Timestamp! + sum: BigDecimal! @aggregate(fn: "sum", arg: "price") +} +``` + +In this example, Stats aggregates the price field from Data over hourly and daily intervals, computing the sum. + +### Querying Aggregated Data + +Aggregations are exposed via query fields that allow filtering and retrieval based on dimensions and time intervals. + +Example: + +```graphql +{ + tokenStats( + interval: "hour" + where: { token: "0x1234567890abcdef", timestamp_gte: "1704164640000000", timestamp_lt: "1704251040000000" } + ) { + id + timestamp + token { + id + } + totalVolume + priceUSD + count + } +} +``` + +### Using Dimensions in Aggregations + +Dimensions are non-aggregated fields used to group data points. They enable aggregations based on specific criteria, such as a token in a financial application. + +Example: + +### Timeseries Entity + +```graphql +type TokenData @entity(timeseries: true) { + id: Int8! + timestamp: Timestamp! + token: Token! + amount: BigDecimal! + priceUSD: BigDecimal! +} +``` + +### Aggregation Entity with Dimension + +```graphql +type TokenStats @aggregation(intervals: ["hour", "day"], source: "TokenData") { + id: Int8! + timestamp: Timestamp! + token: Token! + totalVolume: BigDecimal! @aggregate(fn: "sum", arg: "amount") + priceUSD: BigDecimal! @aggregate(fn: "last", arg: "priceUSD") + count: Int8! @aggregate(fn: "count", cumulative: true) +} +``` + +- Dimension Field: token groups the data, so aggregates are computed per token. +- Aggregates: + - totalVolume: Sum of amount. + - priceUSD: Last recorded priceUSD. 
+  - count: Cumulative count of records.
+
+### Aggregation Functions and Expressions
+
+Supported aggregation functions:
+
+- sum
+- count
+- min
+- max
+- first
+- last
+
+### The `arg` in @aggregate can be
+
+- A field name from the timeseries entity.
+- An expression using fields and constants.
+
+### Examples of Aggregation Expressions
+
+- Sum Token Value: @aggregate(fn: "sum", arg: "priceUSD \* amount")
+- Maximum Positive Amount: @aggregate(fn: "max", arg: "greatest(amount0, amount1, 0)")
+- Conditional Sum: @aggregate(fn: "sum", arg: "case when amount0 > amount1 then amount0 else 0 end")
+
+Supported operators and functions include basic arithmetic (+, -, \*, /), comparison operators, logical operators (and, or, not), and SQL functions like greatest, least, coalesce, etc.
+
+### Query Parameters
+
+- interval: Specifies the time interval (e.g., "hour").
+- where: Filters based on dimensions and timestamp ranges.
+- timestamp_gte / timestamp_lt: Filters for start and end times (microseconds since epoch).
+
+### Notes
+
+- Sorting: Results are automatically sorted by timestamp and id in descending order.
+- Current Data: An optional current argument can include the current, partially filled interval.
+
+## Conclusion
+
+Implementing timeseries and aggregations in subgraphs is a best practice for projects dealing with time-based data. This approach:
+
+- Enhances Performance: Speeds up indexing and querying by reducing data processing overhead.
+- Simplifies Development: Eliminates the need for manual aggregation logic in mappings.
+- Scales Efficiently: Handles large volumes of data without compromising on speed or responsiveness.
+
+By adopting this pattern, developers can build more efficient and scalable subgraphs, providing faster and more reliable data access to end-users.
To learn more about implementing timeseries and aggregations, refer to the [Timeseries and Aggregations Readme](https://github.com/graphprotocol/graph-node/blob/master/docs/aggregations.md) and consider experimenting with this feature in your subgraphs. + +## Subgraph Best Practices 1-6 + +1. [Improve Query Speed with Subgraph Pruning](/cookbook/pruning/) + +2. [Improve Indexing and Query Responsiveness by Using @derivedFrom](/cookbook/derivedfrom/) + +3. [Improve Indexing and Query Performance by Using Immutable Entities and Bytes as IDs](/cookbook/immutable-entities-bytes-as-ids/) + +4. [Improve Indexing Speed by Avoiding `eth_calls`](/cookbook/avoid-eth-calls/) + +5. [Simplify and Optimize with Timeseries and Aggregations](/cookbook/timeseries/) + +6. [Use Grafting for Quick Hotfix Deployment](/cookbook/grafting-hotfix/) diff --git a/website/pages/nl/cookbook/upgrading-a-subgraph.mdx b/website/pages/nl/cookbook/upgrading-a-subgraph.mdx deleted file mode 100644 index a546f02c0800..000000000000 --- a/website/pages/nl/cookbook/upgrading-a-subgraph.mdx +++ /dev/null @@ -1,156 +0,0 @@ ---- -title: Upgrading an Existing Subgraph to The Graph Network ---- - -## Introduction - -This is a guide on how to upgrade your subgraph from the hosted service to The Graph's decentralized network. Over 1,000 subgraphs have successfully upgraded to The Graph Network including projects like Snapshot, Loopring, Audius, Premia, Livepeer, Uma, Curve, Lido, and many more! - -The process of upgrading is quick and your subgraphs will forever benefit from the reliability and performance that you can only get on The Graph Network. - -### Prerequisites - -- You have a subgraph deployed on the hosted service. - -## Upgrading an Existing Subgraph to The Graph Network - - - -If you are logged in to the hosted service, you can access a simple flow to upgrade your subgraphs from [your dashboard](https://thegraph.com/hosted-service/dashboard), or from an individual subgraph page. 
- -> This process typically takes less than five minutes. - -1. Select the subgraph(s) you want to upgrade. -2. Connect or enter the receiving wallet (the wallet that will become the owner of the subgraph). -3. Click the "Upgrade" button. - -That's it! Your subgraphs will be deployed to Subgraph Studio, and published on The Graph Network. You can access the [Subgraph Studio](https://thegraph.com/studio/) to manage your subgraphs, logging in with the wallet specified during the upgrade process. - -You'll be able to view your subgraphs live on the decentralized network via [Graph Explorer](https://thegraph.com/explorer). - -### What next? - -When your subgraph is upgraded, it will automatically be indexed by the upgrade indexer. If the indexed chain is [fully supported by The Graph Network](/developing/supported-networks), you can add some GRT as "signal", to attract more indexers. It is recommended to curate your subgraph with at least 3,000 GRT to attract 2-3 Indexers for higher quality of service. - -You can start to query your subgraph right away on The Graph Network, once you have generated an API key. - -### Create an API key - -You can generate an API key in Subgraph Studio [here](https://thegraph.com/studio/apikeys/). - -![API key creation page](/img/api-image.png) - -You can use this API key to query subgraphs on The Graph Network. All users start on the Free Plan, which includes 100,000 free queries per month. Developers can sign up for the Growth Plan by connecting a credit or debit card, or by depositing GRT to Subgraph Studio billing system. - -> Note: see the [billing documentation](../billing) for more information on plans, and on managing your billing on Subgraph Studio. - -### Securing your API key - -It is recommended that you secure the API by limiting its usage in two ways: - -1. Authorized Subgraphs -2. Authorized Domain - -You can secure your API key [here](https://thegraph.com/studio/apikeys/). 
- -![Subgraph lockdown page](/img/subgraph-lockdown.png) - -### Querying your subgraph on the decentralized network - -Now you can check the indexing status of the Indexers on the network in Graph Explorer (example [here](https://thegraph.com/explorer/subgraphs/Dtj2HicXKpoUjNB7ffdBkMwt3L9Sz3cbENd67AdHu6Vb?view=Indexers&chain=arbitrum-one)). The green line at the top indicates that at the time of posting 7 Indexers successfully indexed that subgraph. Also in the Indexer tab you can see which Indexers picked up your subgraph. - -![Rocket Pool subgraph](/img/rocket-pool-subgraph.png) - -As soon as the first Indexer has fully indexed your subgraph you can start to query the subgraph on the decentralized network. In order to retrieve the query URL for your subgraph, you can copy/paste it by clicking on the symbol next to the query URL. You will see something like this: - -`https://gateway.thegraph.com/api/[api-key]/subgraphs/id/S9ihna8D733WTEShJ1KctSTCvY1VJ7gdVwhUujq4Ejo` - -Important: Make sure to replace `[api-key]` with an actual API key generated in the section above. - -You can now use that Query URL in your dapp to send your GraphQL requests to. - -Congratulations! You are now a pioneer of decentralization! - -> Note: Due to the distributed nature of the network it might be the case that different Indexers have indexed up to different blocks. In order to only receive fresh data you can specify the minimum block an Indexer has to have indexed in order to serve your query with the block: `{ number_gte: $minBlock }` field argument as shown in the example below: - -```graphql -{ - stakers(block: { number_gte: 14486109 }) { - id - } -} -``` - -More information about the nature of the network and how to handle re-orgs are described in the documentation article [Distributed Systems](/querying/distributed-systems/). 
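The query above can also be sent from a script. This sketch uses only the Python standard library; the API key is a placeholder you must replace, the subgraph ID is the example one from this section, and the actual network call is left commented out:

```python
import json
from urllib import request

# Placeholders: substitute your own API key (from Subgraph Studio) and subgraph ID.
API_KEY = "your-api-key"
SUBGRAPH_ID = "S9ihna8D733WTEShJ1KctSTCvY1VJ7gdVwhUujq4Ejo"
url = f"https://gateway.thegraph.com/api/{API_KEY}/subgraphs/id/{SUBGRAPH_ID}"

# Same query as above: only accept responses indexed up to at least this block.
query = "{ stakers(block: { number_gte: 14486109 }) { id } }"
payload = json.dumps({"query": query}).encode("utf-8")

req = request.Request(url, data=payload, headers={"Content-Type": "application/json"})
# Uncomment to actually send the request (requires a valid API key):
# with request.urlopen(req) as resp:
#     print(json.load(resp)["data"]["stakers"])
```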
-
-## Updating a Subgraph on the Network
-
-If you would like to update an existing subgraph on the network, you can do this by deploying a new version of your subgraph to Subgraph Studio using the Graph CLI.
-
-1. Make changes to your current subgraph.
-2. Deploy the new version and specify it in the command (e.g. v0.0.1, v0.0.2, etc.):
-
-```sh
-graph deploy --studio --version
-```
-
-3. Test the new version in Subgraph Studio by querying in the playground.
-4. Publish the new version on The Graph Network. Remember that this requires gas (as described in the section above).
-
-### Owner Update Fee: Deep Dive
-
-> Note: Curation on Arbitrum has a flat bonding curve. Learn more about Arbitrum [here](/arbitrum/arbitrum-faq/).
-
-An update requires GRT to be migrated from the old version of the subgraph to the new version. This means that for every update, a new bonding curve will be created (more on bonding curves [here](/network/curating#bonding-curve-101)).
-
-The new bonding curve charges the 2.5% curation tax on all GRT being migrated to the new version. The owner must pay 50% of this, or 1.25%. The other 1.25% is absorbed by all the curators as a fee. This incentive design is in place to prevent an owner of a subgraph from being able to drain all their curators' funds with recursive update calls. If there is no curation activity, you will have to pay a minimum of 100 GRT in order to signal your own subgraph.
-
-Let's take an example (this only applies if your subgraph is being actively curated on):
-
-- 100,000 GRT is signaled using auto-migrate on v1 of a subgraph
-- Owner updates to v2. 100,000 GRT is migrated to a new bonding curve, where 97,500 GRT get put into the new curve and 2,500 GRT is burned
-- The owner then has 1,250 GRT burned to pay for half the fee. The owner must have this in their wallet before the update, otherwise, the update will not succeed. This happens in the same transaction as the update.
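The arithmetic behind that worked example can be sketched in a few lines. Note the rates used here (2.5% total on the migrated GRT, with the owner covering half) are the ones implied by the numbers in the example above, not values read from the contracts:

```python
CURATION_TAX = 0.025  # total tax on GRT migrated to the new bonding curve
OWNER_SHARE = 0.5     # fraction of the tax the subgraph owner pays

def update_fee(signaled_grt: float) -> dict:
    """Break down the cost of an update for a given amount of signaled GRT."""
    burned = signaled_grt * CURATION_TAX
    return {
        "migrated_to_new_curve": signaled_grt - burned,
        "total_burned": burned,
        "owner_pays": burned * OWNER_SHARE,
    }

print(update_fee(100_000))
# {'migrated_to_new_curve': 97500.0, 'total_burned': 2500.0, 'owner_pays': 1250.0}
```

This reproduces the 97,500 GRT migrated, 2,500 GRT burned, and 1,250 GRT owner payment from the example.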
-
-_While this mechanism is currently live on the network, the community is discussing ways to reduce the cost of updates for subgraph developers._
-
-### Maintaining a Stable Version of a Subgraph
-
-If you're making a lot of changes to your subgraph, it is not a good idea to continually update it and front the update costs. Maintaining a stable and consistent version of your subgraph is critical, not only from the cost perspective but also so that Indexers can feel confident in their syncing times. Indexers should be notified when you plan an update so that their syncing times are not impacted. Feel free to leverage the [#Indexers channel](https://discord.gg/JexvtHa7dq) on Discord to let Indexers know when you're versioning your subgraphs.
-
-Subgraphs are open APIs that external developers are leveraging. Open APIs need to follow strict standards so that they do not break external developers' applications. In The Graph Network, a subgraph developer must consider Indexers and how long it takes them to sync a new subgraph **as well as** other developers who are using their subgraphs.
-
-### Updating the Metadata of a Subgraph
-
-You can update the metadata of your subgraphs without having to publish a new version. The metadata includes the subgraph name, image, description, website URL, source code URL, and categories. Developers can do this by updating their subgraph details in Subgraph Studio, where all applicable fields can be edited.
-
-Make sure **Update Subgraph Details in Explorer** is checked and click on **Save**. If this is checked, an on-chain transaction will be generated that updates subgraph details in the Explorer without having to publish a new version with a new deployment.
-
-## Best Practices for Deploying a Subgraph to The Graph Network
-
-1.
Leveraging an ENS name for Subgraph Development: - -- Set up your ENS [here](https://app.ens.domains/) -- Add your ENS name to your settings [here](https://thegraph.com/explorer/settings?view=display-name). - -2. The more filled out your profiles are, the better the chances for your subgraphs to be indexed and curated. - -## Deprecating a Subgraph on The Graph Network - -Follow the steps [here](/managing/transfer-and-deprecate-a-subgraph) to deprecate your subgraph and remove it from The Graph Network. - -## Querying a Subgraph + Billing on The Graph Network - -The hosted service was set up to allow developers to deploy their subgraphs without any restrictions. - -On The Graph Network, query fees have to be paid as a core part of the protocol's incentives. For more information on subscribing to APIs and paying the query fees, check out billing documentation [here](/billing/). - -## Additional Resources - -If you're still confused, fear not! Check out the following resources or watch our video guide on upgrading subgraphs to the decentralized network below: - - - -- [The Graph Network Contracts](https://github.com/graphprotocol/contracts) -- [Curation Contract](https://github.com/graphprotocol/contracts/blob/dev/contracts/curation/Curation.sol) - the underlying contract that the GNS wraps around - - Address - `0x8fe00a685bcb3b2cc296ff6ffeab10aca4ce1538` -- [Subgraph Studio documentation](/deploying/subgraph-studio) diff --git a/website/pages/pl/cookbook/base-testnet.mdx b/website/pages/pl/cookbook/base-testnet.mdx deleted file mode 100644 index 3a1d98a44103..000000000000 --- a/website/pages/pl/cookbook/base-testnet.mdx +++ /dev/null @@ -1,111 +0,0 @@ ---- -title: Building Subgraphs on Base ---- - -This guide will quickly take you through how to initialize, create, and deploy your subgraph on Base testnet. - -What you'll need: - -- A Base Sepolia testnet contract address -- A crypto wallet (e.g. MetaMask or Coinbase Wallet) - -## Subgraph Studio - -### 1. 
Install the Graph CLI
-
-The Graph CLI (>=v0.41.0) is written in JavaScript and you will need to have either `npm` or `yarn` installed to use it.
-
-```sh
-# NPM
-npm install -g @graphprotocol/graph-cli
-
-# Yarn
-yarn global add @graphprotocol/graph-cli
-```
-
-### 2. Create your subgraph in Subgraph Studio
-
-Go to [Subgraph Studio](https://thegraph.com/studio/) and connect your crypto wallet.
-
-Once connected, click "Create a Subgraph", enter a name for your subgraph and click Create a Subgraph.
-
-### 3. Initialize your Subgraph
-
-> You can find specific commands for your subgraph in Subgraph Studio.
-
-Make sure that graph-cli is updated to the latest version (above 0.41.0)
-
-```sh
-graph --version
-```
-
-Initialize your subgraph from an existing contract.
-
-```sh
-graph init --studio
-```
-
-Your subgraph slug is an identifier for your subgraph. The CLI tool will walk you through the steps for creating a subgraph, including:
-
-- Protocol: ethereum
-- Subgraph slug: ``
-- Directory to create the subgraph in: ``
-- Ethereum network: base-sepolia
-- Contract address: ``
-- Start block (optional)
-- Contract name: ``
-- Yes/no to indexing events (yes means your subgraph will be bootstrapped with entities in the schema and simple mappings for emitted events)
-
-### 3. Write your Subgraph
-
-> If emitted events are the only thing you want to index, then no additional work is required, and you can skip to the next step.
-
-The previous command creates a scaffold subgraph that you can use as a starting point for building your subgraph. When making changes to the subgraph, you will mainly work with three files:
-
-- Manifest (subgraph.yaml) - The manifest defines what datasources your subgraphs will index. Make sure to add `base-sepolia` as the network name in the manifest file to deploy your subgraph on Base Sepolia.
-- Schema (schema.graphql) - The GraphQL schema defines what data you wish to retrieve from the subgraph.
-- AssemblyScript Mappings (mapping.ts) - This is the code that translates data from your datasources to the entities defined in the schema.
-
-If you want to index additional data, you will need to extend the manifest, schema and mappings.
-
-For more information on how to write your subgraph, see [Creating a Subgraph](/developing/creating-a-subgraph).
-
-### 4. Deploy to Subgraph Studio
-
-Before you can deploy your subgraph, you will need to authenticate with Subgraph Studio. You can do this by running the following command:
-
-Authenticate the subgraph in Studio:
-
-```
-graph auth --studio
-```
-
-Next, enter your subgraph's directory.
-
-```
-cd
-```
-
-Build your subgraph with the following command:
-
-```
-graph codegen && graph build
-```
-
-Finally, you can deploy your subgraph using this command:
-
-```
-graph deploy --studio
-```
-
-### 5. Query your subgraph
-
-Once your subgraph is deployed, you can query it from your dapp using the `Development Query URL` in Subgraph Studio.
-
-Note: The Studio API is rate-limited, so it should preferably be used for development and testing.
-
-To learn more about querying data from your subgraph, see the [Querying a Subgraph](/querying/querying-the-graph) page.
diff --git a/website/pages/pl/cookbook/grafting-hotfix.mdx b/website/pages/pl/cookbook/grafting-hotfix.mdx
new file mode 100644
index 000000000000..4be0a0b07790
--- /dev/null
+++ b/website/pages/pl/cookbook/grafting-hotfix.mdx
@@ -0,0 +1,186 @@
+---
+title: Subgraph Best Practice 6 - Use Grafting for Quick Hotfix Deployment
+---
+
+## TLDR
+
+Grafting is a powerful feature in subgraph development that allows you to build and deploy new subgraphs while reusing the indexed data from existing ones.
+
+### Overview
+
+This feature enables quick deployment of hotfixes for critical issues, eliminating the need to re-index the entire subgraph from scratch. By preserving historical data, grafting minimizes downtime and ensures continuity in data services.
+ +## Benefits of Grafting for Hotfixes + +1. **Rapid Deployment** + + - **Minimize Downtime**: When a subgraph encounters a critical error and stops indexing, grafting enables you to deploy a fix immediately without waiting for re-indexing. + - **Immediate Recovery**: The new subgraph continues from the last indexed block, ensuring that data services remain uninterrupted. + +2. **Data Preservation** + + - **Reuse Historical Data**: Grafting copies the existing data from the base subgraph, so you don’t lose valuable historical records. + - **Consistency**: Maintains data continuity, which is crucial for applications relying on consistent historical data. + +3. **Efficiency** + - **Save Time and Resources**: Avoids the computational overhead of re-indexing large datasets. + - **Focus on Fixes**: Allows developers to concentrate on resolving issues rather than managing data recovery. + +## Best Practices When Using Grafting for Hotfixes + +1. **Initial Deployment Without Grafting** + + - **Start Clean**: Always deploy your initial subgraph without grafting to ensure that it’s stable and functions as expected. + - **Test Thoroughly**: Validate the subgraph’s performance to minimize the need for future hotfixes. + +2. **Implementing the Hotfix with Grafting** + + - **Identify the Issue**: When a critical error occurs, determine the block number of the last successfully indexed event. + - **Create a New Subgraph**: Develop a new subgraph that includes the hotfix. + - **Configure Grafting**: Use grafting to copy data up to the identified block number from the failed subgraph. + - **Deploy Quickly**: Publish the grafted subgraph to restore service as soon as possible. + +3. **Post-Hotfix Actions** + + - **Monitor Performance**: Ensure the grafted subgraph is indexing correctly and the hotfix resolves the issue. + - **Republish Without Grafting**: Once stable, deploy a new version of the subgraph without grafting for long-term maintenance. 
+ > Note: Relying on grafting indefinitely is not recommended as it can complicate future updates and maintenance. + - **Update References**: Redirect any services or applications to use the new, non-grafted subgraph. + +4. **Important Considerations** + - **Careful Block Selection**: Choose the graft block number carefully to prevent data loss. + - **Tip**: Use the block number of the last correctly processed event. + - **Use Deployment ID**: Ensure you reference the Deployment ID of the base subgraph, not the Subgraph ID. + - **Note**: The Deployment ID is the unique identifier for a specific subgraph deployment. + - **Feature Declaration**: Remember to declare grafting in the subgraph manifest under features. + +## Example: Deploying a Hotfix with Grafting + +Suppose you have a subgraph tracking a smart contract that has stopped indexing due to a critical error. Here’s how you can use grafting to deploy a hotfix. + +1. **Failed Subgraph Manifest (subgraph.yaml)** + + ```yaml + specVersion: 1.0.0 + schema: + file: ./schema.graphql + dataSources: + - kind: ethereum/contract + name: OldSmartContract + network: sepolia + source: + address: '0xOldContractAddress' + abi: Lock + startBlock: 5000000 + mapping: + kind: ethereum/events + apiVersion: 0.0.7 + language: wasm/assemblyscript + entities: + - Withdrawal + abis: + - name: Lock + file: ./abis/OldLock.json + eventHandlers: + - event: Withdrawal(uint256,uint256) + handler: handleOldWithdrawal + file: ./src/old-lock.ts + ``` + +2. 
**New Grafted Subgraph Manifest (subgraph.yaml)** + ```yaml + specVersion: 1.0.0 + schema: + file: ./schema.graphql + dataSources: + - kind: ethereum/contract + name: NewSmartContract + network: sepolia + source: + address: '0xNewContractAddress' + abi: Lock + startBlock: 6000001 # Block after the last indexed block + mapping: + kind: ethereum/events + apiVersion: 0.0.7 + language: wasm/assemblyscript + entities: + - Withdrawal + abis: + - name: Lock + file: ./abis/Lock.json + eventHandlers: + - event: Withdrawal(uint256,uint256) + handler: handleWithdrawal + file: ./src/lock.ts + features: + - grafting + graft: + base: QmBaseDeploymentID # Deployment ID of the failed subgraph + block: 6000000 # Last successfully indexed block + ``` + +**Explanation:** + +- **Data Source Update**: The new subgraph points to 0xNewContractAddress, which may be a fixed version of the smart contract. +- **Start Block**: Set to one block after the last successfully indexed block to avoid reprocessing the error. +- **Grafting Configuration**: + - **base**: Deployment ID of the failed subgraph. + - **block**: Block number where grafting should begin. + +3. **Deployment Steps** + + - **Update the Code**: Implement the hotfix in your mapping scripts (e.g., handleWithdrawal). + - **Adjust the Manifest**: As shown above, update the `subgraph.yaml` with grafting configurations. + - **Deploy the Subgraph**: + - Authenticate with the Graph CLI. + - Deploy the new subgraph using `graph deploy`. + +4. **Post-Deployment** + - **Verify Indexing**: Check that the subgraph is indexing correctly from the graft point. + - **Monitor Data**: Ensure that new data is being captured and the hotfix is effective. + - **Plan for Republish**: Schedule the deployment of a non-grafted version for long-term stability. 
+ +## Warnings and Cautions + +While grafting is a powerful tool for deploying hotfixes quickly, there are specific scenarios where it should be avoided to maintain data integrity and ensure optimal performance. + +- **Incompatible Schema Changes**: If your hotfix requires altering the type of existing fields or removing fields from your schema, grafting is not suitable. Grafting expects the new subgraph’s schema to be compatible with the base subgraph’s schema. Incompatible changes can lead to data inconsistencies and errors because the existing data won’t align with the new schema. +- **Significant Mapping Logic Overhauls**: When the hotfix involves substantial modifications to your mapping logic—such as changing how events are processed or altering handler functions—grafting may not function correctly. The new logic might not be compatible with the data processed under the old logic, leading to incorrect data or failed indexing. +- **Deployments to The Graph Network**: Grafting is not recommended for subgraphs intended for The Graph’s decentralized network (mainnet). It can complicate indexing and may not be fully supported by all Indexers, potentially causing unexpected behavior or increased costs. For mainnet deployments, it’s safer to re-index the subgraph from scratch to ensure full compatibility and reliability. + +### Risk Management + +- **Data Integrity**: Incorrect block numbers can lead to data loss or duplication. +- **Testing**: Always test grafting in a development environment before deploying to production. + +## Conclusion + +Grafting is an effective strategy for deploying hotfixes in subgraph development, enabling you to: + +- **Quickly Recover** from critical errors without re-indexing. +- **Preserve Historical Data**, maintaining continuity for applications and users. +- **Ensure Service Availability** by minimizing downtime during critical fixes. 
+ +However, it’s important to use grafting judiciously and follow best practices to mitigate risks. After stabilizing your subgraph with the hotfix, plan to deploy a non-grafted version to ensure long-term maintainability. + +## Additional Resources + +- **[Grafting Documentation](/cookbook/grafting/)**: Replace a Contract and Keep its History With Grafting +- **[Understanding Deployment IDs](/querying/querying-by-subgraph-id-vs-deployment-id/)**: Learn the difference between Deployment ID and Subgraph ID. + +By incorporating grafting into your subgraph development workflow, you can enhance your ability to respond to issues swiftly, ensuring that your data services remain robust and reliable. + +## Subgraph Best Practices 1-6 + +1. [Improve Query Speed with Subgraph Pruning](/cookbook/pruning/) + +2. [Improve Indexing and Query Responsiveness by Using @derivedFrom](/cookbook/derivedfrom/) + +3. [Improve Indexing and Query Performance by Using Immutable Entities and Bytes as IDs](/cookbook/immutable-entities-bytes-as-ids/) + +4. [Improve Indexing Speed by Avoiding `eth_calls`](/cookbook/avoid-eth-calls/) + +5. [Simplify and Optimize with Timeseries and Aggregations](/cookbook/timeseries/) + +6. [Use Grafting for Quick Hotfix Deployment](/cookbook/grafting-hotfix/) diff --git a/website/pages/pl/cookbook/timeseries.mdx b/website/pages/pl/cookbook/timeseries.mdx new file mode 100644 index 000000000000..88ee70005a6e --- /dev/null +++ b/website/pages/pl/cookbook/timeseries.mdx @@ -0,0 +1,194 @@ +--- +title: Subgraph Best Practice 5 - Simplify and Optimize with Timeseries and Aggregations +--- + +## TLDR + +Leveraging the new time-series and aggregations feature in subgraphs can significantly enhance both indexing speed and query performance. + +## Overview + +Timeseries and aggregations reduce data processing overhead and accelerate queries by offloading aggregation computations to the database and simplifying mapping code. 
This approach is particularly effective when handling large volumes of time-based data. + +## Benefits of Timeseries and Aggregations + +1. Improved Indexing Time + +- Less Data to Load: Mappings handle less data since raw data points are stored as immutable timeseries entities. +- Database-Managed Aggregations: Aggregations are automatically computed by the database, reducing the workload on the mappings. + +2. Simplified Mapping Code + +- No Manual Calculations: Developers no longer need to write complex aggregation logic in mappings. +- Reduced Complexity: Simplifies code maintenance and minimizes the potential for errors. + +3. Dramatically Faster Queries + +- Immutable Data: All timeseries data is immutable, enabling efficient storage and retrieval. +- Efficient Data Separation: Aggregates are stored separately from raw timeseries data, allowing queries to process significantly less data—often several orders of magnitude less. + +### Important Considerations + +- Immutable Data: Timeseries data cannot be altered once written, ensuring data integrity and simplifying indexing. +- Automatic ID and Timestamp Management: id and timestamp fields are automatically managed by graph-node, reducing potential errors. +- Efficient Data Storage: By separating raw data from aggregates, storage is optimized, and queries run faster. + +## How to Implement Timeseries and Aggregations + +### Defining Timeseries Entities + +A timeseries entity represents raw data points collected over time. It is defined with the `@entity(timeseries: true)` annotation. Key requirements: + +- Immutable: Timeseries entities are always immutable. +- Mandatory Fields: + - `id`: Must be of type `Int8!` and is auto-incremented. + - `timestamp`: Must be of type `Timestamp!` and is automatically set to the block timestamp. + +Example: + +```graphql +type Data @entity(timeseries: true) { + id: Int8! + timestamp: Timestamp! + price: BigDecimal! 
+} +``` + +### Defining Aggregation Entities + +An aggregation entity computes aggregated values from a timeseries source. It is defined with the `@aggregation` annotation. Key components: + +- Annotation Arguments: + - `intervals`: Specifies time intervals (e.g., `["hour", "day"]`). + +Example: + +```graphql +type Stats @aggregation(intervals: ["hour", "day"], source: "Data") { + id: Int8! + timestamp: Timestamp! + sum: BigDecimal! @aggregate(fn: "sum", arg: "price") +} +``` + +In this example, Stats aggregates the price field from Data over hourly and daily intervals, computing the sum. + +### Querying Aggregated Data + +Aggregations are exposed via query fields that allow filtering and retrieval based on dimensions and time intervals. + +Example: + +```graphql +{ + tokenStats( + interval: "hour" + where: { token: "0x1234567890abcdef", timestamp_gte: "1704164640000000", timestamp_lt: "1704251040000000" } + ) { + id + timestamp + token { + id + } + totalVolume + priceUSD + count + } +} +``` + +### Using Dimensions in Aggregations + +Dimensions are non-aggregated fields used to group data points. They enable aggregations based on specific criteria, such as a token in a financial application. + +Example: + +### Timeseries Entity + +```graphql +type TokenData @entity(timeseries: true) { + id: Int8! + timestamp: Timestamp! + token: Token! + amount: BigDecimal! + priceUSD: BigDecimal! +} +``` + +### Aggregation Entity with Dimension + +```graphql +type TokenStats @aggregation(intervals: ["hour", "day"], source: "TokenData") { + id: Int8! + timestamp: Timestamp! + token: Token! + totalVolume: BigDecimal! @aggregate(fn: "sum", arg: "amount") + priceUSD: BigDecimal! @aggregate(fn: "last", arg: "priceUSD") + count: Int8! @aggregate(fn: "count", cumulative: true) +} +``` + +- Dimension Field: token groups the data, so aggregates are computed per token. +- Aggregates: + - totalVolume: Sum of amount. + - priceUSD: Last recorded priceUSD. 
+ - count: Cumulative count of records. + +### Aggregation Functions and Expressions + +Supported aggregation functions: + +- sum +- count +- min +- max +- first +- last + +### The `arg` in @aggregate can be + +- A field name from the timeseries entity. +- An expression using fields and constants. + +### Examples of Aggregation Expressions + +- Sum Token Value: @aggregate(fn: "sum", arg: "priceUSD \* amount") +- Maximum Positive Amount: @aggregate(fn: "max", arg: "greatest(amount0, amount1, 0)") +- Conditional Sum: @aggregate(fn: "sum", arg: "case when amount0 > amount1 then amount0 else 0 end") + +Supported operators and functions include basic arithmetic (+, -, \*, /), comparison operators, logical operators (and, or, not), and SQL functions like greatest, least, coalesce, etc. + +### Query Parameters + +- interval: Specifies the time interval (e.g., "hour"). +- where: Filters based on dimensions and timestamp ranges. +- timestamp_gte / timestamp_lt: Filters for start and end times (microseconds since epoch). + +### Notes + +- Sorting: Results are automatically sorted by timestamp and id in descending order. +- Current Data: An optional current argument can include the current, partially filled interval. + +### Conclusion + +Implementing timeseries and aggregations in subgraphs is a best practice for projects dealing with time-based data. This approach: + +- Enhances Performance: Speeds up indexing and querying by reducing data processing overhead. +- Simplifies Development: Eliminates the need for manual aggregation logic in mappings. +- Scales Efficiently: Handles large volumes of data without compromising on speed or responsiveness. + +By adopting this pattern, developers can build more efficient and scalable subgraphs, providing faster and more reliable data access to end-users.
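As a recap of the pieces above, the hourly/daily `Stats` aggregation defined earlier can be queried with the same pattern as the `tokenStats` example. A sketch, assuming the query field follows the same camel-case convention and the timestamp value is an arbitrary example in microseconds since epoch:

```graphql
{
  stats(interval: "day", where: { timestamp_gte: "1704067200000000" }) {
    id
    timestamp
    sum
  }
}
```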
To learn more about implementing timeseries and aggregations, refer to the [Timeseries and Aggregations Readme](https://github.com/graphprotocol/graph-node/blob/master/docs/aggregations.md) and consider experimenting with this feature in your subgraphs. + +## Subgraph Best Practices 1-6 + +1. [Improve Query Speed with Subgraph Pruning](/cookbook/pruning/) + +2. [Improve Indexing and Query Responsiveness by Using @derivedFrom](/cookbook/derivedfrom/) + +3. [Improve Indexing and Query Performance by Using Immutable Entities and Bytes as IDs](/cookbook/immutable-entities-bytes-as-ids/) + +4. [Improve Indexing Speed by Avoiding `eth_calls`](/cookbook/avoid-eth-calls/) + +5. [Simplify and Optimize with Timeseries and Aggregations](/cookbook/timeseries/) + +6. [Use Grafting for Quick Hotfix Deployment](/cookbook/grafting-hotfix/) diff --git a/website/pages/pl/cookbook/upgrading-a-subgraph.mdx b/website/pages/pl/cookbook/upgrading-a-subgraph.mdx deleted file mode 100644 index a546f02c0800..000000000000 --- a/website/pages/pl/cookbook/upgrading-a-subgraph.mdx +++ /dev/null @@ -1,156 +0,0 @@ ---- -title: Upgrading an Existing Subgraph to The Graph Network ---- - -## Introduction - -This is a guide on how to upgrade your subgraph from the hosted service to The Graph's decentralized network. Over 1,000 subgraphs have successfully upgraded to The Graph Network including projects like Snapshot, Loopring, Audius, Premia, Livepeer, Uma, Curve, Lido, and many more! - -The process of upgrading is quick and your subgraphs will forever benefit from the reliability and performance that you can only get on The Graph Network. - -### Prerequisites - -- You have a subgraph deployed on the hosted service. - -## Upgrading an Existing Subgraph to The Graph Network - - - -If you are logged in to the hosted service, you can access a simple flow to upgrade your subgraphs from [your dashboard](https://thegraph.com/hosted-service/dashboard), or from an individual subgraph page. 
- -> This process typically takes less than five minutes. - -1. Select the subgraph(s) you want to upgrade. -2. Connect or enter the receiving wallet (the wallet that will become the owner of the subgraph). -3. Click the "Upgrade" button. - -That's it! Your subgraphs will be deployed to Subgraph Studio, and published on The Graph Network. You can access the [Subgraph Studio](https://thegraph.com/studio/) to manage your subgraphs, logging in with the wallet specified during the upgrade process. - -You'll be able to view your subgraphs live on the decentralized network via [Graph Explorer](https://thegraph.com/explorer). - -### What next? - -When your subgraph is upgraded, it will automatically be indexed by the upgrade indexer. If the indexed chain is [fully supported by The Graph Network](/developing/supported-networks), you can add some GRT as "signal", to attract more indexers. It is recommended to curate your subgraph with at least 3,000 GRT to attract 2-3 Indexers for higher quality of service. - -You can start to query your subgraph right away on The Graph Network, once you have generated an API key. - -### Create an API key - -You can generate an API key in Subgraph Studio [here](https://thegraph.com/studio/apikeys/). - -![API key creation page](/img/api-image.png) - -You can use this API key to query subgraphs on The Graph Network. All users start on the Free Plan, which includes 100,000 free queries per month. Developers can sign up for the Growth Plan by connecting a credit or debit card, or by depositing GRT to Subgraph Studio billing system. - -> Note: see the [billing documentation](../billing) for more information on plans, and on managing your billing on Subgraph Studio. - -### Securing your API key - -It is recommended that you secure the API by limiting its usage in two ways: - -1. Authorized Subgraphs -2. Authorized Domain - -You can secure your API key [here](https://thegraph.com/studio/apikeys/). 
- -![Subgraph lockdown page](/img/subgraph-lockdown.png) - -### Querying your subgraph on the decentralized network - -Now you can check the indexing status of the Indexers on the network in Graph Explorer (example [here](https://thegraph.com/explorer/subgraphs/Dtj2HicXKpoUjNB7ffdBkMwt3L9Sz3cbENd67AdHu6Vb?view=Indexers&chain=arbitrum-one)). The green line at the top indicates that at the time of posting 7 Indexers successfully indexed that subgraph. Also in the Indexer tab you can see which Indexers picked up your subgraph. - -![Rocket Pool subgraph](/img/rocket-pool-subgraph.png) - -As soon as the first Indexer has fully indexed your subgraph you can start to query the subgraph on the decentralized network. In order to retrieve the query URL for your subgraph, you can copy/paste it by clicking on the symbol next to the query URL. You will see something like this: - -`https://gateway.thegraph.com/api/[api-key]/subgraphs/id/S9ihna8D733WTEShJ1KctSTCvY1VJ7gdVwhUujq4Ejo` - -Important: Make sure to replace `[api-key]` with an actual API key generated in the section above. - -You can now use that Query URL in your dapp to send your GraphQL requests to. - -Congratulations! You are now a pioneer of decentralization! - -> Note: Due to the distributed nature of the network it might be the case that different Indexers have indexed up to different blocks. In order to only receive fresh data you can specify the minimum block an Indexer has to have indexed in order to serve your query with the block: `{ number_gte: $minBlock }` field argument as shown in the example below: - -```graphql -{ - stakers(block: { number_gte: 14486109 }) { - id - } -} -``` - -More information about the nature of the network and how to handle re-orgs are described in the documentation article [Distributed Systems](/querying/distributed-systems/). 
- -## Updating a Subgraph on the Network - -If you would like to update an existing subgraph on the network, you can do this by deploying a new version of your subgraph to Subgraph Studio using the Graph CLI. - -1. Make changes to your current subgraph. -2. Deploy the following and specify the new version in the command (eg. v0.0.1, v0.0.2, etc): - -```sh -graph deploy --studio --version -``` - -3. Test the new version in Subgraph Studio by querying in the playground -4. Publish the new version on The Graph Network. Remember that this requires gas (as described in the section above). - -### Owner Update Fee: Deep Dive - -> Note: Curation on Arbitrum has a flat bonding curve. Learn more about Arbitrum [here](/arbitrum/arbitrum-faq/). - -An update requires GRT to be migrated from the old version of the subgraph to the new version. This means that for every update, a new bonding curve will be created (more on bonding curves [here](/network/curating#bonding-curve-101)). - -The new bonding curve charges the 1% curation tax on all GRT being migrated to the new version. The owner must pay 50% of this or 1.25%. The other 1.25% is absorbed by all the curators as a fee. This incentive design is in place to prevent an owner of a subgraph from being able to drain all their curator's funds with recursive update calls. If there is no curation activity, you will have to pay a minimum of 100 GRT in order to signal your own subgraph. - -Let's make an example, this is only the case if your subgraph is being actively curated on: - -- 100,000 GRT is signaled using auto-migrate on v1 of a subgraph -- Owner updates to v2. 100,000 GRT is migrated to a new bonding curve, where 97,500 GRT get put into the new curve and 2,500 GRT is burned -- The owner then has 1250 GRT burned to pay for half the fee. The owner must have this in their wallet before the update, otherwise, the update will not succeed. This happens in the same transaction as the update. 
- -_While this mechanism is currently live on the network, the community is currently discussing ways to reduce the cost of updates for subgraph developers._ - -### Maintaining a Stable Version of a Subgraph - -If you're making a lot of changes to your subgraph, it is not a good idea to continually update it and front the update costs. Maintaining a stable and consistent version of your subgraph is critical, not only from the cost perspective but also so that Indexers can feel confident in their syncing times. Indexers should be flagged when you plan for an update so that Indexer syncing times do not get impacted. Feel free to leverage the [#Indexers channel](https://discord.gg/JexvtHa7dq) on Discord to let Indexers know when you're versioning your subgraphs. - -Subgraphs are open APIs that external developers are leveraging. Open APIs need to follow strict standards so that they do not break external developers' applications. In The Graph Network, a subgraph developer must consider Indexers and how long it takes them to sync a new subgraph **as well as** other developers who are using their subgraphs. - -### Updating the Metadata of a Subgraph - -You can update the metadata of your subgraphs without having to publish a new version. The metadata includes the subgraph name, image, description, website URL, source code URL, and categories. Developers can do this by updating their subgraph details in Subgraph Studio where you can edit all applicable fields. - -Make sure **Update Subgraph Details in Explorer** is checked and click on **Save**. If this is checked, an on-chain transaction will be generated that updates subgraph details in the Explorer without having to publish a new version with a new deployment. - -## Best Practices for Deploying a Subgraph to The Graph Network - -1. 
Leveraging an ENS name for Subgraph Development: - -- Set up your ENS [here](https://app.ens.domains/) -- Add your ENS name to your settings [here](https://thegraph.com/explorer/settings?view=display-name). - -2. The more filled out your profiles are, the better the chances for your subgraphs to be indexed and curated. - -## Deprecating a Subgraph on The Graph Network - -Follow the steps [here](/managing/transfer-and-deprecate-a-subgraph) to deprecate your subgraph and remove it from The Graph Network. - -## Querying a Subgraph + Billing on The Graph Network - -The hosted service was set up to allow developers to deploy their subgraphs without any restrictions. - -On The Graph Network, query fees have to be paid as a core part of the protocol's incentives. For more information on subscribing to APIs and paying the query fees, check out billing documentation [here](/billing/). - -## Additional Resources - -If you're still confused, fear not! Check out the following resources or watch our video guide on upgrading subgraphs to the decentralized network below: - - - -- [The Graph Network Contracts](https://github.com/graphprotocol/contracts) -- [Curation Contract](https://github.com/graphprotocol/contracts/blob/dev/contracts/curation/Curation.sol) - the underlying contract that the GNS wraps around - - Address - `0x8fe00a685bcb3b2cc296ff6ffeab10aca4ce1538` -- [Subgraph Studio documentation](/deploying/subgraph-studio) diff --git a/website/pages/pt/cookbook/base-testnet.mdx b/website/pages/pt/cookbook/base-testnet.mdx deleted file mode 100644 index e9f4e14606fa..000000000000 --- a/website/pages/pt/cookbook/base-testnet.mdx +++ /dev/null @@ -1,111 +0,0 @@ ---- -title: Construindo Subgraphs na Base ---- - -Este guia dará-lhe uma explicação rápida sobre a inicialização, criação e o lançamento do seu subgraph na testnet Base. - -Serão necessários: - -- Um endereço de contrato na testnet Base Sepolia -- Uma carteira de criptomoedas (por ex. 
MetaMask ou Coinbase Wallet) - -## Subgraph Studio - -### 1. Como instalar a CLI do Graph - -A CLI (interface de linha de comando) do Graph — acima da versão 0.41.0 — é escrita em JavaScript, e é necessário ter instalado o 'npm' ou o 'yarn' para usá-la. - -```sh -# NPM -npm install -g @graphprotocol/graph-cli - -# Yarn -yarn global add @graphprotocol/graph-cli -``` - -### 2. Como criar o seu subgraph no Subgraph Studio - -Entre no [Subgraph Studio](https://thegraph.com/studio/) e conecte a sua carteira de criptomoedas. - -Após conectado, clique em "Criar um Subgraph", insira um nome para o seu Subgraph e clique em "Criar um Subgraph" novamente. - -### 3. Como Inicializar o seu Subgraph - -> Podes encontrar comandos específicos para o seu subgraph no Subgraph Studio. - -Certifique-se que o graph-cli está atualizado para a sua versão mais recente (acima de 0.41.0) - -```sh -graph --version -``` - -Inicialize o seu subgraph de um contrato existente. - -```sh -graph init --studio -``` - -O seu subgraph slug é uma identidade para o seu subgraph. A ferramenta de CLI guiará-lhe pelos passos para a criação de um subgraph, incluindo: - -- Protocolo: ethereum -- Subgraph slug: `` -- Diretório para a criação do subgraph: `` -- Rede Ethereum: base-sepolia -- Endereço de contrato: `` -- Bloco inicial (opcional) -- Nome do contrato: `` -- Sim/não para a indexação de eventos ("sim" significa que o seu subgraph será equipado com entidades no schema e mapeamentos simples para os eventos emitidos) - -### 3. Como Escrever o seu Subgraph - -> Caso queira indexar apenas eventos emitidos, então não há mais nada a fazer, e é só pular para o próximo passo. - -O comando anterior cria um subgraph de apoio que pode ser usado como ponto inicial para a construção do seu subgraph. Ao fazer mudanças ao subgraph, o trabalho principal será com três arquivos: - -- Manifest (subgraph.yaml) — O manifest define quais fontes de dados seus subgraphs indexarão. 
Certifique-se de adicionar `base-sepolia` como o nome da rede no arquivo manifest, para lançar o seu subgraph na testnet Base Sepolia. -- Schema (schema.graphql) — O schema GraphQL define quais dados desejas retirar do subgraph. -- Mapeamentos em AssemblyScript (mapping.ts) — Este é o código que traduz dados das suas fontes de dados às entidades definidas no schema. - -Se quiser indexar dados adicionais, precisa estender o manifest, o schema e os mapeamentos. - -Para mais informações sobre como escrever o seu subgraph, veja [Criando um Subgraph](/desenvolvimento/criando-um-subgraph). - -### 4. Como Lançar ao Subgraph Studio - -Antes de poder lançar o seu subgraph, deves autenticá-lo com o Subgraph Studio. Isto é possível ao executar o seguinte comando: - -Autentique o subgraph no Studio - -``` -graph auth --studio -``` - -Em seguida, digite o diretório do seu subgraph. - -``` - cd -``` - -Construa o seu subgraph com o seguinte comando: - -```` -``` -graph codegen && graph build -``` -```` - -Finalmente, lance o seu subgraph com este comando: - -```` -``` -graph deploy --studio -``` -```` - -### 5. Como consultar o seu subgraph com queries - -Depois que o seu subgraph for lançado, você pode consultá-lo do seu dapp utilizando o 'Development Query URL' (URL de consulta de desenvolvimento) no Subgraph Studio. - -Nota — A API do Studio tem um 'rate limit' (limite de ritmo). Vendo isto, recomendamos usá-la para desenvolvimento e testes. - -Para aprender mais sobre queries de dados do seu subgraph, veja a página [Como Consultar um Subgraph](/querying/querying-the-graph). 
diff --git a/website/pages/pt/cookbook/grafting-hotfix.mdx b/website/pages/pt/cookbook/grafting-hotfix.mdx new file mode 100644 index 000000000000..4be0a0b07790 --- /dev/null +++ b/website/pages/pt/cookbook/grafting-hotfix.mdx @@ -0,0 +1,186 @@ +--- +title: Subgraph Best Practice 6 - Use Grafting for Quick Hotfix Deployment +--- + +## TLDR + +Grafting is a powerful feature in subgraph development that allows you to build and deploy new subgraphs while reusing the indexed data from existing ones. + +### Overview + +This feature enables quick deployment of hotfixes for critical issues, eliminating the need to re-index the entire subgraph from scratch. By preserving historical data, grafting minimizes downtime and ensures continuity in data services. + +## Benefits of Grafting for Hotfixes + +1. **Rapid Deployment** + + - **Minimize Downtime**: When a subgraph encounters a critical error and stops indexing, grafting enables you to deploy a fix immediately without waiting for re-indexing. + - **Immediate Recovery**: The new subgraph continues from the last indexed block, ensuring that data services remain uninterrupted. + +2. **Data Preservation** + + - **Reuse Historical Data**: Grafting copies the existing data from the base subgraph, so you don’t lose valuable historical records. + - **Consistency**: Maintains data continuity, which is crucial for applications relying on consistent historical data. + +3. **Efficiency** + - **Save Time and Resources**: Avoids the computational overhead of re-indexing large datasets. + - **Focus on Fixes**: Allows developers to concentrate on resolving issues rather than managing data recovery. + +## Best Practices When Using Grafting for Hotfixes + +1. **Initial Deployment Without Grafting** + + - **Start Clean**: Always deploy your initial subgraph without grafting to ensure that it’s stable and functions as expected. + - **Test Thoroughly**: Validate the subgraph’s performance to minimize the need for future hotfixes. + +2.
**Implementing the Hotfix with Grafting** + + - **Identify the Issue**: When a critical error occurs, determine the block number of the last successfully indexed event. + - **Create a New Subgraph**: Develop a new subgraph that includes the hotfix. + - **Configure Grafting**: Use grafting to copy data up to the identified block number from the failed subgraph. + - **Deploy Quickly**: Publish the grafted subgraph to restore service as soon as possible. + +3. **Post-Hotfix Actions** + + - **Monitor Performance**: Ensure the grafted subgraph is indexing correctly and the hotfix resolves the issue. + - **Republish Without Grafting**: Once stable, deploy a new version of the subgraph without grafting for long-term maintenance. + > Note: Relying on grafting indefinitely is not recommended as it can complicate future updates and maintenance. + - **Update References**: Redirect any services or applications to use the new, non-grafted subgraph. + +4. **Important Considerations** + - **Careful Block Selection**: Choose the graft block number carefully to prevent data loss. + - **Tip**: Use the block number of the last correctly processed event. + - **Use Deployment ID**: Ensure you reference the Deployment ID of the base subgraph, not the Subgraph ID. + - **Note**: The Deployment ID is the unique identifier for a specific subgraph deployment. + - **Feature Declaration**: Remember to declare grafting in the subgraph manifest under features. + +## Example: Deploying a Hotfix with Grafting + +Suppose you have a subgraph tracking a smart contract that has stopped indexing due to a critical error. Here’s how you can use grafting to deploy a hotfix. + +1. 
**Failed Subgraph Manifest (subgraph.yaml)** + + ```yaml + specVersion: 1.0.0 + schema: + file: ./schema.graphql + dataSources: + - kind: ethereum/contract + name: OldSmartContract + network: sepolia + source: + address: '0xOldContractAddress' + abi: Lock + startBlock: 5000000 + mapping: + kind: ethereum/events + apiVersion: 0.0.7 + language: wasm/assemblyscript + entities: + - Withdrawal + abis: + - name: Lock + file: ./abis/OldLock.json + eventHandlers: + - event: Withdrawal(uint256,uint256) + handler: handleOldWithdrawal + file: ./src/old-lock.ts + ``` + +2. **New Grafted Subgraph Manifest (subgraph.yaml)** + ```yaml + specVersion: 1.0.0 + schema: + file: ./schema.graphql + dataSources: + - kind: ethereum/contract + name: NewSmartContract + network: sepolia + source: + address: '0xNewContractAddress' + abi: Lock + startBlock: 6000001 # Block after the last indexed block + mapping: + kind: ethereum/events + apiVersion: 0.0.7 + language: wasm/assemblyscript + entities: + - Withdrawal + abis: + - name: Lock + file: ./abis/Lock.json + eventHandlers: + - event: Withdrawal(uint256,uint256) + handler: handleWithdrawal + file: ./src/lock.ts + features: + - grafting + graft: + base: QmBaseDeploymentID # Deployment ID of the failed subgraph + block: 6000000 # Last successfully indexed block + ``` + +**Explanation:** + +- **Data Source Update**: The new subgraph points to 0xNewContractAddress, which may be a fixed version of the smart contract. +- **Start Block**: Set to one block after the last successfully indexed block to avoid reprocessing the error. +- **Grafting Configuration**: + - **base**: Deployment ID of the failed subgraph. + - **block**: Block number where grafting should begin. + +3. **Deployment Steps** + + - **Update the Code**: Implement the hotfix in your mapping scripts (e.g., handleWithdrawal). + - **Adjust the Manifest**: As shown above, update the `subgraph.yaml` with grafting configurations. 
+ - **Deploy the Subgraph**: + - Authenticate with the Graph CLI. + - Deploy the new subgraph using `graph deploy`. + +4. **Post-Deployment** + - **Verify Indexing**: Check that the subgraph is indexing correctly from the graft point. + - **Monitor Data**: Ensure that new data is being captured and the hotfix is effective. + - **Plan for Republish**: Schedule the deployment of a non-grafted version for long-term stability. + +## Warnings and Cautions + +While grafting is a powerful tool for deploying hotfixes quickly, there are specific scenarios where it should be avoided to maintain data integrity and ensure optimal performance. + +- **Incompatible Schema Changes**: If your hotfix requires altering the type of existing fields or removing fields from your schema, grafting is not suitable. Grafting expects the new subgraph’s schema to be compatible with the base subgraph’s schema. Incompatible changes can lead to data inconsistencies and errors because the existing data won’t align with the new schema. +- **Significant Mapping Logic Overhauls**: When the hotfix involves substantial modifications to your mapping logic—such as changing how events are processed or altering handler functions—grafting may not function correctly. The new logic might not be compatible with the data processed under the old logic, leading to incorrect data or failed indexing. +- **Deployments to The Graph Network**: Grafting is not recommended for subgraphs intended for The Graph’s decentralized network (mainnet). It can complicate indexing and may not be fully supported by all Indexers, potentially causing unexpected behavior or increased costs. For mainnet deployments, it’s safer to re-index the subgraph from scratch to ensure full compatibility and reliability. + +### Risk Management + +- **Data Integrity**: Incorrect block numbers can lead to data loss or duplication. +- **Testing**: Always test grafting in a development environment before deploying to production. 
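The block-selection and manifest rules above can be condensed into a quick pre-deployment sanity check. The helper below is a hypothetical sketch (it is not part of the Graph CLI or graph-node); the manifest dict simply mirrors the grafted example manifest above:

```python
# Minimal sketch: sanity-check a grafting configuration before deploying.
# The helper and manifest dict are illustrative, not a real CLI feature.

def check_graft_config(manifest: dict, last_good_block: int) -> list[str]:
    """Return a list of problems found in a grafted manifest."""
    problems = []
    graft = manifest.get("graft")
    if graft is None or "grafting" not in manifest.get("features", []):
        problems.append("grafting must be declared under `features` and `graft`")
        return problems
    # The base must be a Deployment ID (Qm...), not a Subgraph ID.
    if not str(graft["base"]).startswith("Qm"):
        problems.append("graft.base should be the base Deployment ID (Qm...)")
    # Graft at the last correctly processed block to avoid data loss.
    if graft["block"] > last_good_block:
        problems.append("graft.block is past the last correctly indexed block")
    # The new data source should start one block after the graft point.
    start = manifest["dataSources"][0]["source"]["startBlock"]
    if start != graft["block"] + 1:
        problems.append("startBlock should be graft.block + 1")
    return problems

manifest = {
    "features": ["grafting"],
    "graft": {"base": "QmBaseDeploymentID", "block": 6000000},
    "dataSources": [{"source": {"startBlock": 6000001}}],
}
print(check_graft_config(manifest, last_good_block=6000000))  # []
```

An empty list means the configuration respects the invariants discussed: the feature is declared, the base is a Deployment ID, and indexing resumes one block past the graft point.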
+ +## Conclusion + +Grafting is an effective strategy for deploying hotfixes in subgraph development, enabling you to: + +- **Quickly Recover** from critical errors without re-indexing. +- **Preserve Historical Data**, maintaining continuity for applications and users. +- **Ensure Service Availability** by minimizing downtime during critical fixes. + +However, it’s important to use grafting judiciously and follow best practices to mitigate risks. After stabilizing your subgraph with the hotfix, plan to deploy a non-grafted version to ensure long-term maintainability. + +## Additional Resources + +- **[Grafting Documentation](/cookbook/grafting/)**: Replace a Contract and Keep its History With Grafting +- **[Understanding Deployment IDs](/querying/querying-by-subgraph-id-vs-deployment-id/)**: Learn the difference between Deployment ID and Subgraph ID. + +By incorporating grafting into your subgraph development workflow, you can enhance your ability to respond to issues swiftly, ensuring that your data services remain robust and reliable. + +## Subgraph Best Practices 1-6 + +1. [Improve Query Speed with Subgraph Pruning](/cookbook/pruning/) + +2. [Improve Indexing and Query Responsiveness by Using @derivedFrom](/cookbook/derivedfrom/) + +3. [Improve Indexing and Query Performance by Using Immutable Entities and Bytes as IDs](/cookbook/immutable-entities-bytes-as-ids/) + +4. [Improve Indexing Speed by Avoiding `eth_calls`](/cookbook/avoid-eth-calls/) + +5. [Simplify and Optimize with Timeseries and Aggregations](/cookbook/timeseries/) + +6. 
[Use Grafting for Quick Hotfix Deployment](/cookbook/grafting-hotfix/) diff --git a/website/pages/pt/cookbook/timeseries.mdx b/website/pages/pt/cookbook/timeseries.mdx new file mode 100644 index 000000000000..88ee70005a6e --- /dev/null +++ b/website/pages/pt/cookbook/timeseries.mdx @@ -0,0 +1,194 @@ +--- +title: Subgraph Best Practice 5 - Simplify and Optimize with Timeseries and Aggregations +--- + +## TLDR + +Leveraging the new time-series and aggregations feature in subgraphs can significantly enhance both indexing speed and query performance. + +## Overview + +Timeseries and aggregations reduce data processing overhead and accelerate queries by offloading aggregation computations to the database and simplifying mapping code. This approach is particularly effective when handling large volumes of time-based data. + +## Benefits of Timeseries and Aggregations + +1. Improved Indexing Time + +- Less Data to Load: Mappings handle less data since raw data points are stored as immutable timeseries entities. +- Database-Managed Aggregations: Aggregations are automatically computed by the database, reducing the workload on the mappings. + +2. Simplified Mapping Code + +- No Manual Calculations: Developers no longer need to write complex aggregation logic in mappings. +- Reduced Complexity: Simplifies code maintenance and minimizes the potential for errors. + +3. Dramatically Faster Queries + +- Immutable Data: All timeseries data is immutable, enabling efficient storage and retrieval. +- Efficient Data Separation: Aggregates are stored separately from raw timeseries data, allowing queries to process significantly less data—often several orders of magnitude less. + +### Important Considerations + +- Immutable Data: Timeseries data cannot be altered once written, ensuring data integrity and simplifying indexing. +- Automatic ID and Timestamp Management: id and timestamp fields are automatically managed by graph-node, reducing potential errors. 
+- Efficient Data Storage: By separating raw data from aggregates, storage is optimized, and queries run faster. + +## How to Implement Timeseries and Aggregations + +### Defining Timeseries Entities + +A timeseries entity represents raw data points collected over time. It is defined with the `@entity(timeseries: true)` annotation. Key requirements: + +- Immutable: Timeseries entities are always immutable. +- Mandatory Fields: + - `id`: Must be of type `Int8!` and is auto-incremented. + - `timestamp`: Must be of type `Timestamp!` and is automatically set to the block timestamp. + +Example: + +```graphql +type Data @entity(timeseries: true) { + id: Int8! + timestamp: Timestamp! + price: BigDecimal! +} +``` + +### Defining Aggregation Entities + +An aggregation entity computes aggregated values from a timeseries source. It is defined with the `@aggregation` annotation. Key components: + +- Annotation Arguments: + - `intervals`: Specifies time intervals (e.g., `["hour", "day"]`). + +Example: + +```graphql +type Stats @aggregation(intervals: ["hour", "day"], source: "Data") { + id: Int8! + timestamp: Timestamp! + sum: BigDecimal! @aggregate(fn: "sum", arg: "price") +} +``` + +In this example, Stats aggregates the price field from Data over hourly and daily intervals, computing the sum. + +### Querying Aggregated Data + +Aggregations are exposed via query fields that allow filtering and retrieval based on dimensions and time intervals. + +Example: + +```graphql +{ + tokenStats( + interval: "hour" + where: { token: "0x1234567890abcdef", timestamp_gte: "1704164640000000", timestamp_lt: "1704251040000000" } + ) { + id + timestamp + token { + id + } + totalVolume + priceUSD + count + } +} +``` + +### Using Dimensions in Aggregations + +Dimensions are non-aggregated fields used to group data points. They enable aggregations based on specific criteria, such as a token in a financial application. 
+ +Example: + +### Timeseries Entity + +```graphql +type TokenData @entity(timeseries: true) { + id: Int8! + timestamp: Timestamp! + token: Token! + amount: BigDecimal! + priceUSD: BigDecimal! +} +``` + +### Aggregation Entity with Dimension + +```graphql +type TokenStats @aggregation(intervals: ["hour", "day"], source: "TokenData") { + id: Int8! + timestamp: Timestamp! + token: Token! + totalVolume: BigDecimal! @aggregate(fn: "sum", arg: "amount") + priceUSD: BigDecimal! @aggregate(fn: "last", arg: "priceUSD") + count: Int8! @aggregate(fn: "count", cumulative: true) +} +``` + +- Dimension Field: token groups the data, so aggregates are computed per token. +- Aggregates: + - totalVolume: Sum of amount. + - priceUSD: Last recorded priceUSD. + - count: Cumulative count of records. + +### Aggregation Functions and Expressions + +Supported aggregation functions: + +- sum +- count +- min +- max +- first +- last + +### The arg in @aggregate can be + +- A field name from the timeseries entity. +- An expression using fields and constants. + +### Examples of Aggregation Expressions + +- Sum Token Value: @aggregate(fn: "sum", arg: "priceUSD \* amount") +- Maximum Positive Amount: @aggregate(fn: "max", arg: "greatest(amount0, amount1, 0)") +- Conditional Sum: @aggregate(fn: "sum", arg: "case when amount0 > amount1 then amount0 else 0 end") + +Supported operators and functions include basic arithmetic (+, -, \*, /), comparison operators, logical operators (and, or, not), and SQL functions like greatest, least, coalesce, etc. + +### Query Parameters + +- interval: Specifies the time interval (e.g., "hour"). +- where: Filters based on dimensions and timestamp ranges. +- timestamp_gte / timestamp_lt: Filters for start and end times (microseconds since epoch). + +### Notes + +- Sorting: Results are automatically sorted by timestamp and id in descending order. +- Current Data: An optional current argument can include the current, partially filled interval. 
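Conceptually, the hourly aggregation above can be emulated in a few lines. The sketch below only illustrates the bucketing graph-node performs inside the database — the function and sample rows are invented for the example, and timestamps are microseconds since epoch, as in the query parameters above:

```python
# Illustrative sketch of what the database does for
# `TokenStats @aggregation(intervals: ["hour"], source: "TokenData")`:
# bucket raw timeseries rows by hour and token, then fold the aggregates.
from collections import defaultdict

HOUR_US = 3_600 * 1_000_000  # one hour in microseconds

def hourly_token_stats(rows):
    """rows: (timestamp_us, token, amount, price_usd), in chronological order."""
    stats = defaultdict(lambda: {"totalVolume": 0.0, "priceUSD": None, "count": 0})
    running_count = defaultdict(int)  # per-token, fn: "count" with cumulative: true
    for ts, token, amount, price in rows:
        bucket = (ts // HOUR_US) * HOUR_US  # start of the hour interval
        running_count[token] += 1
        s = stats[(bucket, token)]
        s["totalVolume"] += amount          # @aggregate(fn: "sum", arg: "amount")
        s["priceUSD"] = price               # @aggregate(fn: "last", arg: "priceUSD")
        s["count"] = running_count[token]   # cumulative across intervals
    return dict(stats)

rows = [
    (1_704_164_640_000_000, "TOK", 10.0, 1.00),
    (1_704_164_700_000_000, "TOK", 5.0, 1.10),   # same hour as the first row
    (1_704_171_000_000_000, "TOK", 2.0, 1.20),   # next hour
]
out = hourly_token_stats(rows)
print(out[(1_704_164_400_000_000, "TOK")])  # {'totalVolume': 15.0, 'priceUSD': 1.1, 'count': 2}
```

Because aggregates are folded once per interval and dimension, queries read these small pre-computed rows instead of scanning every raw data point.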
+ +### Conclusion + +Implementing timeseries and aggregations in subgraphs is a best practice for projects dealing with time-based data. This approach: + +- Enhances Performance: Speeds up indexing and querying by reducing data processing overhead. +- Simplifies Development: Eliminates the need for manual aggregation logic in mappings. +- Scales Efficiently: Handles large volumes of data without compromising on speed or responsiveness. + +By adopting this pattern, developers can build more efficient and scalable subgraphs, providing faster and more reliable data access to end-users. To learn more about implementing timeseries and aggregations, refer to the [Timeseries and Aggregations Readme](https://github.com/graphprotocol/graph-node/blob/master/docs/aggregations.md) and consider experimenting with this feature in your subgraphs. + +## Subgraph Best Practices 1-6 + +1. [Improve Query Speed with Subgraph Pruning](/cookbook/pruning/) + +2. [Improve Indexing and Query Responsiveness by Using @derivedFrom](/cookbook/derivedfrom/) + +3. [Improve Indexing and Query Performance by Using Immutable Entities and Bytes as IDs](/cookbook/immutable-entities-bytes-as-ids/) + +4. [Improve Indexing Speed by Avoiding `eth_calls`](/cookbook/avoid-eth-calls/) + +5. [Simplify and Optimize with Timeseries and Aggregations](/cookbook/timeseries/) + +6. [Use Grafting for Quick Hotfix Deployment](/cookbook/grafting-hotfix/) diff --git a/website/pages/pt/cookbook/upgrading-a-subgraph.mdx b/website/pages/pt/cookbook/upgrading-a-subgraph.mdx deleted file mode 100644 index 070020c2995a..000000000000 --- a/website/pages/pt/cookbook/upgrading-a-subgraph.mdx +++ /dev/null @@ -1,156 +0,0 @@ ---- -title: Como Atualizar um Subgraph Existente à Graph Network ---- - -## Introdução - -Este guia ensina como atualizar o seu subgraph do serviço hospedado à rede descentralizada do The Graph. 
Mais de mil subgraphs foram atualizados à Graph Network com êxito, incluindo projetos como Snapshot, Loopring, Audius, Premia, Livepeer, Uma, Curve, Lido e muito mais! - -O processo é rápido, e os seus subgraphs só tem a ganhar com a confiabilidade e o desempenho da rede da The Graph Network. - -### Pré-requisitos - -- Ter um subgraph editado no serviço hospedado. - -## Como Atualizar um Subgraph Existente à Graph Network - - - -Se estiver logado no serviço hospedado, podes acessar um fluxo simples para atualizar os seus subgraphs do [seu painel](https://thegraph.com/hosted-service/dashboard), ou de uma página de um subgraph individual. - -> Este processo não costuma levar mais que cinco minutos. - -1. Selecione o(s) subgraph(s) que queres atualizar. -2. Conecte ou insira a carteira destinatária (a carteira que será a dona do subgraph). -3. Clique no botão "Upgrade". - -Pronto! Os seus subgraphs serão editados ao Subgraph Studio e publicados na Graph Network. É possível acessar o [Subgraph Studio](https://thegraph.com/studio/) para gerir os seus subgraphs após fazer login com a carteira especificada durante o processo de atualização. - -Poderá ver os seus subgraphs ao vivo na rede descentralizada via [Graph Explorer](https://thegraph.com/explorer). - -### E agora? - -Quando o seu subgraph for atualizado, será indexado automaticamente pelo indexador de atualização. Se a chain indexada for [totalmente apoiada pela Graph Network](/developing/supported-networks), podes adicionar um pouco de GRT como "sinal" para atrair mais Indexadores. Recomendamos curar o seu subgraph com no mínimo 3000 GRT para atrair 2 ou 3 Indexadores, para uma qualidade maior de serviço. - -Podes começar a consultar o seu subgraph imediatamente na Graph Network após gerar uma chave API. - -### Como criar uma chave API - -É possível gerar uma chave API no Subgraph Studio [aqui](https://thegraph.com/studio/apikeys/). 
- -![Página de criação de chave API](/img/api-image.png) - -Podes usar esta chave API para consultar subgraphs em queries na Graph Network. Todos os utilizadores começam no Plano Grátis, que inclui 100 mil queries grátis por mês. Programadores podem assinar o Plano de Crescimento ao conectar um cartão de crédito ou débito, ou depositar GRT, ao sistema de cobranças do Subgraph Studio. - -> Nota: confira a [documentação das cobranças](../billing) para mais informações sobre planos e a gestão das suas cobranças no Subgraph Studio. - -### Como proteger a sua chave API - -É ideal proteger a API com imposições de limite ao seu uso, em duas maneiras: - -1. Subgraphs Autorizados -2. Domínio Autorizado - -A sua chave API pode ser guardada [aqui](https://thegraph.com/studio/apikeys/). - -![Página de trancamento de subgraphs](/img/subgraph-lockdown.png) - -### Como consultar o seu subgraph na rede descentralizada - -Agora é possível verificar o estado dos Indexadores da rede no Graph Explorer (exemplo [aqui](https://thegraph.com/explorer/subgraphs/Dtj2HicXKpoUjNB7ffdBkMwt3L9Sz3cbENd67AdHu6Vb?view=Indexers&chain=arbitrum-one)). A linha verde no topo indica que na hora da postagem, 7 Indexadores indexaram aquele subgraph com sucesso. Na aba do Indexador, dá para ver quais Indexadores captaram seu subgraph. - -![Subgraph do Rocket Pool](/img/rocket-pool-subgraph.png) - -Assim que o primeiro Indexer tiver indexado o seu subgraph por completo, pode começar a consultar o subgraph na rede descentralizada. O URL de consulta para o seu subgraph pode ser copiado e colado com um clique no símbolo próximo ao URL de consulta. Aparecerá algo assim: - -`https://gateway.thegraph.com/api/[api-key]/subgraphs/id/S9ihna8D733WTEShJ1KctSTCvY1VJ7gdVwhUujq4Ejo` - -Importante: Substitua o `[api-key]` com uma chave API verídica, gerada na seção acima. - -Agora, pode usar aquele URL de Consulta no seu dapp para enviar os seus pedidos no GraphQL. - -Parabéns! Viraste um pioneiro da descentralização! 
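Como ilustração, eis um esboço mínimo em Python de como um dapp poderia montar um pedido GraphQL ao URL do gateway. A chave API é um valor fictício (substitua pela sua), e o query `stakers` é o mesmo usado no exemplo de query deste guia:

```python
# Esboço ilustrativo: montar (e opcionalmente enviar) um query GraphQL
# ao gateway da Graph Network. A chave API abaixo é fictícia.
import json
import urllib.request

API_KEY = "sua-chave-api"  # gere a sua no Subgraph Studio
URL = (
    "https://gateway.thegraph.com/api/"
    + API_KEY
    + "/subgraphs/id/S9ihna8D733WTEShJ1KctSTCvY1VJ7gdVwhUujq4Ejo"
)

query = "{ stakers(first: 5) { id } }"
payload = json.dumps({"query": query}).encode("utf-8")

req = urllib.request.Request(
    URL,
    data=payload,
    headers={"Content-Type": "application/json"},
)
# resposta = urllib.request.urlopen(req)  # descomente para enviar de verdade
print(req.get_method())  # POST
```

O pedido é um POST simples com o corpo `{"query": ...}`; qualquer biblioteca HTTP serve.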
- -> Nota: Devido à natureza distribuída da rede, pode ser que Indexadores diferentes tenham indexado até blocos diferentes. Para receber apenas dados recentes, especifique o bloco mínimo que um indexador deve indexar para servir seu query com o argumento block: `{ number_gte: $minBlock }` como no exemplo abaixo: - -```graphql -{ - stakers(block: { number_gte: 14486109 }) { - id - } -} -``` - -Veja mais informações sobre a rede, e como lidar com reorganizações, no artigo da documentação [Sistemas Distribuídos](/querying/distributed-systems/). - -## Como Atualizar um Subgraph na Rede - -É possível atualizar um subgraph já existente na rede ao lançar uma nova versão do seu subgraph ao Subgraph Studio, através do Graph CLI. - -1. Faça mudanças no seu subgraph atual. -2. Lance o seguinte e especifique a nova versão no comando (por ex. v0.0.1, v0.0.2, etc.): - -```sh -graph deploy --studio --version -``` - -3. Teste a nova versão no Subgraph Studio com queries no playground -4. Publique a nova versão na rede do The Graph. Não esqueça que isto exige gas (como descrito acima). - -### Sobre as Taxas de Upgrade para o Dono - -> Nota: A curadoria no Arbitrum tem uma bonding curve fixa. Aprenda mais sobre o Arbitrum [aqui](/arbitrum/arbitrum-faq/). - -Upgrades exigem GRT para migrar da versão antiga do subgraph à versão nova. Portanto, a cada atualização, será criada uma bonding curve (curva de união; mais sobre bonding curves aqui: [here](/network/curating#bonding-curve-101)). - -A nova bonding curve cobra a taxa de curação de 1% sobre todo GRT a ser migrado à nova versão. O titular deve pagar 50% disto, ou 1,25%. Os outros 1,25% são absorvidos por todos os curadores como um tributo. Este incentivo existe para que o dono de um subgraph não possa esvaziar os fundos dos curadores com chamadas recursivas de atualização. Se não houver atividade de curação, é necessário pagar no mínimo 100 GRT para sinalizar seu próprio subgraph. - -Vamos fazer um exemplo. 
Isto só acontece se o seu subgraph for curado ativamente: - -- São sinalizados 100.000 GRT com a função de migração automática na v1 de um subgraph -- O dono atualiza à v2. São migrados 100.000 GRT a uma nova bonding curve, sendo que 97,500 GRT entram na curva nova e 2.500 são queimados -- O dono então queima 1.250 GRT para pagar por metade da taxa. O dono deve ter isto na sua carteira antes da atualização; caso contrário, o upgrade falhará. Isto acontece na mesma transação do upgrade. - -_Enquanto este mecanismo permanece ao vivo na rede, a comunidade atualmente discute maneiras de reduzir o custo de atualizações para programadores de subgraphs._ - -### Como Conservar uma Versão Estável de Subgraph - -Se for fazer muitas mudanças ao seu subgraph, não é bom atualizá-lo constantemente e afrontar os custos da atualização. É importante conservar uma versão estável e consistente do seu subgraph; não só pelo custo, mas também para que os Indexadores tenham confiança em seus tempos de sincronização. Os Indexadores devem ser avisados dos seus planos de atualização, para que os tempos de sincronização dos Indexadores não sejam afetados. Use à vontade o [canal dos #Indexers](https://discord.gg/JexvtHa7dq) no Discord para avisar aos Indexadores quando for mudar a versão dos seus subgraphs. - -Subgraphs são APIs abertas usadas por programadores externos. As APIs abertas devem seguir padrões estritos para não quebrarem os aplicativos de programadores externos. Na The Graph Network (rede do The Graph), um programador de Subgraph deve considerar os Indexadores e o tempo que levam para sincronizar um novo subgraph, **assim como** outros desenvolvedores a usarem seus subgraphs. - -### Como Atualizar os Metadados de um Subgraph - -Os metadados dos seus subgraphs podem ser atualizados sem precisar editar uma versão nova. Os metadados incluem o nome do subgraph, a imagem, a descrição, o URL do site, o URL do código fonte, e categorias. 
Os programadores podem fazer isto com uma atualização dos detalhes de seus subgraphs no Subgraph Studio, onde todos os campos aplicáveis podem ser editados. - -Marque a opção **Update Subgraph Details in Explorer** (Atualizar Detalhes do Subgraph no Explorador) e clique em **Save** (Salvar). Se marcada, será gerada uma transação on-chain que atualiza detalhes do subgraph no Explorer, sem precisar publicar uma nova versão com um novo lançamento. - -## As Melhores Práticas para Lançar um Subgraph à Graph Network - -1. Use um nome ENS para Desenvolvimento de Subgraph: - -- Prepare o seu ENS [aqui](https://app.ens.domains/) -- Adicione o seu nome ENS às suas configurações [aqui](https://thegraph.com/explorer/settings?view=display-name). - -2. Quanto mais preenchidos os seus perfis, maiores serão as oportunidades de indexar e curar os seus subgraphs. - -## Como Depreciar um Subgraph na The Graph Network - -Siga os passos [aqui](/managing/transfer-and-deprecate-a-subgraph) para depreciar o seu subgraph e retirá-lo da The Graph Network. - -## Queries em um Subgraph + Cobrança na The Graph Network - -O Serviço Hospedado foi preparado para que os programadores lancem os seus subgraphs sem qualquer restrição. - -Na Graph Network, é necessário pagar taxas de query como uma parte essencial dos incentivos do protocolo. Para saber mais sobre subscrições em APIs e pagamentos de taxas de query, confira a documentação das cobranças [aqui](/billing/). - -## Outros Recursos - -Se ainda tem dúvidas, não tem problema! 
Confira os seguintes recursos ou assista o nosso guia em vídeo sobre atualizar e migrar subgraphs à rede descentralizada abaixo: - - - -- [Contratos da Graph Network](https://github.com/graphprotocol/contracts) -- [Contrato de Curadoria](https://github.com/graphprotocol/contracts/blob/dev/contracts/curation/Curation.sol) - o contrato subjacente em qual o GNS se revolve - - Endereço - `0x8fe00a685bcb3b2cc296ff6ffeab10aca4ce1538` -- [Documentação do Subgraph Studio](/deploying/subgraph-studio) diff --git a/website/pages/ro/cookbook/base-testnet.mdx b/website/pages/ro/cookbook/base-testnet.mdx deleted file mode 100644 index 3a1d98a44103..000000000000 --- a/website/pages/ro/cookbook/base-testnet.mdx +++ /dev/null @@ -1,111 +0,0 @@ ---- -title: Building Subgraphs on Base ---- - -This guide will quickly take you through how to initialize, create, and deploy your subgraph on Base testnet. - -What you'll need: - -- A Base Sepolia testnet contract address -- A crypto wallet (e.g. MetaMask or Coinbase Wallet) - -## Subgraph Studio - -### 1. Install the Graph CLI - -The Graph CLI (>=v0.41.0) is written in JavaScript and you will need to have either `npm` or `yarn` installed to use it. - -```sh -# NPM -npm install -g @graphprotocol/graph-cli - -# Yarn -yarn global add @graphprotocol/graph-cli -``` - -### 2. Create your subgraph in Subgraph Studio - -Go to [Subgraph Studio](https://thegraph.com/studio/) and connect your crypto wallet. - -Once connected, click "Create a Subgraph", enter a name for your subgraph and click Create a Subgraph. - -### 3. Initialize your Subgraph - -> You can find specific commands for your subgraph in Subgraph Studio. - -Make sure that the graph-cli is updated to latest (above 0.41.0) - -```sh -graph --version -``` - -Initialize your subgraph from an existing contract. - -```sh -graph init --studio -``` - -Your subgraph slug is an identifier for your subgraph. 
The CLI tool will walk you through the steps for creating a subgraph, including: - -- Protocol: ethereum -- Subgraph slug: `` -- Directory to create the subgraph in: `` -- Ethereum network: base-sepolia -- Contract address: `` -- Start block (optional) -- Contract name: `` -- Yes/no to indexing events (yes means your subgraph will be bootstrapped with entities in the schema and simple mappings for emitted events) - -### 4. Write your Subgraph - -> If emitted events are the only thing you want to index, then no additional work is required, and you can skip to the next step. - -The previous command creates a scaffold subgraph that you can use as a starting point for building your subgraph. When making changes to the subgraph, you will mainly work with three files: - -- Manifest (subgraph.yaml) - The manifest defines what datasources your subgraphs will index. Make sure to add `base-sepolia` as the network name in the manifest file to deploy your subgraph on Base Sepolia. -- Schema (schema.graphql) - The GraphQL schema defines what data you wish to retrieve from the subgraph. -- AssemblyScript Mappings (mapping.ts) - This is the code that translates data from your datasources to the entities defined in the schema. - -If you want to index additional data, you will need to extend the manifest, schema, and mappings. - -For more information on how to write your subgraph, see [Creating a Subgraph](/developing/creating-a-subgraph). - -### 5. Deploy to Subgraph Studio - -Before you can deploy your subgraph, you will need to authenticate with Subgraph Studio. You can do this by running the following command: - -Authenticate the subgraph in Studio: - -``` -graph auth --studio -``` - -Next, enter your subgraph's directory. - -``` -cd  -``` - -Build your subgraph with the following command: - -``` -graph codegen && graph build -``` - -Finally, you can deploy your subgraph using this command: - -``` -graph deploy --studio  -``` - -### 6. 
Query your subgraph - -Once your subgraph is deployed, you can query it from your dapp using the `Development Query URL` in Subgraph Studio. - -Note - The Studio API is rate-limited, so it should preferably be used for development and testing. - -To learn more about querying data from your subgraph, see the [Querying a Subgraph](/querying/querying-the-graph) page. diff --git a/website/pages/ro/cookbook/grafting-hotfix.mdx b/website/pages/ro/cookbook/grafting-hotfix.mdx new file mode 100644 index 000000000000..4be0a0b07790 --- /dev/null +++ b/website/pages/ro/cookbook/grafting-hotfix.mdx @@ -0,0 +1,186 @@ +--- +title: Subgraph Best Practice 6 - Use Grafting for Quick Hotfix Deployment +--- + +## TLDR + +Grafting is a powerful feature in subgraph development that allows you to build and deploy new subgraphs while reusing the indexed data from existing ones. + +### Overview + +This feature enables quick deployment of hotfixes for critical issues, eliminating the need to re-index the entire subgraph from scratch. By preserving historical data, grafting minimizes downtime and ensures continuity in data services. + +## Benefits of Grafting for Hotfixes + +1. **Rapid Deployment** + + - **Minimize Downtime**: When a subgraph encounters a critical error and stops indexing, grafting enables you to deploy a fix immediately without waiting for re-indexing. + - **Immediate Recovery**: The new subgraph continues from the last indexed block, ensuring that data services remain uninterrupted. + +2. **Data Preservation** + + - **Reuse Historical Data**: Grafting copies the existing data from the base subgraph, so you don’t lose valuable historical records. + - **Consistency**: Maintains data continuity, which is crucial for applications relying on consistent historical data. + +3. **Efficiency** + - **Save Time and Resources**: Avoids the computational overhead of re-indexing large datasets. 
+ - **Focus on Fixes**: Allows developers to concentrate on resolving issues rather than managing data recovery. + +## Best Practices When Using Grafting for Hotfixes + +1. **Initial Deployment Without Grafting** + + - **Start Clean**: Always deploy your initial subgraph without grafting to ensure that it’s stable and functions as expected. + - **Test Thoroughly**: Validate the subgraph’s performance to minimize the need for future hotfixes. + +2. **Implementing the Hotfix with Grafting** + + - **Identify the Issue**: When a critical error occurs, determine the block number of the last successfully indexed event. + - **Create a New Subgraph**: Develop a new subgraph that includes the hotfix. + - **Configure Grafting**: Use grafting to copy data up to the identified block number from the failed subgraph. + - **Deploy Quickly**: Publish the grafted subgraph to restore service as soon as possible. + +3. **Post-Hotfix Actions** + + - **Monitor Performance**: Ensure the grafted subgraph is indexing correctly and the hotfix resolves the issue. + - **Republish Without Grafting**: Once stable, deploy a new version of the subgraph without grafting for long-term maintenance. + > Note: Relying on grafting indefinitely is not recommended as it can complicate future updates and maintenance. + - **Update References**: Redirect any services or applications to use the new, non-grafted subgraph. + +4. **Important Considerations** + - **Careful Block Selection**: Choose the graft block number carefully to prevent data loss. + - **Tip**: Use the block number of the last correctly processed event. + - **Use Deployment ID**: Ensure you reference the Deployment ID of the base subgraph, not the Subgraph ID. + - **Note**: The Deployment ID is the unique identifier for a specific subgraph deployment. + - **Feature Declaration**: Remember to declare grafting in the subgraph manifest under features. 
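The block-selection tip above can be sketched in a few lines. The helper and block numbers below are invented for illustration and are not part of any Graph tooling:

```python
# Minimal sketch (hypothetical helper): derive the graft point from the
# last correctly processed event, then start the new data source one block later.

def pick_graft_point(processed_blocks: list[int], failed_block: int) -> tuple[int, int]:
    """Return (graft_block, new_start_block) given the blocks whose events
    were processed correctly and the block where indexing failed."""
    good = [b for b in processed_blocks if b < failed_block]
    graft_block = max(good)              # last correctly processed event
    return graft_block, graft_block + 1  # new dataSource startBlock

graft_block, start_block = pick_graft_point(
    [5999990, 5999997, 6000000], failed_block=6000001
)
print(graft_block, start_block)  # 6000000 6000001
```

These two numbers are exactly what the example manifest below plugs into `graft.block` and `startBlock`.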
+ +## Example: Deploying a Hotfix with Grafting + +Suppose you have a subgraph tracking a smart contract that has stopped indexing due to a critical error. Here’s how you can use grafting to deploy a hotfix. + +1. **Failed Subgraph Manifest (subgraph.yaml)** + + ```yaml + specVersion: 1.0.0 + schema: + file: ./schema.graphql + dataSources: + - kind: ethereum/contract + name: OldSmartContract + network: sepolia + source: + address: '0xOldContractAddress' + abi: Lock + startBlock: 5000000 + mapping: + kind: ethereum/events + apiVersion: 0.0.7 + language: wasm/assemblyscript + entities: + - Withdrawal + abis: + - name: Lock + file: ./abis/OldLock.json + eventHandlers: + - event: Withdrawal(uint256,uint256) + handler: handleOldWithdrawal + file: ./src/old-lock.ts + ``` + +2. **New Grafted Subgraph Manifest (subgraph.yaml)** + ```yaml + specVersion: 1.0.0 + schema: + file: ./schema.graphql + dataSources: + - kind: ethereum/contract + name: NewSmartContract + network: sepolia + source: + address: '0xNewContractAddress' + abi: Lock + startBlock: 6000001 # Block after the last indexed block + mapping: + kind: ethereum/events + apiVersion: 0.0.7 + language: wasm/assemblyscript + entities: + - Withdrawal + abis: + - name: Lock + file: ./abis/Lock.json + eventHandlers: + - event: Withdrawal(uint256,uint256) + handler: handleWithdrawal + file: ./src/lock.ts + features: + - grafting + graft: + base: QmBaseDeploymentID # Deployment ID of the failed subgraph + block: 6000000 # Last successfully indexed block + ``` + +**Explanation:** + +- **Data Source Update**: The new subgraph points to 0xNewContractAddress, which may be a fixed version of the smart contract. +- **Start Block**: Set to one block after the last successfully indexed block to avoid reprocessing the error. +- **Grafting Configuration**: + - **base**: Deployment ID of the failed subgraph. + - **block**: Block number where grafting should begin. + +3. 
**Deployment Steps** + + - **Update the Code**: Implement the hotfix in your mapping scripts (e.g., handleWithdrawal). + - **Adjust the Manifest**: As shown above, update the `subgraph.yaml` with grafting configurations. + - **Deploy the Subgraph**: + - Authenticate with the Graph CLI. + - Deploy the new subgraph using `graph deploy`. + +4. **Post-Deployment** + - **Verify Indexing**: Check that the subgraph is indexing correctly from the graft point. + - **Monitor Data**: Ensure that new data is being captured and the hotfix is effective. + - **Plan for Republish**: Schedule the deployment of a non-grafted version for long-term stability. + +## Warnings and Cautions + +While grafting is a powerful tool for deploying hotfixes quickly, there are specific scenarios where it should be avoided to maintain data integrity and ensure optimal performance. + +- **Incompatible Schema Changes**: If your hotfix requires altering the type of existing fields or removing fields from your schema, grafting is not suitable. Grafting expects the new subgraph’s schema to be compatible with the base subgraph’s schema. Incompatible changes can lead to data inconsistencies and errors because the existing data won’t align with the new schema. +- **Significant Mapping Logic Overhauls**: When the hotfix involves substantial modifications to your mapping logic—such as changing how events are processed or altering handler functions—grafting may not function correctly. The new logic might not be compatible with the data processed under the old logic, leading to incorrect data or failed indexing. +- **Deployments to The Graph Network**: Grafting is not recommended for subgraphs intended for The Graph’s decentralized network (mainnet). It can complicate indexing and may not be fully supported by all Indexers, potentially causing unexpected behavior or increased costs. For mainnet deployments, it’s safer to re-index the subgraph from scratch to ensure full compatibility and reliability. 
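+To make the first warning concrete, here is a hypothetical before/after schema pair (the entity and field names are invented for illustration). Changing a field's type between the base and the grafted subgraph is exactly the kind of incompatible change grafting cannot handle:
+
+```graphql
+# Base subgraph schema (already indexed)
+type Withdrawal @entity(immutable: true) {
+  id: Bytes!
+  amount: BigInt!
+}
+
+# Grafted subgraph schema (NOT graft-safe):
+# `amount` changed from BigInt to BigDecimal, so copied rows no longer match
+type Withdrawal @entity(immutable: true) {
+  id: Bytes!
+  amount: BigDecimal!
+}
+```
+
+A graft-safe hotfix keeps the types of existing fields intact; per the grafting documentation, safe schema differences are limited to changes such as adding nullable fields or adding new entity types.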
+ +### Risk Management + +- **Data Integrity**: Incorrect block numbers can lead to data loss or duplication. +- **Testing**: Always test grafting in a development environment before deploying to production. + +## Conclusion + +Grafting is an effective strategy for deploying hotfixes in subgraph development, enabling you to: + +- **Quickly Recover** from critical errors without re-indexing. +- **Preserve Historical Data**, maintaining continuity for applications and users. +- **Ensure Service Availability** by minimizing downtime during critical fixes. + +However, it’s important to use grafting judiciously and follow best practices to mitigate risks. After stabilizing your subgraph with the hotfix, plan to deploy a non-grafted version to ensure long-term maintainability. + +## Additional Resources + +- **[Grafting Documentation](/cookbook/grafting/)**: Replace a Contract and Keep its History With Grafting +- **[Understanding Deployment IDs](/querying/querying-by-subgraph-id-vs-deployment-id/)**: Learn the difference between Deployment ID and Subgraph ID. + +By incorporating grafting into your subgraph development workflow, you can enhance your ability to respond to issues swiftly, ensuring that your data services remain robust and reliable. + +## Subgraph Best Practices 1-6 + +1. [Improve Query Speed with Subgraph Pruning](/cookbook/pruning/) + +2. [Improve Indexing and Query Responsiveness by Using @derivedFrom](/cookbook/derivedfrom/) + +3. [Improve Indexing and Query Performance by Using Immutable Entities and Bytes as IDs](/cookbook/immutable-entities-bytes-as-ids/) + +4. [Improve Indexing Speed by Avoiding `eth_calls`](/cookbook/avoid-eth-calls/) + +5. [Simplify and Optimize with Timeseries and Aggregations](/cookbook/timeseries/) + +6. 
[Use Grafting for Quick Hotfix Deployment](/cookbook/grafting-hotfix/) diff --git a/website/pages/ro/cookbook/timeseries.mdx b/website/pages/ro/cookbook/timeseries.mdx new file mode 100644 index 000000000000..88ee70005a6e --- /dev/null +++ b/website/pages/ro/cookbook/timeseries.mdx @@ -0,0 +1,194 @@ +--- +title: Subgraph Best Practice 5 - Simplify and Optimize with Timeseries and Aggregations +--- + +## TLDR + +Leveraging the new time-series and aggregations feature in subgraphs can significantly enhance both indexing speed and query performance. + +## Overview + +Timeseries and aggregations reduce data processing overhead and accelerate queries by offloading aggregation computations to the database and simplifying mapping code. This approach is particularly effective when handling large volumes of time-based data. + +## Benefits of Timeseries and Aggregations + +1. Improved Indexing Time + +- Less Data to Load: Mappings handle less data since raw data points are stored as immutable timeseries entities. +- Database-Managed Aggregations: Aggregations are automatically computed by the database, reducing the workload on the mappings. + +2. Simplified Mapping Code + +- No Manual Calculations: Developers no longer need to write complex aggregation logic in mappings. +- Reduced Complexity: Simplifies code maintenance and minimizes the potential for errors. + +3. Dramatically Faster Queries + +- Immutable Data: All timeseries data is immutable, enabling efficient storage and retrieval. +- Efficient Data Separation: Aggregates are stored separately from raw timeseries data, allowing queries to process significantly less data—often several orders of magnitude less. + +### Important Considerations + +- Immutable Data: Timeseries data cannot be altered once written, ensuring data integrity and simplifying indexing. +- Automatic ID and Timestamp Management: id and timestamp fields are automatically managed by graph-node, reducing potential errors. 
+- Efficient Data Storage: By separating raw data from aggregates, storage is optimized, and queries run faster. + +## How to Implement Timeseries and Aggregations + +### Defining Timeseries Entities + +A timeseries entity represents raw data points collected over time. It is defined with the `@entity(timeseries: true)` annotation. Key requirements: + +- Immutable: Timeseries entities are always immutable. +- Mandatory Fields: + - `id`: Must be of type `Int8!` and is auto-incremented. + - `timestamp`: Must be of type `Timestamp!` and is automatically set to the block timestamp. + +Example: + +```graphql +type Data @entity(timeseries: true) { + id: Int8! + timestamp: Timestamp! + price: BigDecimal! +} +``` + +### Defining Aggregation Entities + +An aggregation entity computes aggregated values from a timeseries source. It is defined with the `@aggregation` annotation. Key components: + +- Annotation Arguments: + - `intervals`: Specifies time intervals (e.g., `["hour", "day"]`). + +Example: + +```graphql +type Stats @aggregation(intervals: ["hour", "day"], source: "Data") { + id: Int8! + timestamp: Timestamp! + sum: BigDecimal! @aggregate(fn: "sum", arg: "price") +} +``` + +In this example, Stats aggregates the price field from Data over hourly and daily intervals, computing the sum. + +### Querying Aggregated Data + +Aggregations are exposed via query fields that allow filtering and retrieval based on dimensions and time intervals. + +Example: + +```graphql +{ + tokenStats( + interval: "hour" + where: { token: "0x1234567890abcdef", timestamp_gte: "1704164640000000", timestamp_lt: "1704251040000000" } + ) { + id + timestamp + token { + id + } + totalVolume + priceUSD + count + } +} +``` + +### Using Dimensions in Aggregations + +Dimensions are non-aggregated fields used to group data points. They enable aggregations based on specific criteria, such as a token in a financial application. 
+ +Example: + +### Timeseries Entity + +```graphql +type TokenData @entity(timeseries: true) { + id: Int8! + timestamp: Timestamp! + token: Token! + amount: BigDecimal! + priceUSD: BigDecimal! +} +``` + +### Aggregation Entity with Dimension + +```graphql +type TokenStats @aggregation(intervals: ["hour", "day"], source: "TokenData") { + id: Int8! + timestamp: Timestamp! + token: Token! + totalVolume: BigDecimal! @aggregate(fn: "sum", arg: "amount") + priceUSD: BigDecimal! @aggregate(fn: "last", arg: "priceUSD") + count: Int8! @aggregate(fn: "count", cumulative: true) +} +``` + +- Dimension Field: token groups the data, so aggregates are computed per token. +- Aggregates: + - totalVolume: Sum of amount. + - priceUSD: Last recorded priceUSD. + - count: Cumulative count of records. + +### Aggregation Functions and Expressions + +Supported aggregation functions: + +- sum +- count +- min +- max +- first +- last + +### The arg in @aggregate can be + +- A field name from the timeseries entity. +- An expression using fields and constants. + +### Examples of Aggregation Expressions + +- Sum Token Value: @aggregate(fn: "sum", arg: "priceUSD \* amount") +- Maximum Positive Amount: @aggregate(fn: "max", arg: "greatest(amount0, amount1, 0)") +- Conditional Sum: @aggregate(fn: "sum", arg: "case when amount0 > amount1 then amount0 else 0 end") + +Supported operators and functions include basic arithmetic (+, -, \*, /), comparison operators, logical operators (and, or, not), and SQL functions like greatest, least, coalesce, etc. + +### Query Parameters + +- interval: Specifies the time interval (e.g., "hour"). +- where: Filters based on dimensions and timestamp ranges. +- timestamp_gte / timestamp_lt: Filters for start and end times (microseconds since epoch). + +### Notes + +- Sorting: Results are automatically sorted by timestamp and id in descending order. +- Current Data: An optional current argument can include the current, partially filled interval.
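+The notes above can be combined into a single query sketch. This is illustrative only: the entity and field names come from the `TokenStats` example earlier, and the `current: include` value is assumed from the graph-node aggregations README, so verify it against your graph-node version:
+
+```graphql
+{
+  tokenStats(
+    interval: "hour"
+    current: include # also return the not-yet-finished hour
+    where: { token: "0x1234567890abcdef" }
+  ) {
+    id
+    timestamp
+    totalVolume
+    count
+  }
+}
+```
+
+Since results are sorted by timestamp and id in descending order, the first item returned is the most recent interval.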
+ +## Conclusion + +Implementing timeseries and aggregations in subgraphs is a best practice for projects dealing with time-based data. This approach: + +- Enhances Performance: Speeds up indexing and querying by reducing data processing overhead. +- Simplifies Development: Eliminates the need for manual aggregation logic in mappings. +- Scales Efficiently: Handles large volumes of data without compromising on speed or responsiveness. + +By adopting this pattern, developers can build more efficient and scalable subgraphs, providing faster and more reliable data access to end-users. To learn more about implementing timeseries and aggregations, refer to the [Timeseries and Aggregations Readme](https://github.com/graphprotocol/graph-node/blob/master/docs/aggregations.md) and consider experimenting with this feature in your subgraphs. + +## Subgraph Best Practices 1-6 + +1. [Improve Query Speed with Subgraph Pruning](/cookbook/pruning/) + +2. [Improve Indexing and Query Responsiveness by Using @derivedFrom](/cookbook/derivedfrom/) + +3. [Improve Indexing and Query Performance by Using Immutable Entities and Bytes as IDs](/cookbook/immutable-entities-bytes-as-ids/) + +4. [Improve Indexing Speed by Avoiding `eth_calls`](/cookbook/avoid-eth-calls/) + +5. [Simplify and Optimize with Timeseries and Aggregations](/cookbook/timeseries/) + +6. [Use Grafting for Quick Hotfix Deployment](/cookbook/grafting-hotfix/) diff --git a/website/pages/ro/cookbook/upgrading-a-subgraph.mdx b/website/pages/ro/cookbook/upgrading-a-subgraph.mdx deleted file mode 100644 index a546f02c0800..000000000000 --- a/website/pages/ro/cookbook/upgrading-a-subgraph.mdx +++ /dev/null @@ -1,156 +0,0 @@ ---- -title: Upgrading an Existing Subgraph to The Graph Network ---- - -## Introduction - -This is a guide on how to upgrade your subgraph from the hosted service to The Graph's decentralized network.
Over 1,000 subgraphs have successfully upgraded to The Graph Network including projects like Snapshot, Loopring, Audius, Premia, Livepeer, Uma, Curve, Lido, and many more! - -The process of upgrading is quick and your subgraphs will forever benefit from the reliability and performance that you can only get on The Graph Network. - -### Prerequisites - -- You have a subgraph deployed on the hosted service. - -## Upgrading an Existing Subgraph to The Graph Network - - - -If you are logged in to the hosted service, you can access a simple flow to upgrade your subgraphs from [your dashboard](https://thegraph.com/hosted-service/dashboard), or from an individual subgraph page. - -> This process typically takes less than five minutes. - -1. Select the subgraph(s) you want to upgrade. -2. Connect or enter the receiving wallet (the wallet that will become the owner of the subgraph). -3. Click the "Upgrade" button. - -That's it! Your subgraphs will be deployed to Subgraph Studio, and published on The Graph Network. You can access the [Subgraph Studio](https://thegraph.com/studio/) to manage your subgraphs, logging in with the wallet specified during the upgrade process. - -You'll be able to view your subgraphs live on the decentralized network via [Graph Explorer](https://thegraph.com/explorer). - -### What next? - -When your subgraph is upgraded, it will automatically be indexed by the upgrade indexer. If the indexed chain is [fully supported by The Graph Network](/developing/supported-networks), you can add some GRT as "signal", to attract more indexers. It is recommended to curate your subgraph with at least 3,000 GRT to attract 2-3 Indexers for higher quality of service. - -You can start to query your subgraph right away on The Graph Network, once you have generated an API key. - -### Create an API key - -You can generate an API key in Subgraph Studio [here](https://thegraph.com/studio/apikeys/). 
- -![API key creation page](/img/api-image.png) - -You can use this API key to query subgraphs on The Graph Network. All users start on the Free Plan, which includes 100,000 free queries per month. Developers can sign up for the Growth Plan by connecting a credit or debit card, or by depositing GRT to Subgraph Studio billing system. - -> Note: see the [billing documentation](../billing) for more information on plans, and on managing your billing on Subgraph Studio. - -### Securing your API key - -It is recommended that you secure the API by limiting its usage in two ways: - -1. Authorized Subgraphs -2. Authorized Domain - -You can secure your API key [here](https://thegraph.com/studio/apikeys/). - -![Subgraph lockdown page](/img/subgraph-lockdown.png) - -### Querying your subgraph on the decentralized network - -Now you can check the indexing status of the Indexers on the network in Graph Explorer (example [here](https://thegraph.com/explorer/subgraphs/Dtj2HicXKpoUjNB7ffdBkMwt3L9Sz3cbENd67AdHu6Vb?view=Indexers&chain=arbitrum-one)). The green line at the top indicates that at the time of posting 7 Indexers successfully indexed that subgraph. Also in the Indexer tab you can see which Indexers picked up your subgraph. - -![Rocket Pool subgraph](/img/rocket-pool-subgraph.png) - -As soon as the first Indexer has fully indexed your subgraph you can start to query the subgraph on the decentralized network. In order to retrieve the query URL for your subgraph, you can copy/paste it by clicking on the symbol next to the query URL. You will see something like this: - -`https://gateway.thegraph.com/api/[api-key]/subgraphs/id/S9ihna8D733WTEShJ1KctSTCvY1VJ7gdVwhUujq4Ejo` - -Important: Make sure to replace `[api-key]` with an actual API key generated in the section above. - -You can now use that Query URL in your dapp to send your GraphQL requests to. - -Congratulations! You are now a pioneer of decentralization! 
- -> Note: Due to the distributed nature of the network it might be the case that different Indexers have indexed up to different blocks. In order to only receive fresh data you can specify the minimum block an Indexer has to have indexed in order to serve your query with the block: `{ number_gte: $minBlock }` field argument as shown in the example below: - -```graphql -{ - stakers(block: { number_gte: 14486109 }) { - id - } -} -``` - -More information about the nature of the network and how to handle re-orgs are described in the documentation article [Distributed Systems](/querying/distributed-systems/). - -## Updating a Subgraph on the Network - -If you would like to update an existing subgraph on the network, you can do this by deploying a new version of your subgraph to Subgraph Studio using the Graph CLI. - -1. Make changes to your current subgraph. -2. Deploy the following and specify the new version in the command (eg. v0.0.1, v0.0.2, etc): - -```sh -graph deploy --studio --version -``` - -3. Test the new version in Subgraph Studio by querying in the playground -4. Publish the new version on The Graph Network. Remember that this requires gas (as described in the section above). - -### Owner Update Fee: Deep Dive - -> Note: Curation on Arbitrum has a flat bonding curve. Learn more about Arbitrum [here](/arbitrum/arbitrum-faq/). - -An update requires GRT to be migrated from the old version of the subgraph to the new version. This means that for every update, a new bonding curve will be created (more on bonding curves [here](/network/curating#bonding-curve-101)). - -The new bonding curve charges the 1% curation tax on all GRT being migrated to the new version. The owner must pay 50% of this or 1.25%. The other 1.25% is absorbed by all the curators as a fee. This incentive design is in place to prevent an owner of a subgraph from being able to drain all their curator's funds with recursive update calls. 
If there is no curation activity, you will have to pay a minimum of 100 GRT in order to signal your own subgraph. - -Let's make an example, this is only the case if your subgraph is being actively curated on: - -- 100,000 GRT is signaled using auto-migrate on v1 of a subgraph -- Owner updates to v2. 100,000 GRT is migrated to a new bonding curve, where 97,500 GRT get put into the new curve and 2,500 GRT is burned -- The owner then has 1250 GRT burned to pay for half the fee. The owner must have this in their wallet before the update, otherwise, the update will not succeed. This happens in the same transaction as the update. - -_While this mechanism is currently live on the network, the community is currently discussing ways to reduce the cost of updates for subgraph developers._ - -### Maintaining a Stable Version of a Subgraph - -If you're making a lot of changes to your subgraph, it is not a good idea to continually update it and front the update costs. Maintaining a stable and consistent version of your subgraph is critical, not only from the cost perspective but also so that Indexers can feel confident in their syncing times. Indexers should be flagged when you plan for an update so that Indexer syncing times do not get impacted. Feel free to leverage the [#Indexers channel](https://discord.gg/JexvtHa7dq) on Discord to let Indexers know when you're versioning your subgraphs. - -Subgraphs are open APIs that external developers are leveraging. Open APIs need to follow strict standards so that they do not break external developers' applications. In The Graph Network, a subgraph developer must consider Indexers and how long it takes them to sync a new subgraph **as well as** other developers who are using their subgraphs. - -### Updating the Metadata of a Subgraph - -You can update the metadata of your subgraphs without having to publish a new version. The metadata includes the subgraph name, image, description, website URL, source code URL, and categories. 
Developers can do this by updating their subgraph details in Subgraph Studio where you can edit all applicable fields. - -Make sure **Update Subgraph Details in Explorer** is checked and click on **Save**. If this is checked, an on-chain transaction will be generated that updates subgraph details in the Explorer without having to publish a new version with a new deployment. - -## Best Practices for Deploying a Subgraph to The Graph Network - -1. Leveraging an ENS name for Subgraph Development: - -- Set up your ENS [here](https://app.ens.domains/) -- Add your ENS name to your settings [here](https://thegraph.com/explorer/settings?view=display-name). - -2. The more filled out your profiles are, the better the chances for your subgraphs to be indexed and curated. - -## Deprecating a Subgraph on The Graph Network - -Follow the steps [here](/managing/transfer-and-deprecate-a-subgraph) to deprecate your subgraph and remove it from The Graph Network. - -## Querying a Subgraph + Billing on The Graph Network - -The hosted service was set up to allow developers to deploy their subgraphs without any restrictions. - -On The Graph Network, query fees have to be paid as a core part of the protocol's incentives. For more information on subscribing to APIs and paying the query fees, check out billing documentation [here](/billing/). - -## Additional Resources - -If you're still confused, fear not! 
Check out the following resources or watch our video guide on upgrading subgraphs to the decentralized network below: - - - -- [The Graph Network Contracts](https://github.com/graphprotocol/contracts) -- [Curation Contract](https://github.com/graphprotocol/contracts/blob/dev/contracts/curation/Curation.sol) - the underlying contract that the GNS wraps around - - Address - `0x8fe00a685bcb3b2cc296ff6ffeab10aca4ce1538` -- [Subgraph Studio documentation](/deploying/subgraph-studio) diff --git a/website/pages/ru/cookbook/base-testnet.mdx b/website/pages/ru/cookbook/base-testnet.mdx deleted file mode 100644 index c4236de8cb44..000000000000 --- a/website/pages/ru/cookbook/base-testnet.mdx +++ /dev/null @@ -1,111 +0,0 @@ ---- -title: Создание субграфов на Base ---- - -Это руководство быстро познакомит Вас с тем, как инициализировать, создать и развернуть Ваш субграф в тестовой сети Base. - -Что вам понадобится: - -- Адрес контракта тестовой сети Base Sepolia -- Криптокошелек (например, MetaMask или Coinbase Wallet) - -## Subgraph Studio - -### 1. Установка the Graph CLI - -The Graph CLI (>=v0.41.0) написан на JavaScript, и для его использования Вам потребуется установить либо `npm`, либо `yarn`. - -```sh -# NPM -npm install -g @graphprotocol/graph-cli - -# Yarn -yarn global add @graphprotocol/graph-cli -``` - -### 2. Создание субграфа в Subgraph Studio - -Перейдите в [Subgraph Studio](https://thegraph.com/studio/) и подключите свой криптокошелек. - -После подключения нажмите «Создать субграф», введите имя Вашего субграфа и нажмите «Создать субграф». - -### 3. Инициализация Вашего Субграфа - -> Вы можете найти конкретные команды для своего субграфа в Subgraph Studio. - -Убедитесь, что graph-cli обновлен до последней версии (выше 0.41.0) - -```sh -graph --version -``` - -Инициализируйте свой субграф из существующего контракта. - -```sh -graph init --studio -``` - -Ваш subgraph slug - это идентификатор для Вашего субграфа. 
Инструмент CLI проведет Вас по шагам создания субграфа, включая: - -- Протокол: ethereum -- Subgraph slug: `` -- Каталог для создания подграфа в: `` -- Ethereum network: base-sepolia -- Адрес контракта: `` -- Стартовый блок (необязателен) -- Имя контракта: `` -- Да/нет для индексирования событий ("да" означает, что Ваш субграф будет загружен объектами в схеме и простыми мэппингами для происходящих событий) - -### 3. Создание Вашего субграфа - -> Если происходящие события - это единственное, что Вы хотите проиндексировать, то никакой дополнительной работы не требуется, и Вы можете перейти к следующему шагу. - -Предыдущая команда создает каркас субграфа, который Вы можете использовать в качестве отправной точки для построения своего субграфа. При внесении изменений в субграф Вы будете в основном работать с тремя файлами: - -- Манифест (subgraph.yaml) - Манифест определяет, какие источники данных будут индексироваться Вашими субграфами. Обязательно добавьте `base-sepolia` в качестве имени сети в файле манифеста, чтобы развернуть свой субграф в тестовой сети Base. -- Схема (schema.graphql) - Схема GraphQL определяет, какие данные Вы хотите извлечь из субграфа. -- AssemblyScript Mappings (mapping.ts) - это код, который преобразует данные из ваших источников данных в объекты, определенные в схеме. - -Если Вы хотите проиндексировать дополнительные данные, Вам нужно будет расширить манифест, схему и мэппинг. - -Для получения дополнительной информации о том, как создать свой субграф, см. [Creating a Subgraph](/developing/creating-a-subgraph). - -### 4. Запуск в Subgraph Studio - -Прежде чем Вы сможете развернуть свой субграф, Вам необходимо будет пройти аутентификацию в Subgraph Studio. Вы можете сделать это, выполнив следующую команду: - -Аутентифицируйте субграф в studio - -``` -graph auth --studio -``` - -Затем войдите в свою директорию субграфа. 
- -``` - cd -``` - -Создайте свой субграф с помощью следующей команды: - -```` -``` -graph codegen && graph build -``` -```` - -В заключение Вы можете развернуть свой субграф с помощью этой команды: - -```` -``` -graph deploy --studio -``` -```` - -### 5. Запрос Вашего субграфа - -Как только Ваш субграф развернут, Вы можете сделать запрос к нему из своего dapp, используя `Development Query URL` в Subgraph Studio. - -Примечание - Скорость работы Studio API ограничена. Следовательно, предпочтительно его использовать для разработки и тестирования. - -Чтобы узнать больше о запросе данных из Вашего субграфа, посетите страницу [Querying a Subgraph](/querying/querying-the-graph). diff --git a/website/pages/ru/cookbook/grafting-hotfix.mdx b/website/pages/ru/cookbook/grafting-hotfix.mdx new file mode 100644 index 000000000000..4be0a0b07790 --- /dev/null +++ b/website/pages/ru/cookbook/grafting-hotfix.mdx @@ -0,0 +1,186 @@ +--- +title: Subgraph Best Practice 6 - Use Grafting for Quick Hotfix Deployment +--- + +## TLDR + +Grafting is a powerful feature in subgraph development that allows you to build and deploy new subgraphs while reusing the indexed data from existing ones. + +### Overview + +This feature enables quick deployment of hotfixes for critical issues, eliminating the need to re-index the entire subgraph from scratch. By preserving historical data, grafting minimizes downtime and ensures continuity in data services. + +## Benefits of Grafting for Hotfixes + +1. **Rapid Deployment** + + - **Minimize Downtime**: When a subgraph encounters a critical error and stops indexing, grafting enables you to deploy a fix immediately without waiting for re-indexing. + - **Immediate Recovery**: The new subgraph continues from the last indexed block, ensuring that data services remain uninterrupted. + +2. **Data Preservation** + + - **Reuse Historical Data**: Grafting copies the existing data from the base subgraph, so you don’t lose valuable historical records.
+ - **Consistency**: Maintains data continuity, which is crucial for applications relying on consistent historical data. + +3. **Efficiency** + - **Save Time and Resources**: Avoids the computational overhead of re-indexing large datasets. + - **Focus on Fixes**: Allows developers to concentrate on resolving issues rather than managing data recovery. + +## Best Practices When Using Grafting for Hotfixes + +1. **Initial Deployment Without Grafting** + + - **Start Clean**: Always deploy your initial subgraph without grafting to ensure that it’s stable and functions as expected. + - **Test Thoroughly**: Validate the subgraph’s performance to minimize the need for future hotfixes. + +2. **Implementing the Hotfix with Grafting** + + - **Identify the Issue**: When a critical error occurs, determine the block number of the last successfully indexed event. + - **Create a New Subgraph**: Develop a new subgraph that includes the hotfix. + - **Configure Grafting**: Use grafting to copy data up to the identified block number from the failed subgraph. + - **Deploy Quickly**: Publish the grafted subgraph to restore service as soon as possible. + +3. **Post-Hotfix Actions** + + - **Monitor Performance**: Ensure the grafted subgraph is indexing correctly and the hotfix resolves the issue. + - **Republish Without Grafting**: Once stable, deploy a new version of the subgraph without grafting for long-term maintenance. + > Note: Relying on grafting indefinitely is not recommended as it can complicate future updates and maintenance. + - **Update References**: Redirect any services or applications to use the new, non-grafted subgraph. + +4. **Important Considerations** + - **Careful Block Selection**: Choose the graft block number carefully to prevent data loss. + - **Tip**: Use the block number of the last correctly processed event. + - **Use Deployment ID**: Ensure you reference the Deployment ID of the base subgraph, not the Subgraph ID. 
+ - **Note**: The Deployment ID is the unique identifier for a specific subgraph deployment. + - **Feature Declaration**: Remember to declare grafting in the subgraph manifest under features. + +## Example: Deploying a Hotfix with Grafting + +Suppose you have a subgraph tracking a smart contract that has stopped indexing due to a critical error. Here’s how you can use grafting to deploy a hotfix. + +1. **Failed Subgraph Manifest (subgraph.yaml)** + + ```yaml + specVersion: 1.0.0 + schema: + file: ./schema.graphql + dataSources: + - kind: ethereum/contract + name: OldSmartContract + network: sepolia + source: + address: '0xOldContractAddress' + abi: Lock + startBlock: 5000000 + mapping: + kind: ethereum/events + apiVersion: 0.0.7 + language: wasm/assemblyscript + entities: + - Withdrawal + abis: + - name: Lock + file: ./abis/OldLock.json + eventHandlers: + - event: Withdrawal(uint256,uint256) + handler: handleOldWithdrawal + file: ./src/old-lock.ts + ``` + +2. **New Grafted Subgraph Manifest (subgraph.yaml)** + ```yaml + specVersion: 1.0.0 + schema: + file: ./schema.graphql + dataSources: + - kind: ethereum/contract + name: NewSmartContract + network: sepolia + source: + address: '0xNewContractAddress' + abi: Lock + startBlock: 6000001 # Block after the last indexed block + mapping: + kind: ethereum/events + apiVersion: 0.0.7 + language: wasm/assemblyscript + entities: + - Withdrawal + abis: + - name: Lock + file: ./abis/Lock.json + eventHandlers: + - event: Withdrawal(uint256,uint256) + handler: handleWithdrawal + file: ./src/lock.ts + features: + - grafting + graft: + base: QmBaseDeploymentID # Deployment ID of the failed subgraph + block: 6000000 # Last successfully indexed block + ``` + +**Explanation:** + +- **Data Source Update**: The new subgraph points to 0xNewContractAddress, which may be a fixed version of the smart contract. +- **Start Block**: Set to one block after the last successfully indexed block to avoid reprocessing the error. 
+- **Grafting Configuration**: + - **base**: Deployment ID of the failed subgraph. + - **block**: Block number where grafting should begin. + +3. **Deployment Steps** + + - **Update the Code**: Implement the hotfix in your mapping scripts (e.g., handleWithdrawal). + - **Adjust the Manifest**: As shown above, update the `subgraph.yaml` with grafting configurations. + - **Deploy the Subgraph**: + - Authenticate with the Graph CLI. + - Deploy the new subgraph using `graph deploy`. + +4. **Post-Deployment** + - **Verify Indexing**: Check that the subgraph is indexing correctly from the graft point. + - **Monitor Data**: Ensure that new data is being captured and the hotfix is effective. + - **Plan for Republish**: Schedule the deployment of a non-grafted version for long-term stability. + +## Warnings and Cautions + +While grafting is a powerful tool for deploying hotfixes quickly, there are specific scenarios where it should be avoided to maintain data integrity and ensure optimal performance. + +- **Incompatible Schema Changes**: If your hotfix requires altering the type of existing fields or removing fields from your schema, grafting is not suitable. Grafting expects the new subgraph’s schema to be compatible with the base subgraph’s schema. Incompatible changes can lead to data inconsistencies and errors because the existing data won’t align with the new schema. +- **Significant Mapping Logic Overhauls**: When the hotfix involves substantial modifications to your mapping logic—such as changing how events are processed or altering handler functions—grafting may not function correctly. The new logic might not be compatible with the data processed under the old logic, leading to incorrect data or failed indexing. +- **Deployments to The Graph Network**: Grafting is not recommended for subgraphs intended for The Graph’s decentralized network (mainnet). 
It can complicate indexing and may not be fully supported by all Indexers, potentially causing unexpected behavior or increased costs. For mainnet deployments, it’s safer to re-index the subgraph from scratch to ensure full compatibility and reliability. + +### Risk Management + +- **Data Integrity**: Incorrect block numbers can lead to data loss or duplication. +- **Testing**: Always test grafting in a development environment before deploying to production. + +## Conclusion + +Grafting is an effective strategy for deploying hotfixes in subgraph development, enabling you to: + +- **Quickly Recover** from critical errors without re-indexing. +- **Preserve Historical Data**, maintaining continuity for applications and users. +- **Ensure Service Availability** by minimizing downtime during critical fixes. + +However, it’s important to use grafting judiciously and follow best practices to mitigate risks. After stabilizing your subgraph with the hotfix, plan to deploy a non-grafted version to ensure long-term maintainability. + +## Additional Resources + +- **[Grafting Documentation](/cookbook/grafting/)**: Replace a Contract and Keep its History With Grafting +- **[Understanding Deployment IDs](/querying/querying-by-subgraph-id-vs-deployment-id/)**: Learn the difference between Deployment ID and Subgraph ID. + +By incorporating grafting into your subgraph development workflow, you can enhance your ability to respond to issues swiftly, ensuring that your data services remain robust and reliable. + +## Subgraph Best Practices 1-6 + +1. [Improve Query Speed with Subgraph Pruning](/cookbook/pruning/) + +2. [Improve Indexing and Query Responsiveness by Using @derivedFrom](/cookbook/derivedfrom/) + +3. [Improve Indexing and Query Performance by Using Immutable Entities and Bytes as IDs](/cookbook/immutable-entities-bytes-as-ids/) + +4. [Improve Indexing Speed by Avoiding `eth_calls`](/cookbook/avoid-eth-calls/) + +5. 
[Simplify and Optimize with Timeseries and Aggregations](/cookbook/timeseries/) + +6. [Use Grafting for Quick Hotfix Deployment](/cookbook/grafting-hotfix/) diff --git a/website/pages/ru/cookbook/timeseries.mdx b/website/pages/ru/cookbook/timeseries.mdx new file mode 100644 index 000000000000..88ee70005a6e --- /dev/null +++ b/website/pages/ru/cookbook/timeseries.mdx @@ -0,0 +1,194 @@ +--- +title: Subgraph Best Practice 5 - Simplify and Optimize with Timeseries and Aggregations +--- + +## TLDR + +Leveraging the new time-series and aggregations feature in subgraphs can significantly enhance both indexing speed and query performance. + +## Overview + +Timeseries and aggregations reduce data processing overhead and accelerate queries by offloading aggregation computations to the database and simplifying mapping code. This approach is particularly effective when handling large volumes of time-based data. + +## Benefits of Timeseries and Aggregations + +1. Improved Indexing Time + +- Less Data to Load: Mappings handle less data since raw data points are stored as immutable timeseries entities. +- Database-Managed Aggregations: Aggregations are automatically computed by the database, reducing the workload on the mappings. + +2. Simplified Mapping Code + +- No Manual Calculations: Developers no longer need to write complex aggregation logic in mappings. +- Reduced Complexity: Simplifies code maintenance and minimizes the potential for errors. + +3. Dramatically Faster Queries + +- Immutable Data: All timeseries data is immutable, enabling efficient storage and retrieval. +- Efficient Data Separation: Aggregates are stored separately from raw timeseries data, allowing queries to process significantly less data—often several orders of magnitude less. + +### Important Considerations + +- Immutable Data: Timeseries data cannot be altered once written, ensuring data integrity and simplifying indexing. 
+- Automatic ID and Timestamp Management: id and timestamp fields are automatically managed by graph-node, reducing potential errors. +- Efficient Data Storage: By separating raw data from aggregates, storage is optimized, and queries run faster. + +## How to Implement Timeseries and Aggregations + +### Defining Timeseries Entities + +A timeseries entity represents raw data points collected over time. It is defined with the `@entity(timeseries: true)` annotation. Key requirements: + +- Immutable: Timeseries entities are always immutable. +- Mandatory Fields: + - `id`: Must be of type `Int8!` and is auto-incremented. + - `timestamp`: Must be of type `Timestamp!` and is automatically set to the block timestamp. + +Example: + +```graphql +type Data @entity(timeseries: true) { + id: Int8! + timestamp: Timestamp! + price: BigDecimal! +} +``` + +### Defining Aggregation Entities + +An aggregation entity computes aggregated values from a timeseries source. It is defined with the `@aggregation` annotation. Key components: + +- Annotation Arguments: + - `intervals`: Specifies time intervals (e.g., `["hour", "day"]`). + +Example: + +```graphql +type Stats @aggregation(intervals: ["hour", "day"], source: "Data") { + id: Int8! + timestamp: Timestamp! + sum: BigDecimal! @aggregate(fn: "sum", arg: "price") +} +``` + +In this example, Stats aggregates the price field from Data over hourly and daily intervals, computing the sum. + +### Querying Aggregated Data + +Aggregations are exposed via query fields that allow filtering and retrieval based on dimensions and time intervals. + +Example: + +```graphql +{ + tokenStats( + interval: "hour" + where: { token: "0x1234567890abcdef", timestamp_gte: "1704164640000000", timestamp_lt: "1704251040000000" } + ) { + id + timestamp + token { + id + } + totalVolume + priceUSD + count + } +} +``` + +### Using Dimensions in Aggregations + +Dimensions are non-aggregated fields used to group data points. 
They enable aggregations based on specific criteria, such as a token in a financial application.
+
+Example:
+
+### Timeseries Entity
+
+```graphql
+type TokenData @entity(timeseries: true) {
+  id: Int8!
+  timestamp: Timestamp!
+  token: Token!
+  amount: BigDecimal!
+  priceUSD: BigDecimal!
+}
+```
+
+### Aggregation Entity with Dimension
+
+```graphql
+type TokenStats @aggregation(intervals: ["hour", "day"], source: "TokenData") {
+  id: Int8!
+  timestamp: Timestamp!
+  token: Token!
+  totalVolume: BigDecimal! @aggregate(fn: "sum", arg: "amount")
+  priceUSD: BigDecimal! @aggregate(fn: "last", arg: "priceUSD")
+  count: Int8! @aggregate(fn: "count", cumulative: true)
+}
+```
+
+- Dimension Field: token groups the data, so aggregates are computed per token.
+- Aggregates:
+  - totalVolume: Sum of amount.
+  - priceUSD: Last recorded priceUSD.
+  - count: Cumulative count of records.
+
+### Aggregation Functions and Expressions
+
+Supported aggregation functions:
+
+- sum
+- count
+- min
+- max
+- first
+- last
+
+### The arg in @aggregate can be
+
+- A field name from the timeseries entity.
+- An expression using fields and constants.
+
+### Examples of Aggregation Expressions
+
+- Sum Token Value: @aggregate(fn: "sum", arg: "priceUSD * amount")
+- Maximum Positive Amount: @aggregate(fn: "max", arg: "greatest(amount0, amount1, 0)")
+- Conditional Sum: @aggregate(fn: "sum", arg: "case when amount0 > amount1 then amount0 else 0 end")
+
+Supported operators and functions include basic arithmetic (+, -, *, /), comparison operators, logical operators (and, or, not), and SQL functions like greatest, least, coalesce, etc.
+
+### Query Parameters
+
+- interval: Specifies the time interval (e.g., "hour").
+- where: Filters based on dimensions and timestamp ranges.
+- timestamp_gte / timestamp_lt: Filters for start and end times (microseconds since epoch).
+
+### Notes
+
+- Sorting: Results are automatically sorted by timestamp and id in descending order.
+- Current Data: An optional current argument can include the current, partially filled interval. + +### Conclusion + +Implementing timeseries and aggregations in subgraphs is a best practice for projects dealing with time-based data. This approach: + +- Enhances Performance: Speeds up indexing and querying by reducing data processing overhead. +- Simplifies Development: Eliminates the need for manual aggregation logic in mappings. +- Scales Efficiently: Handles large volumes of data without compromising on speed or responsiveness. + +By adopting this pattern, developers can build more efficient and scalable subgraphs, providing faster and more reliable data access to end-users. To learn more about implementing timeseries and aggregations, refer to the [Timeseries and Aggregations Readme](https://github.com/graphprotocol/graph-node/blob/master/docs/aggregations.md) and consider experimenting with this feature in your subgraphs. + +## Subgraph Best Practices 1-6 + +1. [Improve Query Speed with Subgraph Pruning](/cookbook/pruning/) + +2. [Improve Indexing and Query Responsiveness by Using @derivedFrom](/cookbook/derivedfrom/) + +3. [Improve Indexing and Query Performance by Using Immutable Entities and Bytes as IDs](/cookbook/immutable-entities-bytes-as-ids/) + +4. [Improve Indexing Speed by Avoiding `eth_calls`](/cookbook/avoid-eth-calls/) + +5. [Simplify and Optimize with Timeseries and Aggregations](/cookbook/timeseries/) + +6. [Use Grafting for Quick Hotfix Deployment](/cookbook/grafting-hotfix/) diff --git a/website/pages/ru/cookbook/upgrading-a-subgraph.mdx b/website/pages/ru/cookbook/upgrading-a-subgraph.mdx deleted file mode 100644 index bbdd32cd96f0..000000000000 --- a/website/pages/ru/cookbook/upgrading-a-subgraph.mdx +++ /dev/null @@ -1,156 +0,0 @@ ---- -title: Upgrading an Existing Subgraph to The Graph Network ---- - -## Введение - -This is a guide on how to upgrade your subgraph from the hosted service to The Graph's decentralized network. 
Over 1,000 subgraphs have successfully upgraded to The Graph Network including projects like Snapshot, Loopring, Audius, Premia, Livepeer, Uma, Curve, Lido, and many more! - -The process of upgrading is quick and your subgraphs will forever benefit from the reliability and performance that you can only get on The Graph Network. - -### Предварительные требования - -- You have a subgraph deployed on the hosted service. - -## Upgrading an Existing Subgraph to The Graph Network - - - -If you are logged in to the hosted service, you can access a simple flow to upgrade your subgraphs from [your dashboard](https://thegraph.com/hosted-service/dashboard), or from an individual subgraph page. - -> This process typically takes less than five minutes. - -1. Select the subgraph(s) you want to upgrade. -2. Connect or enter the receiving wallet (the wallet that will become the owner of the subgraph). -3. Click the "Upgrade" button. - -That's it! Your subgraphs will be deployed to Subgraph Studio, and published on The Graph Network. You can access the [Subgraph Studio](https://thegraph.com/studio/) to manage your subgraphs, logging in with the wallet specified during the upgrade process. - -You'll be able to view your subgraphs live on the decentralized network via [Graph Explorer](https://thegraph.com/explorer). - -### What next? - -When your subgraph is upgraded, it will automatically be indexed by the upgrade indexer. If the indexed chain is [fully supported by The Graph Network](/developing/supported-networks), you can add some GRT as "signal", to attract more indexers. It is recommended to curate your subgraph with at least 3,000 GRT to attract 2-3 Indexers for higher quality of service. - -You can start to query your subgraph right away on The Graph Network, once you have generated an API key. - -### Создание API ключа - -You can generate an API key in Subgraph Studio [here](https://thegraph.com/studio/apikeys/). 
- -![API key creation page](/img/api-image.png) - -You can use this API key to query subgraphs on The Graph Network. All users start on the Free Plan, which includes 100,000 free queries per month. Developers can sign up for the Growth Plan by connecting a credit or debit card, or by depositing GRT to Subgraph Studio billing system. - -> Note: see the [billing documentation](../billing) for more information on plans, and on managing your billing on Subgraph Studio. - -### Обеспечение безопасности вашего API ключа - -Рекомендуется обеспечить безопасность доступа к API, ограничивая его использование двумя способами: - -1. Авторизованные подграфы -2. Авторизованный домен - -You can secure your API key [here](https://thegraph.com/studio/apikeys/). - -![Subgraph lockdown page](/img/subgraph-lockdown.png) - -### Запрос вашего подграфа в децентрализованной сети - -Now you can check the indexing status of the Indexers on the network in Graph Explorer (example [here](https://thegraph.com/explorer/subgraphs/Dtj2HicXKpoUjNB7ffdBkMwt3L9Sz3cbENd67AdHu6Vb?view=Indexers&chain=arbitrum-one)). The green line at the top indicates that at the time of posting 7 Indexers successfully indexed that subgraph. Also in the Indexer tab you can see which Indexers picked up your subgraph. - -![Rocket Pool subgraph](/img/rocket-pool-subgraph.png) - -Как только первый индексатор полностью проиндексирует ваш подграф, вы можете начать делать запросы к нему в децентрализованной сети. Чтобы получить URL запроса для вашего подграфа, вы можете скопировать/вставить его, нажав на символ рядом с URL запроса. Вы увидите что-то вроде этого: - -`https://gateway.thegraph.com/api/[api-key]/subgraphs/id/S9ihna8D733WTEShJ1KctSTCvY1VJ7gdVwhUujq4Ejo` - -Important: Make sure to replace `[api-key]` with an actual API key generated in the section above. - -Теперь вы можете использовать этот URL-адрес запроса в своем приложении для отправки своих запросов GraphQL. - -Поздравляю! 
Теперь вы являетесь пионером децентрализации! - -> Note: Due to the distributed nature of the network it might be the case that different Indexers have indexed up to different blocks. In order to only receive fresh data you can specify the minimum block an Indexer has to have indexed in order to serve your query with the block: `{ number_gte: $minBlock }` field argument as shown in the example below: - -```graphql -{ - stakers(block: { number_gte: 14486109 }) { - id - } -} -``` - -More information about the nature of the network and how to handle re-orgs are described in the documentation article [Distributed Systems](/querying/distributed-systems/). - -## Updating a Subgraph on the Network - -If you would like to update an existing subgraph on the network, you can do this by deploying a new version of your subgraph to Subgraph Studio using the Graph CLI. - -1. Make changes to your current subgraph. -2. Задеплойте следующее и укажите новую версию в команде (например, v0.0.1, v0.0.2 и т. д.): - -```sh -graph deploy --studio --version -``` - -3. Test the new version in Subgraph Studio by querying in the playground -4. Опубликуйте новую версию в сети Graph. Помните, что для этого требуется газ (как описано в разделе выше). - -### Owner Update Fee: Deep Dive - -> Note: Curation on Arbitrum has a flat bonding curve. Learn more about Arbitrum [here](/arbitrum/arbitrum-faq/). - -An update requires GRT to be migrated from the old version of the subgraph to the new version. This means that for every update, a new bonding curve will be created (more on bonding curves [here](/network/curating#bonding-curve-101)). - -The new bonding curve charges the 1% curation tax on all GRT being migrated to the new version. The owner must pay 50% of this or 1.25%. The other 1.25% is absorbed by all the curators as a fee. This incentive design is in place to prevent an owner of a subgraph from being able to drain all their curator's funds with recursive update calls. 
If there is no curation activity, you will have to pay a minimum of 100 GRT in order to signal your own subgraph. - -Давайте приведем пример, это только в том случае, если ваш сабграф активно курируется на: - -- 100 000 GRT сигнализируемая с помощью автоматической миграции на v1 подграфа -- Owner updates to v2. 100,000 GRT is migrated to a new bonding curve, where 97,500 GRT get put into the new curve and 2,500 GRT is burned -- The owner then has 1250 GRT burned to pay for half the fee. The owner must have this in their wallet before the update, otherwise, the update will not succeed. This happens in the same transaction as the update. - -_While this mechanism is currently live on the network, the community is currently discussing ways to reduce the cost of updates for subgraph developers._ - -### Поддержание стабильной версии подграфа - -If you're making a lot of changes to your subgraph, it is not a good idea to continually update it and front the update costs. Maintaining a stable and consistent version of your subgraph is critical, not only from the cost perspective but also so that Indexers can feel confident in their syncing times. Indexers should be flagged when you plan for an update so that Indexer syncing times do not get impacted. Feel free to leverage the [#Indexers channel](https://discord.gg/JexvtHa7dq) on Discord to let Indexers know when you're versioning your subgraphs. - -Subgraphs are open APIs that external developers are leveraging. Open APIs need to follow strict standards so that they do not break external developers' applications. In The Graph Network, a subgraph developer must consider Indexers and how long it takes them to sync a new subgraph **as well as** other developers who are using their subgraphs. - -### Обновление метаданных подграфа - -You can update the metadata of your subgraphs without having to publish a new version. The metadata includes the subgraph name, image, description, website URL, source code URL, and categories. 
Developers can do this by updating their subgraph details in Subgraph Studio where you can edit all applicable fields. - -Make sure **Update Subgraph Details in Explorer** is checked and click on **Save**. If this is checked, an on-chain transaction will be generated that updates subgraph details in the Explorer without having to publish a new version with a new deployment. - -## Рекомендации по развертыванию подграфа в сети Graph - -1. Использование имени ENS для разработки подграфа: - -- Set up your ENS [here](https://app.ens.domains/) -- Add your ENS name to your settings [here](https://thegraph.com/explorer/settings?view=display-name). - -2. Чем более заполнены ваши профили, тем больше шансов, что ваши подграфы будут проиндексированы и обработаны куратором. - -## Удаление подграфа в сети Graph - -Follow the steps [here](/managing/transfer-and-deprecate-a-subgraph) to deprecate your subgraph and remove it from The Graph Network. - -## Запрос подграфа + выставление счетов в сети Graph - -The hosted service was set up to allow developers to deploy their subgraphs without any restrictions. - -On The Graph Network, query fees have to be paid as a core part of the protocol's incentives. For more information on subscribing to APIs and paying the query fees, check out billing documentation [here](/billing/). - -## Дополнительные источники - -If you're still confused, fear not! 
Check out the following resources or watch our video guide on upgrading subgraphs to the decentralized network below: - - - -- [The Graph Network Contracts](https://github.com/graphprotocol/contracts) -- [Curation Contract](https://github.com/graphprotocol/contracts/blob/dev/contracts/curation/Curation.sol) - the underlying contract that the GNS wraps around - - Address - `0x8fe00a685bcb3b2cc296ff6ffeab10aca4ce1538` -- [Subgraph Studio documentation](/deploying/subgraph-studio) diff --git a/website/pages/sv/cookbook/base-testnet.mdx b/website/pages/sv/cookbook/base-testnet.mdx deleted file mode 100644 index 87e4ccd8c8b4..000000000000 --- a/website/pages/sv/cookbook/base-testnet.mdx +++ /dev/null @@ -1,111 +0,0 @@ ---- -title: Bygga subgrafer på basen ---- - -Den här guiden tar dig snabbt igenom hur du initierar, skapar och distribuerar din subgraf på Base testnet. - -Vad du behöver: - -- A Base Sepolia testnet contract address -- En krypto-plånbok (t.ex. MetaMask eller Coinbase Kallet) - -## Subgraf Studio - -### 1. Installera Graph CLI - -Graph CLI (>=v0.41.0) är skriven i JavaScript och du måste ha antingen `npm` eller `yran` installerad för att använda den. - -```sh -# NPM -npm install -g @graphprotocol/graph-cli - -# Yarn -yarn global add @graphprotocol/graph-cli -``` - -### 2. Create your subgraph in Subgraph Studio - -Go to [Subgraph Studio](https://thegraph.com/studio/) and connect your crypto wallet. - -Once connected, click "Create a Subgraph", enter a name for your subgraph and click Create a Subgraph. - -### 3. Initiera din subgraf - -> You can find specific commands for your subgraph in Subgraph Studio. - -Se till att graph-cli uppdateras till senast (över 0.41.0) - -```sh -graph --version -``` - -Initiera din subgraf från ett befintligt kontrakt. - -```sh -graph init --studio -``` - -Din subgraf snigel är en identifierare för din subgraf. 
CLI verktyget leder dig genom stegen för att skapa en subgraf, inklusive: - -- Protokoll: ethereum -- Subgraf snigel: `` -- Katalog för att skapa subgrafen i: `` -- Ethereum network: base-sepolia -- Contract address: `` -- Start block (optional) -- Contract name: `` -- Yes/no to indexing events (yes means your subgraph will be bootstrapped with entities in the schema and simple mappings for emitted events) - -### 3. Write your Subgraph - -> If emitted events are the only thing you want to index, then no additional work is required, and you can skip to the next step. - -The previous command creates a scaffold subgraph that you can use as a starting point for building your subgraph. When making changes to the subgraph, you will mainly work with three files: - -- Manifest (subgraph.yaml) - The manifest defines what datasources your subgraphs will index. Make sure to add `base-sepolia` as the network name in manifest file to deploy your subgraph on Base Sepolia. -- Schema (schema.graphql) - The GraphQL schema defines what data you wish to retreive from the subgraph. -- AssemblyScript mappningar (mapping.ts) - Detta är koden som översätter data från dina datakällor till de enheter som definieras i schemat. - -If you want to index additional data, you will need extend the manifest, schema and mappings. - -For more information on how to write your subgraph, see [Creating a Subgraph](/developing/creating-a-subgraph). - -### 4. Deploy to Subgraph Studio - -Before you can deploy your subgraph, you will need to authenticate with Subgraph Studio. You can do this by running the following command: - -Authenticate the subgraph on studio - -``` -graph auth --studio -``` - -Next, enter your subgraph's directory. - -``` - cd -``` - -Build your subgraph with the following command: - -```` -``` -graph codegen && graph build -``` -```` - -Finally, you can deploy your subgraph using this command: - -```` -``` -graph deploy --studio -``` -```` - -### 5. 
Query your subgraph
-
-Once your subgraph is deployed, you can query it from your dapp using the `Development Query URL` in Subgraph Studio.
-
-Note - Studio API is rate-limited. Hence should preferably be used for development and testing.
-
-To learn more about querying data from your subgraph, see the [Querying a Subgraph](/querying/querying-the-graph) page.
diff --git a/website/pages/sv/cookbook/grafting-hotfix.mdx b/website/pages/sv/cookbook/grafting-hotfix.mdx
new file mode 100644
index 000000000000..4be0a0b07790
--- /dev/null
+++ b/website/pages/sv/cookbook/grafting-hotfix.mdx
@@ -0,0 +1,186 @@
+---
+title: Subgraph Best Practice 6 - Use Grafting for Quick Hotfix Deployment
+---
+
+## TLDR
+
+Grafting is a powerful feature in subgraph development that allows you to build and deploy new subgraphs while reusing the indexed data from existing ones.
+
+### Overview
+
+This feature enables quick deployment of hotfixes for critical issues, eliminating the need to re-index the entire subgraph from scratch. By preserving historical data, grafting minimizes downtime and ensures continuity in data services.
+
+## Benefits of Grafting for Hotfixes
+
+1. **Rapid Deployment**
+
+   - **Minimize Downtime**: When a subgraph encounters a critical error and stops indexing, grafting enables you to deploy a fix immediately without waiting for re-indexing.
+   - **Immediate Recovery**: The new subgraph continues from the last indexed block, ensuring that data services remain uninterrupted.
+
+2. **Data Preservation**
+
+   - **Reuse Historical Data**: Grafting copies the existing data from the base subgraph, so you don’t lose valuable historical records.
+   - **Consistency**: Maintains data continuity, which is crucial for applications relying on consistent historical data.
+
+3. **Efficiency**
+   - **Save Time and Resources**: Avoids the computational overhead of re-indexing large datasets.
+ - **Focus on Fixes**: Allows developers to concentrate on resolving issues rather than managing data recovery. + +## Best Practices When Using Grafting for Hotfixes + +1. **Initial Deployment Without Grafting** + + - **Start Clean**: Always deploy your initial subgraph without grafting to ensure that it’s stable and functions as expected. + - **Test Thoroughly**: Validate the subgraph’s performance to minimize the need for future hotfixes. + +2. **Implementing the Hotfix with Grafting** + + - **Identify the Issue**: When a critical error occurs, determine the block number of the last successfully indexed event. + - **Create a New Subgraph**: Develop a new subgraph that includes the hotfix. + - **Configure Grafting**: Use grafting to copy data up to the identified block number from the failed subgraph. + - **Deploy Quickly**: Publish the grafted subgraph to restore service as soon as possible. + +3. **Post-Hotfix Actions** + + - **Monitor Performance**: Ensure the grafted subgraph is indexing correctly and the hotfix resolves the issue. + - **Republish Without Grafting**: Once stable, deploy a new version of the subgraph without grafting for long-term maintenance. + > Note: Relying on grafting indefinitely is not recommended as it can complicate future updates and maintenance. + - **Update References**: Redirect any services or applications to use the new, non-grafted subgraph. + +4. **Important Considerations** + - **Careful Block Selection**: Choose the graft block number carefully to prevent data loss. + - **Tip**: Use the block number of the last correctly processed event. + - **Use Deployment ID**: Ensure you reference the Deployment ID of the base subgraph, not the Subgraph ID. + - **Note**: The Deployment ID is the unique identifier for a specific subgraph deployment. + - **Feature Declaration**: Remember to declare grafting in the subgraph manifest under features. 
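+
+To pinpoint the graft block, you can query graph-node’s index-node status API (by default served on port 8030 at `/graphql`). This is a sketch; the deployment ID is a placeholder:
+
+```graphql
+{
+  indexingStatuses(subgraphs: ["QmFailedDeploymentID"]) {
+    synced
+    health
+    fatalError {
+      message
+      block {
+        number
+      }
+    }
+    chains {
+      latestBlock {
+        number
+      }
+    }
+  }
+}
+```
+
+If `fatalError` is set, its `block.number` shows where indexing stopped; choose a graft block at or before the last successfully processed block.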
+ +## Example: Deploying a Hotfix with Grafting + +Suppose you have a subgraph tracking a smart contract that has stopped indexing due to a critical error. Here’s how you can use grafting to deploy a hotfix. + +1. **Failed Subgraph Manifest (subgraph.yaml)** + + ```yaml + specVersion: 1.0.0 + schema: + file: ./schema.graphql + dataSources: + - kind: ethereum/contract + name: OldSmartContract + network: sepolia + source: + address: '0xOldContractAddress' + abi: Lock + startBlock: 5000000 + mapping: + kind: ethereum/events + apiVersion: 0.0.7 + language: wasm/assemblyscript + entities: + - Withdrawal + abis: + - name: Lock + file: ./abis/OldLock.json + eventHandlers: + - event: Withdrawal(uint256,uint256) + handler: handleOldWithdrawal + file: ./src/old-lock.ts + ``` + +2. **New Grafted Subgraph Manifest (subgraph.yaml)** + ```yaml + specVersion: 1.0.0 + schema: + file: ./schema.graphql + dataSources: + - kind: ethereum/contract + name: NewSmartContract + network: sepolia + source: + address: '0xNewContractAddress' + abi: Lock + startBlock: 6000001 # Block after the last indexed block + mapping: + kind: ethereum/events + apiVersion: 0.0.7 + language: wasm/assemblyscript + entities: + - Withdrawal + abis: + - name: Lock + file: ./abis/Lock.json + eventHandlers: + - event: Withdrawal(uint256,uint256) + handler: handleWithdrawal + file: ./src/lock.ts + features: + - grafting + graft: + base: QmBaseDeploymentID # Deployment ID of the failed subgraph + block: 6000000 # Last successfully indexed block + ``` + +**Explanation:** + +- **Data Source Update**: The new subgraph points to 0xNewContractAddress, which may be a fixed version of the smart contract. +- **Start Block**: Set to one block after the last successfully indexed block to avoid reprocessing the error. +- **Grafting Configuration**: + - **base**: Deployment ID of the failed subgraph. + - **block**: Block number where grafting should begin. + +3. 
**Deployment Steps** + + - **Update the Code**: Implement the hotfix in your mapping scripts (e.g., handleWithdrawal). + - **Adjust the Manifest**: As shown above, update the `subgraph.yaml` with grafting configurations. + - **Deploy the Subgraph**: + - Authenticate with the Graph CLI. + - Deploy the new subgraph using `graph deploy`. + +4. **Post-Deployment** + - **Verify Indexing**: Check that the subgraph is indexing correctly from the graft point. + - **Monitor Data**: Ensure that new data is being captured and the hotfix is effective. + - **Plan for Republish**: Schedule the deployment of a non-grafted version for long-term stability. + +## Warnings and Cautions + +While grafting is a powerful tool for deploying hotfixes quickly, there are specific scenarios where it should be avoided to maintain data integrity and ensure optimal performance. + +- **Incompatible Schema Changes**: If your hotfix requires altering the type of existing fields or removing fields from your schema, grafting is not suitable. Grafting expects the new subgraph’s schema to be compatible with the base subgraph’s schema. Incompatible changes can lead to data inconsistencies and errors because the existing data won’t align with the new schema. +- **Significant Mapping Logic Overhauls**: When the hotfix involves substantial modifications to your mapping logic—such as changing how events are processed or altering handler functions—grafting may not function correctly. The new logic might not be compatible with the data processed under the old logic, leading to incorrect data or failed indexing. +- **Deployments to The Graph Network**: Grafting is not recommended for subgraphs intended for The Graph’s decentralized network (mainnet). It can complicate indexing and may not be fully supported by all Indexers, potentially causing unexpected behavior or increased costs. For mainnet deployments, it’s safer to re-index the subgraph from scratch to ensure full compatibility and reliability. 
+ +### Risk Management + +- **Data Integrity**: Incorrect block numbers can lead to data loss or duplication. +- **Testing**: Always test grafting in a development environment before deploying to production. + +## Conclusion + +Grafting is an effective strategy for deploying hotfixes in subgraph development, enabling you to: + +- **Quickly Recover** from critical errors without re-indexing. +- **Preserve Historical Data**, maintaining continuity for applications and users. +- **Ensure Service Availability** by minimizing downtime during critical fixes. + +However, it’s important to use grafting judiciously and follow best practices to mitigate risks. After stabilizing your subgraph with the hotfix, plan to deploy a non-grafted version to ensure long-term maintainability. + +## Additional Resources + +- **[Grafting Documentation](/cookbook/grafting/)**: Replace a Contract and Keep its History With Grafting +- **[Understanding Deployment IDs](/querying/querying-by-subgraph-id-vs-deployment-id/)**: Learn the difference between Deployment ID and Subgraph ID. + +By incorporating grafting into your subgraph development workflow, you can enhance your ability to respond to issues swiftly, ensuring that your data services remain robust and reliable. + +## Subgraph Best Practices 1-6 + +1. [Improve Query Speed with Subgraph Pruning](/cookbook/pruning/) + +2. [Improve Indexing and Query Responsiveness by Using @derivedFrom](/cookbook/derivedfrom/) + +3. [Improve Indexing and Query Performance by Using Immutable Entities and Bytes as IDs](/cookbook/immutable-entities-bytes-as-ids/) + +4. [Improve Indexing Speed by Avoiding `eth_calls`](/cookbook/avoid-eth-calls/) + +5. [Simplify and Optimize with Timeseries and Aggregations](/cookbook/timeseries/) + +6. 
[Use Grafting for Quick Hotfix Deployment](/cookbook/grafting-hotfix/) diff --git a/website/pages/sv/cookbook/timeseries.mdx b/website/pages/sv/cookbook/timeseries.mdx new file mode 100644 index 000000000000..88ee70005a6e --- /dev/null +++ b/website/pages/sv/cookbook/timeseries.mdx @@ -0,0 +1,194 @@ +--- +title: Subgraph Best Practice 5 - Simplify and Optimize with Timeseries and Aggregations +--- + +## TLDR + +Leveraging the new time-series and aggregations feature in subgraphs can significantly enhance both indexing speed and query performance. + +## Overview + +Timeseries and aggregations reduce data processing overhead and accelerate queries by offloading aggregation computations to the database and simplifying mapping code. This approach is particularly effective when handling large volumes of time-based data. + +## Benefits of Timeseries and Aggregations + +1. Improved Indexing Time + +- Less Data to Load: Mappings handle less data since raw data points are stored as immutable timeseries entities. +- Database-Managed Aggregations: Aggregations are automatically computed by the database, reducing the workload on the mappings. + +2. Simplified Mapping Code + +- No Manual Calculations: Developers no longer need to write complex aggregation logic in mappings. +- Reduced Complexity: Simplifies code maintenance and minimizes the potential for errors. + +3. Dramatically Faster Queries + +- Immutable Data: All timeseries data is immutable, enabling efficient storage and retrieval. +- Efficient Data Separation: Aggregates are stored separately from raw timeseries data, allowing queries to process significantly less data—often several orders of magnitude less. + +### Important Considerations + +- Immutable Data: Timeseries data cannot be altered once written, ensuring data integrity and simplifying indexing. +- Automatic ID and Timestamp Management: id and timestamp fields are automatically managed by graph-node, reducing potential errors. 
+- Efficient Data Storage: By separating raw data from aggregates, storage is optimized, and queries run faster. + +## How to Implement Timeseries and Aggregations + +### Defining Timeseries Entities + +A timeseries entity represents raw data points collected over time. It is defined with the `@entity(timeseries: true)` annotation. Key requirements: + +- Immutable: Timeseries entities are always immutable. +- Mandatory Fields: + - `id`: Must be of type `Int8!` and is auto-incremented. + - `timestamp`: Must be of type `Timestamp!` and is automatically set to the block timestamp. + +Example: + +```graphql +type Data @entity(timeseries: true) { + id: Int8! + timestamp: Timestamp! + price: BigDecimal! +} +``` + +### Defining Aggregation Entities + +An aggregation entity computes aggregated values from a timeseries source. It is defined with the `@aggregation` annotation. Key components: + +- Annotation Arguments: + - `intervals`: Specifies time intervals (e.g., `["hour", "day"]`). + - `source`: Specifies the timeseries entity to aggregate over (e.g., `"Data"`). + +Example: + +```graphql +type Stats @aggregation(intervals: ["hour", "day"], source: "Data") { + id: Int8! + timestamp: Timestamp! + sum: BigDecimal! @aggregate(fn: "sum", arg: "price") +} +``` + +In this example, `Stats` aggregates the `price` field from `Data` over hourly and daily intervals, computing the sum. + +### Querying Aggregated Data + +Aggregations are exposed via query fields that allow filtering and retrieval based on dimensions and time intervals. + +Example: + +```graphql +{ + tokenStats( + interval: "hour" + where: { token: "0x1234567890abcdef", timestamp_gte: "1704164640000000", timestamp_lt: "1704251040000000" } + ) { + id + timestamp + token { + id + } + totalVolume + priceUSD + count + } +} +``` + +### Using Dimensions in Aggregations + +Dimensions are non-aggregated fields used to group data points. They enable aggregations based on specific criteria, such as a token in a financial application.
+ +Example: + +### Timeseries Entity + +```graphql +type TokenData @entity(timeseries: true) { + id: Int8! + timestamp: Timestamp! + token: Token! + amount: BigDecimal! + priceUSD: BigDecimal! +} +``` + +### Aggregation Entity with Dimension + +```graphql +type TokenStats @aggregation(intervals: ["hour", "day"], source: "TokenData") { + id: Int8! + timestamp: Timestamp! + token: Token! + totalVolume: BigDecimal! @aggregate(fn: "sum", arg: "amount") + priceUSD: BigDecimal! @aggregate(fn: "last", arg: "priceUSD") + count: Int8! @aggregate(fn: "count", cumulative: true) +} +``` + +- Dimension Field: `token` groups the data, so aggregates are computed per token. +- Aggregates: + - `totalVolume`: Sum of `amount`. + - `priceUSD`: Last recorded `priceUSD`. + - `count`: Cumulative count of records. + +### Aggregation Functions and Expressions + +Supported aggregation functions: + +- sum +- count +- min +- max +- first +- last + +### The `arg` in `@aggregate` can be + +- A field name from the timeseries entity. +- An expression using fields and constants. + +### Examples of Aggregation Expressions + +- Sum Token Value: `@aggregate(fn: "sum", arg: "priceUSD * amount")` +- Maximum Positive Amount: `@aggregate(fn: "max", arg: "greatest(amount0, amount1, 0)")` +- Conditional Sum: `@aggregate(fn: "sum", arg: "case when amount0 > amount1 then amount0 else 0 end")` + +Supported operators and functions include basic arithmetic (+, -, *, /), comparison operators, logical operators (and, or, not), and SQL functions like greatest, least, coalesce, etc. + +### Query Parameters + +- `interval`: Specifies the time interval (e.g., "hour"). +- `where`: Filters based on dimensions and timestamp ranges. +- `timestamp_gte` / `timestamp_lt`: Filters for start and end times (microseconds since epoch). + +### Notes + +- Sorting: Results are automatically sorted by `timestamp` and `id` in descending order. +- Current Data: An optional `current` argument can include the current, partially filled interval.
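To build intuition for what the database computes, the following Python sketch buckets microsecond timestamps into hourly intervals and applies a `sum` and a cumulative `count`, mirroring the `TokenStats` example above. This is a mental model only; graph-node performs these rollups internally in the database.

```python
# Illustrative model of an "hour" interval rollup; not graph-node code.
HOUR_US = 3_600 * 1_000_000  # timestamps are microseconds since epoch

def hourly_stats(rows):
    """rows: list of (timestamp_us, token, amount), sorted by timestamp."""
    stats = {}
    cumulative = {}
    for ts, token, amount in rows:
        bucket = (ts // HOUR_US) * HOUR_US     # start of the hour interval
        cumulative[token] = cumulative.get(token, 0) + 1
        entry = stats.setdefault((bucket, token), {"totalVolume": 0.0, "count": 0})
        entry["totalVolume"] += amount          # fn: "sum", arg: "amount"
        entry["count"] = cumulative[token]      # fn: "count", cumulative: true
    return stats

rows = [
    (1_704_164_640_000_000, "TOKEN", 10.0),
    (1_704_164_700_000_000, "TOKEN", 5.0),
    (1_704_168_300_000_000, "TOKEN", 2.0),  # falls in the next hour
]
stats = hourly_stats(rows)
```

Here the first two rows land in the same hourly bucket (`totalVolume` 15.0), while the cumulative `count` keeps growing across buckets, which is exactly the difference between a per-interval and a `cumulative` aggregate.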
+ +### Conclusion + +Implementing timeseries and aggregations in subgraphs is a best practice for projects dealing with time-based data. This approach: + +- Enhances Performance: Speeds up indexing and querying by reducing data processing overhead. +- Simplifies Development: Eliminates the need for manual aggregation logic in mappings. +- Scales Efficiently: Handles large volumes of data without compromising on speed or responsiveness. + +By adopting this pattern, developers can build more efficient and scalable subgraphs, providing faster and more reliable data access to end-users. To learn more about implementing timeseries and aggregations, refer to the [Timeseries and Aggregations Readme](https://github.com/graphprotocol/graph-node/blob/master/docs/aggregations.md) and consider experimenting with this feature in your subgraphs. + +## Subgraph Best Practices 1-6 + +1. [Improve Query Speed with Subgraph Pruning](/cookbook/pruning/) + +2. [Improve Indexing and Query Responsiveness by Using @derivedFrom](/cookbook/derivedfrom/) + +3. [Improve Indexing and Query Performance by Using Immutable Entities and Bytes as IDs](/cookbook/immutable-entities-bytes-as-ids/) + +4. [Improve Indexing Speed by Avoiding `eth_calls`](/cookbook/avoid-eth-calls/) + +5. [Simplify and Optimize with Timeseries and Aggregations](/cookbook/timeseries/) + +6. [Use Grafting for Quick Hotfix Deployment](/cookbook/grafting-hotfix/) diff --git a/website/pages/sv/cookbook/upgrading-a-subgraph.mdx b/website/pages/sv/cookbook/upgrading-a-subgraph.mdx deleted file mode 100644 index 0f990ca07e83..000000000000 --- a/website/pages/sv/cookbook/upgrading-a-subgraph.mdx +++ /dev/null @@ -1,156 +0,0 @@ ---- -title: Uppgradera en befintlig subgraf till The Graph Nätverk ---- - -## Introduktion - -Det här är en guide för hur du uppgraderar din subgraf från värdtjänsten till The Graphs decentraliserade nätverk. 
Över 1 000 subgrafer har framgångsrikt uppgraderat till The Graph Nätverk inklusive projekt som Snapshot, Loopring, Audius, Premia, Livepeer, Uma, Curve, Lido och många fler! - -Processen att uppgradera är snabb och dina subgrafer kommer för alltid att dra nytta av den tillförlitlighet och prestanda som du bara kan få på The Graph Nätverk. - -### Förutsättningar - -- You have a subgraph deployed on the hosted service. - -## Uppgradera en befintlig subgraf till The Graph Nätverk - - - -If you are logged in to the hosted service, you can access a simple flow to upgrade your subgraphs from [your dashboard](https://thegraph.com/hosted-service/dashboard), or from an individual subgraph page. - -> This process typically takes less than five minutes. - -1. Select the subgraph(s) you want to upgrade. -2. Connect or enter the receiving wallet (the wallet that will become the owner of the subgraph). -3. Click the "Upgrade" button. - -That's it! Your subgraphs will be deployed to Subgraph Studio, and published on The Graph Network. You can access the [Subgraph Studio](https://thegraph.com/studio/) to manage your subgraphs, logging in with the wallet specified during the upgrade process. - -You'll be able to view your subgraphs live on the decentralized network via [Graph Explorer](https://thegraph.com/explorer). - -### What next? - -When your subgraph is upgraded, it will automatically be indexed by the upgrade indexer. If the indexed chain is [fully supported by The Graph Network](/developing/supported-networks), you can add some GRT as "signal", to attract more indexers. It is recommended to curate your subgraph with at least 3,000 GRT to attract 2-3 Indexers for higher quality of service. - -You can start to query your subgraph right away on The Graph Network, once you have generated an API key. - -### Skapa en API nyckel - -Du kan generera en API-nyckel i Subgraf Studio [here](https://thegraph.com/studio/apikeys/). 
- -![Sida för att skapa API-nyckel](/img/api-image.png) - -You can use this API key to query subgraphs on The Graph Network. All users start on the Free Plan, which includes 100,000 free queries per month. Developers can sign up for the Growth Plan by connecting a credit or debit card, or by depositing GRT to Subgraph Studio billing system. - -> Note: see the [billing documentation](../billing) for more information on plans, and on managing your billing on Subgraph Studio. - -### Säkra din API nyckel - -Det rekommenderas att du säkrar API: et genom att begränsa dess användning på två sätt: - -1. Auktoriserade subgrafer -2. Auktoriserad Domän - -You can secure your API key [here](https://thegraph.com/studio/apikeys/). - -![Subgraf lockdown sida](/img/subgraph-lockdown.png) - -### Fråga din subgraf på det decentraliserade nätverket - -Now you can check the indexing status of the Indexers on the network in Graph Explorer (example [here](https://thegraph.com/explorer/subgraphs/Dtj2HicXKpoUjNB7ffdBkMwt3L9Sz3cbENd67AdHu6Vb?view=Indexers&chain=arbitrum-one)). The green line at the top indicates that at the time of posting 7 Indexers successfully indexed that subgraph. Also in the Indexer tab you can see which Indexers picked up your subgraph. - -![Rocket Pool subgraf](/img/rocket-pool-subgraph.png) - -Så snart den första indexeraren har indexerat din subgraf helt och hållet kan du börja fråga subgrafen på det decentraliserade nätverket. För att hämta fråge URL för din subgraf kan du kopiera/klistra in den genom att klicka på symbolen bredvid fråge URL. Du kommer att se något sånt här: - -`https://gateway.thegraph.com/api/[api-key]/subgraphs/id/S9ihna8D733WTEShJ1KctSTCvY1VJ7gdVwhUujq4Ejo` - -Viktigt: Se till att ersätta `[api-key]` med en faktisk API-nyckel som genereras i avsnittet ovan. - -Du kan nu använda den fråge URL i din dapp för att skicka dina GraphQL förfrågningar till. - -Grattis! Du är nu en pionjär inom decentralisering! 
- -> Obs: På grund av nätverkets distribuerade karaktär kan det vara så att olika indexerare har indexerat upp till olika block. För att bara ta emot färska data kan du ange det minsta block som en indexerare måste ha indexerat för att kunna betjäna din fråga genom att ange `{ number_gte: $minBlock }` som fältargument, som visas i exemplet nedan: - -```graphql -{ - stakers(block: { number_gte: 14486109 }) { - id - } -} -``` - -Mer information om nätverkets karaktär och hur man hanterar omorganisationer beskrivs i dokumentationsartikeln [Distribuerade system](/querying/distributed-systems/). - -## Uppdatera en subgraf i nätverket - -If you would like to update an existing subgraph on the network, you can do this by deploying a new version of your subgraph to Subgraph Studio using the Graph CLI. - -1. Make changes to your current subgraph. -2. Distribuera följande och ange den nya versionen i kommandot (t. ex. v0.0.1, v0.0.2, etc): - -```sh -graph deploy --studio --version -``` - -3. Test the new version in Subgraph Studio by querying in the playground. -4. Publicera den nya versionen på The Graph Nätverk. Kom ihåg att detta kräver gas (som beskrivs i avsnittet ovan). - -### Ägaruppdateringsavgift: Djupdykning - -> Note: Curation on Arbitrum has a flat bonding curve. Learn more about Arbitrum [here](/arbitrum/arbitrum-faq/). - -En uppdatering kräver att GRT migreras från den gamla versionen av subgrafen till den nya versionen. Detta innebär att för varje uppdatering kommer en ny bindningskurva att skapas (mer om bindningskurvor [here](/network/curating#bonding-curve-101)). - -Den nya bindningskurvan tar ut 2,5% kurationsskatt på all GRT som migreras till den nya versionen. Ägaren ska betala 50% av detta eller 1,25 %. De övriga 1,25 % tas upp av alla kuratorer som en avgift. Denna incitamentsdesign är på plats för att förhindra att en ägare av en subgraf kan tömma alla sina curatorers medel med rekursiva uppdateringsanrop.
Om det inte finns någon kurationsaktivitet måste du betala minst 100 GRT för att signalera din egen subgraf. - -Låt oss ta ett exempel, detta är bara fallet om din subgraf aktivt kureras på: - -- 100 000 GRT signaleras med automigrera på v1 av en subgraf -- Ägaruppdateringar till v2. 100 000 GRT migreras till en ny bindningskurva, där 97 500 GRT sätts in i den nya kurvan och 2 500 GRT bränns -- Ägaren låter sedan bränna 1 250 GRT för att betala halva avgiften. Ägaren måste ha detta i sin plånbok innan uppdateringen, annars kommer uppdateringen inte att lyckas. Detta sker i samma transaktion som uppdateringen. - -_Medan den här mekanismen för närvarande är aktiv på nätverket diskuterar communityn sätt att minska kostnaderna för uppdateringar för subgraf utvecklare._ - -### Upprätthålla en stabil version av en subgraf - -Om du gör många ändringar i din subgraf är det inte en bra idé att kontinuerligt uppdatera den och stå för uppdateringskostnaderna. Att upprätthålla en stabil och konsistent version av din subgraf är avgörande, inte bara ur kostnadsperspektiv, utan också så att Indexers kan känna sig trygga med sina synkroniseringstider. Indexers bör informeras när du planerar en uppdatering så att deras synkroniseringstider inte påverkas. Tveka inte att utnyttja kanalen [#Indexers](https://discord.gg/JexvtHa7dq) på Discord för att låta Indexers veta när du versionerar dina subgrafer. - -Subgrafer är öppna API: er som externa utvecklare utnyttjar. Öppna API: er måste följa strikta standarder så att de inte bryter mot externa utvecklares applikationer. I The Graph Nätverk måste en subgrafutvecklare överväga indexerare och hur lång tid det tar för dem att synkronisera en ny subgraf **liksom** andra utvecklare som använder deras subgrafer. - -### Uppdatera metadata för en subgraf - -You can update the metadata of your subgraphs without having to publish a new version.
The metadata includes the subgraph name, image, description, website URL, source code URL, and categories. Developers can do this by updating their subgraph details in Subgraph Studio where you can edit all applicable fields. - -Se till att **Uppdatera subgraf detaljer i Utforskaren** är markerad och klicka på **Spara**. Om detta är markerat kommer en transaktion i kedjan att genereras som uppdaterar subgraf detaljer i Utforskaren utan att behöva publicera en ny version med en ny distribution. - -## Bästa metoder för att distribuera en subgraf till Graph Nätverk - -1. Utnyttja ett ENS namn för subgraf utveckling: - -- Konfigurera din ENS [here](https://app.ens.domains/) -- Lägg till ditt ENS namn i dina inställningar [here](https://thegraph.com/explorer/settings?view=display-name). - -2. Ju mer kompletta dina profiler är, desto bättre är chansen att dina subgrafer indexeras och kureras. - -## Avskrivning av en subgraf i The Graph Nätverk - -Följ stegen [here](/managing/transfer-and-deprecate-a-subgraph) för att depreciera din subgraph och ta bort den från The Graph Nätverk. - -## Förfrågan om en undergraf + fakturering på The Graph Nätverk - -Den hostade tjänsten skapades för att låta utvecklare distribuera sina subgrafer utan några begränsningar. - -On The Graph Network, query fees have to be paid as a core part of the protocol's incentives. For more information on subscribing to APIs and paying the query fees, check out billing documentation [here](/billing/). - -## Ytterligare resurser - -Om du fortfarande är förvirrad, var inte rädd! 
Kolla in följande resurser eller se vår videoguide om uppgradering av undergrafer till det decentraliserade nätverket nedan: - - - -- [The Graf Nätverk Kontrakt](https://github.com/graphprotocol/contracts) -- [Kuration Kontrakt](https://github.com/graphprotocol/contracts/blob/dev/contracts/curation/Curation.sol) - det underliggande kontraktet som GNS omsluter - - Adress - `0x8fe00a685bcb3b2cc296ff6ffeab10aca4ce1538` -- [Dokumentation för Subgraf Studio](/deploying/subgraph-studio) diff --git a/website/pages/tr/cookbook/base-testnet.mdx b/website/pages/tr/cookbook/base-testnet.mdx deleted file mode 100644 index fa615597e6dc..000000000000 --- a/website/pages/tr/cookbook/base-testnet.mdx +++ /dev/null @@ -1,111 +0,0 @@ ---- -title: Base'de Subgraphlar Oluşturma ---- - -Bu kılavuz, Base test ağı üzerinde subgraph'ınızı nasıl başlatacağınızı, oluşturacağınızı ve dağıtacağınızı hızlı bir şekilde açıklamaktadır. - -İhtiyacınız olanlar: - -- A Base Sepolia testnet contract address -- Bir kripto cüzdanı (Örneğin, MetaMask veya Coinbase Cüzdanı) - -## Subgraph Stüdyo - -### 1. The Graph CLI'ı Yükleyin - -The Graph CLI (>=v0.41.0), JavaScript dilinde yazılmıştır ve kullanmak için `npm` veya `yarn`'ın yüklü olması gerekmektedir. - -```sh -# NPM -npm install -g @graphprotocol/graph-cli - -# Yarn -yarn global add @graphprotocol/graph-cli -``` - -### 2. Create your subgraph in Subgraph Studio - -Go to [Subgraph Studio](https://thegraph.com/studio/) and connect your crypto wallet. - -Once connected, click "Create a Subgraph", enter a name for your subgraph and click Create a Subgraph. - -### 3. Subgraph'ınızı başlatın - -> You can find specific commands for your subgraph in Subgraph Studio. - -graph-cli'nin en son sürümüne (0.41.0 üzeri) güncellendiğinden emin olun - -```sh -graph --version -``` - -Mevcut bir sözleşmeden subgraph'ınızı başlatın. - -```sh -graph init --studio -``` - -Subgraph kısa adı, subgraph'ınız için bir tanımlayıcıdır.
CLI aracı, subgraph oluşturmak için size adım adım eşlik edecektir, bunlar: - -- Protokol: ethereum -- Subgraph kısa adı: `` -- Subgraph oluşturmak için dizin: `` -- Ethereum network: base-sepolia -- Contract address: `` -- Start block (optional) -- Contract name: `` -- Yes/no to indexing events (yes means your subgraph will be bootstrapped with entities in the schema and simple mappings for emitted events) - -### 4. Write your Subgraph - -> If emitted events are the only thing you want to index, then no additional work is required, and you can skip to the next step. - -The previous command creates a scaffold subgraph that you can use as a starting point for building your subgraph. When making changes to the subgraph, you will mainly work with three files: - -- Manifest (subgraph.yaml) - The manifest defines what datasources your subgraphs will index. Make sure to add `base-sepolia` as the network name in the manifest file to deploy your subgraph on Base Sepolia. -- Schema (schema.graphql) - The GraphQL schema defines what data you wish to retrieve from the subgraph. -- AssemblyScript Eşleştirmeleri (mapping.ts) - Bu, veri kaynaklarınızdaki verileri şemada tanımlanan varlıklara çeviren koddur. - -If you want to index additional data, you will need to extend the manifest, schema and mappings. - -For more information on how to write your subgraph, see [Creating a Subgraph](/developing/creating-a-subgraph). - -### 5. Deploy to Subgraph Studio - -Before you can deploy your subgraph, you will need to authenticate with Subgraph Studio. You can do this by running the following command: - -Authenticate the subgraph in Studio: - -``` -graph auth --studio -``` - -Next, enter your subgraph's directory. - -``` - cd -``` - -Build your subgraph with the following command: - -``` -graph codegen && graph build -``` - -Finally, you can deploy your subgraph using this command: - -``` -graph deploy --studio -``` - -### 6.
Query your subgraph - -Once your subgraph is deployed, you can query it from your dapp using the `Development Query URL` in Subgraph Studio. - -Note - Studio API is rate-limited. Hence, it should preferably be used for development and testing. - -To learn more about querying data from your subgraph, see the [Querying a Subgraph](/querying/querying-the-graph) page. diff --git a/website/pages/tr/cookbook/grafting-hotfix.mdx b/website/pages/tr/cookbook/grafting-hotfix.mdx new file mode 100644 index 000000000000..4be0a0b07790 --- /dev/null +++ b/website/pages/tr/cookbook/grafting-hotfix.mdx @@ -0,0 +1,186 @@ +--- +title: Subgraph Best Practice 6 - Use Grafting for Quick Hotfix Deployment +--- + +## TLDR + +Grafting is a powerful feature in subgraph development that allows you to build and deploy new subgraphs while reusing the indexed data from existing ones. + +### Overview + +This feature enables quick deployment of hotfixes for critical issues, eliminating the need to re-index the entire subgraph from scratch. By preserving historical data, grafting minimizes downtime and ensures continuity in data services. + +## Benefits of Grafting for Hotfixes + +1. **Rapid Deployment** + + - **Minimize Downtime**: When a subgraph encounters a critical error and stops indexing, grafting enables you to deploy a fix immediately without waiting for re-indexing. + - **Immediate Recovery**: The new subgraph continues from the last indexed block, ensuring that data services remain uninterrupted. + +2. **Data Preservation** + + - **Reuse Historical Data**: Grafting copies the existing data from the base subgraph, so you don’t lose valuable historical records. + - **Consistency**: Maintains data continuity, which is crucial for applications relying on consistent historical data. + +3. **Efficiency** + - **Save Time and Resources**: Avoids the computational overhead of re-indexing large datasets.
+ - **Focus on Fixes**: Allows developers to concentrate on resolving issues rather than managing data recovery. + +## Best Practices When Using Grafting for Hotfixes + +1. **Initial Deployment Without Grafting** + + - **Start Clean**: Always deploy your initial subgraph without grafting to ensure that it’s stable and functions as expected. + - **Test Thoroughly**: Validate the subgraph’s performance to minimize the need for future hotfixes. + +2. **Implementing the Hotfix with Grafting** + + - **Identify the Issue**: When a critical error occurs, determine the block number of the last successfully indexed event. + - **Create a New Subgraph**: Develop a new subgraph that includes the hotfix. + - **Configure Grafting**: Use grafting to copy data up to the identified block number from the failed subgraph. + - **Deploy Quickly**: Publish the grafted subgraph to restore service as soon as possible. + +3. **Post-Hotfix Actions** + + - **Monitor Performance**: Ensure the grafted subgraph is indexing correctly and the hotfix resolves the issue. + - **Republish Without Grafting**: Once stable, deploy a new version of the subgraph without grafting for long-term maintenance. + > Note: Relying on grafting indefinitely is not recommended as it can complicate future updates and maintenance. + - **Update References**: Redirect any services or applications to use the new, non-grafted subgraph. + +4. **Important Considerations** + - **Careful Block Selection**: Choose the graft block number carefully to prevent data loss. + - **Tip**: Use the block number of the last correctly processed event. + - **Use Deployment ID**: Ensure you reference the Deployment ID of the base subgraph, not the Subgraph ID. + - **Note**: The Deployment ID is the unique identifier for a specific subgraph deployment. + - **Feature Declaration**: Remember to declare grafting in the subgraph manifest under features. 
+ +## Example: Deploying a Hotfix with Grafting + +Suppose you have a subgraph tracking a smart contract that has stopped indexing due to a critical error. Here’s how you can use grafting to deploy a hotfix. + +1. **Failed Subgraph Manifest (subgraph.yaml)** + + ```yaml + specVersion: 1.0.0 + schema: + file: ./schema.graphql + dataSources: + - kind: ethereum/contract + name: OldSmartContract + network: sepolia + source: + address: '0xOldContractAddress' + abi: Lock + startBlock: 5000000 + mapping: + kind: ethereum/events + apiVersion: 0.0.7 + language: wasm/assemblyscript + entities: + - Withdrawal + abis: + - name: Lock + file: ./abis/OldLock.json + eventHandlers: + - event: Withdrawal(uint256,uint256) + handler: handleOldWithdrawal + file: ./src/old-lock.ts + ``` + +2. **New Grafted Subgraph Manifest (subgraph.yaml)** + ```yaml + specVersion: 1.0.0 + schema: + file: ./schema.graphql + dataSources: + - kind: ethereum/contract + name: NewSmartContract + network: sepolia + source: + address: '0xNewContractAddress' + abi: Lock + startBlock: 6000001 # Block after the last indexed block + mapping: + kind: ethereum/events + apiVersion: 0.0.7 + language: wasm/assemblyscript + entities: + - Withdrawal + abis: + - name: Lock + file: ./abis/Lock.json + eventHandlers: + - event: Withdrawal(uint256,uint256) + handler: handleWithdrawal + file: ./src/lock.ts + features: + - grafting + graft: + base: QmBaseDeploymentID # Deployment ID of the failed subgraph + block: 6000000 # Last successfully indexed block + ``` + +**Explanation:** + +- **Data Source Update**: The new subgraph points to 0xNewContractAddress, which may be a fixed version of the smart contract. +- **Start Block**: Set to one block after the last successfully indexed block to avoid reprocessing the error. +- **Grafting Configuration**: + - **base**: Deployment ID of the failed subgraph. + - **block**: Block number where grafting should begin. + +3. 
**Deployment Steps** + + - **Update the Code**: Implement the hotfix in your mapping scripts (e.g., handleWithdrawal). + - **Adjust the Manifest**: As shown above, update the `subgraph.yaml` with grafting configurations. + - **Deploy the Subgraph**: + - Authenticate with the Graph CLI. + - Deploy the new subgraph using `graph deploy`. + +4. **Post-Deployment** + - **Verify Indexing**: Check that the subgraph is indexing correctly from the graft point. + - **Monitor Data**: Ensure that new data is being captured and the hotfix is effective. + - **Plan for Republish**: Schedule the deployment of a non-grafted version for long-term stability. + +## Warnings and Cautions + +While grafting is a powerful tool for deploying hotfixes quickly, there are specific scenarios where it should be avoided to maintain data integrity and ensure optimal performance. + +- **Incompatible Schema Changes**: If your hotfix requires altering the type of existing fields or removing fields from your schema, grafting is not suitable. Grafting expects the new subgraph’s schema to be compatible with the base subgraph’s schema. Incompatible changes can lead to data inconsistencies and errors because the existing data won’t align with the new schema. +- **Significant Mapping Logic Overhauls**: When the hotfix involves substantial modifications to your mapping logic—such as changing how events are processed or altering handler functions—grafting may not function correctly. The new logic might not be compatible with the data processed under the old logic, leading to incorrect data or failed indexing. +- **Deployments to The Graph Network**: Grafting is not recommended for subgraphs intended for The Graph’s decentralized network (mainnet). It can complicate indexing and may not be fully supported by all Indexers, potentially causing unexpected behavior or increased costs. For mainnet deployments, it’s safer to re-index the subgraph from scratch to ensure full compatibility and reliability. 
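Careful block selection, as advised above, reduces to simple arithmetic: graft at the last correctly processed block and start the new data source one block later. A hypothetical helper (not part of any Graph tooling) makes the relationship explicit:

```python
# Derive the graft configuration from the last successfully processed block.
# Illustrative only; the key names mirror the subgraph.yaml example above.

def graft_config(last_good_block: int, base_deployment_id: str) -> dict:
    # Graft up to and including the last correctly processed block,
    # then start indexing at the next block to avoid reprocessing the error.
    return {
        "graft": {"base": base_deployment_id, "block": last_good_block},
        "startBlock": last_good_block + 1,
    }

cfg = graft_config(6_000_000, "QmBaseDeploymentID")
# matches the example manifest: graft.block = 6000000, startBlock = 6000001
```

Keeping this off-by-one relationship in a helper (or at least in a comment) avoids the most common grafting mistake: reprocessing the failing block, or skipping blocks and losing data.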
+ +### Risk Management + +- **Data Integrity**: Incorrect block numbers can lead to data loss or duplication. +- **Testing**: Always test grafting in a development environment before deploying to production. + +## Conclusion + +Grafting is an effective strategy for deploying hotfixes in subgraph development, enabling you to: + +- **Quickly Recover** from critical errors without re-indexing. +- **Preserve Historical Data**, maintaining continuity for applications and users. +- **Ensure Service Availability** by minimizing downtime during critical fixes. + +However, it’s important to use grafting judiciously and follow best practices to mitigate risks. After stabilizing your subgraph with the hotfix, plan to deploy a non-grafted version to ensure long-term maintainability. + +## Additional Resources + +- **[Grafting Documentation](/cookbook/grafting/)**: Replace a Contract and Keep its History With Grafting +- **[Understanding Deployment IDs](/querying/querying-by-subgraph-id-vs-deployment-id/)**: Learn the difference between Deployment ID and Subgraph ID. + +By incorporating grafting into your subgraph development workflow, you can enhance your ability to respond to issues swiftly, ensuring that your data services remain robust and reliable. + +## Subgraph Best Practices 1-6 + +1. [Improve Query Speed with Subgraph Pruning](/cookbook/pruning/) + +2. [Improve Indexing and Query Responsiveness by Using @derivedFrom](/cookbook/derivedfrom/) + +3. [Improve Indexing and Query Performance by Using Immutable Entities and Bytes as IDs](/cookbook/immutable-entities-bytes-as-ids/) + +4. [Improve Indexing Speed by Avoiding `eth_calls`](/cookbook/avoid-eth-calls/) + +5. [Simplify and Optimize with Timeseries and Aggregations](/cookbook/timeseries/) + +6. 
[Use Grafting for Quick Hotfix Deployment](/cookbook/grafting-hotfix/) diff --git a/website/pages/tr/cookbook/timeseries.mdx b/website/pages/tr/cookbook/timeseries.mdx new file mode 100644 index 000000000000..88ee70005a6e --- /dev/null +++ b/website/pages/tr/cookbook/timeseries.mdx @@ -0,0 +1,194 @@ +--- +title: Subgraph Best Practice 5 - Simplify and Optimize with Timeseries and Aggregations +--- + +## TLDR + +Leveraging the new time-series and aggregations feature in subgraphs can significantly enhance both indexing speed and query performance. + +## Overview + +Timeseries and aggregations reduce data processing overhead and accelerate queries by offloading aggregation computations to the database and simplifying mapping code. This approach is particularly effective when handling large volumes of time-based data. + +## Benefits of Timeseries and Aggregations + +1. Improved Indexing Time + +- Less Data to Load: Mappings handle less data since raw data points are stored as immutable timeseries entities. +- Database-Managed Aggregations: Aggregations are automatically computed by the database, reducing the workload on the mappings. + +2. Simplified Mapping Code + +- No Manual Calculations: Developers no longer need to write complex aggregation logic in mappings. +- Reduced Complexity: Simplifies code maintenance and minimizes the potential for errors. + +3. Dramatically Faster Queries + +- Immutable Data: All timeseries data is immutable, enabling efficient storage and retrieval. +- Efficient Data Separation: Aggregates are stored separately from raw timeseries data, allowing queries to process significantly less data—often several orders of magnitude less. + +### Important Considerations + +- Immutable Data: Timeseries data cannot be altered once written, ensuring data integrity and simplifying indexing. +- Automatic ID and Timestamp Management: id and timestamp fields are automatically managed by graph-node, reducing potential errors. 
+- Efficient Data Storage: By separating raw data from aggregates, storage is optimized, and queries run faster. + +## How to Implement Timeseries and Aggregations + +### Defining Timeseries Entities + +A timeseries entity represents raw data points collected over time. It is defined with the `@entity(timeseries: true)` annotation. Key requirements: + +- Immutable: Timeseries entities are always immutable. +- Mandatory Fields: + - `id`: Must be of type `Int8!` and is auto-incremented. + - `timestamp`: Must be of type `Timestamp!` and is automatically set to the block timestamp. + +Example: + +```graphql +type Data @entity(timeseries: true) { + id: Int8! + timestamp: Timestamp! + price: BigDecimal! +} +``` + +### Defining Aggregation Entities + +An aggregation entity computes aggregated values from a timeseries source. It is defined with the `@aggregation` annotation. Key components: + +- Annotation Arguments: + - `intervals`: Specifies time intervals (e.g., `["hour", "day"]`). + +Example: + +```graphql +type Stats @aggregation(intervals: ["hour", "day"], source: "Data") { + id: Int8! + timestamp: Timestamp! + sum: BigDecimal! @aggregate(fn: "sum", arg: "price") +} +``` + +In this example, Stats aggregates the price field from Data over hourly and daily intervals, computing the sum. + +### Querying Aggregated Data + +Aggregations are exposed via query fields that allow filtering and retrieval based on dimensions and time intervals. + +Example: + +```graphql +{ + tokenStats( + interval: "hour" + where: { token: "0x1234567890abcdef", timestamp_gte: "1704164640000000", timestamp_lt: "1704251040000000" } + ) { + id + timestamp + token { + id + } + totalVolume + priceUSD + count + } +} +``` + +### Using Dimensions in Aggregations + +Dimensions are non-aggregated fields used to group data points. They enable aggregations based on specific criteria, such as a token in a financial application. 
+
+Example:
+
+### Timeseries Entity
+
+```graphql
+type TokenData @entity(timeseries: true) {
+  id: Int8!
+  timestamp: Timestamp!
+  token: Token!
+  amount: BigDecimal!
+  priceUSD: BigDecimal!
+}
+```
+
+### Aggregation Entity with Dimension
+
+```graphql
+type TokenStats @aggregation(intervals: ["hour", "day"], source: "TokenData") {
+  id: Int8!
+  timestamp: Timestamp!
+  token: Token!
+  totalVolume: BigDecimal! @aggregate(fn: "sum", arg: "amount")
+  priceUSD: BigDecimal! @aggregate(fn: "last", arg: "priceUSD")
+  count: Int8! @aggregate(fn: "count", cumulative: true)
+}
+```
+
+- Dimension Field: `token` groups the data, so aggregates are computed per token.
+- Aggregates:
+  - `totalVolume`: Sum of `amount`.
+  - `priceUSD`: Last recorded `priceUSD`.
+  - `count`: Cumulative count of records.
+
+### Aggregation Functions and Expressions
+
+Supported aggregation functions:
+
+- `sum`
+- `count`
+- `min`
+- `max`
+- `first`
+- `last`
+
+### The `arg` in `@aggregate` can be
+
+- A field name from the timeseries entity.
+- An expression using fields and constants.
+
+### Examples of Aggregation Expressions
+
+- Sum Token Value: `@aggregate(fn: "sum", arg: "priceUSD * amount")`
+- Maximum Positive Amount: `@aggregate(fn: "max", arg: "greatest(amount0, amount1, 0)")`
+- Conditional Sum: `@aggregate(fn: "sum", arg: "case when amount0 > amount1 then amount0 else 0 end")`
+
+Supported operators and functions include basic arithmetic (`+`, `-`, `*`, `/`), comparison operators, logical operators (`and`, `or`, `not`), and SQL functions like `greatest`, `least`, `coalesce`, etc.
+
+### Query Parameters
+
+- `interval`: Specifies the time interval (e.g., "hour").
+- `where`: Filters based on dimensions and timestamp ranges.
+- `timestamp_gte` / `timestamp_lt`: Filters for start and end times (microseconds since epoch).
+
+### Notes
+
+- Sorting: Results are automatically sorted by `timestamp` and `id` in descending order.
+- Current Data: An optional `current` argument can include the current, partially filled interval.
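+
+The expression forms listed above can be combined with the `TokenData` timeseries entity defined earlier. A minimal sketch — the `TokenVolume` entity name and its aggregate field names are hypothetical, not part of any existing schema:
+
+```graphql
+type TokenVolume @aggregation(intervals: ["hour", "day"], source: "TokenData") {
+  id: Int8!
+  timestamp: Timestamp!
+  token: Token!
+  # sum of an arithmetic expression over two timeseries fields
+  totalValueUSD: BigDecimal! @aggregate(fn: "sum", arg: "priceUSD * amount")
+  # maximum amount, clamped at zero with the SQL greatest function
+  maxAmount: BigDecimal! @aggregate(fn: "max", arg: "greatest(amount, 0)")
+}
+```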
+ +### Conclusion + +Implementing timeseries and aggregations in subgraphs is a best practice for projects dealing with time-based data. This approach: + +- Enhances Performance: Speeds up indexing and querying by reducing data processing overhead. +- Simplifies Development: Eliminates the need for manual aggregation logic in mappings. +- Scales Efficiently: Handles large volumes of data without compromising on speed or responsiveness. + +By adopting this pattern, developers can build more efficient and scalable subgraphs, providing faster and more reliable data access to end-users. To learn more about implementing timeseries and aggregations, refer to the [Timeseries and Aggregations Readme](https://github.com/graphprotocol/graph-node/blob/master/docs/aggregations.md) and consider experimenting with this feature in your subgraphs. + +## Subgraph Best Practices 1-6 + +1. [Improve Query Speed with Subgraph Pruning](/cookbook/pruning/) + +2. [Improve Indexing and Query Responsiveness by Using @derivedFrom](/cookbook/derivedfrom/) + +3. [Improve Indexing and Query Performance by Using Immutable Entities and Bytes as IDs](/cookbook/immutable-entities-bytes-as-ids/) + +4. [Improve Indexing Speed by Avoiding `eth_calls`](/cookbook/avoid-eth-calls/) + +5. [Simplify and Optimize with Timeseries and Aggregations](/cookbook/timeseries/) + +6. [Use Grafting for Quick Hotfix Deployment](/cookbook/grafting-hotfix/) diff --git a/website/pages/tr/cookbook/upgrading-a-subgraph.mdx b/website/pages/tr/cookbook/upgrading-a-subgraph.mdx deleted file mode 100644 index 8582df134a31..000000000000 --- a/website/pages/tr/cookbook/upgrading-a-subgraph.mdx +++ /dev/null @@ -1,156 +0,0 @@ ---- -title: Mevcut Bir Subgraph'ı Graph Ağına Yükseltme ---- - -## Giriş - -Bu, subgraph'ınızı barındırılan hizmetten Graph'ın merkeziyetsiz ağına nasıl yükselteceğinize yönelik bir rehberdirr. 
Snapshot, Loopring, Audius, Premia, Livepeer, Uma, Curve, Lido ve daha birçok proje dahil olmak üzere 1.000'den fazla subgraph başarıyla Graph Ağı'na yükseltildi! - -Yükseltme işlemi hızlıdır ve subgraphlar'ınız yalnızca Graph Ağı'nda elde edebileceğiniz güvenilirlik ve performanstan sonsuza kadar yararlanacaktır. - -### Ön Koşullar - -- You have a subgraph deployed on the hosted service. - -## Mevcut Bir Subgraph'ı Graph Ağına Yükseltme - - - -If you are logged in to the hosted service, you can access a simple flow to upgrade your subgraphs from [your dashboard](https://thegraph.com/hosted-service/dashboard), or from an individual subgraph page. - -> This process typically takes less than five minutes. - -1. Select the subgraph(s) you want to upgrade. -2. Connect or enter the receiving wallet (the wallet that will become the owner of the subgraph). -3. Click the "Upgrade" button. - -That's it! Your subgraphs will be deployed to Subgraph Studio, and published on The Graph Network. You can access the [Subgraph Studio](https://thegraph.com/studio/) to manage your subgraphs, logging in with the wallet specified during the upgrade process. - -You'll be able to view your subgraphs live on the decentralized network via [Graph Explorer](https://thegraph.com/explorer). - -### What next? - -When your subgraph is upgraded, it will automatically be indexed by the upgrade indexer. If the indexed chain is [fully supported by The Graph Network](/developing/supported-networks), you can add some GRT as "signal", to attract more indexers. It is recommended to curate your subgraph with at least 3,000 GRT to attract 2-3 Indexers for higher quality of service. - -You can start to query your subgraph right away on The Graph Network, once you have generated an API key. - -### Bir API anahtarı oluşturun - -Subgraph Stüdyo'da bir API anahtarı oluşturabilirsiniz [here](https://thegraph.com/studio/apikeys/). 
- -![API key creation page](/img/api-image.png) - -You can use this API key to query subgraphs on The Graph Network. All users start on the Free Plan, which includes 100,000 free queries per month. Developers can sign up for the Growth Plan by connecting a credit or debit card, or by depositing GRT to Subgraph Studio billing system. - -> Note: see the [billing documentation](../billing) for more information on plans, and on managing your billing on Subgraph Studio. - -### API anahtarınızın güvenliğini sağlama - -API'nin kullanımını iki şekilde sınırlandırarak güvenliğini sağlamanız önerilir: - -1. Yetkilendirilmiş Subgraphlar -2. Yetkilendirilmiş Domain - -You can secure your API key [here](https://thegraph.com/studio/apikeys/). - -![Subgraph lockdown page](/img/subgraph-lockdown.png) - -### Merkeziyetsiz ağ üzerinde subgraph'ınızı sorgulama - -Now you can check the indexing status of the Indexers on the network in Graph Explorer (example [here](https://thegraph.com/explorer/subgraphs/Dtj2HicXKpoUjNB7ffdBkMwt3L9Sz3cbENd67AdHu6Vb?view=Indexers&chain=arbitrum-one)). The green line at the top indicates that at the time of posting 7 Indexers successfully indexed that subgraph. Also in the Indexer tab you can see which Indexers picked up your subgraph. - -![Rocket Pool subgraph](/img/rocket-pool-subgraph.png) - -İlk İndeksleyici subgraph'ınızı tam olarak indekslediğinde, subgraph'ı merkeziyetsiz ağda sorgulamaya başlayabilirsiniz. Subgraph'ınızın sorgu URL'sini almak için, sorgu URL'sinin yanındaki simgeye tıklayarak kopyalayıp yapıştırabilirsiniz. Şunun gibi bir şey göreceksiniz: - -`https://gateway.thegraph.com/api/[api-key]/subgraphs/id/S9ihna8D733WTEShJ1KctSTCvY1VJ7gdVwhUujq4Ejo` - -Önemli: `[api-key]` yerine yukarıdaki bölümde oluşturulan gerçek API anahtarını kullandığınızdan emin olun. - -Artık GraphQL isteklerinizi göndermek için merkeziyetsiz uygulamanızda bu Sorgu URL'sini kullanabilirsiniz. - -Tebrikler! Artık merkeziyetsizliğin öncülerinden birisiniz! 
- -> Not: Ağın dağıtılmış yapısı nedeniyle, farklı İndeksleyicilerin farklı blokları indekslemiş olması söz konusu olabilir. Yalnızca yeni verileri almak için, aşağıdaki örnekte gösterildiği gibi block: `{ number_gte: $minBlock }` alan bağımsız değişkeniyle sorgunuzu sunmak için bir İndeksleyicinin indeklemesi gereken minimum bloğu belirtebilirsiniz: - -```graphql -{ - stakers(block: { number_gte: 14486109 }) { - id - } -} -``` - -Ağın doğası ve yeniden düzenlemelerin nasıl ele alınacağı hakkında daha fazla bilgi [Dağıtılmış Sistemler] (/querying/distributed-systems/) dokümantasyon makalesinde açıklanmaktadır. - -## Ağ Üzerinde Bir Subgraph'ın Güncellenmesi - -If you would like to update an existing subgraph on the network, you can do this by deploying a new version of your subgraph to Subgraph Studio using the Graph CLI. - -1. Make changes to your current subgraph. -2. Aşağıdakileri dağıtın ve komutta yeni sürümü belirtin (örn. v0.0.1, v0.0.2, vb.): - -```sh -graph deploy --studio --version -``` - -3. Test the new version in Subgraph Studio by querying in the playground -4. Yeni sürümü Graph Ağı'nda yayınlayın. Bunun için gas gerektiğini unutmayınız (yukarıdaki bölümde açıklandığı gibi). - -### Sahip Güncelleme Ücreti: Derinlemesine İnceleme - -> Note: Curation on Arbitrum has a flat bonding curve. Learn more about Arbitrum [here](/arbitrum/arbitrum-faq/). - -Bir güncelleme GRT'nin subgraph eski versiyonundan yeni versiyonuna taşınmasını gerektirmektedir. Bu, her güncelleme için yeni bir bağlanma eğrisinin oluşturulacağı anlamına gelir (bağlanma eğrileri hakkında daha fazla bilgi [here](/network/curating#bonding-curve-101)). - -Yeni bağlanma eğrisi, yeni versiyona taşınan tüm GRT'den %1 kürasyon vergisi almaktadır. Sahip bunun %50'sini veya %1,25'ini ödemek zorundadır. Diğer %1,25'lik kısım ise tüm küratörler tarafından ücret olarak karşılanır. 
Bu teşvik tasarımı, bir subgraph sahibinin tekrarlamalı güncelleme çağrılarıyla küratörün tüm fonlarını tüketmesini önlemek için uygulanmaktadır. Herhangi bir küratörlük faaliyeti yoksa, kendi subgraph'ınızı sinyallemek için en az 100 GRT ödemeniz gerekecektir. - -Bir örnek verelim, bu yalnızca subgraph'ınızda aktif olarak küratörlük yapılıyorsa geçerlidir: - -- 100.000 GRT, bir subgraph'ın birinci versiyonunda otomatik geçiş kullanılarak bildirilir -- Subgraph sahibi, ikinci versiyona güncelleme yapar. 100.000 GRT yeni bir bağlanma eğrisine taşınır, 97.500 GRT yeni eğriye yerleştirilir ve 2.500 GRT yakılır -- Sahip, daha sonra ücretin yarısını ödemek için 1250 GRT yakmış bulunmaktadır. Sahip, güncelleme öncesinde bunu cüzdanlarında bulundurmalıdır; aksi halde güncelleme başarılı olmayacaktır. Bu, güncelleme ile aynı işlemde gerçekleşir. - -_Bu mekanizma şu anda ağda yayında olsa da, topluluk şu anda subgraph geliştiricileri için güncelleme maliyetini azaltmanın yollarını tartışıyor._ - -### Bir Subgraph'ın Kararlı Bir Sürümünü Koruma - -Subgraph'ınızda çok fazla değişiklik yapıyorsanız, onu sürekli güncellemek ve güncelleme maliyetlerini karşılamak iyi bir fikir değildir. Subgraph'ınız istikrarlı ve tutarlı bir sürümünü korumak, yalnızca maliyet açısından değil, aynı zamanda İndeksleyicilerin senkronizasyon sürelerinden emin olabilmeleri için de kritik öneme sahiptir. İndeksleyicilerin senkronizasyon sürelerinin etkilenmemesi için bir güncelleme planladığınızda indeksleyiciler sinyallenmelidir. Subgraph'ınızı sürümlendirirken İndeksleyicileri bilgilendirmek için Discord'daki [#Indexers kanalını](https://discord.gg/JexvtHa7dq) kullanmaktan çekinmeyin. - -Subgraphlar, harici geliştiricilerin yararlandığı açık API'lerdir. Açık API'lerin harici geliştiricilerin uygulamalarını bozmaması için katı standartlara uyması gerekmektedir. 
Graph Ağı'nda bir subgraph geliştiricisi, İndeksleyicileri, yeni bir subgraph'ı senkronize etmenin onlar için ne kadar sürdüğünü ve **aynı zamanda** subgraph'ı kullanan diğer geliştiricileri de göz önünde bulundurmalıdır. - -### Bir Subgraph'ın Üst Verisini Güncelleme - -You can update the metadata of your subgraphs without having to publish a new version. The metadata includes the subgraph name, image, description, website URL, source code URL, and categories. Developers can do this by updating their subgraph details in Subgraph Studio where you can edit all applicable fields. - -**Gezgin'de Subgraph Ayrıntılarını Güncelle** seçeneğinin işaretli olduğundan emin olun ve **Kaydet** seçeneğine tıklayın. Bunu işaretlediğiniz takdirde, yeni bir dağıtımla, yeni bir sürüm yayınlamak zorunda kalmadan Gezgin'deki subgraph ayrıntılarını güncelleyen bir zincir içi işlem oluşturulacaktır. - -## Bir Subgraph'ı Graph Ağına Dağıtmak için En İyi Uygulamalar - -1. Subgraph Geliştirme için bir ENS adından yararlanma: - -- ENS'nizi oluşturun [here](https://app.ens.domains/) -- ENS adınızı ayarlarınıza ekleyin [here](https://thegraph.com/explorer/settings?view=display-name). - -2. Profilleriniz ne kadar dolu olursa, subgraphlar'ınızın indekslenme ve kürate edilme şansı o kadar artar. - -## Graph Ağında Bir Subgraph'ın Kullanımdan Kaldırılması - -Subgraph'ınızı kullanımdan kaldırmak ve Graph Ağı'ndan silmek için adımları izleyin [here](/managing/transfer-and-deprecate-a-subgraph). - -## Bir Subgraph'ı Sorgulama + Graph Ağında Faturalama - -Barındırılan hizmet, geliştiricilerin subgrpahlar'ını herhangi bir kısıtlama olmaksızın dağıtmalarına izin verecek şekilde oluşturulmuştur. - -On The Graph Network, query fees have to be paid as a core part of the protocol's incentives. For more information on subscribing to APIs and paying the query fees, check out billing documentation [here](/billing/). - -## Ek Kaynaklar - -Eğer hala kafanız karışıksa, endişelenmeyin! 
Aşağıdaki kaynaklara göz atın veya subgraphları merkeziyetsiz ağa yükseltme hakkındaki video kılavuzumuzu izleyin: - - - -- [Graph Ağı Kontratları](https://github.com/graphprotocol/contracts) -- [Kürasyon Sözleşmesi] (https://github.com/graphprotocol/contracts/blob/dev/contracts/curation/Curation.sol) - GNS'nin sarmaladığı temel sözleşme - - Adres - `0x8fe00a685bcb3b2cc296ff6ffeab10aca4ce1538` -- [Subgraph Stüdyo dökümantasyonu](/deploying/subgraph-studio) diff --git a/website/pages/uk/cookbook/base-testnet.mdx b/website/pages/uk/cookbook/base-testnet.mdx deleted file mode 100644 index 33d4dc7876af..000000000000 --- a/website/pages/uk/cookbook/base-testnet.mdx +++ /dev/null @@ -1,111 +0,0 @@ ---- -title: Building Subgraphs on Base ---- - -This guide will quickly take you through how to initialize, create, and deploy your subgraph on Base testnet. - -What you'll need: - -- A Base Sepolia testnet contract address -- A crypto wallet (e.g. MetaMask or Coinbase Wallet) - -## Субграф Студія - -### 1. Install the Graph CLI - -The Graph CLI (>=v0.41.0) is written in JavaScript and you will need to have either `npm` or `yarn` installed to use it. - -```sh -# NPM -npm install -g @graphprotocol/graph-cli - -# Yarn -yarn global add @graphprotocol/graph-cli -``` - -### 2. Create your subgraph in Subgraph Studio - -Go to [Subgraph Studio](https://thegraph.com/studio/) and connect your crypto wallet. - -Once connected, click "Create a Subgraph", enter a name for your subgraph and click Create a Subgraph. - -### 3. Initialize your Subgraph - -> You can find specific commands for your subgraph in Subgraph Studio. - -Make sure that the graph-cli is updated to latest (above 0.41.0) - -```sh -graph --version -``` - -Initialize your subgraph from an existing contract. - -```sh -graph init --studio -``` - -Your subgraph slug is an identifier for your subgraph. 
The CLI tool will walk you through the steps for creating a subgraph, including: - -- Protocol: ethereum -- Subgraph slug: `` -- Directory to create the subgraph in: `` -- Ethereum network: base-sepolia -- Contract address: `` -- Start block (optional) -- Contract name: `` -- Yes/no to indexing events (yes means your subgraph will be bootstrapped with entities in the schema and simple mappings for emitted events) - -### 3. Write your Subgraph - -> If emitted events are the only thing you want to index, then no additional work is required, and you can skip to the next step. - -The previous command creates a scaffold subgraph that you can use as a starting point for building your subgraph. When making changes to the subgraph, you will mainly work with three files: - -- Manifest (subgraph.yaml) - The manifest defines what datasources your subgraphs will index. Make sure to add `base-sepolia` as the network name in manifest file to deploy your subgraph on Base Sepolia. -- Schema (schema.graphql) - The GraphQL schema defines what data you wish to retreive from the subgraph. -- AssemblyScript Mappings (mapping.ts) - Це код, який транслює дані з ваших джерел даних до елементів, визначених у схемі. - -If you want to index additional data, you will need extend the manifest, schema and mappings. - -For more information on how to write your subgraph, see [Creating a Subgraph](/developing/creating-a-subgraph). - -### 4. Deploy to Subgraph Studio - -Before you can deploy your subgraph, you will need to authenticate with Subgraph Studio. You can do this by running the following command: - -Authenticate the subgraph on studio - -``` -graph auth --studio -``` - -Next, enter your subgraph's directory. - -``` - cd -``` - -Build your subgraph with the following command: - -```` -``` -graph codegen && graph build -``` -```` - -Finally, you can deploy your subgraph using this command: - -```` -``` -graph deploy --studio -``` -```` - -### 5. 
Query your subgraph
-
-Once your subgraph is deployed, you can query it from your dapp using the `Development Query URL` in Subgraph Studio.
-
-Note - Studio API is rate-limited. Hence should preferably be used for development and testing.
-
-To learn more about querying data from your subgraph, see the [Querying a Subgraph](/querying/querying-the-graph) page. diff --git a/website/pages/uk/cookbook/grafting-hotfix.mdx b/website/pages/uk/cookbook/grafting-hotfix.mdx new file mode 100644 index 000000000000..4be0a0b07790 --- /dev/null +++ b/website/pages/uk/cookbook/grafting-hotfix.mdx @@ -0,0 +1,186 @@
+---
+title: Subgraph Best Practice 6 - Use Grafting for Quick Hotfix Deployment
+---
+
+## TLDR
+
+Grafting is a powerful feature in subgraph development that allows you to build and deploy new subgraphs while reusing the indexed data from existing ones.
+
+### Overview
+
+This feature enables quick deployment of hotfixes for critical issues, eliminating the need to re-index the entire subgraph from scratch. By preserving historical data, grafting minimizes downtime and ensures continuity in data services.
+
+## Benefits of Grafting for Hotfixes
+
+1. **Rapid Deployment**
+
+   - **Minimize Downtime**: When a subgraph encounters a critical error and stops indexing, grafting enables you to deploy a fix immediately without waiting for re-indexing.
+   - **Immediate Recovery**: The new subgraph continues from the last indexed block, ensuring that data services remain uninterrupted.
+
+2. **Data Preservation**
+
+   - **Reuse Historical Data**: Grafting copies the existing data from the base subgraph, so you don’t lose valuable historical records.
+   - **Consistency**: Maintains data continuity, which is crucial for applications relying on consistent historical data.
+
+3. **Efficiency**
+   - **Save Time and Resources**: Avoids the computational overhead of re-indexing large datasets.
+ - **Focus on Fixes**: Allows developers to concentrate on resolving issues rather than managing data recovery. + +## Best Practices When Using Grafting for Hotfixes + +1. **Initial Deployment Without Grafting** + + - **Start Clean**: Always deploy your initial subgraph without grafting to ensure that it’s stable and functions as expected. + - **Test Thoroughly**: Validate the subgraph’s performance to minimize the need for future hotfixes. + +2. **Implementing the Hotfix with Grafting** + + - **Identify the Issue**: When a critical error occurs, determine the block number of the last successfully indexed event. + - **Create a New Subgraph**: Develop a new subgraph that includes the hotfix. + - **Configure Grafting**: Use grafting to copy data up to the identified block number from the failed subgraph. + - **Deploy Quickly**: Publish the grafted subgraph to restore service as soon as possible. + +3. **Post-Hotfix Actions** + + - **Monitor Performance**: Ensure the grafted subgraph is indexing correctly and the hotfix resolves the issue. + - **Republish Without Grafting**: Once stable, deploy a new version of the subgraph without grafting for long-term maintenance. + > Note: Relying on grafting indefinitely is not recommended as it can complicate future updates and maintenance. + - **Update References**: Redirect any services or applications to use the new, non-grafted subgraph. + +4. **Important Considerations** + - **Careful Block Selection**: Choose the graft block number carefully to prevent data loss. + - **Tip**: Use the block number of the last correctly processed event. + - **Use Deployment ID**: Ensure you reference the Deployment ID of the base subgraph, not the Subgraph ID. + - **Note**: The Deployment ID is the unique identifier for a specific subgraph deployment. + - **Feature Declaration**: Remember to declare grafting in the subgraph manifest under features. 
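+
+Taken together, these considerations amount to a small addition to the manifest. A minimal sketch — the Deployment ID and block number below are placeholders:
+
+```yaml
+features:
+  - grafting # declare the grafting feature
+graft:
+  base: QmYourBaseDeploymentID # Deployment ID of the base subgraph, not its Subgraph ID
+  block: 6000000 # last correctly processed block
+```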
+ +## Example: Deploying a Hotfix with Grafting + +Suppose you have a subgraph tracking a smart contract that has stopped indexing due to a critical error. Here’s how you can use grafting to deploy a hotfix. + +1. **Failed Subgraph Manifest (subgraph.yaml)** + + ```yaml + specVersion: 1.0.0 + schema: + file: ./schema.graphql + dataSources: + - kind: ethereum/contract + name: OldSmartContract + network: sepolia + source: + address: '0xOldContractAddress' + abi: Lock + startBlock: 5000000 + mapping: + kind: ethereum/events + apiVersion: 0.0.7 + language: wasm/assemblyscript + entities: + - Withdrawal + abis: + - name: Lock + file: ./abis/OldLock.json + eventHandlers: + - event: Withdrawal(uint256,uint256) + handler: handleOldWithdrawal + file: ./src/old-lock.ts + ``` + +2. **New Grafted Subgraph Manifest (subgraph.yaml)** + ```yaml + specVersion: 1.0.0 + schema: + file: ./schema.graphql + dataSources: + - kind: ethereum/contract + name: NewSmartContract + network: sepolia + source: + address: '0xNewContractAddress' + abi: Lock + startBlock: 6000001 # Block after the last indexed block + mapping: + kind: ethereum/events + apiVersion: 0.0.7 + language: wasm/assemblyscript + entities: + - Withdrawal + abis: + - name: Lock + file: ./abis/Lock.json + eventHandlers: + - event: Withdrawal(uint256,uint256) + handler: handleWithdrawal + file: ./src/lock.ts + features: + - grafting + graft: + base: QmBaseDeploymentID # Deployment ID of the failed subgraph + block: 6000000 # Last successfully indexed block + ``` + +**Explanation:** + +- **Data Source Update**: The new subgraph points to 0xNewContractAddress, which may be a fixed version of the smart contract. +- **Start Block**: Set to one block after the last successfully indexed block to avoid reprocessing the error. +- **Grafting Configuration**: + - **base**: Deployment ID of the failed subgraph. + - **block**: Block number where grafting should begin. + +3. 
**Deployment Steps** + + - **Update the Code**: Implement the hotfix in your mapping scripts (e.g., handleWithdrawal). + - **Adjust the Manifest**: As shown above, update the `subgraph.yaml` with grafting configurations. + - **Deploy the Subgraph**: + - Authenticate with the Graph CLI. + - Deploy the new subgraph using `graph deploy`. + +4. **Post-Deployment** + - **Verify Indexing**: Check that the subgraph is indexing correctly from the graft point. + - **Monitor Data**: Ensure that new data is being captured and the hotfix is effective. + - **Plan for Republish**: Schedule the deployment of a non-grafted version for long-term stability. + +## Warnings and Cautions + +While grafting is a powerful tool for deploying hotfixes quickly, there are specific scenarios where it should be avoided to maintain data integrity and ensure optimal performance. + +- **Incompatible Schema Changes**: If your hotfix requires altering the type of existing fields or removing fields from your schema, grafting is not suitable. Grafting expects the new subgraph’s schema to be compatible with the base subgraph’s schema. Incompatible changes can lead to data inconsistencies and errors because the existing data won’t align with the new schema. +- **Significant Mapping Logic Overhauls**: When the hotfix involves substantial modifications to your mapping logic—such as changing how events are processed or altering handler functions—grafting may not function correctly. The new logic might not be compatible with the data processed under the old logic, leading to incorrect data or failed indexing. +- **Deployments to The Graph Network**: Grafting is not recommended for subgraphs intended for The Graph’s decentralized network (mainnet). It can complicate indexing and may not be fully supported by all Indexers, potentially causing unexpected behavior or increased costs. For mainnet deployments, it’s safer to re-index the subgraph from scratch to ensure full compatibility and reliability. 
+ +### Risk Management + +- **Data Integrity**: Incorrect block numbers can lead to data loss or duplication. +- **Testing**: Always test grafting in a development environment before deploying to production. + +## Conclusion + +Grafting is an effective strategy for deploying hotfixes in subgraph development, enabling you to: + +- **Quickly Recover** from critical errors without re-indexing. +- **Preserve Historical Data**, maintaining continuity for applications and users. +- **Ensure Service Availability** by minimizing downtime during critical fixes. + +However, it’s important to use grafting judiciously and follow best practices to mitigate risks. After stabilizing your subgraph with the hotfix, plan to deploy a non-grafted version to ensure long-term maintainability. + +## Additional Resources + +- **[Grafting Documentation](/cookbook/grafting/)**: Replace a Contract and Keep its History With Grafting +- **[Understanding Deployment IDs](/querying/querying-by-subgraph-id-vs-deployment-id/)**: Learn the difference between Deployment ID and Subgraph ID. + +By incorporating grafting into your subgraph development workflow, you can enhance your ability to respond to issues swiftly, ensuring that your data services remain robust and reliable. + +## Subgraph Best Practices 1-6 + +1. [Improve Query Speed with Subgraph Pruning](/cookbook/pruning/) + +2. [Improve Indexing and Query Responsiveness by Using @derivedFrom](/cookbook/derivedfrom/) + +3. [Improve Indexing and Query Performance by Using Immutable Entities and Bytes as IDs](/cookbook/immutable-entities-bytes-as-ids/) + +4. [Improve Indexing Speed by Avoiding `eth_calls`](/cookbook/avoid-eth-calls/) + +5. [Simplify and Optimize with Timeseries and Aggregations](/cookbook/timeseries/) + +6. 
[Use Grafting for Quick Hotfix Deployment](/cookbook/grafting-hotfix/) diff --git a/website/pages/uk/cookbook/timeseries.mdx b/website/pages/uk/cookbook/timeseries.mdx new file mode 100644 index 000000000000..88ee70005a6e --- /dev/null +++ b/website/pages/uk/cookbook/timeseries.mdx @@ -0,0 +1,194 @@ +--- +title: Subgraph Best Practice 5 - Simplify and Optimize with Timeseries and Aggregations +--- + +## TLDR + +Leveraging the new time-series and aggregations feature in subgraphs can significantly enhance both indexing speed and query performance. + +## Overview + +Timeseries and aggregations reduce data processing overhead and accelerate queries by offloading aggregation computations to the database and simplifying mapping code. This approach is particularly effective when handling large volumes of time-based data. + +## Benefits of Timeseries and Aggregations + +1. Improved Indexing Time + +- Less Data to Load: Mappings handle less data since raw data points are stored as immutable timeseries entities. +- Database-Managed Aggregations: Aggregations are automatically computed by the database, reducing the workload on the mappings. + +2. Simplified Mapping Code + +- No Manual Calculations: Developers no longer need to write complex aggregation logic in mappings. +- Reduced Complexity: Simplifies code maintenance and minimizes the potential for errors. + +3. Dramatically Faster Queries + +- Immutable Data: All timeseries data is immutable, enabling efficient storage and retrieval. +- Efficient Data Separation: Aggregates are stored separately from raw timeseries data, allowing queries to process significantly less data—often several orders of magnitude less. + +### Important Considerations + +- Immutable Data: Timeseries data cannot be altered once written, ensuring data integrity and simplifying indexing. +- Automatic ID and Timestamp Management: id and timestamp fields are automatically managed by graph-node, reducing potential errors. 
+- Efficient Data Storage: By separating raw data from aggregates, storage is optimized, and queries run faster. + +## How to Implement Timeseries and Aggregations + +### Defining Timeseries Entities + +A timeseries entity represents raw data points collected over time. It is defined with the `@entity(timeseries: true)` annotation. Key requirements: + +- Immutable: Timeseries entities are always immutable. +- Mandatory Fields: + - `id`: Must be of type `Int8!` and is auto-incremented. + - `timestamp`: Must be of type `Timestamp!` and is automatically set to the block timestamp. + +Example: + +```graphql +type Data @entity(timeseries: true) { + id: Int8! + timestamp: Timestamp! + price: BigDecimal! +} +``` + +### Defining Aggregation Entities + +An aggregation entity computes aggregated values from a timeseries source. It is defined with the `@aggregation` annotation. Key components: + +- Annotation Arguments: + - `intervals`: Specifies time intervals (e.g., `["hour", "day"]`). + +Example: + +```graphql +type Stats @aggregation(intervals: ["hour", "day"], source: "Data") { + id: Int8! + timestamp: Timestamp! + sum: BigDecimal! @aggregate(fn: "sum", arg: "price") +} +``` + +In this example, Stats aggregates the price field from Data over hourly and daily intervals, computing the sum. + +### Querying Aggregated Data + +Aggregations are exposed via query fields that allow filtering and retrieval based on dimensions and time intervals. + +Example: + +```graphql +{ + tokenStats( + interval: "hour" + where: { token: "0x1234567890abcdef", timestamp_gte: "1704164640000000", timestamp_lt: "1704251040000000" } + ) { + id + timestamp + token { + id + } + totalVolume + priceUSD + count + } +} +``` + +### Using Dimensions in Aggregations + +Dimensions are non-aggregated fields used to group data points. They enable aggregations based on specific criteria, such as a token in a financial application. 
+
+Example:
+
+### Timeseries Entity
+
+```graphql
+type TokenData @entity(timeseries: true) {
+  id: Int8!
+  timestamp: Timestamp!
+  token: Token!
+  amount: BigDecimal!
+  priceUSD: BigDecimal!
+}
+```
+
+### Aggregation Entity with Dimension
+
+```graphql
+type TokenStats @aggregation(intervals: ["hour", "day"], source: "TokenData") {
+  id: Int8!
+  timestamp: Timestamp!
+  token: Token!
+  totalVolume: BigDecimal! @aggregate(fn: "sum", arg: "amount")
+  priceUSD: BigDecimal! @aggregate(fn: "last", arg: "priceUSD")
+  count: Int8! @aggregate(fn: "count", cumulative: true)
+}
+```
+
+- Dimension Field: token groups the data, so aggregates are computed per token.
+- Aggregates:
+  - totalVolume: Sum of amount.
+  - priceUSD: Last recorded priceUSD.
+  - count: Cumulative count of records.
+
+### Aggregation Functions and Expressions
+
+Supported aggregation functions:
+
+- sum
+- count
+- min
+- max
+- first
+- last
+
+### The `arg` in `@aggregate` can be
+
+- A field name from the timeseries entity.
+- An expression using fields and constants.
+
+### Examples of Aggregation Expressions
+
+- Sum Token Value: @aggregate(fn: "sum", arg: "priceUSD \* amount")
+- Maximum Positive Amount: @aggregate(fn: "max", arg: "greatest(amount0, amount1, 0)")
+- Conditional Sum: @aggregate(fn: "sum", arg: "case when amount0 > amount1 then amount0 else 0 end")
+
+Supported operators and functions include basic arithmetic (+, -, \*, /), comparison operators, logical operators (and, or, not), and SQL functions like greatest, least, coalesce, etc.
+
+### Query Parameters
+
+- interval: Specifies the time interval (e.g., "hour").
+- where: Filters based on dimensions and timestamp ranges.
+- timestamp_gte / timestamp_lt: Filters for start and end times (microseconds since epoch).
+
+### Notes
+
+- Sorting: Results are automatically sorted by timestamp and id in descending order.
+- Current Data: An optional current argument can include the current, partially filled interval.
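
Putting the expression syntax together, an aggregation can compute a derived value directly in the schema instead of in mappings. A minimal sketch (the entity and field names here are illustrative, not taken from the examples above):

```graphql
# Raw per-trade data points (illustrative).
type TradeData @entity(timeseries: true) {
  id: Int8!
  timestamp: Timestamp!
  amount: BigDecimal!
  priceUSD: BigDecimal!
}

# Hourly totals using an expression argument: USD value = priceUSD * amount.
type TradeStats @aggregation(intervals: ["hour"], source: "TradeData") {
  id: Int8!
  timestamp: Timestamp!
  totalValueUSD: BigDecimal! @aggregate(fn: "sum", arg: "priceUSD * amount")
}
```

Because the multiplication happens in the database at aggregation time, the mapping only needs to store each raw `TradeData` point.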
+ +### Conclusion + +Implementing timeseries and aggregations in subgraphs is a best practice for projects dealing with time-based data. This approach: + +- Enhances Performance: Speeds up indexing and querying by reducing data processing overhead. +- Simplifies Development: Eliminates the need for manual aggregation logic in mappings. +- Scales Efficiently: Handles large volumes of data without compromising on speed or responsiveness. + +By adopting this pattern, developers can build more efficient and scalable subgraphs, providing faster and more reliable data access to end-users. To learn more about implementing timeseries and aggregations, refer to the [Timeseries and Aggregations Readme](https://github.com/graphprotocol/graph-node/blob/master/docs/aggregations.md) and consider experimenting with this feature in your subgraphs. + +## Subgraph Best Practices 1-6 + +1. [Improve Query Speed with Subgraph Pruning](/cookbook/pruning/) + +2. [Improve Indexing and Query Responsiveness by Using @derivedFrom](/cookbook/derivedfrom/) + +3. [Improve Indexing and Query Performance by Using Immutable Entities and Bytes as IDs](/cookbook/immutable-entities-bytes-as-ids/) + +4. [Improve Indexing Speed by Avoiding `eth_calls`](/cookbook/avoid-eth-calls/) + +5. [Simplify and Optimize with Timeseries and Aggregations](/cookbook/timeseries/) + +6. [Use Grafting for Quick Hotfix Deployment](/cookbook/grafting-hotfix/) diff --git a/website/pages/uk/cookbook/upgrading-a-subgraph.mdx b/website/pages/uk/cookbook/upgrading-a-subgraph.mdx deleted file mode 100644 index a546f02c0800..000000000000 --- a/website/pages/uk/cookbook/upgrading-a-subgraph.mdx +++ /dev/null @@ -1,156 +0,0 @@ ---- -title: Upgrading an Existing Subgraph to The Graph Network ---- - -## Introduction - -This is a guide on how to upgrade your subgraph from the hosted service to The Graph's decentralized network. 
Over 1,000 subgraphs have successfully upgraded to The Graph Network including projects like Snapshot, Loopring, Audius, Premia, Livepeer, Uma, Curve, Lido, and many more! - -The process of upgrading is quick and your subgraphs will forever benefit from the reliability and performance that you can only get on The Graph Network. - -### Prerequisites - -- You have a subgraph deployed on the hosted service. - -## Upgrading an Existing Subgraph to The Graph Network - - - -If you are logged in to the hosted service, you can access a simple flow to upgrade your subgraphs from [your dashboard](https://thegraph.com/hosted-service/dashboard), or from an individual subgraph page. - -> This process typically takes less than five minutes. - -1. Select the subgraph(s) you want to upgrade. -2. Connect or enter the receiving wallet (the wallet that will become the owner of the subgraph). -3. Click the "Upgrade" button. - -That's it! Your subgraphs will be deployed to Subgraph Studio, and published on The Graph Network. You can access the [Subgraph Studio](https://thegraph.com/studio/) to manage your subgraphs, logging in with the wallet specified during the upgrade process. - -You'll be able to view your subgraphs live on the decentralized network via [Graph Explorer](https://thegraph.com/explorer). - -### What next? - -When your subgraph is upgraded, it will automatically be indexed by the upgrade indexer. If the indexed chain is [fully supported by The Graph Network](/developing/supported-networks), you can add some GRT as "signal", to attract more indexers. It is recommended to curate your subgraph with at least 3,000 GRT to attract 2-3 Indexers for higher quality of service. - -You can start to query your subgraph right away on The Graph Network, once you have generated an API key. - -### Create an API key - -You can generate an API key in Subgraph Studio [here](https://thegraph.com/studio/apikeys/). 
-
-![API key creation page](/img/api-image.png)
-
-You can use this API key to query subgraphs on The Graph Network. All users start on the Free Plan, which includes 100,000 free queries per month. Developers can sign up for the Growth Plan by connecting a credit or debit card, or by depositing GRT to the Subgraph Studio billing system.
-
-> Note: see the [billing documentation](../billing) for more information on plans, and on managing your billing on Subgraph Studio.
-
-### Securing your API key
-
-It is recommended that you secure your API key by limiting its usage in two ways:
-
-1. Authorized Subgraphs
-2. Authorized Domain
-
-You can secure your API key [here](https://thegraph.com/studio/apikeys/).
-
-![Subgraph lockdown page](/img/subgraph-lockdown.png)
-
-### Querying your subgraph on the decentralized network
-
-Now you can check the indexing status of the Indexers on the network in Graph Explorer (example [here](https://thegraph.com/explorer/subgraphs/Dtj2HicXKpoUjNB7ffdBkMwt3L9Sz3cbENd67AdHu6Vb?view=Indexers&chain=arbitrum-one)). The green line at the top indicates that, at the time of posting, 7 Indexers had successfully indexed that subgraph. The Indexers tab also shows which Indexers picked up your subgraph.
-
-![Rocket Pool subgraph](/img/rocket-pool-subgraph.png)
-
-As soon as the first Indexer has fully indexed your subgraph, you can start to query it on the decentralized network. To retrieve the query URL for your subgraph, copy it by clicking the symbol next to the query URL. You will see something like this:
-
-`https://gateway.thegraph.com/api/[api-key]/subgraphs/id/S9ihna8D733WTEShJ1KctSTCvY1VJ7gdVwhUujq4Ejo`
-
-Important: Make sure to replace `[api-key]` with an actual API key generated in the section above.
-
-You can now use that Query URL in your dapp to send your GraphQL requests.
-
-Congratulations! You are now a pioneer of decentralization!
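
Querying that URL from a dapp is an ordinary GraphQL-over-HTTP POST. A minimal sketch, assuming a global `fetch` (modern browsers or Node 18+); the helper names are illustrative, not part of any Graph SDK:

```javascript
// Build the gateway endpoint shown above; the API key and subgraph ID
// passed in are placeholders, not real credentials.
function buildGatewayUrl(apiKey, subgraphId) {
  return `https://gateway.thegraph.com/api/${apiKey}/subgraphs/id/${subgraphId}`;
}

// POST a GraphQL query to the gateway and return the parsed `data` payload.
async function querySubgraph(apiKey, subgraphId, query) {
  const response = await fetch(buildGatewayUrl(apiKey, subgraphId), {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ query }),
  });
  const { data, errors } = await response.json();
  if (errors) throw new Error(`Query failed: ${JSON.stringify(errors)}`);
  return data;
}
```

Any GraphQL client (Apollo, urql, plain `fetch`) can target the same URL in the same way.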
-
-> Note: Due to the distributed nature of the network, different Indexers may have indexed up to different blocks. To receive only fresh data, you can specify the minimum block an Indexer must have indexed in order to serve your query with the `{ number_gte: $minBlock }` block field argument, as shown in the example below:
-
-```graphql
-{
-  stakers(block: { number_gte: 14486109 }) {
-    id
-  }
-}
-```
-
-More information about the nature of the network and how to handle re-orgs is described in the documentation article [Distributed Systems](/querying/distributed-systems/).
-
-## Updating a Subgraph on the Network
-
-If you would like to update an existing subgraph on the network, you can do this by deploying a new version of your subgraph to Subgraph Studio using the Graph CLI.
-
-1. Make changes to your current subgraph.
-2. Run the following command, specifying the new version (e.g. v0.0.1, v0.0.2, etc.):
-
-```sh
-graph deploy --studio --version
-```
-
-3. Test the new version in Subgraph Studio by querying in the playground.
-4. Publish the new version on The Graph Network. Remember that this requires gas (as described in the section above).
-
-### Owner Update Fee: Deep Dive
-
-> Note: Curation on Arbitrum has a flat bonding curve. Learn more about Arbitrum [here](/arbitrum/arbitrum-faq/).
-
-An update requires GRT to be migrated from the old version of the subgraph to the new version. This means that for every update, a new bonding curve will be created (more on bonding curves [here](/network/curating#bonding-curve-101)).
-
-The new bonding curve charges the 2.5% curation tax on all GRT being migrated to the new version. The owner must pay 50% of this, or 1.25%. The other 1.25% is absorbed by all the curators as a fee. This incentive design is in place to prevent an owner of a subgraph from being able to drain all their curators' funds with recursive update calls.
If there is no curation activity, you will have to pay a minimum of 100 GRT in order to signal your own subgraph.
-
-Let's walk through an example (this only applies if your subgraph is being actively curated):
-
-- 100,000 GRT is signaled using auto-migrate on v1 of a subgraph
-- Owner updates to v2. 100,000 GRT is migrated to a new bonding curve, where 97,500 GRT are put into the new curve and 2,500 GRT are burned
-- The owner then has 1,250 GRT burned to pay half the fee. The owner must have this in their wallet before the update; otherwise, the update will not succeed. This happens in the same transaction as the update.
-
-_While this mechanism is currently live on the network, the community is discussing ways to reduce the cost of updates for subgraph developers._
-
-### Maintaining a Stable Version of a Subgraph
-
-If you're making a lot of changes to your subgraph, it is not a good idea to continually update it and front the update costs. Maintaining a stable and consistent version of your subgraph is critical, not only from the cost perspective but also so that Indexers can feel confident in their syncing times. Notify Indexers when you plan an update so that their syncing times are not impacted. Feel free to leverage the [#Indexers channel](https://discord.gg/JexvtHa7dq) on Discord to let Indexers know when you're versioning your subgraphs.
-
-Subgraphs are open APIs that external developers rely on. Open APIs need to follow strict standards so that they do not break external developers' applications. In The Graph Network, a subgraph developer must consider Indexers and how long it takes them to sync a new subgraph **as well as** other developers who are using their subgraphs.
-
-### Updating the Metadata of a Subgraph
-
-You can update the metadata of your subgraphs without having to publish a new version. The metadata includes the subgraph name, image, description, website URL, source code URL, and categories.
You can do this by updating your subgraph details in Subgraph Studio, where all applicable fields can be edited.
-
-Make sure **Update Subgraph Details in Explorer** is checked and click on **Save**. If this is checked, an on-chain transaction will be generated that updates subgraph details in the Explorer without having to publish a new version with a new deployment.
-
-## Best Practices for Deploying a Subgraph to The Graph Network
-
-1. Leverage an ENS name for subgraph development:
-
-- Set up your ENS name [here](https://app.ens.domains/)
-- Add your ENS name to your settings [here](https://thegraph.com/explorer/settings?view=display-name).
-
-2. The more filled out your profiles are, the better the chances for your subgraphs to be indexed and curated.
-
-## Deprecating a Subgraph on The Graph Network
-
-Follow the steps [here](/managing/transfer-and-deprecate-a-subgraph) to deprecate your subgraph and remove it from The Graph Network.
-
-## Querying a Subgraph + Billing on The Graph Network
-
-The hosted service was set up to allow developers to deploy their subgraphs without any restrictions.
-
-On The Graph Network, query fees have to be paid as a core part of the protocol's incentives. For more information on subscribing to APIs and paying the query fees, check out the billing documentation [here](/billing/).
-
-## Additional Resources
-
-If you're still confused, fear not!
Check out the following resources:
-
-- [The Graph Network Contracts](https://github.com/graphprotocol/contracts)
-- [Curation Contract](https://github.com/graphprotocol/contracts/blob/dev/contracts/curation/Curation.sol) - the underlying contract that the GNS wraps around
-  - Address - `0x8fe00a685bcb3b2cc296ff6ffeab10aca4ce1538`
-- [Subgraph Studio documentation](/deploying/subgraph-studio)
diff --git a/website/pages/ur/cookbook/base-testnet.mdx b/website/pages/ur/cookbook/base-testnet.mdx
deleted file mode 100644
index c68ed62c589c..000000000000
--- a/website/pages/ur/cookbook/base-testnet.mdx
+++ /dev/null
@@ -1,111 +0,0 @@
----
-title: Building Subgraphs on Base
----
-
-This guide will quickly take you through how to initialize, create, and deploy your subgraph on Base testnet.
-
-What you'll need:
-
-- A Base Sepolia testnet contract address
-- A crypto wallet (e.g. MetaMask or Coinbase Wallet)
-
-## Subgraph Studio
-
-### 1. Install the Graph CLI
-
-The Graph CLI (>=v0.41.0) is written in JavaScript and you will need to have either `npm` or `yarn` installed to use it.
-
-```sh
-# NPM
-npm install -g @graphprotocol/graph-cli
-
-# Yarn
-yarn global add @graphprotocol/graph-cli
-```
-
-### 2. Create your subgraph in Subgraph Studio
-
-Go to [Subgraph Studio](https://thegraph.com/studio/) and connect your crypto wallet.
-
-Once connected, click "Create a Subgraph", enter a name for your subgraph and click Create a Subgraph.
-
-### 3. Initialize your Subgraph
-
-> You can find specific commands for your subgraph in Subgraph Studio.
-
-Make sure that the graph-cli is updated to latest (above 0.41.0)
-
-```sh
-graph --version
-```
-
-Initialize your subgraph from an existing contract.
-
-```sh
-graph init --studio
-```
-
-Your subgraph slug is an identifier for your subgraph. The CLI tool will walk you through the steps for creating a subgraph, including:
-
-- Protocol: ethereum
-- Subgraph slug: ``
-- Directory to create the subgraph in: ``
-- Ethereum network: base-sepolia
-- Contract address: ``
-- Start block (optional)
-- Contract name: ``
-- Yes/no to indexing events (yes means your subgraph will be bootstrapped with entities in the schema and simple mappings for emitted events)
-
-### 4. Write your Subgraph
-
-> If emitted events are the only thing you want to index, then no additional work is required, and you can skip to the next step.
-
-The previous command creates a scaffold subgraph that you can use as a starting point for building your subgraph. When making changes to the subgraph, you will mainly work with three files:
-
-- Manifest (subgraph.yaml) - The manifest defines what datasources your subgraph will index. Make sure to add `base-sepolia` as the network name in the manifest file to deploy your subgraph on Base Sepolia.
-- Schema (schema.graphql) - The GraphQL schema defines what data you wish to retrieve from the subgraph.
-- AssemblyScript Mappings (mapping.ts) - This is the code that translates data from your data sources into the entities defined in the schema.
-
-If you want to index additional data, you will need to extend the manifest, schema and mappings.
-
-For more information on how to write your subgraph, see [Creating a Subgraph](/developing/creating-a-subgraph).
-
-### 5. Deploy to Subgraph Studio
-
-Before you can deploy your subgraph, you will need to authenticate with Subgraph Studio. You can do this by running the following command:
-
-```sh
-graph auth --studio
-```
-
-Next, enter your subgraph's directory.
-
-```sh
-cd
-```
-
-Build your subgraph with the following command:
-
-```sh
-graph codegen && graph build
-```
-
-Finally, you can deploy your subgraph using this command:
-
-```sh
-graph deploy --studio
-```
-
-### 6. Query your subgraph
-
-Once your subgraph is deployed, you can query it from your dapp using the `Development Query URL` in Subgraph Studio.
-
-Note: The Studio API is rate-limited, so it should preferably be used for development and testing.
-
-To learn more about querying data from your subgraph, see the [Querying a Subgraph](/querying/querying-the-graph) page.
diff --git a/website/pages/ur/cookbook/grafting-hotfix.mdx b/website/pages/ur/cookbook/grafting-hotfix.mdx
new file mode 100644
index 000000000000..4be0a0b07790
--- /dev/null
+++ b/website/pages/ur/cookbook/grafting-hotfix.mdx
@@ -0,0 +1,186 @@
+---
+title: Subgraph Best Practice 6 - Use Grafting for Quick Hotfix Deployment
+---
+
+## TLDR
+
+Grafting is a powerful feature in subgraph development that allows you to build and deploy new subgraphs while reusing the indexed data from existing ones.
+
+## Overview
+
+This feature enables quick deployment of hotfixes for critical issues, eliminating the need to re-index the entire subgraph from scratch. By preserving historical data, grafting minimizes downtime and ensures continuity in data services.
+
+## Benefits of Grafting for Hotfixes
+
+1. **Rapid Deployment**
+
+   - **Minimize Downtime**: When a subgraph encounters a critical error and stops indexing, grafting enables you to deploy a fix immediately without waiting for re-indexing.
+   - **Immediate Recovery**: The new subgraph continues from the last indexed block, ensuring that data services remain uninterrupted.
+
+2. **Data Preservation**
+
+   - **Reuse Historical Data**: Grafting copies the existing data from the base subgraph, so you don’t lose valuable historical records.
+ - **Consistency**: Maintains data continuity, which is crucial for applications relying on consistent historical data. + +3. **Efficiency** + - **Save Time and Resources**: Avoids the computational overhead of re-indexing large datasets. + - **Focus on Fixes**: Allows developers to concentrate on resolving issues rather than managing data recovery. + +## Best Practices When Using Grafting for Hotfixes + +1. **Initial Deployment Without Grafting** + + - **Start Clean**: Always deploy your initial subgraph without grafting to ensure that it’s stable and functions as expected. + - **Test Thoroughly**: Validate the subgraph’s performance to minimize the need for future hotfixes. + +2. **Implementing the Hotfix with Grafting** + + - **Identify the Issue**: When a critical error occurs, determine the block number of the last successfully indexed event. + - **Create a New Subgraph**: Develop a new subgraph that includes the hotfix. + - **Configure Grafting**: Use grafting to copy data up to the identified block number from the failed subgraph. + - **Deploy Quickly**: Publish the grafted subgraph to restore service as soon as possible. + +3. **Post-Hotfix Actions** + + - **Monitor Performance**: Ensure the grafted subgraph is indexing correctly and the hotfix resolves the issue. + - **Republish Without Grafting**: Once stable, deploy a new version of the subgraph without grafting for long-term maintenance. + > Note: Relying on grafting indefinitely is not recommended as it can complicate future updates and maintenance. + - **Update References**: Redirect any services or applications to use the new, non-grafted subgraph. + +4. **Important Considerations** + - **Careful Block Selection**: Choose the graft block number carefully to prevent data loss. + - **Tip**: Use the block number of the last correctly processed event. + - **Use Deployment ID**: Ensure you reference the Deployment ID of the base subgraph, not the Subgraph ID. 
+ - **Note**: The Deployment ID is the unique identifier for a specific subgraph deployment. + - **Feature Declaration**: Remember to declare grafting in the subgraph manifest under features. + +## Example: Deploying a Hotfix with Grafting + +Suppose you have a subgraph tracking a smart contract that has stopped indexing due to a critical error. Here’s how you can use grafting to deploy a hotfix. + +1. **Failed Subgraph Manifest (subgraph.yaml)** + + ```yaml + specVersion: 1.0.0 + schema: + file: ./schema.graphql + dataSources: + - kind: ethereum/contract + name: OldSmartContract + network: sepolia + source: + address: '0xOldContractAddress' + abi: Lock + startBlock: 5000000 + mapping: + kind: ethereum/events + apiVersion: 0.0.7 + language: wasm/assemblyscript + entities: + - Withdrawal + abis: + - name: Lock + file: ./abis/OldLock.json + eventHandlers: + - event: Withdrawal(uint256,uint256) + handler: handleOldWithdrawal + file: ./src/old-lock.ts + ``` + +2. **New Grafted Subgraph Manifest (subgraph.yaml)** + ```yaml + specVersion: 1.0.0 + schema: + file: ./schema.graphql + dataSources: + - kind: ethereum/contract + name: NewSmartContract + network: sepolia + source: + address: '0xNewContractAddress' + abi: Lock + startBlock: 6000001 # Block after the last indexed block + mapping: + kind: ethereum/events + apiVersion: 0.0.7 + language: wasm/assemblyscript + entities: + - Withdrawal + abis: + - name: Lock + file: ./abis/Lock.json + eventHandlers: + - event: Withdrawal(uint256,uint256) + handler: handleWithdrawal + file: ./src/lock.ts + features: + - grafting + graft: + base: QmBaseDeploymentID # Deployment ID of the failed subgraph + block: 6000000 # Last successfully indexed block + ``` + +**Explanation:** + +- **Data Source Update**: The new subgraph points to 0xNewContractAddress, which may be a fixed version of the smart contract. +- **Start Block**: Set to one block after the last successfully indexed block to avoid reprocessing the error. 
+- **Grafting Configuration**: + - **base**: Deployment ID of the failed subgraph. + - **block**: Block number where grafting should begin. + +3. **Deployment Steps** + + - **Update the Code**: Implement the hotfix in your mapping scripts (e.g., handleWithdrawal). + - **Adjust the Manifest**: As shown above, update the `subgraph.yaml` with grafting configurations. + - **Deploy the Subgraph**: + - Authenticate with the Graph CLI. + - Deploy the new subgraph using `graph deploy`. + +4. **Post-Deployment** + - **Verify Indexing**: Check that the subgraph is indexing correctly from the graft point. + - **Monitor Data**: Ensure that new data is being captured and the hotfix is effective. + - **Plan for Republish**: Schedule the deployment of a non-grafted version for long-term stability. + +## Warnings and Cautions + +While grafting is a powerful tool for deploying hotfixes quickly, there are specific scenarios where it should be avoided to maintain data integrity and ensure optimal performance. + +- **Incompatible Schema Changes**: If your hotfix requires altering the type of existing fields or removing fields from your schema, grafting is not suitable. Grafting expects the new subgraph’s schema to be compatible with the base subgraph’s schema. Incompatible changes can lead to data inconsistencies and errors because the existing data won’t align with the new schema. +- **Significant Mapping Logic Overhauls**: When the hotfix involves substantial modifications to your mapping logic—such as changing how events are processed or altering handler functions—grafting may not function correctly. The new logic might not be compatible with the data processed under the old logic, leading to incorrect data or failed indexing. +- **Deployments to The Graph Network**: Grafting is not recommended for subgraphs intended for The Graph’s decentralized network (mainnet). 
It can complicate indexing and may not be fully supported by all Indexers, potentially causing unexpected behavior or increased costs. For mainnet deployments, it’s safer to re-index the subgraph from scratch to ensure full compatibility and reliability. + +### Risk Management + +- **Data Integrity**: Incorrect block numbers can lead to data loss or duplication. +- **Testing**: Always test grafting in a development environment before deploying to production. + +## Conclusion + +Grafting is an effective strategy for deploying hotfixes in subgraph development, enabling you to: + +- **Quickly Recover** from critical errors without re-indexing. +- **Preserve Historical Data**, maintaining continuity for applications and users. +- **Ensure Service Availability** by minimizing downtime during critical fixes. + +However, it’s important to use grafting judiciously and follow best practices to mitigate risks. After stabilizing your subgraph with the hotfix, plan to deploy a non-grafted version to ensure long-term maintainability. + +## Additional Resources + +- **[Grafting Documentation](/cookbook/grafting/)**: Replace a Contract and Keep its History With Grafting +- **[Understanding Deployment IDs](/querying/querying-by-subgraph-id-vs-deployment-id/)**: Learn the difference between Deployment ID and Subgraph ID. + +By incorporating grafting into your subgraph development workflow, you can enhance your ability to respond to issues swiftly, ensuring that your data services remain robust and reliable. + +## Subgraph Best Practices 1-6 + +1. [Improve Query Speed with Subgraph Pruning](/cookbook/pruning/) + +2. [Improve Indexing and Query Responsiveness by Using @derivedFrom](/cookbook/derivedfrom/) + +3. [Improve Indexing and Query Performance by Using Immutable Entities and Bytes as IDs](/cookbook/immutable-entities-bytes-as-ids/) + +4. [Improve Indexing Speed by Avoiding `eth_calls`](/cookbook/avoid-eth-calls/) + +5. 
[Simplify and Optimize with Timeseries and Aggregations](/cookbook/timeseries/) + +6. [Use Grafting for Quick Hotfix Deployment](/cookbook/grafting-hotfix/) diff --git a/website/pages/ur/cookbook/timeseries.mdx b/website/pages/ur/cookbook/timeseries.mdx new file mode 100644 index 000000000000..88ee70005a6e --- /dev/null +++ b/website/pages/ur/cookbook/timeseries.mdx @@ -0,0 +1,194 @@ +--- +title: Subgraph Best Practice 5 - Simplify and Optimize with Timeseries and Aggregations +--- + +## TLDR + +Leveraging the new time-series and aggregations feature in subgraphs can significantly enhance both indexing speed and query performance. + +## Overview + +Timeseries and aggregations reduce data processing overhead and accelerate queries by offloading aggregation computations to the database and simplifying mapping code. This approach is particularly effective when handling large volumes of time-based data. + +## Benefits of Timeseries and Aggregations + +1. Improved Indexing Time + +- Less Data to Load: Mappings handle less data since raw data points are stored as immutable timeseries entities. +- Database-Managed Aggregations: Aggregations are automatically computed by the database, reducing the workload on the mappings. + +2. Simplified Mapping Code + +- No Manual Calculations: Developers no longer need to write complex aggregation logic in mappings. +- Reduced Complexity: Simplifies code maintenance and minimizes the potential for errors. + +3. Dramatically Faster Queries + +- Immutable Data: All timeseries data is immutable, enabling efficient storage and retrieval. +- Efficient Data Separation: Aggregates are stored separately from raw timeseries data, allowing queries to process significantly less data—often several orders of magnitude less. + +### Important Considerations + +- Immutable Data: Timeseries data cannot be altered once written, ensuring data integrity and simplifying indexing. 
+- Automatic ID and Timestamp Management: id and timestamp fields are automatically managed by graph-node, reducing potential errors. +- Efficient Data Storage: By separating raw data from aggregates, storage is optimized, and queries run faster. + +## How to Implement Timeseries and Aggregations + +### Defining Timeseries Entities + +A timeseries entity represents raw data points collected over time. It is defined with the `@entity(timeseries: true)` annotation. Key requirements: + +- Immutable: Timeseries entities are always immutable. +- Mandatory Fields: + - `id`: Must be of type `Int8!` and is auto-incremented. + - `timestamp`: Must be of type `Timestamp!` and is automatically set to the block timestamp. + +Example: + +```graphql +type Data @entity(timeseries: true) { + id: Int8! + timestamp: Timestamp! + price: BigDecimal! +} +``` + +### Defining Aggregation Entities + +An aggregation entity computes aggregated values from a timeseries source. It is defined with the `@aggregation` annotation. Key components: + +- Annotation Arguments: + - `intervals`: Specifies time intervals (e.g., `["hour", "day"]`). + +Example: + +```graphql +type Stats @aggregation(intervals: ["hour", "day"], source: "Data") { + id: Int8! + timestamp: Timestamp! + sum: BigDecimal! @aggregate(fn: "sum", arg: "price") +} +``` + +In this example, Stats aggregates the price field from Data over hourly and daily intervals, computing the sum. + +### Querying Aggregated Data + +Aggregations are exposed via query fields that allow filtering and retrieval based on dimensions and time intervals. + +Example: + +```graphql +{ + tokenStats( + interval: "hour" + where: { token: "0x1234567890abcdef", timestamp_gte: "1704164640000000", timestamp_lt: "1704251040000000" } + ) { + id + timestamp + token { + id + } + totalVolume + priceUSD + count + } +} +``` + +### Using Dimensions in Aggregations + +Dimensions are non-aggregated fields used to group data points. 
They enable aggregations based on specific criteria, such as a token in a financial application.
+
+Example:
+
+### Timeseries Entity
+
+```graphql
+type TokenData @entity(timeseries: true) {
+  id: Int8!
+  timestamp: Timestamp!
+  token: Token!
+  amount: BigDecimal!
+  priceUSD: BigDecimal!
+}
+```
+
+### Aggregation Entity with Dimension
+
+```graphql
+type TokenStats @aggregation(intervals: ["hour", "day"], source: "TokenData") {
+  id: Int8!
+  timestamp: Timestamp!
+  token: Token!
+  totalVolume: BigDecimal! @aggregate(fn: "sum", arg: "amount")
+  priceUSD: BigDecimal! @aggregate(fn: "last", arg: "priceUSD")
+  count: Int8! @aggregate(fn: "count", cumulative: true)
+}
+```
+
+- Dimension Field: token groups the data, so aggregates are computed per token.
+- Aggregates:
+  - totalVolume: Sum of amount.
+  - priceUSD: Last recorded priceUSD.
+  - count: Cumulative count of records.
+
+### Aggregation Functions and Expressions
+
+Supported aggregation functions:
+
+- sum
+- count
+- min
+- max
+- first
+- last
+
+### The `arg` in `@aggregate` can be
+
+- A field name from the timeseries entity.
+- An expression using fields and constants.
+
+### Examples of Aggregation Expressions
+
+- Sum Token Value: @aggregate(fn: "sum", arg: "priceUSD \* amount")
+- Maximum Positive Amount: @aggregate(fn: "max", arg: "greatest(amount0, amount1, 0)")
+- Conditional Sum: @aggregate(fn: "sum", arg: "case when amount0 > amount1 then amount0 else 0 end")
+
+Supported operators and functions include basic arithmetic (+, -, \*, /), comparison operators, logical operators (and, or, not), and SQL functions like greatest, least, coalesce, etc.
+
+### Query Parameters
+
+- interval: Specifies the time interval (e.g., "hour").
+- where: Filters based on dimensions and timestamp ranges.
+- timestamp_gte / timestamp_lt: Filters for start and end times (microseconds since epoch).
+
+### Notes
+
+- Sorting: Results are automatically sorted by timestamp and id in descending order.
+- Current Data: An optional current argument can include the current, partially filled interval. + +### Conclusion + +Implementing timeseries and aggregations in subgraphs is a best practice for projects dealing with time-based data. This approach: + +- Enhances Performance: Speeds up indexing and querying by reducing data processing overhead. +- Simplifies Development: Eliminates the need for manual aggregation logic in mappings. +- Scales Efficiently: Handles large volumes of data without compromising on speed or responsiveness. + +By adopting this pattern, developers can build more efficient and scalable subgraphs, providing faster and more reliable data access to end-users. To learn more about implementing timeseries and aggregations, refer to the [Timeseries and Aggregations Readme](https://github.com/graphprotocol/graph-node/blob/master/docs/aggregations.md) and consider experimenting with this feature in your subgraphs. + +## Subgraph Best Practices 1-6 + +1. [Improve Query Speed with Subgraph Pruning](/cookbook/pruning/) + +2. [Improve Indexing and Query Responsiveness by Using @derivedFrom](/cookbook/derivedfrom/) + +3. [Improve Indexing and Query Performance by Using Immutable Entities and Bytes as IDs](/cookbook/immutable-entities-bytes-as-ids/) + +4. [Improve Indexing Speed by Avoiding `eth_calls`](/cookbook/avoid-eth-calls/) + +5. [Simplify and Optimize with Timeseries and Aggregations](/cookbook/timeseries/) + +6. 
[Use Grafting for Quick Hotfix Deployment](/cookbook/grafting-hotfix/) diff --git a/website/pages/ur/cookbook/upgrading-a-subgraph.mdx b/website/pages/ur/cookbook/upgrading-a-subgraph.mdx deleted file mode 100644 index be1fc0a5da9e..000000000000 --- a/website/pages/ur/cookbook/upgrading-a-subgraph.mdx +++ /dev/null @@ -1,156 +0,0 @@ ---- -title: Upgrading an Existing Subgraph to The Graph Network ---- - -## Introduction - -This is a guide on how to upgrade your subgraph from the hosted service to The Graph's decentralized network. Over 1,000 subgraphs have successfully upgraded to The Graph Network, including projects like Snapshot, Loopring, Audius, Premia, Livepeer, Uma, Curve, Lido, and many more! - -The process of upgrading is quick and your subgraphs will forever benefit from the reliability and performance that you can only get on The Graph Network. - -### Prerequisites - -- You have a subgraph deployed on the hosted service. - -## Upgrading an Existing Subgraph to The Graph Network - - - -If you are logged in to the hosted service, you can access a simple flow to upgrade your subgraphs from [your dashboard](https://thegraph.com/hosted-service/dashboard), or from an individual subgraph page. - -> This process typically takes less than five minutes. - -1. Select the subgraph(s) you want to upgrade. -2. Connect or enter the receiving wallet (the wallet that will become the owner of the subgraph). -3. Click the "Upgrade" button. - -That's it! Your subgraphs will be deployed to Subgraph Studio, and published on The Graph Network. You can access the [Subgraph Studio](https://thegraph.com/studio/) to manage your subgraphs, logging in with the wallet specified during the upgrade process. - -You'll be able to view your subgraphs live on the decentralized network via [Graph Explorer](https://thegraph.com/explorer). - -### What next? - -When your subgraph is upgraded, it will automatically be indexed by the upgrade indexer.
If the indexed chain is [fully supported by The Graph Network](/developing/supported-networks), you can add some GRT as "signal", to attract more indexers. It is recommended to curate your subgraph with at least 3,000 GRT to attract 2-3 Indexers for higher quality of service. - -You can start to query your subgraph right away on The Graph Network, once you have generated an API key. - -### Create an API key - -You can generate an API key in Subgraph Studio [here](https://thegraph.com/studio/apikeys/). - -![API key creation page](/img/api-image.png) - -You can use this API key to query subgraphs on The Graph Network. All users start on the Free Plan, which includes 100,000 free queries per month. Developers can sign up for the Growth Plan by connecting a credit or debit card, or by depositing GRT to Subgraph Studio billing system. - -> Note: see the [billing documentation](../billing) for more information on plans, and on managing your billing on Subgraph Studio. - -### Securing your API key - -It is recommended that you secure the API by limiting its usage in two ways: - -1. Authorized Subgraphs -2. Authorized Domain - -You can secure your API key [here](https://thegraph.com/studio/apikeys/). - -![Subgraph lockdown page](/img/subgraph-lockdown.png) - -### Querying your subgraph on the decentralized network - -Now you can check the indexing status of the Indexers on the network in Graph Explorer (example [here](https://thegraph.com/explorer/subgraphs/Dtj2HicXKpoUjNB7ffdBkMwt3L9Sz3cbENd67AdHu6Vb?view=Indexers&chain=arbitrum-one)). The green line at the top indicates that at the time of posting 7 Indexers successfully indexed that subgraph. Also in the Indexer tab you can see which Indexers picked up your subgraph.
- -![Rocket Pool subgraph](/img/rocket-pool-subgraph.png) - -As soon as the first Indexer has fully indexed your subgraph you can start to query the subgraph on the decentralized network. In order to retrieve the query URL for your subgraph, you can copy/paste it by clicking on the symbol next to the query URL. You will see something like this: - -`https://gateway.thegraph.com/api/[api-key]/subgraphs/id/S9ihna8D733WTEShJ1KctSTCvY1VJ7gdVwhUujq4Ejo` - -Important: Make sure to replace `[api-key]` with an actual API key generated in the section above. - -You can now use that Query URL in your dapp to send your GraphQL requests to. - -Congratulations! You are now a pioneer of decentralization! - -> Note: Due to the distributed nature of the network it might be the case that different Indexers have indexed up to different blocks. In order to only receive fresh data you can specify the minimum block an Indexer has to have indexed in order to serve your query with the block: `{ number_gte: $minBlock }` field argument as shown in the example below: - -```graphql -{ - stakers(block: { number_gte: 14486109 }) { - id - } -} -``` - -More information about the nature of the network and how to handle re-orgs are described in the documentation article [Distributed Systems](/querying/distributed-systems/). - -## Updating a Subgraph on the Network - -If you would like to update an existing subgraph on the network, you can do this by deploying a new version of your subgraph to Subgraph Studio using the Graph CLI. - -1. Make changes to your current subgraph. -2. Deploy the following and specify the new version in the command (eg. v0.0.1, v0.0.2, etc): - -```sh -graph deploy --studio --version -``` - -3. Test the new version in Subgraph Studio by querying in the playground -4. Publish the new version on The Graph Network. Remember that this requires gas (as described in the section above).
- -### Owner Update Fee: Deep Dive - -> Note: Curation on Arbitrum has a flat bonding curve. Learn more about Arbitrum [here](/arbitrum/arbitrum-faq/). - -An update requires GRT to be migrated from the old version of the subgraph to the new version. This means that for every update, a new bonding curve will be created (more on bonding curves [here](/network/curating#bonding-curve-101)). - -The new bonding curve charges the 2.5% curation tax on all GRT being migrated to the new version. The owner must pay 50% of this, or 1.25%. The other 1.25% is absorbed by all the curators as a fee. This incentive design is in place to prevent the owner of a subgraph from being able to drain all of their curators' funds with recursive update calls. If there is no curation activity, you will have to pay a minimum of 100 GRT in order to signal your own subgraph. - -Let's make an example; this is only the case if your subgraph is being actively curated on: - -- 100,000 GRT is signaled using auto-migrate on v1 of a subgraph -- Owner updates to v2. 100,000 GRT is migrated to a new bonding curve, where 97,500 GRT get put into the new curve and 2,500 GRT is burned -- The owner then burns 1,250 GRT to pay for half the fee. The owner must have this in their wallet before the update, otherwise, the update will not succeed. This happens in the same transaction as the update.
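The arithmetic in this example can be sketched as a quick calculation. This is a hypothetical helper (not part of any Graph tooling), assuming the 2.5% curation tax implied by the 97,500 / 2,500 / 1,250 figures above:

```python
# Worked version of the owner update fee example.
# Integer GRT amounts and basis points keep the arithmetic exact.

CURATION_TAX_BPS = 250  # 2.5% curation tax, in basis points

def update_tax_split(migrated_grt: int) -> dict:
    """Split the curation tax charged when GRT migrates to a new bonding curve."""
    burned = migrated_grt * CURATION_TAX_BPS // 10_000  # total 2.5% tax
    owner_pays = burned // 2        # owner covers half (1.25%) from their wallet
    to_new_curve = migrated_grt - burned                # GRT put into the new curve
    return {"to_new_curve": to_new_curve, "burned": burned, "owner_pays": owner_pays}

split = update_tax_split(100_000)
print(split)  # {'to_new_curve': 97500, 'burned': 2500, 'owner_pays': 1250}
```

Running this with the 100,000 GRT signal from the example reproduces the figures above: 97,500 GRT lands on the new curve, 2,500 GRT is burned, and the owner pays 1,250 GRT.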
- -_While this mechanism is currently live on the network, the community is discussing ways to reduce the cost of updates for subgraph developers._ - -### Maintaining a Stable Version of a Subgraph - -If you're making a lot of changes to your subgraph, it is not a good idea to continuously update it and front the update costs. Maintaining a stable and consistent version of your subgraph is critical, not only from a cost perspective but also so that Indexers can feel confident in their syncing times. Indexers should be flagged when you plan for an update so that Indexer syncing times do not get impacted. Feel free to leverage the [#Indexers channel](https://discord.gg/JexvtHa7dq) on Discord to let Indexers know when you're versioning your subgraphs. - -Subgraphs are open APIs that external developers are leveraging. Open APIs need to follow strict standards so that they do not break external developers' applications. In The Graph Network, a subgraph developer must consider Indexers and how long it takes them to sync a new subgraph **as well as** other developers who are using their subgraphs. - -### Updating the Metadata of a Subgraph - -You can update the metadata of your subgraphs without having to publish a new version. The metadata includes the subgraph name, image, description, website URL, source code URL, and categories. Developers can do this by updating their subgraph details in Subgraph Studio where you can edit all applicable fields. - -Make sure that **Update Subgraph Details in Explorer** is checked and click **Save**. If that is checked, an on-chain transaction will be generated that updates the subgraph details in Explorer without having to publish a new version with a new deployment. - -## Best Practices for Deploying a Subgraph to The Graph Network - -1.
Leverage your ENS name for subgraph development: - -- Set up your ENS [here](https://app.ens.domains/) -- Add your ENS name to your settings [here](https://thegraph.com/explorer/settings?view=display-name). - -2. The more filled out your profile is, the better the chances for your subgraphs to be indexed and curated. - -## Deprecating a Subgraph on The Graph Network - -Follow the steps [here](/managing/transfer-and-deprecate-a-subgraph) to deprecate your subgraph and remove it from The Graph Network. - -## Querying a Subgraph + Billing on The Graph Network - -The hosted service was set up to allow developers to deploy their subgraphs without any restrictions. - -On The Graph Network, query fees have to be paid as a core part of the protocol's incentives. For more information on subscribing to APIs and paying the query fees, check out billing documentation [here](/billing/). - -## Additional Resources - -If you're still confused, fear not! Check out the following resources or watch our video guide on upgrading subgraphs to the decentralized network below: - - - -- [The Graph Network Contracts](https://github.com/graphprotocol/contracts) -- [Curation Contract](https://github.com/graphprotocol/contracts/blob/dev/contracts/curation/Curation.sol) - the underlying contract that the GNS wraps - - Address - `0x8fe00a685bcb3b2cc296ff6ffeab10aca4ce1538` -- [Subgraph Studio documentation](/deploying/subgraph-studio) diff --git a/website/pages/vi/cookbook/base-testnet.mdx b/website/pages/vi/cookbook/base-testnet.mdx deleted file mode 100644 index 4aa3b662be8f..000000000000 --- a/website/pages/vi/cookbook/base-testnet.mdx +++ /dev/null @@ -1,111 +0,0 @@ ---- -title: Building Subgraphs on Base ---- - -This guide will quickly take you through how to initialize, create, and deploy your subgraph on Base testnet. - -What you'll need: - -- A Base Sepolia testnet contract address -- A crypto wallet (e.g. MetaMask or Coinbase Wallet) - -## Subgraph Studio - -### 1.
Install the Graph CLI - -The Graph CLI (>=v0.41.0) is written in JavaScript and you will need to have either `npm` or `yarn` installed to use it. - -```sh -# NPM -npm install -g @graphprotocol/graph-cli - -# Yarn -yarn global add @graphprotocol/graph-cli -``` - -### 2. Create your subgraph in Subgraph Studio - -Go to [Subgraph Studio](https://thegraph.com/studio/) and connect your crypto wallet. - -Once connected, click "Create a Subgraph", enter a name for your subgraph and click Create a Subgraph. - -### 3. Initialize your Subgraph - -> You can find specific commands for your subgraph in Subgraph Studio. - -Make sure that the graph-cli is updated to the latest version (above 0.41.0) - -```sh -graph --version -``` - -Initialize your subgraph from an existing contract. - -```sh -graph init --studio -``` - -Your subgraph slug is an identifier for your subgraph. The CLI tool will walk you through the steps for creating a subgraph, including: - -- Protocol: ethereum -- Subgraph slug: `` -- Directory to create the subgraph in: `` -- Ethereum network: base-sepolia -- Contract address: `` -- Start block (optional) -- Contract name: `` -- Yes/no to indexing events (yes means your subgraph will be bootstrapped with entities in the schema and simple mappings for emitted events) - -### 3. Write your Subgraph - -> If emitted events are the only thing you want to index, then no additional work is required, and you can skip to the next step. - -The previous command creates a scaffold subgraph that you can use as a starting point for building your subgraph. When making changes to the subgraph, you will mainly work with three files: - -- Manifest (subgraph.yaml) - The manifest defines what datasources your subgraphs will index. Make sure to add `base-sepolia` as the network name in the manifest file to deploy your subgraph on Base Sepolia. -- Schema (schema.graphql) - The GraphQL schema defines what data you wish to retrieve from the subgraph.
-- AssemblyScript Mappings (mapping.ts) - This is the code that translates data from your data sources to the entities defined in the schema. - -If you want to index additional data, you will need to extend the manifest, schema and mappings. - -For more information on how to write your subgraph, see [Creating a Subgraph](/developing/creating-a-subgraph). - -### 4. Deploy to Subgraph Studio - -Before you can deploy your subgraph, you will need to authenticate with Subgraph Studio. You can do this by running the following command: - -Authenticate the subgraph on Studio - -``` -graph auth --studio -``` - -Next, enter your subgraph's directory. - -``` - cd -``` - -Build your subgraph with the following command: - -``` -graph codegen && graph build -``` - -Finally, you can deploy your subgraph using this command: - -``` -graph deploy --studio -``` - -### 5. Query your subgraph - -Once your subgraph is deployed, you can query it from your dapp using the `Development Query URL` in Subgraph Studio. - -Note: The Studio API is rate-limited, so it should preferably be used for development and testing. - -To learn more about querying data from your subgraph, see the [Querying a Subgraph](/querying/querying-the-graph) page. diff --git a/website/pages/vi/cookbook/grafting-hotfix.mdx b/website/pages/vi/cookbook/grafting-hotfix.mdx new file mode 100644 index 000000000000..4be0a0b07790 --- /dev/null +++ b/website/pages/vi/cookbook/grafting-hotfix.mdx @@ -0,0 +1,186 @@ +--- +title: Subgraph Best Practice 6 - Use Grafting for Quick Hotfix Deployment +--- + +## TLDR + +Grafting is a powerful feature in subgraph development that allows you to build and deploy new subgraphs while reusing the indexed data from existing ones. + +### Overview + +This feature enables quick deployment of hotfixes for critical issues, eliminating the need to re-index the entire subgraph from scratch. By preserving historical data, grafting minimizes downtime and ensures continuity in data services.
+ +## Benefits of Grafting for Hotfixes + +1. **Rapid Deployment** + + - **Minimize Downtime**: When a subgraph encounters a critical error and stops indexing, grafting enables you to deploy a fix immediately without waiting for re-indexing. + - **Immediate Recovery**: The new subgraph continues from the last indexed block, ensuring that data services remain uninterrupted. + +2. **Data Preservation** + + - **Reuse Historical Data**: Grafting copies the existing data from the base subgraph, so you don’t lose valuable historical records. + - **Consistency**: Maintains data continuity, which is crucial for applications relying on consistent historical data. + +3. **Efficiency** + - **Save Time and Resources**: Avoids the computational overhead of re-indexing large datasets. + - **Focus on Fixes**: Allows developers to concentrate on resolving issues rather than managing data recovery. + +## Best Practices When Using Grafting for Hotfixes + +1. **Initial Deployment Without Grafting** + + - **Start Clean**: Always deploy your initial subgraph without grafting to ensure that it’s stable and functions as expected. + - **Test Thoroughly**: Validate the subgraph’s performance to minimize the need for future hotfixes. + +2. **Implementing the Hotfix with Grafting** + + - **Identify the Issue**: When a critical error occurs, determine the block number of the last successfully indexed event. + - **Create a New Subgraph**: Develop a new subgraph that includes the hotfix. + - **Configure Grafting**: Use grafting to copy data up to the identified block number from the failed subgraph. + - **Deploy Quickly**: Publish the grafted subgraph to restore service as soon as possible. + +3. **Post-Hotfix Actions** + + - **Monitor Performance**: Ensure the grafted subgraph is indexing correctly and the hotfix resolves the issue. + - **Republish Without Grafting**: Once stable, deploy a new version of the subgraph without grafting for long-term maintenance. 
+ > Note: Relying on grafting indefinitely is not recommended as it can complicate future updates and maintenance. + - **Update References**: Redirect any services or applications to use the new, non-grafted subgraph. + +4. **Important Considerations** + - **Careful Block Selection**: Choose the graft block number carefully to prevent data loss. + - **Tip**: Use the block number of the last correctly processed event. + - **Use Deployment ID**: Ensure you reference the Deployment ID of the base subgraph, not the Subgraph ID. + - **Note**: The Deployment ID is the unique identifier for a specific subgraph deployment. + - **Feature Declaration**: Remember to declare grafting in the subgraph manifest under features. + +## Example: Deploying a Hotfix with Grafting + +Suppose you have a subgraph tracking a smart contract that has stopped indexing due to a critical error. Here’s how you can use grafting to deploy a hotfix. + +1. **Failed Subgraph Manifest (subgraph.yaml)** + + ```yaml + specVersion: 1.0.0 + schema: + file: ./schema.graphql + dataSources: + - kind: ethereum/contract + name: OldSmartContract + network: sepolia + source: + address: '0xOldContractAddress' + abi: Lock + startBlock: 5000000 + mapping: + kind: ethereum/events + apiVersion: 0.0.7 + language: wasm/assemblyscript + entities: + - Withdrawal + abis: + - name: Lock + file: ./abis/OldLock.json + eventHandlers: + - event: Withdrawal(uint256,uint256) + handler: handleOldWithdrawal + file: ./src/old-lock.ts + ``` + +2. 
**New Grafted Subgraph Manifest (subgraph.yaml)** + ```yaml + specVersion: 1.0.0 + schema: + file: ./schema.graphql + dataSources: + - kind: ethereum/contract + name: NewSmartContract + network: sepolia + source: + address: '0xNewContractAddress' + abi: Lock + startBlock: 6000001 # Block after the last indexed block + mapping: + kind: ethereum/events + apiVersion: 0.0.7 + language: wasm/assemblyscript + entities: + - Withdrawal + abis: + - name: Lock + file: ./abis/Lock.json + eventHandlers: + - event: Withdrawal(uint256,uint256) + handler: handleWithdrawal + file: ./src/lock.ts + features: + - grafting + graft: + base: QmBaseDeploymentID # Deployment ID of the failed subgraph + block: 6000000 # Last successfully indexed block + ``` + +**Explanation:** + +- **Data Source Update**: The new subgraph points to 0xNewContractAddress, which may be a fixed version of the smart contract. +- **Start Block**: Set to one block after the last successfully indexed block to avoid reprocessing the error. +- **Grafting Configuration**: + - **base**: Deployment ID of the failed subgraph. + - **block**: Block number where grafting should begin. + +3. **Deployment Steps** + + - **Update the Code**: Implement the hotfix in your mapping scripts (e.g., handleWithdrawal). + - **Adjust the Manifest**: As shown above, update the `subgraph.yaml` with grafting configurations. + - **Deploy the Subgraph**: + - Authenticate with the Graph CLI. + - Deploy the new subgraph using `graph deploy`. + +4. **Post-Deployment** + - **Verify Indexing**: Check that the subgraph is indexing correctly from the graft point. + - **Monitor Data**: Ensure that new data is being captured and the hotfix is effective. + - **Plan for Republish**: Schedule the deployment of a non-grafted version for long-term stability. 
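A small sanity check captures the key relationship in the manifests above: the new data source's `startBlock` must be exactly one block after the graft point. The block numbers here are the hypothetical values from the example, and the check itself is illustrative, not part of the Graph CLI:

```python
# Verify that a graft configuration resumes cleanly, with no block
# skipped and no block reprocessed.

graft_block = 6_000_000  # graft.block: last successfully indexed block
start_block = 6_000_001  # startBlock of the new data source

def graft_is_consistent(graft_block: int, start_block: int) -> bool:
    """True when indexing resumes exactly one block after the graft point."""
    return start_block == graft_block + 1

assert graft_is_consistent(graft_block, start_block)
print("graft point and startBlock are consistent")
```

Running a check like this before deploying helps catch the off-by-one mistakes that lead to data loss or duplicated events.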
+ +## Warnings and Cautions + +While grafting is a powerful tool for deploying hotfixes quickly, there are specific scenarios where it should be avoided to maintain data integrity and ensure optimal performance. + +- **Incompatible Schema Changes**: If your hotfix requires altering the type of existing fields or removing fields from your schema, grafting is not suitable. Grafting expects the new subgraph’s schema to be compatible with the base subgraph’s schema. Incompatible changes can lead to data inconsistencies and errors because the existing data won’t align with the new schema. +- **Significant Mapping Logic Overhauls**: When the hotfix involves substantial modifications to your mapping logic—such as changing how events are processed or altering handler functions—grafting may not function correctly. The new logic might not be compatible with the data processed under the old logic, leading to incorrect data or failed indexing. +- **Deployments to The Graph Network**: Grafting is not recommended for subgraphs intended for The Graph’s decentralized network (mainnet). It can complicate indexing and may not be fully supported by all Indexers, potentially causing unexpected behavior or increased costs. For mainnet deployments, it’s safer to re-index the subgraph from scratch to ensure full compatibility and reliability. + +### Risk Management + +- **Data Integrity**: Incorrect block numbers can lead to data loss or duplication. +- **Testing**: Always test grafting in a development environment before deploying to production. + +## Conclusion + +Grafting is an effective strategy for deploying hotfixes in subgraph development, enabling you to: + +- **Quickly Recover** from critical errors without re-indexing. +- **Preserve Historical Data**, maintaining continuity for applications and users. +- **Ensure Service Availability** by minimizing downtime during critical fixes. 
+ +However, it’s important to use grafting judiciously and follow best practices to mitigate risks. After stabilizing your subgraph with the hotfix, plan to deploy a non-grafted version to ensure long-term maintainability. + +## Additional Resources + +- **[Grafting Documentation](/cookbook/grafting/)**: Replace a Contract and Keep its History With Grafting +- **[Understanding Deployment IDs](/querying/querying-by-subgraph-id-vs-deployment-id/)**: Learn the difference between Deployment ID and Subgraph ID. + +By incorporating grafting into your subgraph development workflow, you can enhance your ability to respond to issues swiftly, ensuring that your data services remain robust and reliable. + +## Subgraph Best Practices 1-6 + +1. [Improve Query Speed with Subgraph Pruning](/cookbook/pruning/) + +2. [Improve Indexing and Query Responsiveness by Using @derivedFrom](/cookbook/derivedfrom/) + +3. [Improve Indexing and Query Performance by Using Immutable Entities and Bytes as IDs](/cookbook/immutable-entities-bytes-as-ids/) + +4. [Improve Indexing Speed by Avoiding `eth_calls`](/cookbook/avoid-eth-calls/) + +5. [Simplify and Optimize with Timeseries and Aggregations](/cookbook/timeseries/) + +6. [Use Grafting for Quick Hotfix Deployment](/cookbook/grafting-hotfix/) diff --git a/website/pages/vi/cookbook/timeseries.mdx b/website/pages/vi/cookbook/timeseries.mdx new file mode 100644 index 000000000000..88ee70005a6e --- /dev/null +++ b/website/pages/vi/cookbook/timeseries.mdx @@ -0,0 +1,194 @@ +--- +title: Subgraph Best Practice 5 - Simplify and Optimize with Timeseries and Aggregations +--- + +## TLDR + +Leveraging the new time-series and aggregations feature in subgraphs can significantly enhance both indexing speed and query performance. + +## Overview + +Timeseries and aggregations reduce data processing overhead and accelerate queries by offloading aggregation computations to the database and simplifying mapping code. 
This approach is particularly effective when handling large volumes of time-based data. + +## Benefits of Timeseries and Aggregations + +1. Improved Indexing Time + +- Less Data to Load: Mappings handle less data since raw data points are stored as immutable timeseries entities. +- Database-Managed Aggregations: Aggregations are automatically computed by the database, reducing the workload on the mappings. + +2. Simplified Mapping Code + +- No Manual Calculations: Developers no longer need to write complex aggregation logic in mappings. +- Reduced Complexity: Simplifies code maintenance and minimizes the potential for errors. + +3. Dramatically Faster Queries + +- Immutable Data: All timeseries data is immutable, enabling efficient storage and retrieval. +- Efficient Data Separation: Aggregates are stored separately from raw timeseries data, allowing queries to process significantly less data—often several orders of magnitude less. + +### Important Considerations + +- Immutable Data: Timeseries data cannot be altered once written, ensuring data integrity and simplifying indexing. +- Automatic ID and Timestamp Management: id and timestamp fields are automatically managed by graph-node, reducing potential errors. +- Efficient Data Storage: By separating raw data from aggregates, storage is optimized, and queries run faster. + +## How to Implement Timeseries and Aggregations + +### Defining Timeseries Entities + +A timeseries entity represents raw data points collected over time. It is defined with the `@entity(timeseries: true)` annotation. Key requirements: + +- Immutable: Timeseries entities are always immutable. +- Mandatory Fields: + - `id`: Must be of type `Int8!` and is auto-incremented. + - `timestamp`: Must be of type `Timestamp!` and is automatically set to the block timestamp. + +Example: + +```graphql +type Data @entity(timeseries: true) { + id: Int8! + timestamp: Timestamp! + price: BigDecimal! 
+} +``` + +### Defining Aggregation Entities + +An aggregation entity computes aggregated values from a timeseries source. It is defined with the `@aggregation` annotation. Key components: + +- Annotation Arguments: + - `intervals`: Specifies time intervals (e.g., `["hour", "day"]`). + +Example: + +```graphql +type Stats @aggregation(intervals: ["hour", "day"], source: "Data") { + id: Int8! + timestamp: Timestamp! + sum: BigDecimal! @aggregate(fn: "sum", arg: "price") +} +``` + +In this example, Stats aggregates the price field from Data over hourly and daily intervals, computing the sum. + +### Querying Aggregated Data + +Aggregations are exposed via query fields that allow filtering and retrieval based on dimensions and time intervals. + +Example: + +```graphql +{ + tokenStats( + interval: "hour" + where: { token: "0x1234567890abcdef", timestamp_gte: "1704164640000000", timestamp_lt: "1704251040000000" } + ) { + id + timestamp + token { + id + } + totalVolume + priceUSD + count + } +} +``` + +### Using Dimensions in Aggregations + +Dimensions are non-aggregated fields used to group data points. They enable aggregations based on specific criteria, such as a token in a financial application. + +Example: + +### Timeseries Entity + +```graphql +type TokenData @entity(timeseries: true) { + id: Int8! + timestamp: Timestamp! + token: Token! + amount: BigDecimal! + priceUSD: BigDecimal! +} +``` + +### Aggregation Entity with Dimension + +```graphql +type TokenStats @aggregation(intervals: ["hour", "day"], source: "TokenData") { + id: Int8! + timestamp: Timestamp! + token: Token! + totalVolume: BigDecimal! @aggregate(fn: "sum", arg: "amount") + priceUSD: BigDecimal! @aggregate(fn: "last", arg: "priceUSD") + count: Int8! @aggregate(fn: "count", cumulative: true) +} +``` + +- Dimension Field: token groups the data, so aggregates are computed per token. +- Aggregates: + - totalVolume: Sum of amount. + - priceUSD: Last recorded priceUSD. 
+ - count: Cumulative count of records. + +### Aggregation Functions and Expressions + +Supported aggregation functions: + +- sum +- count +- min +- max +- first +- last + +### The arg in @aggregate can be + +- A field name from the timeseries entity. +- An expression using fields and constants. + +### Examples of Aggregation Expressions + +- Sum Token Value: @aggregate(fn: "sum", arg: "priceUSD \* amount") +- Maximum Positive Amount: @aggregate(fn: "max", arg: "greatest(amount0, amount1, 0)") +- Conditional Sum: @aggregate(fn: "sum", arg: "case when amount0 > amount1 then amount0 else 0 end") + +Supported operators and functions include basic arithmetic (+, -, \*, /), comparison operators, logical operators (and, or, not), and SQL functions like greatest, least, coalesce, etc. + +### Query Parameters + +- interval: Specifies the time interval (e.g., "hour"). +- where: Filters based on dimensions and timestamp ranges. +- timestamp_gte / timestamp_lt: Filters for start and end times (microseconds since epoch). + +### Notes + +- Sorting: Results are automatically sorted by timestamp and id in descending order. +- Current Data: An optional current argument can include the current, partially filled interval. + +### Conclusion + +Implementing timeseries and aggregations in subgraphs is a best practice for projects dealing with time-based data. This approach: + +- Enhances Performance: Speeds up indexing and querying by reducing data processing overhead. +- Simplifies Development: Eliminates the need for manual aggregation logic in mappings. +- Scales Efficiently: Handles large volumes of data without compromising on speed or responsiveness. + +By adopting this pattern, developers can build more efficient and scalable subgraphs, providing faster and more reliable data access to end-users.
To learn more about implementing timeseries and aggregations, refer to the [Timeseries and Aggregations Readme](https://github.com/graphprotocol/graph-node/blob/master/docs/aggregations.md) and consider experimenting with this feature in your subgraphs. + +## Subgraph Best Practices 1-6 + +1. [Improve Query Speed with Subgraph Pruning](/cookbook/pruning/) + +2. [Improve Indexing and Query Responsiveness by Using @derivedFrom](/cookbook/derivedfrom/) + +3. [Improve Indexing and Query Performance by Using Immutable Entities and Bytes as IDs](/cookbook/immutable-entities-bytes-as-ids/) + +4. [Improve Indexing Speed by Avoiding `eth_calls`](/cookbook/avoid-eth-calls/) + +5. [Simplify and Optimize with Timeseries and Aggregations](/cookbook/timeseries/) + +6. [Use Grafting for Quick Hotfix Deployment](/cookbook/grafting-hotfix/) diff --git a/website/pages/vi/cookbook/upgrading-a-subgraph.mdx b/website/pages/vi/cookbook/upgrading-a-subgraph.mdx deleted file mode 100644 index 81de1ea90ad4..000000000000 --- a/website/pages/vi/cookbook/upgrading-a-subgraph.mdx +++ /dev/null @@ -1,156 +0,0 @@ ---- -title: Upgrading an Existing Subgraph to The Graph Network ---- - -## Introduction - -This is a guide on how to upgrade your subgraph from the hosted service to The Graph's decentralized network. Over 1,000 subgraphs have successfully upgraded to The Graph Network including projects like Snapshot, Loopring, Audius, Premia, Livepeer, Uma, Curve, Lido, and many more! - -The process of upgrading is quick and your subgraphs will forever benefit from the reliability and performance that you can only get on The Graph Network. - -### Prerequisites - -- You have a subgraph deployed on the hosted service. - -## Upgrading an Existing Subgraph to The Graph Network - - - -If you are logged in to the hosted service, you can access a simple flow to upgrade your subgraphs from [your dashboard](https://thegraph.com/hosted-service/dashboard), or from an individual subgraph page.
- -> This process typically takes less than five minutes. - -1. Select the subgraph(s) you want to upgrade. -2. Connect or enter the receiving wallet (the wallet that will become the owner of the subgraph). -3. Click the "Upgrade" button. - -That's it! Your subgraphs will be deployed to Subgraph Studio, and published on The Graph Network. You can access the [Subgraph Studio](https://thegraph.com/studio/) to manage your subgraphs, logging in with the wallet specified during the upgrade process. - -You'll be able to view your subgraphs live on the decentralized network via [Graph Explorer](https://thegraph.com/explorer). - -### What next? - -When your subgraph is upgraded, it will automatically be indexed by the upgrade indexer. If the indexed chain is [fully supported by The Graph Network](/developing/supported-networks), you can add some GRT as "signal" to attract more Indexers. It is recommended to curate your subgraph with at least 3,000 GRT to attract 2-3 Indexers for a higher quality of service. - -Once you have generated an API key, you can start querying your subgraph on The Graph Network right away. - -### Create an API key - -You can generate an API key in Subgraph Studio [here](https://thegraph.com/studio/apikeys/). - -![API key creation page](/img/api-image.png) - -You can use this API key to query subgraphs on The Graph Network. All users start on the Free Plan, which includes 100,000 free queries per month. Developers can sign up for the Growth Plan by connecting a credit or debit card, or by depositing GRT into the Subgraph Studio billing system. - -> Note: see the [billing documentation](../billing) for more information on plans, and on managing your billing in Subgraph Studio. - -### Securing your API key - -It is recommended that you secure the API key by limiting its usage in two ways: - -1. Authorized Subgraphs -2. Authorized Domain - -You can secure your API key [here](https://thegraph.com/studio/apikeys/).
- -![Subgraph lockdown page](/img/subgraph-lockdown.png) - -### Querying your subgraph on the decentralized network - -Now you can check the indexing status of the Indexers on the network in Graph Explorer (example [here](https://thegraph.com/explorer/subgraphs/Dtj2HicXKpoUjNB7ffdBkMwt3L9Sz3cbENd67AdHu6Vb?view=Indexers&chain=arbitrum-one)). The green line at the top indicates that, at the time of posting, 7 Indexers had successfully indexed that subgraph. In the Indexers tab, you can also see which Indexers picked up your subgraph. - -![Rocket Pool subgraph](/img/rocket-pool-subgraph.png) - -As soon as the first Indexer has fully indexed your subgraph, you can start to query the subgraph on the decentralized network. To retrieve the query URL for your subgraph, copy it by clicking on the symbol next to the query URL. You will see something like this: - -`https://gateway.thegraph.com/api/[api-key]/subgraphs/id/S9ihna8D733WTEShJ1KctSTCvY1VJ7gdVwhUujq4Ejo` - -Important: Make sure to replace `[api-key]` with an actual API key generated in the section above. - -You can now use that Query URL in your dapp to send your GraphQL requests. - -Congratulations! You are now a pioneer of decentralization! - -> Note: Due to the distributed nature of the network, it might be the case that different Indexers have indexed up to different blocks. In order to only receive fresh data, you can specify the minimum block an Indexer has to have indexed in order to serve your query with the block: `{ number_gte: $minBlock }` field argument as shown in the example below: - -```graphql -{ - stakers(block: { number_gte: 14486109 }) { - id - } -} -``` - -More information about the nature of the network and how to handle re-orgs is described in the documentation article [Distributed Systems](/querying/distributed-systems/).
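Under the hood, a request to that Query URL is a plain GraphQL POST. Below is a minimal sketch that only assembles the endpoint and JSON body; the API key is a placeholder and no request is actually sent:

```python
import json

API_KEY = "your-api-key"  # placeholder: use a key generated in Subgraph Studio
SUBGRAPH_ID = "S9ihna8D733WTEShJ1KctSTCvY1VJ7gdVwhUujq4Ejo"
url = f"https://gateway.thegraph.com/api/{API_KEY}/subgraphs/id/{SUBGRAPH_ID}"

# Only accept responses from Indexers that have reached this block.
query = """
{
  stakers(block: { number_gte: 14486109 }) {
    id
  }
}
"""
body = json.dumps({"query": query})

# To actually send it, something like:
#   requests.post(url, data=body, headers={"Content-Type": "application/json"})
print(url)
```

Any HTTP client works here; the gateway expects a standard `{"query": "..."}` JSON payload.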
- -## Updating a Subgraph on the Network - -If you would like to update an existing subgraph on the network, you can do this by deploying a new version of your subgraph to Subgraph Studio using the Graph CLI. - -1. Make changes to your current subgraph. -2. Run the following command, specifying the new version (e.g. v0.0.1, v0.0.2, etc.): - -```sh -graph deploy --studio --version -``` - -3. Test the new version in Subgraph Studio by querying in the playground. -4. Publish the new version on The Graph Network. Remember that this requires gas (as described in the section above). - -### Owner Update Fee: Deep Dive - -> Note: Curation on Arbitrum has a flat bonding curve. Learn more about Arbitrum [here](/arbitrum/arbitrum-faq/). - -An update requires GRT to be migrated from the old version of the subgraph to the new version. This means that for every update, a new bonding curve will be created (more on bonding curves [here](/network/curating#bonding-curve-101)). - -The new bonding curve charges the 2.5% curation tax on all GRT being migrated to the new version. The owner must pay 50% of this, or 1.25%. The other 1.25% is absorbed by all the curators as a fee. This incentive design is in place to prevent an owner of a subgraph from being able to drain all their curators' funds with recursive update calls. If there is no curation activity, you will have to pay a minimum of 100 GRT in order to signal your own subgraph. - -Let's walk through an example (this only applies if your subgraph is being actively curated): - -- 100,000 GRT is signaled using auto-migrate on v1 of a subgraph -- Owner updates to v2. 100,000 GRT is migrated to a new bonding curve, where 97,500 GRT get put into the new curve and 2,500 GRT is burned -- The owner then has 1,250 GRT burned to pay for half the fee. The owner must have this in their wallet before the update, otherwise, the update will not succeed. This happens in the same transaction as the update.
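As a sanity check, the fee arithmetic of that example can be sketched in a few lines. The 2.5% migration tax and the 50/50 owner/curator split below are taken from the example's figures (100,000 GRT signaled, 2,500 burned, owner covers half), not from live protocol parameters:

```python
CURATION_TAX = 0.025   # tax on GRT migrated to the new bonding curve (example figure)
OWNER_SHARE = 0.5      # the owner covers half of the tax

signaled = 100_000                     # GRT auto-migrating from v1
burned = signaled * CURATION_TAX       # burned by the migration
into_new_curve = signaled - burned     # GRT that lands on the v2 curve
owner_cost = burned * OWNER_SHARE      # must be in the owner's wallet at update time

print(burned, into_new_curve, owner_cost)  # 2500.0 97500.0 1250.0
```

The owner's 1,250 GRT is burned in the same transaction as the update, which is why the wallet must hold it beforehand.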
- -_While this mechanism is currently live on the network, the community is discussing ways to reduce the cost of updates for subgraph developers._ - -### Maintaining a Stable Version of a Subgraph - -If you're making a lot of changes to your subgraph, it is not a good idea to continually update it and front the update costs. Maintaining a stable and consistent version of your subgraph is critical, not only from the cost perspective but also so that Indexers can feel confident in their syncing times. Indexers should be notified when you plan an update so that their syncing times are not impacted. Feel free to leverage the [#Indexers channel](https://discord.gg/JexvtHa7dq) on Discord to let Indexers know when you're versioning your subgraphs. - -Subgraphs are open APIs that external developers are leveraging. Open APIs need to follow strict standards so that they do not break external developers' applications. In The Graph Network, a subgraph developer must consider Indexers and how long it takes them to sync a new subgraph **as well as** other developers who are using their subgraphs. - -### Updating the Metadata of a Subgraph - -You can update the metadata of your subgraphs without having to publish a new version. The metadata includes the subgraph name, image, description, website URL, source code URL, and categories. Developers can do this by updating their subgraph details in Subgraph Studio, where all applicable fields can be edited. - -Make sure **Update Subgraph Details in Explorer** is checked and click on **Save**. If this is checked, an on-chain transaction will be generated that updates subgraph details in the Explorer without having to publish a new version with a new deployment. - -## Best Practices for Deploying a Subgraph to The Graph Network - -1.
Leveraging an ENS name for Subgraph Development: - -- Set up your ENS [here](https://app.ens.domains/) -- Add your ENS name to your settings [here](https://thegraph.com/explorer/settings?view=display-name). - -2. The more filled out your profiles are, the better the chances for your subgraphs to be indexed and curated. - -## Deprecating a Subgraph on The Graph Network - -Follow the steps [here](/managing/transfer-and-deprecate-a-subgraph) to deprecate your subgraph and remove it from The Graph Network. - -## Querying a Subgraph + Billing on The Graph Network - -The hosted service was set up to allow developers to deploy their subgraphs without any restrictions. - -On The Graph Network, query fees have to be paid as a core part of the protocol's incentives. For more information on subscribing to APIs and paying the query fees, check out billing documentation [here](/billing/). - -## Additional Resources - -If you're still confused, fear not! Check out the following resources or watch our video guide on upgrading subgraphs to the decentralized network below: - - - -- [The Graph Network Contracts](https://github.com/graphprotocol/contracts) -- [Curation Contract](https://github.com/graphprotocol/contracts/blob/dev/contracts/curation/Curation.sol) - the underlying contract that the GNS wraps around - - Address - `0x8fe00a685bcb3b2cc296ff6ffeab10aca4ce1538` -- [Subgraph Studio documentation](/deploying/subgraph-studio) diff --git a/website/pages/yo/cookbook/base-testnet.mdx b/website/pages/yo/cookbook/base-testnet.mdx deleted file mode 100644 index 3a1d98a44103..000000000000 --- a/website/pages/yo/cookbook/base-testnet.mdx +++ /dev/null @@ -1,111 +0,0 @@ ---- -title: Building Subgraphs on Base ---- - -This guide will quickly take you through how to initialize, create, and deploy your subgraph on Base testnet. - -What you'll need: - -- A Base Sepolia testnet contract address -- A crypto wallet (e.g. MetaMask or Coinbase Wallet) - -## Subgraph Studio - -### 1. 
Install the Graph CLI - -The Graph CLI (>=v0.41.0) is written in JavaScript and you will need to have either `npm` or `yarn` installed to use it. - -```sh -# NPM -npm install -g @graphprotocol/graph-cli - -# Yarn -yarn global add @graphprotocol/graph-cli -``` - -### 2. Create your subgraph in Subgraph Studio - -Go to [Subgraph Studio](https://thegraph.com/studio/) and connect your crypto wallet. - -Once connected, click "Create a Subgraph", enter a name for your subgraph and click Create a Subgraph. - -### 3. Initialize your Subgraph - -> You can find specific commands for your subgraph in Subgraph Studio. - -Make sure that graph-cli is updated to the latest version (above 0.41.0): - -```sh -graph --version -``` - -Initialize your subgraph from an existing contract. - -```sh -graph init --studio -``` - -Your subgraph slug is an identifier for your subgraph. The CLI tool will walk you through the steps for creating a subgraph, including: - -- Protocol: ethereum -- Subgraph slug: `` -- Directory to create the subgraph in: `` -- Ethereum network: base-sepolia -- Contract address: `` -- Start block (optional) -- Contract name: `` -- Yes/no to indexing events (yes means your subgraph will be bootstrapped with entities in the schema and simple mappings for emitted events) - -### 4. Write your Subgraph - -> If emitted events are the only thing you want to index, then no additional work is required, and you can skip to the next step. - -The previous command creates a scaffold subgraph that you can use as a starting point for building your subgraph. When making changes to the subgraph, you will mainly work with three files: - -- Manifest (subgraph.yaml) - The manifest defines what data sources your subgraph will index. Make sure to add `base-sepolia` as the network name in the manifest file to deploy your subgraph on Base Sepolia. -- Schema (schema.graphql) - The GraphQL schema defines what data you wish to retrieve from the subgraph.
-- AssemblyScript Mappings (mapping.ts) - This is the code that translates data from your data sources to the entities defined in the schema. - -If you want to index additional data, you will need to extend the manifest, schema and mappings. - -For more information on how to write your subgraph, see [Creating a Subgraph](/developing/creating-a-subgraph). - -### 5. Deploy to Subgraph Studio - -Before you can deploy your subgraph, you will need to authenticate with Subgraph Studio. You can do this by running the following command: - -Authenticate the subgraph in Studio: - -``` -graph auth --studio -``` - -Next, enter your subgraph's directory. - -``` -cd -``` - -Build your subgraph with the following command: - -``` -graph codegen && graph build -``` - -Finally, you can deploy your subgraph using this command: - -``` -graph deploy --studio -``` - -### 6. Query your subgraph - -Once your subgraph is deployed, you can query it from your dapp using the `Development Query URL` in Subgraph Studio. - -Note: The Studio API is rate-limited, so it should preferably be used for development and testing. - -To learn more about querying data from your subgraph, see the [Querying a Subgraph](/querying/querying-the-graph) page. diff --git a/website/pages/yo/cookbook/grafting-hotfix.mdx b/website/pages/yo/cookbook/grafting-hotfix.mdx new file mode 100644 index 000000000000..4be0a0b07790 --- /dev/null +++ b/website/pages/yo/cookbook/grafting-hotfix.mdx @@ -0,0 +1,186 @@ +--- +title: Subgraph Best Practice 6 - Use Grafting for Quick Hotfix Deployment +--- + +## TLDR + +Grafting is a powerful feature in subgraph development that allows you to build and deploy new subgraphs while reusing the indexed data from existing ones. + +### Overview + +This feature enables quick deployment of hotfixes for critical issues, eliminating the need to re-index the entire subgraph from scratch. By preserving historical data, grafting minimizes downtime and ensures continuity in data services.
+ +## Benefits of Grafting for Hotfixes + +1. **Rapid Deployment** + + - **Minimize Downtime**: When a subgraph encounters a critical error and stops indexing, grafting enables you to deploy a fix immediately without waiting for re-indexing. + - **Immediate Recovery**: The new subgraph continues from the last indexed block, ensuring that data services remain uninterrupted. + +2. **Data Preservation** + + - **Reuse Historical Data**: Grafting copies the existing data from the base subgraph, so you don’t lose valuable historical records. + - **Consistency**: Maintains data continuity, which is crucial for applications relying on consistent historical data. + +3. **Efficiency** + - **Save Time and Resources**: Avoids the computational overhead of re-indexing large datasets. + - **Focus on Fixes**: Allows developers to concentrate on resolving issues rather than managing data recovery. + +## Best Practices When Using Grafting for Hotfixes + +1. **Initial Deployment Without Grafting** + + - **Start Clean**: Always deploy your initial subgraph without grafting to ensure that it’s stable and functions as expected. + - **Test Thoroughly**: Validate the subgraph’s performance to minimize the need for future hotfixes. + +2. **Implementing the Hotfix with Grafting** + + - **Identify the Issue**: When a critical error occurs, determine the block number of the last successfully indexed event. + - **Create a New Subgraph**: Develop a new subgraph that includes the hotfix. + - **Configure Grafting**: Use grafting to copy data up to the identified block number from the failed subgraph. + - **Deploy Quickly**: Publish the grafted subgraph to restore service as soon as possible. + +3. **Post-Hotfix Actions** + + - **Monitor Performance**: Ensure the grafted subgraph is indexing correctly and the hotfix resolves the issue. + - **Republish Without Grafting**: Once stable, deploy a new version of the subgraph without grafting for long-term maintenance. 
+ > Note: Relying on grafting indefinitely is not recommended as it can complicate future updates and maintenance. + - **Update References**: Redirect any services or applications to use the new, non-grafted subgraph. + +4. **Important Considerations** + - **Careful Block Selection**: Choose the graft block number carefully to prevent data loss. + - **Tip**: Use the block number of the last correctly processed event. + - **Use Deployment ID**: Ensure you reference the Deployment ID of the base subgraph, not the Subgraph ID. + - **Note**: The Deployment ID is the unique identifier for a specific subgraph deployment. + - **Feature Declaration**: Remember to declare grafting in the subgraph manifest under features. + +## Example: Deploying a Hotfix with Grafting + +Suppose you have a subgraph tracking a smart contract that has stopped indexing due to a critical error. Here’s how you can use grafting to deploy a hotfix. + +1. **Failed Subgraph Manifest (subgraph.yaml)** + + ```yaml + specVersion: 1.0.0 + schema: + file: ./schema.graphql + dataSources: + - kind: ethereum/contract + name: OldSmartContract + network: sepolia + source: + address: '0xOldContractAddress' + abi: Lock + startBlock: 5000000 + mapping: + kind: ethereum/events + apiVersion: 0.0.7 + language: wasm/assemblyscript + entities: + - Withdrawal + abis: + - name: Lock + file: ./abis/OldLock.json + eventHandlers: + - event: Withdrawal(uint256,uint256) + handler: handleOldWithdrawal + file: ./src/old-lock.ts + ``` + +2. 
**New Grafted Subgraph Manifest (subgraph.yaml)** + ```yaml + specVersion: 1.0.0 + schema: + file: ./schema.graphql + dataSources: + - kind: ethereum/contract + name: NewSmartContract + network: sepolia + source: + address: '0xNewContractAddress' + abi: Lock + startBlock: 6000001 # Block after the last indexed block + mapping: + kind: ethereum/events + apiVersion: 0.0.7 + language: wasm/assemblyscript + entities: + - Withdrawal + abis: + - name: Lock + file: ./abis/Lock.json + eventHandlers: + - event: Withdrawal(uint256,uint256) + handler: handleWithdrawal + file: ./src/lock.ts + features: + - grafting + graft: + base: QmBaseDeploymentID # Deployment ID of the failed subgraph + block: 6000000 # Last successfully indexed block + ``` + +**Explanation:** + +- **Data Source Update**: The new subgraph points to 0xNewContractAddress, which may be a fixed version of the smart contract. +- **Start Block**: Set to one block after the last successfully indexed block to avoid reprocessing the error. +- **Grafting Configuration**: + - **base**: Deployment ID of the failed subgraph. + - **block**: Block number where grafting should begin. + +3. **Deployment Steps** + + - **Update the Code**: Implement the hotfix in your mapping scripts (e.g., handleWithdrawal). + - **Adjust the Manifest**: As shown above, update the `subgraph.yaml` with grafting configurations. + - **Deploy the Subgraph**: + - Authenticate with the Graph CLI. + - Deploy the new subgraph using `graph deploy`. + +4. **Post-Deployment** + - **Verify Indexing**: Check that the subgraph is indexing correctly from the graft point. + - **Monitor Data**: Ensure that new data is being captured and the hotfix is effective. + - **Plan for Republish**: Schedule the deployment of a non-grafted version for long-term stability. 
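As a tiny sanity check of the block relationship used in the manifests above, the new dataSource's startBlock must sit one block past the graft point so the failed range is not reprocessed. The IDs and numbers below are the example's placeholders:

```python
graft_block = 6_000_000  # last successfully indexed block of the failed subgraph

# Mirrors the `graft` section of the new manifest (placeholder Deployment ID).
graft = {
    "base": "QmBaseDeploymentID",  # Deployment ID, not the Subgraph ID
    "block": graft_block,
}
start_block = graft_block + 1  # startBlock for the new dataSource

# The new subgraph resumes immediately after the graft point.
assert start_block == 6_000_001, "startBlock must be one past the graft block"
print(graft, start_block)
```

Keeping this invariant avoids both gaps and double-processing around the graft point.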
+ +## Warnings and Cautions + +While grafting is a powerful tool for deploying hotfixes quickly, there are specific scenarios where it should be avoided to maintain data integrity and ensure optimal performance. + +- **Incompatible Schema Changes**: If your hotfix requires altering the type of existing fields or removing fields from your schema, grafting is not suitable. Grafting expects the new subgraph’s schema to be compatible with the base subgraph’s schema. Incompatible changes can lead to data inconsistencies and errors because the existing data won’t align with the new schema. +- **Significant Mapping Logic Overhauls**: When the hotfix involves substantial modifications to your mapping logic—such as changing how events are processed or altering handler functions—grafting may not function correctly. The new logic might not be compatible with the data processed under the old logic, leading to incorrect data or failed indexing. +- **Deployments to The Graph Network**: Grafting is not recommended for subgraphs intended for The Graph’s decentralized network (mainnet). It can complicate indexing and may not be fully supported by all Indexers, potentially causing unexpected behavior or increased costs. For mainnet deployments, it’s safer to re-index the subgraph from scratch to ensure full compatibility and reliability. + +### Risk Management + +- **Data Integrity**: Incorrect block numbers can lead to data loss or duplication. +- **Testing**: Always test grafting in a development environment before deploying to production. + +## Conclusion + +Grafting is an effective strategy for deploying hotfixes in subgraph development, enabling you to: + +- **Quickly Recover** from critical errors without re-indexing. +- **Preserve Historical Data**, maintaining continuity for applications and users. +- **Ensure Service Availability** by minimizing downtime during critical fixes. 
+ +However, it’s important to use grafting judiciously and follow best practices to mitigate risks. After stabilizing your subgraph with the hotfix, plan to deploy a non-grafted version to ensure long-term maintainability. + +## Additional Resources + +- **[Grafting Documentation](/cookbook/grafting/)**: Replace a Contract and Keep its History With Grafting +- **[Understanding Deployment IDs](/querying/querying-by-subgraph-id-vs-deployment-id/)**: Learn the difference between Deployment ID and Subgraph ID. + +By incorporating grafting into your subgraph development workflow, you can enhance your ability to respond to issues swiftly, ensuring that your data services remain robust and reliable. + +## Subgraph Best Practices 1-6 + +1. [Improve Query Speed with Subgraph Pruning](/cookbook/pruning/) + +2. [Improve Indexing and Query Responsiveness by Using @derivedFrom](/cookbook/derivedfrom/) + +3. [Improve Indexing and Query Performance by Using Immutable Entities and Bytes as IDs](/cookbook/immutable-entities-bytes-as-ids/) + +4. [Improve Indexing Speed by Avoiding `eth_calls`](/cookbook/avoid-eth-calls/) + +5. [Simplify and Optimize with Timeseries and Aggregations](/cookbook/timeseries/) + +6. [Use Grafting for Quick Hotfix Deployment](/cookbook/grafting-hotfix/) diff --git a/website/pages/yo/cookbook/timeseries.mdx b/website/pages/yo/cookbook/timeseries.mdx new file mode 100644 index 000000000000..88ee70005a6e --- /dev/null +++ b/website/pages/yo/cookbook/timeseries.mdx @@ -0,0 +1,194 @@ +--- +title: Subgraph Best Practice 5 - Simplify and Optimize with Timeseries and Aggregations +--- + +## TLDR + +Leveraging the new time-series and aggregations feature in subgraphs can significantly enhance both indexing speed and query performance. + +## Overview + +Timeseries and aggregations reduce data processing overhead and accelerate queries by offloading aggregation computations to the database and simplifying mapping code. 
This approach is particularly effective when handling large volumes of time-based data. + +## Benefits of Timeseries and Aggregations + +1. Improved Indexing Time + +- Less Data to Load: Mappings handle less data since raw data points are stored as immutable timeseries entities. +- Database-Managed Aggregations: Aggregations are automatically computed by the database, reducing the workload on the mappings. + +2. Simplified Mapping Code + +- No Manual Calculations: Developers no longer need to write complex aggregation logic in mappings. +- Reduced Complexity: Simplifies code maintenance and minimizes the potential for errors. + +3. Dramatically Faster Queries + +- Immutable Data: All timeseries data is immutable, enabling efficient storage and retrieval. +- Efficient Data Separation: Aggregates are stored separately from raw timeseries data, allowing queries to process significantly less data—often several orders of magnitude less. + +### Important Considerations + +- Immutable Data: Timeseries data cannot be altered once written, ensuring data integrity and simplifying indexing. +- Automatic ID and Timestamp Management: id and timestamp fields are automatically managed by graph-node, reducing potential errors. +- Efficient Data Storage: By separating raw data from aggregates, storage is optimized, and queries run faster. + +## How to Implement Timeseries and Aggregations + +### Defining Timeseries Entities + +A timeseries entity represents raw data points collected over time. It is defined with the `@entity(timeseries: true)` annotation. Key requirements: + +- Immutable: Timeseries entities are always immutable. +- Mandatory Fields: + - `id`: Must be of type `Int8!` and is auto-incremented. + - `timestamp`: Must be of type `Timestamp!` and is automatically set to the block timestamp. + +Example: + +```graphql +type Data @entity(timeseries: true) { + id: Int8! + timestamp: Timestamp! + price: BigDecimal! 
+} +``` + +### Defining Aggregation Entities + +An aggregation entity computes aggregated values from a timeseries source. It is defined with the `@aggregation` annotation. Key components: + +- Annotation Arguments: + - `intervals`: Specifies time intervals (e.g., `["hour", "day"]`). + +Example: + +```graphql +type Stats @aggregation(intervals: ["hour", "day"], source: "Data") { + id: Int8! + timestamp: Timestamp! + sum: BigDecimal! @aggregate(fn: "sum", arg: "price") +} +``` + +In this example, Stats aggregates the price field from Data over hourly and daily intervals, computing the sum. + +### Querying Aggregated Data + +Aggregations are exposed via query fields that allow filtering and retrieval based on dimensions and time intervals. + +Example: + +```graphql +{ + tokenStats( + interval: "hour" + where: { token: "0x1234567890abcdef", timestamp_gte: "1704164640000000", timestamp_lt: "1704251040000000" } + ) { + id + timestamp + token { + id + } + totalVolume + priceUSD + count + } +} +``` + +### Using Dimensions in Aggregations + +Dimensions are non-aggregated fields used to group data points. They enable aggregations based on specific criteria, such as a token in a financial application. + +Example: + +### Timeseries Entity + +```graphql +type TokenData @entity(timeseries: true) { + id: Int8! + timestamp: Timestamp! + token: Token! + amount: BigDecimal! + priceUSD: BigDecimal! +} +``` + +### Aggregation Entity with Dimension + +```graphql +type TokenStats @aggregation(intervals: ["hour", "day"], source: "TokenData") { + id: Int8! + timestamp: Timestamp! + token: Token! + totalVolume: BigDecimal! @aggregate(fn: "sum", arg: "amount") + priceUSD: BigDecimal! @aggregate(fn: "last", arg: "priceUSD") + count: Int8! @aggregate(fn: "count", cumulative: true) +} +``` + +- Dimension Field: token groups the data, so aggregates are computed per token. +- Aggregates: + - totalVolume: Sum of amount. + - priceUSD: Last recorded priceUSD. 
+ - count: Cumulative count of records. + +### Aggregation Functions and Expressions + +Supported aggregation functions: + +- sum +- count +- min +- max +- first +- last + +### The arg in @aggregate can be + +- A field name from the timeseries entity. +- An expression using fields and constants. + +### Examples of Aggregation Expressions + +- Sum Token Value: @aggregate(fn: "sum", arg: "priceUSD \* amount") +- Maximum Positive Amount: @aggregate(fn: "max", arg: "greatest(amount0, amount1, 0)") +- Conditional Sum: @aggregate(fn: "sum", arg: "case when amount0 > amount1 then amount0 else 0 end") + +Supported operators and functions include basic arithmetic (+, -, \*, /), comparison operators, logical operators (and, or, not), and SQL functions like greatest, least, coalesce, etc. + +### Query Parameters + +- interval: Specifies the time interval (e.g., "hour"). +- where: Filters based on dimensions and timestamp ranges. +- timestamp_gte / timestamp_lt: Filters for start and end times (microseconds since epoch). + +### Notes + +- Sorting: Results are automatically sorted by timestamp and id in descending order. +- Current Data: An optional current argument can include the current, partially filled interval. + +### Conclusion + +Implementing timeseries and aggregations in subgraphs is a best practice for projects dealing with time-based data. This approach: + +- Enhances Performance: Speeds up indexing and querying by reducing data processing overhead. +- Simplifies Development: Eliminates the need for manual aggregation logic in mappings. +- Scales Efficiently: Handles large volumes of data without compromising on speed or responsiveness. + +By adopting this pattern, developers can build more efficient and scalable subgraphs, providing faster and more reliable data access to end-users.
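To make the aggregation semantics concrete, here is a rough, database-agnostic simulation of what hourly bucketing with sum, last, and a cumulative count computes over TokenData-style points. The data is illustrative; in a real subgraph this work is done by the database, not by mapping code:

```python
from collections import defaultdict

HOUR_US = 3_600 * 1_000_000  # bucket width in microseconds

# Raw points: (timestamp_us, token, amount, priceUSD)
points = [
    (1_000, "TKN", 5.0, 1.10),
    (2_000, "TKN", 3.0, 1.20),
    (HOUR_US + 1_000, "TKN", 2.0, 1.15),
]

# Group points by (hour bucket, dimension), as TokenStats does per token.
buckets = defaultdict(list)
for ts, token, amount, price in points:
    buckets[(ts // HOUR_US, token)].append((ts, amount, price))

stats, running_count = {}, defaultdict(int)
for (hour, token), rows in sorted(buckets.items()):
    rows.sort()                                     # order by timestamp
    running_count[token] += len(rows)               # fn: "count", cumulative: true
    stats[(hour, token)] = {
        "totalVolume": sum(a for _, a, _ in rows),  # fn: "sum", arg: "amount"
        "priceUSD": rows[-1][2],                    # fn: "last", arg: "priceUSD"
        "count": running_count[token],
    }

print(stats)
```

The first hourly bucket sums 5.0 and 3.0 into a volume of 8.0 and keeps the last price, while the cumulative count keeps growing across buckets.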
To learn more about implementing timeseries and aggregations, refer to the [Timeseries and Aggregations Readme](https://github.com/graphprotocol/graph-node/blob/master/docs/aggregations.md) and consider experimenting with this feature in your subgraphs. + +## Subgraph Best Practices 1-6 + +1. [Improve Query Speed with Subgraph Pruning](/cookbook/pruning/) + +2. [Improve Indexing and Query Responsiveness by Using @derivedFrom](/cookbook/derivedfrom/) + +3. [Improve Indexing and Query Performance by Using Immutable Entities and Bytes as IDs](/cookbook/immutable-entities-bytes-as-ids/) + +4. [Improve Indexing Speed by Avoiding `eth_calls`](/cookbook/avoid-eth-calls/) + +5. [Simplify and Optimize with Timeseries and Aggregations](/cookbook/timeseries/) + +6. [Use Grafting for Quick Hotfix Deployment](/cookbook/grafting-hotfix/) diff --git a/website/pages/yo/cookbook/upgrading-a-subgraph.mdx b/website/pages/yo/cookbook/upgrading-a-subgraph.mdx deleted file mode 100644 index a546f02c0800..000000000000 --- a/website/pages/yo/cookbook/upgrading-a-subgraph.mdx +++ /dev/null @@ -1,156 +0,0 @@ ---- -title: Upgrading an Existing Subgraph to The Graph Network ---- - -## Introduction - -This is a guide on how to upgrade your subgraph from the hosted service to The Graph's decentralized network. Over 1,000 subgraphs have successfully upgraded to The Graph Network including projects like Snapshot, Loopring, Audius, Premia, Livepeer, Uma, Curve, Lido, and many more! - -The process of upgrading is quick and your subgraphs will forever benefit from the reliability and performance that you can only get on The Graph Network. - -### Prerequisites - -- You have a subgraph deployed on the hosted service. - -## Upgrading an Existing Subgraph to The Graph Network - - - -If you are logged in to the hosted service, you can access a simple flow to upgrade your subgraphs from [your dashboard](https://thegraph.com/hosted-service/dashboard), or from an individual subgraph page. 
- -> This process typically takes less than five minutes. - -1. Select the subgraph(s) you want to upgrade. -2. Connect or enter the receiving wallet (the wallet that will become the owner of the subgraph). -3. Click the "Upgrade" button. - -That's it! Your subgraphs will be deployed to Subgraph Studio, and published on The Graph Network. You can access the [Subgraph Studio](https://thegraph.com/studio/) to manage your subgraphs, logging in with the wallet specified during the upgrade process. - -You'll be able to view your subgraphs live on the decentralized network via [Graph Explorer](https://thegraph.com/explorer). - -### What next? - -When your subgraph is upgraded, it will automatically be indexed by the upgrade indexer. If the indexed chain is [fully supported by The Graph Network](/developing/supported-networks), you can add some GRT as "signal" to attract more Indexers. It is recommended to curate your subgraph with at least 3,000 GRT to attract 2-3 Indexers for a higher quality of service. - -Once you have generated an API key, you can start querying your subgraph on The Graph Network right away. - -### Create an API key - -You can generate an API key in Subgraph Studio [here](https://thegraph.com/studio/apikeys/). - -![API key creation page](/img/api-image.png) - -You can use this API key to query subgraphs on The Graph Network. All users start on the Free Plan, which includes 100,000 free queries per month. Developers can sign up for the Growth Plan by connecting a credit or debit card, or by depositing GRT into the Subgraph Studio billing system. - -> Note: see the [billing documentation](../billing) for more information on plans, and on managing your billing in Subgraph Studio. - -### Securing your API key - -It is recommended that you secure the API key by limiting its usage in two ways: - -1. Authorized Subgraphs -2. Authorized Domain - -You can secure your API key [here](https://thegraph.com/studio/apikeys/).
-
-![Subgraph lockdown page](/img/subgraph-lockdown.png)
-
-### Querying your subgraph on the decentralized network
-
-Now you can check the indexing status of the Indexers on the network in Graph Explorer (example [here](https://thegraph.com/explorer/subgraphs/Dtj2HicXKpoUjNB7ffdBkMwt3L9Sz3cbENd67AdHu6Vb?view=Indexers&chain=arbitrum-one)). The green line at the top indicates that, at the time of writing, 7 Indexers had successfully indexed that subgraph. In the Indexers tab you can also see which Indexers picked up your subgraph.
-
-![Rocket Pool subgraph](/img/rocket-pool-subgraph.png)
-
-As soon as the first Indexer has fully indexed your subgraph, you can start to query it on the decentralized network. To retrieve the query URL for your subgraph, copy it by clicking the symbol next to the query URL. You will see something like this:
-
-`https://gateway.thegraph.com/api/[api-key]/subgraphs/id/S9ihna8D733WTEShJ1KctSTCvY1VJ7gdVwhUujq4Ejo`
-
-Important: Make sure to replace `[api-key]` with an actual API key generated in the section above.
-
-You can now use that query URL in your dapp to send GraphQL requests.
-
-Congratulations! You are now a pioneer of decentralization!
-
-> Note: Due to the distributed nature of the network, it might be the case that different Indexers have indexed up to different blocks. In order to only receive fresh data, you can specify the minimum block an Indexer has to have indexed in order to serve your query with the `block: { number_gte: $minBlock }` field argument, as shown in the example below:
-
-```graphql
-{
-  stakers(block: { number_gte: 14486109 }) {
-    id
-  }
-}
-```
-
-More information about the nature of the network and how to handle re-orgs is described in the documentation article [Distributed Systems](/querying/distributed-systems/). 
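For dapp integration, the query above can be sent from any HTTP client. A minimal sketch in Python (the API key value and the `build_query` helper are placeholders for illustration, not part of The Graph's tooling):

```python
import json
from urllib import request

# Gateway URL template from the docs: the API key and subgraph ID are path segments.
GATEWAY = "https://gateway.thegraph.com/api/{api_key}/subgraphs/id/{subgraph_id}"

def build_query(api_key: str, subgraph_id: str, min_block: int):
    """Build the gateway URL and a GraphQL payload pinned to a minimum block."""
    url = GATEWAY.format(api_key=api_key, subgraph_id=subgraph_id)
    query = """
    query Stakers($minBlock: Int) {
      stakers(block: { number_gte: $minBlock }) {
        id
      }
    }"""
    body = json.dumps({"query": query, "variables": {"minBlock": min_block}}).encode()
    return url, body

url, body = build_query("my-api-key", "S9ihna8D733WTEShJ1KctSTCvY1VJ7gdVwhUujq4Ejo", 14486109)
req = request.Request(url, data=body, headers={"Content-Type": "application/json"})
# request.urlopen(req) would send the query to the gateway.
```

Passing the minimum block as a variable keeps the query reusable as the chain advances.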
-
-## Updating a Subgraph on the Network
-
-If you would like to update an existing subgraph on the network, you can do this by deploying a new version of your subgraph to Subgraph Studio using the Graph CLI.
-
-1. Make changes to your current subgraph.
-2. Run the following command, specifying the new version (e.g. v0.0.1, v0.0.2):
-
-```sh
-graph deploy --studio <SUBGRAPH_SLUG> --version <NEW_VERSION>
-```
-
-3. Test the new version in Subgraph Studio by querying in the playground.
-4. Publish the new version on The Graph Network. Remember that this requires gas (as described in the section above).
-
-### Owner Update Fee: Deep Dive
-
-> Note: Curation on Arbitrum has a flat bonding curve. Learn more about Arbitrum [here](/arbitrum/arbitrum-faq/).
-
-An update requires GRT to be migrated from the old version of the subgraph to the new version. This means that for every update, a new bonding curve will be created (more on bonding curves [here](/network/curating#bonding-curve-101)).
-
-The new bonding curve charges the 2.5% curation tax on all GRT being migrated to the new version. The owner must pay 50% of this, or 1.25%. The other 1.25% is absorbed by all the curators as a fee. This incentive design is in place to prevent a subgraph owner from draining their curators' funds with recursive update calls. If there is no curation activity, you will have to pay a minimum of 100 GRT in order to signal your own subgraph.
-
-Let's look at an example (this only applies if your subgraph is being actively curated):
-
-- 100,000 GRT is signaled using auto-migrate on v1 of a subgraph
-- The owner updates to v2. 100,000 GRT is migrated to a new bonding curve, where 97,500 GRT are put into the new curve and 2,500 GRT is burned
-- The owner then has 1,250 GRT burned to pay for half the fee. The owner must have this in their wallet before the update, otherwise, the update will not succeed. This happens in the same transaction as the update. 
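The arithmetic in the example above can be sketched in Python (the 2.5% rate and the owner's half-share are read off the example's numbers; the function name is illustrative):

```python
def update_costs(signal_grt: float, tax_rate: float = 0.025) -> dict:
    """Fee arithmetic from the worked example: the curation tax is burned from
    the migrated signal, and the owner additionally burns half that amount in
    the same transaction as the update."""
    burned_from_signal = signal_grt * tax_rate  # 2,500 GRT for a 100,000 GRT signal
    return {
        "into_new_curve": signal_grt - burned_from_signal,  # 97,500 GRT
        "burned_from_signal": burned_from_signal,
        "owner_burn": burned_from_signal / 2,  # 1,250 GRT from the owner's wallet
    }
```

For the 100,000 GRT example this reproduces the 97,500 / 2,500 / 1,250 split quoted above.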
-
-_While this mechanism is currently live on the network, the community is discussing ways to reduce the cost of updates for subgraph developers._
-
-### Maintaining a Stable Version of a Subgraph
-
-If you're making a lot of changes to your subgraph, it is not a good idea to continually update it and front the update costs. Maintaining a stable and consistent version of your subgraph is critical, not only from the cost perspective but also so that Indexers can feel confident in their syncing times. Indexers should be notified when you plan an update so that their syncing times are not impacted. Feel free to leverage the [#Indexers channel](https://discord.gg/JexvtHa7dq) on Discord to let Indexers know when you're versioning your subgraphs.
-
-Subgraphs are open APIs that external developers rely on. Open APIs need to follow strict standards so that they do not break external developers' applications. In The Graph Network, a subgraph developer must consider Indexers and how long it takes them to sync a new subgraph **as well as** other developers who are using their subgraphs.
-
-### Updating the Metadata of a Subgraph
-
-You can update the metadata of your subgraphs without having to publish a new version. The metadata includes the subgraph name, image, description, website URL, source code URL, and categories. Developers can do this by updating their subgraph details in Subgraph Studio, where you can edit all applicable fields.
-
-Make sure **Update Subgraph Details in Explorer** is checked and click **Save**. If this is checked, an on-chain transaction will be generated that updates subgraph details in the Explorer without having to publish a new version with a new deployment.
-
-## Best Practices for Deploying a Subgraph to The Graph Network
-
-1. 
Leveraging an ENS name for Subgraph Development:
-
-- Set up your ENS [here](https://app.ens.domains/)
-- Add your ENS name to your settings [here](https://thegraph.com/explorer/settings?view=display-name).
-
-2. The more filled out your profiles are, the better the chances for your subgraphs to be indexed and curated.
-
-## Deprecating a Subgraph on The Graph Network
-
-Follow the steps [here](/managing/transfer-and-deprecate-a-subgraph) to deprecate your subgraph and remove it from The Graph Network.
-
-## Querying a Subgraph + Billing on The Graph Network
-
-The hosted service was set up to allow developers to deploy their subgraphs without any restrictions.
-
-On The Graph Network, query fees have to be paid as a core part of the protocol's incentives. For more information on subscribing to APIs and paying the query fees, check out the billing documentation [here](/billing/).
-
-## Additional Resources
-
-If you're still confused, fear not! Check out the following resources or watch our video guide on upgrading subgraphs to the decentralized network below:
-
-
-
-- [The Graph Network Contracts](https://github.com/graphprotocol/contracts)
-- [Curation Contract](https://github.com/graphprotocol/contracts/blob/dev/contracts/curation/Curation.sol) - the underlying contract that the GNS wraps around
-  - Address - `0x8fe00a685bcb3b2cc296ff6ffeab10aca4ce1538`
-- [Subgraph Studio documentation](/deploying/subgraph-studio)
diff --git a/website/pages/zh/cookbook/base-testnet.mdx b/website/pages/zh/cookbook/base-testnet.mdx
deleted file mode 100644
index 0158cdd95f3f..000000000000
--- a/website/pages/zh/cookbook/base-testnet.mdx
+++ /dev/null
@@ -1,111 +0,0 @@
----
-title: Building Subgraphs on Base
----
-
-This guide will quickly take you through how to initialize, create, and deploy your subgraph on Base testnet.
-
-What you'll need:
-
-- A Base Sepolia testnet contract address
-- A crypto wallet (e.g. MetaMask or Coinbase Wallet)
-
-## Subgraph Studio
-
-### 1. 
Install the Graph CLI
-
-The Graph CLI (>=v0.41.0) is written in JavaScript and you will need either `npm` or `yarn` installed to use it.
-
-```sh
-# NPM
-npm install -g @graphprotocol/graph-cli
-
-# Yarn
-yarn global add @graphprotocol/graph-cli
-```
-
-### 2. Create your subgraph in Subgraph Studio
-
-Go to [Subgraph Studio](https://thegraph.com/studio/) and connect your crypto wallet.
-
-Once connected, click "Create a Subgraph", enter a name for your subgraph and click Create a Subgraph.
-
-### 3. Initialize your Subgraph
-
-> You can find specific commands for your subgraph in Subgraph Studio.
-
-Make sure that the graph-cli is updated to the latest version (above 0.41.0):
-
-```sh
-graph --version
-```
-
-Initialize your subgraph from an existing contract.
-
-```sh
-graph init --studio <SUBGRAPH_SLUG>
-```
-
-Your subgraph slug is an identifier for your subgraph. The CLI tool will walk you through the steps for creating a subgraph, including:
-
-- Protocol: ethereum
-- Subgraph slug: `<SUBGRAPH_SLUG>`
-- Directory to create the subgraph in: `<DIRECTORY>`
-- Ethereum network: base-sepolia
-- Contract address: `<CONTRACT_ADDRESS>`
-- Start block (optional)
-- Contract name: `<CONTRACT_NAME>`
-- Yes/no to indexing events (yes means your subgraph will be bootstrapped with entities in the schema and simple mappings for emitted events)
-
-### 3. Write your Subgraph
-
-> If emitted events are the only thing you want to index, then no additional work is required, and you can skip to the next step.
-
-The previous command creates a scaffold subgraph that you can use as a starting point for building your subgraph. When making changes to the subgraph, you will mainly work with three files:
-
-- Manifest (subgraph.yaml) - The manifest defines what datasources your subgraph will index. Make sure to add `base-sepolia` as the network name in the manifest file to deploy your subgraph on Base Sepolia.
-- Schema (schema.graphql) - The GraphQL schema defines what data you wish to retrieve from the subgraph.
-- AssemblyScript Mappings (mapping.ts) - The code that translates the data from your datasources to the entities defined in the schema.
-
-If you want to index additional data, you will need to extend the manifest, schema and mappings.
-
-For more information on how to write your subgraph, see [Creating a Subgraph](/developing/creating-a-subgraph).
-
-### 4. 
Deploy to Subgraph Studio
-
-Before you can deploy your subgraph, you will need to authenticate with Subgraph Studio. You can do this by running the following command:
-
-Authenticate the subgraph in Studio:
-
-```sh
-graph auth --studio <DEPLOY_KEY>
-```
-
-Next, enter your subgraph's directory.
-
-```sh
-cd <SUBGRAPH_DIRECTORY>
-```
-
-Build your subgraph with the following command:
-
-```sh
-graph codegen && graph build
-```
-
-Finally, you can deploy your subgraph using this command:
-
-```sh
-graph deploy --studio <SUBGRAPH_SLUG>
-```
-
-### 5. Query your subgraph
-
-Once your subgraph is deployed, you can query it from your dapp using the `Development Query URL` in Subgraph Studio.
-
-Note: the Studio API is rate-limited, so it should preferably be used for development and testing.
-
-To learn more about querying data from your subgraph, see the [Querying a Subgraph](/querying/querying-the-graph) page.
diff --git a/website/pages/zh/cookbook/grafting-hotfix.mdx b/website/pages/zh/cookbook/grafting-hotfix.mdx
new file mode 100644
index 000000000000..4be0a0b07790
--- /dev/null
+++ b/website/pages/zh/cookbook/grafting-hotfix.mdx
@@ -0,0 +1,186 @@
+---
+title: Subgraph Best Practice 6 - Use Grafting for Quick Hotfix Deployment
+---
+
+## TLDR
+
+Grafting is a powerful feature in subgraph development that allows you to build and deploy new subgraphs while reusing the indexed data from existing ones.
+
+### Overview
+
+This feature enables quick deployment of hotfixes for critical issues, eliminating the need to re-index the entire subgraph from scratch. By preserving historical data, grafting minimizes downtime and ensures continuity in data services.
+
+## Benefits of Grafting for Hotfixes
+
+1. **Rapid Deployment**
+
+   - **Minimize Downtime**: When a subgraph encounters a critical error and stops indexing, grafting enables you to deploy a fix immediately without waiting for re-indexing.
+ - **Immediate Recovery**: The new subgraph continues from the last indexed block, ensuring that data services remain uninterrupted. + +2. **Data Preservation** + + - **Reuse Historical Data**: Grafting copies the existing data from the base subgraph, so you don’t lose valuable historical records. + - **Consistency**: Maintains data continuity, which is crucial for applications relying on consistent historical data. + +3. **Efficiency** + - **Save Time and Resources**: Avoids the computational overhead of re-indexing large datasets. + - **Focus on Fixes**: Allows developers to concentrate on resolving issues rather than managing data recovery. + +## Best Practices When Using Grafting for Hotfixes + +1. **Initial Deployment Without Grafting** + + - **Start Clean**: Always deploy your initial subgraph without grafting to ensure that it’s stable and functions as expected. + - **Test Thoroughly**: Validate the subgraph’s performance to minimize the need for future hotfixes. + +2. **Implementing the Hotfix with Grafting** + + - **Identify the Issue**: When a critical error occurs, determine the block number of the last successfully indexed event. + - **Create a New Subgraph**: Develop a new subgraph that includes the hotfix. + - **Configure Grafting**: Use grafting to copy data up to the identified block number from the failed subgraph. + - **Deploy Quickly**: Publish the grafted subgraph to restore service as soon as possible. + +3. **Post-Hotfix Actions** + + - **Monitor Performance**: Ensure the grafted subgraph is indexing correctly and the hotfix resolves the issue. + - **Republish Without Grafting**: Once stable, deploy a new version of the subgraph without grafting for long-term maintenance. + > Note: Relying on grafting indefinitely is not recommended as it can complicate future updates and maintenance. + - **Update References**: Redirect any services or applications to use the new, non-grafted subgraph. + +4. 
**Important Considerations** + - **Careful Block Selection**: Choose the graft block number carefully to prevent data loss. + - **Tip**: Use the block number of the last correctly processed event. + - **Use Deployment ID**: Ensure you reference the Deployment ID of the base subgraph, not the Subgraph ID. + - **Note**: The Deployment ID is the unique identifier for a specific subgraph deployment. + - **Feature Declaration**: Remember to declare grafting in the subgraph manifest under features. + +## Example: Deploying a Hotfix with Grafting + +Suppose you have a subgraph tracking a smart contract that has stopped indexing due to a critical error. Here’s how you can use grafting to deploy a hotfix. + +1. **Failed Subgraph Manifest (subgraph.yaml)** + + ```yaml + specVersion: 1.0.0 + schema: + file: ./schema.graphql + dataSources: + - kind: ethereum/contract + name: OldSmartContract + network: sepolia + source: + address: '0xOldContractAddress' + abi: Lock + startBlock: 5000000 + mapping: + kind: ethereum/events + apiVersion: 0.0.7 + language: wasm/assemblyscript + entities: + - Withdrawal + abis: + - name: Lock + file: ./abis/OldLock.json + eventHandlers: + - event: Withdrawal(uint256,uint256) + handler: handleOldWithdrawal + file: ./src/old-lock.ts + ``` + +2. 
**New Grafted Subgraph Manifest (subgraph.yaml)** + ```yaml + specVersion: 1.0.0 + schema: + file: ./schema.graphql + dataSources: + - kind: ethereum/contract + name: NewSmartContract + network: sepolia + source: + address: '0xNewContractAddress' + abi: Lock + startBlock: 6000001 # Block after the last indexed block + mapping: + kind: ethereum/events + apiVersion: 0.0.7 + language: wasm/assemblyscript + entities: + - Withdrawal + abis: + - name: Lock + file: ./abis/Lock.json + eventHandlers: + - event: Withdrawal(uint256,uint256) + handler: handleWithdrawal + file: ./src/lock.ts + features: + - grafting + graft: + base: QmBaseDeploymentID # Deployment ID of the failed subgraph + block: 6000000 # Last successfully indexed block + ``` + +**Explanation:** + +- **Data Source Update**: The new subgraph points to 0xNewContractAddress, which may be a fixed version of the smart contract. +- **Start Block**: Set to one block after the last successfully indexed block to avoid reprocessing the error. +- **Grafting Configuration**: + - **base**: Deployment ID of the failed subgraph. + - **block**: Block number where grafting should begin. + +3. **Deployment Steps** + + - **Update the Code**: Implement the hotfix in your mapping scripts (e.g., handleWithdrawal). + - **Adjust the Manifest**: As shown above, update the `subgraph.yaml` with grafting configurations. + - **Deploy the Subgraph**: + - Authenticate with the Graph CLI. + - Deploy the new subgraph using `graph deploy`. + +4. **Post-Deployment** + - **Verify Indexing**: Check that the subgraph is indexing correctly from the graft point. + - **Monitor Data**: Ensure that new data is being captured and the hotfix is effective. + - **Plan for Republish**: Schedule the deployment of a non-grafted version for long-term stability. 
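The considerations above (declare the feature, graft onto a Deployment ID, resume one block after the graft point) can be sanity-checked before deploying. A sketch in Python, with the manifest expressed as a plain dict mirroring the YAML fields shown above (`check_graft_manifest` is a hypothetical helper, not part of the Graph CLI):

```python
def check_graft_manifest(manifest: dict) -> list:
    """Return a list of problems with a grafted manifest's configuration."""
    problems = []
    if "grafting" not in manifest.get("features", []):
        problems.append("declare `grafting` under `features`")
    graft = manifest.get("graft", {})
    if not str(graft.get("base", "")).startswith("Qm"):
        problems.append("`graft.base` should be a Deployment ID (Qm...), not a Subgraph ID")
    start_block = manifest["dataSources"][0]["source"]["startBlock"]
    if start_block <= graft.get("block", 0):
        problems.append("`startBlock` should be after the graft block")
    return problems

# Mirrors the grafted manifest above: graft at 6000000, resume at 6000001.
manifest = {
    "features": ["grafting"],
    "graft": {"base": "QmBaseDeploymentID", "block": 6000000},
    "dataSources": [{"source": {"startBlock": 6000001}}],
}
assert check_graft_manifest(manifest) == []
```

Running such a check in CI can catch a graft onto a Subgraph ID or a start block at or before the graft point before any deployment happens.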
+ +## Warnings and Cautions + +While grafting is a powerful tool for deploying hotfixes quickly, there are specific scenarios where it should be avoided to maintain data integrity and ensure optimal performance. + +- **Incompatible Schema Changes**: If your hotfix requires altering the type of existing fields or removing fields from your schema, grafting is not suitable. Grafting expects the new subgraph’s schema to be compatible with the base subgraph’s schema. Incompatible changes can lead to data inconsistencies and errors because the existing data won’t align with the new schema. +- **Significant Mapping Logic Overhauls**: When the hotfix involves substantial modifications to your mapping logic—such as changing how events are processed or altering handler functions—grafting may not function correctly. The new logic might not be compatible with the data processed under the old logic, leading to incorrect data or failed indexing. +- **Deployments to The Graph Network**: Grafting is not recommended for subgraphs intended for The Graph’s decentralized network (mainnet). It can complicate indexing and may not be fully supported by all Indexers, potentially causing unexpected behavior or increased costs. For mainnet deployments, it’s safer to re-index the subgraph from scratch to ensure full compatibility and reliability. + +### Risk Management + +- **Data Integrity**: Incorrect block numbers can lead to data loss or duplication. +- **Testing**: Always test grafting in a development environment before deploying to production. + +## Conclusion + +Grafting is an effective strategy for deploying hotfixes in subgraph development, enabling you to: + +- **Quickly Recover** from critical errors without re-indexing. +- **Preserve Historical Data**, maintaining continuity for applications and users. +- **Ensure Service Availability** by minimizing downtime during critical fixes. 
+ +However, it’s important to use grafting judiciously and follow best practices to mitigate risks. After stabilizing your subgraph with the hotfix, plan to deploy a non-grafted version to ensure long-term maintainability. + +## Additional Resources + +- **[Grafting Documentation](/cookbook/grafting/)**: Replace a Contract and Keep its History With Grafting +- **[Understanding Deployment IDs](/querying/querying-by-subgraph-id-vs-deployment-id/)**: Learn the difference between Deployment ID and Subgraph ID. + +By incorporating grafting into your subgraph development workflow, you can enhance your ability to respond to issues swiftly, ensuring that your data services remain robust and reliable. + +## Subgraph Best Practices 1-6 + +1. [Improve Query Speed with Subgraph Pruning](/cookbook/pruning/) + +2. [Improve Indexing and Query Responsiveness by Using @derivedFrom](/cookbook/derivedfrom/) + +3. [Improve Indexing and Query Performance by Using Immutable Entities and Bytes as IDs](/cookbook/immutable-entities-bytes-as-ids/) + +4. [Improve Indexing Speed by Avoiding `eth_calls`](/cookbook/avoid-eth-calls/) + +5. [Simplify and Optimize with Timeseries and Aggregations](/cookbook/timeseries/) + +6. [Use Grafting for Quick Hotfix Deployment](/cookbook/grafting-hotfix/) diff --git a/website/pages/zh/cookbook/timeseries.mdx b/website/pages/zh/cookbook/timeseries.mdx new file mode 100644 index 000000000000..88ee70005a6e --- /dev/null +++ b/website/pages/zh/cookbook/timeseries.mdx @@ -0,0 +1,194 @@ +--- +title: Subgraph Best Practice 5 - Simplify and Optimize with Timeseries and Aggregations +--- + +## TLDR + +Leveraging the new time-series and aggregations feature in subgraphs can significantly enhance both indexing speed and query performance. + +## Overview + +Timeseries and aggregations reduce data processing overhead and accelerate queries by offloading aggregation computations to the database and simplifying mapping code. 
This approach is particularly effective when handling large volumes of time-based data. + +## Benefits of Timeseries and Aggregations + +1. Improved Indexing Time + +- Less Data to Load: Mappings handle less data since raw data points are stored as immutable timeseries entities. +- Database-Managed Aggregations: Aggregations are automatically computed by the database, reducing the workload on the mappings. + +2. Simplified Mapping Code + +- No Manual Calculations: Developers no longer need to write complex aggregation logic in mappings. +- Reduced Complexity: Simplifies code maintenance and minimizes the potential for errors. + +3. Dramatically Faster Queries + +- Immutable Data: All timeseries data is immutable, enabling efficient storage and retrieval. +- Efficient Data Separation: Aggregates are stored separately from raw timeseries data, allowing queries to process significantly less data—often several orders of magnitude less. + +### Important Considerations + +- Immutable Data: Timeseries data cannot be altered once written, ensuring data integrity and simplifying indexing. +- Automatic ID and Timestamp Management: id and timestamp fields are automatically managed by graph-node, reducing potential errors. +- Efficient Data Storage: By separating raw data from aggregates, storage is optimized, and queries run faster. + +## How to Implement Timeseries and Aggregations + +### Defining Timeseries Entities + +A timeseries entity represents raw data points collected over time. It is defined with the `@entity(timeseries: true)` annotation. Key requirements: + +- Immutable: Timeseries entities are always immutable. +- Mandatory Fields: + - `id`: Must be of type `Int8!` and is auto-incremented. + - `timestamp`: Must be of type `Timestamp!` and is automatically set to the block timestamp. + +Example: + +```graphql +type Data @entity(timeseries: true) { + id: Int8! + timestamp: Timestamp! + price: BigDecimal! 
+} +``` + +### Defining Aggregation Entities + +An aggregation entity computes aggregated values from a timeseries source. It is defined with the `@aggregation` annotation. Key components: + +- Annotation Arguments: + - `intervals`: Specifies time intervals (e.g., `["hour", "day"]`). + +Example: + +```graphql +type Stats @aggregation(intervals: ["hour", "day"], source: "Data") { + id: Int8! + timestamp: Timestamp! + sum: BigDecimal! @aggregate(fn: "sum", arg: "price") +} +``` + +In this example, Stats aggregates the price field from Data over hourly and daily intervals, computing the sum. + +### Querying Aggregated Data + +Aggregations are exposed via query fields that allow filtering and retrieval based on dimensions and time intervals. + +Example: + +```graphql +{ + tokenStats( + interval: "hour" + where: { token: "0x1234567890abcdef", timestamp_gte: "1704164640000000", timestamp_lt: "1704251040000000" } + ) { + id + timestamp + token { + id + } + totalVolume + priceUSD + count + } +} +``` + +### Using Dimensions in Aggregations + +Dimensions are non-aggregated fields used to group data points. They enable aggregations based on specific criteria, such as a token in a financial application. + +Example: + +### Timeseries Entity + +```graphql +type TokenData @entity(timeseries: true) { + id: Int8! + timestamp: Timestamp! + token: Token! + amount: BigDecimal! + priceUSD: BigDecimal! +} +``` + +### Aggregation Entity with Dimension + +```graphql +type TokenStats @aggregation(intervals: ["hour", "day"], source: "TokenData") { + id: Int8! + timestamp: Timestamp! + token: Token! + totalVolume: BigDecimal! @aggregate(fn: "sum", arg: "amount") + priceUSD: BigDecimal! @aggregate(fn: "last", arg: "priceUSD") + count: Int8! @aggregate(fn: "count", cumulative: true) +} +``` + +- Dimension Field: token groups the data, so aggregates are computed per token. +- Aggregates: + - totalVolume: Sum of amount. + - priceUSD: Last recorded priceUSD. 
+  - count: Cumulative count of records.
+
+### Aggregation Functions and Expressions
+
+Supported aggregation functions:
+
+- sum
+- count
+- min
+- max
+- first
+- last
+
+### The arg in @aggregate can be
+
+- A field name from the timeseries entity.
+- An expression using fields and constants.
+
+### Examples of Aggregation Expressions
+
+- Sum Token Value: @aggregate(fn: "sum", arg: "priceUSD \* amount")
+- Maximum Positive Amount: @aggregate(fn: "max", arg: "greatest(amount0, amount1, 0)")
+- Conditional Sum: @aggregate(fn: "sum", arg: "case when amount0 > amount1 then amount0 else 0 end")
+
+Supported operators and functions include basic arithmetic (+, -, \*, /), comparison operators, logical operators (and, or, not), and SQL functions like greatest, least, coalesce, etc.
+
+### Query Parameters
+
+- interval: Specifies the time interval (e.g., "hour").
+- where: Filters based on dimensions and timestamp ranges.
+- timestamp_gte / timestamp_lt: Filters for start and end times (microseconds since epoch).
+
+### Notes
+
+- Sorting: Results are automatically sorted by timestamp and id in descending order.
+- Current Data: An optional current argument can include the current, partially filled interval.
+
+### Conclusion
+
+Implementing timeseries and aggregations in subgraphs is a best practice for projects dealing with time-based data. This approach:
+
+- Enhances Performance: Speeds up indexing and querying by reducing data processing overhead.
+- Simplifies Development: Eliminates the need for manual aggregation logic in mappings.
+- Scales Efficiently: Handles large volumes of data without compromising on speed or responsiveness.
+
+By adopting this pattern, developers can build more efficient and scalable subgraphs, providing faster and more reliable data access to end-users. 
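Since `timestamp_gte` / `timestamp_lt` take microseconds since the Unix epoch, converting from a datetime is a common first step when building these queries. A small Python sketch (the dates chosen here reproduce the bounds used in the query example earlier):

```python
from datetime import datetime, timezone

def to_micros(dt: datetime) -> int:
    """Convert an aware datetime to microseconds since the Unix epoch,
    the unit expected by timeseries timestamp filters."""
    return int(dt.timestamp() * 1_000_000)

start = to_micros(datetime(2024, 1, 2, 3, 4, tzinfo=timezone.utc))
end = to_micros(datetime(2024, 1, 3, 3, 4, tzinfo=timezone.utc))
where_filter = {"timestamp_gte": str(start), "timestamp_lt": str(end)}
# start == 1704164640000000 and end == 1704251040000000, matching the example query.
```

Using aware (UTC) datetimes avoids off-by-hours errors from local timezones when computing interval boundaries.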
To learn more about implementing timeseries and aggregations, refer to the [Timeseries and Aggregations Readme](https://github.com/graphprotocol/graph-node/blob/master/docs/aggregations.md) and consider experimenting with this feature in your subgraphs.
+
+## Subgraph Best Practices 1-6
+
+1. [Improve Query Speed with Subgraph Pruning](/cookbook/pruning/)
+
+2. [Improve Indexing and Query Responsiveness by Using @derivedFrom](/cookbook/derivedfrom/)
+
+3. [Improve Indexing and Query Performance by Using Immutable Entities and Bytes as IDs](/cookbook/immutable-entities-bytes-as-ids/)
+
+4. [Improve Indexing Speed by Avoiding `eth_calls`](/cookbook/avoid-eth-calls/)
+
+5. [Simplify and Optimize with Timeseries and Aggregations](/cookbook/timeseries/)
+
+6. [Use Grafting for Quick Hotfix Deployment](/cookbook/grafting-hotfix/)
diff --git a/website/pages/zh/cookbook/upgrading-a-subgraph.mdx b/website/pages/zh/cookbook/upgrading-a-subgraph.mdx
deleted file mode 100644
index 7534f0a808d5..000000000000
--- a/website/pages/zh/cookbook/upgrading-a-subgraph.mdx
+++ /dev/null
@@ -1,156 +0,0 @@
----
-title: Upgrading an Existing Subgraph to The Graph Network
----
-
-## Introduction
-
-This is a guide on how to upgrade your subgraph from the hosted service to The Graph's decentralized network. Over 1,000 subgraphs have successfully upgraded to The Graph Network, including projects like Snapshot, Loopring, Audius, Premia, Livepeer, Uma, Curve, Lido, and many more!
-
-The process of upgrading is quick and your subgraphs will forever benefit from the reliability and performance that you can only get on The Graph Network.
-
-### Prerequisites
-
-- You have a subgraph deployed on the hosted service.
-
-## Upgrading an Existing Subgraph to The Graph Network
-
-
-
-If you are logged in to the hosted service, you can access a simple flow to upgrade your subgraphs from [your dashboard](https://thegraph.com/hosted-service/dashboard), or from an individual subgraph page.
-
-> This process typically takes less than five minutes.
-
-1. Select the subgraph(s) you want to upgrade.
-2. Connect or enter the receiving wallet (the wallet that will become the owner of the subgraph).
-3. Click the "Upgrade" button.
-
-That's it! Your subgraphs will be deployed to Subgraph Studio, and published on The Graph Network. 
You can access [Subgraph Studio](https://thegraph.com/studio/) to manage your subgraphs, logging in with the wallet specified during the upgrade process.
-
-You'll be able to view your subgraphs live on the decentralized network via [Graph Explorer](https://thegraph.com/explorer).
-
-### What next?
-
-When your subgraph is upgraded, it will automatically be indexed by the upgrade indexer. If the indexed chain is [fully supported by The Graph Network](/developing/supported-networks), you can add some GRT as "signal" to attract more Indexers. It is recommended to curate your subgraph with at least 3,000 GRT to attract 2-3 Indexers for a higher quality of service.
-
-You can start to query your subgraph right away on The Graph Network, once you have generated an API key.
-
-### Create an API key
-
-You can generate an API key in Subgraph Studio [here](https://thegraph.com/studio/apikeys/).
-
-![API key creation page](/img/api-image.png)
-
-You can use this API key to query subgraphs on The Graph Network. All users start on the Free Plan, which includes 100,000 free queries per month. Developers can sign up for the Growth Plan by connecting a credit or debit card, or by depositing GRT to the Subgraph Studio billing system.
-
-> Note: see the [billing documentation](../billing) for more information on plans, and on managing your billing on Subgraph Studio.
-
-### Securing your API key
-
-It is recommended that you secure your API key by limiting its usage in two ways:
-
-1. Authorized Subgraphs
-2. Authorized Domains
-
-You can secure your API key [here](https://thegraph.com/studio/apikeys/).
-
-![Subgraph lockdown page](/img/subgraph-lockdown.png)
-
-### Querying your subgraph on the decentralized network
-
-Now you can check the indexing status of the Indexers on the network in Graph Explorer (example [here](https://thegraph.com/explorer/subgraphs/Dtj2HicXKpoUjNB7ffdBkMwt3L9Sz3cbENd67AdHu6Vb?view=Indexers&chain=arbitrum-one)). The green line at the top indicates that, at the time of writing, 7 Indexers had successfully indexed that subgraph. In the Indexers tab you can also see which Indexers picked up your subgraph. 
-
-![Rocket Pool subgraph](/img/rocket-pool-subgraph.png)
-
-As soon as the first Indexer has fully indexed your subgraph, you can start to query it on the decentralized network. To retrieve the query URL for your subgraph, copy it by clicking the symbol next to the query URL. You will see something like this:
-
-`https://gateway.thegraph.com/api/[api-key]/subgraphs/id/S9ihna8D733WTEShJ1KctSTCvY1VJ7gdVwhUujq4Ejo`
-
-Important: Make sure to replace `[api-key]` with an actual API key generated in the section above.
-
-You can now use that query URL in your dapp to send GraphQL requests.
-
-Congratulations! You are now a pioneer of decentralization!
-
-> Note: Due to the distributed nature of the network, it might be the case that different Indexers have indexed up to different blocks. In order to only receive fresh data, you can specify the minimum block an Indexer has to have indexed in order to serve your query with the `block: { number_gte: $minBlock }` field argument, as shown in the example below:
-
-```graphql
-{
-  stakers(block: { number_gte: 14486109 }) {
-    id
-  }
-}
-```
-
-More information about the nature of the network and how to handle re-orgs is described in the documentation article [Distributed Systems](/querying/distributed-systems/).
-
-## Updating a Subgraph on the Network
-
-If you would like to update an existing subgraph on the network, you can do this by deploying a new version of your subgraph to Subgraph Studio using the Graph CLI.
-
-1. Make changes to your current subgraph.
-2. Run the following command, specifying the new version (e.g. v0.0.1, v0.0.2):
-
-```sh
-graph deploy --studio <SUBGRAPH_SLUG> --version <NEW_VERSION>
-```
-
-3. Test the new version in Subgraph Studio by querying in the playground.
-4. Publish the new version on The Graph Network. Remember that this requires gas (as described in the section above).
-
-### Owner Update Fee: Deep Dive
-
-> Note: Curation on Arbitrum has a flat bonding curve. Learn more about Arbitrum [here](/arbitrum/arbitrum-faq/).
-
-An update requires GRT to be migrated from the old version of the subgraph to the new version. This means that for every update, a new bonding curve will be created (more on bonding curves [here](/network/curating#bonding-curve-101)).
-
-The new bonding curve charges the 2.5% curation tax on all GRT being migrated to the new version. The owner must pay 50% of this, or 1.25%. The other 1.25% is absorbed by all the curators as a fee. This incentive design is in place to prevent a subgraph owner from draining their curators' funds with recursive update calls. If there is no curation activity, you will have to pay a minimum of 100 GRT in order to signal your own subgraph.
-
-Let's look at an example (this only applies if your subgraph is being actively curated):
-
-- 100,000 GRT is signaled using auto-migrate on v1 of a subgraph
-- The owner updates to v2. 100,000 GRT is migrated to a new bonding curve, where 97,500 GRT are put into the new curve and 2,500 GRT is burned
-- The owner then has 1,250 GRT burned to pay for half the fee. The owner must have this in their wallet before the update, otherwise, the update will not succeed. This happens in the same transaction as the update.
-
-_While this mechanism is currently live on the network, the community is discussing ways to reduce the cost of updates for subgraph developers._
-
-### Maintaining a Stable Version of a Subgraph
-
-If you're making a lot of changes to your subgraph, it is not a good idea to continually update it and front the update costs. Maintaining a stable and consistent version of your subgraph is critical, not only from the cost perspective but also so that Indexers can feel confident in their syncing times. 
Indexers should be notified when you plan an update so that their syncing times are not impacted. Feel free to leverage the [#Indexers channel](https://discord.gg/JexvtHa7dq) on Discord to let Indexers know when you're versioning your subgraphs.
-
-Subgraphs are open APIs that external developers rely on. Open APIs need to follow strict standards so that they do not break external developers' applications. In The Graph Network, a subgraph developer must consider Indexers and how long it takes them to sync a new subgraph **as well as** other developers who are using their subgraphs.
-
-### Updating the Metadata of a Subgraph
-
-You can update the metadata of your subgraphs without having to publish a new version. The metadata includes the subgraph name, image, description, website URL, source code URL, and categories. Developers can do this by updating their subgraph details in Subgraph Studio, where you can edit all applicable fields.
-
-Make sure **Update Subgraph Details in Explorer** is checked and click **Save**. If this is checked, an on-chain transaction will be generated that updates subgraph details in the Explorer without having to publish a new version with a new deployment.
-
-## Best Practices for Deploying a Subgraph to The Graph Network
-
-1. Leveraging an ENS name for Subgraph Development:
-
-- Set up your ENS [here](https://app.ens.domains/)
-- Add your ENS name to your settings [here](https://thegraph.com/explorer/settings?view=display-name).
-
-2. The more filled out your profiles are, the better the chances for your subgraphs to be indexed and curated.
-
-## Deprecating a Subgraph on The Graph Network
-
-Follow the steps [here](/managing/transfer-and-deprecate-a-subgraph) to deprecate your subgraph and remove it from The Graph Network.
-
-## Querying a Subgraph + Billing on The Graph Network
-
-The hosted service was set up to allow developers to deploy their subgraphs without any restrictions.
-
-On The Graph Network, query fees have to be paid as a core part of the protocol's incentives. For more information on subscribing to APIs and paying the query fees, check out the billing documentation [here](/billing/).
-
-## Additional Resources
-
-If you're still confused, fear not! 
Check out the following resources or watch our video guide below on migrating subgraphs to the decentralized network:

- [The Graph Network Contracts](https://github.com/graphprotocol/contracts)
- Curation Contract - the underlying contract that the GNS wraps - Address: `0x8fe00a685bcb3b2cc296ff6ffeab10aca4ce1538`
- [Subgraph Studio documentation](/deploying/subgraph-studio)

diff --git a/website/route-lockfile.txt b/website/route-lockfile.txt index de9e6c0a48c3..cef279b371fe 100644 --- a/website/route-lockfile.txt +++ b/website/route-lockfile.txt @@ -9,10 +9,10 @@ /ar/chain-integration-overview/ /ar/cookbook/arweave/ /ar/cookbook/avoid-eth-calls/ -/ar/cookbook/base-testnet/ /ar/cookbook/cosmos/ /ar/cookbook/derivedfrom/ /ar/cookbook/enums/ +/ar/cookbook/grafting-hotfix/ /ar/cookbook/grafting/ /ar/cookbook/how-to-secure-api-keys-using-nextjs-server-components/ /ar/cookbook/immutable-entities-bytes-as-ids/ @@ -21,8 +21,8 @@ /ar/cookbook/subgraph-debug-forking/ /ar/cookbook/subgraph-uncrashable/ /ar/cookbook/substreams-powered-subgraphs/ +/ar/cookbook/timeseries/ /ar/cookbook/transfer-to-the-graph/ -/ar/cookbook/upgrading-a-subgraph/ /ar/deploying/deploy-using-subgraph-studio/ /ar/deploying/deploying-a-subgraph-to-hosted/ /ar/deploying/hosted-service/ @@ -83,10 +83,10 @@ /cs/chain-integration-overview/ /cs/cookbook/arweave/ /cs/cookbook/avoid-eth-calls/ -/cs/cookbook/base-testnet/ /cs/cookbook/cosmos/ /cs/cookbook/derivedfrom/ /cs/cookbook/enums/ +/cs/cookbook/grafting-hotfix/ /cs/cookbook/grafting/ /cs/cookbook/how-to-secure-api-keys-using-nextjs-server-components/ /cs/cookbook/immutable-entities-bytes-as-ids/ @@ -95,8 +95,8 @@ /cs/cookbook/subgraph-debug-forking/ /cs/cookbook/subgraph-uncrashable/ /cs/cookbook/substreams-powered-subgraphs/ +/cs/cookbook/timeseries/ /cs/cookbook/transfer-to-the-graph/ -/cs/cookbook/upgrading-a-subgraph/ /cs/deploying/deploy-using-subgraph-studio/ /cs/deploying/deploying-a-subgraph-to-hosted/ /cs/deploying/deploying-a-subgraph-to-studio/ @@ -156,10 +156,10 @@ /de/chain-integration-overview/ /de/cookbook/arweave/ /de/cookbook/avoid-eth-calls/
-/de/cookbook/base-testnet/ /de/cookbook/cosmos/ /de/cookbook/derivedfrom/ /de/cookbook/enums/ +/de/cookbook/grafting-hotfix/ /de/cookbook/grafting/ /de/cookbook/how-to-secure-api-keys-using-nextjs-server-components/ /de/cookbook/immutable-entities-bytes-as-ids/ @@ -168,8 +168,8 @@ /de/cookbook/subgraph-debug-forking/ /de/cookbook/subgraph-uncrashable/ /de/cookbook/substreams-powered-subgraphs/ +/de/cookbook/timeseries/ /de/cookbook/transfer-to-the-graph/ -/de/cookbook/upgrading-a-subgraph/ /de/deploying/deploy-using-subgraph-studio/ /de/deploying/deploying-a-subgraph-to-hosted/ /de/deploying/deploying-a-subgraph-to-studio/ @@ -234,6 +234,7 @@ /en/cookbook/cosmos/ /en/cookbook/derivedfrom/ /en/cookbook/enums/ +/en/cookbook/grafting-hotfix/ /en/cookbook/grafting/ /en/cookbook/how-to-secure-api-keys-using-nextjs-server-components/ /en/cookbook/immutable-entities-bytes-as-ids/ @@ -242,6 +243,7 @@ /en/cookbook/subgraph-debug-forking/ /en/cookbook/subgraph-uncrashable/ /en/cookbook/substreams-powered-subgraphs/ +/en/cookbook/timeseries/ /en/cookbook/transfer-to-the-graph/ /en/deploying/deploy-using-subgraph-studio/ /en/deploying/multiple-networks/ @@ -304,10 +306,10 @@ /es/chain-integration-overview/ /es/cookbook/arweave/ /es/cookbook/avoid-eth-calls/ -/es/cookbook/base-testnet/ /es/cookbook/cosmos/ /es/cookbook/derivedfrom/ /es/cookbook/enums/ +/es/cookbook/grafting-hotfix/ /es/cookbook/grafting/ /es/cookbook/how-to-secure-api-keys-using-nextjs-server-components/ /es/cookbook/immutable-entities-bytes-as-ids/ @@ -316,8 +318,8 @@ /es/cookbook/subgraph-debug-forking/ /es/cookbook/subgraph-uncrashable/ /es/cookbook/substreams-powered-subgraphs/ +/es/cookbook/timeseries/ /es/cookbook/transfer-to-the-graph/ -/es/cookbook/upgrading-a-subgraph/ /es/deploying/deploy-using-subgraph-studio/ /es/deploying/deploying-a-subgraph-to-hosted/ /es/deploying/deploying-a-subgraph-to-studio/ @@ -379,10 +381,10 @@ /fr/chain-integration-overview/ /fr/cookbook/arweave/ 
/fr/cookbook/avoid-eth-calls/ -/fr/cookbook/base-testnet/ /fr/cookbook/cosmos/ /fr/cookbook/derivedfrom/ /fr/cookbook/enums/ +/fr/cookbook/grafting-hotfix/ /fr/cookbook/grafting/ /fr/cookbook/how-to-secure-api-keys-using-nextjs-server-components/ /fr/cookbook/immutable-entities-bytes-as-ids/ @@ -391,8 +393,8 @@ /fr/cookbook/subgraph-debug-forking/ /fr/cookbook/subgraph-uncrashable/ /fr/cookbook/substreams-powered-subgraphs/ +/fr/cookbook/timeseries/ /fr/cookbook/transfer-to-the-graph/ -/fr/cookbook/upgrading-a-subgraph/ /fr/deploying/deploy-using-subgraph-studio/ /fr/deploying/deploying-a-subgraph-to-hosted/ /fr/deploying/deploying-a-subgraph-to-studio/ @@ -451,16 +453,16 @@ /ha/billing/ /ha/chain-integration-overview/ /ha/cookbook/arweave/ -/ha/cookbook/base-testnet/ /ha/cookbook/cosmos/ /ha/cookbook/enums/ +/ha/cookbook/grafting-hotfix/ /ha/cookbook/grafting/ /ha/cookbook/near/ /ha/cookbook/subgraph-debug-forking/ /ha/cookbook/subgraph-uncrashable/ /ha/cookbook/substreams-powered-subgraphs/ +/ha/cookbook/timeseries/ /ha/cookbook/transfer-to-the-graph/ -/ha/cookbook/upgrading-a-subgraph/ /ha/deploying/deploy-using-subgraph-studio/ /ha/deploying/deploying-a-subgraph-to-hosted/ /ha/deploying/deploying-a-subgraph-to-studio/ @@ -521,10 +523,10 @@ /hi/chain-integration-overview/ /hi/cookbook/arweave/ /hi/cookbook/avoid-eth-calls/ -/hi/cookbook/base-testnet/ /hi/cookbook/cosmos/ /hi/cookbook/derivedfrom/ /hi/cookbook/enums/ +/hi/cookbook/grafting-hotfix/ /hi/cookbook/grafting/ /hi/cookbook/how-to-secure-api-keys-using-nextjs-server-components/ /hi/cookbook/immutable-entities-bytes-as-ids/ @@ -533,8 +535,8 @@ /hi/cookbook/subgraph-debug-forking/ /hi/cookbook/subgraph-uncrashable/ /hi/cookbook/substreams-powered-subgraphs/ +/hi/cookbook/timeseries/ /hi/cookbook/transfer-to-the-graph/ -/hi/cookbook/upgrading-a-subgraph/ /hi/deploying/deploy-using-subgraph-studio/ /hi/deploying/deploying-a-subgraph-to-hosted/ /hi/deploying/deploying-a-subgraph-to-studio/ @@ -596,10 +598,10 
@@ /it/chain-integration-overview/ /it/cookbook/arweave/ /it/cookbook/avoid-eth-calls/ -/it/cookbook/base-testnet/ /it/cookbook/cosmos/ /it/cookbook/derivedfrom/ /it/cookbook/enums/ +/it/cookbook/grafting-hotfix/ /it/cookbook/grafting/ /it/cookbook/how-to-secure-api-keys-using-nextjs-server-components/ /it/cookbook/immutable-entities-bytes-as-ids/ @@ -608,8 +610,8 @@ /it/cookbook/subgraph-debug-forking/ /it/cookbook/subgraph-uncrashable/ /it/cookbook/substreams-powered-subgraphs/ +/it/cookbook/timeseries/ /it/cookbook/transfer-to-the-graph/ -/it/cookbook/upgrading-a-subgraph/ /it/deploying/deploy-using-subgraph-studio/ /it/deploying/deploying-a-subgraph-to-hosted/ /it/deploying/deploying-a-subgraph-to-studio/ @@ -671,10 +673,10 @@ /ja/chain-integration-overview/ /ja/cookbook/arweave/ /ja/cookbook/avoid-eth-calls/ -/ja/cookbook/base-testnet/ /ja/cookbook/cosmos/ /ja/cookbook/derivedfrom/ /ja/cookbook/enums/ +/ja/cookbook/grafting-hotfix/ /ja/cookbook/grafting/ /ja/cookbook/how-to-secure-api-keys-using-nextjs-server-components/ /ja/cookbook/immutable-entities-bytes-as-ids/ @@ -683,8 +685,8 @@ /ja/cookbook/subgraph-debug-forking/ /ja/cookbook/subgraph-uncrashable/ /ja/cookbook/substreams-powered-subgraphs/ +/ja/cookbook/timeseries/ /ja/cookbook/transfer-to-the-graph/ -/ja/cookbook/upgrading-a-subgraph/ /ja/deploying/deploy-using-subgraph-studio/ /ja/deploying/deploying-a-subgraph-to-hosted/ /ja/deploying/deploying-a-subgraph-to-studio/ @@ -744,10 +746,10 @@ /ko/chain-integration-overview/ /ko/cookbook/arweave/ /ko/cookbook/avoid-eth-calls/ -/ko/cookbook/base-testnet/ /ko/cookbook/cosmos/ /ko/cookbook/derivedfrom/ /ko/cookbook/enums/ +/ko/cookbook/grafting-hotfix/ /ko/cookbook/grafting/ /ko/cookbook/how-to-secure-api-keys-using-nextjs-server-components/ /ko/cookbook/immutable-entities-bytes-as-ids/ @@ -756,8 +758,8 @@ /ko/cookbook/subgraph-debug-forking/ /ko/cookbook/subgraph-uncrashable/ /ko/cookbook/substreams-powered-subgraphs/ +/ko/cookbook/timeseries/ 
/ko/cookbook/transfer-to-the-graph/ -/ko/cookbook/upgrading-a-subgraph/ /ko/deploying/deploy-using-subgraph-studio/ /ko/deploying/deploying-a-subgraph-to-hosted/ /ko/deploying/deploying-a-subgraph-to-studio/ @@ -819,10 +821,10 @@ /mr/chain-integration-overview/ /mr/cookbook/arweave/ /mr/cookbook/avoid-eth-calls/ -/mr/cookbook/base-testnet/ /mr/cookbook/cosmos/ /mr/cookbook/derivedfrom/ /mr/cookbook/enums/ +/mr/cookbook/grafting-hotfix/ /mr/cookbook/grafting/ /mr/cookbook/how-to-secure-api-keys-using-nextjs-server-components/ /mr/cookbook/immutable-entities-bytes-as-ids/ @@ -831,8 +833,8 @@ /mr/cookbook/subgraph-debug-forking/ /mr/cookbook/subgraph-uncrashable/ /mr/cookbook/substreams-powered-subgraphs/ +/mr/cookbook/timeseries/ /mr/cookbook/transfer-to-the-graph/ -/mr/cookbook/upgrading-a-subgraph/ /mr/deploying/deploy-using-subgraph-studio/ /mr/deploying/deploying-a-subgraph-to-hosted/ /mr/deploying/deploying-a-subgraph-to-studio/ @@ -892,10 +894,10 @@ /nl/chain-integration-overview/ /nl/cookbook/arweave/ /nl/cookbook/avoid-eth-calls/ -/nl/cookbook/base-testnet/ /nl/cookbook/cosmos/ /nl/cookbook/derivedfrom/ /nl/cookbook/enums/ +/nl/cookbook/grafting-hotfix/ /nl/cookbook/grafting/ /nl/cookbook/how-to-secure-api-keys-using-nextjs-server-components/ /nl/cookbook/immutable-entities-bytes-as-ids/ @@ -904,8 +906,8 @@ /nl/cookbook/subgraph-debug-forking/ /nl/cookbook/subgraph-uncrashable/ /nl/cookbook/substreams-powered-subgraphs/ +/nl/cookbook/timeseries/ /nl/cookbook/transfer-to-the-graph/ -/nl/cookbook/upgrading-a-subgraph/ /nl/deploying/deploy-using-subgraph-studio/ /nl/deploying/deploying-a-subgraph-to-hosted/ /nl/deploying/deploying-a-subgraph-to-studio/ @@ -965,10 +967,10 @@ /pl/chain-integration-overview/ /pl/cookbook/arweave/ /pl/cookbook/avoid-eth-calls/ -/pl/cookbook/base-testnet/ /pl/cookbook/cosmos/ /pl/cookbook/derivedfrom/ /pl/cookbook/enums/ +/pl/cookbook/grafting-hotfix/ /pl/cookbook/grafting/ 
/pl/cookbook/how-to-secure-api-keys-using-nextjs-server-components/ /pl/cookbook/immutable-entities-bytes-as-ids/ @@ -977,8 +979,8 @@ /pl/cookbook/subgraph-debug-forking/ /pl/cookbook/subgraph-uncrashable/ /pl/cookbook/substreams-powered-subgraphs/ +/pl/cookbook/timeseries/ /pl/cookbook/transfer-to-the-graph/ -/pl/cookbook/upgrading-a-subgraph/ /pl/deploying/deploy-using-subgraph-studio/ /pl/deploying/deploying-a-subgraph-to-hosted/ /pl/deploying/deploying-a-subgraph-to-studio/ @@ -1040,10 +1042,10 @@ /pt/chain-integration-overview/ /pt/cookbook/arweave/ /pt/cookbook/avoid-eth-calls/ -/pt/cookbook/base-testnet/ /pt/cookbook/cosmos/ /pt/cookbook/derivedfrom/ /pt/cookbook/enums/ +/pt/cookbook/grafting-hotfix/ /pt/cookbook/grafting/ /pt/cookbook/how-to-secure-api-keys-using-nextjs-server-components/ /pt/cookbook/immutable-entities-bytes-as-ids/ @@ -1052,8 +1054,8 @@ /pt/cookbook/subgraph-debug-forking/ /pt/cookbook/subgraph-uncrashable/ /pt/cookbook/substreams-powered-subgraphs/ +/pt/cookbook/timeseries/ /pt/cookbook/transfer-to-the-graph/ -/pt/cookbook/upgrading-a-subgraph/ /pt/deploying/deploy-using-subgraph-studio/ /pt/deploying/deploying-a-subgraph-to-hosted/ /pt/deploying/deploying-a-subgraph-to-studio/ @@ -1113,10 +1115,10 @@ /ro/chain-integration-overview/ /ro/cookbook/arweave/ /ro/cookbook/avoid-eth-calls/ -/ro/cookbook/base-testnet/ /ro/cookbook/cosmos/ /ro/cookbook/derivedfrom/ /ro/cookbook/enums/ +/ro/cookbook/grafting-hotfix/ /ro/cookbook/grafting/ /ro/cookbook/how-to-secure-api-keys-using-nextjs-server-components/ /ro/cookbook/immutable-entities-bytes-as-ids/ @@ -1125,8 +1127,8 @@ /ro/cookbook/subgraph-debug-forking/ /ro/cookbook/subgraph-uncrashable/ /ro/cookbook/substreams-powered-subgraphs/ +/ro/cookbook/timeseries/ /ro/cookbook/transfer-to-the-graph/ -/ro/cookbook/upgrading-a-subgraph/ /ro/deploying/deploy-using-subgraph-studio/ /ro/deploying/deploying-a-subgraph-to-hosted/ /ro/deploying/deploying-a-subgraph-to-studio/ @@ -1188,10 +1190,10 @@ 
/ru/chain-integration-overview/ /ru/cookbook/arweave/ /ru/cookbook/avoid-eth-calls/ -/ru/cookbook/base-testnet/ /ru/cookbook/cosmos/ /ru/cookbook/derivedfrom/ /ru/cookbook/enums/ +/ru/cookbook/grafting-hotfix/ /ru/cookbook/grafting/ /ru/cookbook/how-to-secure-api-keys-using-nextjs-server-components/ /ru/cookbook/immutable-entities-bytes-as-ids/ @@ -1200,8 +1202,8 @@ /ru/cookbook/subgraph-debug-forking/ /ru/cookbook/subgraph-uncrashable/ /ru/cookbook/substreams-powered-subgraphs/ +/ru/cookbook/timeseries/ /ru/cookbook/transfer-to-the-graph/ -/ru/cookbook/upgrading-a-subgraph/ /ru/deploying/deploy-using-subgraph-studio/ /ru/deploying/deploying-a-subgraph-to-hosted/ /ru/deploying/deploying-a-subgraph-to-studio/ @@ -1263,10 +1265,10 @@ /sv/chain-integration-overview/ /sv/cookbook/arweave/ /sv/cookbook/avoid-eth-calls/ -/sv/cookbook/base-testnet/ /sv/cookbook/cosmos/ /sv/cookbook/derivedfrom/ /sv/cookbook/enums/ +/sv/cookbook/grafting-hotfix/ /sv/cookbook/grafting/ /sv/cookbook/how-to-secure-api-keys-using-nextjs-server-components/ /sv/cookbook/immutable-entities-bytes-as-ids/ @@ -1275,8 +1277,8 @@ /sv/cookbook/subgraph-debug-forking/ /sv/cookbook/subgraph-uncrashable/ /sv/cookbook/substreams-powered-subgraphs/ +/sv/cookbook/timeseries/ /sv/cookbook/transfer-to-the-graph/ -/sv/cookbook/upgrading-a-subgraph/ /sv/deploying/deploy-using-subgraph-studio/ /sv/deploying/deploying-a-subgraph-to-hosted/ /sv/deploying/deploying-a-subgraph-to-studio/ @@ -1338,10 +1340,10 @@ /tr/chain-integration-overview/ /tr/cookbook/arweave/ /tr/cookbook/avoid-eth-calls/ -/tr/cookbook/base-testnet/ /tr/cookbook/cosmos/ /tr/cookbook/derivedfrom/ /tr/cookbook/enums/ +/tr/cookbook/grafting-hotfix/ /tr/cookbook/grafting/ /tr/cookbook/how-to-secure-api-keys-using-nextjs-server-components/ /tr/cookbook/immutable-entities-bytes-as-ids/ @@ -1350,8 +1352,8 @@ /tr/cookbook/subgraph-debug-forking/ /tr/cookbook/subgraph-uncrashable/ /tr/cookbook/substreams-powered-subgraphs/ +/tr/cookbook/timeseries/ 
/tr/cookbook/transfer-to-the-graph/ -/tr/cookbook/upgrading-a-subgraph/ /tr/deploying/deploy-using-subgraph-studio/ /tr/deploying/deploying-a-subgraph-to-hosted/ /tr/deploying/deploying-a-subgraph-to-studio/ @@ -1411,10 +1413,10 @@ /uk/chain-integration-overview/ /uk/cookbook/arweave/ /uk/cookbook/avoid-eth-calls/ -/uk/cookbook/base-testnet/ /uk/cookbook/cosmos/ /uk/cookbook/derivedfrom/ /uk/cookbook/enums/ +/uk/cookbook/grafting-hotfix/ /uk/cookbook/grafting/ /uk/cookbook/how-to-secure-api-keys-using-nextjs-server-components/ /uk/cookbook/immutable-entities-bytes-as-ids/ @@ -1423,8 +1425,8 @@ /uk/cookbook/subgraph-debug-forking/ /uk/cookbook/subgraph-uncrashable/ /uk/cookbook/substreams-powered-subgraphs/ +/uk/cookbook/timeseries/ /uk/cookbook/transfer-to-the-graph/ -/uk/cookbook/upgrading-a-subgraph/ /uk/deploying/deploy-using-subgraph-studio/ /uk/deploying/deploying-a-subgraph-to-hosted/ /uk/deploying/deploying-a-subgraph-to-studio/ @@ -1486,10 +1488,10 @@ /ur/chain-integration-overview/ /ur/cookbook/arweave/ /ur/cookbook/avoid-eth-calls/ -/ur/cookbook/base-testnet/ /ur/cookbook/cosmos/ /ur/cookbook/derivedfrom/ /ur/cookbook/enums/ +/ur/cookbook/grafting-hotfix/ /ur/cookbook/grafting/ /ur/cookbook/how-to-secure-api-keys-using-nextjs-server-components/ /ur/cookbook/immutable-entities-bytes-as-ids/ @@ -1498,8 +1500,8 @@ /ur/cookbook/subgraph-debug-forking/ /ur/cookbook/subgraph-uncrashable/ /ur/cookbook/substreams-powered-subgraphs/ +/ur/cookbook/timeseries/ /ur/cookbook/transfer-to-the-graph/ -/ur/cookbook/upgrading-a-subgraph/ /ur/deploying/deploy-using-subgraph-studio/ /ur/deploying/deploying-a-subgraph-to-hosted/ /ur/deploying/deploying-a-subgraph-to-studio/ @@ -1559,10 +1561,10 @@ /vi/chain-integration-overview/ /vi/cookbook/arweave/ /vi/cookbook/avoid-eth-calls/ -/vi/cookbook/base-testnet/ /vi/cookbook/cosmos/ /vi/cookbook/derivedfrom/ /vi/cookbook/enums/ +/vi/cookbook/grafting-hotfix/ /vi/cookbook/grafting/ 
/vi/cookbook/how-to-secure-api-keys-using-nextjs-server-components/ /vi/cookbook/immutable-entities-bytes-as-ids/ @@ -1571,8 +1573,8 @@ /vi/cookbook/subgraph-debug-forking/ /vi/cookbook/subgraph-uncrashable/ /vi/cookbook/substreams-powered-subgraphs/ +/vi/cookbook/timeseries/ /vi/cookbook/transfer-to-the-graph/ -/vi/cookbook/upgrading-a-subgraph/ /vi/deploying/deploy-using-subgraph-studio/ /vi/deploying/deploying-a-subgraph-to-hosted/ /vi/deploying/deploying-a-subgraph-to-studio/ @@ -1632,10 +1634,10 @@ /yo/chain-integration-overview/ /yo/cookbook/arweave/ /yo/cookbook/avoid-eth-calls/ -/yo/cookbook/base-testnet/ /yo/cookbook/cosmos/ /yo/cookbook/derivedfrom/ /yo/cookbook/enums/ +/yo/cookbook/grafting-hotfix/ /yo/cookbook/grafting/ /yo/cookbook/how-to-secure-api-keys-using-nextjs-server-components/ /yo/cookbook/immutable-entities-bytes-as-ids/ @@ -1644,8 +1646,8 @@ /yo/cookbook/subgraph-debug-forking/ /yo/cookbook/subgraph-uncrashable/ /yo/cookbook/substreams-powered-subgraphs/ +/yo/cookbook/timeseries/ /yo/cookbook/transfer-to-the-graph/ -/yo/cookbook/upgrading-a-subgraph/ /yo/deploying/deploy-using-subgraph-studio/ /yo/deploying/deploying-a-subgraph-to-hosted/ /yo/deploying/deploying-a-subgraph-to-studio/ @@ -1707,10 +1709,10 @@ /zh/chain-integration-overview/ /zh/cookbook/arweave/ /zh/cookbook/avoid-eth-calls/ -/zh/cookbook/base-testnet/ /zh/cookbook/cosmos/ /zh/cookbook/derivedfrom/ /zh/cookbook/enums/ +/zh/cookbook/grafting-hotfix/ /zh/cookbook/grafting/ /zh/cookbook/how-to-secure-api-keys-using-nextjs-server-components/ /zh/cookbook/immutable-entities-bytes-as-ids/ @@ -1719,8 +1721,8 @@ /zh/cookbook/subgraph-debug-forking/ /zh/cookbook/subgraph-uncrashable/ /zh/cookbook/substreams-powered-subgraphs/ +/zh/cookbook/timeseries/ /zh/cookbook/transfer-to-the-graph/ -/zh/cookbook/upgrading-a-subgraph/ /zh/deploying/deploy-using-subgraph-studio/ /zh/deploying/deploying-a-subgraph-to-hosted/ /zh/deploying/deploying-a-subgraph-to-studio/