diff --git a/website/pages/ar/about.mdx b/website/pages/ar/about.mdx
index 6f3c43739c43..7ac49dc47560 100644
--- a/website/pages/ar/about.mdx
+++ b/website/pages/ar/about.mdx
@@ -10,17 +10,17 @@ The Graph is a decentralized protocol for indexing and querying blockchain data.
المشاريع ذات العقود الذكية المعقدة مثل [ Uniswap ](https://uniswap.org/) و NFTs مثل [ Bored Ape Yacht Club ](https://boredapeyachtclub.com/) تقوم بتخزين البيانات على Ethereum blockchain ، مما يجعل من الصعب قراءة أي شيء بشكل مباشر عدا البيانات الأساسية من blockchain.
-في حالة Bored Ape Yacht Club ، يمكننا إجراء قراءات أساسية على [ العقد ](https://etherscan.io/address/0xbc4ca0eda7647a8ab7c2061c2e118a18a936f13d#code) مثل الحصول على مالك Ape معين ،أو الحصول على محتوى URI لـ Ape وذلك بناء على ال ID الخاص به، أو إجمالي العرض ، حيث تتم برمجة عمليات القراءة هذه بشكل مباشر في العقد الذكي ، ولكن في العالم الحقيقي هناك استعلامات وعمليات أكثر تقدمًا غير ممكنة مثل التجميع والبحث والعلاقات والفلترة الغير بسيطة. فمثلا، إذا أردنا الاستعلام عن Apes مملوكة لعنوان معين ،وفلترته حسب إحدى خصائصه، فلن نتمكن من الحصول على تلك المعلومات من خلال التفاعل بشكل مباشر مع العقد نفسه.
+In the case of Bored Ape Yacht Club, we can perform basic read operations on [the contract](https://etherscan.io/address/0xbc4ca0eda7647a8ab7c2061c2e118a18a936f13d#code) like getting the owner of a certain Ape, getting the content URI of an Ape based on their ID, or the total supply. This can be done because these read operations are programmed directly into the smart contract. However, more advanced real-world queries and operations like aggregation, search, relationships, and non-trivial filtering are _not_ possible. For example, if we wanted to query for Apes that are owned by a certain address and filter by one of its characteristics, we would not be able to get that information by interacting directly with the contract itself.
-للحصول على هذه البيانات، يجب معالجة كل [`التحويلات`](https://etherscan.io/address/0xbc4ca0eda7647a8ab7c2061c2e118a18a936f13d#code#L1746) التي حدثت، وقراءة البيانات الوصفية من IPFS باستخدام Token ID و IPFS hash، ومن ثم تجميعه. حتى بالنسبة لهذه الأنواع من الأسئلة البسيطة نسبيا ، قد يستغرق الأمر **ساعات أو حتى أيام** لتطبيق لامركزي (dapp) يعمل في متصفح للحصول على إجابة.
+To get this data, you would have to process every single [`transfer`](https://etherscan.io/address/0xbc4ca0eda7647a8ab7c2061c2e118a18a936f13d#code#L1746) event ever emitted, read the metadata from IPFS using the Token ID and IPFS hash, and then aggregate it. It would take **hours or even days** for a decentralized application (dapp) running in a browser to get an answer to these simple questions.
You could also build out your own server, process the transactions there, save them to a database, and build an API endpoint on top of it all in order to query the data. However, this option is [resource intensive](/network/benefits/), needs maintenance, presents a single point of failure, and breaks important security properties required for decentralization.
**إن فهرسة بيانات الـ blockchain أمر صعب.**
-خصائص الـ Blockchain مثل finality أو chain reorganizations أو uncled blocks تعقد هذه العملية بشكل أكبر ، ولن تجعلها مضيعة للوقت فحسب ، بل أيضا تجعلها من الصعب من الناحية النظرية جلب نتائج الاستعلام الصحيحة من بيانات الـ blockchain.
+Blockchain properties like finality, chain reorganizations, or uncled blocks complicate this process further. They make it time consuming and conceptually hard to retrieve correct query results from blockchain data.
-يقوم The Graph بحل هذا الأمر من خلال بروتوكول لامركزي والذي يقوم بفهرسة والاستعلام عن بيانات الـ blockchain بكفاءة عالية. حيث يمكن بعد ذلك الاستعلام عن APIs (الـ "subgraphs" المفهرسة) باستخدام GraphQL API قياسية. اليوم ، هناك خدمة مستضافة بالإضافة إلى بروتوكول لامركزي بنفس القدرات. كلاهما مدعوم بتطبيق مفتوح المصدر لـ [ Graph Node ](https://github.com/graphprotocol/graph-node).
+The Graph provides a solution with a decentralized protocol that indexes and enables the efficient and high-performance querying of blockchain data. These APIs (indexed "subgraphs") can then be queried with a standard GraphQL API. Today, there is a hosted service as well as a decentralized protocol with the same capabilities. Both are backed by the open source implementation of [Graph Node](https://github.com/graphprotocol/graph-node).
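+
+As a minimal sketch (the endpoint URL and the `tokens` entity/fields are placeholders, not a real subgraph), querying such a GraphQL API from a dapp could look like this:
+
+```typescript
+// Hypothetical example: query a subgraph's GraphQL endpoint for tokens owned by an address.
+// QUERY_URL and the `tokens` entity are assumptions for illustration only.
+const QUERY_URL = 'https://api.studio.thegraph.com/query/<id>/<subgraph-name>/<version>'
+
+async function fetchTokensByOwner(owner: string): Promise<unknown[]> {
+  const query = `{
+    tokens(where: { owner: "${owner.toLowerCase()}" }) {
+      id
+      tokenURI
+    }
+  }`
+  const response = await fetch(QUERY_URL, {
+    method: 'POST',
+    headers: { 'Content-Type': 'application/json' },
+    body: JSON.stringify({ query }),
+  })
+  const { data } = await response.json()
+  return data.tokens
+}
+```
+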
## كيف يعمل The Graph
@@ -42,6 +42,6 @@ This diagram gives more detail about the flow of data once a subgraph manifest h
## الخطوات التالية
-في الأقسام التالية سوف نخوض في المزيد من التفاصيل حول كيفية تعريف الـ subgraphs ، وكيفية نشرها ،وكيفية الاستعلام عن البيانات من الفهارس التي يبنيها الـ Graph Node.
+The following sections provide more detail on how to define subgraphs, how to deploy them, and how to query data from the indexes that Graph Node builds.
-قبل أن تبدأ في كتابة الـ subgraph الخاص بك ، قد ترغب في إلقاء نظرة على The Graph Explorer واستكشاف بعض الـ subgraphs التي تم نشرها. تحتوي الصفحة الخاصة بكل subgraph على playground والذي يتيح لك الاستعلام عن بيانات الـ subgraph باستخدام GraphQL.
+Before you start writing your own subgraph, you might want to have a look at [Graph Explorer](https://thegraph.com/explorer) and explore some of the subgraphs that have already been deployed. The page for each subgraph contains a playground that lets you query that subgraph's data with GraphQL.
diff --git a/website/pages/ar/arbitrum/arbitrum-faq.mdx b/website/pages/ar/arbitrum/arbitrum-faq.mdx
index 2d3f7ee483d5..98346d82a41d 100644
--- a/website/pages/ar/arbitrum/arbitrum-faq.mdx
+++ b/website/pages/ar/arbitrum/arbitrum-faq.mdx
@@ -20,12 +20,20 @@ The Graph community decided to move forward with Arbitrum last year after the ou
## ما الذي يجب علي فعله لاستخدام The Graph في L2؟
-يقوم المستخدمون بربط GRT و ETH باستخدام إحدى الطرق التالية:
+The Graph’s billing system accepts GRT on Arbitrum, and users will need ETH on Arbitrum to pay their gas. While The Graph protocol started on Ethereum Mainnet, all activity, including the billing contracts, is now on Arbitrum One.
-- [The Graph Bridge on Arbitrum](https://bridge.arbitrum.io/?l2ChainId=42161)
-- [TransferTo](https://transferto.xyz/swap)
-- [Connext Bridge](https://bridge.connext.network/)
-- [Hop Exchange](https://app.hop.exchange/#/send?token=ETH)
+Consequently, to pay for queries, you need GRT on Arbitrum. Here are a few different ways to achieve this:
+
+- If you already have GRT on Ethereum, you can bridge it to Arbitrum. You can do this via the GRT bridging option provided in Subgraph Studio or by using one of the following bridges:
+
+ - [The Arbitrum Bridge](https://bridge.arbitrum.io/?l2ChainId=42161)
+ - [TransferTo](https://transferto.xyz/swap)
+
+- If you have other assets on Arbitrum, you can swap them for GRT through a swapping protocol like Uniswap.
+
+- Alternatively, you can acquire GRT directly on Arbitrum through a decentralized exchange.
+
+Once you have GRT on Arbitrum, you can add it to your billing balance.
للاستفادة من استخدام The Graph على L2 ، استخدم قائمة المنسدلة للتبديل بين الشبكات.
@@ -45,7 +53,7 @@ Please help [test the network](https://testnet.thegraph.com/explorer) on L2 and
## هل توجد أي مخاطر مرتبطة بتوسيع الشبكة إلى L2؟
-All smart contracts have been thoroughly [audited](https://github.com/graphprotocol/contracts/blob/dev/audits/OpenZeppelin/2022-07-graph-arbitrum-bridge-audit.pdf).
+All smart contracts have been thoroughly [audited](https://github.com/graphprotocol/contracts/blob/main/packages/contracts/audits/OpenZeppelin/2022-07-graph-arbitrum-bridge-audit.pdf).
Everything has been tested thoroughly, and a contingency plan is in place to ensure a safe and seamless transition. Details can be found [here](https://forum.thegraph.com/t/gip-0037-the-graph-arbitrum-deployment-with-linear-rewards-minted-in-l2/3551#risks-and-security-considerations-20).
@@ -75,4 +83,4 @@ The bridge has been [heavily audited](https://code4rena.com/contests/2022-10-the
Adding GRT to your Arbitrum billing balance can be done with a one-click experience in [Subgraph Studio](https://thegraph.com/studio/). You'll be able to easily bridge your GRT to Arbitrum and fill your API keys in one transaction.
-Visit the [Billing page](https://thegraph.com/docs/en/billing/) for more detailed instructions on adding, withdrawing, or acquiring GRT.
+Visit the [Billing page](/billing/) for more detailed instructions on adding, withdrawing, or acquiring GRT.
diff --git a/website/pages/ar/arbitrum/l2-transfer-tools-guide.mdx b/website/pages/ar/arbitrum/l2-transfer-tools-guide.mdx
index f88e5b58f0e9..33b5b1628783 100644
--- a/website/pages/ar/arbitrum/l2-transfer-tools-guide.mdx
+++ b/website/pages/ar/arbitrum/l2-transfer-tools-guide.mdx
@@ -68,7 +68,7 @@ title: L2 Transfer Tools Guide
إذا قمت بتنفيذ هذه الخطوة، \*\*يجب عليك التأكد من أنك ستستكمل الخطوة 3 في غضون 7 أيام، وإلا فإنك ستفقد الغراف الفرعي والإشارة GRT الخاصة بك. يرجع ذلك إلى آلية التواصل بين الطبقة الأولى والطبقة الثانية في أربترم: الرسائل التي ترسل عبر الجسر هي "تذاكر قابلة لإعادة المحاولة" يجب تنفيذها في غضون 7 أيام، وقد يتطلب التنفيذ الأولي إعادة المحاولة إذا كان هناك زيادة في سعر الغاز على أربترم.
-! [ابدأ النقل إلى الطبقة الثانية] (/ img / startTransferL2.png)
+![Start the transfer to L2](/img/startTransferL2.png)
## الخطوة 2: الانتظار حتى يتم نقل الغراف الفرعي إلى الطبقة الثانية
diff --git a/website/pages/ar/billing.mdx b/website/pages/ar/billing.mdx
index 763ebdbdaf2a..68ee9ca693bd 100644
--- a/website/pages/ar/billing.mdx
+++ b/website/pages/ar/billing.mdx
@@ -2,106 +2,95 @@
title: الفوترة
---
-> يتم إصدار الفواتير على أساس أسبوعي.
+## Subgraph Billing Plans
-يوجد خياران لدفع رسوم الاستعلام:
+There are two plans to use when querying subgraphs on The Graph Network.
-- [الدفع بالعملة الورقية مع Banxa](#billing-with-banxa)
-- [الدفع بمحفظة التشفير](#billing-on-arbitrum)
+- **Free Plan**: The Free Plan includes 100,000 free monthly queries with full access to the Subgraph Studio testing environment. This plan is designed for hobbyists, hackathoners, and those with side projects to try out The Graph before scaling their dapp.
-## الفواتير مع Banxa
+- **Growth Plan**: The Growth Plan includes everything in the Free Plan; all queries beyond 100,000 monthly queries require payment with GRT or a credit card. The Growth Plan is flexible enough to cover teams that have established dapps across a variety of use cases.
-يمكّنك Banxa من تجاوز الحاجة إلى صرف العملة ودفع رسوم الاستعلام باستخدام العملة الورقية التي تختارها. سيتم تحويل العملة الورقية إلى GRT ، وإضافتها إلى رصيد حسابك في عقد الفوترة ، واستخدامها للدفع مقابل الاستفسارات المرتبطة بمفاتيح API الخاصة بك.
+
-قد تكون هناك متطلبات تعرف على عميلك (KYC) بناءً على اللوائح المعمول بها في بلدك. لمزيد من المعلومات حول KYC ، يرجى زيارة [ صفحة الأسئلة الشائعة في Banxa ](https://docs.banxa.com/docs/faqs).
+## Query Payments with credit card
-يمكنك معرفة المزيد حول Banxa من خلال قراءة [ وثائقهم ](https://docs.banxa.com/docs).
+- To set up billing with credit/debit cards, users will access [Subgraph Studio](https://thegraph.com/studio/):
+ 1. Go to the [Subgraph Studio Billing page](https://thegraph.com/studio/billing/).
+ 2. Click on the "Connect Wallet" button on the top right corner of the page. You will be redirected to the wallet selection page. Select your wallet and click on "Connect".
+ 3. Choose “Upgrade plan” if you are upgrading from the Free Plan or choose “Manage Plan” if you have already added GRT to your billing balance in the past. Next, you can estimate the number of queries to get a pricing estimate, but this is not a required step.
+ 4. To choose a credit card payment, choose “Credit card” as the payment method and fill out your credit card information. Those who have used Stripe before can use the Link feature to autofill their details.
+- Invoices will be processed at the end of each month and require an active credit card on file for all queries beyond the free plan quota.
-### دفع رسوم الاستعلام مع Banxa
+## Query Payments with GRT
-1. حدد خيار "الدفع بالبطاقة" في [ Subgraph Studio ](https://thegraph.com/studio/billing/؟show=Deposit).
-2. أدخل مبلغ GRT لإضافته إلى رصيد حسابك.
-3. انقر فوق الزر "متابعة مع Banxa".
-4. Enter necessary banking information on Banxa including payment method & fiat currency of choice.
-5. قم بإنهاء المعاملة.
-
-قد يستغرق الأمر ما يصل إلى 10 دقائق لإكمال المعاملة. بمجرد تأكيد المعاملة ، ستتم إضافة GRT المشتراة تلقائيًا إلى رصيد حسابك على Arbitrum.
-
-## الفوترة على Arbitrum
-
-بينما يعمل بروتوكول TheGraph على Ethereum Mainnet ، [يوجد عقد الفوترة ](https://arbiscan.io/address/0x1b07d3344188908fb6deceac381f3ee63c48477a) على [ Arbitrum ](https://arbitrum.io/ شبكة) لتقليل أوقات المعاملات وتكلفتها. ستحتاج إلى دفع رسوم الاستعلامات الناتجة عن مفاتيح API الخاصة بك. باستخدام عقد الفوترة ، ستتمكن من:
+Subgraph users can use The Graph Token (or GRT) to pay for queries on The Graph Network. With GRT, invoices will be processed at the end of each month and require a sufficient balance of GRT to make queries beyond the Free Plan quota of 100,000 monthly queries. You'll be required to pay fees generated from your API keys. Using the billing contract, you'll be able to:
- إضافة وسحب GRT من رصيد حسابك.
- تتبع أرصدتك بناءً على مقدار GRT الذي أضفته إلى رصيد حسابك ، والمبلغ الذي قمت بإزالته ، وفواتيرك.
- دفع الفواتير تلقائيًا بناءً على رسوم الاستعلام التي تم إنشاؤها ، طالما أن هناك ما يكفي من GRT في رصيد حسابك.
-### إضافة GRT باستخدام محفظة تشفير
-
-
-
-> تمت كتابة هذا القسم بافتراض أن لديك بالفعل GRT في محفظتك المشفرة ، وأنت على شبكة Ethereum mainnet. إذا لم يكن لديك GRT ، فيمكنك التعرف على كيفية الحصول على GRT [ هنا ](#getting-grt).
-
-For a video walkthrough of adding GRT to your billing balance using a crypto wallet, watch this [video](https://youtu.be/4Bw2sh0FxCg).
-
-1. انتقل إلى [ صفحة فوترة Subgraph Studio ](https://thegraph.com/studio/billing/).
-
-2. انقر على زر "توصيل المحفظة" في الزاوية اليمنى العليا من الصفحة. ستتم إعادة توجيهك إلى صفحة اختيار المحفظة. حدد محفظتك وانقر على "توصيل".
-
-3. انقر فوق زر "إضافة GRT" في منتصف الصفحة. ستظهر لوحة جانبية.
-
-4. أدخل مبلغ GRT الذي تريد إضافته إلى رصيد حسابك. يمكنك أيضًا تحديد الحد الأقصى لمبلغ GRT الذي تريد إضافته إلى رصيد حسابك بالنقر فوق الزر "Max".
-
-5. انقر فوق "السماح بالوصول إلى GRT" للسماح لـ Subgraph Studio بالوصول إلى GRT الخاص بك. قم بتوقيع العملية المرتبطة في محفظتك. هذا لن يكلف أي غاز.
+### GRT on Arbitrum or Ethereum
-6. انقر فوق "إضافة GRT إلى رصيد الحساب" لإضافة GRT إلى رصيد حسابك. قم بتوقيع المعاملة المرتبطة في محفظتك. هذا سيكلف الغاز.
+The Graph’s billing system accepts GRT on Arbitrum, and users will need ETH on Arbitrum to pay their gas. While The Graph protocol started on Ethereum Mainnet, all activity, including the billing contracts, is now on Arbitrum One.
-7. بمجرد تأكيد المعاملة ، سترى GRT مضافًا إلى رصيد حسابك في غضون ساعة.
+To pay for queries, you need GRT on Arbitrum. Here are a few different ways to achieve this:
-### سحب GRT باستخدام محفظة تشفير
+- If you already have GRT on Ethereum, you can bridge it to Arbitrum. You can do this via the GRT bridging option provided in Subgraph Studio or by using one of the following bridges:
-> تمت كتابة هذا القسم بافتراض أنك أودعت GRT في رصيد حسابك على [ Subgraph Studio ](https://thegraph.com/studio/billing/) وأنك على شبكة Arbitrum.
+- [The Arbitrum Bridge](https://bridge.arbitrum.io/?l2ChainId=42161)
+- [TransferTo](https://transferto.xyz/swap)
-1. انتقل إلى [ صفحة فوترة Subgraph Studio](https://thegraph.com/studio/billing/).
+- If you already have assets on Arbitrum, you can swap them for GRT via a swapping protocol like Uniswap.
-2. انقر على زر "توصيل المحفظة" في الزاوية اليمنى العليا من الصفحة. حدد محفظتك وانقر على "توصيل".
+- Alternatively, you can acquire GRT directly on Arbitrum through a decentralized exchange.
-3. انقر فوق القائمة المنسدلة بجوار زر "إضافة GRT" في منتصف الصفحة. حدد سحب GRT. ستظهر لوحة جانبية.
+> This section is written assuming you already have GRT in your wallet, and you're on Arbitrum. If you don't have GRT, you can learn how to get GRT [here](#getting-grt).
-4. أدخل مبلغ GRT الذي ترغب في سحبه.
+Once you bridge GRT, you can add it to your billing balance.
-5. انقر فوق "سحب GRT" لسحب GRT من رصيد حسابك. قم بتوقيع المعاملة المرتبطة في محفظتك. هذا سيكلف الغاز. سيتم إرسال GRT إلى محفظة Arbitrum الخاصة بك.
+### Adding GRT using a wallet
-6. بمجرد تأكيد العملية ، سترى أن GRT قد تم سحبه من رصيد حسابك في محفظة Arbitrum الخاصة بك.
+1. Go to the [Subgraph Studio Billing page](https://thegraph.com/studio/billing/).
+2. Click on the "Connect Wallet" button on the top right corner of the page. You will be redirected to the wallet selection page. Select your wallet and click on "Connect".
+3. Select the "Manage" button near the top right corner. First time users will see an option to "Upgrade to Growth plan" while returning users will click "Deposit from wallet".
+4. Use the slider to estimate the number of queries you expect to make on a monthly basis.
+ - For suggestions on the number of queries you may use, see our **Frequently Asked Questions** page.
+5. Choose "Cryptocurrency". GRT is currently the only cryptocurrency accepted on The Graph Network.
+6. Select the number of months you would like to prepay.
+ - Paying in advance does not commit you to future usage. You will only be charged for what you use and you can withdraw your balance at any time.
+7. Pick the network from which you are depositing your GRT. GRT on Arbitrum or Ethereum are both acceptable.
+8. Click "Allow GRT Access" and then specify the amount of GRT that can be taken from your wallet.
+ - If you are prepaying for multiple months, you must allow access to an amount that covers all of those months. This interaction will not cost any gas.
+9. Lastly, click on "Add GRT to Billing Balance". This transaction will require ETH on Arbitrum to cover the gas costs.
+
+- Note that GRT deposited from Arbitrum will process within a few moments while GRT deposited from Ethereum will take approximately 15-20 minutes to process. Once the transaction is confirmed, you'll see the GRT added to your account balance.
+
+### Withdrawing GRT using a wallet
+
+1. Go to the [Subgraph Studio Billing page](https://thegraph.com/studio/billing/).
+2. Click on the "Connect Wallet" button on the top right corner of the page. Select your wallet and click on "Connect".
+3. Click the "Manage" button at the top right corner of the page. Select "Withdraw GRT". A side panel will appear.
+4. Enter the amount of GRT you would like to withdraw.
+5. Click 'Withdraw GRT' to withdraw the GRT from your account balance. Sign the associated transaction in your wallet. This will cost gas. The GRT will be sent to your Arbitrum wallet.
+6. Once the transaction is confirmed, you'll see the GRT withdrawn from your account balance in your Arbitrum wallet.
### إضافة GRT باستخدام محفظة متعددة التوقيع (multisig wallet)
-
-
-1. انتقل إلى [ صفحة فوترة Subgraph Studio](https://thegraph.com/studio/billing/).
-
-2. انقر على زر "توصيل المحفظة " في الزاوية اليمنى العليا من الصفحة. حدد محفظتك وانقر على "توصيل". إذا كنت تستخدم [ Gnosis-Safe ](https://gnosis-safe.io/) ، فستتمكن من توصيل multisig بالإضافة إلى محفظة التوقيع الخاصة بك. ثم قم بتوقيع الرسالة المرتبطة. هذا لن يكلف أي غاز.
-
-3. انقر فوق زر "إضافة GRT" في منتصف الصفحة. ستظهر لوحة جانبية.
+1. Go to the [Subgraph Studio Billing page](https://thegraph.com/studio/billing/).
+2. Click on the "Connect Wallet" button on the top right corner of the page. Select your wallet and click on "Connect". If you're using [Gnosis-Safe](https://gnosis-safe.io/), you'll be able to connect your multisig as well as your signing wallet. Then, sign the associated message. This will not cost any gas.
+3. Select the "Manage" button near the top right corner. First time users will see an option to "Upgrade to Growth plan" while returning users will click "Deposit from wallet".
+4. Use the slider to estimate the number of queries you expect to make on a monthly basis.
+ - For suggestions on the number of queries you may use, see our **Frequently Asked Questions** page.
+5. Choose "Cryptocurrency". GRT is currently the only cryptocurrency accepted on The Graph Network.
+6. Select the number of months you would like to prepay.
+ - Paying in advance does not commit you to future usage. You will only be charged for what you use and you can withdraw your balance at any time.
+7. Pick the network from which you are depositing your GRT. GRT on Arbitrum or Ethereum are both acceptable.
+8. Click "Allow GRT Access" and then specify the amount of GRT that can be taken from your wallet.
+ - If you are prepaying for multiple months, you must allow access to an amount that covers all of those months. This interaction will not cost any gas.
+9. Lastly, click on "Add GRT to Billing Balance". This transaction will require ETH on Arbitrum to cover the gas costs.
-4. بمجرد تأكيد المعاملة ، سترى GRT مضافًا إلى رصيد حسابك في غضون ساعة.
+- Note that GRT deposited from Arbitrum will process within a few moments while GRT deposited from Ethereum will take approximately 15-20 minutes to process. Once the transaction is confirmed, you'll see the GRT added to your account balance.
-### سحب GRT باستخدام محفظة multisig
-
-> تمت كتابة هذا القسم بافتراض أنك أودعت GRT في رصيد حسابك على [ Subgraph Studio ](https://thegraph.com/studio/billing/) وأنك تستخدم Ethereum mainnet.
-
-1. انتقل إلى [ صفحة فوترة Subgraph Studio](https://thegraph.com/studio/billing/).
-
-2. انقر على زر "توصيل المحفظة" في الزاوية اليمنى العليا من الصفحة. حدد محفظتك وانقر على "توصيل".
-
-3. انقر فوق القائمة المنسدلة بجوار زر "إضافة GRT" في منتصف الصفحة. حدد سحب GRT. ستظهر لوحة جانبية.
-
-4. أدخل مبلغ GRT الذي ترغب في سحبه. حدد المحفظة المستلمة التي ستتلقى GRT من هذه المعاملة. سيتم إرسال GRT إلى المحفظة المستلمة على Arbitrum.
-
-5. انقر فوق "سحب GRT" لسحب GRT من رصيد حسابك. قم بتوقيع المعاملة المرتبطة في محفظتك. هذا سيكلف الغاز.
-
-6. بمجرد تأكيد المعاملة ، سترى GRT مضافًا إلى محفظة Arbitrum الخاصة بك في غضون ساعة.
-
-## الحصول على GRT
+## Getting GRT
This section will show you how to get GRT to pay for query fees.
@@ -109,19 +98,19 @@ This section will show you how to get GRT to pay for query fees.
This will be a step by step guide for purchasing GRT on Coinbase.
-1. انتقل إلى [ Coinbase ](https://www.coinbase.com/) وأنشئ حسابًا.
-2. بمجرد إنشاء حساب ، ستحتاج إلى التحقق من هويتك من خلال عملية تعرف على العميل المعروفة باسم KYC. هذه إجرائات روتينية لجميع منصات تداول العملات المشفرة المركزية أو المحافظ الخاصة.
-3. بمجرد التحقق من هويتك ، يمكنك شراء GRT. يمكنك القيام بذلك عن طريق النقر فوق زر "شراء / بيع" في أعلى يمين الصفحة.
-4. حدد العملة التي ترغب في شرائها. حدد GRT.
-5. حدد طريقة الدفع. حدد طريقة الدفع المفضلة لديك.
-6. حدد مبلغ GRT الذي تريد شراءه.
-7. يرجى مراجعة عملية الشراء الخاصة بك. قم بمراجعة عملية الشراء وانقر على "شراء GRT".
-8. قم بتأكيد الشراء. قم بتأكيد الشراء وستكون قد اشتريت GRT بنجاح.
-9. يمكنك نقل GRT من حسابك إلى محفظة التشفير مثل [ MetaMask ](https://metamask.io/).
- - لنقل GRT إلى محفظة التشفير الخاصة بك ، انقر فوق زر "حسابات" في أعلى يمين الصفحة.
- - انقر فوق زر "إرسال" الموجود بجوار حساب GRT.
- - أدخل مبلغ GRT الذي تريد إرساله وعنوان المحفظة الذي تريد الإرسال إليه.
- - انقر على "متابعة" وقم بتأكيد معاملتك. -يرجى ملاحظة أنه بالنسبة لمبالغ الشراء الكبيرة ، قد يطلب منك Coinbase الانتظار من 7 إلى 10 أيام قبل تحويل المبلغ بالكامل إلى محفظة تشفير.
+1. Go to [Coinbase](https://www.coinbase.com/) and create an account.
+2. Once you have created an account, you will need to verify your identity through a process known as KYC (or Know Your Customer). This is a standard procedure for all centralized or custodial crypto exchanges.
+3. Once you have verified your identity, you can purchase GRT. You can do this by clicking on the "Buy/Sell" button on the top right of the page.
+4. Select the currency you want to purchase. Select GRT.
+5. Select the payment method. Select your preferred payment method.
+6. Select the amount of GRT you want to purchase.
+7. Review your purchase. Review your purchase and click "Buy GRT".
+8. Confirm your purchase. Confirm your purchase and you will have successfully purchased GRT.
+9. You can transfer the GRT from your account to your wallet such as [MetaMask](https://metamask.io/).
+ - To transfer the GRT to your wallet, click on the "Accounts" button on the top right of the page.
+ - Click on the "Send" button next to the GRT account.
+ - Enter the amount of GRT you want to send and the wallet address you want to send it to.
+ - Click "Continue" and confirm your transaction.
+ - Please note that for larger purchase amounts, Coinbase may require you to wait 7-10 days before transferring the full amount to a wallet.
You can learn more about getting GRT on Coinbase [here](https://help.coinbase.com/en/coinbase/trading-and-funding/buying-selling-or-converting-crypto/how-do-i-buy-digital-currency).
@@ -129,19 +118,19 @@ You can learn more about getting GRT on Coinbase [here](https://help.coinbase.co
This will be a step by step guide for purchasing GRT on Binance.
-1. انتقل إلى [ Binance ](https://www.binance.com/en) وأنشئ حسابًا.
-2. بمجرد إنشاء حساب ، ستحتاج إلى التحقق من هويتك من خلال عملية تعرف باسم KYC (أو اعرف عميلك). هذا إجراء روتيني لجميع المنصات المركزية أو المحافظ الخاصه.
-3. بمجرد التحقق من هويتك ، يمكنك شراء GRT. يمكنك القيام بذلك عن طريق النقر فوق زر "اشترِ الآن" الموجود على في الصفحة الرئيسية.
-4. سيتم نقلك إلى صفحة حيث يمكنك تحديد العملة التي تريد شرائها. حدد GRT.
-5. حدد طريقة الدفع المفضلة لديك. ستتمكن من الدفع بعملات ورقية مختلفة مثل اليورو والدولار الأمريكي والمزيد.
-6. حدد كمية GRT الذي تريد شراءه.
-7. راجع عملية الشراء وانقر على "شراء GRT".
-8. قم بتأكيد عملية الشراء وستتمكن من رؤية GRT الخاص بك في محفظة Binance Spot الخاصة بك.
-9. يمكنك سحب GRT من حسابك إلى محفظتك المشفرة مثل [ MetaMask ](https://metamask.io/).
- - [ لسحب ](https://www.binance.com/en/blog/ecosystem/how-to-transfer-crypto-from-binance-to-trust-wallet-8305050796630181570) GRT إلى محفظتك الرقمية ، أضف عنوان محفظتك الرقمية إلى القائمة البيضاء للسحب.
- - انقر فوق زر "المحفظة" ، وانقر فوق سحب ، ثم أختار GRT.
- - أدخل كمية GRT الذي تريد إرساله وعنوان المحفظة الموجودة في القائمة البيضاء الذي تريد إرساله إليه.
- - انقر على "متابعة" وقم بتأكيد معاملتك.
+1. Go to [Binance](https://www.binance.com/en) and create an account.
+2. Once you have created an account, you will need to verify your identity through a process known as KYC (or Know Your Customer). This is a standard procedure for all centralized or custodial crypto exchanges.
+3. Once you have verified your identity, you can purchase GRT. You can do this by clicking on the "Buy Now" button on the homepage banner.
+4. You will be taken to a page where you can select the currency you want to purchase. Select GRT.
+5. Select your preferred payment method. You'll be able to pay with different fiat currencies such as Euros, US Dollars, and more.
+6. Select the amount of GRT you want to purchase.
+7. Review your purchase and click "Buy GRT".
+8. Confirm your purchase and you will be able to see your GRT in your Binance Spot Wallet.
+9. You can withdraw the GRT from your account to your wallet such as [MetaMask](https://metamask.io/).
+ - [To withdraw](https://www.binance.com/en/blog/ecosystem/how-to-transfer-crypto-from-binance-to-trust-wallet-8305050796630181570) the GRT to your wallet, add your wallet's address to the withdrawal whitelist.
+ - Click on the "wallet" button, click withdraw, and select GRT.
+ - Enter the amount of GRT you want to send and the whitelisted wallet address you want to send it to.
+ - Click "Continue" and confirm your transaction.
You can learn more about getting GRT on Binance [here](https://www.binance.com/en/support/faq/how-to-buy-cryptocurrency-on-binance-homepage-400c38f5e0cd4b46a1d0805c296b5582).
@@ -149,25 +138,25 @@ You can learn more about getting GRT on Binance [here](https://www.binance.com/e
This is how you can purchase GRT on Uniswap.
-1. انتقل إلى [ Uniswap ](https://app.uniswap.org/#/swap) وقم بتوصيل محفظتك.
-2. حدد التوكن الذي ترغب في استبداله. حدد ETH.
-3. حدد التوكن الذي ترغب في تبديله. حدد GRT.
- - تأكد من تبديل التوكن الصحيح. عنوان العقد الذكي GRT هو: `0xc944E90C64B2c07662A292be6244BDf05Cda44a7`
-4. الرجاء إدخال كمية ETH التي ترغب في تحويلها.
-5. انقر على زر "مبادلة".
-6. قم بتأكيد المعاملة في محفظتك وانتظر حتى تتم المعالجة.
+1. Go to [Uniswap](https://app.uniswap.org/swap?chain=arbitrum) and connect your wallet.
+2. Select the token you want to swap from. Select ETH.
+3. Select the token you want to swap to. Select GRT.
+ - Make sure you're swapping for the correct token. The GRT smart contract address on Arbitrum One is: [0x9623063377AD1B27544C965cCd7342f7EA7e88C7](https://arbiscan.io/token/0x9623063377ad1b27544c965ccd7342f7ea7e88c7)
+4. Enter the amount of ETH you want to swap.
+5. Click "Swap".
+6. Confirm the transaction in your wallet and wait for the transaction to process.
You can learn more about getting GRT on Uniswap [here](https://support.uniswap.org/hc/en-us/articles/8370549680909-How-to-Swap-Tokens-).
-## Getting Ethereum
+## Getting Ether
-This section will show you how to get Ethereum (ETH) to pay for transaction fees or gas costs. ETH is necessary to execute operations on the Ethereum network such as transferring tokens or interacting with contracts.
+This section will show you how to get Ether (ETH) to pay for transaction fees or gas costs. ETH is necessary to execute operations on the Ethereum network such as transferring tokens or interacting with contracts.
### Coinbase
This will be a step by step guide for purchasing ETH on Coinbase.
-1. انتقل إلى [ Coinbase ](https://www.coinbase.com/) وأنشئ حسابًا.
+1. Go to [Coinbase](https://www.coinbase.com/) and create an account.
2. Once you have created an account, verify your identity through a process known as KYC (or Know Your Customer). This is a standard procedure for all centralized or custodial crypto exchanges.
3. Once you have verified your identity, purchase ETH by clicking on the "Buy/Sell" button on the top right of the page.
4. Select the currency you want to purchase. Select ETH.
@@ -175,11 +164,12 @@ This will be a step by step guide for purchasing ETH on Coinbase.
6. Enter the amount of ETH you want to purchase.
7. Review your purchase and click "Buy ETH".
8. Confirm your purchase and you will have successfully purchased ETH.
-9. You can transfer the ETH from your Coinbase account to your crypto wallet such as [MetaMask](https://metamask.io/).
- - To transfer the ETH to your crypto wallet, click on the "Accounts" button on the top right of the page.
+9. You can transfer the ETH from your Coinbase account to your wallet such as [MetaMask](https://metamask.io/).
+ - To transfer the ETH to your wallet, click on the "Accounts" button on the top right of the page.
- Click on the "Send" button next to the ETH account.
- Enter the amount of ETH you want to send and the wallet address you want to send it to.
- - انقر على "متابعة" وقم بتأكيد معاملتك.
+ - Ensure that you are sending to your Ethereum wallet address on Arbitrum One.
+ - Click "Continue" and confirm your transaction.
You can learn more about getting ETH on Coinbase [here](https://help.coinbase.com/en/coinbase/trading-and-funding/buying-selling-or-converting-crypto/how-do-i-buy-digital-currency).
@@ -187,7 +177,7 @@ You can learn more about getting ETH on Coinbase [here](https://help.coinbase.co
This will be a step by step guide for purchasing ETH on Binance.
-1. انتقل إلى [ Binance ](https://www.binance.com/en) وأنشئ حسابًا.
+1. Go to [Binance](https://www.binance.com/en) and create an account.
2. Once you have created an account, verify your identity through a process known as KYC (or Know Your Customer). This is a standard procedure for all centralized or custodial crypto exchanges.
3. Once you have verified your identity, purchase ETH by clicking on the "Buy Now" button on the homepage banner.
4. Select the currency you want to purchase. Select ETH.
@@ -195,14 +185,29 @@ This will be a step by step guide for purchasing ETH on Binance.
6. Enter the amount of ETH you want to purchase.
7. Review your purchase and click "Buy ETH".
8. Confirm your purchase and you will see your ETH in your Binance Spot Wallet.
-9. You can withdraw the ETH from your account to your crypto wallet such as [MetaMask](https://metamask.io/).
- - To withdraw the ETH to your crypto wallet, add your crypto wallet's address to the withdrawal whitelist.
+9. You can withdraw the ETH from your account to your wallet such as [MetaMask](https://metamask.io/).
+ - To withdraw the ETH to your wallet, add your wallet's address to the withdrawal whitelist.
- Click on the "wallet" button, click withdraw, and select ETH.
- Enter the amount of ETH you want to send and the whitelisted wallet address you want to send it to.
- - انقر على "متابعة" وقم بتأكيد معاملتك.
+ - Ensure that you are sending to your Ethereum wallet address on Arbitrum One.
+ - Click "Continue" and confirm your transaction.
You can learn more about getting ETH on Binance [here](https://www.binance.com/en/support/faq/how-to-buy-cryptocurrency-on-binance-homepage-400c38f5e0cd4b46a1d0805c296b5582).
-## جسر Arbitrum
+## Billing FAQs
+
+### How many queries will I need?
+
+You don't need to know how many queries you'll need in advance. You will only be charged for what you use and you can withdraw GRT from your account at any time.
+
+We recommend you overestimate the number of queries you will need so that you don’t have to top up your balance frequently. A good estimate for small to medium sized applications is to start with 1M-2M queries per month and monitor usage closely in the first weeks. For larger apps, a good estimate is to use the number of daily visits your site gets multiplied by the number of queries your most active page makes upon opening.
+
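+As a hypothetical worked example (the traffic numbers below are made up, not a recommendation):
+
+```typescript
+// Estimating monthly query volume from daily traffic (illustrative numbers only).
+const dailyVisits = 5_000 // assumed daily visits to the dapp
+const queriesPerVisit = 10 // assumed queries fired by the most active page on open
+const daysPerMonth = 30
+
+const estimatedMonthlyQueries = dailyVisits * queriesPerVisit * daysPerMonth
+// 5,000 * 10 * 30 = 1,500,000 queries per month, i.e. within the 1M-2M starting range above
+```
+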
+Of course, both new and existing users can reach out to Edge & Node's BD team for a consult to learn more about anticipated usage.
+
+### Can I withdraw GRT from my billing balance?
+
+Yes, you can always withdraw GRT that has not already been used for queries from your billing balance. The billing contract is only designed to bridge GRT from Ethereum mainnet to the Arbitrum network. If you'd like to transfer your GRT from Arbitrum back to Ethereum mainnet, you'll need to use the [Arbitrum Bridge](https://bridge.arbitrum.io/?l2ChainId=42161).
+
+### What happens when my billing balance runs out? Will I get a warning?
-The billing contract is only designed to bridge GRT from Ethereum mainnet to the Arbitrum network. If you'd like to transfer your GRT from Arbitrum back to Ethereum mainnet, you'll need to use the [Arbitrum Bridge](https://bridge.arbitrum.io/?l2ChainId=42161).
+You will receive several email notifications before your billing balance runs out.
diff --git a/website/pages/ar/cookbook/arweave.mdx b/website/pages/ar/cookbook/arweave.mdx
index 24eb6fe6bdda..9a7bfaab0270 100644
--- a/website/pages/ar/cookbook/arweave.mdx
+++ b/website/pages/ar/cookbook/arweave.mdx
@@ -2,7 +2,7 @@
title: Building Subgraphs on Arweave
---
-> Arweave support in Graph Node and on the hosted service is in beta: please reach us on [Discord](https://discord.gg/graphprotocol) with any questions about building Arweave subgraphs!
+> Arweave support in Graph Node and on Subgraph Studio is in beta: please reach us on [Discord](https://discord.gg/graphprotocol) with any questions about building Arweave subgraphs!
In this guide, you will learn how to build and deploy Subgraphs to index the Arweave blockchain.
@@ -83,7 +83,7 @@ dataSources:
```
- Arweave subgraphs introduce a new kind of data source (`arweave`)
-- The network should correspond to a network on the hosting Graph Node. On the hosted service, Arweave's mainnet is `arweave-mainnet`
+- The network should correspond to a network on the hosting Graph Node. In Subgraph Studio, Arweave's mainnet is `arweave-mainnet`
- Arweave data sources introduce an optional source.owner field, which is the public key of an Arweave wallet
Arweave data sources support two types of handlers:
@@ -95,7 +95,7 @@ Arweave data sources support two types of handlers:
> Transactions are the building blocks of the Arweave permaweb and they are objects created by end-users.
-> Note: [Bundlr](https://bundlr.network/) transactions are not supported yet.
+> Note: [Irys (previously Bundlr)](https://irys.xyz/) transactions are not supported yet.
## تعريف المخطط
@@ -150,12 +150,12 @@ Block handlers receive a `Block`, while transactions receive a `Transaction`.
Writing the mappings of an Arweave Subgraph is very similar to writing the mappings of an Ethereum Subgraph. For more information, click [here](/developing/creating-a-subgraph/#writing-mappings).
-## Deploying an Arweave Subgraph on the hosted service
+## Deploying an Arweave Subgraph in Subgraph Studio
-Once your subgraph has been created on the hosted service dashboard, you can deploy by using the `graph deploy` CLI command.
+Once your subgraph has been created on your Subgraph Studio dashboard, you can deploy by using the `graph deploy` CLI command.
```bash
-graph deploy --node https://api.thegraph.com/deploy/ --ipfs https://api.thegraph.com/ipfs/ --access-token
+graph deploy --studio --access-token
```
## Querying an Arweave Subgraph
diff --git a/website/pages/ar/cookbook/avoid-eth-calls.mdx b/website/pages/ar/cookbook/avoid-eth-calls.mdx
new file mode 100644
index 000000000000..446b0e8ecd17
--- /dev/null
+++ b/website/pages/ar/cookbook/avoid-eth-calls.mdx
@@ -0,0 +1,102 @@
+---
+title: Subgraph Best Practice 4 - Improve Indexing Speed by Avoiding eth_calls
+---
+
+## TLDR
+
+`eth_calls` are calls that can be made from a subgraph to an Ethereum node. These calls take a significant amount of time to return data, slowing down indexing. If possible, design smart contracts to emit all the data you need so you don’t need to use `eth_calls`.
+
+## Why Avoiding `eth_calls` Is a Best Practice
+
+Subgraphs are optimized to index event data emitted from smart contracts. A subgraph can also index the data coming from an `eth_call`; however, this can significantly slow down subgraph indexing as `eth_calls` require making external calls to smart contracts. The responsiveness of these calls relies not on the subgraph but on the connectivity and responsiveness of the Ethereum node being queried. By minimizing or eliminating `eth_calls` in our subgraphs, we can significantly improve our indexing speed.
+
+### What Does an eth_call Look Like?
+
+`eth_calls` are often necessary when the data required for a subgraph is not available through emitted events. For example, consider a scenario where a subgraph needs to identify whether ERC20 tokens are part of a specific pool, but the contract only emits a basic `Transfer` event and does not emit an event that contains the data that we need:
+
+```solidity
+event Transfer(address indexed from, address indexed to, uint256 value);
+```
+
+Suppose the tokens' pool membership is determined by a state variable named `getPoolInfo`. In this case, we would need to use an `eth_call` to query this data:
+
+```typescript
+import { Address } from '@graphprotocol/graph-ts'
+import { ERC20, Transfer } from '../generated/ERC20/ERC20'
+import { TokenTransaction } from '../generated/schema'
+
+export function handleTransfer(event: Transfer): void {
+ let transaction = new TokenTransaction(event.transaction.hash.toHex())
+
+ // Bind the ERC20 contract instance to the given address:
+ let instance = ERC20.bind(event.address)
+
+ // Retrieve pool information via eth_call
+ let poolInfo = instance.getPoolInfo(event.params.to)
+
+ transaction.pool = poolInfo.toHexString()
+ transaction.from = event.params.from.toHexString()
+ transaction.to = event.params.to.toHexString()
+ transaction.value = event.params.value
+
+ transaction.save()
+}
+```
+
+This is functional; however, it is not ideal as it slows down our subgraph’s indexing.
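+
+As a side note, if an `eth_call` like this is unavoidable, codegen also produces a `try_` variant of each contract call that prevents the subgraph from failing when the call reverts. A minimal sketch, assuming the same hypothetical `getPoolInfo` function as above:
+
+```typescript
+import { ERC20, Transfer } from '../generated/ERC20/ERC20'
+import { TokenTransaction } from '../generated/schema'
+
+export function handleTransfer(event: Transfer): void {
+  let transaction = new TokenTransaction(event.transaction.hash.toHex())
+
+  // try_getPoolInfo returns a CallResult instead of aborting the handler on revert
+  let poolInfoResult = ERC20.bind(event.address).try_getPoolInfo(event.params.to)
+  if (!poolInfoResult.reverted) {
+    transaction.pool = poolInfoResult.value.toHexString()
+  }
+
+  transaction.from = event.params.from.toHexString()
+  transaction.to = event.params.to.toHexString()
+  transaction.value = event.params.value
+
+  transaction.save()
+}
+```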
+
+## How to Eliminate `eth_calls`
+
+Ideally, the smart contract should be updated to emit all necessary data within events. For instance, modifying the smart contract to include pool information in the event could eliminate the need for `eth_calls`:
+
+```solidity
+event TransferWithPool(address indexed from, address indexed to, uint256 value, bytes32 indexed poolInfo);
+```
+
+With this update, the subgraph can directly index the required data without external calls:
+
+```typescript
+import { Address } from '@graphprotocol/graph-ts'
+import { ERC20, TransferWithPool } from '../generated/ERC20/ERC20'
+import { TokenTransaction } from '../generated/schema'
+
+export function handleTransferWithPool(event: TransferWithPool): void {
+ let transaction = new TokenTransaction(event.transaction.hash.toHex())
+
+ transaction.pool = event.params.poolInfo.toHexString()
+ transaction.from = event.params.from.toHexString()
+ transaction.to = event.params.to.toHexString()
+ transaction.value = event.params.value
+
+ transaction.save()
+}
+```
+
+This is much more performant as it has eliminated the need for `eth_calls`.
+
+## How to Optimize `eth_calls`
+
+If modifying the smart contract is not possible and `eth_calls` are required, read “[Improve Subgraph Indexing Performance Easily: Reduce eth_calls](https://thegraph.com/blog/improve-subgraph-performance-reduce-eth-calls/)” by Simon Emanuel Schmid to learn various strategies on how to optimize `eth_calls`.
+
+## Reducing the Runtime Overhead of `eth_calls`
+
+For the `eth_calls` that can not be eliminated, the runtime overhead they introduce can be minimized by declaring them in the manifest. When `graph-node` processes a block it performs all declared `eth_calls` in parallel before handlers are run. Calls that are not declared are executed sequentially when handlers run. The runtime improvement comes from performing calls in parallel rather than sequentially - that helps reduce the total time spent in calls but does not eliminate it completely.
+
+Currently, `eth_calls` can only be declared for event handlers. In the manifest, write
+
+```yaml
+event: TransferWithPool(address indexed, address indexed, uint256, bytes32 indexed)
+handler: handleTransferWithPool
+calls:
+ ERC20.poolInfo: ERC20[event.address].getPoolInfo(event.params.to)
+```
+
+The entry under `calls:` is the call declaration. The part before the colon is simply a text label that is only used for error messages. The part after the colon has the form `Contract[address].function(params)`. Permissible values for address and params are `event.address` and `event.params.<name>`.
+
+The handler itself accesses the result of this `eth_call` exactly as in the previous section by binding to the contract and making the call. `graph-node` caches the results of declared `eth_calls` in memory, and the call from the handler will retrieve the result from this in-memory cache instead of making an actual RPC call.
+
+Note: Declared `eth_calls` can only be made in subgraphs with `specVersion` >= 1.2.0.
+
+## Conclusion
+
+We can significantly improve indexing performance by minimizing or eliminating `eth_calls` in our subgraphs.
diff --git a/website/pages/ar/cookbook/base-testnet.mdx b/website/pages/ar/cookbook/base-testnet.mdx
index 89c026c90979..a32276dd1875 100644
--- a/website/pages/ar/cookbook/base-testnet.mdx
+++ b/website/pages/ar/cookbook/base-testnet.mdx
@@ -6,7 +6,7 @@ This guide will quickly take you through how to initialize, create, and deploy y
What you'll need:
-- A Base testnet contract address
+- A Base Sepolia testnet contract address
- A crypto wallet (e.g. MetaMask or Coinbase Wallet)
## Subgraph Studio
@@ -23,17 +23,15 @@ npm install -g @graphprotocol/graph-cli
yarn global add @graphprotocol/graph-cli
```
-### 2. Create your subgraph in the Subgraph Studio
+### 2. Create your subgraph in Subgraph Studio
-Go to the [Subgraph Studio](https://thegraph.com/studio/) and connect your crypto wallet.
+Go to [Subgraph Studio](https://thegraph.com/studio/) and connect your crypto wallet.
-Once connected, click "Create a Subgraph" and enter a name for your subgraph.
-
-Select "Base (testnet)" as the indexed blockchain and click Create Subgraph.
+Once connected, click "Create a Subgraph", enter a name for your subgraph and click Create a Subgraph.
### 3. Initialize your Subgraph
-> You can find specific commands for your subgraph in the Subgraph Studio.
+> You can find specific commands for your subgraph in Subgraph Studio.
Make sure that the graph-cli is updated to latest (above 0.41.0)
@@ -52,28 +50,29 @@ Your subgraph slug is an identifier for your subgraph. The CLI tool will walk yo
- Protocol: ethereum
- Subgraph slug: ``
- Directory to create the subgraph in: ``
-- Ethereum network: base-testnet \_ Contract address: ``
+- Ethereum network: base-sepolia
+- Contract address: ``
- Start block (optional)
- Contract name: ``
- Yes/no to indexing events (yes means your subgraph will be bootstrapped with entities in the schema and simple mappings for emitted events)
-### 3. اكتب الفرعية رسم بياني الخاص بك
+### 3. Write your Subgraph
> If emitted events are the only thing you want to index, then no additional work is required, and you can skip to the next step.
The previous command creates a scaffold subgraph that you can use as a starting point for building your subgraph. When making changes to the subgraph, you will mainly work with three files:
-- Manifest (subgraph.yaml) - The manifest defines what datasources your subgraphs will index. Make sure to add `base-testnet` as the network name in manifest file to deploy your subgraph on Base testnet.
-- Schema (schema.graphql) - يحدد مخطط GraphQL البيانات التي ترغب في استردادها من الفرعية رسم بياني.
-- (AssemblyScript Mappings (mapping.ts - هذا هو الكود الذي يترجم البيانات من مصادر البيانات الخاصة بك إلى الكيانات المحددة في المخطط.
+- Manifest (subgraph.yaml) - The manifest defines what datasources your subgraphs will index. Make sure to add `base-sepolia` as the network name in the manifest file to deploy your subgraph on Base Sepolia.
+- Schema (schema.graphql) - The GraphQL schema defines what data you wish to retrieve from the subgraph.
+- AssemblyScript Mappings (mapping.ts) - This is the code that translates data from your datasources to the entities defined in the schema.
If you want to index additional data, you will need extend the manifest, schema and mappings.
For more information on how to write your subgraph, see [Creating a Subgraph](/developing/creating-a-subgraph).
-### النشر على الفرعية رسم بياني ستوديو
+### 4. Deploy to Subgraph Studio
-Before you can deploy your subgraph, you will need to authenticate with the Subgraph Studio. You can do this by running the following command:
+Before you can deploy your subgraph, you will need to authenticate with Subgraph Studio. You can do this by running the following command:
Authenticate the subgraph on studio
@@ -105,7 +104,7 @@ graph deploy --studio
### 5. Query your subgraph
-Once your subgraph is deployed, you can query it from your dapp using the `Development Query URL` in the Subgraph Studio.
+Once your subgraph is deployed, you can query it from your dapp using the `Development Query URL` in Subgraph Studio.
Note - Studio API is rate-limited. Hence should preferably be used for development and testing.
diff --git a/website/pages/ar/cookbook/cosmos.mdx b/website/pages/ar/cookbook/cosmos.mdx
index bbd67fcff9c6..0ed45e614eee 100644
--- a/website/pages/ar/cookbook/cosmos.mdx
+++ b/website/pages/ar/cookbook/cosmos.mdx
@@ -2,7 +2,7 @@
title: Building Subgraphs on Cosmos
---
-This guide is an introduction on building subgraphs indexing [Cosmos](https://docs.cosmos.network/) based blockchains.
+This guide is an introduction on building subgraphs indexing [Cosmos](https://cosmos.network/) based blockchains.
## What are Cosmos subgraphs?
@@ -17,11 +17,11 @@ There are four types of handlers supported in Cosmos subgraphs:
Based on the [official Cosmos documentation](https://docs.cosmos.network/):
-> [Events](https://docs.cosmos.network/main/core/events) are objects that contain information about the execution of the application. They are mainly used by service providers like block explorers and wallets to track the execution of various messages and index transactions.
+> [Events](https://docs.cosmos.network/main/learn/advanced/events) are objects that contain information about the execution of the application. They are mainly used by service providers like block explorers and wallets to track the execution of various messages and index transactions.
-> [Transactions](https://docs.cosmos.network/main/core/transactions) are objects created by end-users to trigger state changes in the application.
+> [Transactions](https://docs.cosmos.network/main/learn/advanced/transactions) are objects created by end-users to trigger state changes in the application.
-> [Messages](https://docs.cosmos.network/main/core/transactions#messages) are module-specific objects that trigger state transitions within the scope of the module they belong to.
+> [Messages](https://docs.cosmos.network/main/learn/advanced/transactions#messages) are module-specific objects that trigger state transitions within the scope of the module they belong to.
Even though all data can be accessed with a block handler, other handlers enable subgraph developers to process data in a much more granular way.
@@ -29,9 +29,9 @@ Even though all data can be accessed with a block handler, other handlers enable
### Subgraph Dependencies
-[graph-cli](https://github.com/graphprotocol/graph-cli) is a CLI tool to build and deploy subgraphs, version `>=0.30.0` is required in order to work with Cosmos subgraphs.
+[graph-cli](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli) is a CLI tool to build and deploy subgraphs, version `>=0.30.0` is required in order to work with Cosmos subgraphs.
-[graph-ts](https://github.com/graphprotocol/graph-ts) is a library of subgraph-specific types, version `>=0.27.0` is required in order to work with Cosmos subgraphs.
+[graph-ts](https://github.com/graphprotocol/graph-tooling/tree/main/packages/ts) is a library of subgraph-specific types, version `>=0.27.0` is required in order to work with Cosmos subgraphs.
### Subgraph Main Components
@@ -79,7 +79,7 @@ dataSources:
### تعريف المخطط
-Schema definition describes the structure of the resulting subgraph database and the relationships between entities. This is agnostic of the original data source. There are more details on subgraph schema definition [here](/developing/creating-a-subgraph/#the-graph-ql-schema).
+Schema definition describes the structure of the resulting subgraph database and the relationships between entities. This is agnostic of the original data source. There are more details on subgraph schema definition [here](/developing/creating-a-subgraph/#the-graphql-schema).
### AssemblyScript Mappings
@@ -176,7 +176,7 @@ You can find the full list of types for the Cosmos integration [here](https://gi
### Message decoding
-It's important to note that Cosmos messages are chain-specific and they are passed to a subgraph in the form of a serialized [Protocol Buffers](https://developers.google.com/protocol-buffers/) payload. As a result, the message data needs to be decoded in a mapping function before it can be processed.
+It's important to note that Cosmos messages are chain-specific and they are passed to a subgraph in the form of a serialized [Protocol Buffers](https://protobuf.dev/) payload. As a result, the message data needs to be decoded in a mapping function before it can be processed.
An example of how to decode message data in a subgraph can be found [here](https://github.com/graphprotocol/graph-tooling/blob/main/examples/cosmos-validator-delegations/src/decoding.ts).
@@ -196,19 +196,17 @@ $ graph build
## Deploying a Cosmos subgraph
-Once your subgraph has been created, you can deploy your subgraph by using the `graph deploy` CLI command after running the `graph create` CLI command:
+Once your subgraph has been created, you can deploy it by using the `graph deploy` CLI command:
-**الخدمة المستضافة**
+**Subgraph Studio**
-```bash
-graph create account/subgraph-name --product hosted-service
-```
+Visit [Subgraph Studio](https://thegraph.com/studio/) to create a new subgraph.
```bash
-graph deploy account/subgraph-name --product hosted-service
+graph deploy --studio subgraph-name
```
-**Graph Node المحلية (على أساس التكوين الافتراضي):**
+**Local Graph Node (based on default configuration):**
```bash
graph create subgraph-name --node http://localhost:8020
@@ -236,7 +234,7 @@ Cosmos Hub mainnet is `cosmoshub-4`. Cosmos Hub current testnet is `theta-testne
### Osmosis
-> Osmosis support in Graph Node and on the Hosted Service is in beta: please contact the graph team with any questions about building Osmosis subgraphs!
+> Osmosis support in Graph Node and on Subgraph Studio is in beta: please contact the graph team with any questions about building Osmosis subgraphs!
#### What is Osmosis?
@@ -248,7 +246,7 @@ Osmosis mainnet is `osmosis-1`. Osmosis current testnet is `osmo-test-4`.
## أمثلة على الـ Subgraphs
-فيما يلي بعض الأمثلة على الـ subgraphs للرجوع إليها:
+Here are some example subgraphs for reference:
[Block Filtering Example](https://github.com/graphprotocol/graph-tooling/tree/main/examples/cosmos-block-filtering)
diff --git a/website/pages/ar/cookbook/derivedfrom.mdx b/website/pages/ar/cookbook/derivedfrom.mdx
new file mode 100644
index 000000000000..69dd48047744
--- /dev/null
+++ b/website/pages/ar/cookbook/derivedfrom.mdx
@@ -0,0 +1,74 @@
+---
+title: Subgraph Best Practice 2 - Improve Indexing and Query Responsiveness By Using @derivedFrom
+---
+
+## TLDR
+
+Arrays in your schema can really slow down a subgraph's performance as they grow beyond thousands of entries. If possible, the `@derivedFrom` directive should be used when using arrays as it prevents large arrays from forming, simplifies handlers, and reduces the size of individual entities, improving indexing speed and query performance significantly.
+
+## How to Use the `@derivedFrom` Directive
+
+You just need to add a `@derivedFrom` directive after your array in your schema. Like this:
+
+```graphql
+comments: [Comment!]! @derivedFrom(field: "post")
+```
+
+`@derivedFrom` creates efficient one-to-many relationships, enabling an entity to dynamically associate with multiple related entities based on a field in the related entity. This approach removes the need for both sides of the relationship to store duplicate data, making the subgraph more efficient.
+
+### Example Use Case for `@derivedFrom`
+
+An example of a dynamically growing array is a blogging platform where a “Post” can have many “Comments”.
+
+Let’s start with our two entities, `Post` and `Comment`
+
+Without optimization, you could implement it like this with an array:
+
+```graphql
+type Post @entity {
+ id: Bytes!
+ title: String!
+ content: String!
+ comments: [Comment!]!
+}
+
+type Comment @entity {
+ id: Bytes!
+ content: String!
+}
+```
+
+Arrays like these will effectively store extra Comments data on the Post side of the relationship.
+
+Here’s what an optimized version looks like using `@derivedFrom`:
+
+```graphql
+type Post @entity {
+ id: Bytes!
+ title: String!
+ content: String!
+ comments: [Comment!]! @derivedFrom(field: "post")
+}
+
+type Comment @entity {
+ id: Bytes!
+ content: String!
+ post: Post!
+}
+```
+
+Just by adding the `@derivedFrom` directive, this schema will only store the “Comments” on the “Comments” side of the relationship and not on the “Post” side of the relationship. Without the directive, the comment references would be stored as an array on every row of the “Post” entity, and an unbounded number of comments would make those rows particularly large.
+
+This will not only make our subgraph more efficient, but it will also unlock three features:
+
+1. We can query the `Post` and see all of its comments.
+
+2. We can do a reverse lookup and query any `Comment` and see which post it comes from.
+
+3. We can use [Derived Field Loaders](/developing/graph-ts/api/#looking-up-derived-entities) to unlock the ability to directly access and manipulate data from virtual relationships in our subgraph mappings.
+
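+Building on the third point above, here is a minimal sketch of a derived field loader, assuming the generated `Post` and `Comment` classes from the optimized schema and a graph-node/graph-ts version that supports derived field loaders:
+
+```typescript
+import { Bytes, log } from '@graphprotocol/graph-ts'
+import { Post } from '../generated/schema'
+
+export function logCommentCount(postId: Bytes): void {
+  let post = Post.load(postId)
+  if (post == null) {
+    return
+  }
+
+  // `comments` is a derived field, so nothing is stored on Post itself;
+  // calling .load() fetches the related Comment entities from the store.
+  let comments = post.comments.load()
+  log.info('Post {} has {} comments', [postId.toHexString(), comments.length.toString()])
+}
+```
+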
+## Conclusion
+
+Adopting the `@derivedFrom` directive in subgraphs effectively handles dynamically growing arrays, enhancing indexing efficiency and data retrieval.
+
+To learn more detailed strategies to avoid large arrays, read this blog from Kevin Jones: [Best Practices in Subgraph Development: Avoiding Large Arrays](https://thegraph.com/blog/improve-subgraph-performance-avoiding-large-arrays/).
diff --git a/website/pages/ar/cookbook/grafting.mdx b/website/pages/ar/cookbook/grafting.mdx
index 3ceb3758235c..548091ac5b7d 100644
--- a/website/pages/ar/cookbook/grafting.mdx
+++ b/website/pages/ar/cookbook/grafting.mdx
@@ -20,13 +20,13 @@ The grafted subgraph can use a GraphQL schema that is not identical to the one o
For more information, you can check:
-- [تطعيم(Grafting)](https://thegraph.com/docs/en/developing/creating-a-subgraph#grafting-onto-existing-subgraphs)
+- [Grafting](/developing/creating-a-subgraph#grafting-onto-existing-subgraphs)
In this tutorial, we will be covering a basic use case. We will replace an existing contract with an identical contract (with a new address, but the same code). Then, we will graft the existing subgraph onto the "base" subgraph that tracks the new contract.
## Important Note on Grafting When Upgrading to the Network
-> **Caution**: if you are upgrading your subgraph from Subgraph Studio or the hosted service to the decentralized network, it is strongly recommended to avoid using grafting during the upgrade process.
+> **Caution**: It is recommended not to use grafting for subgraphs published to The Graph Network.
### Why Is This Important?
@@ -42,9 +42,9 @@ By adhering to these guidelines, you minimize risks and ensure a smoother migrat
## Building an Existing Subgraph
-Building subgraphs is an essential part of The Graph, described more in depth [here](http://localhost:3000/en/cookbook/quick-start/). To be able to build and deploy the existing subgraph used in this tutorial, the following repo is provided:
+Building subgraphs is an essential part of The Graph, described more in depth [here](/quick-start/). To be able to build and deploy the existing subgraph used in this tutorial, the following repo is provided:
-- [Subgraph example repo](https://github.com/t-proctor/grafting-tutorial)
+- [Subgraph example repo](https://github.com/Shiyasmohd/grafting-tutorial)
> Note: The contract used in the subgraph was taken from the following [Hackathon Starterkit](https://github.com/schmidsi/hackathon-starterkit).
@@ -59,11 +59,11 @@ schema:
dataSources:
- kind: ethereum
name: Lock
- network: goerli
+ network: sepolia
source:
- address: '0x4Ed995e775D3629b0566D2279f058729Ae6EA493'
+ address: '0xb3aabe721794b85fe4e72134795c2f93b4eb7e63'
abi: Lock
- startBlock: 7674603
+ startBlock: 5955690
mapping:
kind: ethereum/events
apiVersion: 0.0.6
@@ -80,7 +80,7 @@ dataSources:
```
- The `Lock` data source is the abi and contract address we will get when we compile and deploy the contract
-- The network should correspond to a indexed network being queried. Since we're running on Goerli testnet, the network is `goerli`
+- The network should correspond to an indexed network being queried. Since we're running on Sepolia testnet, the network is `sepolia`
- The `mapping` section defines the triggers of interest and the functions that should be run in response to those triggers. In this case, we are listening for the `Withdrawal` event and calling the `handleWithdrawal` function when it is emitted.
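+The `handleWithdrawal` mapping itself lives in the example repo. For orientation, a minimal sketch of what such a handler typically looks like (the generated import paths are assumptions; the `amount` and `when` fields follow the `Withdrawal` entity queried later in this tutorial):
+
+```typescript
+import { Withdrawal as WithdrawalEvent } from '../generated/Lock/Lock' // assumed codegen output
+import { Withdrawal } from '../generated/schema'
+
+export function handleWithdrawal(event: WithdrawalEvent): void {
+  // Bytes id built from the transaction hash and log index.
+  let entity = new Withdrawal(event.transaction.hash.concatI32(event.logIndex.toI32()))
+  entity.amount = event.params.amount
+  entity.when = event.params.when
+  entity.save()
+}
+```
+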
## Grafting Manifest Definition
@@ -93,17 +93,17 @@ features:
- grafting # feature name
graft:
base: Qm... # subgraph ID of base subgraph
- block: 1502122 # block number
+ block: 5956000 # block number
```
-- `features:` is a list of all used [feature names](developing/creating-a-subgraph/#experimental-features).
+- `features:` is a list of all used [feature names](/developing/creating-a-subgraph/#experimental-features).
- `graft:` is a map of the `base` subgraph and the block to graft on to. The `block` is the block number to start indexing from. The Graph will copy the data of the base subgraph up to and including the given block and then continue indexing the new subgraph from that block on.
The `base` and `block` values can be found by deploying two subgraphs: one for the base indexing and one with grafting
## Deploying the Base Subgraph
-1. Go to [The Graph Studio UI](https://thegraph.com/studio/) and create a subgraph on Goerli testnet called `graft-example`
+1. Go to [Subgraph Studio](https://thegraph.com/studio/) and create a subgraph on Sepolia testnet called `graft-example`
2. Follow the directions in the `AUTH & DEPLOY` section on your subgraph page in the `graft-example` folder from the repo
3. Once finished, verify the subgraph is indexing properly. If you run the following command in The Graph Playground
@@ -124,14 +124,14 @@ It returns something like this:
"data": {
"withdrawals": [
{
- "id": "0x13098b538a61837e9f29b32fb40527bbbe63c9120c250242b02b69bb42c287e5-5",
+ "id": "0xe8323d21c4f104607b10b0fff9fc24b9612b9488795dea8196b2d5f980d3dc1d0a000000",
"amount": "0",
- "when": "1664367528"
+ "when": "1716394824"
},
{
- "id": "0x800c92fcc0edbd26f74e19ad058c62008a47c7789f2064023b987028343dd498-3",
+ "id": "0xea1cee35036f2cacb72f2a336be3e54ab911f5bebd58f23400ebb8ecc5cfc45203000000",
"amount": "0",
- "when": "1664367648"
+ "when": "1716394848"
}
]
}
@@ -144,8 +144,8 @@ Once you have verified the subgraph is indexing properly, you can quickly update
The graft replacement subgraph.yaml will have a new contract address. This could happen when you update your dapp, redeploy a contract, etc.
-1. Go to [The Graph Studio UI](https://thegraph.com/studio/) and create a subgraph on Goerli testnet called `graft-replacement`
-2. Create a new manifest. The `subgraph.yaml` for `graph-replacement` contains a different contract address and new information about how it should graft. These are the `block` of the [last event emitted](https://goerli.etherscan.io/tx/0x800c92fcc0edbd26f74e19ad058c62008a47c7789f2064023b987028343dd498) you care about by the old contract and the `base` of the old subgraph. The `base` subgraph ID is the `Deployment ID` of your original `graph-example` subgraph. You can find this in The Graph Studio UI.
+1. Go to [Subgraph Studio](https://thegraph.com/studio/) and create a subgraph on Sepolia testnet called `graft-replacement`
+2. Create a new manifest. The `subgraph.yaml` for `graft-replacement` contains a different contract address and new information about how it should graft. These are the `block` of the [last event emitted](https://sepolia.etherscan.io/tx/0xea1cee35036f2cacb72f2a336be3e54ab911f5bebd58f23400ebb8ecc5cfc452) by the old contract that you care about, and the `base` of the old subgraph. The `base` subgraph ID is the `Deployment ID` of your original `graft-example` subgraph. You can find this in Subgraph Studio.
3. Follow the directions in the `AUTH & DEPLOY` section on your subgraph page in the `graft-replacement` folder from the repo
4. Once finished, verify the subgraph is indexing properly. If you run the following command in The Graph Playground
@@ -166,37 +166,37 @@ It should return the following:
"data": {
"withdrawals": [
{
- "id": "0x13098b538a61837e9f29b32fb40527bbbe63c9120c250242b02b69bb42c287e5-5",
+ "id": "0xe8323d21c4f104607b10b0fff9fc24b9612b9488795dea8196b2d5f980d3dc1d0a000000",
"amount": "0",
- "when": "1664367528"
+ "when": "1716394824"
},
{
- "id": "0x800c92fcc0edbd26f74e19ad058c62008a47c7789f2064023b987028343dd498-3",
+ "id": "0xea1cee35036f2cacb72f2a336be3e54ab911f5bebd58f23400ebb8ecc5cfc45203000000",
"amount": "0",
- "when": "1664367648"
+ "when": "1716394848"
},
{
- "id": "0xb4010e4c76f86762beb997a13cf020231778eaf7c64fa3b7794971a5e6b343d3-22",
+ "id": "0x2410475f76a44754bae66d293d14eac34f98ec03a3689cbbb56a716d20b209af06000000",
"amount": "0",
- "when": "1664371512"
+ "when": "1716429732"
}
]
}
}
```
-You can see that the `graft-replacement` subgraph is indexing from older `graph-example` data and newer data from the new contract address. The original contract emitted two `Withdrawal` events, [Event 1](https://goerli.etherscan.io/tx/0x800c92fcc0edbd26f74e19ad058c62008a47c7789f2064023b987028343dd498) and [Event 2](https://goerli.etherscan.io/address/0x4ed995e775d3629b0566d2279f058729ae6ea493). The new contract emitted one `Withdrawal` after, [Event 3](https://goerli.etherscan.io/tx/0xb4010e4c76f86762beb997a13cf020231778eaf7c64fa3b7794971a5e6b343d3). The two previously indexed transactions (Event 1 and 2) and the new transaction (Event 3) were combined together in the `graft-replacement` subgraph.
+You can see that the `graft-replacement` subgraph is serving data indexed from the older `graft-example` subgraph together with newer data from the new contract address. The original contract emitted two `Withdrawal` events, [Event 1](https://sepolia.etherscan.io/tx/0xe8323d21c4f104607b10b0fff9fc24b9612b9488795dea8196b2d5f980d3dc1d) and [Event 2](https://sepolia.etherscan.io/tx/0xea1cee35036f2cacb72f2a336be3e54ab911f5bebd58f23400ebb8ecc5cfc452). The new contract emitted one `Withdrawal` event afterward, [Event 3](https://sepolia.etherscan.io/tx/0x2410475f76a44754bae66d293d14eac34f98ec03a3689cbbb56a716d20b209af). The two previously indexed transactions (Event 1 and 2) and the new transaction (Event 3) were combined in the `graft-replacement` subgraph.
-Congrats! You have succesfully grafted a subgraph onto another subgraph.
+Congrats! You have successfully grafted a subgraph onto another subgraph.
## Additional Resources
If you want more experience with grafting, here are a few examples for popular contracts:
-- [Curve](https://github.com/messari/subgraphs/blob/master/subgraphs/curve-finance/protocols/curve-finance/templates/curve.template.yaml)
+- [Curve](https://github.com/messari/subgraphs/blob/master/subgraphs/curve-finance/protocols/curve-finance/config/templates/curve.template.yaml)
- [ERC-721](https://github.com/messari/subgraphs/blob/master/subgraphs/erc721-metadata/subgraph.yaml)
-- [Uniswap](https://github.com/messari/subgraphs/blob/master/subgraphs/uniswap-v3/protocols/uniswap-v3/config/templates/uniswap.v3.template.yaml),
+- [Uniswap](https://github.com/messari/subgraphs/blob/master/subgraphs/uniswap-v3-forks/protocols/uniswap-v3/config/templates/uniswapV3Template.yaml)
-To become even more of a Graph expert, consider learning about other ways to handle changes in underlying datasources. Alternatives like [Data Source Templates](developing/creating-a-subgraph/#data-source-templates) can achieve similar results
+To become even more of a Graph expert, consider learning about other ways to handle changes in underlying data sources. Alternatives like [Data Source Templates](/developing/creating-a-subgraph/#data-source-templates) can achieve similar results.
> Note: A lot of material from this article was taken from the previously published [Arweave article](/cookbook/arweave/)
diff --git a/website/pages/ar/cookbook/how-to-secure-api-keys-using-nextjs-server-components.mdx b/website/pages/ar/cookbook/how-to-secure-api-keys-using-nextjs-server-components.mdx
new file mode 100644
index 000000000000..23845a17520d
--- /dev/null
+++ b/website/pages/ar/cookbook/how-to-secure-api-keys-using-nextjs-server-components.mdx
@@ -0,0 +1,123 @@
+---
+title: How to Secure API Keys Using Next.js Server Components
+---
+
+## Overview
+
+We can use [Next.js server components](https://nextjs.org/docs/app/building-your-application/rendering/server-components) to properly secure our API key from exposure in the frontend of our dapp. To further increase our API key security, we can also [restrict our API key to certain subgraphs or domains in Subgraph Studio](/cookbook/upgrading-a-subgraph/#securing-your-api-key).
+
+In this cookbook, we will go over how to create a Next.js server component that queries a subgraph while also hiding the API key from the frontend.
+
+### Caveats
+
+- Next.js server components do not protect API keys from being drained using denial of service attacks.
+- The Graph Network gateways have denial-of-service detection and mitigation strategies in place; however, using server components may weaken these protections.
+- Next.js server components introduce centralization risks as the server can go down.
+
+### Why It's Needed
+
+In a standard React application, API keys included in the frontend code can be exposed to the client-side, posing a security risk. While `.env` files are commonly used, they don't fully protect the keys since React's code is executed on the client side, exposing the API key in the headers. Next.js Server Components address this issue by handling sensitive operations server-side.
+
+### Using client-side rendering to query a subgraph
+
+![Client-side rendering](/img/api-key-client-side-rendering.png)
+
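+For comparison, below is a sketch of the client-side pattern this guide avoids (the variable and function names are illustrative). Because `NEXT_PUBLIC_` environment variables are inlined into the browser bundle, the key would be visible to anyone inspecting the page or its network requests:
+
+```typescript
+// Anti-pattern sketch: an API key referenced from client-side code.
+const API_KEY = process.env.NEXT_PUBLIC_GRAPH_API_KEY // exposed to the client
+
+export async function queryFromTheBrowser() {
+  const response = await fetch(
+    `https://gateway-arbitrum.network.thegraph.com/api/${API_KEY}/subgraphs/id/HUZDsRpEVP2AvzDCyzDHtdc64dyDxx8FQjzsmqSg4H3B`,
+    {
+      method: 'POST',
+      headers: { 'Content-Type': 'application/json' },
+      body: JSON.stringify({ query: '{ factories(first: 5) { id } }' }),
+    },
+  )
+  return response.json()
+}
+```
+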
+### Prerequisites
+
+- An API key from [Subgraph Studio](https://thegraph.com/studio)
+- Basic knowledge of Next.js and React.
+- An existing Next.js project that uses the [App Router](https://nextjs.org/docs/app).
+
+## Step-by-Step Cookbook
+
+### Step 1: Set Up Environment Variables
+
+1. In our Next.js project root, create a `.env.local` file.
+2. Add our API key: `API_KEY=<API_KEY>`.
+
+### Step 2: Create a Server Component
+
+1. In our `components` directory, create a new file, `ServerComponent.js`.
+2. Use the provided example code to set up the server component.
+
+### Step 3: Implement Server-Side API Request
+
+In `ServerComponent.js`, add the following code:
+
+```javascript
+const API_KEY = process.env.API_KEY
+
+export default async function ServerComponent() {
+ const response = await fetch(
+ `https://gateway-arbitrum.network.thegraph.com/api/${API_KEY}/subgraphs/id/HUZDsRpEVP2AvzDCyzDHtdc64dyDxx8FQjzsmqSg4H3B`,
+ {
+ method: 'POST',
+ headers: {
+ 'Content-Type': 'application/json',
+ },
+ body: JSON.stringify({
+ query: /* GraphQL */ `
+ {
+ factories(first: 5) {
+ id
+ poolCount
+ txCount
+ totalVolumeUSD
+ }
+ }
+ `,
+ }),
+ },
+ )
+
+ const responseData = await response.json()
+ const data = responseData.data
+
+  return (
+    <div>
+      <h3>Server Component</h3>
+      {data ? (
+        <ul>
+          {data.factories.map((factory) => (
+            <li key={factory.id}>
+              <p>ID: {factory.id}</p>
+              <p>Pool Count: {factory.poolCount}</p>
+              <p>Transaction Count: {factory.txCount}</p>
+              <p>Total Volume USD: {factory.totalVolumeUSD}</p>
+            </li>
+          ))}
+        </ul>
+      ) : (
+        <p>Loading data...</p>
+      )}
+    </div>
+  )
+}
+```
+
+### Step 4: Use the Server Component
+
+1. In our page file (e.g., `pages/index.js`), import `ServerComponent`.
+2. Render the component:
+
+```javascript
+import ServerComponent from './components/ServerComponent'
+
+export default function Home() {
+ return (
+    <main>
+      <ServerComponent />
+    </main>
+ )
+}
+```
+
+### Step 5: Run and Test Our Dapp
+
+Start our Next.js application using `npm run dev`. Verify that the server component is fetching data without exposing the API key.
+
+![Server-side rendering](/img/api-key-server-side-rendering.png)
+
+### Conclusion
+
+By utilizing Next.js Server Components, we've effectively hidden the API key from the client-side, enhancing the security of our application. This method ensures that sensitive operations are handled server-side, away from potential client-side vulnerabilities. Finally, be sure to explore [other API key security measures](/cookbook/upgrading-a-subgraph/#securing-your-api-key) to increase your API key security even further.
diff --git a/website/pages/ar/cookbook/immutable-entities-bytes-as-ids.mdx b/website/pages/ar/cookbook/immutable-entities-bytes-as-ids.mdx
new file mode 100644
index 000000000000..f38c33385604
--- /dev/null
+++ b/website/pages/ar/cookbook/immutable-entities-bytes-as-ids.mdx
@@ -0,0 +1,176 @@
+---
+title: Subgraph Best Practice 3 - Improve Indexing and Query Performance by Using Immutable Entities and Bytes as IDs
+---
+
+## TLDR
+
+Using Immutable Entities and Bytes for IDs in our `schema.graphql` file [significantly improves](https://thegraph.com/blog/two-simple-subgraph-performance-improvements/) indexing speed and query performance.
+
+## Immutable Entities
+
+To make an entity immutable, we simply add `(immutable: true)` to the entity's `@entity` directive.
+
+```graphql
+type Transfer @entity(immutable: true) {
+ id: Bytes!
+ from: Bytes!
+ to: Bytes!
+ value: BigInt!
+}
+```
+
+By making the `Transfer` entity immutable, graph-node is able to process the entity more efficiently, improving indexing speeds and query responsiveness.
+
+Immutable Entities are entities that will not change once created. An ideal candidate for an Immutable Entity is one that directly logs on-chain event data, such as a `Transfer` event being recorded as a `Transfer` entity.
+
+### Under the hood
+
+Mutable entities have a 'block range' indicating their validity. Updating these entities requires the graph node to adjust the block range of previous versions, increasing database workload. Queries also need filtering to find only live entities. Immutable entities are faster because they are all live and, since they won't change, no checks or updates are required while writing, and no filtering is required during queries.
+
+### When not to use Immutable Entities
+
+If you have a field like `status` that needs to be modified over time, then you should not make the entity immutable. Otherwise, you should use immutable entities whenever possible.
+
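+For example, here is a minimal sketch of a handler that mutates such an entity. The `Order` entity, its `status` field, and the `OrderFilled` event are hypothetical and would come from your own schema and ABI; the point is that an entity updated after creation must stay mutable, i.e. declared with plain `@entity`:
+
+```typescript
+import { Order } from '../generated/schema' // hypothetical mutable entity
+import { OrderFilled as OrderFilledEvent } from '../generated/Exchange/Exchange' // hypothetical event
+
+export function handleOrderFilled(event: OrderFilledEvent): void {
+  // Load the previously created entity and update its status in place.
+  let order = Order.load(event.params.orderId)
+  if (order == null) {
+    return
+  }
+  order.status = 'FILLED' // updates like this are only possible on mutable entities
+  order.save()
+}
+```
+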
+## Bytes as IDs
+
+Every entity requires an ID. In the previous example, we can see that the ID is already of the Bytes type.
+
+```graphql
+type Transfer @entity(immutable: true) {
+ id: Bytes!
+ from: Bytes!
+ to: Bytes!
+ value: BigInt!
+}
+```
+
+While other ID types are possible, such as String and Int8, it is recommended to use the Bytes type for all IDs. Character strings take twice as much space as Byte strings to store binary data, and comparisons of UTF-8 character strings must take the locale into account, which is much more expensive than the bytewise comparison used for Byte strings.
+
+### Reasons to Not Use Bytes as IDs
+
+1. If entity IDs must be human-readable such as auto-incremented numerical IDs or readable strings, Bytes for IDs should not be used.
+2. If integrating a subgraph’s data with another data model that does not use Bytes as IDs, Bytes as IDs should not be used.
+3. Indexing and querying performance improvements are not desired.
+
+### Concatenating With Bytes as IDs
+
+It is a common practice in many subgraphs to use string concatenation to combine two properties of an event into a single ID, such as using `event.transaction.hash.toHex() + "-" + event.logIndex.toString()`. However, as this returns a string, this significantly impedes subgraph indexing and querying performance.
+
+Instead, we should use the `concatI32()` method to concatenate event properties. This strategy results in a `Bytes` ID that is much more performant.
+
+```typescript
+export function handleTransfer(event: TransferEvent): void {
+ let entity = new Transfer(event.transaction.hash.concatI32(event.logIndex.toI32()))
+ entity.from = event.params.from
+ entity.to = event.params.to
+ entity.value = event.params.value
+
+ entity.blockNumber = event.block.number
+ entity.blockTimestamp = event.block.timestamp
+ entity.transactionHash = event.transaction.hash
+
+ entity.save()
+}
+```
+
+### Sorting With Bytes as IDs
+
+Sorting using Bytes as IDs is not optimal as seen in this example query and response.
+
+Query:
+
+```graphql
+{
+ transfers(first: 3, orderBy: id) {
+ id
+ from
+ to
+ value
+ }
+}
+```
+
+Query response:
+
+```json
+{
+ "data": {
+ "transfers": [
+ {
+ "id": "0x00010000",
+ "from": "0xabcd...",
+ "to": "0x1234...",
+ "value": "256"
+ },
+ {
+ "id": "0x00020000",
+ "from": "0xefgh...",
+ "to": "0x5678...",
+ "value": "512"
+ },
+ {
+ "id": "0x01000000",
+ "from": "0xijkl...",
+ "to": "0x9abc...",
+ "value": "1"
+ }
+ ]
+ }
+}
+```
+
+The IDs are returned as hex.
+
+To improve sorting, we should create another field on the entity that is a BigInt.
+
+```graphql
+type Transfer @entity {
+ id: Bytes!
+ from: Bytes! # address
+ to: Bytes! # address
+  value: BigInt! # uint256
+ tokenId: BigInt! # uint256
+}
+```
+
+This will allow for sorting to be optimized sequentially.
+
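+In the mapping, the extra field is populated like any other (a sketch that mirrors the handler shown earlier, assuming a hypothetical `Transfer` event exposing `from`, `to`, `value`, and `tokenId` parameters, plus the usual generated bindings):
+
+```typescript
+import { Transfer } from '../generated/schema'
+import { Transfer as TransferEvent } from '../generated/Token/Token' // assumed codegen output
+
+export function handleTransfer(event: TransferEvent): void {
+  // Bytes id for fast writes and comparisons...
+  let entity = new Transfer(event.transaction.hash.concatI32(event.logIndex.toI32()))
+  entity.from = event.params.from
+  entity.to = event.params.to
+  entity.value = event.params.value
+  // ...plus a BigInt field that queries can order by.
+  entity.tokenId = event.params.tokenId
+  entity.save()
+}
+```
+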
+Query:
+
+```graphql
+{
+ transfers(first: 3, orderBy: tokenId) {
+ id
+ tokenId
+ }
+}
+```
+
+Query Response:
+
+```json
+{
+ "data": {
+ "transfers": [
+ {
+ "id": "0x…",
+ "tokenId": "1"
+ },
+ {
+ "id": "0x…",
+ "tokenId": "2"
+ },
+ {
+ "id": "0x…",
+ "tokenId": "3"
+ }
+ ]
+ }
+}
+```
+
+## Conclusion
+
+Using both Immutable Entities and Bytes as IDs has been shown to markedly improve subgraph efficiency. Specifically, tests have highlighted up to a 28% increase in query performance and up to a 48% acceleration in indexing speeds.
+
+Read more about using Immutable Entities and Bytes as IDs in this blog post by David Lutterkort, a Software Engineer at Edge & Node: [Two Simple Subgraph Performance Improvements](https://thegraph.com/blog/two-simple-subgraph-performance-improvements/).
diff --git a/website/pages/ar/cookbook/near.mdx b/website/pages/ar/cookbook/near.mdx
index 40e13f93cbd1..fc1d46596abe 100644
--- a/website/pages/ar/cookbook/near.mdx
+++ b/website/pages/ar/cookbook/near.mdx
@@ -2,13 +2,11 @@
title: Building Subgraphs on NEAR
---
-> يتوفر دعم NEAR في Graph Node وفي الخدمة المستضافة(Hosted Service) في مرحلة beta: يرجى التواصل بـ near@thegraph.com إذا كانت لديك أي أسئلة حول بناء subgraphs NEAR!
-
This guide is an introduction to building subgraphs that index smart contracts on the [NEAR blockchain](https://docs.near.org/).
## What is NEAR?
-[NEAR](https://near.org/) is a smart contract platform for building decentralized applications. Visit the [official documentation](https://docs.near.org/docs/concepts/new-to-near) for more information.
+[NEAR](https://near.org/) is a smart contract platform for building decentralized applications. Visit the [official documentation](https://docs.near.org/concepts/basics/protocol) for more information.
## What are NEAR subgraphs?
@@ -19,7 +17,7 @@ title: بناء Subgraphs على NEAR
- Block handlers: these are run on every new block
- Receipt handlers: these are run every time a message is executed at a specified account
-[ من وثائق NEAR ](https://docs.near.org/docs/concepts/transaction#receipt):
+[From the NEAR documentation](https://docs.near.org/build/data-infrastructure/lake-data-structures/receipt):
> A Receipt is the only actionable object in the system. When we talk about "processing an action" on the NEAR platform, this eventually means "applying receipts" at some point.
@@ -29,13 +27,13 @@ title: بناء Subgraphs على NEAR
`graphprotocol/graph-ts@` is a library of subgraph-specific types.
-تطوير NEAR subgraph يتطلب `graph-cli` بإصدار أعلى من `0.23.0` و `graph-ts` بإصدار أعلى من `0.23.0`.
+NEAR subgraph development requires `graph-cli` above version `0.23.0`, and `graph-ts` above version `0.23.0`.
> Building a NEAR subgraph is very similar to building a subgraph that indexes Ethereum.
There are three aspects of subgraph definition:
-**subgraph.yaml:** الـ subgraph manifest ، وتحديد مصادر البيانات ذات الأهمية ، وكيف يجب أن تتم معالجتها.علما أن NEAR هو `نوع` جديد لمصدر البيانات.
+**subgraph.yaml:** the subgraph manifest, defining the data sources of interest and how they should be processed. NEAR is a new `kind` of data source.
**schema.graphql:** a schema file that defines what data is stored for your subgraph, and how to query it via GraphQL. The requirements for NEAR subgraphs are covered by [the existing documentation](/developing/creating-a-subgraph#the-graphql-schema).
@@ -73,8 +71,8 @@ dataSources:
```
- NEAR subgraphs introduce a new `kind` of data source (`near`)
-- يجب أن يتوافق الـ `network` مع شبكة على Graph Node المضيفة. في الخدمة المستضافة ، الشبكة الرئيسية لـ NEAR هي `near-mainnet` ، وشبكة NEAR's testnet هي `near-testnet`
-- NEAR data sources introduce an optional `source.account` field, which is a human-readable ID corresponding to a [NEAR account](https://docs.near.org/docs/concepts/account). This can be an account or a sub-account.
+- The `network` should correspond to a network on the hosting Graph Node. On Subgraph Studio, NEAR's mainnet is `near-mainnet`, and NEAR's testnet is `near-testnet`
+- NEAR data sources introduce an optional `source.account` field, which is a human-readable ID corresponding to a [NEAR account](https://docs.near.org/concepts/protocol/account-model). This can be an account or a sub-account.
- NEAR data sources introduce an alternative optional `source.accounts` field, which contains optional suffixes and prefixes. At least one prefix or suffix must be specified; they will match any account starting or ending with the listed values, respectively. The example below would match: `[app|good].*[morning.near|morning.testnet]`. If only a list of prefixes or suffixes is necessary, the other field can be omitted.
```yaml
@@ -90,7 +88,7 @@ accounts:
NEAR data sources support two types of handlers:
- `blockHandlers`: run on every new NEAR block. No `source.account` is required.
-- `receiptHandlers`: يتم تشغيلها في كل استلام حيث يكون مصدر البيانات`source.account` هو المستلم. لاحظ أنه تتم معالجة المطابقات التامة فقط (يجب إضافة حسابات فرعية [subaccounts](https://docs.near.org/docs/concepts/account#subaccounts) كمصادر بيانات مستقلة).
+- `receiptHandlers`: run on every receipt where the data source's `source.account` is the recipient. Note that only exact matches are processed ([subaccounts](https://docs.near.org/tutorials/crosswords/basics/add-functions-call#create-a-subaccount) must be added as independent data sources).
### Schema Definition
@@ -165,7 +163,7 @@ class ReceiptWithOutcome {
These types are passed to block & receipt handlers:
- Block handlers will receive a `Block`
-- معالجات الاستلام ستتلقى`ReceiptWithOutcome`
+- Receipt handlers will receive a `ReceiptWithOutcome`
Otherwise, the rest of the [AssemblyScript API](/developing/assemblyscript-api) is available to NEAR subgraph developers during mapping execution.
@@ -175,34 +173,35 @@ This includes a new JSON parsing function - logs on NEAR are frequently emitted
Once you have a built subgraph, it is time to deploy it to a Graph Node for indexing. NEAR subgraphs can be deployed to Graph Node versions `>=v0.26.x` (this version has not yet been tagged and released).
-تدعم Graph's Hosted Service حاليًا فهرسة NEAR mainnet و testnet في مرحلة beta، وذلك باستخدام أسماء الشبكات التالية:
+Subgraph Studio and the upgrade Indexer on The Graph Network currently support indexing NEAR mainnet and testnet in beta, with the following network names:
- `near-mainnet`
- `near-testnet`
-More information on creating and deploying subgraphs on the Hosted Service can be found [here](/deploying/deploying-a-subgraph-to-hosted).
+More information on creating and deploying subgraphs on Subgraph Studio can be found [here](/deploying/deploying-a-subgraph-to-studio).
-كتمهيد سريع - الخطوة الأولى هي "إنشاء" subgraph خاص بك - يجب القيام بذلك مرة واحدة فقط. على Hosted Service ، يمكن القيام بذلك من [Dashboard](https://thegraph.com/hosted-service/dashboard): الخاص بك "Add Subgraph".
+As a quick primer - the first step is to "create" your subgraph - this only needs to be done once. On Subgraph Studio, this can be done from [your Dashboard](https://thegraph.com/studio/): "Create a subgraph".
Once your subgraph has been created, you can deploy it using the `graph deploy` command:
```sh
-$ graph create --node subgraph/name # creates a subgraph on a local Graph Node (on the Hosted Service, this is done via the UI)
-$ graph deploy --node --ipfs https://api.thegraph.com/ipfs/ # uploads the build files to a specified IPFS endpoint, and then deploys the subgraph to a specified Graph Node based on the manifest IPFS hash
+$ graph create --node # creates a subgraph on a local Graph Node (on Subgraph Studio, this is done via the UI)
+$ graph deploy --node --ipfs https://api.thegraph.com/ipfs/ # uploads the build files to a specified IPFS endpoint, and then deploys the subgraph to a specified Graph Node based on the manifest IPFS hash
```
The node configuration will depend on where the subgraph is being deployed.
-### الخدمة المستضافة
+### Subgraph Studio
```sh
-graph deploy --node https://api.thegraph.com/deploy/ --ipfs https://api.thegraph.com/ipfs/ --access-token
+graph auth --studio
+graph deploy --studio
```
### Local Graph Node (based on default configuration)
```sh
-graph deploy --node http://localhost:8020/ --ipfs http://localhost:5001
+graph deploy --node http://localhost:8020/ --ipfs http://localhost:5001
```
Once your subgraph has been deployed, it will be indexed by Graph Node. You can check its progress by querying the subgraph itself:
@@ -233,7 +232,7 @@ The GraphQL endpoint for NEAR subgraphs is determined by the schema definition,
## Example Subgraphs
-فيما يلي بعض الأمثلة على الـ subgraphs للرجوع إليها:
+Here are some example subgraphs for reference:
[NEAR Blocks](https://github.com/graphprotocol/graph-tooling/tree/main/examples/near-blocks)
@@ -281,4 +280,4 @@ If it is a general question about subgraph development, there is a lot more info
## References
-- [وثائق مطور NEAR](https://docs.near.org/docs/develop/basics/getting-started)
+- [NEAR developer documentation](https://docs.near.org/tutorials/crosswords/basics/set-up-skeleton)
diff --git a/website/pages/ar/cookbook/pruning.mdx b/website/pages/ar/cookbook/pruning.mdx
new file mode 100644
index 000000000000..f22a2899f1de
--- /dev/null
+++ b/website/pages/ar/cookbook/pruning.mdx
@@ -0,0 +1,41 @@
+---
+title: Subgraph Best Practice 1 - Improve Query Speed with Subgraph Pruning
+---
+
+## TLDR
+
+[Pruning](/developing/creating-a-subgraph/#prune) removes archival entities from the subgraph’s database up to a given block. Removing unused entities from a subgraph’s database improves query performance, often dramatically. Using `indexerHints` is an easy way to prune a subgraph.
+
+## How to Prune a Subgraph With `indexerHints`
+
+Add a section called `indexerHints` in the manifest.
+
+`indexerHints` has three `prune` options:
+
+- `prune: auto`: Retains the minimum necessary history as set by the Indexer, optimizing query performance. This is the generally recommended setting and is the default for all subgraphs created by `graph-cli` >= 0.66.0.
+- `prune: <Number of blocks to retain>`: Sets a custom limit on the number of historical blocks to retain.
+- `prune: never`: No pruning of historical data; retains the entire history and is the default if there is no `indexerHints` section. `prune: never` should be selected if [Time Travel Queries](/querying/graphql-api/#time-travel-queries) are desired.
+
+We can add `indexerHints` to our subgraphs by updating our `subgraph.yaml`:
+
+```yaml
+specVersion: 1.0.0
+schema:
+ file: ./schema.graphql
+indexerHints:
+ prune: auto
+dataSources:
+ - kind: ethereum/contract
+ name: Contract
+ network: mainnet
+```
+
+## Important Considerations
+
+- If [Time Travel Queries](/querying/graphql-api/#time-travel-queries) are desired as well as pruning, pruning must be performed accurately to retain Time Travel Query functionality. Due to this, it is generally not recommended to use `indexerHints: prune: auto` with Time Travel Queries. Instead, prune using `indexerHints: prune: <Number of blocks to retain>` to accurately prune to a block height that preserves the historical data required by Time Travel Queries, or use `prune: never` to maintain all data; see the sketch after this list for what such a query looks like.
+
+- It is not possible to [graft](/cookbook/grafting/) at a block height that has been pruned. If grafting is routinely performed and pruning is desired, it is recommended to use `indexerHints: prune: <Number of blocks to retain>` that will accurately retain a set number of blocks (e.g., enough for six months).
+
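+For reference, a Time Travel Query pins its results to a historical block, which is exactly the history that pruning discards. A minimal sketch of sending one from TypeScript (the endpoint, entity name, and block number are hypothetical):
+
+```typescript
+// Hypothetical subgraph endpoint; the query only succeeds if the subgraph
+// still retains history for the requested block.
+const ENDPOINT = 'https://api.studio.thegraph.com/query/<ID>/<SUBGRAPH_NAME>/<VERSION>'
+
+async function accountsAtBlock(blockNumber: number) {
+  // `block: { number: ... }` is what makes this a Time Travel Query.
+  const query = `{ accounts(first: 5, block: { number: ${blockNumber} }) { id } }`
+  const response = await fetch(ENDPOINT, {
+    method: 'POST',
+    headers: { 'Content-Type': 'application/json' },
+    body: JSON.stringify({ query }),
+  })
+  return response.json()
+}
+```
+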
+## Conclusion
+
+Pruning using `indexerHints` is a best practice for subgraph development, offering significant query performance improvements.
diff --git a/website/pages/ar/cookbook/subgraph-debug-forking.mdx b/website/pages/ar/cookbook/subgraph-debug-forking.mdx
index 0d331deab109..44a8bfa28c2c 100644
--- a/website/pages/ar/cookbook/subgraph-debug-forking.mdx
+++ b/website/pages/ar/cookbook/subgraph-debug-forking.mdx
@@ -2,7 +2,7 @@
title: Quick and Easy Subgraph Debugging Using Forks
---
-كما هو الحال مع العديد من الأنظمة التي تعالج كميات كبيرة من البيانات ، قد يستغرق مفهرسو The Graph أو (Graph nodes) بعض الوقت لمزامنة الـ subgraph الخاص بك مع blockchain المستهدف. التناقض بين التغييرات السريعة بغرض تصحيح الأخطاء وأوقات الانتظار الطويلة اللازمة للفهرسة يؤدي إلى نتائج عكسية للغاية ونحن ندرك ذلك جيدًا. ولهذا السبب نقدم **subgraph forking ** ، الذي تم تطويره بواسطة [ LimeChain ](https://limechain.tech/) ، وفي هذه المقالة سنوضح لكم كيف يمكن استخدام هذه الميزة لتسريع تصحيح أخطاء الـ subgraph بشكل كبير!
+As with many systems processing large amounts of data, The Graph's Indexers (Graph Nodes) may take quite some time to sync your subgraph up with the target blockchain. The discrepancy between quick changes made for debugging purposes and the long wait times needed for indexing is extremely counterproductive, and we are well aware of that. This is why we are introducing **subgraph forking**, developed by [LimeChain](https://limechain.tech/), and in this article I will show you how this feature can be used to substantially speed up subgraph debugging!
## Ok, what is it?
@@ -12,9 +12,9 @@ title: Quick and Easy Subgraph Debugging Using Forks
## What?! How?
-عندما تنشر subgraph إلى Graph node بعيدة للقيام بالفهرسة ويفشل عند الكتلة _ X _ ، فإن الخبر الجيد هو أن Graph node ستظل تقدم استعلامات GraphQL باستخدام مخزنها(store)، والذي تمت مزامنته للكتلة(block) _ X _. هذا عظيم! هذا يعني أنه يمكننا الاستفادة من هذا المخزن "المحدث" لإصلاح الأخطاء التي تظهر عند فهرسة الكتلة _ X _.
+When you deploy a subgraph to a remote Graph Node for indexing and it fails at block _X_, the good news is that the Graph Node will still serve GraphQL queries using its store, which is synced-up to block _X_. That's great! This means we can take advantage of this "up-to-date" store to fix the bugs arising when indexing block _X_.
-باختصار ، سنقوم _ بتفريع (fork) الـ subgraph الفاشل _ من Graph node بعيدة والتي تضمن فهرسة الـ subgraph للكتلة _ X _ وذلك من أجل توفير الـ subgraph المنشور محليًا والذي يتم تصحيحه عندالكتلة*X* مع عرض محدث لحالة الفهرسة.
+In a nutshell, we are going to _fork the failing subgraph_ from a remote Graph Node that is guaranteed to have the subgraph indexed up to block _X_ in order to provide the locally deployed subgraph being debugged at block _X_ an up-to-date view of the indexing state.
## Please, show me some code!
@@ -44,12 +44,12 @@ export function handleUpdatedGravatar(event: UpdatedGravatar): void {
}
```
-Oops, how unfortunate, when I deploy my perfect looking subgraph to the [Hosted Service](https://thegraph.com/hosted-service/) it fails with the _"Gravatar not found!"_ error.
+Oops, how unfortunate: when I deploy my perfect-looking subgraph to [Subgraph Studio](https://thegraph.com/studio/), it fails with the _"Gravatar not found!"_ error.
The usual way to attempt a fix is:
1. Make a change in the mappings source, which you believe will solve the issue (while I know it won't).
-2. Re-deploy the subgraph to the [Hosted Service](https://thegraph.com/hosted-service/) (or another remote Graph node).
+2. Re-deploy the subgraph to [Subgraph Studio](https://thegraph.com/studio/) (or another remote Graph Node).
3. Wait for it to sync up.
4. If it breaks again, go back to 1!
@@ -57,9 +57,9 @@ Oops, how unfortunate, when I deploy my perfect looking subgraph to the [Hosted
Using **subgraph forking**, we can essentially eliminate this step. Here is how it looks:
-0. قم بتجهيز Graph node محلية بمجموعة **_fork-base_** مناسبة.
+0. Spin up a local Graph Node with the **_appropriate fork-base_** set.
1. Make a change in the mappings source, which you believe will solve the issue.
-2. قم بالنشر إلى Graph node محلية ، **_وقم بتفريع الـ subgraph الفاشل_**و**_ البدء من الكتلة التي بها المشكلة_**.
+2. Deploy to the local Graph Node, **_forking the failing subgraph_** and **_starting from the problematic block_**.
3. If it breaks again, go back to 1!
Now, you may have two questions:
@@ -69,18 +69,18 @@ Oops, how unfortunate, when I deploy my perfect looking subgraph to the [Hosted
And I answer:
-1. `fork-base` هو عنوان URL "الأساسي" ،فمثلا عند إلحاق _subgraph id_ ، يكون عنوان URL الناتج (`/`) هو GraphQL endpoint صالح لمخزن الـ subgraph.
+1. `fork-base` is the "base" URL, such that when the _subgraph id_ is appended, the resulting URL (`<fork-base>/<subgraph-id>`) is a valid GraphQL endpoint for the subgraph's store.
2. Forking is easy, no need to sweat:
```bash
$ graph deploy --debug-fork --ipfs http://localhost:5001 --node http://localhost:8020
```
-أيضًا ، لا تنس تعيين حقل `dataSources.source.startBlock` في subgraph manifest لرقم الكتلة(block) التي بها المشكلة، حتى تتمكن من تخطي فهرسة الكتل الغير ضرورية والاستفادة من التفريع!
+Also, don't forget to set the `dataSources.source.startBlock` field in the subgraph manifest to the number of the problematic block, so you can skip indexing unnecessary blocks and take advantage of the fork!
So, here is what I do:
-0. أقوم بتجهيز Graph Node محلية ([هنا كيف تقوم به](https://github.com/graphprotocol/graph-node#running-a-local-graph-node)) مع تعيين خيار `fork-base` إلى `https://api.thegraph.com/subgraphs/id/` ، نظرا لأنني سأقوم بتفريع(fork) الـ subgraph الذي به أخطاء والذي نشرته سابقا من الـ [HostedService](https://thegraph.com/hosted-service/).
+1. I spin up a local Graph Node ([here is how to do it](https://github.com/graphprotocol/graph-node#running-a-local-graph-node)) with the `fork-base` option set to: `https://api.thegraph.com/subgraphs/id/`, since I will fork a subgraph, the buggy one I deployed earlier, from [Subgraph Studio](https://thegraph.com/studio/).
```
$ cargo run -p graph-node --release -- \
@@ -90,13 +90,12 @@ $ cargo run -p graph-node --release -- \
--fork-base https://api.thegraph.com/subgraphs/id/
```
-1. بعد فحص دقيق ، لاحظت أن هناك عدم تطابق في تمثيلات الـ `id` المستخدمة عند فهرسة `Gravatar` في المعالجين الخاصين بي. بينما `handleNewGravatar` يحول (`event.params.id.toHex()`) إلى سداسي ، `handleUpdatedGravatar` يستخدم int32 (`event.params.id.toI32()`) مما يجعل `handleUpdatedGravatar` قلقا من "Gravatar not found!". أنا أجعلهم كلاهما يحولان `id` إلى سداسي.
-2. بعد إجراء التغييرات ، قمت بنشر الـ subgraph الخاص بي على Graph node المحلية **_وتفريع الـsubgraph الفاشل_** وضبط `dataSources.source.startBlock` إلى `6190343` في `subgraph.yaml`:
+2. After careful inspection I notice that there is a mismatch in the `id` representations used when indexing `Gravatar`s in my two handlers. While `handleNewGravatar` converts it to a hex (`event.params.id.toHex()`), `handleUpdatedGravatar` uses an int32 (`event.params.id.toI32()`), which causes `handleUpdatedGravatar` to fail with "Gravatar not found!". I make them both convert the `id` to a hex.
+3. After I make the changes, I deploy my subgraph to the local Graph Node, **_forking the failing subgraph_** and setting `dataSources.source.startBlock` to `6190343` in `subgraph.yaml`:
```bash
$ graph deploy gravity --debug-fork QmNp169tKvomnH3cPXTfGg4ZEhAHA6kEq5oy1XDqAxqHmW --ipfs http://localhost:5001 --node http://localhost:8020
```
-3. لقد قمت بفحص السجلات التي تنتجها Graph node المحلية ، ويبدو أن كل شيء يعمل بشكل جيد.
-4. أقوم بنشر الـ subgraph الخاص بي الخالي من الأخطاء لـ Graph node بعيدة وأعيش في سعادة دائمة
-5. النهاية...
+4. I inspect the logs produced by the local Graph Node and, Hooray!, everything seems to be working.
+5. I deploy my now bug-free subgraph to a remote Graph Node and live happily ever after! (no potatoes tho)
diff --git a/website/pages/ar/cookbook/substreams-powered-subgraphs.mdx b/website/pages/ar/cookbook/substreams-powered-subgraphs.mdx
index 6b84c84358c8..8a7998c325f8 100644
--- a/website/pages/ar/cookbook/substreams-powered-subgraphs.mdx
+++ b/website/pages/ar/cookbook/substreams-powered-subgraphs.mdx
@@ -6,7 +6,7 @@ title: Substreams-powered subgraphs
## Requirements
-This cookbook requires [yarn](https://yarnpkg.com/), [the dependencies necessary for local Substreams development](https://substreams.streamingfast.io/developers-guide/installation-requirements), and the latest version of Graph CLI (>=0.52.0):
+This cookbook requires [yarn](https://yarnpkg.com/), [the dependencies necessary for local Substreams development](https://substreams.streamingfast.io/documentation/consume/installing-the-cli), and the latest version of Graph CLI (>=0.52.0):
```
npm install -g @graphprotocol/graph-cli
@@ -45,7 +45,7 @@ message Contract {
The core logic of the Substreams package is a `map_contract` module in `lib.rs`, which processes every block, filtering for Create calls which did not revert, returning `Contracts`:
-```
+```rust
#[substreams::handlers::map]
fn map_contract(block: eth::v2::Block) -> Result {
let contracts = block
@@ -71,7 +71,7 @@ A Substreams package can be used by a subgraph as long as it has a module which
> The `substreams_entity_change` crate also has a dedicated `Tables` function for simply generating entity changes ([documentation](https://docs.rs/substreams-entity-change/1.2.2/substreams_entity_change/tables/index.html)). The Entity Changes generated must be compatible with the `schema.graphql` entities defined in the `subgraph.graphql` of the corresponding subgraph.
-```
+```rust
#[substreams::handlers::map]
pub fn graph_out(contracts: Contracts) -> Result {
// hash map of name to a table
@@ -90,7 +90,7 @@ pub fn graph_out(contracts: Contracts) -> Result
-> Currently the Subgraph Studio and The Graph Network support Substreams-powered subgraphs which index `mainnet` (Mainnet Ethereum).
+> Currently, Subgraph Studio and The Graph Network support Substreams-powered subgraphs which index `mainnet` (Mainnet Ethereum).
```yaml
specVersion: 0.0.4
diff --git a/website/pages/ar/cookbook/upgrading-a-subgraph.mdx b/website/pages/ar/cookbook/upgrading-a-subgraph.mdx
index aa6675676abe..4181a6b18255 100644
--- a/website/pages/ar/cookbook/upgrading-a-subgraph.mdx
+++ b/website/pages/ar/cookbook/upgrading-a-subgraph.mdx
@@ -10,77 +10,29 @@ The process of upgrading is quick and your subgraphs will forever benefit from t
### Prerequisites
-- You have already deployed a subgraph on the hosted service.
-- The subgraph is indexing a chain available on The Graph Network.
-- You have a wallet with ETH to publish your subgraph on-chain.
-- You have ~10,000 GRT to curate your subgraph so Indexers can begin indexing it.
+- You have a subgraph deployed on the hosted service.
## Upgrading an Existing Subgraph to The Graph Network
-> You can find specific commands for your subgraph in the [Subgraph Studio](https://thegraph.com/studio/).
+
-1. Get the latest version of the graph-cli installed:
+If you are logged in to the hosted service, you can access a simple flow to upgrade your subgraphs from [your dashboard](https://thegraph.com/hosted-service/dashboard), or from an individual subgraph page.
-```sh
-npm install -g @graphprotocol/graph-cli
-```
-
-```sh
-yarn global add @graphprotocol/graph-cli
-```
-
-Make sure your `apiVersion` in subgraph.yaml is `0.0.5` or greater.
-
-2. Inside the subgraph's main project repository, authenticate the subgraph to deploy and build on the studio:
-
-```sh
-graph auth --studio
-```
-
-3. أنشئ الملفات وقم ببناء الـ الفرعيةرسمبياني:
-
-```sh
-graph codegen && graph build
-```
-
-If your subgraph has build errors, refer to the [AssemblyScript Migration Guide](/release-notes/assemblyscript-migration-guide/).
-
-4. Sign into [Subgraph Studio](https://thegraph.com/studio/) with your wallet and deploy the subgraph. You can find your `` in the Studio UI, which is based on the name of your subgraph.
-
-```sh
-graph deploy --studio
-```
+> This process typically takes less than five minutes.
-5. Test queries on the Studio's playground. Here are some examples for the [Sushi - Mainnet Exchange Subgraph](https://thegraph.com/explorer/subgraph?id=0x4bb4c1b0745ef7b4642feeccd0740dec417ca0a0-0&view=Playground):
+1. Select the subgraph(s) you want to upgrade.
+2. Connect or enter the receiving wallet (the wallet that will become the owner of the subgraph).
+3. Click the "Upgrade" button.
-```sh
-{
- users(first: 5) {
- id
- liquidityPositions {
- id
- }
- }
- bundles(first: 5) {
- id
- ethPrice
- }
-}
-```
-
-6. At this point, your subgraph is now deployed on Subgraph Studio, but not yet published to the decentralized network. You can now test the subgraph to make sure it is working as intended using the temporary query URL as seen on top of the right column above. As this name already suggests, this is a temporary URL and should not be used in production.
-
-- Updating is just publishing another version of your existing subgraph on-chain.
-- Because this incurs a cost, it is highly recommended to deploy and test your subgraph in the Subgraph Studio, using the "Development Query URL" before publishing. See an example transaction [here](https://etherscan.io/tx/0xd0c3fa0bc035703c9ba1ce40c1862559b9c5b6ea1198b3320871d535aa0de87b). Prices are roughly around 0.0425 ETH at 100 gwei.
-- Any time you need to update your subgraph, you will be charged an update fee. Because this incurs a cost, it is highly recommended to deploy and test your subgraph on Goerli before deploying to mainnet. It can, in some cases, also require some GRT if there is no signal on that subgraph. In the case there is signal/curation on that subgraph version (using auto-migrate), the taxes will be split.
+That's it! Your subgraphs will be deployed to Subgraph Studio, and published on The Graph Network. You can access the [Subgraph Studio](https://thegraph.com/studio/) to manage your subgraphs, logging in with the wallet specified during the upgrade process.
-7. Publish the subgraph on The Graph's decentralized network by hitting the "Publish" button.
+You'll be able to view your subgraphs live on the decentralized network via [Graph Explorer](https://thegraph.com/explorer).
-You should curate your subgraph with GRT to ensure that it is indexed by Indexers. To save on gas costs, you can curate your subgraph in the same transaction that you publish it to the network. It is recommended to curate your subgraph with at least 10,000 GRT for high quality of service.
+### What next?
-And that's it! After you are done publishing, you'll be able to view your subgraphs live on the decentralized network via [The Graph Explorer](https://thegraph.com/explorer).
+When your subgraph is upgraded, it will automatically be indexed by the upgrade indexer. If the indexed chain is [fully supported by The Graph Network](/developing/supported-networks), you can add some GRT as "signal", to attract more indexers. It is recommended to curate your subgraph with at least 3,000 GRT to attract 2-3 Indexers for higher quality of service.
-Feel free to leverage the [#Curators channel](https://discord.gg/s5HfGMXmbW) on Discord to let Curators know that your subgraph is ready to be signaled. It would also be helpful if you share your expected query volume with them. Therefore, they can estimate how much GRT they should signal on your subgraph.
+You can start to query your subgraph right away on The Graph Network, once you have generated an API key.
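+Once your API key is created (see the next section), querying from any JavaScript/TypeScript environment is a plain POST to the gateway. A minimal sketch, assuming a hypothetical subgraph ID and environment variable name, and the gateway URL format used elsewhere in these docs:
+
+```typescript
+const API_KEY = process.env.GRAPH_API_KEY // hypothetical env var; keep the key out of client-side code
+const SUBGRAPH_ID = '<SUBGRAPH_ID>' // hypothetical: taken from your subgraph's page in Graph Explorer
+
+async function querySubgraph() {
+  const response = await fetch(
+    `https://gateway-arbitrum.network.thegraph.com/api/${API_KEY}/subgraphs/id/${SUBGRAPH_ID}`,
+    {
+      method: 'POST',
+      headers: { 'Content-Type': 'application/json' },
+      // `_meta` is served by every subgraph and is a quick way to check indexing progress.
+      body: JSON.stringify({ query: '{ _meta { block { number } } }' }),
+    },
+  )
+  return response.json()
+}
+```
+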
### Create an API key
@@ -88,20 +40,9 @@ You can generate an API key in Subgraph Studio [here](https://thegraph.com/studi
![API key creation page](/img/api-image.png)
-At the end of each week, an invoice will be generated based on the query fees that have been incurred during this period. This invoice will be paid automatically using the GRT available in your balance. Your balance will be updated after the cost of your query fees are withdrawn. Query fees are paid in GRT via the Arbitrum network. You will need to add GRT to the Arbitrum billing contract to enable your API key via the following steps:
+You can use this API key to query subgraphs on The Graph Network. All users start on the Free Plan, which includes 100,000 free queries per month. Developers can sign up for the Growth Plan by connecting a credit or debit card, or by depositing GRT into the Subgraph Studio billing system.
-- Purchase GRT on an exchange of your choice.
-- Send the GRT to your wallet.
-- On the Billing page in Studio, click on Add GRT.
-
-![Add GRT in billing](/img/Add-GRT-New-Page.png)
-
-- Follow the steps to add your GRT to your billing balance.
-- Your GRT will be automatically bridged to the Arbitrum network and added to your billing balance.
-
-![Billing pane](/img/New-Billing-Pane.png)
-
-> Note: see the [official billing page](../billing.mdx) for full instructions on adding GRT to your billing balance.
+> Note: see the [billing documentation](../billing) for more information on plans, and on managing your billing on Subgraph Studio.
### Securing your API key
@@ -110,13 +51,13 @@ It is recommended that you secure the API by limiting its usage in two ways:
1. Authorized Subgraphs
2. Authorized Domain
-You can secure your API key [here](https://thegraph.com/studio/apikeys/test/).
+You can secure your API key [here](https://thegraph.com/studio/apikeys/).
![Subgraph lockdown page](/img/subgraph-lockdown.png)
### Querying your subgraph on the decentralized network
-Now you can check the indexing status of the Indexers on the network in Graph Explorer (example [here](https://thegraph.com/explorer/subgraph?id=S9ihna8D733WTEShJ1KctSTCvY1VJ7gdVwhUujq4Ejo&view=Indexers)). The green line at the top indicates that at the time of posting 8 Indexers successfully indexed that subgraph. Also in the Indexer tab you can see which Indexers picked up your subgraph.
+Now you can check the indexing status of the Indexers on the network in Graph Explorer (example [here](https://thegraph.com/explorer/subgraphs/Dtj2HicXKpoUjNB7ffdBkMwt3L9Sz3cbENd67AdHu6Vb?view=Indexers&chain=arbitrum-one)). The green line at the top indicates that, at the time of posting, 7 Indexers had successfully indexed that subgraph. In the Indexer tab, you can also see which Indexers picked up your subgraph.
![Rocket Pool subgraph](/img/rocket-pool-subgraph.png)
@@ -144,16 +85,16 @@ More information about the nature of the network and how to handle re-orgs are d
## Updating a Subgraph on the Network
-If you would like to update an existing subgraph on the network, you can do this by deploying a new version of your subgraph to the Subgraph Studio using the Graph CLI.
+If you would like to update an existing subgraph on the network, you can do this by deploying a new version of your subgraph to Subgraph Studio using the Graph CLI.
-1. Make changes to your current subgraph. A good idea is to test small fixes on the Subgraph Studio by publishing to Goerli.
+1. Make changes to your current subgraph.
2. Deploy your subgraph and specify the new version in the command (e.g. v0.0.1, v0.0.2, etc.):
```sh
-graph deploy --studio
+graph deploy --studio --version
```
-3. Test the new version in the Subgraph Studio by querying in the playground
+3. Test the new version in Subgraph Studio by querying in the playground
4. Publish the new version on The Graph Network. Remember that this requires gas (as described in the section above).
### Owner Update Fee: Deep Dive
@@ -180,7 +121,7 @@ Subgraphs are open APIs that external developers are leveraging. Open APIs need
### Updating the Metadata of a Subgraph
-You can update the metadata of your subgraphs without having to publish a new version. The metadata includes the subgraph name, image, description, website URL, source code URL, and categories. Developers can do this by updating their subgraph details in the Subgraph Studio where you can edit all applicable fields.
+You can update the metadata of your subgraphs without having to publish a new version. The metadata includes the subgraph name, image, description, website URL, source code URL, and categories. Developers can do this by updating their subgraph details in Subgraph Studio where you can edit all applicable fields.
Make sure **Update Subgraph Details in Explorer** is checked and click on **Save**. If this is checked, an on-chain transaction will be generated that updates subgraph details in the Explorer without having to publish a new version with a new deployment.
@@ -201,23 +142,13 @@ Follow the steps [here](/managing/deprecating-a-subgraph) to deprecate your subg
The hosted service was set up to allow developers to deploy their subgraphs without any restrictions.
-In order for The Graph Network to truly be decentralized, query fees have to be paid as a core part of the protocol's incentives. For more information on subscribing to APIs and paying the query fees, check out billing documentation [here](/billing/).
-
-### Estimate Query Fees on the Network
-
-While this is not a live feature in the product UI, you can set your maximum budget per query by taking the amount you're willing to pay per month and dividing it by your expected query volume.
-
-While you get to decide on your query budget, there is no guarantee that an Indexer will be willing to serve queries at that price. If a Gateway can match you to an Indexer willing to serve a query at, or lower than, the price you are willing to pay, you will pay the delta/difference of your budget **and** their price. As a consequence, a lower query price reduces the pool of Indexers available to you, which may affect the quality of service you receive. It's beneficial to have high query fees, as that may attract curation and big-name Indexers to your subgraph.
-
-Remember that it's a dynamic and growing market, but how you interact with it is in your control. There is no maximum or minimum price specified in the protocol or the Gateways. For example, you can look at the price paid by a few of the dapps on the network (on a per-week basis), below. See the last column, which shows query fees in GRT.
-
-![QueryFee](/img/QueryFee.png)
+On The Graph Network, query fees have to be paid as a core part of the protocol's incentives. For more information on subscribing to APIs and paying the query fees, check out the billing documentation [here](/billing/).
## Additional Resources
If you're still confused, fear not! Check out the following resources or watch our video guide on upgrading subgraphs to the decentralized network below:
-
+
- [The Graph Network Contracts](https://github.com/graphprotocol/contracts)
- [Curation Contract](https://github.com/graphprotocol/contracts/blob/dev/contracts/curation/Curation.sol) - the underlying contract that the GNS wraps around
diff --git a/website/pages/ar/deploying/deploying-a-subgraph-to-hosted.mdx b/website/pages/ar/deploying/deploying-a-subgraph-to-hosted.mdx
index d73262114e2e..327809be6460 100644
--- a/website/pages/ar/deploying/deploying-a-subgraph-to-hosted.mdx
+++ b/website/pages/ar/deploying/deploying-a-subgraph-to-hosted.mdx
@@ -2,21 +2,21 @@
title: Deploying a Subgraph to the Hosted Service
---
-> If a network is not supported on the Hosted Service, you can run your own [graph-node](https://github.com/graphprotocol/graph-node) to index it.
+> Hosted service endpoints will no longer be available after June 12th 2024. [Learn more](/sunrise).
-This page explains how to deploy a subgraph to the Hosted Service. To deploy a subgraph you need to first install the [Graph CLI](https://github.com/graphprotocol/graph-cli). If you have not created a subgraph already, see [creating a subgraph](/developing/creating-a-subgraph).
+This page explains how to deploy a subgraph to the hosted service. To deploy a subgraph you need to first install the [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli). If you have not created a subgraph already, see [creating a subgraph](/developing/creating-a-subgraph).
-## Create a Hosted Service account
+## Create a hosted service account
-Before using the Hosted Service, create an account in our Hosted Service. You will need a [Github](https://github.com/) account for that; if you don't have one, you need to create that first. Then, navigate to the [Hosted Service](https://thegraph.com/hosted-service/), click on the _'Sign up with Github'_ button, and complete Github's authorization flow.
+Before using the hosted service, create an account. You will need a [Github](https://github.com/) account for that; if you don't have one, create it first. Then, navigate to the [hosted service](https://thegraph.com/hosted-service/), click on the _'Sign up with Github'_ button, and complete Github's authorization flow.
## Store the Access Token
After creating an account, navigate to your [dashboard](https://thegraph.com/hosted-service/dashboard). Copy the access token displayed on the dashboard and run `graph auth --product hosted-service `. This will store the access token on your computer. You only need to do this once, or if you ever regenerate the access token.
-## Create a Subgraph on the Hosted Service
+## Create a Subgraph on the hosted service
-قبل نشر الـ subgraph ، تحتاج إلى إنشائه في Graph Explorer. انتقل إلى [لوحة القيادة](https://thegraph.com/hosted-service/dashboard) وانقر على _'Add Subgraph'_ واملأ المعلومات أدناه حسب الحاجة:
+Before deploying the subgraph, you need to create it in Graph Explorer. Go to the [dashboard](https://thegraph.com/hosted-service/dashboard), click on the _Add Subgraph_ button, and fill in the information below as appropriate:
**Image** - Select an image to be used as a preview image and thumbnail for the subgraph.
@@ -30,17 +30,17 @@ After creating an account, navigate to your [dashboard](https://thegraph.com/hos
**GitHub URL** - Link to the subgraph repository on GitHub.
-**Hide** - Switching this on hides the subgraph in the Graph Explorer.
+**Hide** - Switching this on hides the subgraph in Graph Explorer.
-After saving the new subgraph, you are shown a screen with help on how to install the Graph CLI, how to generate the scaffolding for a new subgraph, and how to deploy your subgraph. The first two steps were covered in the [Defining a Subgraph section](/developing/defining-a-subgraph).
+After saving the new subgraph, you are shown a screen with help on how to install the Graph CLI, how to generate the scaffolding for a new subgraph, and how to deploy your subgraph. The first two steps were covered in the [Creating a Subgraph section](/developing/creating-a-subgraph/).
-## Deploy a Subgraph on the Hosted Service
+## Deploy a Subgraph on the hosted service
-Deploying your subgraph will upload the subgraph files that you've built with `yarn build` to IPFS and tell the Graph Explorer to start indexing your subgraph using these files.
+Deploying your subgraph will upload the subgraph files that you've built with `yarn build` to IPFS and tell Graph Explorer to start indexing your subgraph using these files.
You deploy the subgraph by running `yarn deploy`
-After deploying the subgraph, the Graph Explorer will switch to showing the synchronization status of your subgraph. Depending on the amount of data and the number of events that need to be extracted from historical blocks, starting with the genesis block, syncing can take from a few minutes to several hours.
+After deploying the subgraph, Graph Explorer will switch to showing the synchronization status of your subgraph. Depending on the amount of data and the number of events that need to be extracted from historical blocks, starting with the genesis block, syncing can take from a few minutes to several hours.
The subgraph status switches to `Synced` once the Graph Node has extracted all data from historical blocks. The Graph Node will continue inspecting blocks for your subgraph as these blocks are mined.
@@ -100,7 +100,7 @@ If you don't have a `networks.json` file, you'll need to manually create one wit
**Note:** You don't have to specify any of the `templates` (if you have any) in the config file, only the `dataSources`. If there are any `templates` declared in the `subgraph.yaml` file, their network will be automatically updated to the one specified with the `--network` option.
-Now, let's assume you want to be able to deploy your subgraph to the `mainnet` and `goerli` networks, and this is your `subgraph.yaml`:
+Now, let's assume you want to be able to deploy your subgraph to the `mainnet` and `sepolia` networks, and this is your `subgraph.yaml`:
```yaml
# ...
@@ -124,7 +124,7 @@ This is what your networks config file should look like:
"address": "0x123..."
}
},
- "goerli": {
+ "sepolia": {
"Gravity": {
"address": "0xabc..."
}
@@ -136,20 +136,20 @@ Now we can run one of the following commands:
```sh
# Using default networks.json file
-yarn build --network goerli
+yarn build --network sepolia
# Using custom named file
-yarn build --network goerli --network-file path/to/config
+yarn build --network sepolia --network-file path/to/config
```
-The `build` command will update your `subgraph.yaml` with the `goerli` configuration and then re-compile the subgraph. Your `subgraph.yaml` file now should look like this:
+The `build` command will update your `subgraph.yaml` with the `sepolia` configuration and then re-compile the subgraph. Your `subgraph.yaml` file now should look like this:
```yaml
# ...
dataSources:
- kind: ethereum/contract
name: Gravity
- network: goerli
+ network: sepolia
source:
address: '0xabc...'
abi: Gravity
@@ -163,17 +163,17 @@ Now you are ready to `yarn deploy`.
```sh
# Using default networks.json file
-yarn deploy --network goerli
+yarn deploy --network sepolia
# Using custom named file
-yarn deploy --network goerli --network-file path/to/config
+yarn deploy --network sepolia --network-file path/to/config
```
### Using subgraph.yaml template
One solution for older graph-cli versions that allows you to parameterize aspects like contract addresses is to generate parts of the manifest using a templating system like [Mustache](https://mustache.github.io/) or [Handlebars](https://handlebarsjs.com/).
-To illustrate this approach, let's assume a subgraph should be deployed to mainnet and Goerli using different contract addresses. You could then define two config files providing the addresses for each network:
+To illustrate this approach, let's assume a subgraph should be deployed to mainnet and Sepolia using different contract addresses. You could then define two config files providing the addresses for each network:
```json
{
@@ -186,7 +186,7 @@ and
```json
{
- "network": "goerli",
+ "network": "sepolia",
"address": "0xabc..."
}
```
@@ -216,7 +216,7 @@ In order to generate a manifest to either network, you could add two additional
"scripts": {
...
"prepare:mainnet": "mustache config/mainnet.json subgraph.template.yaml > subgraph.yaml",
- "prepare:goerli": "mustache config/goerli.json subgraph.template.yaml > subgraph.yaml"
+ "prepare:sepolia": "mustache config/sepolia.json subgraph.template.yaml > subgraph.yaml"
},
"devDependencies": {
...
@@ -225,14 +225,14 @@ In order to generate a manifest to either network, you could add two additional
}
```
-To deploy this subgraph for mainnet or Goerli you would now simply run one of the two following commands:
+To deploy this subgraph for mainnet or Sepolia you would now simply run one of the two following commands:
```sh
# Mainnet:
yarn prepare:mainnet && yarn deploy
-# Goerli:
-yarn prepare:goerli && yarn deploy
+# Sepolia:
+yarn prepare:sepolia && yarn deploy
```
A working example of this can be found [here](https://github.com/graphprotocol/example-subgraph/tree/371232cf68e6d814facf5e5413ad0fef65144759).
@@ -243,7 +243,7 @@ A working example of this can be found [here](https://github.com/graphprotocol/e
If a subgraph syncs successfully, that is a good sign that it will continue to run well forever. However, new triggers on the network might cause your subgraph to hit an untested error condition or it may start to fall behind due to performance issues or issues with the node operators.
-Graph Node exposes a graphql endpoint which you can query to check the status of your subgraph. On the Hosted Service, it is available at `https://api.thegraph.com/index-node/graphql`. On a local node, it is available on port `8030/graphql` by default. The full schema for this endpoint can be found [here](https://github.com/graphprotocol/graph-node/blob/master/server/index-node/src/schema.graphql). Here is an example query that checks the status of the current version of a subgraph:
+Graph Node exposes a GraphQL endpoint which you can query to check the status of your subgraph. On the hosted service, it is available at `https://api.thegraph.com/index-node/graphql`. On a local node, it is available on port `8030/graphql` by default. The full schema for this endpoint can be found [here](https://github.com/graphprotocol/graph-node/blob/master/server/index-node/src/schema.graphql). Here is an example query that checks the status of the current version of a subgraph:
```graphql
{
@@ -274,18 +274,22 @@ This will give you the `chainHeadBlock` which you can compare with the `latestBl
## Hosted service subgraph archive policy
-The Hosted Service is a free Graph Node Indexer. Developers can deploy subgraphs indexing a range of networks, which will be indexed, and made available to query via graphQL.
+The hosted service is a free Graph Node Indexer. Developers can deploy subgraphs indexing a range of networks, which will be indexed and made available to query via GraphQL.
-To improve the performance of the service for active subgraphs, the Hosted Service will archive subgraphs that are inactive.
+To improve the performance of the service for active subgraphs, the hosted service will archive subgraphs that are inactive.
-**A subgraph is defined as "inactive" if it was deployed to the Hosted Service more than 45 days ago, and if it has received 0 queries in the last 45 days.**
+**A subgraph is defined as "inactive" if it was deployed to the hosted service more than 45 days ago, and if it has received 0 queries in the last 45 days.**
-Developers will be notified by email if one of their subgraphs has been marked as inactive 7 days before it is removed. If they wish to "activate" their subgraph, they can do so by making a query in their subgraph's Hosted Service graphQL playground. Developers can always redeploy an archived subgraph if it is required again.
+Developers will be notified by email 7 days before one of their inactive subgraphs is removed. If they wish to "activate" their subgraph, they can do so by making a query in their subgraph's hosted service GraphQL playground. Developers can always redeploy an archived subgraph if it is required again.
## Subgraph Studio subgraph archive policy
-When a new version of a subgraph is deployed, the previous version is archived (deleted from the graph-node DB). This only happens if the previous version is not published to The Graph's decentralized network.
+A subgraph version in Studio is archived if and only if it meets the following criteria:
-When a subgraph version isn’t queried for over 45 days, that version is archived.
+- The version is not published to the network (or pending publish)
+- The version was created 45 or more days ago
+- The subgraph hasn't been queried in 30 days
+
+In addition, when a new version is deployed, if the subgraph has not been published, then the N-2 version of the subgraph is archived.
Every subgraph affected by this policy has an option to bring the version in question back.
diff --git a/website/pages/ar/deploying/deploying-a-subgraph-to-studio.mdx b/website/pages/ar/deploying/deploying-a-subgraph-to-studio.mdx
index f8771a01ea33..27e20896b559 100644
--- a/website/pages/ar/deploying/deploying-a-subgraph-to-studio.mdx
+++ b/website/pages/ar/deploying/deploying-a-subgraph-to-studio.mdx
@@ -1,19 +1,17 @@
---
-title: Deploying a Subgraph to the Subgraph Studio
+title: Deploying a Subgraph to Subgraph Studio
---
-> Learn how you can deploy non rate-limited subgraphs to Subgraph Studio [here](https://www.notion.so/edgeandnode/The-Graph-Subgraph-Studio-Non-Rate-Limited-Chain-Integration-889fe061ee6b4423a7f8e2c8070b9294).
-
-These are the steps to deploy your subgraph to the Subgraph Studio:
+These are the steps to deploy your subgraph to Subgraph Studio:
- Install The Graph CLI (with either yarn or npm)
-- Create your Subgraph in the Subgraph Studio
+- Create your Subgraph in Subgraph Studio
- Authenticate your account from the CLI
-- Deploying a Subgraph to the Subgraph Studio
+- Deploy a Subgraph to Subgraph Studio
## Installing Graph CLI
-We are using the same CLI to deploy subgraphs to our [hosted service](https://thegraph.com/hosted-service/) and to the [Subgraph Studio](https://thegraph.com/studio/). Here are the commands to install graph-cli. This can be done using npm or yarn.
+There is a CLI to deploy subgraphs to [Subgraph Studio](https://thegraph.com/studio/). Here are the commands to install `graph-cli`. This can be done using npm or yarn.
**Install with yarn:**
diff --git a/website/pages/ar/deploying/hosted-service.mdx b/website/pages/ar/deploying/hosted-service.mdx
index 2dc9334a219c..73e4e778675c 100644
--- a/website/pages/ar/deploying/hosted-service.mdx
+++ b/website/pages/ar/deploying/hosted-service.mdx
@@ -2,7 +2,7 @@
title: What is the Hosted Service?
---
-> Please note, the hosted service will begin sunsetting in 2023, but it will remain available to networks that are not supported on the decentralized network. Developers are encouraged to [upgrade their subgraphs to The Graph Network](/cookbook/upgrading-a-subgraph) as more networks are supported. Each network will have their hosted service equivalents gradually sunset to ensure developers have enough time to upgrade subgraphs to the decentralized network. Read more about the sunsetting of the hosted service [here](https://thegraph.com/blog/sunsetting-hosted-service).
+> Please note, hosted service endpoints will no longer be available after June 12th 2024 as all subgraphs will need to upgrade to The Graph Network. Please read more in the [Sunrise FAQ](/sunrise).
This section will walk you through deploying a subgraph to the [hosted service](https://thegraph.com/hosted-service/).
@@ -12,13 +12,13 @@ For a comprehensive list, see [Supported Networks](/developing/supported-network
## Create a Subgraph
-First follow the instructions [here](/developing/defining-a-subgraph) to install the Graph CLI. Create a subgraph by passing in `graph init --product hosted-service`
+First follow the instructions [here](/developing/creating-a-subgraph/#install-the-graph-cli) to install the Graph CLI. Create a subgraph by passing in `graph init --product hosted-service`
### From an Existing Contract
If you already have a smart contract deployed to your network of choice, bootstrapping a new subgraph from this contract can be a good way to get started on the hosted service.
-يمكنك استخدام هذا الأمر لإنشاء subgraph يقوم بفهرسة جميع الأحداث من عقد موجود. هذا سيحاول جلب ABI العقد من [ Etherscan ](https://etherscan.io/).
+You can use this command to create a subgraph that indexes all events from an existing contract. This will attempt to fetch the contract ABI from the block explorer.
```sh
graph init \
@@ -27,7 +27,7 @@ graph init \
/ []
```
-بالإضافة إلى ذلك ، يمكنك استخدام الوسيطات (arguments) الاختيارية التالية. وإذا تعذر جلب ABI من Etherscan ، فإنه يعود إلى طلب مسار ملف محلي. إذا كان في الأمر أية وسيطات اختيارية مفقودة ، فسيأخذك عبر نموذج تفاعلي.
+Additionally, you can use the following optional arguments. If the ABI cannot be fetched from the block explorer, it falls back to requesting a local file path. If any optional arguments are missing from the command, it takes you through an interactive form.
```sh
--network \
@@ -59,4 +59,4 @@ graph init \
## Supported Networks on the hosted service
-You can find the list of the supported networks [Here](/developing/supported-networks).
+You can find the list of the supported networks [here](/developing/supported-networks).
diff --git a/website/pages/ar/deploying/subgraph-studio-faqs.mdx b/website/pages/ar/deploying/subgraph-studio-faqs.mdx
index 8cbd72bcefd4..74c0228e4093 100644
--- a/website/pages/ar/deploying/subgraph-studio-faqs.mdx
+++ b/website/pages/ar/deploying/subgraph-studio-faqs.mdx
@@ -8,7 +8,7 @@ title: الأسئلة الشائعة حول الفرعيةرسم بياني اس
## 2. How do I create an API Key?
-To create an API, navigate to the Subgraph Studio and connect your wallet. You will be able to click the API keys tab at the top. There, you will be able to create an API key.
+To create an API, navigate to Subgraph Studio and connect your wallet. You will be able to click the API keys tab at the top. There, you will be able to create an API key.
## 3. Can I create multiple API Keys?
@@ -20,12 +20,12 @@ After creating an API Key, in the Security section, you can define the domains t
## 5. Can I transfer my subgraph to another owner?
-نعم ، الـ subgraphs التي تم نشرها على Mainnet يمكن نقلها إلى محفظة جديدة أو إلى Multisig. يمكنك القيام بذلك عن طريق النقر فوق النقاط الثلاث الموجودة بجوار زر "Publish" في صفحة تفاصيل الـ subgraph واختيار "Transfer ownership".
+Yes, subgraphs that have been published to Arbitrum One can be transferred to a new wallet or a Multisig. You can do so by clicking the three dots next to the 'Publish' button on the subgraph's details page and selecting 'Transfer ownership'.
Note that you will no longer be able to see or edit the subgraph in Studio once it has been transferred.
## 6. How do I find query URLs for subgraphs if I’m not the developer of the subgraph I want to use?
-يمكنك العثور على URL الاستعلام(query URL) لكل subgraph في قسم Subgraph Details في The Graph Explorer. عند النقر فوق الزر "Query" ، سيتم توجيهك إلى نافذة حيث يمكنك عرض URL الاستعلام لـ subgraph الذي تهتم به. ويمكنك بعد ذلك استبدال `` بمفتاح API الذي ترغب في الاستفادة منه في Subgraph Studio.
+You can find the query URL of each subgraph in the Subgraph Details section of Graph Explorer. When you click on the “Query” button, you will be directed to a pane wherein you can view the query URL of the subgraph you’re interested in. You can then replace the `` placeholder with the API key you wish to leverage in Subgraph Studio.
Remember that you can create an API key and query any subgraph published to the network, even if you built the subgraph yourself. Queries via the new API key are paid queries like any other queries on the network.
diff --git a/website/pages/ar/deploying/subgraph-studio.mdx b/website/pages/ar/deploying/subgraph-studio.mdx
index 020b33dfeb62..e88c2912787b 100644
--- a/website/pages/ar/deploying/subgraph-studio.mdx
+++ b/website/pages/ar/deploying/subgraph-studio.mdx
@@ -1,12 +1,12 @@
---
-title: كيفية استخدام Subgraph Studio
+title: How to Use Subgraph Studio
---
Welcome to your new launchpad 👩🏽🚀
-The Subgraph Studio is your place to build and create subgraphs, add metadata, and publish them to the new decentralized Explorer (more on that [here](/network/explorer)).
+Subgraph Studio is your place to build and create subgraphs, add metadata, and publish them to the new decentralized Explorer (more on that [here](/network/explorer)).
-ما يمكنك القيام به في Subgraph Studio:
+What you can do in Subgraph Studio:
- Create a subgraph through the Studio UI
- Deploy a subgraph using the CLI
@@ -15,7 +15,7 @@ The Subgraph Studio is your place to build and create subgraphs, add metadata, a
- Integrate it into the platform using the query URL
- Create and manage your API keys for specific subgraphs
-Here in the Subgraph Studio, you have full control over your subgraphs. Not only can you test your subgraphs before you publish them, but you can also restrict your API keys to specific domains and only allow certain Indexers to query from their API keys.
+Here in Subgraph Studio, you have full control over your subgraphs. Not only can you test your subgraphs before you publish them, but you can also restrict your API keys to specific domains and only allow certain Indexers to query from their API keys.
Querying subgraphs generates query fees, used to reward [Indexers](/network/indexing) on the Graph network. If you’re a dapp developer or subgraph developer, the Studio will empower you to build better subgraphs to power your or your community’s queries. The Studio comprises 5 main parts:
@@ -27,7 +27,7 @@ Querying subgraphs generates query fees, used to reward [Indexers](/network/inde
## How to Set Up Your Account
-1. سجّل الدخول باستخدام محفظتك - يمكنك القيام بذلك عبر MetaMask أو WalletConnect
+1. Sign in with your wallet - you can do this via MetaMask, WalletConnect, Coinbase Wallet or Safe.
1. Once you sign in, you will see your unique deploy key on your account home page. This will allow you to either publish your subgraphs or manage your API keys + billing. You will have a unique deploy key that can be re-generated if you think it has been compromised.
## How to Create a Subgraph in Subgraph Studio
@@ -36,7 +36,7 @@ Querying subgraphs generates query fees, used to reward [Indexers](/network/inde
## Subgraph Compatibility with The Graph Network
-The Graph Network is not yet able to support all of the data-sources & features available on the Hosted Service. In order to be supported by Indexers on the network, subgraphs must:
+In order to be supported by Indexers on The Graph Network, subgraphs must:
- Index a [supported network](/developing/supported-networks)
- Must not use any of the following features:
@@ -50,7 +50,7 @@ The Graph Network is not yet able to support all of the data-sources & features
![Subgraph Lifecycle](/img/subgraph-lifecycle.png)
-After you have created your subgraph, you will be able to deploy it using the [CLI](https://github.com/graphprotocol/graph-cli), or command-line interface. Deploying a subgraph with the CLI will push the subgraph to the Studio where you’ll be able to test subgraphs using the playground. This will eventually allow you to publish to the Graph Network. For more information on CLI setup, [check this out](/developing/defining-a-subgraph#install-the-graph-cli) (pst, make sure you have your deploy key on hand). Remember, deploying is **not the same as** publishing. When you deploy a subgraph, you just push it to the Studio where you’re able to test it. Versus, when you publish a subgraph, you are publishing it on-chain.
+After you have created your subgraph, you will be able to deploy it using the [CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli), or command-line interface. Deploying a subgraph with the CLI will push the subgraph to the Studio where you’ll be able to test subgraphs using the playground. This will eventually allow you to publish to the Graph Network. For more information on CLI setup, [check this out](/developing/defining-a-subgraph#install-the-graph-cli) (psst, make sure you have your deploy key on hand). Remember, deploying is **not the same as** publishing. When you deploy a subgraph, you just push it to the Studio where you’re able to test it. In contrast, when you publish a subgraph, you are publishing it on-chain.
## Testing Your Subgraph in Subgraph Studio
@@ -60,13 +60,13 @@ After you have created your subgraph, you will be able to deploy it using the [C
You’ve made it this far - congrats!
-In order to publish your subgraph successfully, you’ll need to go through the following steps outlined in this [blog](https://thegraph.com/blog/building-with-subgraph-studio).
+In order to publish your subgraph successfully, you’ll need to go through the following steps outlined in this [section](/publishing/publishing-a-subgraph/).
Check out the video overview below as well:
-
+
-Remember, while you’re going through your publishing flow, you’ll be able to push to either mainnet or Goerli. If you’re a first-time subgraph developer, we highly suggest you start with publishing to Goerli, which is free to do. This will allow you to see how the subgraph will work in The Graph Explorer and will allow you to test curation elements.
+Remember, while you’re going through your publishing flow, you’ll be able to push to either Arbitrum One or Arbitrum Sepolia. If you’re a first-time subgraph developer, we highly suggest you start with publishing to Arbitrum Sepolia, which is free to do. This will allow you to see how the subgraph will work in Graph Explorer and will allow you to test curation elements.
Indexers need to submit mandatory Proof of Indexing records as of a specific block hash. Because publishing a subgraph is an action taken on-chain, remember that the transaction can take up to a few minutes to go through. Any address you use to publish the contract will be the only one able to publish future versions. Choose wisely!
@@ -76,14 +76,14 @@ Indexers need to submit mandatory Proof of Indexing records as of a specific blo
## Updating Your Subgraph Version Using the CLI
-Developers might want to update their subgraph, for a variety of reasons. When this is the case, you can deploy a new version of your subgraph to the Studio using the CLI (it will only be private at this point) and if you are happy with it, you can publish this new deployment to The Graph Explorer. This will create a new version of your subgraph that curators can start signaling on and Indexers will be able to index this new version.
+Developers might want to update their subgraph, for a variety of reasons. When this is the case, you can deploy a new version of your subgraph to the Studio using the CLI (it will only be private at this point) and if you are happy with it, you can publish this new deployment to Graph Explorer. This will create a new version of your subgraph that curators can start signaling on and Indexers will be able to index this new version.
-Up until recently, developers were forced to deploy and publish a new version of their subgraph to the Explorer to update the metadata of their subgraphs. Now, developers can update the metadata of their subgraphs **without having to publish a new version**. Developers can update their subgraph details in the Studio (under the profile picture, name, description, etc) by checking an option called **Update Details** in The Graph Explorer. If this is checked, an on-chain transaction will be generated that updates subgraph details in the Explorer without having to publish a new version with a new deployment.
+Up until recently, developers were forced to deploy and publish a new version of their subgraph to the Explorer to update the metadata of their subgraphs. Now, developers can update the metadata of their subgraphs **without having to publish a new version**. Developers can update their subgraph details in the Studio (under the profile picture, name, description, etc) by checking an option called **Update Details** in Graph Explorer. If this is checked, an on-chain transaction will be generated that updates subgraph details in the Explorer without having to publish a new version with a new deployment.
Please note that there are costs associated with publishing a new version of a subgraph to the network. In addition to the transaction fees, developers must also fund a part of the curation tax on the auto-migrating signal. You cannot publish a new version of your subgraph if curators have not signaled on it. For more information on the risks of curation, please read more [here](/network/curating).
### Automatic Archiving of Subgraph Versions
-كلما قمت بنشر إصدار subgraph جديد في Subgraph Studio ، سيتم أرشفة الإصدار السابق. لن تتم فهرسة / مزامنة الإصدارات المؤرشفة ، وبالتالي لا يمكن الاستعلام عنها. يمكنك إلغاء أرشفة نسخة مؤرشفة من الـ subgraph الخاص بك في Studio UI. يرجى ملاحظة أن الإصدارات السابقة من الـ subgraphs غير المنشورة (non-publishe) التي تم نشرها (deployed) في Studio ستتم أرشفتها تلقائيا.
+Whenever you deploy a new subgraph version in Subgraph Studio, the previous version will be archived. Archived versions won't be indexed/synced and therefore cannot be queried. You can unarchive an archived version of your subgraph in the Studio UI. Please note that previous versions of non-published subgraphs deployed to the Studio will be automatically archived.
![Subgraph Studio - Unarchive](/img/Unarchive.png)
diff --git a/website/pages/ar/developing/creating-a-subgraph.mdx b/website/pages/ar/developing/creating-a-subgraph.mdx
index 96986ffd3407..3aa60c79105a 100644
--- a/website/pages/ar/developing/creating-a-subgraph.mdx
+++ b/website/pages/ar/developing/creating-a-subgraph.mdx
@@ -14,9 +14,9 @@ A subgraph extracts data from a blockchain, processing it and storing it so that
- `AssemblyScript Mappings`: [AssemblyScript](https://github.com/AssemblyScript/assemblyscript) code that translates event data into the entities defined in your schema (e.g. `mapping.ts` in this tutorial)
-> In order to use your subgraph on The Graph's decentralized network, you will need to [create an API key](/deploying/subgraph-studio-faqs/#2-how-do-i-create-an-api-key). It is recommended that you [add signal](/network/curating/#how-to-signal) to your subgraph with at least [10,000 GRT](/network-transition-faq/#how-can-i-ensure-that-my-subgraph-will-be-picked-up-by-indexer-on-the-graph-network).
+> In order to use your subgraph on The Graph's decentralized network, you will need to [create an API key](/deploying/subgraph-studio-faqs/#2-how-do-i-create-an-api-key). It is recommended that you [add signal](/network/curating/#how-to-signal) to your subgraph with at least [3,000 GRT](/sunrise/#how-can-i-ensure-high-quality-of-service-and-redundancy-for-subgraphs-on-the-graph-network).
-قبل الخوض في التفاصيل حول محتويات ملف manifest ، تحتاج إلى تثبيت [Graph CLI](https://github.com/graphprotocol/graph-cli) والذي سوف تحتاجه لبناء ونشر Subgraph.
+Before you go into detail about the contents of the manifest file, you need to install the [Graph CLI](https://github.com/graphprotocol/graph-tooling) which you will need to build and deploy a subgraph.
## Install the Graph CLI
@@ -36,7 +36,7 @@ yarn global add @graphprotocol/graph-cli
npm install -g @graphprotocol/graph-cli
```
-Once installed, the `graph init` command can be used to set up a new subgraph project, either from an existing contract or from an example subgraph. This command can be used to create a subgraph on the Subgraph Studio by passing in `graph init --product subgraph-studio`. If you already have a smart contract deployed to your preferred network, bootstrapping a new subgraph from that contract can be a good way to get started.
+Once installed, the `graph init` command can be used to set up a new subgraph project, either from an existing contract or from an example subgraph. This command can be used to create a subgraph in Subgraph Studio by passing in `graph init --product subgraph-studio`. If you already have a smart contract deployed to your preferred network, bootstrapping a new subgraph from that contract can be a good way to get started.
## From an Existing Contract
@@ -61,7 +61,7 @@ graph init \
graph init --studio
```
-يعتمد مثال الـ subgraph على عقد Gravity بواسطة Dani Grant الذي يدير avatars للمستخدم ويصدر أحداث `NewGravatar` أو `UpdateGravatar` كلما تم إنشاء avatars أو تحديثها. يعالج الـ subgraph هذه الأحداث عن طريق كتابة كيانات `Gravatar` إلى مخزن Graph Node والتأكد من تحديثها وفقا للأحداث. ستنتقل الأقسام التالية إلى الملفات التي تشكل الـ subgraph manifest لهذا المثال.
+The [example subgraph](https://github.com/graphprotocol/example-subgraph) is based on the Gravity contract by Dani Grant that manages user avatars and emits `NewGravatar` or `UpdateGravatar` events whenever avatars are created or updated. The subgraph handles these events by writing `Gravatar` entities to the Graph Node store and ensuring these are updated according to the events. The following sections will go over the files that make up the subgraph manifest for this example.
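+
+As a quick orientation, the mapping side of that flow could look roughly like the sketch below. It is a minimal illustration, assuming the `NewGravatar` event class and `Gravatar` entity class produced by `graph codegen` for this example, with a `String` id; treat the import paths as assumptions.
+
+```typescript
+import { NewGravatar } from '../generated/Gravity/Gravity' // assumed generated event class
+import { Gravatar } from '../generated/schema' // assumed generated entity class
+
+export function handleNewGravatar(event: NewGravatar): void {
+  // Create a Gravatar entity keyed by the gravatar id emitted in the event
+  let gravatar = new Gravatar(event.params.id.toHex())
+  gravatar.owner = event.params.owner
+  gravatar.displayName = event.params.displayName
+  gravatar.imageUrl = event.params.imageUrl
+  gravatar.save()
+}
+```
+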
## Add New dataSources To An Existing Subgraph
@@ -101,6 +101,8 @@ description: Gravatar for Ethereum
repository: https://github.com/graphprotocol/graph-tooling
schema:
file: ./schema.graphql
+indexerHints:
+ prune: auto
dataSources:
- kind: ethereum/contract
name: Gravity
@@ -144,12 +146,16 @@ dataSources:
The important entries to update for the manifest are:
-- `description`: a human-readable description of what the subgraph is. This description is displayed by the Graph Explorer when the subgraph is deployed to the hosted service.
+- `specVersion`: a semver version that identifies the supported manifest structure and functionality for the subgraph. The latest version is `1.2.0`. See [specVersion releases](#specversion-releases) section to see more details on features & releases.
-- `repository`: the URL of the repository where the subgraph manifest can be found. This is also displayed by The Graph Explorer.
+- `description`: a human-readable description of what the subgraph is. This description is displayed in Graph Explorer when the subgraph is deployed to Subgraph Studio.
+
+- `repository`: the URL of the repository where the subgraph manifest can be found. This is also displayed in Graph Explorer.
- `features`: a list of all used [feature](#experimental-features) names.
+- `indexerHints.prune`: Defines the retention of historical block data for a subgraph. See [prune](#prune) in [indexerHints](#indexer-hints) section.
+
- `dataSources.source`: the address of the smart contract the subgraph sources, and the ABI of the smart contract to use. The address is optional; omitting it allows to index matching events from all contracts.
- `dataSources.source.startBlock`: the optional number of the block that the data source starts indexing from. In most cases, we suggest using the block in which the contract was created.
@@ -170,6 +176,8 @@ dataSources:
A single subgraph can index data from multiple smart contracts. Add an entry for each contract from which data needs to be indexed to the `dataSources` array.
+### Order of Triggering Handlers
+
The triggers for a data source within a block are ordered using the following process:

1. Event and call triggers are first ordered by transaction index within the block.
@@ -178,6 +186,192 @@ dataSources:
These ordering rules are subject to change.
+> **Note:** When new [dynamic data sources](#data-source-templates-for-dynamically-created-contracts) are created, the handlers defined for dynamic data sources will only start processing after all existing data source handlers are processed, and will repeat in the same sequence whenever triggered.
+
+### Indexed Argument Filters / Topic Filters
+
+> **Requires**: [SpecVersion](#specversion-releases) >= `1.2.0`
+
+Topic filters, also known as indexed argument filters, are a powerful feature in subgraphs that allow users to precisely filter blockchain events based on the values of their indexed arguments.
+
+- These filters help isolate specific events of interest from the vast stream of events on the blockchain, allowing subgraphs to operate more efficiently by focusing only on relevant data.
+
+- This is useful for creating personal subgraphs that track specific addresses and their interactions with various smart contracts on the blockchain.
+
+#### How Topic Filters Work
+
+When a smart contract emits an event, any arguments that are marked as indexed can be used as filters in a subgraph's manifest. This allows the subgraph to listen selectively for events that match these indexed arguments.
+
+- The event's first indexed argument corresponds to `topic1`, the second to `topic2`, and so on, up to `topic3`, since the Ethereum Virtual Machine (EVM) allows up to three indexed arguments per event.
+
+```solidity
+// SPDX-License-Identifier: MIT
+pragma solidity ^0.8.0;
+
+contract Token {
+ // Event declaration with indexed parameters for addresses
+ event Transfer(address indexed from, address indexed to, uint256 value);
+
+ // Function to simulate transferring tokens
+ function transfer(address to, uint256 value) public {
+ // Emitting the Transfer event with from, to, and value
+ emit Transfer(msg.sender, to, value);
+ }
+}
+```
+
+In this example:
+
+- The `Transfer` event is used to log transactions of tokens between addresses.
+- The `from` and `to` parameters are indexed, allowing event listeners to filter and monitor transfers involving specific addresses.
+- The `transfer` function is a simple representation of a token transfer action, emitting the Transfer event whenever it is called.
+
+#### Configuration in Subgraphs
+
+Topic filters are defined directly within the event handler configuration in the subgraph manifest. Here is how they are configured:
+
+```yaml
+eventHandlers:
+ - event: SomeEvent(indexed uint256, indexed address, indexed uint256)
+ handler: handleSomeEvent
+ topic1: ['0xValue1', '0xValue2']
+ topic2: ['0xAddress1', '0xAddress2']
+ topic3: ['0xValue3']
+```
+
+In this setup:
+
+- `topic1` corresponds to the first indexed argument of the event, `topic2` to the second, and `topic3` to the third.
+- Each topic can have one or more values, and an event is only processed if it matches one of the values in each specified topic.
+
+##### Filter Logic
+
+- Within a Single Topic: The logic functions as an OR condition. The event will be processed if it matches any one of the listed values in a given topic.
+- Between Different Topics: The logic functions as an AND condition. An event must satisfy all specified conditions across different topics to trigger the associated handler.
+
+#### Example 1: Tracking Direct Transfers from Address A to Address B
+
+```yaml
+eventHandlers:
+ - event: Transfer(indexed address,indexed address,uint256)
+ handler: handleDirectedTransfer
+ topic1: ['0xAddressA'] # Sender Address
+ topic2: ['0xAddressB'] # Receiver Address
+```
+
+In this configuration:
+
+- `topic1` is configured to filter `Transfer` events where `0xAddressA` is the sender.
+- `topic2` is configured to filter `Transfer` events where `0xAddressB` is the receiver.
+- The subgraph will only index transactions that occur directly from `0xAddressA` to `0xAddressB`.
+
+#### Example 2: Tracking Transactions in Either Direction Between Two or More Addresses
+
+```yaml
+eventHandlers:
+ - event: Transfer(indexed address,indexed address,uint256)
+ handler: handleTransferToOrFrom
+ topic1: ['0xAddressA', '0xAddressB', '0xAddressC'] # Sender Address
+ topic2: ['0xAddressB', '0xAddressC'] # Receiver Address
+```
+
+In this configuration:
+
+- `topic1` is configured to filter `Transfer` events where the sender is `0xAddressA`, `0xAddressB`, or `0xAddressC`.
+- `topic2` is configured to filter `Transfer` events where the receiver is `0xAddressB` or `0xAddressC`.
+- The subgraph will index transactions that occur in either direction between the specified addresses, allowing for comprehensive monitoring of interactions involving all of them.
+
+## Declared eth_call
+
+> **Requires**: [SpecVersion](#specversion-releases) >= `1.2.0`. Currently, `eth_calls` can only be declared for event handlers.
+
+Declarative `eth_calls` are a valuable subgraph feature that allows `eth_calls` to be executed ahead of time, enabling `graph-node` to execute them in parallel.
+
+This feature does the following:
+
+- Significantly improves the performance of fetching data from the Ethereum blockchain by reducing the total time for multiple calls and optimizing the subgraph's overall efficiency.
+- Allows faster data fetching, resulting in quicker query responses and a better user experience.
+- Reduces wait times for applications that need to aggregate data from multiple Ethereum calls, making the data retrieval process more efficient.
+
+### Key Concepts
+
+- Declarative `eth_calls`: Ethereum calls that are defined to be executed in parallel rather than sequentially.
+- Parallel Execution: Instead of waiting for one call to finish before starting the next, multiple calls can be initiated simultaneously.
+- Time Efficiency: The total time taken for all the calls changes from the sum of the individual call times (sequential) to the time taken by the longest call (parallel).
+
+### Scenario without Declarative `eth_calls`
+
+Imagine you have a subgraph that needs to make three Ethereum calls to fetch data about a user's transactions, balance, and token holdings.
+
+Traditionally, these calls might be made sequentially:
+
+1. Call 1 (Transactions): Takes 3 seconds
+2. Call 2 (Balance): Takes 2 seconds
+3. Call 3 (Token Holdings): Takes 4 seconds
+
+Total time taken = 3 + 2 + 4 = 9 seconds
+
+### Scenario with Declarative `eth_calls`
+
+With this feature, you can declare these calls to be executed in parallel:
+
+1. Call 1 (Transactions): Takes 3 seconds
+2. Call 2 (Balance): Takes 2 seconds
+3. Call 3 (Token Holdings): Takes 4 seconds
+
+Since these calls are executed in parallel, the total time taken is equal to the time taken by the longest call.
+
+Total time taken = max (3, 2, 4) = 4 seconds
+
+### How it Works
+
+1. Declarative Definition: In the subgraph manifest, you declare the Ethereum calls in a way that indicates they can be executed in parallel.
+2. Parallel Execution Engine: The Graph Node's execution engine recognizes these declarations and runs the calls simultaneously.
+3. Result Aggregation: Once all calls are complete, the results are aggregated and used by the subgraph for further processing.
+
+### Example Configuration in Subgraph Manifest
+
+Declared `eth_calls` can access the `event.address` of the underlying event as well as all the `event.params`.
+
+`Subgraph.yaml` using `event.address`:
+
+```yaml
+eventHandlers:
+  - event: Swap(indexed address,indexed address,int256,int256,uint160,uint128,int24)
+    handler: handleSwap
+    calls:
+      global0X128: Pool[event.address].feeGrowthGlobal0X128()
+      global1X128: Pool[event.address].feeGrowthGlobal1X128()
+```
+
+Details for the example above:
+
+- `global0X128` is the declared `eth_call`.
+- The text before the colon (`global0X128`) is the label for this `eth_call`, which is used when logging errors.
+- The text after the colon (`Pool[event.address].feeGrowthGlobal0X128()`) is the actual `eth_call` that will be executed, in the form `Contract[address].function(arguments)`.
+- The `address` and `arguments` can be replaced with variables that will be available when the handler is executed.
+
+`Subgraph.yaml` using `event.params`
+
+```yaml
+calls:
+ - ERC20DecimalsToken0: ERC20[event.params.token0].decimals()
+```
+
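+On the mapping side, a declared call is not passed to the handler as an extra argument; the handler reads the value through the usual generated contract binding, and the node can answer that read from the call it has already executed. The following is a minimal, non-authoritative sketch, assuming a generated `Pool` binding and `Swap` event class (the import path is an assumption):
+
+```typescript
+import { Pool, Swap } from '../generated/Pool/Pool' // assumed generated binding and event class
+
+export function handleSwap(event: Swap): void {
+  // Same call expression as declared in the manifest; with the declaration in
+  // place, graph-node has typically executed it ahead of time.
+  let pool = Pool.bind(event.address)
+  let global0X128 = pool.feeGrowthGlobal0X128()
+  // ... use global0X128 to update entities
+}
+```
+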
+### SpecVersion Releases
+
+| Version | Release notes |
+|:-------:| ---------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
+| 1.2.0 | Added support for [Indexed Argument Filtering](#indexed-argument-filters--topic-filters) & declared `eth_call` |
+| 1.1.0 | Supports [Timeseries & Aggregations](#timeseries-and-aggregations). Added support for type `Int8` for `id`. |
+| 1.0.0 | Supports [`indexerHints`](/developing/creating-a-subgraph/#indexer-hints) feature to prune subgraphs |
+| 0.0.9 | Supports `endBlock` feature |
+| 0.0.8 | Added support for polling [Block Handlers](/developing/creating-a-subgraph/#polling-filter) and [Initialisation Handlers](/developing/creating-a-subgraph/#once-filter). |
+| 0.0.7 | Added support for [File Data Sources](/developing/creating-a-subgraph/#file-data-sources). |
+| 0.0.6 | Supports fast [Proof of Indexing](/network/indexing/#what-is-a-proof-of-indexing-poi) calculation variant. |
+| 0.0.5 | Added support for event handlers having access to transaction receipts. |
+| 0.0.4 | Added support for managing subgraph features. |
+
### Getting the ABIs
The ABI file(s) must match your contract(s). There are several ways to obtain ABI files:
@@ -248,15 +442,16 @@ For some entity types the `id` is constructed from the id's of two other entitie
We support the following scalars in our GraphQL API:
-| النوع | الوصف |
-| --- | --- |
-| `Bytes` | مصفوفة Byte ، ممثلة كسلسلة سداسية عشرية. يشيع استخدامها في Ethereum hashes وعناوينه. |
-| `String` | Scalar for `string` values. Null characters are not supported and are automatically removed. |
-| `Boolean` | Scalar for `boolean` values. |
-| `Int` | The GraphQL spec defines `Int` to have a size of 32 bytes. |
-| `Int8` | An 8-byte signed integer, also known as a 64-bit signed integer, can store values in the range from -9,223,372,036,854,775,808 to 9,223,372,036,854,775,807. Prefer using this to represent `i64` from ethereum. |
-| `BigInt` | Large integers. Used for Ethereum's `uint32`, `int64`, `uint64`, ..., `uint256` types. Note: Everything below `uint32`, such as `int32`, `uint24` or `int8` is represented as `i32`. |
-| `BigDecimal` | `BigDecimal` High precision decimals represented as a significand and an exponent. The exponent range is from −6143 to +6144. Rounded to 34 significant digits. |
+| Type         | Description |
+| ------------ | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
+| `Bytes`      | Byte array, represented as a hexadecimal string. Commonly used for Ethereum hashes and addresses. |
+| `String` | Scalar for `string` values. Null characters are not supported and are automatically removed. |
+| `Boolean` | Scalar for `boolean` values. |
+| `Int` | The GraphQL spec defines `Int` to be a signed 32-bit integer. |
+| `Int8` | An 8-byte signed integer, also known as a 64-bit signed integer, can store values in the range from -9,223,372,036,854,775,808 to 9,223,372,036,854,775,807. Prefer using this to represent `i64` from ethereum. |
+| `BigInt` | Large integers. Used for Ethereum's `uint32`, `int64`, `uint64`, ..., `uint256` types. Note: Everything below `uint32`, such as `int32`, `uint24` or `int8` is represented as `i32`. |
+| `BigDecimal` | `BigDecimal` High precision decimals represented as a significand and an exponent. The exponent range is from −6143 to +6144. Rounded to 34 significant digits. |
+| `Timestamp` | It is an `i64` value in microseconds. Commonly used for `timestamp` fields for timeseries and aggregations. |
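+
+For orientation, these GraphQL scalars correspond to types from `@graphprotocol/graph-ts` when you work with them in mappings. A minimal sketch (the literal values are made up for illustration):
+
+```typescript
+import { BigDecimal, BigInt, Bytes } from '@graphprotocol/graph-ts'
+
+// `BigInt` for uint256-sized values, `BigDecimal` for high-precision math,
+// `Bytes` for hashes and addresses.
+let rawSupply = BigInt.fromString('1000000000000000000')
+let supply = rawSupply.toBigDecimal().div(BigDecimal.fromString('1000000000000000000'))
+let owner = Bytes.fromHexString('0xdeadbeefdeadbeefdeadbeefdeadbeefdeadbeef')
+```
+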
#### Enums
@@ -398,11 +593,11 @@ This more elaborate way of storing many-to-many relationships will result in les
#### Adding Comments to the Schema
-As per GraphQL spec, comments can be added above schema entity attributes using double quotations `""`. This is illustrated in the example below:
+As per the GraphQL spec, comments can be added above schema entity attributes using the hash symbol `#`. This is illustrated in the example below:
```graphql
type MyFirstEntity @entity {
- "unique identifier and primary key of the entity"
+ # unique identifier and primary key of the entity
id: Bytes!
address: Bytes!
}
@@ -525,13 +720,32 @@ The second handler tries to load the existing `Gravatar` from the Graph Node sto
### Recommended IDs for Creating New Entities
-Every entity has to have an `id` that is unique among all entities of the same type. An entity's `id` value is set when the entity is created. Below are some recommended `id` values to consider when creating new entities. NOTE: The value of `id` must be a `string`.
+It is highly recommended to use `Bytes` as the type for `id` fields, and only use `String` for attributes that truly contain human-readable text, like the name of a token. Below are some recommended `id` values to consider when creating new entities.
+
+- `transfer.id = event.transaction.hash`
+
+- `let id = event.transaction.hash.concatI32(event.logIndex.toI32())`
+
+- For entities that store aggregated data, e.g. daily trade volumes, the `id` usually contains the day number. Here, using `Bytes` as the `id` is beneficial. Determining the `id` would look like:
+
+```typescript
+let dayID = event.block.timestamp.toI32() / 86400
+let id = Bytes.fromI32(dayID)
+```
+
+- Convert constant addresses to `Bytes`.
+
+`const id = Bytes.fromHexString('0xdead...beef')`
+
+There is a [Graph Typescript Library](https://github.com/graphprotocol/graph-tooling/tree/main/packages/ts) which contains utilities for interacting with the Graph Node store and conveniences for handling smart contract data and entities. It can be imported into `mapping.ts` from `@graphprotocol/graph-ts`.
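+
+For illustration, a handler that follows the `Bytes`-id recommendation could look like the sketch below. The `Transfer` entity and event class, their import paths, and the assumption that the schema declares `id: Bytes!` are made up for the example:
+
+```typescript
+import { Transfer as TransferEvent } from '../generated/Token/Token' // assumed generated event class
+import { Transfer } from '../generated/schema' // assumed generated entity class with `id: Bytes!`
+
+export function handleTransfer(event: TransferEvent): void {
+  // Unique per log: the transaction hash concatenated with the log index
+  let id = event.transaction.hash.concatI32(event.logIndex.toI32())
+  let transfer = new Transfer(id)
+  transfer.from = event.params.from
+  transfer.to = event.params.to
+  transfer.value = event.params.value
+  transfer.save()
+}
+```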
-- `()event.params.id.toHex`
-- `()event.transaction.from.toHex`
-- `()event.transaction.hash.toHex() + "-" + event.logIndex.toString`
+### Handling of entities with identical IDs
-We provide the [Graph Typescript Library](https://github.com/graphprotocol/graph-ts) which contains utilities for interacting with the Graph Node store and conveniences for handling smart contract data and entities. You can use this library in your mappings by importing `@graphprotocol/graph-ts` in `mapping.ts`.
+When creating and saving a new entity, if an entity with the same ID already exists, the properties of the new entity are always preferred during the merge process. This means that the existing entity will be updated with the values from the new entity.
+
+If a null value is intentionally set for a field in the new entity with the same ID, the existing entity will be updated with the null value.
+
+If no value is set for a field in the new entity with the same ID, the field will result in null as well.
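+
+Because a later `save()` with the same ID replaces stored field values as described above, a common defensive pattern is to load the entity first and only create it when it does not exist. A minimal sketch, assuming a generated `Gravatar` entity with a `String` id:
+
+```typescript
+import { Gravatar } from '../generated/schema' // assumed generated entity class
+
+export function upsertGravatar(id: string, displayName: string): void {
+  // Load the existing entity (if any) so previously stored fields are preserved
+  let gravatar = Gravatar.load(id)
+  if (gravatar == null) {
+    gravatar = new Gravatar(id)
+  }
+  gravatar.displayName = displayName
+  gravatar.save()
+}
+```
+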
## Code Generation
@@ -573,7 +787,7 @@ In addition to this, one class is generated for each entity type in the subgraph
> **Note:** Code generation must be run again after every change to the GraphQL schema or to the ABIs included in the manifest. It must also be run at least once before building or deploying the subgraph.
-Code generation does not check your mapping code in `src/mapping.ts`. If you want to check that before trying to deploy your subgraph to the Graph Explorer, you can run `yarn build` and fix any syntax errors that the TypeScript compiler might find.
+Code generation does not check your mapping code in `src/mapping.ts`. If you want to check that before trying to deploy your subgraph to Graph Explorer, you can run `yarn build` and fix any syntax errors that the TypeScript compiler might find.
## Data Source Templates
@@ -659,7 +873,7 @@ export function handleNewExchange(event: NewExchange): void {
```
> **Note:** A new data source will only process the calls and events for the block in which it was created and all following blocks, but will not process historical data, i.e., data contained in prior blocks.
->
+>
> If prior blocks contain data relevant to the new data source, it is best to index that data by reading the current state of the contract and creating entities representing that state at the time the new data source was created.
### Data Source Context
@@ -716,11 +930,115 @@ dataSources:
```
> **Note:** The contract creation block can be looked up quickly on Etherscan:
->
+>
> 1. Search for the contract by entering its address in the search bar.
> 2. Click on the creation transaction hash in the `Contract Creator` section.
> 3. Load the transaction details page, where you will find the start block for that contract.
+## Indexer Hints
+
+The `indexerHints` setting in a subgraph's manifest provides directives for indexers on processing and managing a subgraph. It influences operational decisions across data handling, indexing strategies, and optimizations. Presently, it features the `prune` option for managing historical data retention or pruning.
+
+> This feature is available from `specVersion: 1.0.0`
+
+### Prune
+
+`indexerHints.prune`: Defines the retention of historical block data for a subgraph. Options include:
+
+1. `"never"`: No pruning of historical data; retains the entire history.
+2. `"auto"`: Retains the minimum necessary history as set by the indexer, optimizing query performance.
+3. A specific number: Sets a custom limit on the number of historical blocks to retain.
+
+```yaml
+indexerHints:
+  prune: auto
+```
+
+> The term "history" in this context of subgraphs is about storing data that reflects the old states of mutable entities.
+
+History as of a given block is required for:
+
+- [Time travel queries](/querying/graphql-api/#time-travel-queries), which enable querying the past states of these entities at specific blocks throughout the subgraph's history
+- Using the subgraph as a [graft base](/developing/creating-a-subgraph/#grafting-onto-existing-subgraphs) in another subgraph, at that block
+- Rewinding the subgraph back to that block
+
+If historical data as of the block has been pruned, the above capabilities will not be available.
+
+> Using `"auto"` is generally recommended as it maximizes query performance and is sufficient for most users who do not require access to extensive historical data.
+
+For subgraphs leveraging [time travel queries](/querying/graphql-api/#time-travel-queries), it's advisable to either set a specific number of blocks for historical data retention or use `prune: never` to keep all historical entity states. Below are examples of how to configure both options in your subgraph's settings:
+
+To retain a specific amount of historical data:
+
+```yaml
+indexerHints:
+  prune: 1000 # Replace 1000 with the desired number of blocks to retain
+```
+
+To preserve the complete history of entity states:
+
+```yaml
+indexerHints:
+  prune: never
+```
+
+You can check the earliest block (with historical state) for a given subgraph by querying the [Indexing Status API](/deploying/deploying-a-subgraph-to-hosted/#checking-subgraph-health):
+
+```graphql
+{
+ indexingStatuses(subgraphs: ["Qm..."]) {
+ subgraph
+ synced
+ health
+ chains {
+ earliestBlock {
+ number
+ }
+ latestBlock {
+ number
+ }
+ chainHeadBlock { number }
+ }
+ }
+}
+```
+
+Note that the `earliestBlock` is the earliest block with historical data, which will be more recent than the `startBlock` specified in the manifest, if the subgraph has been pruned.
+
+## Event Handlers
+
+Event handlers in a subgraph react to specific events emitted by smart contracts on the blockchain and trigger handlers defined in the subgraph's manifest. This enables subgraphs to process and store event data according to defined logic.
+
+### Defining an Event Handler
+
+An event handler is declared within a data source in the subgraph's YAML configuration. It specifies which events to listen for and the corresponding function to execute when those events are detected.
+
+```yaml
+dataSources:
+ - kind: ethereum/contract
+ name: Gravity
+ network: dev
+ source:
+ address: '0x731a10897d267e19b34503ad902d0a29173ba4b1'
+ abi: Gravity
+ mapping:
+ kind: ethereum/events
+ apiVersion: 0.0.6
+ language: wasm/assemblyscript
+ entities:
+ - Gravatar
+ - Transaction
+ abis:
+ - name: Gravity
+ file: ./abis/Gravity.json
+ eventHandlers:
+ - event: Approval(address,address,uint256)
+ handler: handleApproval
+ - event: Transfer(address,address,uint256)
+ handler: handleTransfer
+ topic1: ['0xd8dA6BF26964aF9D7eEd9e03E53415D37aA96045', '0xc8dA6BF26964aF9D7eEd9e03E53415D37aA96325'] # Optional topic filter which filters only events with the specified topic.
+```
+
## Call Handlers
While events provide an effective way to collect relevant changes to the state of a contract, many contracts avoid generating logs to optimize gas costs. In these cases, a subgraph can subscribe to calls made to the data source contract. This is achieved by defining call handlers referencing the function signature and the mapping handler that will process calls to this function. To process these calls, the mapping handler will receive an `ethereum.Call` as an argument with the typed inputs to and outputs from the call. Calls made at any depth in a transaction's call chain will trigger the mapping, allowing activity with the data source contract through proxy contracts to be captured.
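+
+As a rough sketch of the mapping side, a call handler for a `createGravatar(string,string)` function could look like the following. The `CreateGravatarCall` class, the `_displayName`/`_imageUrl` input names, and the import paths are assumptions about what `graph codegen` would produce for such an ABI:
+
+```typescript
+import { CreateGravatarCall } from '../generated/Gravity/Gravity' // assumed generated call class
+import { Gravatar } from '../generated/schema' // assumed generated entity class with a `String` id
+
+export function handleCreateGravatar(call: CreateGravatarCall): void {
+  // The call object exposes the typed inputs (and outputs) of the function call
+  let gravatar = new Gravatar(call.transaction.hash.toHex())
+  gravatar.displayName = call.inputs._displayName
+  gravatar.imageUrl = call.inputs._imageUrl
+  gravatar.save()
+}
+```
+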
@@ -906,12 +1224,11 @@ Inside the handler function, the receipt can be accessed in the `Event.receipt`
Starting from `specVersion` `0.0.4`, subgraph features must be explicitly declared in the `features` section at the top level of the manifest file, using their `camelCase` name, as listed in the table below:
-| الميزة | الاسم |
-| ----------------------------------------------------- | --------------------------------------------------- |
-| [أخطاء غير فادحة](#non-fatal-errors) | `nonFatalErrors` |
-| [البحث عن نص كامل](#defining-fulltext-search-fields) | `fullTextSearch` |
-| [تطعيم(Grafting)](#grafting-onto-existing-subgraphs) | `grafting` |
-| [IPFS على عقود Ethereum](#ipfs-on-ethereum-contracts) | `ipfsOnEthereumContracts` or `nonDeterministicIpfs` |
+| Feature                                              | Name             |
+| ---------------------------------------------------- | ---------------- |
+| [Non-fatal errors](#non-fatal-errors)                | `nonFatalErrors` |
+| [Full-text search](#defining-fulltext-search-fields) | `fullTextSearch` |
+| [Grafting](#grafting-onto-existing-subgraphs)        | `grafting`       |
For instance, if a subgraph uses the **Full-Text Search** and the **Non-fatal Errors** features, the `features` field in the manifest should be:
@@ -926,17 +1243,65 @@ dataSources: ...
Note that using a feature without declaring it will incur a **validation error** during subgraph deployment, but no errors will occur if a feature is declared but not used.
-### IPFS على عقود Ethereum
+### Timeseries and Aggregations
-A common use case for combining IPFS with Ethereum is to store data on IPFS that would be too expensive to maintain on-chain, and reference the IPFS hash in Ethereum contracts.
+Timeseries and aggregations enable your subgraph to track statistics like daily average price, hourly total transfers, etc.
-Given such IPFS hashes, subgraphs can read the corresponding files from IPFS using `ipfs.cat` and `ipfs.map`. To do this reliably, it is required that these files are pinned to an IPFS node with high availability, so that the [hosted service](https://thegraph.com/hosted-service) IPFS node can find them during indexing.
+This feature introduces two new types of subgraph entity. Timeseries entities record data points with timestamps. Aggregation entities perform pre-declared calculations on the Timeseries data points on an hourly or daily basis, then store the results for easy access via GraphQL.
+
+#### Example Schema
+
+```graphql
+type Data @entity(timeseries: true) {
+ id: Int8!
+ timestamp: Timestamp!
+ price: BigDecimal!
+}
+
+type Stats @aggregation(intervals: ["hour", "day"], source: "Data") {
+ id: Int8!
+ timestamp: Timestamp!
+ sum: BigDecimal! @aggregate(fn: "sum", arg: "price")
+}
+```
+
+### Defining Timeseries and Aggregations
+
+Timeseries entities are defined with `@entity(timeseries: true)` in schema.graphql. Every timeseries entity must have a unique ID of the int8 type, a timestamp of the Timestamp type, and include data that will be used for calculation by aggregation entities. These Timeseries entities can be saved in regular trigger handlers, and act as the “raw data” for the Aggregation entities.
+
+Aggregation entities are defined with `@aggregation` in schema.graphql. Every aggregation entity defines the source from which it will gather data (which must be a Timeseries entity), sets the intervals (e.g., hour, day), and specifies the aggregation function it will use (e.g., sum, count, min, max, first, last). Aggregation entities are automatically calculated on the basis of the specified source at the end of the required interval.
+
+#### Available Aggregation Intervals
+
+- `hour`: sets the timeseries period every hour, on the hour.
+- `day`: sets the timeseries period every day, starting and ending at 00:00.
+
+#### Available Aggregation Functions
+
+- `sum`: Total of all values.
+- `count`: Number of values.
+- `min`: Minimum value.
+- `max`: Maximum value.
+- `first`: First value in the period.
+- `last`: Last value in the period.
+
+#### Example Aggregations Query
+
+```graphql
+{
+ stats(interval: "hour", where: { timestamp_gt: 1704085200 }) {
+ id
+ timestamp
+ sum
+ }
+}
+```
-> **ملاحظة:** لا تدعم شبكة Graph حتى الآن `ipfs.cat` و `ipfs.map` ، ويجب على المطورين عدم النشر الـ subgraphs للشبكة باستخدام تلك الدالة عبر الـ Studio.
+Note:
-> **[Feature Management](#experimental-features):** `ipfsOnEthereumContracts` must be declared under `features` in the subgraph manifest. For non EVM chains, the `nonDeterministicIpfs` alias can also be used for the same purpose.
+To use Timeseries and Aggregations, a subgraph must have a spec version ≥1.1.0. Be aware that this feature might undergo significant changes that could affect backward compatibility.
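+
+As a quick orientation, here is a minimal sketch of the relevant manifest field (the surrounding layout is illustrative only, and the rest of `subgraph.yaml` stays unchanged):
+
+```yaml
+# subgraph.yaml (sketch): only the specVersion value matters for this feature
+specVersion: 1.1.0
+schema:
+  file: ./schema.graphql
+dataSources:
+  # ... your data sources as usual
+```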
-When running a local Graph Node, the `GRAPH_ALLOW_NON_DETERMINISTIC_IPFS` environment variable must be set in order to index subgraphs using this experimental functionality.
+[Read more](https://github.com/graphprotocol/graph-node/blob/master/docs/aggregations.md) about Timeseries and Aggregations.
### أخطاء غير فادحة
@@ -1017,7 +1382,7 @@ The grafted subgraph can use a GraphQL schema that is not identical to the one o
> **[إدارة الميزات](#experimental-features):** يجب الإعلان عن `grafting` ضمن `features` في ملف الـ subgraph manifest.
-## File Data Sources
+## IPFS/Arweave File Data Sources
File data sources are a new subgraph functionality for accessing off-chain data during indexing in a robust, extendable way. File data sources support fetching files from IPFS and from Arweave.
@@ -1025,9 +1390,9 @@ File data sources are a new subgraph functionality for accessing off-chain data
### نظره عامة
-Rather than fetching files "in line" during handler exectuion, this introduces templates which can be spawned as new data sources for a given file identifier. These new data sources fetch the files, retrying if they are unsuccessful, running a dedicated handler when the file is found.
+Rather than fetching files "in line" during handler execution, this introduces templates which can be spawned as new data sources for a given file identifier. These new data sources fetch the files, retrying if they are unsuccessful, and run a dedicated handler when the file is found.
-This is similar to the [existing data source templates](https://thegraph.com/docs/en/developing/creating-a-subgraph/#data-source-templates), which are used to dynamically create new chain-based data sources.
+This is similar to the [existing data source templates](/developing/creating-a-subgraph/#data-source-templates), which are used to dynamically create new chain-based data sources.
> This replaces the existing `ipfs.cat` API
@@ -1084,7 +1449,7 @@ type TokenMetadata @entity {
If the relationship is 1:1 between the parent entity and the resulting file data source entity, the simplest pattern is to link the parent entity to a resulting file entity by using the IPFS CID as the lookup. Get in touch on Discord if you are having difficulty modelling your new file-based entities!
-> You can use [nested filters](https://thegraph.com/docs/en/querying/graphql-api/#example-for-nested-entity-filtering) to filter parent entities on the basis of these nested entities.
+> You can use [nested filters](/querying/graphql-api/#example-for-nested-entity-filtering) to filter parent entities on the basis of these nested entities.
#### Add a new templated data source with `kind: file/ipfs` or `kind: file/arweave`
@@ -1108,11 +1473,11 @@ templates:
> Currently `abis` are required, though it is not possible to call contracts from within file data sources
-The file data source must specifically mention all the entity types which it will interact with under `entities`. See [limitations](#Limitations) for more details.
+The file data source must specifically mention all the entity types which it will interact with under `entities`. See [limitations](#limitations) for more details.
#### Create a new handler to process files
-This handler should accept one `Bytes` parameter, which will be the contents of the file, when it is found, which can then be processed. This will often be a JSON file, which can be processed with `graph-ts` helpers ([documentation](https://thegraph.com/docs/en/developing/assemblyscript-api/#json-api)).
+This handler should accept one `Bytes` parameter, which will be the contents of the file when it is found; the file can then be processed. This will often be a JSON file, which can be processed with `graph-ts` helpers ([documentation](/developing/assemblyscript-api/#json-api)).
The CID of the file as a readable string can be accessed via the `dataSource` as follows:
@@ -1156,7 +1521,7 @@ You can now create file data sources during execution of chain-based handlers:
For IPFS, Graph Node supports [v0 and v1 content identifiers](https://docs.ipfs.tech/concepts/content-addressing/), and content identifiers with directories (e.g. `bafyreighykzv2we26wfrbzkcdw37sbrby4upq7ae3aqobbq7i4er3tnxci/metadata.json`).
-For Arweave, as of version 0.33.0 Graph Node can fetch files stored on Arweave based on their [transaction ID](https://docs.arweave.org/developers/server/http-api#transactions) from an Arweave gateway ([example file](https://bdxujjl5ev5eerd5ouhhs6o4kjrs4g6hqstzlci5pf6vhxezkgaa.arweave.net/CO9EpX0lekJEfXUOeXncUmMuG8eEp5WJHXl9U9yZUYA)). Arweave supports transactions uploaded via Bundlr, and Graph Node can also fetch files based on [Bundlr manifests](https://docs.bundlr.network/learn/gateways#indexing).
+For Arweave, as of version 0.33.0 Graph Node can fetch files stored on Arweave based on their [transaction ID](https://docs.arweave.org/developers/arweave-node-server/http-api#transactions) from an Arweave gateway ([example file](https://bdxujjl5ev5eerd5ouhhs6o4kjrs4g6hqstzlci5pf6vhxezkgaa.arweave.net/CO9EpX0lekJEfXUOeXncUmMuG8eEp5WJHXl9U9yZUYA)). Arweave supports transactions uploaded via Irys (previously Bundlr), and Graph Node can also fetch files based on [Irys manifests](https://docs.irys.xyz/overview/gateways#indexing).
Example:
@@ -1215,7 +1580,7 @@ Additionally, it is not possible to create data sources from a file data source,
If you are linking NFT metadata to corresponding tokens, use the metadata's IPFS hash to reference a Metadata entity from the Token entity. Save the Metadata entity using the IPFS hash as an ID.
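As a rough sketch of this pattern (the entity, event, and template names below are hypothetical, not part of the API):

```typescript
import { Token } from '../generated/schema'
import { Transfer } from '../generated/Token/Token' // hypothetical generated event class
import { TokenMetadata as TokenMetadataTemplate } from '../generated/templates'

export function handleTransfer(event: Transfer): void {
  let token = new Token(event.params.tokenId.toString())
  // Hypothetical: the metadata hash would normally be read from the contract or event
  let ipfsHash = 'QmSomeMetadataHash'
  token.ipfsURI = ipfsHash // the file handler saves the TokenMetadata entity under this same ID
  token.save()

  // Spawn the file data source; its handler receives the file contents once found
  TokenMetadataTemplate.create(ipfsHash)
}
```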
-You can use [DataSource context](https://thegraph.com/docs/en/developing/assemblyscript-api/#entity-and-data-source-context) when creating File Data Sources to pass extra information which will be available to the File Data Source handler.
+You can use [DataSource context](/developing/graph-ts/api/#entity-and-datasourcecontext) when creating File Data Sources to pass extra information which will be available to the File Data Source handler.
If you have entities which are refreshed multiple times, create unique file-based entities using the IPFS hash & the entity ID, and reference them using a derived field in the chain-based entity.
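A hypothetical schema sketch of that pattern (entity and field names are illustrative):

```graphql
type Token @entity {
  id: ID!
  "All metadata snapshots for this token, derived from TokenMetadata.token"
  metadataHistory: [TokenMetadata!]! @derivedFrom(field: "token")
}

type TokenMetadata @entity {
  id: ID! # unique per refresh, e.g. the IPFS hash combined with the token ID
  token: Token!
  image: String
}
```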
@@ -1225,7 +1590,7 @@ If you have entities which are refreshed multiple times, create unique file-base
File data sources currently require ABIs, even though ABIs are not used ([issue](https://github.com/graphprotocol/graph-cli/issues/961)). Workaround is to add any ABI.
-Handlers for File Data Sources cannot be in files which import `eth_call` contract bindings, failing with "unknown import: `ethereum::ethereum.call` has not been defined" ([issue](https://github.com/graphprotocol/graph-cli/issues/4309)). Workaround is to create file data source handlers in a dedicated file.
+Handlers for File Data Sources cannot be in files which import `eth_call` contract bindings, failing with "unknown import: `ethereum::ethereum.call` has not been defined" ([issue](https://github.com/graphprotocol/graph-node/issues/4309)). Workaround is to create file data source handlers in a dedicated file.
#### Examples
diff --git a/website/pages/ar/developing/developer-faqs.mdx b/website/pages/ar/developing/developer-faqs.mdx
index 94efea905584..1758e9f909b6 100644
--- a/website/pages/ar/developing/developer-faqs.mdx
+++ b/website/pages/ar/developing/developer-faqs.mdx
@@ -46,17 +46,18 @@ docker pull graphprotocol/graph-node:latest
## 9. How do I call a contract function or access a public state variable from my subgraph mappings?
-Take a look at `Access to smart contract` state inside the section [AssemblyScript API](/developing/assemblyscript-api).
+Take a look at the `Access to smart contract state` section in the [AssemblyScript API](/developing/graph-ts/api/#access-to-smart-contract-state) documentation.
## 10. Is it possible to set up a subgraph using `graph init` from `graph-cli` with two contracts? Or should I manually add another datasource in `subgraph.yaml` after running `graph init`?
-Unfortunately, this is currently not possible. `graph init` is intended as a basic starting point, from which you can then add more data sources manually.
+Yes. When running the `graph init` command itself, you can add multiple data sources by entering contracts one after the other. You can also use the `graph add` command to add a new data source.
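+
+For example (exact flag names may vary between `graph-cli` versions, so treat this as a sketch):
+
+```sh
+# Add another contract as a new data source to an existing subgraph
+# (the address below is a placeholder)
+graph add 0x0000000000000000000000000000000000000000 \
+  --contract-name MyOtherContract \
+  --abi ./abis/MyOtherContract.json
+```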
## 11. I want to contribute or add a GitHub issue. Where can I find the open source repositories?
- [graph-node](https://github.com/graphprotocol/graph-node)
-- [graph-cli](https://github.com/graphprotocol/graph-cli)
-- [graph-ts](https://github.com/graphprotocol/graph-ts)
+- [graph-tooling](https://github.com/graphprotocol/graph-tooling)
+- [graph-docs](https://github.com/graphprotocol/docs)
+- [graph-client](https://github.com/graphprotocol/graph-client)
## 12. What is the recommended way to build "autogenerated" ids for an entity when handling events?
@@ -66,7 +67,7 @@ Unfortunately, this is currently not possible. `graph init` is intended as a bas
ضمن ال Subgraph ، تتم معالجة الأحداث دائمًا بالترتيب الذي تظهر به في الكتل ، بغض النظر عما إذا كان ذلك عبر عقود متعددة أم لا.
-## 14. Is it possible to differentiate between networks (mainnet, Goerli, local) from within event handlers?
+## 14. Is it possible to differentiate between networks (mainnet, Sepolia, local) from within event handlers?
نعم. يمكنك القيام بذلك عن طريق استيراد `graph-ts` كما في المثال أدناه:
@@ -77,9 +78,9 @@ Unfortunately, this is currently not possible. `graph init` is intended as a bas
dataSource.address()
```
-## 15. Do you support block and call handlers on Goerli?
+## 15. Do you support block and call handlers on Sepolia?
-Yes. Goerli supports block handlers, call handlers and event handlers. It should be noted that event handlers are far more performant than the other two handlers, and they are supported on every EVM-compatible network.
+Yes. Sepolia supports block handlers, call handlers and event handlers. It should be noted that event handlers are far more performant than the other two handlers, and they are supported on every EVM-compatible network.
## 16. Can I import ethers.js or other JS libraries into my subgraph mappings?
@@ -87,7 +88,7 @@ Yes. Goerli supports block handlers, call handlers and event handlers. It should
## 17. Is it possible to specify what block to start indexing on?
-Yes. `dataSources.source.startBlock` in the `subgraph.yaml` file specifies the number of the block that the data source starts indexing from. In most cases, we suggest using the block in which the contract was created: Start blocks
+Yes. `dataSources.source.startBlock` in the `subgraph.yaml` file specifies the number of the block that the data source starts indexing from. In most cases, we suggest using the block in which the contract was created: [Start blocks](/developing/creating-a-subgraph#start-blocks)
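+
+A sketch of where this sits in the manifest (the name and address are placeholders):
+
+```yaml
+dataSources:
+  - kind: ethereum/contract
+    name: ExampleContract
+    network: mainnet
+    source:
+      address: '0x0000000000000000000000000000000000000000'
+      abi: ExampleContract
+      startBlock: 12345678 # ideally the block in which the contract was created
+```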
## 18. Are there some tips to increase the performance of indexing? My subgraph is taking a very long time to sync
@@ -135,4 +136,10 @@ The Graph will never charge for the hosted service. The Graph is a decentralized
## 27. How do I update a subgraph on mainnet?
-If you’re a subgraph developer, you can deploy a new version of your subgraph to the Subgraph Studio using the CLI. It’ll be private at that point, but if you’re happy with it, you can publish to the decentralized Graph Explorer. This will create a new version of your subgraph that Curators can start signaling on.
+If you’re a subgraph developer, you can deploy a new version of your subgraph to Subgraph Studio using the CLI. It’ll be private at that point, but if you’re happy with it, you can publish to the decentralized Graph Explorer. This will create a new version of your subgraph that Curators can start signaling on.
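+
+A sketch of that flow with the CLI (the deploy key and slug are placeholders; Subgraph Studio shows the exact commands for your subgraph):
+
+```sh
+graph auth --studio <DEPLOY_KEY>
+graph codegen && graph build
+graph deploy --studio <SUBGRAPH_SLUG> # you will be prompted for a version label, e.g. v0.0.2
+```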
+
+## 28. In what order are the event, block, and call handlers triggered for a data source?
+
+Event and call handlers are first ordered by transaction index within the block. Event and call handlers within the same transaction are ordered using a convention: event handlers first, then call handlers, each type respecting the order in which they are defined in the manifest. Block handlers run after event and call handlers, in the order in which they are defined in the manifest. These ordering rules are subject to change.
+
+When new dynamic data sources are created, the handlers defined for dynamic data sources will only start processing after all existing data source handlers have been processed, and will repeat in the same sequence whenever triggered.
diff --git a/website/pages/ar/developing/graph-ts/api.mdx b/website/pages/ar/developing/graph-ts/api.mdx
index 06ad54feb70b..b1ad7296262c 100644
--- a/website/pages/ar/developing/graph-ts/api.mdx
+++ b/website/pages/ar/developing/graph-ts/api.mdx
@@ -6,7 +6,7 @@ title: AssemblyScript API
هذه الصفحة توثق APIs المضمنة التي يمكن استخدامها عند كتابة subgraph mappings. يتوفر نوعان من APIs خارج الصندوق:
-- the [Graph TypeScript library](https://github.com/graphprotocol/graph-ts) (`graph-ts`) and
+- the [Graph TypeScript library](https://github.com/graphprotocol/graph-tooling/tree/main/packages/ts) (`graph-ts`) and
- code generated from subgraph files by `graph codegen`.
It is also possible to add other libraries as dependencies, as long as they are compatible with [AssemblyScript](https://github.com/AssemblyScript/assemblyscript). Since this is the language mappings are written in, the [AssemblyScript wiki](https://github.com/AssemblyScript/assemblyscript/wiki) is a good source for language and standard library features.
@@ -17,7 +17,7 @@ The `@graphprotocol/graph-ts` library provides the following APIs:
- An `ethereum` API for working with Ethereum smart contracts, events, blocks, transactions, and Ethereum values.
- A `store` API to load and save entities from and to the Graph Node store.
-- A `log` API to log messages to the Graph Node output and the Graph Explorer.
+- A `log` API to log messages to the Graph Node output and Graph Explorer.
- An `ipfs` API to load files from IPFS.
- A `json` API to parse JSON data.
- A `crypto` API to use cryptographic functions.
@@ -27,18 +27,20 @@ The `@graphprotocol/graph-ts` library provides the following APIs:
The `apiVersion` in the subgraph manifest specifies the mapping API version which is run by Graph Node for a given subgraph.
-| الاصدار | ملاحظات الإصدار |
-| :-: | --- |
-| 0.0.7 | Added `TransactionReceipt` and `Log` classes to the Ethereum types Added `receipt` field to the Ethereum Event object |
-| 0.0.6 | Added `nonce` field to the Ethereum Transaction object Added `baseFeePerGas` to the Ethereum Block object |
-| 0.0.5 | AssemblyScript upgraded to version 0.19.10 (this includes breaking changes, please see the [`Migration Guide`](/release-notes/assemblyscript-migration-guide)) `ethereum.transaction.gasUsed` renamed to `ethereum.transaction.gasLimit` |
-| 0.0.4 | Added `functionSignature` field to the Ethereum SmartContractCall object |
-| 0.0.3 | Added `from` field to the Ethereum Call object `etherem.call.address` renamed to `ethereum.call.to` |
-| 0.0.2 | Added `input` field to the Ethereum Transaction object |
+| الاصدار | ملاحظات الإصدار |
+| :-----: | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
+| 0.0.9 | Adds new host functions [`eth_get_balance`](#balance-of-an-address) & [`hasCode`](#check-if-an-address-is-a-contract-or-eoa) |
+| 0.0.8 | Adds validation for existence of fields in the schema when saving an entity. |
+| 0.0.7 | Added `TransactionReceipt` and `Log` classes to the Ethereum types Added `receipt` field to the Ethereum Event object |
+| 0.0.6 | Added `nonce` field to the Ethereum Transaction object Added `baseFeePerGas` to the Ethereum Block object |
+| 0.0.5 | AssemblyScript upgraded to version 0.19.10 (this includes breaking changes, please see the [`Migration Guide`](/release-notes/assemblyscript-migration-guide)) `ethereum.transaction.gasUsed` renamed to `ethereum.transaction.gasLimit` |
+| 0.0.4 | Added `functionSignature` field to the Ethereum SmartContractCall object |
+| 0.0.3 | Added `from` field to the Ethereum Call object `etherem.call.address` renamed to `ethereum.call.to` |
+| 0.0.2 | Added `input` field to the Ethereum Transaction object |
### الأنواع المضمنة (Built-in)
-Documentation on the base types built into AssemblyScript can be found in the [AssemblyScript wiki](https://github.com/AssemblyScript/assemblyscript/wiki/Types).
+Documentation on the base types built into AssemblyScript can be found in the [AssemblyScript wiki](https://www.assemblyscript.org/types.html).
The following additional types are provided by `@graphprotocol/graph-ts`.
@@ -77,7 +79,7 @@ _Operators_
`BigDecimal` is used to represent arbitrary precision decimals.
-> Note: [Internally](https://github.com/graphprotocol/graph-node/blob/master/graph/src/data/store/scalar.rs) `BigDecimal` is stored in [IEEE-754 decimal128 floating-point format](https://en.wikipedia.org/wiki/Decimal128_floating-point_format), which supports 34 decimal digits of significand. This makes `BigDecimal` unsuitable for representing fixed-point types that can span wider than 34 digits, such as a Solidity [`ufixed256x18`](https://docs.soliditylang.org/en/latest/types.html#fixed-point-numbers) or equivalent.
+> Note: [Internally](https://github.com/graphprotocol/graph-node/blob/master/graph/src/data/store/scalar/bigdecimal.rs) `BigDecimal` is stored in [IEEE-754 decimal128 floating-point format](https://en.wikipedia.org/wiki/Decimal128_floating-point_format), which supports 34 decimal digits of significand. This makes `BigDecimal` unsuitable for representing fixed-point types that can span wider than 34 digits, such as a Solidity [`ufixed256x18`](https://docs.soliditylang.org/en/latest/types.html#fixed-point-numbers) or equivalent.
_Construction_
@@ -219,7 +221,7 @@ It adds the following method on top of the `Bytes` API:
The `store` API allows to load, save and remove entities from and to the Graph Node store.
-Entities written to the store map one-to-one to the `@entity` types defined in the subgraph's GraphQL schema. To make working with these entities convenient, the `graph codegen` command provided by the [Graph CLI](https://github.com/graphprotocol/graph-cli) generates entity classes, which are subclasses of the built-in `Entity` type, with property getters and setters for the fields in the schema as well as methods to load and save these entities.
+Entities written to the store map one-to-one to the `@entity` types defined in the subgraph's GraphQL schema. To make working with these entities convenient, the `graph codegen` command provided by the [Graph CLI](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli) generates entity classes, which are subclasses of the built-in `Entity` type, with property getters and setters for the fields in the schema as well as methods to load and save these entities.
#### إنشاء الكيانات
@@ -538,7 +540,32 @@ For more information:
- [ABI Spec](https://docs.soliditylang.org/en/v0.7.4/abi-spec.html#types)
- Encoding/decoding [Rust library/CLI](https://github.com/rust-ethereum/ethabi)
-- More [complex example](https://github.com/graphprotocol/graph-node/blob/6a7806cc465949ebb9e5b8269eeb763857797efc/tests/integration-tests/host-exports/src/mapping.ts#L72).
+- More [complex example](https://github.com/graphprotocol/graph-node/blob/08da7cb46ddc8c09f448c5ea4b210c9021ea05ad/tests/integration-tests/host-exports/src/mapping.ts#L86).
+
+#### Balance of an Address
+
+The native token balance of an address can be retrieved using the `ethereum` module. This feature is available from `apiVersion: 0.0.9`, which is defined in `subgraph.yaml`. `getBalance()` retrieves the balance of the specified address as of the end of the block in which the event is triggered.
+
+```typescript
+import { Address, ethereum } from '@graphprotocol/graph-ts'
+
+let address = Address.fromString('0xd8dA6BF26964aF9D7eEd9e03E53415D37aA96045')
+let balance = ethereum.getBalance(address) // returns balance in BigInt
+```
+
+#### Check if an Address is a Contract or EOA
+
+To check whether an address is a smart contract address or an externally owned account (EOA), use the `hasCode()` function from the `ethereum` module, which returns a `boolean`. This feature is available from `apiVersion: 0.0.9`, which is defined in `subgraph.yaml`.
+
+```typescript
+import { Address, ethereum } from '@graphprotocol/graph-ts'
+
+let contractAddr = Address.fromString('0x2E645469f354BB4F5c8a05B3b30A929361cf77eC')
+let isContract = ethereum.hasCode(contractAddr).inner // returns true
+
+let eoa = Address.fromString('0xd8dA6BF26964aF9D7eEd9e03E53415D37aA96045')
+let eoaIsContract = ethereum.hasCode(eoa).inner // returns false
+```
### Logging API
@@ -546,7 +573,7 @@ For more information:
import { log } from '@graphprotocol/graph-ts'
```
-The `log` API allows subgraphs to log information to the Graph Node standard output as well as the Graph Explorer. Messages can be logged using different log levels. A basic format string syntax is provided to compose log messages from argument.
+The `log` API allows subgraphs to log information to the Graph Node standard output as well as Graph Explorer. Messages can be logged using different log levels. A basic format string syntax is provided to compose log messages from arguments.
The `log` API includes the following functions:
@@ -734,44 +761,44 @@ When the type of a value is certain, it can be converted to a [built-in type](#b
### مرجع تحويلات الأنواع
-| Source(s) | Destination | Conversion function |
-| -------------------- | -------------------- | ---------------------------- |
-| Address | Bytes | none |
-| Address | String | s.toHexString() |
-| BigDecimal | String | s.toString() |
-| BigInt | BigDecimal | s.toBigDecimal() |
-| BigInt | String (hexadecimal) | s.toHexString() or s.toHex() |
-| BigInt | String (unicode) | s.toString() |
-| BigInt | i32 | s.toI32() |
-| Boolean | Boolean | none |
-| Bytes (signed) | BigInt | BigInt.fromSignedBytes(s) |
-| Bytes (unsigned) | BigInt | BigInt.fromUnsignedBytes(s) |
-| Bytes | String (hexadecimal) | s.toHexString() or s.toHex() |
-| Bytes | String (unicode) | s.toString() |
-| Bytes | String (base58) | s.toBase58() |
-| Bytes | i32 | s.toI32() |
-| Bytes | u32 | s.toU32() |
-| Bytes | JSON | json.fromBytes(s) |
-| int8 | i32 | none |
-| int32 | i32 | none |
-| int32 | BigInt | BigInt.fromI32(s) |
-| uint24 | i32 | none |
-| int64 - int256 | BigInt | none |
-| uint32 - uint256 | BigInt | none |
-| JSON | boolean | s.toBool() |
-| JSON | i64 | s.toI64() |
-| JSON | u64 | s.toU64() |
-| JSON | f64 | s.toF64() |
-| JSON | BigInt | s.toBigInt() |
-| JSON | string | s.toString() |
-| JSON | Array | s.toArray() |
-| JSON | Object | s.toObject() |
-| String | Address | Address.fromString(s) |
-| Bytes | Address | Address.fromBytes(s) |
-| String | BigInt | BigInt.fromString(s) |
-| String | BigDecimal | BigDecimal.fromString(s) |
-| String (hexadecimal) | Bytes | ByteArray.fromHexString(s) |
-| String (UTF-8) | Bytes | ByteArray.fromUTF8(s) |
+| Source(s) | Destination | Conversion function |
+| --------------------------------------- | --------------------------------------- | ------------------------------------------------------------------ |
+| Address | Bytes | none |
+| Address | String | s.toHexString() |
+| BigDecimal | String | s.toString() |
+| BigInt | BigDecimal | s.toBigDecimal() |
+| BigInt | String (hexadecimal) | s.toHexString() or s.toHex() |
+| BigInt | String (unicode) | s.toString() |
+| BigInt | i32 | s.toI32() |
+| Boolean | Boolean | none |
+| Bytes (signed) | BigInt | BigInt.fromSignedBytes(s) |
+| Bytes (unsigned) | BigInt | BigInt.fromUnsignedBytes(s) |
+| Bytes | String (hexadecimal) | s.toHexString() or s.toHex() |
+| Bytes | String (unicode) | s.toString() |
+| Bytes | String (base58) | s.toBase58() |
+| Bytes | i32 | s.toI32() |
+| Bytes | u32 | s.toU32() |
+| Bytes | JSON | json.fromBytes(s) |
+| int8 | i32 | none |
+| int32 | i32 | none |
+| int32 | BigInt | BigInt.fromI32(s) |
+| uint24 | i32 | none |
+| int64 - int256 | BigInt | none |
+| uint32 - uint256 | BigInt | none |
+| JSON | boolean | s.toBool() |
+| JSON | i64 | s.toI64() |
+| JSON | u64 | s.toU64() |
+| JSON | f64 | s.toF64() |
+| JSON | BigInt | s.toBigInt() |
+| JSON | string | s.toString() |
+| JSON | Array | s.toArray() |
+| JSON | Object | s.toObject() |
+| String | Address | Address.fromString(s) |
+| Bytes | Address | Address.fromBytes(s) |
+| String | BigInt | BigInt.fromString(s) |
+| String | BigDecimal | BigDecimal.fromString(s) |
+| String (hexadecimal) | Bytes | ByteArray.fromHexString(s) |
+| String (UTF-8) | Bytes | ByteArray.fromUTF8(s) |
### البيانات الوصفية لمصدر البيانات
diff --git a/website/pages/ar/developing/substreams-powered-subgraphs-faq.mdx b/website/pages/ar/developing/substreams-powered-subgraphs-faq.mdx
index 66633e8820e3..d46783a1f7e3 100644
--- a/website/pages/ar/developing/substreams-powered-subgraphs-faq.mdx
+++ b/website/pages/ar/developing/substreams-powered-subgraphs-faq.mdx
@@ -4,7 +4,7 @@ title: Substreams-powered subgraphs FAQ
## What are Substreams?
-تم تطوير سبستريمز بواسطة [ستريمنج فاست] (https://www.streamingfast.io/) وهو محرك معالجة قوي بشكل استثنائي قادر على استيعاب تدفقات غنية من بيانات سلاسل الكتل. يتيح لك سبستريمز تحسين وتشكيل بيانات سلاسل الكتل لاستخلاص سريع وسلس بواسطة تطبيقات المستخدم النهائي. وبشكل أكثر تحديداً، فإن سبستريمز هو محرك يعمل بشكل مستقل عن سلاسل الكتل، وبالتوازي، بأولوية-التدفق ويعمل كطبقة لتحويل بيانات سلاسل الكتل. مدعوماً [بالفايرهوز](https://firehose.streamingfast.io/) يمكن المطورين من كتابة وحدات لغة رست والبناء على وحدات المجتمع وتوفير فهرسة عالية الأداء للبيانات وإدخال [sink](https://substreams.streamingfast.io/developers-guide/sink-targets) بياناتهم إلى أي مكان.
+Developed by [StreamingFast](https://www.streamingfast.io/), Substreams is an exceptionally powerful processing engine capable of consuming rich streams of blockchain data. Substreams allow you to refine and shape blockchain data for fast and seamless digestion by end-user applications. More specifically, Substreams is a blockchain-agnostic, parallelized, and streaming-first engine, serving as a blockchain data transformation layer. Powered by the [Firehose](https://firehose.streamingfast.io/), it enables developers to write Rust modules, build upon community modules, provide extremely high-performance indexing, and [sink](https://substreams.streamingfast.io/reference-and-specs/manifests#sink) their data anywhere.
اذهب إلى [وثائق سبستريمز](/substreams) للتعرف على المزيد حول سبستريمز.
@@ -22,7 +22,7 @@ title: Substreams-powered subgraphs FAQ
## ما هي فوائد استخدام الغرافات الفرعية المدعومة بسبستريمز؟
-Substreams-powered subgraphs combine all the benefits of Substreams with the queryability of subgraphs. They bring greater composability and high-performance indexing to The Graph. They also enable new data use cases; for example, once you've built your Substreams-powered Subgraph, you can reuse your [Substreams modules](https://substreams.streamingfast.io/developers-guide/modules) to output to different [sinks](https://substreams.streamingfast.io/developers-guide/sink-targets) such as PostgreSQL, MongoDB, and Kafka.
+Substreams-powered subgraphs combine all the benefits of Substreams with the queryability of subgraphs. They bring greater composability and high-performance indexing to The Graph. They also enable new data use cases; for example, once you've built your Substreams-powered Subgraph, you can reuse your [Substreams modules](https://substreams.streamingfast.io/documentation/develop/manifest-modules) to output to different [sinks](https://substreams.streamingfast.io/reference-and-specs/manifests#sink) such as PostgreSQL, MongoDB, and Kafka.
## ماهي فوائد سبستريمز؟
@@ -66,11 +66,13 @@ The [Substreams documentation](/substreams) will teach you how to build Substrea
ستوضح لك [وثائق الغرافات الفرعية المدعومة بواسطة سبستريمز](/cookbook/substreams-powered-subgraphs/) كيفية تجميعها للنشر على الغراف.
+The [latest Substreams Codegen tool](https://streamingfastio.medium.com/substreams-codegen-no-code-tool-to-bootstrap-your-project-a11efe0378c6) will allow you to bootstrap a Substreams project without any code.
+
## What is the role of Rust modules in Substreams?
تعتبر وحدات رست مكافئة لمعينات أسمبلي اسكريبت في الغرافات الفرعية. يتم ترجمتها إلى ويب أسيمبلي بنفس الطريقة، ولكن النموذج البرمجي يسمح بالتنفيذ الموازي. تحدد وحدات رست نوع التحويلات والتجميعات التي ترغب في تطبيقها على بيانات سلاسل الكتل الخام.
-See [modules documentation](https://substreams.streamingfast.io/developers-guide/modules) for details.
+See [modules documentation](https://substreams.streamingfast.io/documentation/develop/manifest-modules) for details.
## What makes Substreams composable?
diff --git a/website/pages/ar/developing/supported-networks.json b/website/pages/ar/developing/supported-networks.json
index 5e12392b8c7d..3ef903c77b7c 100644
--- a/website/pages/ar/developing/supported-networks.json
+++ b/website/pages/ar/developing/supported-networks.json
@@ -2,8 +2,8 @@
"network": "Network",
"cliName": "CLI Name",
"chainId": "Chain ID",
- "studioAndHostedService": "Studio and Hosted Service",
+ "hostedService": "الخدمة المستضافة (Hosted Service)",
+ "subgraphStudio": "Subgraph Studio",
"decentralizedNetwork": "Decentralized Network",
- "supportedByUpgradeIndexer": "Supported only by upgrade Indexer",
- "supportsSubstreams": "Supports Substreams"
+ "integrationType": "Integration Type"
}
diff --git a/website/pages/ar/developing/supported-networks.mdx b/website/pages/ar/developing/supported-networks.mdx
index 44a74bfc0e8e..96e737b0d743 100644
--- a/website/pages/ar/developing/supported-networks.mdx
+++ b/website/pages/ar/developing/supported-networks.mdx
@@ -9,21 +9,16 @@ export const getStaticProps = getStaticPropsForSupportedNetworks(__filename)
-\* Preliminary network support via the [upgrade Indexer](https://thegraph.com/blog/upgrade-indexer/)
-† Supports Substreams
+\* Baseline network support provided by the [upgrade Indexer](https://thegraph.com/blog/upgrade-indexer/).
+\*\* Integration with Graph Node: `evm`, `near`, `cosmos`, `osmosis` and `ar` have native handler and type support in Graph Node. Chains which are Firehose- and Substreams-compatible can leverage the generalised [Substreams-powered subgraph](/cookbook/substreams-powered-subgraphs) integration (this includes `evm` and `near` networks).
+
+† Supports deployment of [Substreams-powered subgraphs](/cookbook/substreams-powered-subgraphs).
-The hosted service relies on the stability and reliability of the underlying technologies, namely the provided JSON RPC endpoints.
+- Subgraph Studio relies on the stability and reliability of the underlying technologies, for example JSON-RPC, Firehose and Substreams endpoints.
+- Subgraphs indexing Gnosis Chain can now be deployed with the `gnosis` network identifier. `xdai` is still supported for existing hosted service subgraphs.
+- If a subgraph was published via the CLI and picked up by an Indexer, it could technically be queried even without support, and efforts are underway to further streamline integration of new networks.
+- For a full list of which features are supported on the decentralized network, see [this page](https://github.com/graphprotocol/indexer/blob/main/docs/feature-support-matrix.md).
-Ropsten, Rinkeby and Kovan are being deprecated. Read more on the [Ethereum Foundation Blog](https://blog.ethereum.org/2022/06/21/testnet-deprecation). As of Feb 25th 2023, Ropsten, Rinkeby and Kovan are no longer supported by the hosted service. Goerli will be maintained by client developers post-merge, and is also supported by the hosted service. Developers who currently use Ropsten, Rinkeby or Kovan as their staging/testing environment are encouraged to migrate to Goerli.
-
-Subgraphs indexing Gnosis Chain can now be deployed with the `gnosis` network identifier. `xdai` is still supported for existing hosted service subgraphs.
-
-For a full list of which features are supported on the decentralized network, see [this page](https://github.com/graphprotocol/indexer/blob/main/docs/feature-support-matrix.md).
-
-Substreams-powered subgraphs indexing `mainnet` Ethereum are supported on the Subgraph Studio and decentralized network.
-
-## Graph Node
+## Running Graph Node locally
If your preferred network isn't supported on The Graph's decentralized network, you can run your own [Graph Node](https://github.com/graphprotocol/graph-node) to index any EVM-compatible network. Make sure that the [version](https://github.com/graphprotocol/graph-node/releases) you are using supports the network and you have the needed configuration.
-Graph Node can also index other protocols, via a Firehose integration. Firehose integrations have been created for NEAR, Arweave and Cosmos-based networks.
+Graph Node can also index other protocols, via a Firehose integration. Firehose integrations have been created for NEAR, Arweave and Cosmos-based networks. Additionally, Graph Node can support Substreams-powered subgraphs for any network with Substreams support.
diff --git a/website/pages/ar/developing/unit-testing-framework.mdx b/website/pages/ar/developing/unit-testing-framework.mdx
index 54b83b009125..fad1e010d641 100644
--- a/website/pages/ar/developing/unit-testing-framework.mdx
+++ b/website/pages/ar/developing/unit-testing-framework.mdx
@@ -69,9 +69,9 @@ And finally, do not use `graph test` (which uses your global installation of gra
...
},
"dependencies": {
- "@graphprotocol/graph-cli": "^0.30.0",
- "@graphprotocol/graph-ts": "^0.27.0",
- "matchstick-as": "^0.5.0"
+ "@graphprotocol/graph-cli": "^0.56.0",
+ "@graphprotocol/graph-ts": "^0.31.0",
+ "matchstick-as": "^0.6.0"
}
}
```
@@ -116,6 +116,8 @@ graph test path/to/file.test.ts
From `graph-cli 0.25.2`, the `graph test` command supports running `matchstick` in a docker container with the `-d` flag. The docker implementation uses [bind mount](https://docs.docker.com/storage/bind-mounts/) so it does not have to rebuild the docker image every time the `graph test -d` command is executed. Alternatively you can follow the instructions from the [matchstick](https://github.com/LimeChain/matchstick#docker-) repository to run docker manually.
+❗ `graph test -d` forces `docker run` to run with flag `-t`. This must be removed to run inside non-interactive environments (like GitHub CI).
+
❗ If you have previously run `graph test` you may encounter the following error during docker build:
```sh
@@ -142,9 +144,9 @@ You can try out and play around with the examples from this guide by cloning the
Also you can check out the video series on ["How to use Matchstick to write unit tests for your subgraphs"](https://www.youtube.com/playlist?list=PLTqyKgxaGF3SNakGQwczpSGVjS_xvOv3h)
-## Tests structure (>=0.5.0)
+## Tests structure
-_**IMPORTANT: Requires matchstick-as >=0.5.0**_
+_**IMPORTANT: The test structure described below depends on `matchstick-as` version >=0.5.0**_
### describe()
@@ -522,6 +524,36 @@ assertNotNull(value: T)
entityCount(entityType: string, expectedCount: i32)
```
+As of version 0.6.0, asserts support custom error messages as well:
+
+```typescript
+assert.fieldEquals('Gravatar', '0x123', 'id', '0x123', 'Id should be 0x123')
+assert.equals(ethereum.Value.fromI32(1), ethereum.Value.fromI32(1), 'Value should equal 1')
+assert.notInStore('Gravatar', '0x124', 'Gravatar should not be in store')
+assert.addressEquals(Address.zero(), Address.zero(), 'Address should be zero')
+assert.bytesEquals(Bytes.fromUTF8('0x123'), Bytes.fromUTF8('0x123'), 'Bytes should be equal')
+assert.i32Equals(2, 2, 'I32 should equal 2')
+assert.bigIntEquals(BigInt.fromI32(1), BigInt.fromI32(1), 'BigInt should equal 1')
+assert.booleanEquals(true, true, 'Boolean should be true')
+assert.stringEquals('1', '1', 'String should equal 1')
+assert.arrayEquals([ethereum.Value.fromI32(1)], [ethereum.Value.fromI32(1)], 'Arrays should be equal')
+assert.tupleEquals(
+ changetype([ethereum.Value.fromI32(1)]),
+ changetype([ethereum.Value.fromI32(1)]),
+ 'Tuples should be equal',
+)
+assert.assertTrue(true, 'Should be true')
+assert.assertNull(null, 'Should be null')
+assert.assertNotNull('not null', 'Should be not null')
+assert.entityCount('Gravatar', 1, 'There should be 2 gravatars')
+assert.dataSourceCount('GraphTokenLockWallet', 1, 'GraphTokenLockWallet template should have one data source')
+assert.dataSourceExists(
+ 'GraphTokenLockWallet',
+ Address.zero().toHexString(),
+ 'GraphTokenLockWallet should have a data source for zero address',
+)
+```
+
## Write a Unit Test
Let's see how a simple unit test would look like using the Gravatar examples in the [Demo Subgraph](https://github.com/LimeChain/demo-subgraph/blob/main/src/gravity.ts).
@@ -845,7 +877,7 @@ Users can assert that an entity does not exist in the store. The function takes
assert.notInStore('Gravatar', '23')
```
-### Printing the whole store (for debug purposes)
+### Printing the whole store, or single entities from it (for debug purposes)
You can print the whole store to the console using this helper function:
@@ -855,6 +887,15 @@ import { logStore } from 'matchstick-as/assembly/store'
logStore()
```
+As of version 0.6.0, `logStore` no longer prints derived fields; instead, users can use the new `logEntity` function. Of course, `logEntity` can be used to print any entity, not just ones that have derived fields. `logEntity` takes the entity type, the entity id, and a `showRelated` flag to indicate whether users want to print the related derived entities.
+
+```typescript
+import { logEntity } from 'matchstick-as/assembly/store'
+
+logEntity('Gravatar', '23', true)
+```
+
### Expected failure
Users can have expected test failures, using the shouldFail flag on the test() functions:
@@ -908,26 +949,83 @@ Logging critical errors will stop the execution of the tests and blow everything
### Testing derived fields
-Testing derived fields is a feature which (as the example below shows) allows the user to set a field in a certain entity and have another entity be updated automatically if it derives one of its fields from the first entity. Important thing to note is that the first entity needs to be reloaded as the automatic update happens in the store in rust of which the AS code is agnostic.
+Testing derived fields is a feature which allows users to set a field on a certain entity and have another entity be updated automatically if it derives one of its fields from the first entity.
+
+Before version `0.6.0` it was possible to get the derived entities by accessing them as entity fields/properties, like so:
+
+```typescript
+let entity = ExampleEntity.load('id')
+let derivedEntity = entity.derived_entity
+```
+
+As of version `0.6.0`, this is done by using the `loadRelated` function of graph-node; the derived entities can be accessed the same way as in the handlers.
```typescript
test('Derived fields example test', () => {
- let mainAccount = new GraphAccount('12')
- mainAccount.save()
- let operatedAccount = new GraphAccount('1')
- operatedAccount.operators = ['12']
+ let mainAccount = GraphAccount.load('12')!
+
+ assert.assertNull(mainAccount.get('nameSignalTransactions'))
+ assert.assertNull(mainAccount.get('operatorOf'))
+
+ let operatedAccount = GraphAccount.load('1')!
+ operatedAccount.operators = [mainAccount.id]
operatedAccount.save()
- let nst = new NameSignalTransaction('1234')
- nst.signer = '12'
- nst.save()
+
+ mockNameSignalTransaction('1234', mainAccount.id)
+ mockNameSignalTransaction('2', mainAccount.id)
+
+ mainAccount = GraphAccount.load('12')!
assert.assertNull(mainAccount.get('nameSignalTransactions'))
assert.assertNull(mainAccount.get('operatorOf'))
+ const nameSignalTransactions = mainAccount.nameSignalTransactions.load()
+ const operatorsOfMainAccount = mainAccount.operatorOf.load()
+
+ assert.i32Equals(2, nameSignalTransactions.length)
+ assert.i32Equals(1, operatorsOfMainAccount.length)
+
+ assert.stringEquals('1', operatorsOfMainAccount[0].id)
+
+ mockNameSignalTransaction('2345', mainAccount.id)
+
+ let nst = NameSignalTransaction.load('1234')!
+ nst.signer = '11'
+ nst.save()
+
+ store.remove('NameSignalTransaction', '2')
+
mainAccount = GraphAccount.load('12')!
+ assert.i32Equals(1, mainAccount.nameSignalTransactions.load().length)
+})
+```
+
+### Testing `loadInBlock`
- assert.i32Equals(1, mainAccount.nameSignalTransactions.length)
- assert.stringEquals('1', mainAccount.operatorOf[0])
+As of version `0.6.0`, users can test `loadInBlock` by using `mockInBlockStore`, which allows mocking entities in the block cache.
+
+```typescript
+import { afterAll, assert, beforeAll, clearInBlockStore, describe, mockInBlockStore, test } from 'matchstick-as'
+import { Gravatar } from '../../generated/schema'
+
+describe('loadInBlock', () => {
+ beforeAll(() => {
+    // Create the entity to be mocked in the block cache
+    let gravatar = new Gravatar('gravatarId0')
+    mockInBlockStore('Gravatar', 'gravatarId0', gravatar)
+ })
+
+ afterAll(() => {
+ clearInBlockStore()
+ })
+
+ test('Can use entity.loadInBlock() to retrieve entity from cache store in the current block', () => {
+ let retrievedGravatar = Gravatar.loadInBlock('gravatarId0')
+ assert.stringEquals('gravatarId0', retrievedGravatar!.get('id')!.toString())
+ })
+
+ test("Returns null when calling entity.loadInBlock() if an entity doesn't exist in the current block", () => {
+ let retrievedGravatar = Gravatar.loadInBlock('IDoNotExist')
+ assert.assertNull(retrievedGravatar)
+ })
})
```
@@ -988,6 +1086,198 @@ test('Data source simple mocking example', () => {
Notice that dataSourceMock.resetValues() is called at the end. That's because the values are remembered when they are changed and need to be reset if you want to go back to the default values.
+### Testing dynamic data source creation
+
+As of version `0.6.0`, it is possible to test if a new data source has been created from a template. This feature supports both ethereum/contract and file/ipfs templates. There are four functions for this:
+
+- `assert.dataSourceCount(templateName, expectedCount)` can be used to assert the expected count of data sources from the specified template
+- `assert.dataSourceExists(templateName, address/ipfsHash)` asserts that a data source with the specified identifier (could be a contract address or IPFS file hash) from a specified template was created
+- `logDataSources(templateName)` prints all data sources from the specified template to the console for debugging purposes
+- `readFile(path)` reads a JSON file that represents an IPFS file and returns the content as Bytes
+
+#### Testing `ethereum/contract` templates
+
+```typescript
+test('ethereum/contract dataSource creation example', () => {
+ // Assert there are no dataSources created from GraphTokenLockWallet template
+ assert.dataSourceCount('GraphTokenLockWallet', 0)
+
+ // Create a new GraphTokenLockWallet datasource with address 0xA16081F360e3847006dB660bae1c6d1b2e17eC2A
+ GraphTokenLockWallet.create(Address.fromString('0xA16081F360e3847006dB660bae1c6d1b2e17eC2A'))
+
+ // Assert the dataSource has been created
+ assert.dataSourceCount('GraphTokenLockWallet', 1)
+
+ // Add a second dataSource with context
+ let context = new DataSourceContext()
+ context.set('contextVal', Value.fromI32(325))
+
+ GraphTokenLockWallet.createWithContext(Address.fromString('0xA16081F360e3847006dB660bae1c6d1b2e17eC2B'), context)
+
+ // Assert there are now 2 dataSources
+ assert.dataSourceCount('GraphTokenLockWallet', 2)
+
+ // Assert that a dataSource with address "0xA16081F360e3847006dB660bae1c6d1b2e17eC2B" was created
+ // Keep in mind that `Address` type is transformed to lower case when decoded, so you have to pass the address as all lower case when asserting if it exists
+ assert.dataSourceExists('GraphTokenLockWallet', '0xA16081F360e3847006dB660bae1c6d1b2e17eC2B'.toLowerCase())
+
+ logDataSources('GraphTokenLockWallet')
+})
+```
+
+##### Example `logDataSources` output
+
+```bash
+🛠 {
+ "0xa16081f360e3847006db660bae1c6d1b2e17ec2a": {
+ "kind": "ethereum/contract",
+ "name": "GraphTokenLockWallet",
+ "address": "0xa16081f360e3847006db660bae1c6d1b2e17ec2a",
+ "context": null
+ },
+ "0xa16081f360e3847006db660bae1c6d1b2e17ec2b": {
+ "kind": "ethereum/contract",
+ "name": "GraphTokenLockWallet",
+ "address": "0xa16081f360e3847006db660bae1c6d1b2e17ec2b",
+ "context": {
+ "contextVal": {
+ "type": "Int",
+ "data": 325
+ }
+ }
+ }
+}
+```
+
+#### Testing `file/ipfs` templates
+
+Similarly to contract dynamic data sources, users can test file data sources and their handlers.
+
+##### Example `subgraph.yaml`
+
+```yaml
+...
+templates:
+ - kind: file/ipfs
+ name: GraphTokenLockMetadata
+ network: mainnet
+ mapping:
+ kind: ethereum/events
+ apiVersion: 0.0.6
+ language: wasm/assemblyscript
+ file: ./src/token-lock-wallet.ts
+ handler: handleMetadata
+ entities:
+ - TokenLockMetadata
+ abis:
+ - name: GraphTokenLockWallet
+ file: ./abis/GraphTokenLockWallet.json
+```
+
+##### Example `schema.graphql`
+
+```graphql
+"""
+Token Lock Wallets which hold locked GRT
+"""
+type TokenLockMetadata @entity {
+ "The address of the token lock wallet"
+ id: ID!
+ "Start time of the release schedule"
+ startTime: BigInt!
+ "End time of the release schedule"
+ endTime: BigInt!
+ "Number of periods between start time and end time"
+ periods: BigInt!
+ "Time when the releases start"
+ releaseStartTime: BigInt!
+}
+```
+
+##### Example `metadata.json`
+
+```json
+{
+ "startTime": 1,
+ "endTime": 1,
+ "periods": 1,
+ "releaseStartTime": 1
+}
+```
+
+##### Example handler
+
+```typescript
+export function handleMetadata(content: Bytes): void {
+  // dataSource.stringParam() returns the File DataSource CID
+ // stringParam() will be mocked in the handler test
+ // for more info https://thegraph.com/docs/en/developing/creating-a-subgraph/#create-a-new-handler-to-process-files
+ let tokenMetadata = new TokenLockMetadata(dataSource.stringParam())
+ const value = json.fromBytes(content).toObject()
+
+ if (value) {
+ const startTime = value.get('startTime')
+ const endTime = value.get('endTime')
+ const periods = value.get('periods')
+ const releaseStartTime = value.get('releaseStartTime')
+
+ if (startTime && endTime && periods && releaseStartTime) {
+ tokenMetadata.startTime = startTime.toBigInt()
+ tokenMetadata.endTime = endTime.toBigInt()
+ tokenMetadata.periods = periods.toBigInt()
+ tokenMetadata.releaseStartTime = releaseStartTime.toBigInt()
+ }
+
+ tokenMetadata.save()
+ }
+}
+```
+
+##### Example test
+
+```typescript
+import { assert, test, dataSourceMock, readFile } from 'matchstick-as'
+import { Address, BigInt, Bytes, DataSourceContext, ipfs, json, store, Value } from '@graphprotocol/graph-ts'
+
+import { handleMetadata } from '../../src/token-lock-wallet'
+import { TokenLockMetadata } from '../../generated/schema'
+import { GraphTokenLockMetadata } from '../../generated/templates'
+
+test('file/ipfs dataSource creation example', () => {
+ // Generate the dataSource CID from the ipfsHash + ipfs path file
+ // For example QmaXzZhcYnsisuue5WRdQDH6FDvqkLQX1NckLqBYeYYEfm/example.json
+ const ipfshash = 'QmaXzZhcYnsisuue5WRdQDH6FDvqkLQX1NckLqBYeYYEfm'
+ const CID = `${ipfshash}/example.json`
+
+ // Create a new dataSource using the generated CID
+ GraphTokenLockMetadata.create(CID)
+
+ // Assert the dataSource has been created
+ assert.dataSourceCount('GraphTokenLockMetadata', 1)
+ assert.dataSourceExists('GraphTokenLockMetadata', CID)
+ logDataSources('GraphTokenLockMetadata')
+
+ // Now we have to mock the dataSource metadata and specifically dataSource.stringParam()
+  // dataSource.stringParam() actually uses the value of dataSource.address(), so we will mock the address using dataSourceMock from matchstick-as
+ // First we will reset the values and then use dataSourceMock.setAddress() to set the CID
+ dataSourceMock.resetValues()
+ dataSourceMock.setAddress(CID)
+
+ // Now we need to generate the Bytes to pass to the dataSource handler
+ // For this case we introduced a new function readFile, that reads a local json and returns the content as Bytes
+ const content = readFile(`path/to/metadata.json`)
+ handleMetadata(content)
+
+ // Now we will test if a TokenLockMetadata was created
+ const metadata = TokenLockMetadata.load(CID)
+
+ assert.bigIntEquals(metadata!.endTime, BigInt.fromI32(1))
+ assert.bigIntEquals(metadata!.periods, BigInt.fromI32(1))
+ assert.bigIntEquals(metadata!.releaseStartTime, BigInt.fromI32(1))
+ assert.bigIntEquals(metadata!.startTime, BigInt.fromI32(1))
+})
+```
+
## Test Coverage
Using **Matchstick**, subgraph developers are able to run a script that will calculate the test coverage of the written unit tests.
@@ -1081,15 +1371,15 @@ The log output includes the test run duration. Here's an example:
This means you have used `console.log` in your code, which is not supported by AssemblyScript. Please consider using the [Logging API](/developing/assemblyscript-api/#logging-api)
> ERROR TS2554: Expected ? arguments, but got ?.
->
+>
> return new ethereum.Block(defaultAddressBytes, defaultAddressBytes, defaultAddressBytes, defaultAddress, defaultAddressBytes, defaultAddressBytes, defaultAddressBytes, defaultBigInt, defaultBigInt, defaultBigInt, defaultBigInt, defaultBigInt, defaultBigInt, defaultBigInt, defaultBigInt);
->
+>
> in ~lib/matchstick-as/assembly/defaults.ts(18,12)
->
+>
> ERROR TS2554: Expected ? arguments, but got ?.
->
+>
> return new ethereum.Transaction(defaultAddressBytes, defaultBigInt, defaultAddress, defaultAddress, defaultBigInt, defaultBigInt, defaultBigInt, defaultAddressBytes, defaultBigInt);
->
+>
> in ~lib/matchstick-as/assembly/defaults.ts(24,12)
The mismatch in arguments is caused by mismatch in `graph-ts` and `matchstick-as`. The best way to fix issues like this one is to update everything to the latest released version.
diff --git a/website/pages/ar/firehose.mdx b/website/pages/ar/firehose.mdx
index 02f0d63c72db..0f0fdebbafd0 100644
--- a/website/pages/ar/firehose.mdx
+++ b/website/pages/ar/firehose.mdx
@@ -6,6 +6,8 @@ title: Firehose
Firehose is a new technology developed by StreamingFast working with The Graph Foundation. The product provides **previously unseen capabilities and speeds for indexing blockchain data** using a files-based and streaming-first approach.
+Firehose instrumentation is now merged into Go Ethereum/geth, with the adoption of the [Live Tracer in the v1.14.0 release](https://github.com/ethereum/go-ethereum/releases/tag/v1.14.0).
+
Firehose extracts, transforms and saves blockchain data in a highly performant file-based strategy. Blockchain developers can then access data extracted by Firehose through binary data streams. Firehose is intended to stand as a replacement for The Graph’s original blockchain data extraction layer.
## Firehose Documentation
diff --git a/website/pages/ar/glossary.mdx b/website/pages/ar/glossary.mdx
index fea39cbbe23b..a94cb5d4be55 100644
--- a/website/pages/ar/glossary.mdx
+++ b/website/pages/ar/glossary.mdx
@@ -24,7 +24,7 @@ title: قائمة المصطلحات
- **Indexer's Self Stake**: The amount of GRT that Indexers stake to participate in the decentralized network. The minimum is 100,000 GRT, and there is no upper limit.
-- **Upgrade Indexer**: A temporary Indexer designed to act as a fallback for subgraph queries not serviced by other Indexers on the network. It ensures a seamless transition for subgraphs upgrading from the hosted service by readily serving their queries upon being published. The upgrade Indexer is not competitive with other Indexers. It supports numerous blockchains that were previously only available on the hosted service.
+- **Upgrade Indexer**: A temporary Indexer designed to act as a fallback for subgraph queries not serviced by other Indexers on the network. It ensures a seamless transition for subgraphs upgrading from the hosted service to The Graph Network. The upgrade Indexer is not competitive with other Indexers. It supports numerous blockchains that were previously only available on the hosted service.
- **Delegators**: Network participants who own GRT and delegate their GRT to Indexers. This allows Indexers to increase their stake in subgraphs on the network. In return, Delegators receive a portion of the Indexing Rewards that Indexers receive for processing subgraphs.
@@ -38,11 +38,11 @@ title: قائمة المصطلحات
- **مطور السوبغراف**: هو المطور الذي يقوم ببناء ونشر السوبغراف على شبكة الغراف اللامركزية.
-- **وصف السبغراف (Subgraph Manifest) **: هو ملف JSON يصف مخطط GraphQL للسبغراف ومصادر البيانات والبيانات الوصفية الأخرى. [هنا](https://ipfs.io/ipfs/QmVQdzeGdPUiLiACeqXRpKAYpyj8Z1yfWLMUq7A7WundUf) مثال.
+- **Subgraph Manifest**: A YAML file that describes the subgraph's GraphQL schema, data sources, and other metadata. [Here](https://github.com/graphprotocol/example-subgraph/blob/master/subgraph.yaml) is an example.
- **الحقبة (Epoch)**: وحدة زمنية داخل الشبكة. حاليًا، تتألف الحقبة من 6,646 كتلة أو تقريبًا يوم واحد.
-- **Allocation**: An Indexer can allocate their total GRT stake (including Delegators' stake) towards subgraphs that have been published on The Graph's decentralized network. Allocations exist in one of four phases.
+- **Allocation**: An Indexer can allocate their total GRT stake (including Delegators' stake) towards subgraphs that have been published on The Graph's decentralized network. Allocations can have different statuses:
1. **Active**: An allocation is considered active when it is created on-chain. This is called opening an allocation, and indicates to the network that the Indexer is actively indexing and serving queries for a particular subgraph. Active allocations accrue indexing rewards proportional to the signal on the subgraph, and the amount of GRT allocated.
@@ -54,7 +54,7 @@ title: قائمة المصطلحات
- **Arbitrators**: Arbitrators are network participants appointed through a governance process. The role of the Arbitrator is to decide the outcome of indexing and query disputes. Their goal is to maximize the utility and reliability of The Graph Network.
-- **Slashing**: Indexers can have their self-staked GRT slashed for providing an incorrect POI or for serving inaccurate data. or for serving inaccurate data. The slashing percentage is a protocol parameter currently set to 2.5% of an Indexer's self stake. 50% of the slashed GRT goes to the Fisherman that disputed the inaccurate data or incorrect POI. The other 50% is burned.
+- **Slashing**: Indexers can have their self-staked GRT slashed for providing an incorrect POI or for serving inaccurate data. The slashing percentage is a protocol parameter currently set to 2.5% of an Indexer's self stake. 50% of the slashed GRT goes to the Fisherman that disputed the inaccurate data or incorrect POI. The other 50% is burned.
- **Indexing Rewards**: The rewards that Indexers receive for indexing subgraphs. Indexing rewards are distributed in GRT.
@@ -83,3 +83,5 @@ title: قائمة المصطلحات
- **_Updating_ a subgraph**: The process of releasing a new subgraph version with updates to the subgraph's manifest, schema, or mappings.
- **Migrating**: The process of curation shares moving from an old version of a subgraph to a new version of a subgraph (e.g. when v0.0.1 is updated to v0.0.2).
+
+- **Upgrade Window**: The countdown for hosted service users to upgrade their subgraph(s) to The Graph Network, beginning on April 11th and ending on June 12th, 2024.
diff --git a/website/pages/ar/graphcast.mdx b/website/pages/ar/graphcast.mdx
index 4965e86446ab..8fc00976ec28 100644
--- a/website/pages/ar/graphcast.mdx
+++ b/website/pages/ar/graphcast.mdx
@@ -10,7 +10,7 @@ Currently, the cost to broadcast information to other network participants is de
The Graphcast SDK (Software Development Kit) allows developers to build Radios, which are gossip-powered applications that Indexers can run to serve a given purpose. We also intend to create a few Radios (or provide support to other developers/teams that wish to build Radios) for the following use cases:
-- Real-time cross-checking of subgraph data integrity ([Subgraph Radio](https://docs.graphops.xyz/graphcast/radios/subgraph-radio)).
+- Real-time cross-checking of subgraph data integrity ([Subgraph Radio](https://docs.graphops.xyz/graphcast/radios/subgraph-radio/intro)).
- Conducting auctions and coordination for warp syncing subgraphs, substreams, and Firehose data from other Indexers.
- Self-reporting on active query analytics, including subgraph request volumes, fee volumes, etc.
- Self-reporting on indexing analytics, including subgraph indexing time, handler gas costs, indexing errors encountered, etc.
diff --git a/website/pages/ar/index.json b/website/pages/ar/index.json
index 358d7708f4e8..005c09a0cf30 100644
--- a/website/pages/ar/index.json
+++ b/website/pages/ar/index.json
@@ -69,8 +69,7 @@
},
"supportedNetworks": {
"title": "الشبكات المدعومة",
- "description": "The Graph supports the following networks on The Graph Network and the hosted service.",
- "graphNetworkAndHostedService": "The Graph Network & hosted service",
- "hostedService": "hosted service"
+ "description": "The Graph supports the following networks.",
+ "footer": "For more details, see the {0} page."
}
}
diff --git a/website/pages/ar/managing/deprecating-a-subgraph.mdx b/website/pages/ar/managing/deprecating-a-subgraph.mdx
index 621e09029479..2d449d11f2b4 100644
--- a/website/pages/ar/managing/deprecating-a-subgraph.mdx
+++ b/website/pages/ar/managing/deprecating-a-subgraph.mdx
@@ -2,11 +2,11 @@
title: إهمال Subgraph
---
-إن كنت ترغب في إهمال الـ subgraph الخاص بك في The Graph Explorer. فأنت في المكان المناسب! اتبع الخطوات أدناه:
+So you'd like to deprecate your subgraph on Graph Explorer. You've come to the right place! Follow the steps below:
-1. قم بزيارة عنوان العقد [ هنا ](https://etherscan.io/address/0xadca0dd4729c8ba3acf3e99f3a9f471ef37b6825#writeProxyContract)
+1. Visit the contract address [here](https://etherscan.io/address/0xadca0dd4729c8ba3acf3e99f3a9f471ef37b6825#writeProxyContract) for Mainnet subgraphs, or [here](https://arbiscan.io/address/0xec9A7fb6CbC2E41926127929c2dcE6e9c5D33Bec#writeProxyContract) for Arbitrum One subgraphs.
2. Call `deprecateSubgraph` with your `SubgraphID` as your argument.
-3. Voilà! Your subgraph will no longer show up on searches on The Graph Explorer.
+3. Voilà! Your subgraph will no longer show up on searches on Graph Explorer.
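+
+Step 2 can also be done from a script instead of the Etherscan UI. The sketch below is illustrative only: it uses ethers.js (any web3 library works), the Mainnet proxy address from step 1, a public RPC URL, and a placeholder subgraph ID; the `uint256` ID parameter is assumed from the write-proxy interface linked above, and the transaction must be sent by the wallet that owns the subgraph.
+
+```ts
+import { ethers } from "ethers";
+
+// Mainnet proxy from step 1; use the Arbitrum One address instead for Arbitrum subgraphs.
+const GNS_PROXY = "0xadca0dd4729c8ba3acf3e99f3a9f471ef37b6825";
+
+// Minimal ABI fragment for the call in step 2 (assumed uint256 subgraph ID).
+const abi = ["function deprecateSubgraph(uint256 _subgraphID)"];
+
+async function main() {
+  const provider = new ethers.JsonRpcProvider("https://eth.llamarpc.com"); // any Mainnet RPC
+  const owner = new ethers.Wallet(process.env.PRIVATE_KEY!, provider); // must own the subgraph
+  const gns = new ethers.Contract(GNS_PROXY, abi, owner);
+
+  const tx = await gns.deprecateSubgraph(123n); // replace 123n with your SubgraphID
+  console.log("Waiting for confirmation:", tx.hash);
+  await tx.wait();
+}
+
+main().catch(console.error);
+```
+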
Please note the following:
diff --git a/website/pages/ar/managing/transferring-subgraph-ownership.mdx b/website/pages/ar/managing/transferring-subgraph-ownership.mdx
index 1ca1c621a9c9..83b5738d7734 100644
--- a/website/pages/ar/managing/transferring-subgraph-ownership.mdx
+++ b/website/pages/ar/managing/transferring-subgraph-ownership.mdx
@@ -4,7 +4,7 @@ title: Transferring Subgraph Ownership
The Graph supports the transfer of the ownership of a subgraph.
-When you deploy a subgraph to mainnet, an NFT will be minted to the address that deployed the subgraph. The NFT is based on a standard ERC721, so it can be easily transferred to different accounts.
+When you publish a subgraph to the decentralized network, an NFT will be minted to the address that published the subgraph. The NFT is based on a standard ERC721, so it can be easily transferred to different accounts.
Whoever owns the NFT controls the subgraph. If the owner decides to sell the NFT, or transfer it, they will no longer be able to make edits or updates to that subgraph on the network.
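+
+Because the ownership NFT follows the ERC721 standard, a transfer is just a regular ERC721 transfer. The sketch below is illustrative: it uses ethers.js, and the NFT contract address, token ID, and recipient are placeholders you would look up for your own subgraph (for example via Graph Explorer or a block explorer).
+
+```ts
+import { ethers } from "ethers";
+
+// Standard ERC721 transfer fragment; the subgraph ownership NFT implements ERC721.
+const erc721Abi = ["function safeTransferFrom(address from, address to, uint256 tokenId)"];
+
+async function transferSubgraphNft() {
+  const provider = new ethers.JsonRpcProvider(process.env.RPC_URL!);
+  const owner = new ethers.Wallet(process.env.PRIVATE_KEY!, provider); // current NFT owner
+
+  // Placeholders: the subgraph NFT contract, your subgraph's token ID, and the new owner.
+  const SUBGRAPH_NFT = "0x0000000000000000000000000000000000000000";
+  const TOKEN_ID = 1n;
+  const NEW_OWNER = "0x0000000000000000000000000000000000000000";
+
+  const nft = new ethers.Contract(SUBGRAPH_NFT, erc721Abi, owner);
+  const tx = await nft.safeTransferFrom(owner.address, NEW_OWNER, TOKEN_ID);
+  await tx.wait();
+  console.log("Subgraph NFT transferred in", tx.hash);
+}
+
+transferSubgraphNft().catch(console.error);
+```
+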
diff --git a/website/pages/ar/network/benefits.mdx b/website/pages/ar/network/benefits.mdx
index a54a14a768c9..c0ddbdb9be2d 100644
--- a/website/pages/ar/network/benefits.mdx
+++ b/website/pages/ar/network/benefits.mdx
@@ -11,7 +11,7 @@ Here is an analysis:
## Why You Should Use The Graph Network
-- 60-98% lower monthly cost
+- Significantly lower monthly costs
- $0 infrastructure setup costs
- Superior uptime
- Access to hundreds of independent Indexers around the world
@@ -21,68 +21,64 @@ Here is an analysis:
### Lower & more Flexible Cost Structure
-No contracts. No monthly fees. Only pay for the queries you use—with an average cost-per-query of $0.0002. Queries are priced in USD and paid in GRT.
-
-Query costs may vary; the quoted cost is the average at time of publication (December 2022).
-
-## Low Volume User (less than 30,000 queries per month)
-
-| Cost Comparison | Self Hosted | Graph Network |
-| :-: | :-: | :-: |
-| Monthly server cost\* | $350 per month | $0 |
-| Query costs | $0+ | ~$15 per month |
-| Engineering time† | $400 per month | None, built into the network with globally distributed Indexers |
-| Queries per month | Limited to infra capabilities | 30,000 (autoscaling) |
-| Cost per query | $0 | $0.0005‡ |
-| البنية الأساسية | Centralized | Decentralized |
-| Geographic redundancy | $750+ per additional node | Included |
-| Uptime | Varies | 99.9%+ |
-| Total Monthly Costs | $750+ | ~$15 |
-
-## Medium Volume User (3,000,000+ queries per month)
-
-| Cost Comparison | Self Hosted | Graph Network |
-| :-: | :-: | :-: |
-| Monthly server cost\* | $350 per month | $0 |
-| Query costs | $500 per month | $750 per month |
-| Engineering time† | $800 per month | None, built into the network with globally distributed Indexers |
-| Queries per month | Limited to infra capabilities | 3,000,000+ |
-| Cost per query | $0 | $0.00025‡ |
-| البنية الأساسية | Centralized | Decentralized |
-| Engineering expense | $200 per hour | Included |
-| Geographic redundancy | $1,200 in total costs per additional node | Included |
-| Uptime | Varies | 99.9%+ |
-| Total Monthly Costs | $1,650+ | $750 |
-
-## High Volume User (30,000,000+ queries per month)
-
-| Cost Comparison | Self Hosted | Graph Network |
-| :-: | :-: | :-: |
-| Monthly server cost\* | $1100 per month, per node | $0 |
-| Query costs | $4000 | $4,500 per month |
-| Number of nodes needed | 10 | Not applicable |
-| Engineering time† | $6,000 or more per month | None, built into the network with globally distributed Indexers |
-| Queries per month | Limited to infra capabilities | 30,000,000+ |
-| Cost per query | $0 | $0.00015‡ |
-| البنية الأساسية | Centralized | Decentralized |
-| Geographic redundancy | $1,200 in total costs per additional node | Included |
-| Uptime | Varies | 99.9%+ |
-| Total Monthly Costs | $11,000+ | $4,500 |
+No contracts. No monthly fees. Only pay for the queries you use, at an average cost of $40 per million queries (~$0.00004 per query). Queries are priced in USD and paid in GRT or by credit card.
+
+Query costs may vary; the quoted cost is the average at time of publication (March 2024).
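+
+As a rough back-of-the-envelope check (using the average figure above; actual per-query prices vary by Indexer and market conditions), you can estimate a monthly bill from expected query volume. The numbers in the comments line up with the tables below:
+
+```ts
+// Rough monthly cost estimate at the quoted average of $40 per 1M queries.
+const AVG_USD_PER_MILLION = 40;
+
+function estimateMonthlyCostUSD(queriesPerMonth: number): number {
+  return (queriesPerMonth / 1_000_000) * AVG_USD_PER_MILLION;
+}
+
+console.log(estimateMonthlyCostUSD(100_000)); // 4 -> covered by the 100k monthly Free Plan queries
+console.log(estimateMonthlyCostUSD(3_000_000)); // 120 -> matches the medium-volume table
+console.log(estimateMonthlyCostUSD(30_000_000)); // 1200 -> matches the high-volume table
+```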
+
+## Low Volume User (less than 100,000 queries per month)
+
+| Cost Comparison | Self Hosted | The Graph Network |
+|:----------------------------:|:---------------------------------------:|:---------------------------------------------------------------:|
+| Monthly server cost\* | $350 per month | $0 |
+| Query costs | $0+ | $0 per month |
+| Engineering time† | $400 per month | None, built into the network with globally distributed Indexers |
+| Queries per month | Limited to infra capabilities | 100,000 (Free Plan) |
+| Cost per query | $0 | $0‡ |
+| Infrastructure | Centralized | Decentralized |
+| Geographic redundancy | $750+ per additional node | Included |
+| Uptime | Varies | 99.9%+ |
+| Total Monthly Costs | $750+ | $0 |
+
+## Medium Volume User (~3M queries per month)
+
+| Cost Comparison | Self Hosted | The Graph Network |
+|:----------------------------:|:------------------------------------------:|:---------------------------------------------------------------:|
+| Monthly server cost\* | $350 per month | $0 |
+| Query costs | $500 per month | $120 per month |
+| Engineering time† | $800 per month | None, built into the network with globally distributed Indexers |
+| Queries per month | Limited to infra capabilities | ~3,000,000 |
+| Cost per query | $0 | $0.00004 |
+| Infrastructure | Centralized | Decentralized |
+| Engineering expense | $200 per hour | Included |
+| Geographic redundancy | $1,200 in total costs per additional node | Included |
+| Uptime | Varies | 99.9%+ |
+| Total Monthly Costs | $1,650+ | $120 |
+
+## High Volume User (~30M queries per month)
+
+| Cost Comparison | Self Hosted | The Graph Network |
+|:----------------------------:|:-------------------------------------------:|:---------------------------------------------------------------:|
+| Monthly server cost\* | $1100 per month, per node | $0 |
+| Query costs | $4000 | $1,200 per month |
+| Number of nodes needed | 10 | Not applicable |
+| Engineering time† | $6,000 or more per month | None, built into the network with globally distributed Indexers |
+| Queries per month | Limited to infra capabilities | ~30,000,000 |
+| Cost per query | $0 | $0.00004 |
+| Infrastructure | Centralized | Decentralized |
+| Geographic redundancy | $1,200 in total costs per additional node | Included |
+| Uptime | Varies | 99.9%+ |
+| Total Monthly Costs | $11,000+ | $1,200 |
\*including costs for backup: $50-$100 per month
†Engineering time based on $200 per hour assumption
-‡using the max query budget function in the budget billing tab, while maintaining high quality of service
+‡Reflects cost for data consumer. Query fees are still paid to Indexers for Free Plan queries.
-Estimated costs are only for Ethereum Mainnet subgraphs — costs are even higher when self hosting a `graph-node` on other networks.
+Estimated costs are only for Ethereum Mainnet subgraphs — costs are even higher when self hosting a `graph-node` on other networks. Some users may need to update their subgraph to a new version. Due to Ethereum gas fees, an update costs ~$50 at time of writing. Note that gas fees on [Arbitrum](/arbitrum/arbitrum-faq) are substantially lower than Ethereum mainnet.
Curating signal on a subgraph is an optional one-time, net-zero cost (e.g., $1k in signal can be curated on a subgraph, and later withdrawn—with potential to earn returns in the process).
-Some users may need to update their subgraph to a new version. Due to Ethereum gas fees, an update costs ~$50 at time of writing.
-
-Note that gas fees on [Arbitrum](/arbitrum/arbitrum-faq) are substantially lower than Ethereum mainnet.
-
## No Setup Costs & Greater Operational Efficiency
Zero setup fees. Get started immediately with no setup or overhead costs. No hardware requirements. No outages due to centralized infrastructure, and more time to concentrate on your core product . No need for backup servers, troubleshooting, or expensive engineering resources.
diff --git a/website/pages/ar/network/contracts.mdx b/website/pages/ar/network/contracts.mdx
new file mode 100644
index 000000000000..6abd80577ced
--- /dev/null
+++ b/website/pages/ar/network/contracts.mdx
@@ -0,0 +1,29 @@
+---
+title: Protocol Contracts
+---
+
+import { ProtocolContractsTable } from '@/src/contracts'
+
+Below are the deployed contracts which power The Graph Network. Visit the official [contracts repository](https://github.com/graphprotocol/contracts) to learn more.
+
+## Arbitrum
+
+This is the principal deployment of The Graph Network.
+
+
+
+## Mainnet
+
+This was the original deployment of The Graph Network. [Learn more](/arbitrum/arbitrum-faq) about The Graph's scaling with Arbitrum.
+
+
+
+## Arbitrum Sepolia
+
+This is the primary testnet for The Graph Network. Testnet is predominantly used by core developers and ecosystem participants for testing purposes. There are no guarantees of service or availability on The Graph's testnets.
+
+
+
+## Sepolia
+
+
diff --git a/website/pages/ar/network/curating.mdx b/website/pages/ar/network/curating.mdx
index 4793be612934..09b06f9e3476 100644
--- a/website/pages/ar/network/curating.mdx
+++ b/website/pages/ar/network/curating.mdx
@@ -2,38 +2,31 @@
title: Curating
---
-Curators are critical to the Graph decentralized economy. They use their knowledge of the web3 ecosystem to assess and signal on the subgraphs that should be indexed by The Graph Network. Through the Explorer, curators are able to view network data to make signaling decisions. The Graph Network rewards curators who signal on good quality subgraphs with a share of the query fees that subgraphs generate. Curators are economically incentivized to signal early. These cues from curators are important for Indexers, who can then process or index the data from these signaled subgraphs.
+Curators are critical to The Graph's decentralized economy. They use their knowledge of the web3 ecosystem to assess and signal on the subgraphs that should be indexed by The Graph Network. Through Graph Explorer, Curators view network data to make signaling decisions. In turn, The Graph Network rewards Curators who signal on good quality subgraphs with a share of the query fees those subgraphs generate. The amount of GRT signaled is one of the key considerations for indexers when determining which subgraphs to index.
-When signaling, curators can decide to signal on a specific version of the subgraph or to signal using auto-migrate. When signaling using auto-migrate, a Curator’s shares will always be migrated to the latest version published by the developer. If you decide to signal on a specific version instead, shares will always stay on this specific version.
+## What Does Signaling Mean for The Graph Network?
-تذكر أن عملية التنسيق محفوفة بالمخاطر. نتمنى أن تبذل قصارى جهدك وذلك لتنسق ال Subgraphs الموثوقة. إنشاء ال subgraphs لا يحتاج إلى ترخيص، لذلك يمكن للأشخاص إنشاء subgraphs وتسميتها بأي اسم يرغبون فيه. لمزيد من الإرشادات حول مخاطر التنسيق ، تحقق من[The Graph Academy's Curation Guide.](https://thegraph.academy/curators/)
+Before consumers can query a subgraph, it must be indexed. This is where curation comes into play. In order for Indexers to earn substantial query fees on quality subgraphs, they need to know what subgraphs to index. When Curators signal on a subgraph, it lets Indexers know that a subgraph is in demand and of sufficient quality that it should be indexed.
-## منحنى الترابط 101
+Indexers can trust the signal from a Curator because upon signaling, Curators mint a curation share for the subgraph, entitling Curators to a portion of future query fees that the subgraph drives.
-First, we take a step back. Each subgraph has a bonding curve on which curation shares are minted when a user adds signal **into** the curve. Each subgraph’s bonding curve is unique. The bonding curves are architected so that the price to mint a curation share on a subgraph increases linearly, over the number of shares minted.
+Curators make The Graph Network efficient, and [signaling](#how-to-signal) is the process Curators use to let Indexers know that a subgraph is good to index. When a Curator signals, GRT is added to the subgraph's bonding curve.
-![سعر السهم](/img/price-per-share.png)
+Curator signals are represented as ERC20 tokens called Graph Curation Shares (GCS). Those that want to earn more query fees should signal their GRT to subgraphs that they predict will generate a strong flow of fees to the network. Curators cannot be slashed for bad behavior, but there is a deposit tax on Curators to disincentivize poor decision-making that could harm the integrity of the network. Curators will also earn fewer query fees if they curate on a low-quality subgraph because there will be fewer queries to process or fewer Indexers to process them.
-نتيجة لذلك ، يرتفع السعر بثبات ، مما يعني أنه سيكون شراء السهم أكثر تكلفة مع مرور الوقت. فيما يلي مثال لما نعنيه ، راجع منحنى الترابط أدناه:
+While the [Sunrise Upgrade Indexer](/sunrise/#what-is-the-upgrade-indexer) ensures the indexing of all subgraphs, signaling GRT on a particular subgraph will draw more Indexers to it. This incentivization of additional Indexers through curation aims to enhance the quality of service for queries by reducing latency and enhancing network availability.
-![منحنى الترابط Bonding curve](/img/bonding-curve.png)
+When signaling, Curators can decide to signal on a specific version of the subgraph or to signal using auto-migrate. If they signal using auto-migrate, a curator’s shares will always be updated to the latest version published by the developer. If they decide to signal on a specific version instead, shares will always stay on this specific version.
-ضع في اعتبارك أن لدينا منسقان يشتركان في Subgraph واحد:
+To assist teams that are transitioning subgraphs from the hosted service to the Graph Network, curation support is now available. If you require assistance with curation to enhance the quality of service, please send a request to the Edge & Node team at support@thegraph.zendesk.com and specify the subgraphs you need assistance with.
-- المنسق (أ) هو أول من أشار إلى ال Subgraphs. من خلال إضافة 120000 GRT إلى المنحنى ، سيكون من الممكن صك 2000 سهم.
-- تظهر إشارة المنسق "ب" على ال Subgraph لاحقا. للحصول على نفس كمية حصص المنسق "أ" ، يجب إضافة 360000 GRT للمنحنى.
-- لأن كلا من المنسقين يحتفظان بنصف إجمالي اسهم التنسيق ، فإنهم سيحصلان على قدر متساوي من عوائد المنسقين.
-- إذا قام أي من المنسقين بحرق 2000 من حصص التنسيق الخاصة بهم ،فإنهم سيحصلون على 360000 GRT.
-- سيحصل المنسق المتبقي على جميع عوائد المنسق لهذ ال subgraphs. وإذا قام بحرق حصته للحصول علىGRT ، فإنه سيحصل على 120.000 GRT.
-- **TLDR:** The GRT valuation of curation shares is determined by the bonding curve and can be volatile. There is potential to incur big losses. Signaling early means you put in less GRT for each share. By extension, this means you earn more curator royalties per GRT than later curators for the same subgraph.
+Indexers can find subgraphs to index based on curation signals they see in Graph Explorer (screenshot below).
-In general, a bonding curve is a mathematical curve that defines the relationship between token supply and asset price. In the specific case of subgraph curation, **the price of each subgraph share increases with each token invested** and the **price of each share decreases with each token sold.**
-
-In the case of The Graph, [Bancor’s implementation of a bonding curve formula](https://drive.google.com/file/d/0B3HPNP-GDn7aRkVaV3dkVl9NS2M/view?resourcekey=0-mbIgrdd0B9H8dPNRaeB_TA) is leveraged.
+![Subgraphs in Graph Explorer](/img/explorer-subgraphs.png)
## كيفية الإشارة
-Now that we’ve covered the basics about how the bonding curve works, this is how you will proceed to signal on a subgraph. Within the Curator tab on the Graph Explorer, curators will be able to signal and unsignal on certain subgraphs based on network stats. For a step-by-step overview of how to do this in the Explorer, [click here.](/network/explorer)
+Within the Curator tab in Graph Explorer, curators will be able to signal and unsignal on certain subgraphs based on network stats. For a step-by-step overview of how to do this in Graph Explorer, [click here.](/network/explorer)
يمكن للمنسق الإشارة إلى إصدار معين ل subgraph ، أو يمكنه اختيار أن يتم ترحيل migrate إشاراتهم تلقائيا إلى أحدث إصدار لهذا ال subgraph. كلاهما استراتيجيات سليمة ولها إيجابيات وسلبيات.
@@ -41,47 +34,43 @@ Signaling on a specific version is especially useful when one subgraph is used b
Having your signal automatically migrate to the newest production build can be valuable to ensure you keep accruing query fees. Every time you curate, a 1% curation tax is incurred. You will also pay a 0.5% curation tax on every migration. Subgraph developers are discouraged from frequently publishing new versions - they have to pay a 0.5% curation tax on all auto-migrated curation shares.
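+
+To make those fee mechanics concrete, here is a small sketch of the two taxes mentioned above (1% when you signal, 0.5% on auto-migrated shares); the 1,000 GRT deposit is just an example:
+
+```ts
+const CURATION_TAX = 0.01; // 1% burned when signaling
+const MIGRATION_TAX = 0.005; // 0.5% paid on auto-migrated curation shares
+
+// Example: signaling 1,000 GRT on a subgraph.
+const deposit = 1_000;
+const burnedOnSignal = deposit * CURATION_TAX; // 10 GRT burned
+const addedToCurve = deposit - burnedOnSignal; // 990 GRT deposited into the curve
+
+// If the developer publishes a new version and your shares auto-migrate,
+// roughly 0.5% of the migrated signal is paid as tax (illustrative).
+const migrationCost = addedToCurve * MIGRATION_TAX; // ~4.95 GRT
+
+console.log({ burnedOnSignal, addedToCurve, migrationCost });
+```
+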
-> **Note**: The first address to signal a particular subgraph is considered the first curator and will have to do much more gas-intensive work than the rest of the following curators because the first curator initializes the curation share tokens, initializes the bonding curve, and also transfers tokens into the Graph proxy.
-
-## ماذا تعني الإشارة لشبكة The Graph؟
-
-لكي يتمكن المستهلك من الاستعلام عن subgraph ، يجب أولا فهرسة ال subgraph. الفهرسة هي عملية يتم فيها النظر إلى الملفات، والبيانات، والبيانات الوصفية وفهرستها بحيث يمكن العثور على النتائج بشكل أسرع. يجب تنظيم بيانات ال subgraph لتكون قابلة للبحث فيها.
+> **Note**: The first address to signal a particular subgraph is considered the first curator and will have to do much more gas-intensive work than the rest of the following curators because the first curator initializes the curation share tokens, initializes the bonding curve (even on Arbitrum), and also transfers tokens into the Graph proxy.
-وبالتالي ، إذا قام المفهرسون بتخمين ال subgraphs التي يجب عليهم فهرستها ، فستكون هناك فرصة منخفضة في أن يكسبوا رسوم استعلام جيدة لأنه لن يكون لديهم طريقة للتحقق من ال subgraphs ذات الجودة العالية. أدخل التنسيق.
+## Withdrawing your GRT
-Curators make The Graph network efficient and signaling is the process that curators use to let Indexers know that a subgraph is good to index, where GRT is added to a bonding curve for a subgraph. Indexers can inherently trust the signal from a curator because upon signaling, curators mint a curation share for the subgraph, entitling them to a portion of future query fees that the subgraph drives. Curator signal is represented as ERC20 tokens called Graph Curation Shares (GCS). Curators that want to earn more query fees should signal their GRT to subgraphs that they predict will generate a strong flow of fees to the network. Curators cannot be slashed for bad behavior, but there is a deposit tax on Curators to disincentivize poor decision-making that could harm the integrity of the network. Curators also earn fewer query fees if they choose to curate on a low-quality Subgraph since there will be fewer queries to process or fewer Indexers to process those queries. See the diagram below!
+Curators have the option to withdraw their signaled GRT at any time.
-![مخطط التأشير مخطط الإشارات](/img/curator-signaling.png)
+Unlike the process of delegating, if you decide to withdraw your signaled GRT you will not have to wait for a cooldown period and will receive the entire amount (minus the 1% curation tax).
-يمكن للمفهرسين العثور على subgraphs لفهرستها وذلك بناء على إشارات التنسيق التي يرونها في The Graph Explorer (لقطة الشاشة أدناه).
+Once a curator withdraws their signal, indexers may choose to keep indexing the subgraph, even if there's currently no active GRT signaled.
-![مستكشف الفرعيةرسم بياني](/img/explorer-subgraphs.png)
+However, it is recommended that curators leave their signaled GRT in place not only to receive a portion of the query fees, but also to ensure reliability and uptime of the subgraph.
## المخاطر
1. سوق الاستعلام يعتبر حديثا في The Graph وهناك خطر من أن يكون٪ APY الخاص بك أقل مما تتوقع بسبب ديناميكيات السوق الناشئة.
-2. Curation Fee - when a Curator signals GRT on a subgraph, they incur a 1% curation tax. This fee is burned and the rest is deposited into the reserve supply of the bonding curve.
-3. When curators burn their shares to withdraw GRT, the GRT valuation of the remaining shares will be reduced. Be aware that in some cases, curators may decide to burn their shares **all at once**. This situation may be common if a dApp developer stops versioning/improving and querying their subgraph or if a subgraph fails. As a result, remaining curators might only be able to withdraw a fraction of their initial GRT. For a network role with a lower risk profile, see [Delegators](/network/delegating).
+2. Curation Fee - when a curator signals GRT on a subgraph, they incur a 1% curation tax. This fee is burned and the rest is deposited into the reserve supply of the bonding curve.
+3. (Ethereum only) When curators burn their shares to withdraw GRT, the GRT valuation of the remaining shares will be reduced. Be aware that in some cases, curators may decide to burn their shares **all at once**. This situation may be common if a dApp developer stops versioning/improving and querying their subgraph or if a subgraph fails. As a result, remaining curators might only be able to withdraw a fraction of their initial GRT. For a network role with a lower risk profile, see [Delegators](/network/delegating).
4. يمكن أن يفشل ال subgraph بسبب خطأ. ال subgraph الفاشل لا يمكنه إنشاء رسوم استعلام. نتيجة لذلك ، سيتعين عليك الانتظار حتى يصلح المطور الخطأ وينشر إصدارا جديدا.
- إذا كنت مشتركا في أحدث إصدار من subgraph ، فسيتم ترحيل migrate أسهمك تلقائيا إلى هذا الإصدار الجديد. هذا سيتحمل ضريبة تنسيق بنسبة 0.5٪.
- - If you have signaled on a specific subgraph version and it fails, you will have to manually burn your curation shares. Note that you may receive more or less GRT than you initially deposited into the curation curve, which is a risk associated with being a curator. You can then signal on the new subgraph version, thus incurring a 1% curation tax.
+ - If you have signaled on a specific subgraph version and it fails, you will have to manually burn your curation shares. You can then signal on the new subgraph version, thus incurring a 1% curation tax.
## الأسئلة الشائعة حول التنسيق
### 1. ما هي النسبة المئوية لرسوم الاستعلام التي يكسبها المنسقون؟
-By signalling on a subgraph, you will earn a share of all the query fees that this subgraph generates. 10% of all query fees goes to the Curators pro-rata to their curation shares. This 10% is subject to governance.
+By signalling on a subgraph, you will earn a share of all the query fees that the subgraph generates. 10% of all query fees go to the Curators pro-rata to their curation shares. This 10% is subject to governance.
### 2. كيف يمكنني تقرير ما إذا كان ال subgraph عالي الجودة لكي أقوم بالإشارة إليه؟
Finding high-quality subgraphs is a complex task, but it can be approached in many different ways. As a Curator, you want to look for trustworthy subgraphs that are driving query volume. A trustworthy subgraph may be valuable if it is complete, accurate, and supports a dApp’s data needs. A poorly architected subgraph might need to be revised or re-published, and can also end up failing. It is critical for Curators to review a subgraph’s architecture or code in order to assess if a subgraph is valuable. As a result:
- يمكن للمنسقين استخدام فهمهم للشبكة لمحاولة التنبؤ كيف لل subgraph أن يولد حجم استعلام أعلى أو أقل في المستقبل
-- Curators should also understand the metrics that are available through The Graph Explorer. Metrics like past query volume and who the subgraph developer is can help determine whether or not a subgraph is worth signalling on.
+- Curators should also understand the metrics that are available through Graph Explorer. Metrics like past query volume and who the subgraph developer is can help determine whether or not a subgraph is worth signalling on.
### 3. What’s the cost of updating a subgraph?
-Migrating your curation shares to a new subgraph version incurs a curation tax of 1%. Curators can choose to subscribe to the newest version of a subgraph. When curation shares get auto-migrated to a new version, Curators will also pay half curation tax, ie. 0.5%, because updating subgraphs is an on-chain action that costs gas.
+Migrating your curation shares to a new subgraph version incurs a curation tax of 1%. Curators can choose to subscribe to the newest version of a subgraph. When curator shares get auto-migrated to a new version, Curators will also pay half curation tax, ie. 0.5%, because upgrading subgraphs is an on-chain action that costs gas.
### 4. How often can I update my subgraph?
@@ -89,7 +78,49 @@ It’s suggested that you don’t update your subgraphs too frequently. See the
### 5. هل يمكنني بيع أسهم التنسيق الخاصة بي؟
-Curation shares cannot be "bought" or "sold" like other ERC20 tokens that you may be familiar with. They can only be minted (created) or burned (destroyed) along the bonding curve for a particular subgraph. The amount of GRT needed to mint a new signal, and the amount of GRT you receive when you burn your existing signal are determined by that bonding curve. As a Curator, you need to know that when you burn your curation shares to withdraw GRT, you can end up with more or less GRT than you initially deposited.
+Curation shares cannot be "bought" or "sold" like other ERC20 tokens that you may be familiar with. They can only be minted (created) or burned (destroyed) along the bonding curve for a particular subgraph. The amount of GRT needed to mint a new signal, and the amount of GRT you receive when you burn your existing signal are determined by that bonding curve:
+
+- As a Curator on Ethereum, you need to know that when you burn your curation shares to withdraw GRT, you can end up with more or less GRT than you initially deposited.
+- As a Curator on Arbitrum, you are guaranteed to get back the GRT you initially deposited (minus the tax).
+
+### 6. Am I eligible for a curation grant?
+
+Curation grants are determined individually on a case-by-case basis. If you need assistance with curation, please send a request to support@thegraph.zendesk.com.
+
+## Curating on Ethereum vs Arbitrum
+
+The behavior of the curation mechanism differs depending on the protocol chain deployment, notably in how the price of a subgraph share is calculated.
+
+The Graph Network's original deployment on Ethereum uses bonding curves to determine what the price of shares is: **the price of each subgraph share increases with each token invested** and **the price of each share decreases with each token sold.** This means that curating puts your principal at risk, since it's not guaranteed you can sell your shares and get back your original investment.
+
+On Arbitrum, curating subgraphs becomes significantly simpler. The bonding curves are "flattened"; their effect is nullified, meaning no Curator will be able to realize gains at the expense of others. This allows Curators to signal or unsignal on subgraphs at any given time, at a consistent cost, and with very limited risk.
+
+If you are interested in curating on Ethereum and want to understand the details of bonding curves and their effects, see [Bonding Curve 101](#bonding-curve-101) below. Please do your due diligence to make sure you curate on subgraphs you trust. Creating a subgraph is permissionless, so people can create subgraphs and call them any name they'd like. For more guidance on curation risks, check out [The Graph Academy's Curation Guide.](https://thegraph.academy/curators/)
+
+## Bonding Curve 101
+
+> **Note**: this section only applies to curation on Ethereum since bonding curves are flat and have no effect on Arbitrum.
+
+Each subgraph has a bonding curve on which curation shares are minted when a user adds signal **into** the curve. Each subgraph’s bonding curve is unique. The bonding curves are architected so that the price to mint a curation share on a subgraph increases linearly, over the number of shares minted.
+
+![Price per share](/img/price-per-share.png)
+
+As a result, the price rises steadily, which means it becomes more expensive to purchase a share over time. For an example of what we mean, see the bonding curve below:
+
+![Bonding curve](/img/bonding-curve.png)
+
+Consider two Curators who signal on the same subgraph:
+
+- Curator A is the first to signal on the subgraph. By adding 120,000 GRT into the curve, they are able to mint 2,000 shares.
+- Curator B’s signal is on the subgraph later at some point. To receive the same amount of shares as Curator A, they would have to add 360,000 GRT into the curve.
+- Because both Curators hold half of the total curation shares, they would receive an equal amount of curator royalties.
+- Now, if any of the curators were to burn their 2000 curation shares, they would receive 360,000 GRT.
+- The remaining Curator would then receive all of the curator royalties for that subgraph. If they burned their shares to withdraw GRT, they would receive 120,000 GRT.
+- **TLDR:** The GRT valuation of curation shares is determined by the bonding curve and can be volatile. There is potential to incur big losses. Signaling early means you put in less GRT for each share. By extension, this means you earn more curator royalties per GRT than later curators for the same subgraph.
+
+In general, a bonding curve is a mathematical curve that defines the relationship between token supply and asset price. In the specific case of subgraph curation, **the price of each subgraph share increases with each token invested** and **the price of each share decreases with each token sold.**
+
+In the case of The Graph, [Bancor’s implementation of a bonding curve formula](https://drive.google.com/file/d/0B3HPNP-GDn7aRkVaV3dkVl9NS2M/view?resourcekey=0-mbIgrdd0B9H8dPNRaeB_TA) is leveraged.
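+
+To make the two-curator example concrete, here is a toy TypeScript sketch of a Bancor-style curve with an assumed 50% reserve ratio, seeded with the numbers above (2,000 shares and 120,000 GRT in reserve after Curator A signals). It ignores the 1% curation tax and the special-cased initialization of the very first shares, so treat it as an illustration rather than the exact contract math:
+
+```ts
+// Curve state after Curator A's initial signal (from the example above).
+let totalShares = 2_000; // curation shares outstanding
+let reserveGRT = 120_000; // GRT held in the curve's reserve
+const RESERVE_RATIO = 0.5; // assumed 50% reserve ratio
+
+function mint(depositGRT: number): number {
+  const minted = totalShares * ((1 + depositGRT / reserveGRT) ** RESERVE_RATIO - 1);
+  totalShares += minted;
+  reserveGRT += depositGRT;
+  return minted;
+}
+
+function burn(shares: number): number {
+  const returned = reserveGRT * (1 - (1 - shares / totalShares) ** (1 / RESERVE_RATIO));
+  totalShares -= shares;
+  reserveGRT -= returned;
+  return returned;
+}
+
+console.log(mint(360_000)); // Curator B mints 2,000 shares for 360,000 GRT
+console.log(burn(2_000)); // burning 2,000 of the 4,000 shares returns 360,000 GRT
+console.log(burn(2_000)); // the remaining 2,000 shares return the last 120,000 GRT
+```
+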
لازلت مشوشا؟ راجع فيديو دليل التنسيق أدناه:
diff --git a/website/pages/ar/network/delegating.mdx b/website/pages/ar/network/delegating.mdx
index e5a9fb6b8955..85101ff65a44 100644
--- a/website/pages/ar/network/delegating.mdx
+++ b/website/pages/ar/network/delegating.mdx
@@ -2,13 +2,15 @@
title: Delegating
---
-لا يمكن شطب المفوضين بسبب السلوك السيئ ، ولكن هناك ضريبة ودائع على المفوضين لتثبيط اتخاذ القرار السيئ الذي قد يضر بسلامة الشبكة.
+Delegators are network participants who delegate (i.e., "stake") GRT to one or more Indexers. Delegators help secure the network without running a Graph Node themselves.
-سيشرح هذا الدليل كيف تكون مفوضا فعالا في Graph Network. يشارك المفوضون أرباح حصتهم المفوضة مع المفهرسين. يجب أن يستخدم المفوض أفضل حكم لديه لاختيار المفهرسين بناء على عوامل متعددة. يرجى ملاحظة أن هذا الدليل لن يتطرق لخطوات مثل إعداد Metamask ، حيث أن هذه المعلومات متاحة على نطاق واسع على الإنترنت. يوجد ثلاثة أقسام في هذا الدليل:
+Delegators earn a portion of an Indexer's query fees and rewards by delegating to them. The amount of queries an Indexer can process depends on their own stake, the delegated stake, and the price the Indexer charges for each query. Therefore, the more stake that is allocated to an Indexer, the more potential queries they can process.
## دليل المفوض
-This guide will explain how to be an effective Delegator in the Graph Network. Delegators share earnings of the protocol alongside all Indexers based on their delegated stake. A Delegator must use their best judgment to choose Indexers based on multiple factors. Please note this guide will not go over steps such as setting up Metamask properly, as that information is widely available on the internet. There are three sections in this guide:
+This guide will explain how to be an effective Delegator in the Graph Network. Delegators share the earnings of the protocol alongside all Indexers based on their delegated stake. A Delegator must use their best judgment to choose Indexers based on multiple factors. Please note this guide will not cover steps such as setting up Metamask properly, as that information is widely available on the internet.
+
+There are three sections in this guide:
- مخاطر تفويض التوكن في شبكة The Graph
- كيفية حساب العوائد المتوقعة كمفوض
@@ -22,38 +24,40 @@ This guide will explain how to be an effective Delegator in the Graph Network. D
Delegators cannot be slashed for bad behavior, but there is a tax on Delegators to disincentivize poor decision-making that could harm the integrity of the network.
-من المهم أن تفهم أنه في كل مرة تقوم فيها بالتفويض ، سيتم حرق 0.5٪. هذا يعني أنه إذا كنت تفوض 1000 GRT ، فستحرق 5 GRT تلقائيا.
+It is important to understand that every time you delegate, you will be charged 0.5%. This means that if you are delegating 1000 GRT, you will automatically burn 5 GRT.
-هذا يعني أنه لكي يكون المفوض آمنا ، يجب أن يحسب كم ستكون عوائده من خلال التفويض للمفهرس. على سبيل المثال ، قد يحسب المفوض عدد الأيام التي سيستغرقها لاسترداد رسوم الـ 0.5٪ التي دفعها للتفويض.
+In order to be safe, a Delegator should calculate their potential return when delegating to an Indexer. For example, a Delegator might calculate how many days it will take before they have earned back the 0.5% tax on their delegation.
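+
+As a rough sketch of that calculation (the 10% annual rate below is purely hypothetical; actual returns depend on the Indexer and on network conditions):
+
+```ts
+const DELEGATION_TAX = 0.005; // 0.5% burned when delegating
+
+function daysToEarnBackTax(delegatedGRT: number, assumedAnnualRate: number): number {
+  const burned = delegatedGRT * DELEGATION_TAX;
+  const netDelegation = delegatedGRT - burned;
+  const dailyEarnings = (netDelegation * assumedAnnualRate) / 365;
+  return burned / dailyEarnings;
+}
+
+// Delegating 1,000 GRT burns 5 GRT. At a hypothetical 10% annual rate,
+// it takes roughly 18-19 days of rewards to earn those 5 GRT back.
+console.log(daysToEarnBackTax(1_000, 0.1));
+```
+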
### فترة إلغاء التفويض
-Whenever a Delegator wants to undelegate, their tokens are subject to a 28-day unbonding period. This means they cannot transfer their tokens, or earn any rewards for 28 days.
+Whenever a Delegator wants to undelegate, their tokens are subject to a 28-day unbonding period. This means they cannot transfer their tokens or earn any rewards for 28 days.
-One thing to consider as well is choosing an Indexer wisely. If you choose an Indexer who was not trustworthy, or not doing a good job, you will want to undelegate, which means you will be losing a lot of opportunities to earn rewards, which can be just as bad as burning GRT.
+Another thing to consider is how to choose an Indexer wisely. If you choose an Indexer that is not trustworthy or not doing a good job, you will want to undelegate. This means you will be losing a lot of opportunities to earn rewards, which can be just as bad as burning GRT.
-
لاحظ 0.5٪ رسوم التفويض ، بالإضافة إلى فترة 28 يوما لإلغاء التفويض.
+
+ Note the 0.5% delegation tax, as well as the 28-day undelegation period.
+
### اختيار مفهرس جدير بالثقة مع عائد جيد للمفوضين
-هذا جزء مهم عليك أن تفهمه. أولاً ، دعنا نناقش ثلاث قيم مهمة للغاية وهي بارامترات التفويض.
+This is an important aspect to understand. First, let's discuss three very important values, which are the Delegation Parameters.
-Indexing Reward Cut - The indexing reward cut is the portion of the rewards that the Indexer will keep for themselves. That means if it is set to 100%, as a Delegator you will get 0 indexing rewards. If you see 80% in the UI, that means as a Delegator, you will receive 20%. An important note - at the beginning of the network, Indexing Rewards will account for the majority of the rewards.
+Indexing Reward Cut - The indexing reward cut is the portion of the rewards that the Indexer will keep for themselves. This means that if an Indexer's reward cut is set to 100%, as a Delegator you will get 0 indexing rewards. If you see it set at 80% in the UI, that means as a Delegator, you will receive 20%. An important note - at the beginning of the network, Indexing Rewards will account for the majority of the rewards.
*المفهرس الأعلى يمنح المفوضين 90٪ من المكافآت. والمتوسط يمنح المفوضين 20٪. والأدنى يعطي المفوضين ~ 83٪.*
-- اقتطاع رسوم الاستعلام Query Fee Cut - هذا تماما مثل اقتطاع مكافأة الفهرسة Indexing Reward Cut. ومع ذلك ، فهو مخصص بشكل خاص للعائدات على رسوم الاستعلام التي يجمعها المفهرس. وتجدر الإشارة إلى أنه في بداية الشبكة ، سيكون العائد من رسوم الاستعلام صغيرا جدا مقارنة بمكافأة الفهرسة. من المستحسن الاهتمام بالشبكة لتحديد متى ستصبح رسوم الاستعلام في الشبكة أكثر أهمية.
+- Query Fee Cut - This works exactly like the Indexing Reward Cut. However, this applies explicitly to returns on the query fees the Indexer collects. It should be noted that at the start of the network, returns from query fees will be very small compared to the indexing reward. It is recommended that you pay attention to the network to determine when the query fees in the network will start to be more significant.
-As you can see, there is a lot of thought that must go into choosing the right Indexer. This is why we highly recommend you explore The Graph Discord to determine who the Indexers are with the best social reputation, and technical reputation, to reward Delegators consistently. Many of the Indexers are very active in Discord and will be happy to answer your questions. Many of them have been Indexing for months in the testnet, and are doing their best to help Delegators earn a good return, as it improves the health and success of the network.
+As you can see, in order to choose the right Indexer, you must consider multiple things. This is why we highly recommend exploring [The Graph Discord](https://discord.gg/graphprotocol) to determine which Indexers have the best social and technical reputations and which reward Delegators consistently. Many of the Indexers are very active in Discord and will be happy to answer your questions. Many of them have been Indexing for months on the testnet, and they are doing their best to help Delegators earn a good return, as it improves the health and success of the network.
### حساب العائد المتوقع للمفوضين
-يجب على المفوض النظر في الكثير من العوامل عند تحديد العوائد. وتشمل:
+A Delegator must consider a lot of factors when determining the return. These include:
- يمكن للمفوض إلقاء نظرة على قدرة المفهرسين على استخدام التوكن المفوضة المتاحة لهم. إذا لم يقم المفهرس بتخصيص جميع التوكن المتاحة ، فإنه لا يكسب أقصى ربح يمكن أن يحققه لنفسه أو للمفوضين.
-- الآن في الشبكة ، يمكن للمفهرس اختيار إغلاق المخصصة (allocation) وجمع المكافآت في أي وقت بين 1 و 28 يوما. لذلك من الممكن أن يكون لدى المفهرس الكثير من المكافآت التي لم يجمعها بعد ، وبالتالي ، فإن إجمالي مكافآته منخفضة. يجب أن يؤخذ هذا في الاعتبار في الأيام الأولى.
+- Right now, in the network an Indexer can choose to close an allocation and collect rewards anytime between 1 and 28 days. So, it is possible that an Indexer might have a lot of rewards they still need to collect, and thus, their total rewards are low. This should be taken into consideration in the early days.
### النظر في اقتطاع رسوم الاستعلام query fee cut واقتطاع رسوم الفهرسة indexing fee cut
@@ -67,28 +71,32 @@ As you can see, there is a lot of thought that must go into choosing the right I
![شارك الصيغة](/img/Share-Forumla.png)
-باستخدام هذه الصيغة ، يمكننا أن نرى أنه من الممكن فعليا للمفهرس الذي يعطي فقط 20٪ للمفوضين ، أن يمنح المفوضين عائدا أفضل من المفهرس الذي يعطي 90٪ للمفوضين.
+Using this formula, we can see that it is possible for an Indexer offering only 20% to Delegators to actually provide a better reward than an Indexer giving 90%.
-وبالتالي يمكن للمفوض أن يقوم بالحسابات ليدرك أن المفهرس الذي يقدم 20٪ للمفوضين يقدم عائدا أفضل.
+Therefore, a Delegator can do the math to determine that the Indexer offering 20% to Delegators is offering a better return.
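+
+Here is a simplified worked version of that math. All numbers are hypothetical, query fees and compounding are ignored, and the model simply assumes Delegators split the portion of an Indexer's rewards that the Indexer does not keep, pro-rata to their share of the delegation pool:
+
+```ts
+// Simplified model: Delegators share (1 - rewardCut) of an Indexer's rewards,
+// pro-rata to their share of that Indexer's delegation pool.
+function myRewards(
+  indexerRewardsGRT: number, // rewards earned by the Indexer over some period (hypothetical)
+  rewardCut: number, // fraction of rewards the Indexer keeps
+  myDelegationGRT: number,
+  delegationPoolGRT: number
+): number {
+  return indexerRewardsGRT * (1 - rewardCut) * (myDelegationGRT / delegationPoolGRT);
+}
+
+const myStake = 100_000;
+
+// Indexer A keeps only 10% (gives 90%) but has a large delegation pool and modest rewards.
+console.log(myRewards(10_000, 0.1, myStake, 5_000_000)); // 180 GRT
+
+// Indexer B keeps 80% (gives only 20%) but earns far more rewards with a smaller pool.
+console.log(myRewards(100_000, 0.8, myStake, 1_000_000)); // 2,000 GRT
+```
+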
### النظر في سعة التفويض (delegation capacity)
Another thing to consider is the delegation capacity. Currently, the Delegation Ratio is set to 16. This means that if an Indexer has staked 1,000,000 GRT, their Delegation Capacity is 16,000,000 GRT of Delegated tokens that they can use in the protocol. Any delegated tokens over this amount will dilute all the Delegator rewards.
-Imagine an Indexer has 100,000,000 GRT delegated to them, and their capacity is only 16,000,000 GRT. This means effectively, 84,000,000 GRT tokens are not being used to earn tokens. And all the Delegators, and the Indexer, are earning way less rewards than they could be.
+Imagine an Indexer with 100,000,000 GRT delegated to them, but their capacity is only 16,000,000 GRT. This means that, effectively, 84,000,000 GRT tokens are not being used to earn rewards. So, all the Delegators and the Indexer are earning far fewer rewards than they could be.
-لذلك يجب على المفوض دائما أن يأخذ في الاعتبار سعة التفويض (Delegation Capacity) للمفهرس ، وأن يأخذها في الاعتبار عند اتخاذ القرار.
+Therefore, a Delegator should always consider the Delegation Capacity of an Indexer, and factor it into their decision making.
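+
+A quick sketch of that capacity check, using the numbers from the example above:
+
+```ts
+const DELEGATION_RATIO = 16;
+
+const selfStake = 1_000_000; // Indexer's own stake in GRT
+const delegated = 100_000_000; // GRT currently delegated to that Indexer
+
+const capacity = selfStake * DELEGATION_RATIO; // 16,000,000 GRT can be used productively
+const unproductive = Math.max(0, delegated - capacity); // 84,000,000 GRT diluting rewards
+
+console.log({ capacity, unproductive });
+```
+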
## Delegator FAQs and Bugs
### MetaMask "Pending Transaction" Bug
-**When I try to delegate my transaction in MetaMask appears as "Pending" or "Queued" for longer than expected. What should I do?**
+**When I try to delegate my transaction in MetaMask, it appears as "Pending" or "Queued" for longer than expected. What should I do?**
+
+At times, attempts to delegate to indexers via MetaMask can fail and result in prolonged periods of "Pending" or "Queued" transaction attempts.
+
+For example, a user may attempt to delegate with an insufficient gas fee relative to the current prices, resulting in the transaction attempt displaying as "Pending" in their MetaMask wallet for 15+ minutes. When this occurs, a user can attempt subsequent transactions, but these will not be processed until the initial transaction is mined, as transactions for an address must be processed in order. In such cases, these transactions can be cancelled in MetaMask, but the transaction attempts will accrue gas fees without any guarantee that subsequent attempts will be successful.
-At times, attempts to delegate to indexers via MetaMask can fail and result in prolonged periods of "Pending" or "Queued" transaction attempts. For example, a user may attempt to delegate with an insufficient gas fee relative to the current prices, resulting in the transaction attempt displaying as "Pending" in their MetaMask wallet for 15+ minutes. When this occurs, subsequent transactions can be attempted by a user, but these will not be processed until the initial transaction is mined, as transactions for an address must be processed in order. In such cases, these transactions can be cancelled in MetaMask, but the transactions attempts will accrue gas fees without any guarantee that subsequent attempts will be successful. A simpler resolution to this bug is restarting the browsesr (e.g., using "abort:restart" in the address bar), which will cancel all previous attempts without gas being subtracted from the wallet. Several users that have encountered this issue and have reported successful transactions after restarting their browser and attempting to delegate.
+A simpler resolution to this bug is restarting the browser (e.g., using "about:restart" in the address bar), which will cancel all previous attempts without gas being subtracted from the wallet. Several users who have encountered this issue have reported successful transactions after restarting their browser and attempting to delegate.
## Video guide for the network UI
-This guide provides a full review of this document, and how to consider everything in this document while interacting with the UI.
+This video guide fully reviews this document and explains how to consider everything in it while interacting with the UI.
diff --git a/website/pages/ar/network/developing.mdx b/website/pages/ar/network/developing.mdx
index 3ee32c2df2e6..638f2b5af282 100644
--- a/website/pages/ar/network/developing.mdx
+++ b/website/pages/ar/network/developing.mdx
@@ -14,9 +14,9 @@ As with all subgraph development, it starts with local development and testing.
> There are certain constraints on The Graph Network, in terms of feature and network support. Only subgraphs on [supported networks](/developing/supported-networks) will earn indexing rewards, and subgraphs which fetch data from IPFS are also not eligible.
-### Deploy to the Subgraph Studio
+### Deploy to Subgraph Studio
-Once defined, the subgraph can be built and deployed to the [Subgraph Studio](https://thegraph.com/docs/en/deploying/subgraph-studio-faqs/). The Subgraph Studio is a sandbox environment which will index the deployed subgraph and make it available for rate-limited development and testing. This gives developers an opportunity to verify that their subgraph does not encounter any indexing errors, and works as expected.
+Once defined, the subgraph can be built and deployed to [Subgraph Studio](/deploying/subgraph-studio-faqs/). Subgraph Studio is a sandbox environment which will index the deployed subgraph and make it available for rate-limited development and testing. This gives developers an opportunity to verify that their subgraph does not encounter any indexing errors, and works as expected.
### Publish to the Network
@@ -30,13 +30,13 @@ Published subgraphs are unlikely to be picked up by Indexers without the additio
Once a subgraph has been processed by Indexers and is available for querying, developers can start to use the subgraph in their applications. Developers query subgraphs via a gateway, which forwards their queries to an Indexer who has processed the subgraph, paying query fees in GRT.
-In order to make queries, developers must generate an API key, which can be done in the Subgraph Studio. This API key must be funded with GRT, in order to pay query fees. Developers can set a maximum query fee, in order to control their costs, and limit their API key to a given subgraph or origin domain. The Subgraph Studio provides developers with data on their API key usage over time.
+In order to make queries, developers must generate an API key, which can be done in Subgraph Studio. This API key must be funded with GRT, in order to pay query fees. Developers can set a maximum query fee, in order to control their costs, and limit their API key to a given subgraph or origin domain. Subgraph Studio provides developers with data on their API key usage over time.
-Developers are also able to express an Indexer preference to the gateway, for example preferring Indexers whose query response is faster, or whose data is most up to date. These controls are set in the Subgraph Studio.
+Developers are also able to express an Indexer preference to the gateway, for example preferring Indexers whose query response is faster, or whose data is most up to date. These controls are set in Subgraph Studio.
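+
+As a minimal sketch of what a gateway query looks like from an application (the exact query URL for your subgraph is shown in Subgraph Studio; the URL shape, subgraph ID, and environment variable below are placeholders):
+
+```ts
+const API_KEY = process.env.GRAPH_API_KEY!; // generated and funded in Subgraph Studio
+const SUBGRAPH_ID = "your-subgraph-id"; // placeholder
+
+// Illustrative gateway URL shape; copy the exact query URL from Subgraph Studio.
+const url = `https://gateway.thegraph.com/api/${API_KEY}/subgraphs/id/${SUBGRAPH_ID}`;
+
+async function latestIndexedBlock(): Promise<number> {
+  const res = await fetch(url, {
+    method: "POST",
+    headers: { "Content-Type": "application/json" },
+    // `_meta` is part of every subgraph's generated API and reports indexing status.
+    body: JSON.stringify({ query: "{ _meta { block { number } } }" }),
+  });
+  const { data, errors } = await res.json();
+  if (errors) throw new Error(JSON.stringify(errors));
+  return data._meta.block.number;
+}
+
+latestIndexedBlock().then((n) => console.log("Indexed up to block", n));
+```
+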
### Updating Subgraphs
-After a time a subgraph developer may want to update their subgraph, perhaps fixing a bug or adding new functionality. The subgraph developer may deploy new version(s) of their subgraph to the Subgraph Studio for rate-limited development and testing.
+After a time a subgraph developer may want to update their subgraph, perhaps fixing a bug or adding new functionality. The subgraph developer may deploy new version(s) of their subgraph to Subgraph Studio for rate-limited development and testing.
Once the Subgraph Developer is ready to update, they can initiate a transaction to point their subgraph at the new version. Updating the subgraph migrates any signal to the new version (assuming the user who applied the signal selected "auto-migrate"), which also incurs a migration tax. This signal migration should prompt Indexers to start indexing the new version of the subgraph, so it should soon become available for querying.
diff --git a/website/pages/ar/network/explorer.mdx b/website/pages/ar/network/explorer.mdx
index 30ef685d8b1c..4c82281ebc72 100644
--- a/website/pages/ar/network/explorer.mdx
+++ b/website/pages/ar/network/explorer.mdx
@@ -2,13 +2,13 @@
title: Graph Explorer
---
-مرحبا بك في مستكشف Graph ، أو كما نحب أن نسميها ، بوابتك اللامركزية في عالم subgraphs وبيانات الشبكة. 👩🏽🚀 مستكشف TheGraph يتكون من عدة اجزاء حيث يمكنك التفاعل مع مطوري Subgraphs الاخرين ، ومطوري dApp ،والمنسقين والمفهرسين، والمفوضين. للحصول على نظرة عامة حول the Graph Explorer، راجع الفيديو أدناه (أو تابع القراءة أدناه):
+Welcome to Graph Explorer, or as we like to call it, your decentralized portal into the world of subgraphs and network data. 👩🏽🚀 Graph Explorer consists of multiple parts where you can interact with other subgraph developers, dapp developers, Curators, Indexers, and Delegators. For a general overview of Graph Explorer, check out the video below (or keep reading below):
## Subgraphs
-First things first, if you just finished deploying and publishing your subgraph in the Subgraph Studio, the Subgraphs tab on the top of the navigation bar is the place to view your own finished subgraphs (and the subgraphs of others) on the decentralized network. Here, you’ll be able to find the exact subgraph you’re looking for based on the date created, signal amount, or name.
+First things first, if you just finished deploying and publishing your subgraph in Subgraph Studio, the Subgraphs tab on the top of the navigation bar is the place to view your own finished subgraphs (and the subgraphs of others) on the decentralized network. Here, you’ll be able to find the exact subgraph you’re looking for based on the date created, signal amount, or name.
![صورة المستكشف 1](/img/Subgraphs-Explorer-Landing.png)
@@ -93,9 +93,9 @@ If you want to learn more about how to become a Delegator, look no further! All
في قسم الشبكة ، سترى KPIs بالإضافة إلى القدرة على التبديل بين الفترات وتحليل مقاييس الشبكة بشكل مفصل. ستمنحك هذه التفاصيل فكرة عن كيفية أداء الشبكة بمرور الوقت.
-### Activity
+### Overview
-يحتوي قسم النشاط على جميع مقاييس الشبكة الحالية بالإضافة إلى بعض المقاييس المتراكمة بمرور الوقت. هنا يمكنك رؤية أشياء مثل:
+The overview section has all the current network metrics as well as some cumulative metrics over time. Here you can see things like:
- إجمالي حصة الشبكة الحالية
- الحصة المقسمة بين المفهرسين ومفوضيهم
@@ -198,6 +198,6 @@ Now that we’ve talked about the network stats, let’s move on to your persona
![صورة المستكشف 15](/img/Profile-Settings.png)
-كبوابتك الرسمية إلى عالم البيانات اللامركزية ، يتيح لك Graph Explorer اتخاذ مجموعة متنوعة من الإجراءات ، بغض النظر عن دورك في الشبكة. يمكنك الوصول إلى إعدادات ملفك الشخصي عن طريق فتح القائمة المنسدلة بجوار عنوانك ، ثم النقر على زر Settings.
+As your official portal into the world of decentralized data, Graph Explorer allows you to take a variety of actions, no matter your role in the network. You can get to your profile settings by opening the dropdown menu next to your address, then clicking on the Settings button.
تفاصيل المحفظة
diff --git a/website/pages/ar/network/indexing.mdx b/website/pages/ar/network/indexing.mdx
index abe53eae2f89..61e19e0867d6 100644
--- a/website/pages/ar/network/indexing.mdx
+++ b/website/pages/ar/network/indexing.mdx
@@ -26,7 +26,7 @@ The minimum stake for an Indexer is currently set to 100K GRT.
Indexing rewards come from protocol inflation which is set to 3% annual issuance. They are distributed across subgraphs based on the proportion of all curation signal on each, then distributed proportionally to Indexers based on their allocated stake on that subgraph. **An allocation must be closed with a valid proof of indexing (POI) that meets the standards set by the arbitration charter in order to be eligible for rewards.**
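+
+As a simplified illustration of that two-step split (all numbers are hypothetical, issuance actually accrues continuously, and the result is before any reward cut between the Indexer and its Delegators):
+
+```ts
+// Step 1: new issuance is split across subgraphs by their share of total curation signal.
+// Step 2: each subgraph's slice is split across Indexers by their share of allocated stake.
+function indexerRewardShare(
+  periodIssuanceGRT: number,
+  subgraphSignalGRT: number,
+  totalSignalGRT: number,
+  indexerAllocationGRT: number,
+  totalAllocatedOnSubgraphGRT: number
+): number {
+  const subgraphRewards = periodIssuanceGRT * (subgraphSignalGRT / totalSignalGRT);
+  return subgraphRewards * (indexerAllocationGRT / totalAllocatedOnSubgraphGRT);
+}
+
+// Hypothetical: 1,000,000 GRT issued over a period, a subgraph holding 2% of all signal,
+// and an Indexer that allocated 500k of the 2M GRT allocated to that subgraph.
+console.log(indexerRewardShare(1_000_000, 20_000, 1_000_000, 500_000, 2_000_000)); // 5,000 GRT
+```
+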
-Numerous tools have been created by the community for calculating rewards; you'll find a collection of them organized in the [Community Guides collection](https://www.notion.so/Community-Guides-abbb10f4dba040d5ba81648ca093e70c). You can also find an up to date list of tools in the #Delegators and #Indexers channels on the [Discord server](https://discord.gg/graphprotocol). Here we link a [recommended allocation optimiser](https://github.com/graphprotocol/AllocationOpt.jl) integrated with the indexer software stack.
+Numerous tools have been created by the community for calculating rewards; you'll find a collection of them organized in the [Community Guides collection](https://www.notion.so/Community-Guides-abbb10f4dba040d5ba81648ca093e70c). You can also find an up to date list of tools in the #Delegators and #Indexers channels on the [Discord server](https://discord.gg/graphprotocol). Here we link a [recommended allocation optimiser](https://github.com/graphprotocol/allocation-optimizer) integrated with the indexer software stack.
### ما هو إثبات الفهرسة (POI)؟
@@ -38,7 +38,7 @@ Allocations are continuously accruing rewards while they're active and allocated
### Can pending indexing rewards be monitored?
-يحتوي عقد RewardsManager على وظيفة [ الحصول على المكافآت ](https://github.com/graphprotocol/contracts/blob/master/contracts/rewards/RewardsManager.sol#L317) للقراءة فقط يمكن استخدامها للتحقق من المكافآت المعلقة لتخصيص معين.
+The RewardsManager contract has a read-only [getRewards](https://github.com/graphprotocol/contracts/blob/main/packages/contracts/contracts/rewards/RewardsManager.sol#L316) function that can be used to check the pending rewards for a specific allocation.
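+
+The same check can be scripted. The sketch below is illustrative: it uses ethers.js, the Mainnet RewardsManager proxy address from the Etherscan link in the steps below, a public RPC URL, and it assumes the `getRewards(address)` view signature from the linked source line:
+
+```ts
+import { ethers } from "ethers";
+
+// Mainnet RewardsManager proxy (the same contract as the Etherscan link below).
+const REWARDS_MANAGER = "0x9Ac758AB77733b4150A901ebd659cbF8cB93ED66";
+
+// Assumed fragment for the read-only getRewards function at the linked source line.
+const abi = ["function getRewards(address allocationID) view returns (uint256)"];
+
+async function pendingRewards(allocationID: string) {
+  const provider = new ethers.JsonRpcProvider("https://eth.llamarpc.com"); // any Mainnet RPC
+  const rewardsManager = new ethers.Contract(REWARDS_MANAGER, abi, provider);
+  const pending: bigint = await rewardsManager.getRewards(allocationID);
+  console.log(`Pending rewards: ${ethers.formatEther(pending)} GRT`);
+}
+
+pendingRewards("0xYourAllocationIdHere").catch(console.error);
+```
+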
تشتمل العديد من لوحات المعلومات التي أنشأها المجتمع على قيم المكافآت المعلقة ويمكن التحقق منها بسهولة يدويًا باتباع الخطوات التالية:
@@ -63,7 +63,7 @@ Allocations are continuously accruing rewards while they're active and allocated
- انتقل إلى [ واجهة Etherscan لعقد المكافآت Rewards contract ](https://etherscan.io/address/0x9Ac758AB77733b4150A901ebd659cbF8cB93ED66#readProxyContract)
* لاستدعاء `getRewards()`:
- - قم بتوسيع ** 10 .الحصول على المكافآت ** القائمة المنسدلة.
+ - Expand the **9. getRewards** dropdown.
- أدخل ** معرّف التخصيص ** في الإدخال.
- انقر فوق الزر ** الاستعلام **.
@@ -113,11 +113,11 @@ The `queryFeeCut` and `indexingRewardCut` values are delegation parameters that
- **كبيرة** - مُعدة لفهرسة جميع ال subgraphs المستخدمة حاليا وأيضا لخدمة طلبات حركة مرور البيانات ذات الصلة.
| Setup | Postgres (CPUs) | Postgres (memory in GBs) | Postgres (disk in TBs) | VMs (CPUs) | VMs (memory in GBs) |
-| --- | :-: | :-: | :-: | :-: | :-: |
-| صغير | 4 | 8 | 1 | 4 | 16 |
-| قياسي | 8 | 30 | 1 | 12 | 48 |
-| متوسط | 16 | 64 | 2 | 32 | 64 |
-| كبير | 72 | 468 | 3.5 | 48 | 184 |
+| ----- |:---------------------------:|:-----------------------------------:|:---------------------------------:|:---------------------:|:------------------------------:|
+| Small | 4 | 8 | 1 | 4 | 16 |
+| Standard | 8 | 30 | 1 | 12 | 48 |
+| Medium | 16 | 64 | 2 | 32 | 64 |
+| Large | 72 | 468 | 3.5 | 48 | 184 |
### What are some basic security precautions an Indexer should take?
@@ -149,20 +149,20 @@ At the center of an Indexer's infrastructure is the Graph Node which monitors th
#### Graph Node
-| المنفذ | الغرض | المسار | CLI Argument | متغيرات البيئة |
-| --- | --- | --- | --- | --- |
-| 8000 | GraphQL HTTP server (for subgraph queries) | /subgraphs/id/... /subgraphs/name/.../... | http-port-- | - |
-| 8001 | GraphQL WS (for subgraph subscriptions) | /subgraphs/id/... /subgraphs/name/.../... | ws-port-- | - |
+| 8020 | JSON-RPC (for managing deployments) | / | admin-port-- | - |
+| 8030 | Subgraph indexing status API | /graphql | index-node-port-- | - |
+| 8040 | Prometheus metrics | /metrics | metrics-port-- | - |
#### Indexer Service
-| Port | Purpose | Routes | CLI Argument | Environment Variable |
-| --- | --- | --- | --- | --- |
-| 7600 | GraphQL HTTP server (for paid subgraph queries) | /subgraphs/id/... /status /channel-messages-inbox | --port | `INDEXER_SERVICE_PORT` |
-| 7300 | Prometheus metrics | /metrics | --metrics-port | - |
+| Port | Purpose | Routes | CLI Argument | Environment Variable |
+| ---- | ------- | ------ | ------------ | -------------------- |
+| 7600 | GraphQL HTTP server (for paid subgraph queries) | /subgraphs/id/... /status /channel-messages-inbox | --port | `INDEXER_SERVICE_PORT` |
+| 7300 | Prometheus metrics | /metrics | --metrics-port | - |
#### Indexer Agent
@@ -182,9 +182,9 @@ At the center of an Indexer's infrastructure is the Graph Node which monitors th
#### Create a Google Cloud project
-- Clone or navigate to the Indexer repository.
+- Clone or navigate to the [Indexer repository](https://github.com/graphprotocol/indexer).
-- انتقل إلى الدليل ./terraform ، حيث يجب تنفيذ جميع الأوامر.
+- Navigate to the `./terraform` directory, this is where all commands should be executed.
```sh
cd terraform
@@ -297,7 +297,7 @@ kubectl config use-context $(kubectl config get-contexts --output='name'
### Graph Node
-[ Graph Node ](https://github.com/graphprotocol/graph-node) هو تطبيق مفتوح المصدر Rust ومصدره Ethereum blockchain لتحديث البيانات والذي يمكن الاستعلام عنها عبر GraphQL endpoint. يستخدم المطورون ال subgraphs لتحديد مخططهم ، ويستخدمون مجموعة من الرسوم لتحويل البيانات التي يتم الحصول عليها من blockchain و the Graph Node والتي تقوم بمعالجة مزامنة السلسلة بأكملها ، ومراقبة الكتل الجديدة ، وتقديمها عبر GraphQL endpoint.
+[Graph Node](https://github.com/graphprotocol/graph-node) is an open source Rust implementation that event sources the Ethereum blockchain to deterministically update a data store that can be queried via the GraphQL endpoint. Developers use subgraphs to define their schema, and a set of mappings for transforming the data sourced from the blockchain and the Graph Node handles syncing the entire chain, monitoring for new blocks, and serving it via a GraphQL endpoint.
#### Start from source
@@ -735,7 +735,7 @@ default => 0.1 * $SYSTEM_LOAD;
| ---------------------------------------------------------------------------- | ------- |
| { pairs(skip: 5000) { id } } | 0.5 GRT |
| { tokens { symbol } } | 0.1 GRT |
-| { pairs(skip: 5000) { id { tokens } symbol } } | 0.6 GRT |
+| { pairs(skip: 5000) { id } tokens { symbol } } | 0.6 GRT |
#### Applying the cost model
@@ -750,7 +750,9 @@ indexer cost set model my_model.agora
### Stake in the protocol
-الخطوات الأولى للمشاركة في الشبكة كمفهرس هي الموافقة على البروتوكول وصناديق الأسهم، و (اختياريا) إعداد عنوان المشغل لتفاعلات البروتوكول اليومية. _ ** ملاحظة **: لأغراض الإرشادات ، سيتم استخدام Remix للتفاعل مع العقد ، ولكن لا تتردد في استخدام الأداة التي تختارها (\[OneClickDapp \](https://oneclickdapp.com/) و [ABItopic](https://abitopic.io/) و [MyCrypto](https://www.mycrypto.com/account) وهذه بعض الأدوات المعروفة)._
+The first steps to participating in the network as an Indexer are to approve the protocol, stake funds, and (optionally) set up an operator address for day-to-day protocol interactions.
+
+> Note: For the purposes of these instructions Remix will be used for contract interaction, but feel free to use your tool of choice ([OneClickDapp](https://oneclickdapp.com/), [ABItopic](https://abitopic.io/), and [MyCrypto](https://www.mycrypto.com/account) are a few other known tools).
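As an alternative sketch for those who prefer the command line over Remix (this is not part of the official walkthrough): the same approve-then-stake flow can be scripted with Foundry's `cast`, assuming the standard `approve(address,uint256)` and `stake(uint256)` signatures on the GraphToken and Staking contracts; the addresses, private key, and amount below are placeholders.

```sh
# 1. Approve the Staking contract to pull the GRT you intend to stake.
#    The amount is in 18-decimal base units (here: 100,000 GRT as an example).
cast send $GRAPH_TOKEN_ADDRESS "approve(address,uint256)" $STAKING_ADDRESS 100000000000000000000000 \
  --rpc-url $ETH_RPC_URL --private-key $INDEXER_PRIVATE_KEY

# 2. Stake the approved amount in the protocol.
cast send $STAKING_ADDRESS "stake(uint256)" 100000000000000000000000 \
  --rpc-url $ETH_RPC_URL --private-key $INDEXER_PRIVATE_KEY
```

If you also want to register an operator address, the Staking contract exposes a `setOperator` call that can be sent the same way (check the contract ABI for the exact signature).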
Once an Indexer has staked GRT in the protocol, the [Indexer components](/network/indexing#indexer-components) can be started up and begin their interactions with the network.
@@ -760,7 +762,7 @@ Once an Indexer has staked GRT in the protocol, the [Indexer components](/networ
2. In the `File Explorer` create a file named **GraphToken.abi** with the [token ABI](https://raw.githubusercontent.com/graphprotocol/contracts/mainnet-deploy-build/build/abis/GraphToken.json).
-3. مع تحديد `GraphToken.abi` وفتحه في المحرر ، قم بالتبديل إلى Deploy و `Run Transactions` في واجهة Remix.
+3. With `GraphToken.abi` selected and open in the editor, switch to the `Deploy and run transactions` section in the Remix interface.
4. Under environment select `Injected Web3` and under `Account` select your Indexer address.
@@ -774,7 +776,7 @@ Once an Indexer has staked GRT in the protocol, the [Indexer components](/networ
2. In the `File Explorer` create a file named **Staking.abi** with the Staking ABI.
-3. مع تحديد `Staking.abi` وفتحه في المحرر ، قم بالتبديل إلى قسم `Deploy` و `Run Transactions` في واجهة Remix.
+3. With `Staking.abi` selected and open in the editor, switch to the `Deploy and run transactions` section in the Remix interface.
4. Under environment select `Injected Web3` and under `Account` select your Indexer address.
@@ -790,12 +792,28 @@ Once an Indexer has staked GRT in the protocol, the [Indexer components](/networ
setDelegationParameters(950000, 600000, 500)
```
+### Setting delegation parameters
+
+The `setDelegationParameters()` function in the [staking contract](https://github.com/graphprotocol/contracts/blob/main/packages/contracts/contracts/staking/Staking.sol) is essential for Indexers, allowing them to set parameters that define their interactions with Delegators, influencing their reward sharing and delegation capacity.
+
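For reference, the same call from the Remix example above can be sketched with Foundry's `cast` (an assumption, not part of these docs), using the `(indexingRewardCut, queryFeeCut, cooldownBlocks)` parameter ordering with the cuts expressed in parts per million; the staking contract address, RPC URL, and key are placeholders:

```sh
# 950000 = 95% indexingRewardCut, 600000 = 60% queryFeeCut, 500 = cooldown in blocks
# between parameter changes. STAKING_ADDRESS, ETH_RPC_URL and INDEXER_PRIVATE_KEY are placeholders.
cast send $STAKING_ADDRESS "setDelegationParameters(uint32,uint32,uint32)" 950000 600000 500 \
  --rpc-url $ETH_RPC_URL --private-key $INDEXER_PRIVATE_KEY
```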
+### How to set delegation parameters
+
+To set the delegation parameters using the Graph Explorer interface, follow these steps:
+
+1. Navigate to [Graph Explorer](https://thegraph.com/explorer/).
+2. Connect your wallet. Choose multisig (such as Gnosis Safe) and then select mainnet. Note: You will need to repeat this process for Arbitrum One.
+3. Connect the wallet you have as a signer.
+4. Navigate to the 'Settings' section and select 'Delegation Parameters'. These parameters should be configured to achieve an effective cut within the desired range. Upon entering values in the provided input fields, the interface will automatically calculate the effective cut. Adjust these values as necessary to attain the desired effective cut percentage.
+5. Submit the transaction to the network.
+
+> Note: This transaction will need to be confirmed by the multisig wallet signers.
+
### Life of an allocation
After being created by an Indexer a healthy allocation goes through four states.
-- **Active** - Once an allocation is created on-chain ([allocateFrom()](https://github.com/graphprotocol/contracts/blob/master/contracts/staking/Staking.sol#L873)) it is considered **active**. A portion of the Indexer's own and/or delegated stake is allocated towards a subgraph deployment, which allows them to claim indexing rewards and serve queries for that subgraph deployment. The Indexer agent manages creating allocations based on the Indexer rules.
+- **Active** - Once an allocation is created on-chain ([allocateFrom()](https://github.com/graphprotocol/contracts/blob/main/packages/contracts/contracts/staking/Staking.sol#L316)) it is considered **active**. A portion of the Indexer's own and/or delegated stake is allocated towards a subgraph deployment, which allows them to claim indexing rewards and serve queries for that subgraph deployment. The Indexer agent manages creating allocations based on the Indexer rules.
-- **Closed** - An Indexer is free to close an allocation once 1 epoch has passed ([closeAllocation()](https://github.com/graphprotocol/contracts/blob/master/contracts/staking/Staking.sol#L873)) or their Indexer agent will automatically close the allocation after the **maxAllocationEpochs** (currently 28 days). When an allocation is closed with a valid proof of indexing (POI) their indexing rewards are distributed to the Indexer and its Delegators (see "how are rewards distributed?" below to learn more).
+- **Closed** - An Indexer is free to close an allocation once 1 epoch has passed ([closeAllocation()](https://github.com/graphprotocol/contracts/blob/main/packages/contracts/contracts/staking/Staking.sol#L335)) or their Indexer agent will automatically close the allocation after the **maxAllocationEpochs** (currently 28 days). When an allocation is closed with a valid proof of indexing (POI) their indexing rewards are distributed to the Indexer and its Delegators ([learn more](/network/indexing/#how-are-indexing-rewards-distributed)).
Indexers are recommended to utilize offchain syncing functionality to sync subgraph deployments to chainhead before creating the allocation on-chain. This feature is especially useful for subgraphs that may take longer than 28 epochs to sync or that have some chance of failing non-deterministically.
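With the indexer CLI, offchain syncing is typically requested by setting the deployment's `decisionBasis` to `offchain`; the sketch below is an illustration only (the deployment ID is a placeholder and flags can differ between indexer versions):

```sh
# Sync the deployment offchain first, without opening an on-chain allocation.
graph indexer rules set QmExampleDeploymentIdPlaceholder decisionBasis offchain

# Later, once the deployment is at chainhead, change the rule (e.g. back to `rules` or `always`)
# so the Indexer agent opens the allocation on-chain.
```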
diff --git a/website/pages/ar/network/overview.mdx b/website/pages/ar/network/overview.mdx
index 5e530fe7fab5..08469cdc547b 100644
--- a/website/pages/ar/network/overview.mdx
+++ b/website/pages/ar/network/overview.mdx
@@ -2,14 +2,14 @@
title: Network Overview
---
-شبكة The Graph هو بروتوكول فهرسة لامركزي لتنظيم بيانات الـ blockchain. التطبيقات تستخدم GraphQL للاستعلام عن APIs المفتوحة والتي تسمى subgraphs ، لجلب البيانات المفهرسة على الشبكة. باستخدام The Graph ، يمكن للمطورين إنشاء تطبيقات بدون خادم تعمل بالكامل على البنية الأساسية العامة.
+The Graph Network is a decentralized indexing protocol for organizing blockchain data. Applications use GraphQL to query open APIs called subgraphs and retrieve data that is indexed on the network. With The Graph, developers can build serverless applications that run entirely on public infrastructure.
## Overview
-شبكة TheGraph تتكون من مفهرسين (Indexers) ومنسقين (Curators) ومفوضين (Delegator) حيث يقدمون خدمات للشبكة ويقدمون البيانات لتطبيقات Web3. حيث يتم استخدام تلك التطبيقات والبيانات من قبل المستهلكين.
+The Graph Network consists of Indexers, Curators, and Delegators that provide services to the network and serve data to Web3 applications. Consumers use the applications and consume the data.
![Token Economics](/img/Network-roles@2x.png)
To ensure economic security of The Graph Network and the integrity of data being queried, participants stake and use Graph Tokens ([GRT](/tokenomics)). GRT is a work utility token that is an ERC-20 used to allocate resources in the network.
-Active Indexers, Curators and Delegators can provide services and earn income from the network, proportional to the amount of work they perform and their GRT stake.
+Active Indexers, Curators, and Delegators can provide services and earn income from the network, proportional to the amount of work they perform and their GRT stake.
diff --git a/website/pages/ar/new-chain-integration.mdx b/website/pages/ar/new-chain-integration.mdx
index 7f7a29c0d860..91b17ab20954 100644
--- a/website/pages/ar/new-chain-integration.mdx
+++ b/website/pages/ar/new-chain-integration.mdx
@@ -15,7 +15,7 @@ title: تكامل الشبكات الجديدة
**1. EVM JSON-RPC**
-If the blockchain is EVM-compatible and the client/node exposes the standard EVM JSON-RPC API, Graph Node should be able to index the new chain. For more information, refer to [Testing an EVM JSON-RPC](تكامل*سلسة*جديدة #اختبار*استدعاء*إجراء*عن*بُعد*باستخدام*تمثيل*كائنات*جافا*سكريبت*لآلة*التشغيل*الافتراضية_لإثريوم).
+If the blockchain is EVM-compatible and the client/node exposes the standard EVM JSON-RPC API, Graph Node should be able to index the new chain. For more information, refer to [Testing an EVM JSON-RPC](تكامل_سلسة_جديدة #اختبار_استدعاء_إجراء_عن_بُعد_باستخدام_تمثيل_كائنات_جافا_سكريبت_لآلة_التشغيل_الافتراضية\_لإثريوم).
**2. Firehose**
@@ -54,11 +54,11 @@ title: تكامل الشبكات الجديدة
**Test the integration by locally deploying a subgraph.**
-1. قم بتثبيت [graph-cli](https://github.com/graphprotocol/graph-cli)
+1. Install [graph-cli](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli)
2. Create a simple example subgraph. Some options are below:
   1. The pre-packed [Gravatar](https://github.com/graphprotocol/example-subgraph/tree/f89bdd4628efa4badae7367d4919b3f648083323) smart contract and subgraph is a good starting point
   2. Bootstrap a local subgraph from any existing smart contract or Solidity dev environment [using Hardhat with a Graph plugin](https://github.com/graphprotocol/hardhat-graph)
-3. قم بتعديل subgraph.yaml الناتج عن طريق تغيير [`dataSources.network`](http://dataSources.network) إلى نفس الاسم الذي تم تمريره سابقًا إلى عقدة الغراف.
+3. Adapt the resulting `subgraph.yaml` by changing `dataSources.network` to the same name previously passed on to Graph Node.
4. Create your subgraph in Graph Node: `graph create $SUBGRAPH_NAME --node $GRAPH_NODE_ENDPOINT`
5. Deploy your subgraph to Graph Node: `graph deploy $SUBGRAPH_NAME --ipfs $IPFS_ENDPOINT --node $GRAPH_NODE_ENDPOINT` (see the sketch after this list for example values)
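Putting steps 4-5 together, a local run might look like the sketch below; the subgraph name and endpoints are placeholders (8020 is Graph Node's JSON-RPC deployment port and 5001 is a typical local IPFS API port), and newer CLI versions may also prompt for a version label:

```sh
export SUBGRAPH_NAME=my-org/my-new-chain-subgraph   # placeholder name
export GRAPH_NODE_ENDPOINT=http://localhost:8020    # Graph Node JSON-RPC (deployment management)
export IPFS_ENDPOINT=http://localhost:5001          # IPFS API used by Graph Node

graph create $SUBGRAPH_NAME --node $GRAPH_NODE_ENDPOINT
graph deploy $SUBGRAPH_NAME --ipfs $IPFS_ENDPOINT --node $GRAPH_NODE_ENDPOINT
```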
diff --git a/website/pages/ar/operating-graph-node.mdx b/website/pages/ar/operating-graph-node.mdx
index 646ec2d5dffd..ac2816215c96 100644
--- a/website/pages/ar/operating-graph-node.mdx
+++ b/website/pages/ar/operating-graph-node.mdx
@@ -45,7 +45,7 @@ To enable monitoring and reporting, Graph Node can optionally log metrics to a P
- **Additional requirements for Ubuntu users** - To run a Graph Node on Ubuntu, a few additional packages may be needed.
```sh
-sudo apt-get install -y clang libpg-dev libssl-dev pkg-config
+sudo apt-get install -y clang libpq-dev libssl-dev pkg-config
```
#### Setup
@@ -77,13 +77,13 @@ A complete Kubernetes example configuration can be found in the [indexer reposit
When it is running Graph Node exposes the following ports:
-| Port | Purpose | Routes | CLI Argument | Environment Variable |
-| --- | --- | --- | --- | --- |
-| 8000 | GraphQL HTTP server (for subgraph queries) | /subgraphs/id/... /subgraphs/name/.../... | --http-port | - |
-| 8001 | GraphQL WS (for subgraph subscriptions) | /subgraphs/id/... /subgraphs/name/.../... | --ws-port | - |
-| 8020 | JSON-RPC (for managing deployments) | / | --admin-port | - |
-| 8030 | Subgraph indexing status API | /graphql | --index-node-port | - |
-| 8040 | Prometheus metrics | /metrics | --metrics-port | - |
+| Port | Purpose | Routes | CLI Argument | Environment Variable |
+| ---- | ------- | ------ | ------------ | -------------------- |
+| 8000 | GraphQL HTTP server (for subgraph queries) | /subgraphs/id/... /subgraphs/name/.../... | --http-port | - |
+| 8001 | GraphQL WS (for subgraph subscriptions) | /subgraphs/id/... /subgraphs/name/.../... | --ws-port | - |
+| 8020 | JSON-RPC (for managing deployments) | / | --admin-port | - |
+| 8030 | Subgraph indexing status API | /graphql | --index-node-port | - |
+| 8040 | Prometheus metrics | /metrics | --metrics-port | - |
> **Important**: Be careful about exposing ports publicly - **administration ports** should be kept locked down. This includes the Graph Node JSON-RPC endpoint.
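As a quick smoke test of a running node, the indexing status API on port 8030 can be queried with plain `curl`; this is a sketch (the host and the selected fields are illustrative):

```sh
curl -s http://localhost:8030/graphql \
  -H 'content-type: application/json' \
  -d '{"query": "{ indexingStatuses { subgraph synced health } }"}'
```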
@@ -114,7 +114,7 @@ Full documentation of `config.toml` can be found in the [Graph Node docs](https:
#### Multiple Graph Nodes
-Graph Node indexing can scale horizontally, running multiple instances of Graph Node to split indexing and querying across different nodes. This can be done simply by running Graph Nodes configured with a different `node_id` on startup (e.g. in the Docker Compose file), which can then be used in the `config.toml` file to specify [dedicated query nodes](#dedicated-query-nodes), [block ingestors](#dedicated-block-ingestor), and splitting subgraphs across nodes with [deployment rules](#deployment-rules).
+Graph Node indexing can scale horizontally, running multiple instances of Graph Node to split indexing and querying across different nodes. This can be done simply by running Graph Nodes configured with a different `node_id` on startup (e.g. in the Docker Compose file), which can then be used in the `config.toml` file to specify [dedicated query nodes](#dedicated-query-nodes), [block ingestors](#dedicated-block-ingestion), and splitting subgraphs across nodes with [deployment rules](#deployment-rules).
> Note that multiple Graph Nodes can all be configured to use the same database, which itself can be horizontally scaled via sharding.
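A minimal sketch of what such a split could look like, written as a shell heredoc for convenience; the node IDs, regular expression, and connection string are placeholders, chain provider sections are omitted, and the key names should be checked against the Graph Node configuration docs:

```sh
cat > config.toml <<'EOF'
[store]
[store.primary]
connection = "postgresql://graph:password@localhost:5432/graph-node"  # placeholder credentials

[chains]
ingestor = "index_node_0"    # node_id that performs block ingestion

[general]
query = "query_node_.*"      # node_ids matching this regex only serve queries
EOF
```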
diff --git a/website/pages/ar/publishing/publishing-a-subgraph.mdx b/website/pages/ar/publishing/publishing-a-subgraph.mdx
index 89aec5bee958..673160e705f0 100644
--- a/website/pages/ar/publishing/publishing-a-subgraph.mdx
+++ b/website/pages/ar/publishing/publishing-a-subgraph.mdx
@@ -2,32 +2,93 @@
title: Publishing a Subgraph to the Decentralized Network
---
-Once your subgraph has been [deployed to the Subgraph Studio](/deploying/deploying-a-subgraph-to-studio), you have tested it out, and are ready to put it into production, you can then publish it to the decentralized network.
+Once you have [deployed your subgraph to Subgraph Studio](/deploying/deploying-a-subgraph-to-studio) and it's ready to go into production, you can publish it to the decentralized network.
-Publishing a Subgraph to the decentralized network makes it available for [Curators](/network/curating) to begin curating on it, and [Indexers](/network/indexing) to begin indexing it.
+When you publish a subgraph to the decentralized network, you make it available for:
-
+- [Curators](/network/curating) to begin curating it.
+- [Indexers](/network/indexing) to begin indexing it.
-You can find the list of the supported networks [Here](/developing/supported-networks).
+
-## Publishing a subgraph
+Check out the list of [supported networks](/developing/supported-networks).
-Subgraphs can be published to the decentralized network directly from the Subgraph Studio dashboard by clicking on the **Publish** button. Once a subgraph is published, it will be available to view in the [Graph Explorer](https://thegraph.com/explorer/).
+## Publishing from Subgraph Studio
-- Subgraphs can be published to Goerli, Arbitrum goerli, Arbitrum One, or Ethereum mainnet.
+1. Go to the [Subgraph Studio](https://thegraph.com/studio/) dashboard
+2. Click on the **Publish** button
+3. Your subgraph will now be visible in [Graph Explorer](https://thegraph.com/explorer/).
-- Regardless of the network the subgraph was published on, it can index data on any of the [supported networks](/developing/supported-networks).
+All published versions of an existing subgraph can:
-- When publishing a new version of an existing subgraph, the same rules above apply.
+- Be published to Arbitrum One. [Learn more about The Graph Network on Arbitrum](/arbitrum/arbitrum-faq).
-## Curating your subgraph
+- Index data on any of the [supported networks](/developing/supported-networks), regardless of the network on which the subgraph was published.
-> It is recommended that you curate your own subgraph with 10,000 GRT to ensure that it is indexed and available for querying as soon as possible.
+### Updating metadata for a published subgraph
-Subgraph Studio enables you to be the first to curate your subgraph by adding GRT to your subgraph's curation pool in the same transaction. When publishing your subgraph, make sure to check the button that says, "Be the first to signal on this subgraph."
+- After publishing your subgraph to the decentralized network, you can update the metadata anytime in Subgraph Studio.
+- Once you’ve saved your changes and published the updates, they will appear in Graph Explorer.
+- It's important to note that this process will not create a new version since your deployment has not changed.
+
+## Publishing from the CLI
+
+As of version 0.73.0, you can also publish your subgraph with the [`graph-cli`](https://github.com/graphprotocol/graph-tooling/tree/main/packages/cli).
+
+1. Open the `graph-cli`.
+2. Use the following commands: `graph codegen && graph build` then `graph publish`.
+3. A window will open, allowing you to connect your wallet, add metadata, and deploy your finalized subgraph to a network of your choice.
+
+![cli-ui](/img/cli-ui.png)
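Concretely, step 2 run from the subgraph project root looks something like the sketch below (it assumes a `graph-cli` version of at least 0.73.0, as noted above):

```sh
graph codegen && graph build
graph publish   # opens the web UI to connect a wallet, add metadata, and publish
```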
+
+### Customizing your deployment
+
+You can upload your subgraph build to a specific IPFS node and further customize your deployment with the following flags:
+
+```
+USAGE
+  $ graph publish [SUBGRAPH-MANIFEST] [-h] [--protocol-network arbitrum-one|arbitrum-sepolia --subgraph-id <value>] [-i <value>] [--ipfs-hash <value>] [--webapp-url
+    <value>]
+
+FLAGS
+ -h, --help Show CLI help.
+  -i, --ipfs=<value>      [default: https://api.thegraph.com/ipfs/api/v0] Upload build results to an IPFS node.
+  --ipfs-hash=<value>     IPFS hash of the subgraph manifest to deploy.
+ --protocol-network=