diff --git a/sharding/raft23ai/topology/topology.md b/sharding/raft23ai/topology/topology.md
index b9d99f3a6..f00eb506d 100644
--- a/sharding/raft23ai/topology/topology.md
+++ b/sharding/raft23ai/topology/topology.md
@@ -183,8 +183,169 @@ For more details check [Raft Replication Configuration and Management] (https://
 ![](./images/t3-5-status-after-leader-change.png " ")
 
+## Task 4: Raft Replication Demo with UI Application
 
-## Task 4: Run the workload
+1. Please verify that the Demo UI application is already open in the browser window.
+
+    ![](./images/demo-ui-app-browser-1.png " ")
+
+2. In the list, open the "More Details" tab for User No. 1.
+
+![](./images/demo-ui-app-click-more-det-2.png " ")
+
+Similarly, open two additional tabs by clicking on "Update" for User No. 1.
+
+![](./images/demo-ui-app-update-tab-3.png " ")
+
+
+3. Check User No. 1's shard in the Demo UI:
+Verify which shard currently holds the leadership for User No. 1's data in the Demo UI.
+
+![](./images/demo-ui-app-more-det-3.png " ")
+
+4. Go to the terminal window logged in as **oracle** and stop that shard.
+
+    ```
+
+    sudo podman stop shard3
+
+    ```
+
+    You can use the command below to switch to **GSM**.
+
+    ```
+
+    sudo podman exec -i -t gsm1 /bin/bash
+
+    ```
+
+    Run the command below in the terminal window that is switched to **GSM** to see the status of the shards.
+
+    ```
+
+    gdsctl config shard
+
+    ```
+
+    ![](./images/demo-ui-app-stop-shard-4.png " ")
+
+5. Return to the Demo UI application browser window on the "More Details" tab and click the "Refresh" button to observe that the leadership has automatically moved to another shard, indicating that the request was re-routed.
+
+    ![](./images/demo-ui-app-refresh-more-det-5.png " ")
+
+6. Go to the first update tab in the Demo UI application and change the class.
+
+    ![](./images/demo-ui-app-update-class-6.png " ")
+
+    Click the next update tab and refresh it to see the change in class.
+
+    ![](./images/demo-ui-app-class-updated-6.png " ")
+
+
+7. Run the workload and check the customer count in the Demo UI application.
+
+    Check the count in the browser window before running the workload.
+
+    ![](./images/demo-ui-app-checkcount-before-workload-7.png " ")
+
+
+    Note that while the shard is stopped, you can still run the workload.
+
+    If you are using a new terminal, use the commands below as **oracle** to switch to the **appclient** container, then switch to the "oracle" user and change to the $DEMO_MASTER directory.
+
+    ```
+
+    sudo podman exec -it appclient /bin/bash
+
+    ```
+
+    ```
+
+    su - oracle
+
+    ```
+
+    ```
+
+    cd $DEMO_MASTER
+    pwd
+    ls -rlt
+
+    ```
+
+    Run the workload using the command below in a terminal window that is switched to the **appclient** container.
+
+    ```
+
+    sh run.sh demo
+
+    ```
+
+    ![](./images/demo-ui-app-run-workload-7a.png " ")
+
+
+    ![](./images/demo-ui-app-run-workload-7b.png " ")
+
+
+    Refresh the Demo UI application browser window and observe that the count keeps increasing even though the shard is stopped.
+
+
+    ![](./images/demo-ui-app-count-inc-after-workload-7c.png " ")
+
+
+8. Stop the workload.
+
+    You can stop the workload that ran in the previous step using Ctrl+C.
+
+9. Run the command below in the terminal that is switched to **GSM** to automatically rebalance the leaders.
+
+    ```
+
+    gdsctl switchover ru -rebalance
+
+    ```
+    ![](./images/t3-2-auto-rebalance.png " ")
+
+10. Now we are going to perform a delete operation through the Demo UI application.
+
+    Take a look at the count before clicking the Delete button.
+
+    ![](./images/demo-ui-app-count-before-delete-9.png " ")
+
+    Click the Delete button in the browser window and delete a few users. You will notice that the count decreases.
+
+    ![](./images/demo-ui-app-after-del-count-9a.png " ")
+
+
+11. Run the command below in the terminal window logged in as **oracle** to start the shard, and see that shard3 joins back.
+
+    ```
+
+    sudo podman start shard3
+
+    ```
+
+    Run the command below in the terminal that is switched to **GSM** to check the status of the shards.
+
+    ```
+
+    gdsctl config shard
+
+    ```
+
+    ![](./images/demo-ui-app-start-shard-11.png " ")
+
+12. Run the command below in the terminal that is switched to **GSM** to automatically rebalance the leaders.
+
+    ```
+
+    gdsctl switchover ru -rebalance
+
+    ```
+    ![](./images/t3-2-auto-rebalance.png " ")
+
+
+## Task 5: Run the workload
 
 Please use the below steps to run the workload using the "app_schema" account with the available configuration files on the "appclient" container:
 
@@ -244,7 +405,7 @@ Notice that the log index is increasing as read and write operations are going o
 
 6. You can keep running the workload for some time while you perform the next task.
 
-## Task 5: Perform Failover Test
+## Task 6: Perform Failover Test
 
 What happens when one of the available shard databases goes down or is taken down for maintenance?
 
 Failover test by stopping shard1 to create shard1 down situation.
 
@@ -346,166 +507,6 @@ You can stop the workload that ran in the previous task using Ctrl+C.
 
 
-## Task 6: Raft Replication Demo with UI Application
-
-1. Please verify, if the Demo UI application is already opened on the browser window.
-
-    ![](./images/demo-ui-app-browser-1.png " ")
-
-2. In the list, please open the "More Details" tab for User No. 1.
-
-![](./images/demo-ui-app-click-more-det-2.png " ")
-
-Similarly, Open two additional tabs by clicking on "Update" for User No. 1
-
-![](./images/demo-ui-app-update-tab-3.png " ")
-
-
-3. Check User No. 1's Shard in Demo UI:
-Verify where the leadership is for User No. 1's shard in the Demo UI.
-
-![](./images/demo-ui-app-more-det-3.png " ")
-
-4. Go to the terminal window logged in as **oracle** and stop that shard.
-
-    ```
-
-    sudo podman stop shard3
-
-    ```
-
-    You can use below command to switch to **GSM**.
-
-    ```
-
-    sudo podman exec -i -t gsm1 /bin/bash
-
-    ```
-
-    Run the below command in terminal window that is switched to **GSM**, to see the status of the shard.
-
-    ```
-
-    gdsctl config shard
-
-    ```
-
-    ![](./images/demo-ui-app-stop-shard-4.png " ")
-
-5. Return to the Demo UI App browser window on "More Details" tab and click on "Refresh" button to observe that the leadership has automatically moved to another shard, indicating re-routing of the request.
-
-    ![](./images/demo-ui-app-refresh-more-det-5.png " ")
-
-6. Go to the first update tab in Demo UI Application and change the class.
-
-    ![](./images/demo-ui-app-update-class-6.png " ")
-
-    Click on the next update tab and refresh it to see that change in class.
-
-    ![](./images/demo-ui-app-class-updated-6.png " ")
-
-
-7. Run the Workload and check the count of customers in Demo UI App.
-
-    Check the count in browser window before running the workload.
-
-    ![](./images/demo-ui-app-checkcount-before-workload-7.png " ")
-
-
-    Note that while the shard is stopped, you can still run the workload.
-
-    If you are using a new terminal, you can use below command as **oracle** to switch to **appclient** container and the switch to the "oracle" user and then change the path to $DEMO_MASTER location.
-
-    ```
-
-    sudo podman exec -it appclient /bin/bash
-
-    ```
-
-    ```
-
-    su - oracle
-
-    ```
-
-    ```
-
-    cd $DEMO_MASTER
-    pwd
-    ls -rlt
-
-    ```
-
-    Run the workload using the below command in a terminal window that is switched to **appclient** container.
-
-    ```
-
-    sh run.sh demo
-
-    ```
-
-    ![](./images/demo-ui-app-run-workload-7a.png " ")
-
-
-    ![](./images/demo-ui-app-run-workload-7b.png " ")
-
-
-    Refresh the browser window for Demo UI application and you will observe that the count is increasing in the Demo UI even though the shard is stopped.
-
-
-    ![](./images/demo-ui-app-count-inc-after-workload-7c.png " ")
-
-
-8. Stop the Workload.
-
-    You can stop the workload that ran in the previous task using Ctrl+C.
-
-9. Run the below command in terminal that is switched to **GSM** to auto rebalance the leaders.
-
-    ```
-
-    gdsctl switchover ru -rebalance
-
-    ```
-    ![](./images/t3-2-auto-rebalance.png " ")
-
-10. Now, we are going to perform delete operation through Demo UI Application.
-
-    Take a look at the count before clicking on delete button.
-
-    ![](./images/demo-ui-app-count-before-delete-9.png " ")
-
-    Click on Delete button in the browser window and delete few users. You will notice that the count is decreasing.
-
-    ![](./images/demo-ui-app-after-del-count-9a.png " ")
-
-
-11. Run the below command in terminal window logged in as **oracle**. Start the shard and see that the shard1 is joining back.
-
-    ```
-
-    sudo podman start shard3
-
-    ```
-
-    Run the below command in terminal that is switched to **GSM** to check the status of the shard.
-
-    ```
-
-    gdsctl config shard
-
-    ```
-
-    ![](./images/demo-ui-app-start-shard-11.png " ")
-
-12. Run the below command in terminal that is switched to **GSM** to auto rebalance the leaders.
-
-    ```
-
-    gdsctl switchover ru -rebalance
-
-    ```
-    ![](./images/t3-2-auto-rebalance.png " ")
 
 
 You may now proceed to the next lab.
 
@@ -513,4 +514,4 @@ You may now proceed to the next lab.
 ## Acknowledgements
 * **Authors** - Deeksha Sehgal, Oracle Globally Distributed Database Database, Product Management, Senior Product Manager
 * **Contributors** - Pankaj Chandiramani, Shefali Bhargava, Ajay Joshi, Jyoti Verma
-* **Last Updated By/Date** - Deeksha Sehgal, Oracle Globally Distributed Database, Product Management, Senior Product Manager, Aug 2024
\ No newline at end of file
+* **Last Updated By/Date** - Ajay Joshi, Oracle Globally Distributed Database, Product Management, Consulting Member of Technical Staff, August 2024
\ No newline at end of file
diff --git a/sharding/raft23ai/troubleshooting/troubleshooting.md b/sharding/raft23ai/troubleshooting/troubleshooting.md
index 40219f972..4a9af5916 100644
--- a/sharding/raft23ai/troubleshooting/troubleshooting.md
+++ b/sharding/raft23ai/troubleshooting/troubleshooting.md
@@ -82,17 +82,17 @@ This lab assumes you have:
     Please validate parameter values and retry command.
 
     ![](./images/3-after-specified-correct-shard.png " ")
-    
+  
 4. If you get error shown in below screenshot during MOVE RU. You need to substitute the values according to the current environment.
 
     Use MOVE RU to move a follower replica of a replication unit from one shard database to another.
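    For reference, a minimal sketch of the command shape is shown below. This is hedged: the RU number and the source/target database names are placeholders rather than values from this lab, so substitute the shard database names reported by `gdsctl config shard` in your own environment and check `gdsctl help` for the exact options in your release.

    ```
    # Example shape only: move the follower replica of RU 1 between two shard databases
    gdsctl move ru -ru 1 -source <source_shard_db> -target <target_shard_db>

    # Re-check the shard status after the move
    gdsctl config shard
    ```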
-
-    Notes:
-
-    - Source database shouldn't contain the replica leader
+    Notes:
+
+    - Source database shouldn't contain the replica leader
 
-    - Target database should not already contain another replica of the replication unit
+    - Target database should not already contain another replica of the replication unit
 
     ![](./images/4-move-ru-error.png " ")
 
@@ -100,11 +100,9 @@ This lab assumes you have:
 
 5. If you get errors shown in below two screenshots during relocate chunk
 
     ![](./images/5-relocate-chunk-error-not-exist-source.png " ")
-
 
     ![](./images/5-relocate-chunk-ru-not-colocated.png " ")
-
 
     Please follow these guidelines while relocating chunk.
 
@@ -133,8 +131,6 @@ This lab assumes you have:
     ```
    Whitelabel Error Page
    This application has no explicit mapping for /error, so you are seeing this as a fallback.
-
-    Mon Aug 12 21:13:04 UTC 2024
     ```
 
     Please wait and go slow with the delete process. Rapid clicks can cause errors, so allow the operation to complete before clicking again.
 
 
 ## Acknowledgements
 * **Authors** - Deeksha Sehgal, Oracle Globally Distributed Database Database, Product Management, Senior Product Manager
 * **Contributors** - Pankaj Chandiramani, Shefali Bhargava, Ajay Joshi, Jyoti Verma
-* **Last Updated By/Date** - Deeksha Sehgal, Oracle Globally Distributed Database, Product Management, Senior Product Manager, Aug 2024
\ No newline at end of file
+* **Last Updated By/Date** - Ajay Joshi, Oracle Globally Distributed Database, Product Management, Consulting Member of Technical Staff, August 2024
\ No newline at end of file
diff --git a/sharding/raft23ai/ui-app/raft-demo-ui-app.md b/sharding/raft23ai/ui-app/raft-demo-ui-app.md
index 7f6ad70a3..980c48c9e 100644
--- a/sharding/raft23ai/ui-app/raft-demo-ui-app.md
+++ b/sharding/raft23ai/ui-app/raft-demo-ui-app.md
@@ -86,6 +86,7 @@ This lab assumes you have:
 
 ![](./images/before_switchover_ru_sort_shows_shard1_is_leader_for_ru1.png " ")
 
+
 2. Switchover RU#1 from shard1 to shard2 from the terminal:
 
     ```
@@ -130,24 +131,28 @@ In the bottom of the "More Details" page a link to Home Page is shown to return
 
 After adding customer, it brings back to the All-Customers List page. Total Customers count gets increased after adding a customer by 1. The customer details can be viewed with Api call format http://localhost:/8080/updateCustomer/ for given value of customerId.
 
+
 2. Update Customer: A customer can be edited either by using link "Update" link from the Home Page or directly using Api call format http://localhost:/8080/updateCustomer/
 
 ![Edit Customer>](./images/edit_customer.png " ")
 
 After updating customer, it brings back to the All-Customers List page. You can verify the updated customer details shown in UI or manually using Api call format http://localhost:/8080/updateCustomer/
 
+
 3. Delete Customer: A customer can delete either using link "Delete" or manually using Api call from the browser in the format http://localhost:8080/deleteCustomer/.
 
 After deleting customer, it brings back to the All-Customers List page. Total count on the All-Customers List page is reduce by 1.
 
+
 4. To Refresh the data on the "Home Page", you can use the Refresh link from the bottom section of the Home Page. Alternatively, reload the page from the browser's default refresh icon.
 
+
 5. "Home" Page link at the bottom the page brings to the first page and useful when you are at any higher page# and want to return to the first page of RAFT UI application.
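As a quick way to exercise the same endpoints outside the browser, a hedged sketch using curl is shown below. The customer id 1 is only an illustration, and the exact responses returned by the demo application are not reproduced here, so verify against your own environment.

```
# Open the update/details endpoint for customer 1 (same format as item 2 above)
curl http://localhost:8080/updateCustomer/1

# Delete customer 1 (the same GET-style call a browser would make, as in item 3 above)
curl http://localhost:8080/deleteCustomer/1
```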
-In addition to above information, the results from Lab "Explore Raft Replication Advanced Use-Cases" tasks e.g., for Raft Replication failovers, Scale UP or Scale Down, Move or Copy Replication Unit Replicas etc. all can be verified from Raft Demo UI.
+In addition to the above information, the results of the tasks from the previous labs "Explore Raft Replication Topology" and "Explore Raft Replication Advanced Use-Cases" (e.g., Raft Replication failovers, Scale Up or Scale Down, and Move or Copy Replication Unit Replicas) can all be verified from the Raft Demo UI.
 
 ## Acknowledgements
 * **Authors** - Ajay Joshi, Oracle Globally Distributed Database, Product Management, Consulting Member of Technical Staff
 * **Contributors** - Pankaj Chandiramani, Shefali Bhargava, Deeksha Sehgal, Jyoti Verma
-* **Last Updated By/Date** - Deeksha Sehgal, Oracle Globally Distributed Database, Product Management, Senior Product Manager, Aug 2024
\ No newline at end of file
+* **Last Updated By/Date** - Ajay Joshi, Oracle Globally Distributed Database, Product Management, Consulting Member of Technical Staff, August 2024
\ No newline at end of file