diff --git a/sharding/raft23ai/advanced-use-cases/advanced-use-cases.md b/sharding/raft23ai/advanced-use-cases/advanced-use-cases.md index e45c976e1..fa1cf6013 100644 --- a/sharding/raft23ai/advanced-use-cases/advanced-use-cases.md +++ b/sharding/raft23ai/advanced-use-cases/advanced-use-cases.md @@ -32,7 +32,7 @@ The same is true for scaling down using REMOVE SHARD and load balancing using MO 1. We will add another shard named SHARD4 to this setup and see the redistribution happens for the RUs. -Run in the terminal as **oracle** user to Create SHARD4 database container +Run in the terminal window logged in as **oracle** user to Create SHARD4 database container ``` @@ -60,6 +60,7 @@ Run in the terminal as **oracle** user to Create SHARD4 database container 3. Once the DB is up and running, run the below commands to complete the GSM configuration to deploy the new SHARD4: +Run the below commands in a terminal window logged in as **oracle**. ``` @@ -77,7 +78,7 @@ Run in the terminal as **oracle** user to Create SHARD4 database container ![](./images/t6-4-shard4-deploy.png " ") -4. Connect to GSM1, run in the terminal as **oracle** user and connect to the shard director server and run the below command to view the ongoing rebalancing tasks. You have to wait until the rebalancing task completes. +4. You can run below command as **oracle** to switch to **GSM**, if you are using a new terminal window. ``` @@ -87,6 +88,8 @@ Run in the terminal as **oracle** user to Create SHARD4 database container ![](./images/t6-4-connect-gsm1.png " ") + Run in ther terminal window switched to **GSM** to view the ongoing rebalancing tasks. You have to wait until the rebalancing task completes. + ``` gdsctl config task @@ -97,7 +100,7 @@ Run in the terminal as **oracle** user to Create SHARD4 database container ![](./images/t6-4-config-task-not-pending.png " ") -5. Exit from GSM1 and Run the below command as **oracle** user to validate the database shard4 container is healthy. +5. Run the below command in terminal window logged in as **oracle** user to validate the database shard4 container is healthy. ``` @@ -107,8 +110,7 @@ Run in the terminal as **oracle** user to Create SHARD4 database container ![](./images/t6-5-add-shard4-container-validation.png " ") -6. Connect to GSM1, run in the terminal as **oracle** user and connect to the shard director server. - +6. Run the below command as **oracle** to switch to **GSM**, if you are using a new terminal window. ``` sudo podman exec -i -t gsm1 /bin/bash @@ -116,7 +118,7 @@ Run in the terminal as **oracle** user to Create SHARD4 database container ``` - 7. Run below command to verify that, shard4 has been deployed. + 7. Run the below command in a terminal window that is switched to **GSM** to verify that, shard4 has been deployed. ``` @@ -124,7 +126,7 @@ Run in the terminal as **oracle** user to Create SHARD4 database container ``` - ![](./images/t6-7-shard4-config-shard.png " ") + ![](./images/t6-7-shard4-config-shard.png " ") 8. Run below command to check the configuration of chunks. @@ -200,13 +202,14 @@ Use MOVE RU to move a follower replica of a replication unit from one shard data ![](./images/t7-2-ru-status-before-move-ru.png " ") -3. Choose the RU with the role follower associated with the respective shard and move to a shard which is NOT having that RU Replica. +3. Choose the RU with the role follower associated with the respective shard and move to a shard which is NOT having that RU Replica. +Following is the example used in the live labs environment. 
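+Keep in mind that the source shard must currently hold a follower replica of the chosen RU (not its leader), and the target shard must not already hold any replica of that RU.
+For reference, the general form of the command is sketched below with placeholder names (a template, not literal values for your setup):
+
+    ```
+    gdsctl move ru -ru <ru_id> -source <shard_db_with_follower> -target <shard_db_without_replica>
+    ```
+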
+User has to substitute values as per their environment. ``` - gdsctl move ru -ru 1 -source orcl4cdb_orcl4pdb -target orcl3cdb_orcl3pdb - ``` + ![](./images/t7-3-move-ru.png " ") 4. Please check the status of the RU that you just moved. @@ -239,7 +242,7 @@ Copy a replication unit from one shard database to another using COPY RU. This a When we use the replace option, the copy is done and also the replica is removed from the shard which is specified with the replace option thus keeping the replica count to be 3. - Connect to GSM1, run in the terminal as **oracle** user and connect to the shard director server. + Connect to GSM1, run in the terminal as **oracle** user and connect to the **GSM**, if you a are using a new terminal. ``` @@ -247,7 +250,7 @@ Copy a replication unit from one shard database to another using COPY RU. This a ``` -2. Connect with GSM1 and run the below command to check the status. +2. Run the below command to check the status. ``` @@ -313,101 +316,101 @@ Copy a replication unit from one shard database to another using COPY RU. This a 2. To move a chunk from one Raft replication unit to another replication unit, use the GDSCTL RELOCATE CHUNK command. -To use RELOCATE CHUNK, the source and target replication unit leaders must be located on the same shard, and their followers must also be on the same shards. If they are not on the same shard, use SWITCHOVER RU to move the leader and MOVE RU to move the followers to co-located shards. + To use RELOCATE CHUNK, the source and target replication unit leaders must be located on the same shard, and their followers must also be on the same shards. If they are not on the same shard, use SWITCHOVER RU to move the leader and MOVE RU to move the followers to co-located shards. -When moving chunks, specify the chunk ID numbers, the source RU ID from which to move them, and the target RU ID to move them to + When moving chunks, specify the chunk ID numbers, the source RU ID from which to move them, and the target RU ID to move them to -Suppose we want to relocate chunk 3 from RU 8 to RU 2, RU 8 and 2 leaders must on same shards and RU 8 and 2 followers must be on same shards, if required; use SWITCHOVER RU to move the leader and MOVE RU to move the followers to co-located shards. + Suppose we want to relocate chunk 3 from RU 8 to RU 2, RU 8 and 2 leaders must on same shards and RU 8 and 2 followers must be on same shards, if required; use SWITCHOVER RU to move the leader and MOVE RU to move the followers to co-located shards. -Check the status of RU's from which you are trying to relocate: + Check the status of RU's from which you are trying to relocate: -``` - -gdsctl status ru -ru 8 - -``` + ``` + + gdsctl status ru -ru 8 + + ``` -``` - -gdsctl status ru -ru 2 - -``` + ``` + + gdsctl status ru -ru 2 + + ``` -![](./images/t9-2-ru-status.png " ") + ![](./images/t9-2-ru-status.png " ") -Change the RU leader using below command, if required. + Change the RU leader using below command, if required. 
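+    A switchover is only required when the two status outputs above show the leaders of RU 8 and RU 2 on different shards; the -shard option makes the replication unit member on the specified shard database the new leader of the given RU.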
-``` - -gdsctl switchover ru -ru 8 -shard orcl3cdb_orcl3pdb - -``` + ``` + + gdsctl switchover ru -ru 8 -shard orcl3cdb_orcl3pdb + + ``` -![](./images/t9-2-switchover_ru.png " ") + ![](./images/t9-2-switchover_ru.png " ") -Check the status for RU's again, after switchover completes: + Check the status for RU's again, after switchover completes: -``` - -gdsctl status ru -ru 8 - -``` + ``` + + gdsctl status ru -ru 8 + + ``` -``` - -gdsctl status ru -ru 2 - -``` + ``` + + gdsctl status ru -ru 2 + + ``` -![](./images/t9-2a-status-ru-after-switchover.png " ") + ![](./images/t9-2a-status-ru-after-switchover.png " ") -Move the RU follower using below command, if required + Move the RU follower using below command, if required. + This is an example command based on the current environment. + User has to substitute values basis their environment. -``` - -gdsctl move ru -ru 8 -source orcl1cdb_orcl1pdb -target orcl2cdb_orcl2pdb - -``` + ``` + gdsctl move ru -ru 8 -source orcl1cdb_orcl1pdb -target orcl2cdb_orcl2pdb + ``` -![](./images/t9-2-move-ru.png " ") + ![](./images/t9-2-move-ru.png " ") -Check the status for RU's again, after move completes, to verify: + Check the status for RU's again, after move completes, to verify: -``` - -gdsctl status ru -ru 8 - -``` + ``` + + gdsctl status ru -ru 8 + + ``` -``` - -gdsctl status ru -ru 2 - -``` + ``` + + gdsctl status ru -ru 2 + + ``` -![](./images/t9-2b-status_ru.png " ") + ![](./images/t9-2b-status_ru.png " ") 2. Run the below command to relocate the chunk from GSM1: + This is an example command based on the current live labs environment. + User has to substitute values basis their environment. -``` - -gdsctl relocate chunk -chunk 3 -sourceru 8 -targetru 2 - -``` + ``` + gdsctl relocate chunk -chunk 3 -sourceru 8 -targetru 2 + ``` -![](./images/t9-2-relocate_chunk.png " ") + ![](./images/t9-2-relocate_chunk.png " ") 3. Please check the status of the chunks and RU, after relocate completes. -``` - -gdsctl status ru -show_chunks - -``` + ``` + + gdsctl status ru -show_chunks + + ``` -![](./images/t9-3-status_chunks.png " ") + ![](./images/t9-3-status_chunks.png " ") ## Task 5: Scale Down with Raft Replication @@ -415,202 +418,213 @@ Scaling down can be done using REMOVE SHARD and load balancing using MOVE RU. 1. Run below command to the check the status of chunks and RUs -``` - -gdsctl status ru -show_chunks - -``` + ``` + + gdsctl status ru -show_chunks + + ``` -![](./images/t10-1-chunk-status-before.png " ") + ![](./images/t10-1-chunk-status-before.png " ") 2. We want to Scale Down by removing the SHARD4. We will first change the replication unit leaders from shard4 to other shards and move the RUs from the SHARD4 to other shards +These are commands based on the current environment. +User has to substitute values basis their environment. -``` - -gdsctl switchover ru -ru 7 -shard orcl1cdb_orcl1pdb - -``` + ``` + + gdsctl switchover ru -ru 7 -shard orcl1cdb_orcl1pdb + + ``` -![](./images/t10-2a-switchover-ru-leader.png " ") + ![](./images/t10-2a-switchover-ru-leader.png " ") -``` - -gdsctl switchover ru -ru 4 -shard orcl2cdb_orcl2pdb - -``` + ``` + + gdsctl switchover ru -ru 4 -shard orcl2cdb_orcl2pdb + + ``` -![](./images/t10-2b-switchover-ru-leader.png " ") + ![](./images/t10-2b-switchover-ru-leader.png " ") -Check the status of chunks after switchover. + Check the status of chunks after switchover. 
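+    No replication unit should now be led from shard4; this matters because the MOVE RU commands that follow cannot move a replica that is currently the leader of its replication unit.
+    Besides the chunk-level view below, you can list only the leaders to confirm this — a hedged sketch, assuming your GDSCTL release supports the -leaders filter:
+
+    ```
+    gdsctl status ru -leaders
+    ```
+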
-``` - -gdsctl status ru -show_chunks - -``` - -![](./images/t10-2-after-switchover.png " ") - -We perform move ru until all the RU followers are moved from shard4 to other shards. -Source database shouldn't contain the replica leader -Target database should not already contain another replica of the replication unit. - -``` - -gdsctl move ru -ru 7 -source orcl4cdb_orcl4pdb -target orcl2cdb_orcl2pdb - -``` - -``` - -gdsctl move ru -ru 8 -source orcl4cdb_orcl4pdb -target orcl1cdb_orcl1pdb - -``` - -``` - -gdsctl move ru -ru 2 -source orcl4cdb_orcl4pdb -target orcl1cdb_orcl1pdb - -``` - -``` - -gdsctl move ru -ru 5 -source orcl4cdb_orcl4pdb -target orcl1cdb_orcl1pdb - -``` - -![](./images/t10-2a-move-ru.png " ") - -``` - -gdsctl move ru -ru 1 -source orcl4cdb_orcl4pdb -target orcl3cdb_orcl3pdb - -``` - -``` - -gdsctl move ru -ru 3 -source orcl4cdb_orcl4pdb -target orcl3cdb_orcl3pdb - -``` - -``` - -gdsctl move ru -ru 4 -source orcl4cdb_orcl4pdb -target orcl3cdb_orcl3pdb - -``` - -``` - -gdsctl move ru -ru 6 -source orcl4cdb_orcl4pdb -target orcl3cdb_orcl3pdb - -``` - -![](./images/t10-2b-move-ru.png " ") + ``` + + gdsctl status ru -show_chunks + + ``` + + ![](./images/t10-2-after-switchover.png " ") + + We perform move ru until all the RU followers are moved from shard4 to other shards. + Source database shouldn't contain the replica leader + Target database should not already contain another replica of the replication unit. + + ``` + + gdsctl move ru -ru 7 -source orcl4cdb_orcl4pdb -target orcl2cdb_orcl2pdb + + ``` + + ``` + + gdsctl move ru -ru 8 -source orcl4cdb_orcl4pdb -target orcl1cdb_orcl1pdb + + ``` + + ``` + + gdsctl move ru -ru 2 -source orcl4cdb_orcl4pdb -target orcl1cdb_orcl1pdb + + ``` + + ``` + + gdsctl move ru -ru 5 -source orcl4cdb_orcl4pdb -target orcl1cdb_orcl1pdb + + ``` + + ![](./images/t10-2a-move-ru.png " ") + + ``` + + gdsctl move ru -ru 1 -source orcl4cdb_orcl4pdb -target orcl3cdb_orcl3pdb + + ``` + + ``` + + gdsctl move ru -ru 3 -source orcl4cdb_orcl4pdb -target orcl3cdb_orcl3pdb + + ``` + + ``` + + gdsctl move ru -ru 4 -source orcl4cdb_orcl4pdb -target orcl3cdb_orcl3pdb + + ``` + + ``` + + gdsctl move ru -ru 6 -source orcl4cdb_orcl4pdb -target orcl3cdb_orcl3pdb + + ``` + + ![](./images/t10-2b-move-ru.png " ") 3. Check the status after the move. -``` - -gdsctl status ru -show_chunks - -``` + ``` + + gdsctl status ru -show_chunks + + ``` -![](./images/t10-3-chunk-status-after-move-ru.png " ") + ![](./images/t10-3-chunk-status-after-move-ru.png " ") 4. Move the chunks out of the SHARD4 before we can delete this SHARD: - Run this command from GSM1. + Run this command from **GSM**. -``` - -python3 /opt/oracle/scripts/sharding/scripts/main.py --movechunks="shard_db=ORCL4CDB;shard_pdb=ORCL4PDB" - -``` + ``` + + python3 /opt/oracle/scripts/sharding/scripts/main.py --movechunks="shard_db=ORCL4CDB;shard_pdb=ORCL4PDB" + + ``` -![](./images/t10-4-move-all-chunks.png " ") + ![](./images/t10-4-move-all-chunks.png " ") 5. Check the status for the chunks across all the RU's again and and make sure no DDL error is seen. -``` - -gdsctl status ru -show_chunks - -``` + ``` + + gdsctl status ru -show_chunks + + ``` -``` - -gdsctl show ddl -failed_only - -``` + ``` + + gdsctl show ddl -failed_only + + ``` -![](./images/t10-5-show-ddl-failed-after-move-all.png " ") + ![](./images/t10-5-show-ddl-failed-after-move-all.png " ") 6. Complete the SHARD4 delete operation. 
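+    The helper script below removes shard4 from the sharded database configuration in this lab setup; it assumes the chunks and replication unit replicas have already been moved off shard4 in the previous steps.
+    Outside this lab, the underlying GDSCTL operation is REMOVE SHARD — a hedged sketch using this environment's shard name (verify the exact syntax for your release):
+
+    ```
+    gdsctl remove shard -shard orcl4cdb_orcl4pdb
+    ```
+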
-``` - -python3 /opt/oracle/scripts/sharding/scripts/main.py --deleteshard="shard_host=oshard4-0;shard_db=ORCL4CDB;shard_pdb=ORCL4PDB;shard_port=1521;shard_group=shardgroup1" - -``` + ``` + + python3 /opt/oracle/scripts/sharding/scripts/main.py --deleteshard="shard_host=oshard4-0;shard_db=ORCL4CDB;shard_pdb=ORCL4PDB;shard_port=1521;shard_group=shardgroup1" + + ``` -![](./images/t10-6-delete-shard4.png " ") + ![](./images/t10-6-delete-shard4.png " ") 7. Check the status after scale down operation is completed. -``` - -gdsctl config shard - -``` + ``` + + gdsctl config shard + + ``` -``` - -gdsctl config chunks - -``` + ``` + + gdsctl config chunks + + ``` -![](./images/t10-7a-status-after-scale-down.png " ") + ![](./images/t10-7a-status-after-scale-down.png " ") -``` - -gdsctl status ru -show_chunks - -``` + ``` + + gdsctl status ru -show_chunks + + ``` -![](./images/t10-7b-status-after-scale-down.png " ") + ![](./images/t10-7b-status-after-scale-down.png " ") -8. Stop and remove the shard4 container. Shard4 container is up before stop and remove. +8. Run the below command in a terminal window logged in as **oracle** to stop and remove the shard4 container. Shard4 container is up before stop and remove. -![](./images/t10-8-stop-rm-shard4-container.png " ") + ![](./images/t10-8-stop-rm-shard4-container.png " ") -``` - -sudo podman stop shard4 - -``` + ``` + + sudo podman stop shard4 + + ``` -``` - -sudo podman rm shard4 - -``` + ``` + + sudo podman rm shard4 + + ``` 9. Check if the shard4 has been removed. -``` - -sudo podman ps -a - -``` + ``` + + sudo podman ps -a + + ``` + + ![](./images/t10-9-after-scale-down-podman.png " ") -![](./images/t10-9-after-scale-down-podman.png " ") +10. Run the below command in terminal that is switched to **GSM**,to auto rebalance the leaders. + + ``` + + gdsctl switchover ru -rebalance + + ``` + ![](./images/t3-2-auto-rebalance.png " ") -You may now proceed to the next lab. +This is the end of the Raft Replication Workshop. ## **Appendix 1**: Raft Replication Overview @@ -637,4 +651,4 @@ If you selected the **Green Button** for this workshop and still have an active ## Acknowledgements * **Authors** - Deeksha Sehgal, Oracle Globally Distributed Database Database, Product Management, Senior Product Manager * **Contributors** - Pankaj Chandiramani, Shefali Bhargava, Ajay Joshi, Jyoti Verma -* **Last Updated By/Date** - Ajay Joshi, Oracle Globally Distributed Database, Product Management, Consulting Member of Technical Staff, Aug 2024 \ No newline at end of file +* **Last Updated By/Date** - Deeksha Sehgal, Oracle Globally Distributed Database, Product Management, Senior Product Manager, Aug 2024 \ No newline at end of file diff --git a/sharding/raft23ai/advanced-use-cases/images/t3-2-auto-rebalance.png b/sharding/raft23ai/advanced-use-cases/images/t3-2-auto-rebalance.png new file mode 100644 index 000000000..38e5b8d5c Binary files /dev/null and b/sharding/raft23ai/advanced-use-cases/images/t3-2-auto-rebalance.png differ diff --git a/sharding/raft23ai/initialize-environment/initialize-environment.md b/sharding/raft23ai/initialize-environment/initialize-environment.md index 30f6fa70c..0718b6ccf 100644 --- a/sharding/raft23ai/initialize-environment/initialize-environment.md +++ b/sharding/raft23ai/initialize-environment/initialize-environment.md @@ -6,10 +6,6 @@ In this lab we will review and startup all components required to successfully r *Estimated Lab Time:* 10 Minutes. -Watch the video for a quick walk through of the Initialize Environment lab. 
- -[Initialize Environment lab](youtube:e3EXx3BMhec) - ### Objectives - Initialize the workshop environment. diff --git a/sharding/raft23ai/topology/images/t3-6-exit-gsm1.png b/sharding/raft23ai/topology/images/t3-6-exit-gsm1.png deleted file mode 100644 index 4d55cd901..000000000 Binary files a/sharding/raft23ai/topology/images/t3-6-exit-gsm1.png and /dev/null differ diff --git a/sharding/raft23ai/topology/images/terminal-1-oracle.png b/sharding/raft23ai/topology/images/terminal-1-oracle.png new file mode 100644 index 000000000..9d5f3e10e Binary files /dev/null and b/sharding/raft23ai/topology/images/terminal-1-oracle.png differ diff --git a/sharding/raft23ai/topology/images/terminal-2-gsm.png b/sharding/raft23ai/topology/images/terminal-2-gsm.png new file mode 100644 index 000000000..256ce042a Binary files /dev/null and b/sharding/raft23ai/topology/images/terminal-2-gsm.png differ diff --git a/sharding/raft23ai/topology/images/terminal-3-appclient.png b/sharding/raft23ai/topology/images/terminal-3-appclient.png new file mode 100644 index 000000000..45a4a40ca Binary files /dev/null and b/sharding/raft23ai/topology/images/terminal-3-appclient.png differ diff --git a/sharding/raft23ai/topology/topology.md b/sharding/raft23ai/topology/topology.md index 24a1ca5e1..b9d99f3a6 100644 --- a/sharding/raft23ai/topology/topology.md +++ b/sharding/raft23ai/topology/topology.md @@ -33,7 +33,33 @@ This lab assumes you have: ## Task 1: Check for containers in your VM -1. Open a terminal window and execute below as **opc** user. +1. Please open three terminal windows. + + First terminal logged in as **oracle** user. + + ![](./images/terminal-1-oracle.png " ") + + Second Terminal switched to **GSM** level. + + ``` + + sudo podman exec -i -t gsm1 /bin/bash + + ``` + + ![](./images/terminal-2-gsm.png " ") + + Third Terminal switched to **appclient** container. + + ``` + + sudo podman exec -it appclient /bin/bash + + ``` + + ![](./images/terminal-3-appclient.png " ") + +2. Run the below command on terminal window that is logged in as as **oracle** user. ``` @@ -53,7 +79,8 @@ Changes to data made by a DML are recorded in the Raft log. A commit record is a For more details check [Raft Replication Configuration and Management] (https://docs.oracle.com/en/database/oracle/oracle-database/23/shard/raft-replication.html#GUID-AF14C34B-4F55-4528-8B28-5073A3BFD2BE) -1. Run in the terminal as **oracle** user and connect to the shard director server. +1. Run the below command to switch to **GSM**, if you are using a new terminal. + ``` sudo podman exec -i -t gsm1 /bin/bash @@ -62,7 +89,7 @@ For more details check [Raft Replication Configuration and Management] (https:// ![](./images/t2-1-podman-gsm1.png " ") -2. Verify sharding topology using the **CONFIG** command. +2. Use the terminal window that is switched to **GSM**. Verify sharding topology using the **CONFIG** command. ``` @@ -105,9 +132,11 @@ For more details check [Raft Replication Configuration and Management] (https:// ## Task 3: Changing the Replication Unit Leader -Using SWITCHOVER RU, you can change which replica is the leader for the specified replication unit. The -db option makes the specified database the new leader of the given RU. + Using SWITCHOVER RU, you can change which replica is the leader for the specified replication unit. + + The -shard option makes the replication unit member on the specified shard database the new leader of the given RU. -1. Run the below command on GSM1 to view the status of all the leaders +1. 
Run the below command on **GSM** terminal window to view the status of all the leaders ``` @@ -154,29 +183,22 @@ Using SWITCHOVER RU, you can change which replica is the leader for the specifie ![](./images/t3-5-status-after-leader-change.png " ") -6. Exit from GSM1. - - ``` - - exit - - ``` - ![](./images/t3-6-exit-gsm1.png " ") ## Task 4: Run the workload Please use the below steps to run the workload using the "app_schema" account with the available configuration files on the "appclient" container: -1. Switch to the "appclient" container +1. You can use the below command if you need to switch to appclient container in a new terminal window. ``` sudo podman exec -it appclient /bin/bash ``` + ![](./images/t4-1-appclient-container.png " ") -2. Switch to the "oracle" user +2. Use the terminal window that is switched to the "appclient" container. Switch to the "oracle" user. ``` @@ -208,8 +230,8 @@ Please use the below steps to run the workload using the "app_schema" account wi ![](./images/t4-4-run-workload.png " ") -5. During this time, you can continue to check the RU details from another session on the "gsm1" container from "gdsctl" prompt. -Notice that the log index is increasing as there are read and write operations are going on. +5. During this time, you can continue to check the RU details from another terminal window switched to **gsm** . +Notice that the log index is increasing as read and write operations are going on. ``` @@ -227,7 +249,7 @@ Notice that the log index is increasing as there are read and write operations a What happens when one of the available shard databases goes down or is taken down for maintenance? Failover test by stopping shard1 to create shard1 down situation. -1. Run the below command as **oracle** user to check the status for all the containers. +1. You can run the below command in a terminal window logged in as **oracle** user to check the status for all the containers. ``` @@ -249,7 +271,7 @@ Failover test by stopping shard1 to create shard1 down situation. ![](./images/t5-2-stop-shard1.png " ") -3. Switch to GSM1 on another terminal session and check the status for RU's and you will see that database orcl1cdb_orcl1pdb is not present. +3. Below command can be used to switch to **GSM**, if you are using a new terminal. ``` @@ -257,6 +279,8 @@ Failover test by stopping shard1 to create shard1 down situation. ``` + Run below in the terminal window that is switched to **GSM** and check the status of shards, RU's and you will see that database orcl1cdb_orcl1pdb is not present. + ``` gdsctl config shard @@ -273,7 +297,8 @@ Failover test by stopping shard1 to create shard1 down situation. You will see that shard1 down situation has no impact on the running workload. -4. Start the shard1 using the podman start command, to reflect that shard1 is joining back. +4. On a terminal window logged in as **oracle**. +Start the shard1 using the podman start command, to reflect that shard1 is joining back. ``` @@ -284,7 +309,7 @@ You will see that shard1 down situation has no impact on the running workload. ![](./images/t5-4-startup-shard1.png " ") -5. On a parallel session switch to GSM1, check the status of shard, RU's and see that shard1 has joined back. +5. You can use the below command as **oracle** to switch to **GSM**. ``` @@ -292,6 +317,8 @@ You will see that shard1 down situation has no impact on the running workload. ``` + On a terminal window switched to **GSM**, check the status of shard, RU's and see that shard1 has joined back. 
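+    As the commands below show, shard1 typically rejoins holding only follower replicas: while it was down, new leaders were elected on the surviving shards for the replication units it had been leading, and its replicas now catch up by replaying the Raft log, so the log index reported by gdsctl status ru converges toward the leaders' values. This is also why the leader rebalance in the next step is useful.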
+ ``` gdsctl config shard @@ -308,6 +335,16 @@ You will see that shard1 down situation has no impact on the running workload. You can stop the workload that ran in the previous task using Ctrl+C. +6. Run the below command in terminal that is switched to **GSM** to auto rebalance the leaders. + + ``` + + gdsctl switchover ru -rebalance + + ``` + ![](./images/t3-2-auto-rebalance.png " ") + + ## Task 6: Raft Replication Demo with UI Application @@ -329,7 +366,7 @@ Verify where the leadership is for User No. 1's shard in the Demo UI. ![](./images/demo-ui-app-more-det-3.png " ") -4. Go to the terminal and stop that shard. +4. Go to the terminal window logged in as **oracle** and stop that shard. ``` @@ -337,104 +374,113 @@ Verify where the leadership is for User No. 1's shard in the Demo UI. ``` + You can use below command to switch to **GSM**. + ``` sudo podman exec -i -t gsm1 /bin/bash ``` + Run the below command in terminal window that is switched to **GSM**, to see the status of the shard. + ``` gdsctl config shard ``` -![](./images/demo-ui-app-stop-shard-4.png " ") + ![](./images/demo-ui-app-stop-shard-4.png " ") 5. Return to the Demo UI App browser window on "More Details" tab and click on "Refresh" button to observe that the leadership has automatically moved to another shard, indicating re-routing of the request. -![](./images/demo-ui-app-refresh-more-det-5.png " ") + ![](./images/demo-ui-app-refresh-more-det-5.png " ") 6. Go to the first update tab in Demo UI Application and change the class. -![](./images/demo-ui-app-update-class-6.png " ") + ![](./images/demo-ui-app-update-class-6.png " ") -Go to the next update tab and refresh it to see that change in class. + Click on the next update tab and refresh it to see that change in class. -![](./images/demo-ui-app-class-updated-6.png " ") + ![](./images/demo-ui-app-class-updated-6.png " ") 7. Run the Workload and check the count of customers in Demo UI App. -Check the count in browser window before running the workload. + Check the count in browser window before running the workload. -![](./images/demo-ui-app-checkcount-before-workload-7.png " ") + ![](./images/demo-ui-app-checkcount-before-workload-7.png " ") -Note that while the shard is stopped, you can still run the workload. + Note that while the shard is stopped, you can still run the workload. -Switch to the "appclient" container + If you are using a new terminal, you can use below command as **oracle** to switch to **appclient** container and the switch to the "oracle" user and then change the path to $DEMO_MASTER location. -``` - -sudo podman exec -it appclient /bin/bash - -``` + ``` + + sudo podman exec -it appclient /bin/bash + + ``` -Switch to the "oracle" user + ``` + + su - oracle + + ``` -``` - -su - oracle - -``` + ``` + + cd $DEMO_MASTER + pwd + ls -rlt + + ``` -Change the path to $DEMO_MASTER location + Run the workload using the below command in a terminal window that is switched to **appclient** container. -``` - -cd $DEMO_MASTER -pwd -ls -rlt - -``` + ``` + + sh run.sh demo + + ``` -Run the workload using the below command + ![](./images/demo-ui-app-run-workload-7a.png " ") -``` - -sh run.sh demo - -``` -![](./images/demo-ui-app-run-workload-7a.png " ") + ![](./images/demo-ui-app-run-workload-7b.png " ") -![](./images/demo-ui-app-run-workload-7b.png " ") + Refresh the browser window for Demo UI application and you will observe that the count is increasing in the Demo UI even though the shard is stopped. 
-Refresh the browser window for Demo UI application and you will observe that the count is increasing in the Demo UI even though the shard is stopped. + ![](./images/demo-ui-app-count-inc-after-workload-7c.png " ") -![](./images/demo-ui-app-count-inc-after-workload-7c.png " ") +8. Stop the Workload. + You can stop the workload that ran in the previous task using Ctrl+C. -8. Stop the Workload. +9. Run the below command in terminal that is switched to **GSM** to auto rebalance the leaders. -You can stop the workload that ran in the previous task using Ctrl+C. + ``` + + gdsctl switchover ru -rebalance + + ``` + ![](./images/t3-2-auto-rebalance.png " ") -9. Now, we are going to perform delete operation through Demo UI Application. +10. Now, we are going to perform delete operation through Demo UI Application. -Take a look at the count before clicking on delete button. + Take a look at the count before clicking on delete button. -![](./images/demo-ui-app-count-before-delete-9.png " ") + ![](./images/demo-ui-app-count-before-delete-9.png " ") -Click on Delete button in the browser window and delete few users. You will notice that the count is decreasing. -s -![](./images/demo-ui-app-after-del-count-9a.png " ") + Click on Delete button in the browser window and delete few users. You will notice that the count is decreasing. + ![](./images/demo-ui-app-after-del-count-9a.png " ") -11. Start the shard and see that the shard1 is joining back. + +11. Run the below command in terminal window logged in as **oracle**. Start the shard and see that the shard1 is joining back. ``` @@ -442,7 +488,24 @@ s ``` -![](./images/demo-ui-app-start-shard-11.png " ") + Run the below command in terminal that is switched to **GSM** to check the status of the shard. + + ``` + + gdsctl config shard + + ``` + + ![](./images/demo-ui-app-start-shard-11.png " ") + +12. Run the below command in terminal that is switched to **GSM** to auto rebalance the leaders. + + ``` + + gdsctl switchover ru -rebalance + + ``` + ![](./images/t3-2-auto-rebalance.png " ") You may now proceed to the next lab. @@ -450,4 +513,4 @@ You may now proceed to the next lab. ## Acknowledgements * **Authors** - Deeksha Sehgal, Oracle Globally Distributed Database Database, Product Management, Senior Product Manager * **Contributors** - Pankaj Chandiramani, Shefali Bhargava, Ajay Joshi, Jyoti Verma -* **Last Updated By/Date** - Ajay Joshi, Oracle Globally Distributed Database, Product Management, Consulting Member of Technical Staff, July 2024 \ No newline at end of file +* **Last Updated By/Date** - Deeksha Sehgal, Oracle Globally Distributed Database, Product Management, Senior Product Manager, Aug 2024 \ No newline at end of file diff --git a/sharding/raft23ai/ui-app/raft-demo-ui-app.md b/sharding/raft23ai/ui-app/raft-demo-ui-app.md index 56dbf9ab6..7f6ad70a3 100644 --- a/sharding/raft23ai/ui-app/raft-demo-ui-app.md +++ b/sharding/raft23ai/ui-app/raft-demo-ui-app.md @@ -143,126 +143,11 @@ In the bottom of the "More Details" page a link to Home Page is shown to return 5. "Home" Page link at the bottom the page brings to the first page and useful when you are at any higher page# and want to return to the first page of RAFT UI application. -## Task 4: Run the workload to view newly added customers from UI App -1. 
You can use the below steps to run the additional workload and view those in UI after refreshing the page: +In addition to above information, the results from Lab "Explore Raft Replication Advanced Use-Cases" tasks e.g., for Raft Replication failovers, Scale UP or Scale Down, Move or Copy Replication Unit Replicas etc. all can be verified from Raft Demo UI. - ``` - - sudo podman exec -it --user oracle appclient /bin/bash - cd $DEMO_MASTER - sh run.sh demo - - ``` - - ![](./images/additional_workload.png " ") - -2. Monitor the load from UI app: After additional demo data load, count will increase and all customers including recently added will be shown like below: - - ![](./images/all_customer_after_additonal_workload.png " ") - -3. You can check the RU details from another session on the "gsm1" container from "gdsctl" prompt. Notice that the log index is increasing as there are read and write operations are going on like you have verified in Lab4's task 5. - - ``` - - gdsctl ru -sort - - ``` - -4. You can keep running the workload while you perform the next task or exit workload using Ctrl+C. Keep observing All customers page via http://localhost:8080 to view the Total count is increasing accordingly after refreshing the Home Page. - -## Task 5: Verify data from UI during Failover Test - -What's effect on Application UI when one of the available shard databases goes down or is taken down for maintenance? Since RU leadership will changes from shard1 to any competing shards, UI will keep showing data from the catalog without any this UI application downtime. - -Run the similar steps from Lab "Explore Raft Replication Topology - Task 5: Perform Failover Test" like below: - -1. Run the below command as **oracle** user to check the status for all the containers. - - ``` - - sudo podman ps -a - - ``` - - ![](./images/podman-containers.png " ") - -2. Run the below command as **oracle** to stop shard1. - - ``` - - sudo podman stop shard1 - - ``` - - ![](./images/stop-shard1.png " ") - -3. Switch to GSM1 on another terminal session and check the status for RU's and you will see that database orcl1cdb_orcl1pdb is not present. - - ``` - - sudo podman exec -i -t gsm1 /bin/bash - - ``` - - ``` - - gdsctl config shard - - ``` - - ``` - - gdsctl status ru -show_chunks - - ``` - - ![](./images/status-chunks-after-shard1-down.png " ") - -You will see that shard1 stop situation has no impact on the running UI application and even if workload is kept on running. UI can show the newly added records and counts also increase when we refresh the page by click the Refresh link or browser's refresh icon. Records and counts can be verified from SQL*plus client as well after connecting to the catalog DB from the catalog container. - -4. Start the shard1 using the podman start command, to reflect that shard1 is joining back. - - ``` - - sudo podman start shard1 - - ``` - - ![](./images/startup-shard1.png " ") - -5. On a parallel session switch to GSM1, check the status of shard, RU's and see that shard1 has joined back. - - ``` - - sudo podman exec -i -t gsm1 /bin/bash - - ``` - - ``` - - gdsctl config shard - - ``` - - ``` - - gdsctl status ru -show_chunks - - ``` - - ![](./images/status-chunks-after-startup-shard1.png " ") - -You can stop the workload that ran in the previous task using Ctrl+C. - -In addition to above tasks, the results from Lab "Explore Raft Replication Topology" tasks e.g., for Raft Replication failovers, Scale UP or Scale Down, Move or Copy Replication Unit Replicas etc. 
all can be verified from Raft Demo UI. - -This is the end of the Raft Replication Workshop. - -## Rate this Workshop -When you are finished, don't forget to rate this workshop! We rely on this feedback to help us improve and refine our LiveLabs catalog. Follow the steps to submit your rating. ## Acknowledgements * **Authors** - Ajay Joshi, Oracle Globally Distributed Database, Product Management, Consulting Member of Technical Staff * **Contributors** - Pankaj Chandiramani, Shefali Bhargava, Deeksha Sehgal, Jyoti Verma -* **Last Updated By/Date** - Ajay Joshi, Oracle Globally Distributed Database, Product Management, Consulting Member of Technical Staff, July 2024 \ No newline at end of file +* **Last Updated By/Date** - Deeksha Sehgal, Oracle Globally Distributed Database, Product Management, Senior Product Manager, Aug 2024 \ No newline at end of file diff --git a/sharding/raft23ai/workshops/desktop/manifest.json b/sharding/raft23ai/workshops/desktop/manifest.json index e7c1299b5..038052097 100644 --- a/sharding/raft23ai/workshops/desktop/manifest.json +++ b/sharding/raft23ai/workshops/desktop/manifest.json @@ -40,15 +40,15 @@ "filename": "../../advanced-use-cases/advanced-use-cases.md" }, { - "title": "Lab 6: Demo UI Application for Raft Replication", - "description": "Demo UI Application for Raft Replication", - "filename": "../../ui-app/raft-demo-ui-app.md" - }, - { - "title": "Lab 7: Clean up Stack and Instances", + "title": "Lab 6: Clean up Stack and Instances", "description": "Clean up ORM Stack and instances", "filename": "../../cleanup/cleanup.md" }, + { + "title": "Appendix: Demo UI Application for Raft Replication", + "description": "Demo UI Application for Raft Replication", + "filename": "../../ui-app/raft-demo-ui-app.md" + }, { "title": "Need Help?", "description": "Solutions to Common Problems and Directions for Receiving Live Help", diff --git a/sharding/raft23ai/workshops/ocw24-sandbox/manifest.json b/sharding/raft23ai/workshops/ocw24-sandbox/manifest.json index 88b27f452..b87381fd7 100644 --- a/sharding/raft23ai/workshops/ocw24-sandbox/manifest.json +++ b/sharding/raft23ai/workshops/ocw24-sandbox/manifest.json @@ -31,7 +31,7 @@ "filename": "../../advanced-use-cases/advanced-use-cases.md" }, { - "title": "Lab 5: Demo UI Application for Raft Replication", + "title": "Appendix: Demo UI Application for Raft Replication", "description": "Demo UI Application for Raft Replication", "filename": "../../ui-app/raft-demo-ui-app.md" }, diff --git a/sharding/raft23ai/workshops/ocw24-tenancy/manifest.json b/sharding/raft23ai/workshops/ocw24-tenancy/manifest.json index 22d3c411f..647f57de1 100644 --- a/sharding/raft23ai/workshops/ocw24-tenancy/manifest.json +++ b/sharding/raft23ai/workshops/ocw24-tenancy/manifest.json @@ -40,15 +40,15 @@ "filename": "../../advanced-use-cases/advanced-use-cases.md" }, { - "title": "Lab 6: Demo UI Application for Raft Replication", - "description": "Demo UI Application for Raft Replication", - "filename": "../../ui-app/raft-demo-ui-app.md" - }, - { - "title": "Lab 7: Clean up Stack and Instances", + "title": "Lab 6: Clean up Stack and Instances", "description": "Clean up ORM Stack and instances", "filename": "../../cleanup/cleanup.md" }, + { + "title": "Appendix: Demo UI Application for Raft Replication", + "description": "Demo UI Application for Raft Replication", + "filename": "../../ui-app/raft-demo-ui-app.md" + }, { "title": "Need Help?", "description": "Solutions to Common Problems and Directions for Receiving Live Help", diff --git 
a/sharding/raft23ai/workshops/sandbox/manifest.json b/sharding/raft23ai/workshops/sandbox/manifest.json index 88b27f452..b87381fd7 100644 --- a/sharding/raft23ai/workshops/sandbox/manifest.json +++ b/sharding/raft23ai/workshops/sandbox/manifest.json @@ -31,7 +31,7 @@ "filename": "../../advanced-use-cases/advanced-use-cases.md" }, { - "title": "Lab 5: Demo UI Application for Raft Replication", + "title": "Appendix: Demo UI Application for Raft Replication", "description": "Demo UI Application for Raft Replication", "filename": "../../ui-app/raft-demo-ui-app.md" }, diff --git a/sharding/raft23ai/workshops/tenancy/manifest.json b/sharding/raft23ai/workshops/tenancy/manifest.json index 22d3c411f..647f57de1 100644 --- a/sharding/raft23ai/workshops/tenancy/manifest.json +++ b/sharding/raft23ai/workshops/tenancy/manifest.json @@ -40,15 +40,15 @@ "filename": "../../advanced-use-cases/advanced-use-cases.md" }, { - "title": "Lab 6: Demo UI Application for Raft Replication", - "description": "Demo UI Application for Raft Replication", - "filename": "../../ui-app/raft-demo-ui-app.md" - }, - { - "title": "Lab 7: Clean up Stack and Instances", + "title": "Lab 6: Clean up Stack and Instances", "description": "Clean up ORM Stack and instances", "filename": "../../cleanup/cleanup.md" }, + { + "title": "Appendix: Demo UI Application for Raft Replication", + "description": "Demo UI Application for Raft Replication", + "filename": "../../ui-app/raft-demo-ui-app.md" + }, { "title": "Need Help?", "description": "Solutions to Common Problems and Directions for Receiving Live Help",