diff --git a/layouts/partials/menu-footer.html b/layouts/partials/menu-footer.html
index 208facc..be99b02 100644
--- a/layouts/partials/menu-footer.html
+++ b/layouts/partials/menu-footer.html
@@ -19,7 +19,13 @@

- Team
+ Primary Author
+ Gia Hưng
+
+ Updating Author
Từ Nhật Phương
diff --git a/public/1-introduce/index.html b/public/1-introduce/index.html
index 7332500..b24dd60 100644
--- a/public/1-introduce/index.html
+++ b/public/1-introduce/index.html
@@ -1,1721 +1,1674 @@ Introduction :: AWS System Manager
Introduction

    Overview

    +

The lab uses templates, and the duplicated code may contain errors. If you hit an error, refer to the Repository and make the edit there.

    +

    EKS Blueprints

    +

EKS Blueprints is an open-source development framework that abstracts the complexity of cloud infrastructure away from developers and enables them to deploy workloads with ease.

    +

Containerized environments on AWS involve many open-source and AWS products and services, including services for running containers, CI/CD pipelines, logging and metrics, and security enforcement.

    +

    The EKS Blueprints framework packages these tools into a cohesive whole and makes them available to development teams as a service. From an operational perspective, the framework allows companies to unify tools and best practices for securing, scaling, monitoring, and operating container infrastructure into one central platform that developers in an enterprise can use.

    +

    Create Workspace

    +

    Work

    +

EKS Blueprints is built on top of Amazon EKS and its surrounding components. Blueprints are defined following Infrastructure-as-Code best practices using the AWS CDK.

    +

See the documentation for EKS Blueprints for CDK, which is built with the AWS CDK and makes it easy for customers to build and deploy EKS blueprints on Amazon EKS.

    +

The AWS Cloud Development Kit (AWS CDK) is an open-source software development framework for defining cloud application resources using familiar programming languages.
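As a rough preview of what this looks like in practice, here is a minimal TypeScript sketch of a blueprint defined with the AWS CDK; the full stack used in this workshop appears later in the Create Cluster section, and the MetricsServerAddOn name here is only an illustrative example of the package's add-on catalog.

    // minimal-blueprint-sketch.ts (illustrative only)
    import * as cdk from 'aws-cdk-lib';
    import * as blueprints from '@aws-quickstart/eks-blueprints';

    const app = new cdk.App();

    blueprints.EksBlueprint.builder()
      .account(process.env.CDK_DEFAULT_ACCOUNT!)            // target AWS account
      .region(process.env.CDK_DEFAULT_REGION!)              // target region
      .addOns(new blueprints.addons.MetricsServerAddOn())   // illustrative add-on
      .teams()                                              // no teams yet
      .build(app, 'demo-blueprint');                        // creates the cluster stack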

    +

    Benefit

    +

    Customers can leverage EKS Blueprints to:

    +
      +
    • +

      Deploy EKS clusters on any number of accounts and regions following best practices.

      +
    • +
    • +

      Manage cluster configuration, including add-ons that run in each cluster, from a single Git repository.

      +
    • +
    • +

Define teams, their namespaces, and the access permissions associated with them.

      +
    • +
    • +

Create Continuous Delivery (CD) pipelines responsible for deploying your infrastructure. Leverage GitOps-based workflows to onboard and manage workloads for your teams.

      +
    • +
    • +

      Constructs of EKS Blueprints:

      +
        +
      • Bottlerocket
      • +
      • AWS Fargate
      • +
      • Multi-region deployments
      • +
      • Multi-team deployments
      • +
      • Custom cluster deployments
      • +
      +
    • +
    +

    Create Workspace

    + + + + + +
    + +
    + + +
    - "> - - 7.3 Create add-ons - - - - - - - - - - - - - - - - - - - - - - - - - - +
    -
  • - "> - - 8. Deploying Workload with ArgoCD - - - - - - - - -
  • - - - - - - - - - - - - -
  • - - 9. Clean up resources - - - - - - -
  • - - - - - - - - - -
    -

    More

    - -
    - - - -
    -
    - -
    - - -
    - - - - - -
    -
    -
    - -
    -
    - - - - -
    -
    - -
    -
    - - -
    -
    - -
    - -
    - -
    - -

    - - Introduction -

    - - - - - - -

    Overview

    -

    The lab uses templates and code duplication is subject to errors. Refer to Repository to edit if you get an error -

    -

    EKS Blueprints

    -

    EKS Blueprints is an open-source development framework that captures the complexity of cloud infrastructure - from developers and enables them to deploy workloads with ease.

    -

    Containerized environments on AWS include many open-source or AWS products and services, including services - for running containers, CI/CD pipeline, logging/metrics, and security enforcements.

    -

    The EKS Blueprints framework packages these tools into a cohesive whole and makes them available to - development teams as a service. From an operational perspective, the framework allows companies to unify tools - and best practices for securing, scaling, monitoring, and operating container infrastructure into one central - platform that developers in an enterprise can use.

    -

    Create Workspace

    -

    Work

    -

    EKS Blueprints is built on top of Amazon EKS and all the different components. EKS Blueprints are defined - through Infrastructure-as-Code best practices through the AWS CDK.

    -

    See more documentation on EKS blueprints for - CDK built with AWS CDK making it easy for customers to build and deploy EKS blueprints on Amazon EKS. -

    -

    AWS Cloud Development Kit (AWS CDK) is an open-source software - development framework for defining cloud application resources in programming languages and familiar programs. -

    -

    Benefit

    -

    Customers can leverage EKS Blueprints to:

    -
      -
    • -

      Deploy EKS clusters on any number of accounts and regions following best practices.

      -
    • -
    • -

      Manage cluster configuration, including add-ons that run in each cluster, from a single Git repository. -

      -
    • -
    • -

      Define groups, their namespaces, and associated access permissions for your groups.

      -
    • -
    • -

      Create Continuous Delivery (CD) pipelines responsible for deploying your infrastructure. Leverage - GitOps-based workflows to introduce and manage workloads to your team.

      -
    • -
    • -

      Constructs of EKS Blueprints:

      -
        -
      • Bottlerocket
      • -
      • AWS Fargate
      • -
      • Multi-region deployments
      • -
      • Multi-team deployments
      • -
      • Custom cluster deployments
      • -
      -
    • -
    -

    Create Workspace

    - - - - - -
    - -
    - - -
    - - -
    - - +
    + +
    +
    - -
    - -
    -
    -
diff --git a/public/2-prerequiste/2.1-createvpcec2/index.html b/public/2-prerequiste/2.1-createvpcec2/index.html
index a4e9dc7..6bb138e 100644
--- a/public/2-prerequiste/2.1-createvpcec2/index.html
+++ b/public/2-prerequiste/2.1-createvpcec2/index.html
@@ -12,21 +12,21 @@ Create VPC and EC2 Instance :: AWS System Manager
Tool Installation

    Installing kubectl

    +

    Amazon EKS clusters require the kubectl, kubelet, and aws-cli or aws-iam-authenticator tools to enable IAM authentication for your Kubernetes cluster.
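Before installing anything, it is worth confirming that the AWS CLI on the instance is already authenticated as the expected IAM identity; these are standard AWS CLI commands:

    aws --version                  # confirm the AWS CLI is present
    aws sts get-caller-identity    # shows the account ID and the IAM role/user in use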

    +
      +
    1. Install kubectl by using the following commands:
    2. +
    +
    sudo curl --silent --location -o /usr/local/bin/kubectl \
    +   https://amazon-eks.s3.us-west-2.amazonaws.com/1.21.2/2021-07-05/bin/linux/amd64/kubectl
     
    +sudo chmod +x /usr/local/bin/kubectl
    +

    Create Workspace

    +

    For more information, refer to the official AWS guide for installing kubectl.

    +
      +
    1. Update awscli with the following commands:
    2. +
    +
    curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
    +unzip awscliv2.zip
    +sudo ./aws/install
    +

    Create Workspace

    +
      +
    1. Verify the installation by running the following command:
    2. +
    +
    for command in kubectl jq envsubst aws
    +  do
    +    which $command &>/dev/null && echo "$command in path" || echo "$command NOT FOUND"
    +  done
    +

    Create Workspace
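If jq or envsubst is reported as NOT FOUND by the check above, they can be installed before continuing; a hedged sketch assuming an Amazon Linux 2023 instance (dnf-based, matching the git installation later in this workshop), where the gettext package provides envsubst:

    sudo dnf install -y jq gettext   # jq for JSON parsing, gettext provides envsubst
    which jq envsubst                # confirm both are now on the PATH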

    +
      +
    1. Enable kubectl bash completion with the following commands:
    2. +
    +
    kubectl completion bash >>  ~/.bash_completion
    +. /etc/profile.d/bash_completion.sh
    +. ~/.bash_completion
    +

    Create Workspace
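Later, once the EKS cluster from this workshop exists, kubectl is pointed at it through the AWS CLI; the cluster name below is a placeholder for whatever name your blueprint produces:

    aws eks update-kubeconfig --region ap-southeast-1 --name <your-cluster-name>
    kubectl get nodes                # should list the worker nodes once the cluster is up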

    +
    + +
    + +
    + +
    -
  • - "> - - 5.2 Create Pipeline - - - - - - -
  • - - - - - - - - - - - - - - -
  • - - 5.3 Pipeline in Action - - - - - - -
  • - - - - - - - - - - - - - - -
  • - - 5.4 Access Cluster - - - - - - -
  • - - - - - - - - - - - - - - - - - - - - - -
  • - - 6. Manage teams using IaC - - - - - - - - -
  • - - - - - - - - - - - - -
  • - - 7. Add-ons - - - - - - - - -
  • - - - - - - - - - - - - -
  • - - 8. Deploying Workload with ArgoCD - - - - - - - - -
  • - - - - - - - - - - - - -
  • - - 9. Clean up resources - - - - - - -
  • - - - - - - - - - -
    -

    More

    - -
    - - - -
    -
    - -
    - - - - - - - - -
    -
    -
    - -
    -
    - - - - -
    -
    - -
    -
    - - -
    -
    - -
    - -
    - -
    - -

    - - Tool Installation -

    - - - - - - -

    Installing kubectl

    -

    Amazon EKS clusters require the kubectl, kubelet, and - aws-cli or aws-iam-authenticator tools to enable IAM authentication for your - Kubernetes cluster.

    -
      -
    1. Install kubectl by using the following commands:
    2. -
    -
    sudo curl --silent --location -o /usr/local/bin/kubectl \
    -   https://amazon-eks.s3.us-west-2.amazonaws.com/1.21.2/2021-07-05/bin/linux/amd64/kubectl
    -
    -sudo chmod +x /usr/local/bin/kubectl
    -
    -

    Create Workspace

    -

    For more information, refer to the official AWS guide for installing - kubectl.

    -
      -
    1. Update awscli with the following commands:
    2. -
    -
    curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
    -unzip awscliv2.zip
    -sudo ./aws/install
    -
    -

    Create Workspace

    -
      -
    1. Verify the installation by running the following command:
    2. -
    -
    for command in kubectl jq envsubst aws
    -  do
    -    which $command &>/dev/null && echo "$command in path" || echo "$command NOT FOUND"
    -  done
    -
    -

    Create Workspace

    -
      -
    1. Enable kubectl bash completion with the following commands:
    2. -
    -
    kubectl completion bash >>  ~/.bash_completion
    -. /etc/profile.d/bash_completion.sh
    -. ~/.bash_completion
    -
    -

    Create Workspace

    - - - - - -
    - -
    - - -
    - - -
    - - +
    + +
    +
    - -
    - -
    -
    -
diff --git a/public/2-prerequiste/2.4-createrole/index.html b/public/2-prerequiste/2.4-createrole/index.html
index d78fbe1..e93a013 100644
--- a/public/2-prerequiste/2.4-createrole/index.html
+++ b/public/2-prerequiste/2.4-createrole/index.html
@@ -1,1744 +1,1700 @@ Create IAM Role :: AWS System Manager
Create IAM Role

Creating an IAM Role for the EC2 Instance

    +
      +
    1. First, access the AWS Management Console
    2. +
    +
      +
    • Search for and select IAM
    • +
    +

    Create Workspace

    +
      +
    1. In the IAM interface
    2. +
    +
      +
    • Select Roles
    • +
    • Click on Create role
    • +
    +

    Create Workspace

    +
      +
    1. In the Select trusted entity step
    2. +
    +
      +
    • Choose AWS service
    • +
    • Select EC2
    • +
    • Click Next
    • +
    +

    Create Workspace

    +
      +
    1. In the Add permission step
    2. +
    +
      +
    • Search for AdministratorAccess
    • +
    • Select AdministratorAccess
    • +
    • Click Next
    • +
    +

    Create Workspace

    +
      +
    1. Complete the Name section
    2. +
    +
      +
• For Name, enter eks-blueprints-cdk-workshop-admin
Create Workspace
    • +
    +
      +
    1. +

Click Create role
Create Workspace

      +
    2. +
    3. +

You have successfully created an IAM role for the EC2 instance (an equivalent AWS CLI sketch follows these steps).
Create Workspace

      +
    4. +
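For reference, the same role can also be created from a terminal; a hedged AWS CLI sketch using the role name and managed policy from the console steps above (note that, unlike the console flow, this does not automatically create an instance profile for EC2):

    # trust policy that lets EC2 assume the role
    cat > trust-policy.json <<'EOF'
    {
      "Version": "2012-10-17",
      "Statement": [
        { "Effect": "Allow", "Principal": { "Service": "ec2.amazonaws.com" }, "Action": "sts:AssumeRole" }
      ]
    }
    EOF

    aws iam create-role \
      --role-name eks-blueprints-cdk-workshop-admin \
      --assume-role-policy-document file://trust-policy.json

    aws iam attach-role-policy \
      --role-name eks-blueprints-cdk-workshop-admin \
      --policy-arn arn:aws:iam::aws:policy/AdministratorAccess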
    + + + + + +
    + +
    + + +
    - "> - - 7.3 Create add-ons - - - - - - - - - - - - - - - - - - - - - - - - - - +
    -
  • - "> - - 8. Deploying Workload with ArgoCD - - - - - - - - -
  • - - - - - - - - - - - - -
  • - - 9. Clean up resources - - - - - - -
  • - - - - - - - - - -
    -

    More

    - -
    - - - -
    -
    - -
    - - -
    - - - - - -
    -
    -
    - -
    -
    - - - - -
    -
    - -
    -
    - - -
    -
    - -
    - -
    - -
    - -

    - - Create IAM Role -

    - - - - - - -

    Creating IAM Role for Cloud9 Instance

    -
      -
    1. First, access the AWS Management Console
    2. -
    -
      -
    • Search for and select IAM
    • -
    -

    Create Workspace

    -
      -
    1. In the IAM interface
    2. -
    -
      -
    • Select Roles
    • -
    • Click on Create role
    • -
    -

    Create Workspace

    -
      -
    1. In the Select trusted entity step
    2. -
    -
      -
    • Choose AWS service
    • -
    • Select EC2
    • -
    • Click Next
    • -
    -

    Create Workspace

    -
      -
    1. In the Add permission step
    2. -
    -
      -
    • Search for AdministratorAccess
    • -
    • Select AdministratorAccess
    • -
    • Click Next
    • -
    -

    Create Workspace

    -
      -
    1. Complete the Name section
    2. -
    -
      -
    • For Name, enter eks-blueprints-cdk-workshop-admin - Create Workspace -
    • -
    -
      -
    1. -

      Click Create role - Create Workspace -

      -
    2. -
    3. -

      You have successfully created an IAM role for the EC2 Instance - Create Workspace -

      -
    4. -
    - - - - - -
    - -
    - - -
    - - -
    - - +
    + +
    +
    - -
    - -
    -
    -
diff --git a/public/2-prerequiste/2.5-attachrole/index.html b/public/2-prerequiste/2.5-attachrole/index.html
index c892408..1be01f3 100644
--- a/public/2-prerequiste/2.5-attachrole/index.html
+++ b/public/2-prerequiste/2.5-attachrole/index.html
@@ -12,21 +12,21 @@ Attach IAM role :: AWS System Manager
Create EKS Blueprints

    Create EKS Blueprints

    +

    Refer to how to create Github Repository

    +
      +
    1. +

      Access to New repository of Github

      +
        +
      • In the Create a new repository interface, enter my-eks-blueprints for Repository name
      • +
      • Select Public
      • +
      • Select Create repository
      • +
      +
    2. +
    +

    Create Workspace

    +
      +
    1. +

      After creating repository successfully

      +
        +
      • Copy and store HTTPS path of Git repository
      • +
      +
    2. +
    +

    Create Workspace

    +
      +
    1. +

In the GitHub interface, we will now create a personal access token

      +
        +
      • Select Avatar of your Github account
      • +
      • Select Settings
      • +
      +
    2. +
    +

    Create Workspace

    +
      +
    1. Then scroll down and select Developer settings
    2. +
    +

    Create Workspace

    +
      +
    1. +

      In the Developer settings interface

      +
        +
      • Select Personal access tokens
      • +
      • Select Generate new token
      • +
      +
    2. +
    +

    Create Workspace

    +
      +
    1. +

      In the Generate new token interface

      +
        +
• For Note, enter eks-workshop-token
      • +
      • Select the following scope: repo and admin:repo_hook
      • +
      • Select Generate token
      • +
      +
    2. +
    +

Create Workspace
7. Select Generate token

    +

    Create Workspace

    +
      +
    1. +

      Complete Generate token

      +
        +
      • Copy and store token
      • +
      +
    2. +
    +

    Create Workspace

    +

    Refer to how to create Personal Access Token

    +
      +
1. Install git
    2. +
    +
    sudo dnf install git -y
    +git --version
    +

    Create Workspace

    +
      +
1. Clone the repository
    2. +
    +
    git clone https://github.com/<your-alias>/my-eks-blueprints.git
    +

    Create Workspace
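After cloning, it is worth stepping into the repository and confirming that the remote points at your own repo; these are standard git commands:

    cd my-eks-blueprints
    git remote -v      # should print the HTTPS URL copied earlier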

    +
    + +
    -
diff --git a/public/4-createcdkproject/index.html b/public/4-createcdkproject/index.html
index 7b3190f..75f6ba2 100644
--- a/public/4-createcdkproject/index.html
+++ b/public/4-createcdkproject/index.html
@@ -1,1775 +1,1716 @@ Create CDK Project :: AWS System Manager
Create CDK Project

    Create CDK Project

    +
      +
    1. Change the directory to the main repo and install nvm
    2. +
    +
    cd my-eks-blueprints
    +curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.39.1/install.sh | bash
    +export NVM_DIR="$([ -z "${XDG_CONFIG_HOME-}" ] && printf %s "${HOME}/.nvm" || printf %s "${XDG_CONFIG_HOME}/nvm")"
    +[ -s "$NVM_DIR/nvm.sh" ] && \. "$NVM_DIR/nvm.sh"
    +source ~/.bashrc
    +nvm -v
    +

    Create Workspace

    +
      +
    1. Use Node.js version 18
    2. +
    +
    nvm install v18
    +nvm use v18
    +node -v
    +npm -v
    +

    Create Workspace

    +

You need to use Node.js version 14.15.0 or higher to use CDK. For more information, see here.
Create Workspace

    +
    +
      +
    1. Install TypeScript and CDK version 2.147.3
    2. +
    +
    npm -g install typescript
    +npm install -g aws-cdk@2.147.3
    +cdk --version
    +

    Create Workspace

    +
      +
    1. Initialize a new CDK project using TypeScript
    2. +
    +
    cdk init app --language typescript
    +

    Create Workspace

    +
      +
    1. In the VSCode interface +
        +
      • View the sidebar
      • +
      • Examine the structure of the project
      • +
• lib/: This is where the stacks or constructs of your CDK project are defined.
Create Workspace
      • +
• bin/my-eks-blueprints.ts: This is the entry point of the CDK project. It will load the constructs defined in lib/ (a sketch of the generated layout follows this list).
Create Workspace
      • +
      +
    2. +
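For orientation, the layout generated by cdk init app --language typescript typically looks roughly like this (exact files may vary slightly between CDK versions):

    my-eks-blueprints/
    ├── bin/my-eks-blueprints.ts          # entry point of the CDK app
    ├── lib/my-eks-blueprints-stack.ts    # stacks/constructs are defined here
    ├── test/                             # sample unit tests
    ├── cdk.json                          # tells the CDK CLI how to run the app
    ├── package.json
    └── tsconfig.json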
    + +

    You can read more about CDK.

    +
    + +
      +
    1. Set the AWS_DEFAULT_REGION and ACCOUNT_ID
    2. +
    +
    export AWS_DEFAULT_REGION=ap-southeast-1
    +export ACCOUNT_ID=212454837823
    +
    +

    Note: Remember to replace ACCOUNT_ID with your actual ID for the lab.
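If you prefer not to look the ID up by hand, it can be pulled from the AWS CLI; a small sketch using standard commands:

    export AWS_DEFAULT_REGION=ap-southeast-1
    export ACCOUNT_ID=$(aws sts get-caller-identity --query Account --output text)
    echo $ACCOUNT_ID    # should print your 12-digit account ID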

    +
    + +

    Create Workspace

    +
      +
    1. Initialize the bootstrap account
    2. +
    +

    To perform bootstrapping, run:

    +
# region comes from the AWS_DEFAULT_REGION variable exported earlier
cdk bootstrap --trust=$ACCOUNT_ID \
  --cloudformation-execution-policies arn:aws:iam::aws:policy/AdministratorAccess \
  aws://$ACCOUNT_ID/$AWS_DEFAULT_REGION

    On successful bootstrapping, you will see:

    +
    Environment aws://212454837823/ap-southeast-1 bootstrapped.
    +

    Create Workspace

    +
      +
    1. Install the eks-blueprints and dotenv modules for the project
    2. +
    +
    npm i @aws-quickstart/eks-blueprints dotenv
    +

    Create Workspace

    +
    + +
    -
diff --git a/public/404.html b/public/404.html
index 80e9d5a..f36e9fe 100644
--- a/public/404.html
+++ b/public/404.html
@@ -9,15 +9,15 @@ 404 Page not found
Create Cluster

    Create Cluster

    +

In this section, we will deploy our first EKS cluster using the eks-blueprints package, which is published as an npm module.

    +

    You can learn more about Amazon EKS Blueprints for CDK

    +
      +
    1. +

      We edit the main file of lib/my-eks-blueprints-stack.ts:

      +
        +
      • Open the file lib/my-eks-blueprints-stack.ts
      • +
      • See the sample code in the file
      • +
      +
    2. +
    +

    Deployment Pipeline

    +
      +
    1. Complete the lib/my-eks-blueprints-stack.ts file by pasting (replacing) the following code into the file:
    2. +
    +
    // lib/my-eks-blueprints-stack.ts
    +import * as cdk from 'aws-cdk-lib';
    +import { Construct } from 'constructs';
    +import * as blueprints from '@aws-quickstart/eks-blueprints';
    +import { KubernetesVersion } from 'aws-cdk-lib/aws-eks';
     
    +export default class ClusterConstruct extends Construct {
    +  constructor(scope: Construct, id: string, props?: cdk.StackProps) {
    +    super(scope, id);
     
    +    const account = props?.env?.account!;
    +    const region = props?.env?.region!;
     
    +    const blueprint = blueprints.EksBlueprint.builder()
    +      .account(account)
    +      .region(region)
    +      .clusterProvider(
    +        new blueprints.GenericClusterProvider({
    +          version: 'auto'
    +        })
    +      )
    +      .addOns()
    +      .teams()
    +      .build(scope, id + "-stack");
    +  }
    +}
    +

    Deployment Pipeline

    +
      +
    1. Open the file bin/my-eks-blueprints.ts to review the sample code.
    2. +
    +

    Deployment Pipeline

    +
      +
    1. +

      In this file, we create a CDK Construct, which is a building block of CDK representing what is necessary to create components of AWS Cloud.

      +
        +
      • +

        In our case, the component is an EKS cluster blueprint placed in provided account, region, add-ons, teams (which we haven’t assigned yet) and all other resources necessary to create the blueprint (e.g., VPC, subnet, etc.). The build() command at the end initializes the cluster blueprint.

        +
      • +
      • +

        To actually make a construct usable in a CDK project, we need to add it to our entrypoint.

        +
      • +
      • +

        Replace the contents of bin/my-eks-blueprints.ts with the following code block.

        +
      • +
      +
    2. +
    +
    // bin/my-eks-blueprints.ts
    +import * as cdk from 'aws-cdk-lib';
    +import ClusterConstruct from '../lib/my-eks-blueprints-stack';
    +import * as dotenv from 'dotenv';
     
    +const app = new cdk.App();
    +const account = process.env.CDK_DEFAULT_ACCOUNT!;
    +const region = process.env.CDK_DEFAULT_REGION;
    +const env = { account, region }
     
    +new ClusterConstruct(app, 'cluster', { env });
    +

    Deployment Pipeline

    +
      +
    1. Create a new .env file.
    2. +
    +

    Deployment Pipeline

    +
      +
    1. Add environment variables:
    2. +
    +
    CDK_DEFAULT_ACCOUNT=XXXXX
    +CDK_DEFAULT_REGION=XXXX
    +

    Deployment Pipeline

    +

    Please replace CDK_DEFAULT_ACCOUNT and CDK_DEFAULT_REGION with your own values.

    +
    +
      +
    1. Import Construct to make it available, then use the CDK app to initialize a new object of the CDK Construct we imported. Check CDK:
    2. +
    +
    cdk list
    +
      +
    • If there are no issues, you should see the following result:
    • +
    +
    cluster-stack
    +

    Deployment Pipeline

    +

    As you can see, we can leverage EksBlueprint to define our cluster easily using CDK.

    +

    Instead of deploying a single cluster, we will utilize the blueprint generator to add a deployment pipeline that can handle all updates for our infrastructure across different environments.
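For comparison, if you only wanted this single cluster and no pipeline, the standard CDK commands would deploy it directly; this is optional and not part of the lab flow:

    cdk synth cluster-stack     # render the CloudFormation template locally
    cdk deploy cluster-stack    # deploy just the cluster stack (optional)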

    +
    + +
    + +
    + +
    - - -
diff --git a/public/5-deploymentpipeline/5.2-accesscluster/index.html b/public/5-deploymentpipeline/5.2-accesscluster/index.html
index 432a377..76084a5 100644
--- a/public/5-deploymentpipeline/5.2-accesscluster/index.html
+++ b/public/5-deploymentpipeline/5.2-accesscluster/index.html
@@ -1,1902 +1,1824 @@ Create Pipeline :: AWS System Manager
Create Pipeline

      Create Pipeline

      +

      Setting up AWS Secrets Manager

      +

We need to add a GitHub Personal Access Token to AWS Secrets Manager so that AWS CodePipeline can integrate with GitHub; the pipeline relies on a webhook to run successfully.

      +

      You can refer to more about how to create GitHub Personal Access Token

      +
        +
      1. +

        After creating GitHub Personal Access Token

        +
          +
        • We return to VSCode Terminal
        • +
        • Create Secret in Secrets Manager with the name eks-workshop-token
        • +
        +
      2. +
      +
      aws secretsmanager create-secret --name "eks-workshop-token" --description "github access token" --secret-string "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"
      +

      Note: remember to replace your secret-string with the token you created.

      +

      Create Workspace

      +
        +
      1. +

        We can create a new CodePipelineStack resource by creating a new CDK Construct in the lib/ directory, then importing Construct into the main entry point file.

        +
          +
        • Create new construct file.
        • +
        +
      2. +
      +
      touch lib/pipeline.ts
      +

      Create Workspace

      +
        +
      1. Once the file is created, open the file and add the following code to create pipeline construct
      2. +
      +
      // lib/pipeline.ts
      +import * as cdk from 'aws-cdk-lib';
      +import { Construct } from 'constructs';
      +import * as blueprints from '@aws-quickstart/eks-blueprints';
      +import { KubernetesVersion } from 'aws-cdk-lib/aws-eks';
       
      +export default class PipelineConstruct extends Construct {
      +  constructor(scope: Construct, id: string, props?: cdk.StackProps) {
      +    super(scope, id)
       
      +    const account = props?.env?.account!;
      +    const region = props?.env?.region!;
       
      +    const blueprint = blueprints.EksBlueprint.builder()
      +      .account(account)
      +      .region(region)
      +      .clusterProvider(
      +        new blueprints.GenericClusterProvider({
      +          version: 'auto'
      +        })
      +      )
      +      .addOns()
      +      .teams();
       
      +    blueprints.CodePipelineStack.builder()
      +      .name("eks-blueprints-workshop-pipeline")
      +      .owner("your-github-username")
      +      .repository({
      +          repoUrl: 'your-repo-name',
      +          credentialsSecretName: 'github-token',
      +          targetRevision: 'main'
      +      })
      +      .build(scope, id+'-stack', props);
      +  }
      +}
      +

Make the following configuration changes (a filled-in example with the lab values follows this list):

      +
        +
      • name, we enter eks-blueprints-workshop-pipeline or the name pipeline you want.
      • +
      • owner, enter your github name. (in the lab, enter AWS-First-Cloud-Journey)
      • +
      • repoUrl, enter the name of the repo. (In the lab, enter my-eks-blueprints)
      • +
      • credentialsSecretName, enter your secret (In the lab, enter eks-workshop-token)
      • +
      • targetRevision, enter revision main
      • +
      +
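Putting the lab values from this list into the pipeline definition gives roughly the following (replace the owner with your own GitHub account if you are not using the lab account):

    blueprints.CodePipelineStack.builder()
      .name("eks-blueprints-workshop-pipeline")
      .owner("AWS-First-Cloud-Journey")            // your GitHub user or organization
      .repository({
          repoUrl: 'my-eks-blueprints',
          credentialsSecretName: 'eks-workshop-token',
          targetRevision: 'main'
      })
      .build(scope, id + '-stack', props);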

      Create Workspace

      +
        +
      1. +

        To make sure we can access Construct, we need to import and initialize a new construct.

        +
          +
        • Change the content of the file bin/my-eks-blueprints.ts
        • +
        +
      2. +
      +
      // bin/my-eks-blueprints.ts
      +// bin/my-eks-blueprints.ts
      +import * as cdk from 'aws-cdk-lib';
      +import ClusterConstruct from '../lib/my-eks-blueprints-stack';
      +import * as dotenv from 'dotenv';
      +import PipelineConstruct from '../lib/pipeline'; // IMPORT OUR PIPELINE CONSTRUCT
       
      +dotenv.config();
       
      +const app = new cdk.App();
      +const account = process.env.CDK_DEFAULT_ACCOUNT!;
      +const region = process.env.CDK_DEFAULT_REGION;
      +const env = { account, region }
       
      +new ClusterConstruct(app, 'cluster', { env });
      +new PipelineConstruct(app, 'pipeline', { env });
      +

      Create Workspace

      +
        +
      1. Do a list check pipeline
      2. +
      +
      cdk list
      +

      Create Workspace

      +
        +
1. Add stages. In this step, we add stages to the pipeline (the lab uses a dev stage; you can add more stages for test and production in other regions).
      2. +
      +
      // lib/pipeline.ts
      +import * as cdk from 'aws-cdk-lib';
      +import { Construct } from 'constructs';
      +import * as blueprints from '@aws-quickstart/eks-blueprints';
      +import { KubernetesVersion } from 'aws-cdk-lib/aws-eks';
       
      +export default class PipelineConstruct extends Construct {
      +  constructor(scope: Construct, id: string, props?: cdk.StackProps) {
      +    super(scope, id)
       
      +    const account = props?.env?.account!;
      +    const region = props?.env?.region!;
       
      +    const blueprint = blueprints.EksBlueprint.builder()
      +      .account(account)
      +      .region(region)
      +      .clusterProvider(
      +        new blueprints.GenericClusterProvider({
      +          version: 'auto'
      +        })
      +      )
      +      .addOns()
      +      .teams();
       
      +    blueprints.CodePipelineStack.builder()
      +      .name("eks-blueprints-workshop-pipeline")
      +      .owner("your-github-username")
      +      .repository({
      +          repoUrl: 'your-repo-name',
      +          credentialsSecretName: 'github-token',
      +          targetRevision: 'main'
      +      })
      +      // WE ADD THE STAGES IN WAVE FROM THE PREVIOUS CODE
      +      .wave({
      +        id: "envs",
      +        stages: [
      +          { id: "dev", stackBuilder: blueprint.clone('ap-southeast-1') }
      +        ]
      +      })
      +      .build(scope, id + '-stack', props);
      +  }
      +}
      +
        +
      • +

Use the blueprints.StackStage class, which the builder supports, to define our stages via .stage().

        +
      • +
      • +

        Use .wave support for parallel deployment.

        +
      • +
      • +

        In the lab, we are deploying a cluster.

        +
      • +
      • +

If you are deploying multiple clusters, simply add more stages to the .wave list to structure the different deployment stages of your pipeline (different add-ons, regions, and so on); see the sketch after this list.

        +
      • +
      • +

Our stack deploys an EKS cluster for the dev environment; CodePipeline deploys it to the ap-southeast-1 region.

        +
      • +
      +
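As a sketch of what a multi-environment wave could look like, building on the code above (the extra stage and region here are illustrative and not part of the lab):

    .wave({
      id: "envs",
      stages: [
        { id: "dev",  stackBuilder: blueprint.clone('ap-southeast-1') },
        { id: "test", stackBuilder: blueprint.clone('us-west-2') }   // illustrative extra stage
      ]
    })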

      Create Workspace

      +
        +
      1. Perform pipeline list recheck
      2. +
      +
      cdk list
      +

      The following results:

      +
      cluster-stack
      +pipeline-stack
      +pipeline-stack/dev/dev-blueprint
      +

      Create Workspace

      +
      + +
      -
diff --git a/public/5-deploymentpipeline/5.3-pipelineinaction/index.html b/public/5-deploymentpipeline/5.3-pipelineinaction/index.html
index ca97993..5673268 100644
--- a/public/5-deploymentpipeline/5.3-pipelineinaction/index.html
+++ b/public/5-deploymentpipeline/5.3-pipelineinaction/index.html
@@ -1,1850 +1,1757 @@ Pipeline in Action :: AWS System Manager
Pipeline in Action

    Pipeline in Action

    +

With the constructs modified and saved:

    +
      +
1. Add, commit, and push your changes to the remote repository
    2. +
    +
    git add .
    +git commit -m "Setting up EKS Blueprints deployment pipeline"
    +git branch -M main
    +git config credential.helper store
    +git push https://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@github.com/AWS-First-Cloud-Journey/my-eks-blueprints.git
    +
      +
    • +

Since this is your first time pushing to the remote GitHub repository, you will be prompted for your GitHub credentials. Use your GitHub password (if 2FA is not enabled) or your GitHub token (if 2FA is enabled). In the lab we use a GitHub token, because username/password authentication is no longer accepted by GitHub.

      +
    • +
    • +

If you forget the secret, you can view it in AWS Secrets Manager.

      +
    • +
    • +

      The credential.helper call is used to store your credentials so you don’t have to keep entering them every time you make a change.

      +
    • +
    +

    Note: git push uses the accompanying token https://[token]@github.com/[github_name]/[repo_name].git

    +

    Create Workspace

    +
      +
1. Check that the changes have been pushed to the repository.
    2. +
    +

    Create Workspace
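One quick way to check from the terminal, using plain git commands:

    git fetch origin
    git log --oneline -1 origin/main   # should show your latest commit on the remote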

    +
      +
    1. After pushing to the repository, we deploy the pipeline stack.
    2. +
    +
    cdk deploy pipeline-stack
    +

    Create Workspace

    +
      +
    1. +

      You will be prompted to confirm the pipeline stack deployment.

      +
        +
      • Type y and then press enter.
      • +
      • After successful deployment will display Stack ARN
      • +
      +
    2. +
    +

    Create Workspace

    +
      +
    1. Return to the AWS Management Console interface
    2. +
    +
      +
    • Find and select CodePipeline
    • +
    +

    Create Workspace

    +
      +
    1. You will see the rollout in progress.
    2. +
    +

    Create Workspace

    +
      +
    1. +

Wait about 30 minutes; the pipeline shows Succeeded

      +
        +
      • +

CodePipeline will pick up the changes made in the remote repository and the pipeline will start building. Updates (adding, removing, or fixing code) can be seen in the CodePipeline Console to verify that the stages are built correctly.

        +
      • +
      • +

        Select the pipeline name.

        +
      • +
      +
    2. +
    +

    Create Workspace

    +
      +
    1. +

      See Source and Build steps

      +
        +
• Source: the Source stage runs an action to retrieve code changes when the pipeline is run manually or when a webhook event is sent from the source provider. In our case, every time we make a code change in the my-eks-blueprints repository and push it to the remote repo, an event is sent to the pipeline (authenticated with the GitHub personal access token) to trigger a new pipeline execution.
      • +
      • Build: build stage allows you to run test and build actions as part of the pipeline.
      • +
      • During Build, the pipeline runs scripts to make sure everything works as intended.
      • +
      • This includes npm package installations, version checking and CDK synth.
      • +
      • Any error in the configuration from your repo may make this stage fail.
      • +
      • You can see a list of commands run in this action by clicking Details in actions (below its name and AWS Codebuild).
      • +
      +
    2. +
    +

    Create Workspace

    +
      +
    1. +

      Followed by UpdatePipeline and Assets

      +
        +
• UpdatePipeline: an extra build stage that checks whether the pipeline itself needs updating. For example, if the code is changed to include additional stages, UpdatePipeline runs the build and reconfigures the pipeline to add those stages. It also prepares the Assets needed to run the stages.
      • +
• Assets: a series of build actions that handle the assets needed to deploy the EKS cluster. Assets, in the CDK context, are local files, directories, or Docker images that can be packaged into CDK libraries and applications. These artifacts are what make our CDK application work: they contain the parameters and configurations used to deploy the necessary resources, i.e. the cluster provider, Kubernetes resources in the cluster, IAM, and add-ons with Helm charts. Assets are stored on AWS as Lambda functions and as files and executables in the S3 artifacts bucket.
      • +
      +
    2. +
    +

    Create Workspace

    +
      +
    1. Finally dev (Prepare and Deploy)
    2. +
    +
• Envs (our wave): a wave is an implementation option for pipelines that provides multiple stages (or environments) in parallel. Because the CDK aggregates code into a CloudFormation template, you can view it in the stack deployment management console as a CloudFormation template.
    +
    +

    Create Workspace

    +

If you encounter an error during the pipeline execution, click to view the details.
Create Workspace

This error indicates that the queue limit has been exceeded.
Create Workspace

You can retry it.
Create Workspace

And finally, it should run successfully.
Create Workspace

    +
    +
    + +
    + +
    + +

    Pipeline in Action

    -

    The constructs have been modified and saved.

    -
      -
    1. Add, commit, and push your changes to the remote repository:
    2. -
    -
    git add .
    -git commit -m "Setting up EKS Blueprints deployment pipeline"
    -git branch -M main
    -git config credential.helper store
    -git push https://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@github.com/AWS-First-Cloud-Journey/my-eks-blueprints.git
    -
    -
      -
    • -

      Since this is your first time pushing to GitHub's remote repository, Cloud9 will prompt you to enter your GitHub credentials. You will need to use your GitHub password (if 2FA is not enabled) or your GitHub token (if 2FA is enabled). In this lab, we use a GitHub token because username and password authentication is no longer supported.

      -
    • -
    • -

      If you forget the secret, you can view it in AWS Secrets Manager.

      -
    • -
    • -

      The credential.helper call is used to store your credentials so you don’t have to keep entering them every time you make a change.

      -
    • -
    -

    Note: git push uses the accompanying token: https://[token]@github.com/[github_name]/[repo_name].git

    -

    Create Workspace -

    -
      -
    1. Check whether the code has been pushed to the repository.
    2. -
    -

    Create Workspace -

    -
      -
    1. After pushing to the repository, we deploy the pipeline stack.
    2. -
    -
    cdk deploy pipeline-stack
    -
    -

    Create Workspace -

    -
      -
    1. -

      You will be prompted to confirm the pipeline stack deployment.

      -
        -
      • Type y and then press enter.
      • -
      • After a successful deployment, the Stack ARN will be displayed.
      • -
      -
    2. -
    -

    Create Workspace -

    -
      -
    1. Return to the AWS Management Console interface
    2. -
    -
      -
    • Find and select CodePipeline
    • -
    -

    Create Workspace -

    -
      -
    1. You will see the rollout in progress.
    2. -
    -

    Create Workspace -

diff --git a/public/5-deploymentpipeline/5.4-accessingthecluster/index.html index 13f8ad9..39e0966 100644

    + + Build Deployment Pipeline +

    + + +

    Building Deployment Pipeline

    +

    In this section, we’ll look at how to set up a deployment pipeline to automate updates for our cluster. While it’s convenient to leverage the CDK command-line tool to deploy your first stack, it’s a good idea to set up automated pipelines responsible for deploying and updating your EKS infrastructure. We will use the framework’s CodePipelineStack to deploy environments in different regions.

    +

    CodePipelineStack is a construct for easy continuous delivery of AWS CDK applications. Whenever you check in the source code of an AWS CDK application on GitHub, the stack can automatically build, test, and deploy the new version.

    +

    CodePipelineStack updates itself: if you add stages or application stacks, the pipeline will automatically reconfigure itself to deploy those new stages or stacks.
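    As a preview, this is the shape of the pipeline declaration used later in this lab’s lib/pipeline.ts (owner and repoUrl are placeholders; blueprint, scope, id, and props come from that file):

    // Minimal sketch of the CodePipelineStack builder used later in lib/pipeline.ts.
    blueprints.CodePipelineStack.builder()
      .name("eks-blueprints-workshop-pipeline")
      .owner("your-github-username")
      .repository({
          repoUrl: 'your-repo-name',
          credentialsSecretName: 'github-token',
          targetRevision: 'main'
      })
      .wave({
        id: "envs",
        stages: [
          { id: "dev", stackBuilder: blueprint.clone('ap-southeast-1') }
        ]
      })
      .build(scope, id + '-stack', props);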

    +

    Content

    +
      +
    1. Create Cluster
    2. +
    3. Create Pipeline
    4. +
    5. Pipeline in Action
    6. +
    7. Cluster Access
    8. +
    +
    + +
    -
diff --git a/public/6-onboardteams/6.1-definingteams/index.html index 80d21c0..ed12c85 100644
    + +
    + +
    + +

    + + Setting up teams +

    + + + + + + +

    Set up teams

    +
      +
    1. Create folders for the teams, including application and platform.
    2. +
    +
    mkdir teams && cd teams && mkdir platform-team && mkdir application-team
    +

    Deployment Pipeline

    +
      +
    1. We’ll start by creating an IAM user for platform.
    2. +
    +
    aws iam create-user --user-name platform
    +

    Deployment Pipeline

    +
      +
    1. Create a file index.ts, used to create resources for platform-team
    2. +
    +
    cd platform-team && touch index.ts
    +

    Deployment Pipeline

    +
      +
    1. Next we add the following code block to index.ts
    2. +
    +
    import { ArnPrincipal } from "aws-cdk-lib/aws-iam";
    +import { PlatformTeam } from '@aws-quickstart/eks-blueprints';
     
    +export class TeamPlatform extends PlatformTeam {
    +    constructor(accountID: string) {
    +        super({
    +            name: "platform",
    +            users: [new ArnPrincipal(`arn:aws:iam::${accountID}:user/platform`)]
    +        })
    +    }
    +}
    +

    Explanation of the code block:

    +
      +
    • +

      The code block above imports the ArnPrincipal construct from the aws-cdk-lib/aws-iam module so that users can be added to the platform team with their IAM credentials.

      +
    • +
    • +

      The best practice is to extend the PlatformTeam class so that our platform/infrastructure people can manage users and roles, while developers can simply create teams using the provided arguments.

      +
    • +
    • +

      Then we pass in two arguments: the team name and the list of IAM users.

      +
    • +
    +

    Deployment Pipeline
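    For example, the users list can hold more than one principal; a minimal sketch of a variation (the second user name is hypothetical and is not created in this lab):

    // Hypothetical variation of teams/platform-team/index.ts with two users.
    export class TeamPlatform extends PlatformTeam {
        constructor(accountID: string) {
            super({
                name: "platform",
                users: [
                    new ArnPrincipal(`arn:aws:iam::${accountID}:user/platform`),
                    new ArnPrincipal(`arn:aws:iam::${accountID}:user/platform-2`) // hypothetical second user
                ]
            })
        }
    }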

    +

    Application Team

    +
      +
    1. Create IAM user for the application team.
    2. +
    +
    aws iam create-user --user-name application
    +

    Deployment Pipeline

    +
      +
    1. Change directory path and create file index.ts
    2. +
    +
    cd ../application-team && touch index.ts
    +

    Deployment Pipeline

    +
      +
    1. Add code to teams/application-team/index.ts file
    2. +
    +
    import { ArnPrincipal } from 'aws-cdk-lib/aws-iam';
    +import { ApplicationTeam } from '@aws-quickstart/eks-blueprints';
     
     
    +export class TeamApplication extends ApplicationTeam {
    +    constructor(name: string, accountID: string) {
    +        super({
    +            name: name, 
    +            users: [new ArnPrincipal(`arn:aws:iam::${accountID}:user/application`)] 
    +        });
    +    }
    +}
    +

    The Application Team template will do the following things:

    +
      +
    • Create a namespace
    • +
    • Register quotas
    • +
    • Register IAM users for cross-account access
    • +
    • Create a shared role to access the cluster. Alternatively, an existing role can be provisioned.
    • +
    • Register the role/user provided in the awsAuth map for kubectl and dashboard access to the cluster and namespace.
    • +
    +

    Deployment Pipeline
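    As a preview of how these team classes are consumed (the full lib/pipeline.ts appears in the next section), assuming account and region are derived from the stack props as before:

    // Preview: both team classes are registered on the blueprint builder.
    import { TeamPlatform, TeamApplication } from '../teams';

    const blueprint = blueprints.EksBlueprint.builder()
      .account(account)
      .region(region)
      .addOns()
      .teams(new TeamPlatform(account), new TeamApplication('burnham', account));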

    +
      +
    1. We will create an additional index.ts file in the teams folder
    2. +
    +
    cd .. && touch index.ts
    +

    Deployment Pipeline

    +
      +
    1. In the file index.ts add the following code:
    2. +
    +
    export { TeamPlatform } from './platform-team';
    +export { TeamApplication } from './application-team';
    +

    Deployment Pipeline

    +
    + +
    + +
    + +
    -
diff --git a/public/6-onboardteams/6.2-onboardingteams/index.html index a0e69aa..046ef5a 100644
      + +

      + + Configuring teams +

      + - - - - - - - - - - - - -
    • +

      Configure teams

      +
        +
      1. +

        In the previous section, we created both the Application and Platform team templates.

        +
          +
        • Add the following code to the template // lib/pipeline.ts
        • +
        +
      2. +
      +
      // lib/pipeline.ts
      +import * as cdk from 'aws-cdk-lib';
      +import { Construct } from 'constructs';
      +import * as blueprints from '@aws-quickstart/eks-blueprints';
      +import { KubernetesVersion } from 'aws-cdk-lib/aws-eks';
       
      +import { TeamPlatform, TeamApplication } from '../teams'; // HERE WE IMPORT TEAMS
       
      +export default class PipelineConstruct extends Construct {
      +  constructor(scope: Construct, id: string, props?: cdk.StackProps) {
      +    super(scope, id)
       
      +    const account = props?.env?.account!;
      +    const region = props?.env?.region!;
       
      +    const blueprint = blueprints.EksBlueprint.builder()
      +      .account(account)
      +      .region(region)
      +      .clusterProvider(
      +        new blueprints.GenericClusterProvider({
      +          version: 'auto',
      +        })
      +      )
      +      .addOns()
      +      .teams(new TeamPlatform(account), new TeamApplication('burnham',account)); // HERE WE USE TEAMS
       
      +    blueprints.CodePipelineStack.builder()
      +      .name("eks-blueprints-workshop-pipeline")
      +      .owner("your-github-username")
      +      .repository({
      +          repoUrl: 'your-repo-name',
      +          credentialsSecretName: 'github-token',
      +          targetRevision: 'main'
      +      })
      +      .wave({
      +        id: "envs",
      +        stages: [
      +          { id: "dev", stackBuilder: blueprint.clone('ap-southeast-1') }
      +        ]
      +      })
      +      .build(scope, id + '-stack', props);
      +  }
      +}
      +

      Deployment Pipeline
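      Side note: the KubernetesVersion import in the snippet above is only needed if you pin the control-plane version instead of using 'auto'; the add-ons section later uses that form. A minimal sketch of the alternative cluster provider:

      // Alternative to version: 'auto' — pin the Kubernetes version explicitly
      // (this form appears again in the add-ons section).
      .clusterProvider(
        new blueprints.GenericClusterProvider({
          version: KubernetesVersion.V1_29,
        })
      )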

      +
        +
      1. Push the changes to the remote GitHub repository
      2. +
      +
      cd ..
      +git add .
      +git commit -m "adding teams"
      +git push https://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@github.com/AWS-First-Cloud-Journey/my-eks-blueprints.git
      +

      Deployment Pipeline

      +
        +
      1. Wait about 15 minutes for the pipeline to show Succeeded
      2. +
      +

      Deployment Pipeline

      +
        +
      1. +

        Successfully deployed
        Deployment Pipeline

        +
      2. +
      3. +

        Perform a test:

        +
      4. +
      +
      kubectl get ns
      +
        +
      • You will notice that team-burnham is among the namespaces
      • +
      +

      Deployment Pipeline

      +
      + +
      -
diff --git a/public/6-onboardteams/6.3-clusteraccessforteams/index.html index 232c83b..004dd60 100644
    + +

    + + Team Access +

    + + + + + + +

    Team Access

    +
      +
    1. The Burnham team only has access to resources in their dedicated namespace. This also demonstrates how we can use Kubernetes-native constructs to ensure that only the users assigned to the team-burnham namespace can access those resources. This is known as soft multi-tenancy: you use Kubernetes constructs like namespaces, quotas, and network policies to prevent applications deployed in different namespaces from accessing each other.
    2. +
    +
    kubectl describe role -n team-burnham
    +

    Deployment Pipeline

    +

    You can see that Team Burnham can only get and list a set of application-focused Kubernetes resources (pods, daemonsets, deployments, replicasets, statefulsets, and jobs). You’ll notice that they don’t have permission to create or delete resources in their respective namespaces.

    +
      +
    1. Retrieve the created role for Team Burnham by running the following command:
    2. +
    +
    aws cloudformation describe-stacks --stack-name dev-dev-blueprint | jq -r '.Stacks[0].Outputs[] | select(.OutputKey|match("burnhamteamrole"))| .OutputValue'
    +

    Deployment Pipeline

    +
      +
    1. Create console login credentials for the application user
    2. +
    +
    aws iam create-login-profile --user-name application --password Ekscdkworkshop123!
    +

    Deployment Pipeline

    +
      +
    1. +

      Go to the AWS sign-in page

      +
        +
      • Perform login with IAM user
      • +
      • Enter your Account ID
      • +
      • Select Next
      • +
      +
    2. +
    +

    Deployment Pipeline

    +
      +
    1. +

      Next,

      +
        +
      • Enter IAM user name as application
      • +
      • Enter password just created
      • +
      • Select Sign in
      • +
      +
    2. +
    +

    Deployment Pipeline

    +
      +
    1. Complete the login
    2. +
    +

    Deployment Pipeline

    +
      +
    1. +

      In the AWS interface

      +
        +
      • Select Switch role
      • +
      +
    2. +
    +

    Deployment Pipeline

    +
      +
    1. +

      In the Switch Role interface

      +
        +
      • Account, enter your Account ID
      • +
      • Then enter the role retrieved earlier
      • +
      • Select Switch Role
      • +
      +
    2. +
    +

    Deployment Pipeline

    +
      +
    1. Complete Switch Role
    2. +
    +

    Deployment Pipeline

    +
      +
    1. Access the EKS console
    2. +
    +

    Deployment Pipeline

    +
      +
    1. Here you will see an error message stating that the Team Burnham user is NOT allowed to list deployments in all namespaces.
    2. +
    +

    Deployment Pipeline

    +

    Deployment Pipeline

    +
      +
    1. When you select team-burnham as the namespace, you will see the forbidden message disappear. This means that you are now viewing Team Burnham's workloads (the list is empty because no workloads have been deployed yet).
    2. +
    + + + + + +
    + +
    + + +
    - "> - - 7.3 Create add-ons - - - - - - - - - - - - - - - - - - - - - - - - - - +
    -
  • - "> - - 8. Deploying Workload with ArgoCD - - - - - - - - -
  • - - - - - - - - - - - - -
  • - - 9. Clean up resources - - - - - - -
  • - - - - - - - - - -
    -

    More

    - -
    - - - -
    -
    - -
    - - -
    - - - - - -
    -
    -
    - -
    -
    - - - - -
    -
    - -
    -
    - - -
    -
    - -
    - -
    - -
    - -

    - - Team Access -

    - - - - - - -

    Group Access

    -
      -
    1. Burnham Team, only having access to resources in their dedicated namespace along with a - demonstration of how we can use Kubernative native construct to ensure that only people - used in team-burnham namespace can access those resources. This is also known as - soft multi-tenancy you are using Kubernetes constructs like - namespaces, quotas, and network policies to prevent - applications from being accessed. implementations in different namespaces communicate with - each other.
    2. -
    -
    kubectl describe role -n team-burnham
    -
    -

    Deployment Pipeline -

    -

    You can see that Team Burnham can only get and list a set - of application-focused Kubernetes resources (pods, daemonsets, deployments, replicasets, - statefulsets, and jobs). You’ll notice that they don’t have permission to create or delete resources in their - respective namespaces.

    -
      -
    1. Retrieve the created role for Team burnham by running the following command:
    2. -
    -
    aws cloudformation describe-stacks --stack-name dev-dev-blueprint | jq -r '.Stacks[0].Outputs[] | select(.OutputKey|match("burnhamteamrole"))| .OutputValue'
    -
    -

    Deployment Pipeline -

    -
      -
    1. Create credentials for application
    2. -
    -
    aws iam create-login-profile --user-name application --password Ekscdkworkshop123!
    -
    -

    Deployment Pipeline -

    -
      -
    1. -

      Go to AWS

      -
        -
      • Perform login with IAM user
      • -
      • Enter your Account ID
      • -
      • Select Next
      • -
      -
    2. -
    -

    Deployment Pipeline -

    -
      -
    1. -

      Next,

      -
        -
      • Enter IAM user name as application
      • -
      • Enter password just created
      • -
      • Select Sign in
      • -
      -
    2. -
    -

    Deployment Pipeline -

    -
      -
    1. Complete the login
    2. -
    -

    Deployment Pipeline -

    -
      -
    1. -

      In the AWS interface

      -
        -
      • Select Switch role
      • -
      -
    2. -
    -

    Deployment Pipeline -

    -
      -
    1. -

      In the Switch Role interface

      -
        -
      • Account, enter your Account ID
      • -
      • Then enter Role
      • -
      • Select Switch Role
      • -
      -
    2. -
    -

    Deployment Pipeline -

    -
      -
    1. Complete Switch Role
    2. -
    -

    Deployment Pipeline -

    -
      -
    1. Access to EKS
    2. -
    -

    Deployment Pipeline -

    -
      -
    1. Here you will see an error message stating that the Team Burnham user is NOT allowed to list deployments - in all namespaces.
    2. -
    -

    Deployment Pipeline -

    -

    Deployment Pipeline -

    -
      -
    1. When you select team-burnham in namespace, you will see the forbidden - message disappear. This means that you are currently showing Team Burnham workloads (no workloads since any - workloads have not been deployed).
    2. -
    - - - - - -
    - -
    - - -
    - - -
    - - +
    + +
    +
    - -
    - -
    -
    -
    - - - - - - - - - - - - - - - - - - \ No newline at end of file + + + + + + + + + + + + + + + + + diff --git a/public/6-onboardteams/index.html b/public/6-onboardteams/index.html index 6632c27..d53c3a6 100644 --- a/public/6-onboardteams/index.html +++ b/public/6-onboardteams/index.html @@ -1,1691 +1,1653 @@ + + + + + + + + + + Manage teams using IaC :: AWS System Manager + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    +
    + +
    +
    + + + + +
    +
    + +
    +
    + +
    +
    + +
    + +
    + +
    + +

    + + Manage teams using IaC +

    + + +

    Onboarding Teams

    +

    In this section, we will onboard our teams to EKS Blueprints. We'll look at how to onboard a Platform team and an Application team and make sure we're defining the right access levels for each. For example, our Application team should have read-only access, scoped to their own namespace, while our Platform team has broader, more granular access, because the Platform team is responsible for managing the underlying infrastructure.

    +

    Benefits of managing teams using infrastructure as code (IaC):

    +
      +
    • Self-documenting code
    • +
    • Team-related logic kept in one place
    • +
    • Ability to use repeatable templates to create new environments.
    • +
    +

    Content

    +
      +
    1. Setting up teams
    2. +
    3. Configuring teams
    4. +
    5. Team Access
    6. +
    +
    + +
    + +
    + +
    -
diff --git a/public/7-add-ons/7.1-intro/index.html index bfcae02..2945ced 100644
      + +

      + + Introducing add-ons +

      + - - - - - - - - - - - -
    • - +
        +
      1. Adding an add-on to a template is as simple as adding the .addOns method to blueprints.EksBlueprint.builder(). We will use Cluster Autoscaler as an example to show how simple it is to use add-ons with EKS Blueprints. Add Cluster Autoscaler to your lib/pipeline.ts template as shown below:
      2. +
      +
      // lib/pipeline.ts
      +import * as cdk from 'aws-cdk-lib';
      +import { Construct } from 'constructs';
      +import * as blueprints from '@aws-quickstart/eks-blueprints';
      +import { KubernetesVersion } from 'aws-cdk-lib/aws-eks';
      +import { TeamApplication, TeamPlatform } from '../teams';
       
      +export default class PipelineConstruct extends Construct {
      +  constructor(scope: Construct, id: string, props?: cdk.StackProps) {
      +    super(scope, id)
       
      +    const account = props?.env?.account!;
      +    const region = props?.env?.region!;
       
      +    const blueprint = blueprints.EksBlueprint.builder()
      +      .account(account)
      +      .region(region)
      +      .clusterProvider(
      +        new blueprints.GenericClusterProvider({
      +          version: KubernetesVersion.V1_29,
      +        })
      +      )
      +      .addOns(new blueprints.ClusterAutoScalerAddOn) // Cluster Autoscaler addon goes here
      +      .teams(new TeamPlatform(account), new TeamApplication('burnham', account));
       
      +    blueprints.CodePipelineStack.builder()
      +      .name("eks-blueprints-workshop-pipeline")
      +      .owner("your-github-username")
      +      .repository({
      +          repoUrl: 'your-repo-name',
      +          credentialsSecretName: 'github-token',
      +          targetRevision: 'main'
      +      })
      +      // WE ADD THE STAGES IN WAVE FROM THE PREVIOUS CODE
      +      .wave({
      +        id: "envs",
      +        stages: [
      +          { id: "dev", stackBuilder: blueprint.clone('ap-southeast-1') }
      +        ]
      +      })
      +      .build(scope, id + '-stack', props);
      +  }
      +}
      +

      Add-ons

      +
        +
      1. If you are new to Cluster Autoscaler: it is a tool that automatically adjusts the number of nodes in your cluster when pods fail to schedule due to insufficient resources, or when nodes have been underutilized for an extended period and their pods can be rescheduled onto other nodes. Push your changes to your GitHub repo to start the process.
      2. +
      +
      git add .
      +git commit -m "adding CA"
      +git push https://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@github.com/AWS-First-Cloud-Journey/my-eks-blueprints.git
      +

      Add-ons

      +
        +
      1. Wait about 15 minutes to complete.
      2. +
      +

      Add-ons

      +
        +
      1. Then run the following command to check that the Cluster Autoscaler is running:
      2. +
      +
      kubectl get pods -n kube-system
      +

      Add-ons

      +
      + +
      -
diff --git a/public/7-add-ons/7.2-testingcluster/index.html index 76d16cd..679464f 100644
    + +

    + + Testing Cluster Autoscaler +

    + + + + + + +

    Check Cluster Autoscaler

    +

    We were able to deploy the Cluster Autoscaler successfully in the previous step.

    +

    The following steps will help test and validate the Cluster Autoscaler functionality in your cluster.

    +

    Deploy a sample application as a Deployment. Scale the Deployment to 10 replicas. Monitor the scaling events.

    +
      +
    1. +

      Deploy sample application

      +
        +
      • Check the number of available nodes.
      • +
      +
    2. +
    +
    kubectl get nodes
    +

    Add-ons

    +
      +
    1. Create a sample nginx application. Create a directory and a file named nginx.yaml:
    2. +
    +
    mkdir -p /home/ec2-user/environment
    +sudo vi /home/ec2-user/environment/nginx.yaml
    +

    Copy the following content into the nginx.yaml file:

    +
    apiVersion: apps/v1
    +kind: Deployment
    +metadata:
    +  name: nginx-to-scaleout
    +spec:
    +  replicas: 1
    +  selector:
    +    matchLabels:
    +      app: nginx
    +  template:
    +    metadata:
    +      labels:
    +        service: nginx
    +        app: nginx
    +    spec:
    +      containers:
    +      - image: nginx
    +        name: nginx-to-scaleout
    +        resources:
    +          limits:
    +            cpu: 500m
    +            memory: 512Mi
    +          requests:
    +            cpu: 500m
    +            memory: 512Mi
    +

    Finally, apply the nginx.yaml file:

    +
    kubectl apply -f ~/environment/nginx.yaml
    +

    Add-ons

    +
      +
    1. Check the pod is running
    2. +
    +
    kubectl get pod -l app=nginx
    +

    Add-ons

    +
      +
    1. Scale the deployment replicas
    2. +
    +
      +
    • We can now scale the deployment to 10 replicas and observe the deployment:
    • +
    +
    kubectl scale --replicas=10 deployment/nginx-to-scaleout
    +

    Add-ons

    +
      +
    1. Next, monitor the scaling event
    2. +
    +
      +
    • Some pods will be in a Pending state, which will trigger the cluster-autoscaler to expand the EC2 pool:
    • +
    +
    kubectl get pods -l app=nginx -o wide --watch
    +

    Add-ons

    +
      +
    1. To view the cluster-autoscaler logs:
    2. +
    +
    kubectl -n kube-system logs -f deployment/blueprints-addon-cluster-autoscaler-aws-cluster-autoscaler
    +

    Add-ons

    +
      +
    1. You can list all the nodes
    2. +
    +
    kubectl get nodes
    +

    Add-ons

    +
      +
    1. To delete the resources used in this exercise:
    2. +
    +
    kubectl delete deploy nginx-to-scaleout
    +rm ~/environment/nginx.yaml
    +

    Add-ons

    +
    + +
    -
diff --git a/public/7-add-ons/7.3-createaddons/index.html index 5f030b7..f1434b5 100644
    + +

    + + Create add-ons +

    + - - - - - - - - - - - - - - - -
  • - - 5.4 Access Cluster - - - - - - -
  • - - - - - - - - - - - - +

    Create add-ons

    +
      +
    1. First, we create kubevious_addon.ts in the lib folder
    2. +
    +
    touch lib/kubevious_addon.ts
    +

    Add-ons

    +
      +
    1. Add the following code to the kubevious_addon.ts file
    2. +
    +
    // lib/kubevious_addon.ts
    +import { Construct } from 'constructs';
    +import * as blueprints from '@aws-quickstart/eks-blueprints';
    +import { setPath } from '@aws-quickstart/eks-blueprints/dist/utils/object-utils';
     
    +/**
    + * User provided options for the Helm Chart
    + */
    +export interface KubeviousAddOnProps extends blueprints.HelmAddOnUserProps {
    +  version?: string,
    +  ingressEnabled?: boolean,
    +  kubeviousServiceType?: string,
    +}
     
    +/**
    + * Default props to be used when creating the Helm chart
    + */
    +const defaultProps: blueprints.HelmAddOnProps & KubeviousAddOnProps = {
    +  name: "blueprints-kubevious-addon",
    +  namespace: "kubevious",
    +  chart: "kubevious",
    +  version: "0.9.13",
    +  release: "kubevious",
    +  repository:  "https://helm.kubevious.io",
    +  values: {},
     
    +  ingressEnabled: false,
    +  kubeviousServiceType: "ClusterIP",
    +};
     
    +/**
    + * Main class to instantiate the Helm chart
    + */
    +export class KubeviousAddOn extends blueprints.HelmAddOn {
     
    +  readonly options: KubeviousAddOnProps;
     
+  constructor(props?: KubeviousAddOnProps) {
+    super({...defaultProps, ...props});
+    this.options = this.props as KubeviousAddOnProps;
+  }
+
+  deploy(clusterInfo: blueprints.ClusterInfo): Promise<Construct> {
+    let values: blueprints.Values = populateValues(this.options);
+    const chart = this.addHelmChart(clusterInfo, values);
+
+    return Promise.resolve(chart);
+  }
+}
+
+/**
+ * populateValues populates the appropriate values used to customize the Helm chart
+ * @param helmOptions User provided values to customize the chart
+ */
+function populateValues(helmOptions: KubeviousAddOnProps): blueprints.Values {
+  const values = helmOptions.values ?? {};
+
+  setPath(values, "ingress.enabled",  helmOptions.ingressEnabled);
+  setPath(values, "kubevious.service.type",  helmOptions.kubeviousServiceType);
+  setPath(values, "mysql.generate_passwords",  true);
+
+  return values;
+}
Add-ons
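    Because KubeviousAddOnProps extends HelmAddOnUserProps, the defaults above can be overridden when the add-on is instantiated; a minimal sketch (the overridden service type is illustrative and depends on what the Helm chart supports):

    // Illustrative only: overriding defaults from defaultProps.
    new KubeviousAddOn({
      version: "0.9.13",
      kubeviousServiceType: "LoadBalancer",
    });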

    +
      +
    1. Then add the following code to lib/pipeline.ts
    2. +
    +
    // lib/pipeline-stack.ts
    +import * as cdk from 'aws-cdk-lib';
    +import { Construct } from 'constructs';
    +import * as blueprints from '@aws-quickstart/eks-blueprints';
     
+import { TeamPlatform, TeamApplication } from '../teams'; 
+import { KubeviousAddOn } from './kubevious_addon'; // import the add-on defined in lib/kubevious_addon.ts
     
    +export default class PipelineConstruct extends Construct {
    +  constructor(scope: Construct, id: string, props?: cdk.StackProps){
+    super(scope,id)
+
+    // account and region are derived from the stack props, as in the earlier version of this file
+    const account = props?.env?.account!;
+    const region = props?.env?.region!;
     
    +    const blueprint = blueprints.EksBlueprint.builder()
    +    .account(account)
    +    .region(region)
    +    .addOns(
    +      new blueprints.ClusterAutoScalerAddOn,
+      new KubeviousAddOn(), // New addon goes here
    +    ) 
    +    .teams(new TeamPlatform(account), new TeamApplication('burnham',account));
    +  
    +    blueprints.CodePipelineStack.builder()
    +      .name("eks-blueprints-workshop-pipeline")
    +      .owner("your-github-username")
    +      .repository({
    +          repoUrl: 'your-repo-name',
    +          credentialsSecretName: 'github-token',
    +          targetRevision: 'main'
    +      })
    +      .wave({
    +        id: "envs",
    +        stages: [
    +          { id: "dev", stackBuilder: blueprint.clone('ap-southeast-1')}
    +        ]
    +      })
    +      .build(scope, id+'-stack', props);
    +  }
    +}
    +

    Add-ons

    +
      +
    1. Push to the GitHub repository
    2. +
    +
    git add .
    +git commit -m "adding Kubevious"
    +git push https://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@github.com/AWS-First-Cloud-Journey/my-eks-blueprints.git
    +

    Add-ons

    +
      +
    1. Wait 15 minutes to complete
    2. +
    +

    Add-ons

    +
      +
    1. Once the pipeline is complete, we can see our add-ons in action by running the command below:
    2. +
    +
    kubectl port-forward $(kubectl get pods -n kubevious -l "app.kubernetes.io/component=kubevious-ui" -o jsonpath="{.items[0].metadata.name}") 8080:80 -n kubevious
    +

    Add-ons

    +
    + +
    -
  • - - 6.1 Setting up teams - - - - - - -
  • - - - - - - - - - - - - +
    -
  • - "> - - 6.2 Configuring teams - - - - - - -
  • - - - - - - - - - - - - - - -
  • - - 6.3 Team Access - - - - - - -
  • - - - - - - - - - - - - - - - - - - - - - -
  • - - 7. Add-ons - - - - - - - - -
  • - - - - - - - - - - - - -
  • - - 8. Deploying Workload with ArgoCD - - - - - - - - -
  • - - - - - - - - - - - - -
  • - - 9. Clean up resources - - - - - - -
  • - - - - - - - - - -
    -

    More

    - -
    - - - -
    -
    - -
    - - -
    - - - - - -
    -
    -
    - -
    -
    - - - - -
    -
    - -
    -
    - - -
    -
    - -
    - -
    - -
    - -



diff --git a/public/7-add-ons/index.html b/public/7-add-ons/index.html
index b6f3152..3ce66b0 100644
--- a/public/7-add-ons/index.html
+++ b/public/7-add-ons/index.html
@@ -1,1770 +1,1730 @@

Add-ons :: AWS System Manager
Add-ons

    Add-ons are third-party (and native AWS) solutions that provide the functionality needed to optimize the efficient running of EKS Blueprints. Add-ons allow you to configure the tools and services you want to run to support your EKS workload. When you configure add-ons for a blueprint, the add-ons are made available at deployment time. Add-ons can deploy both Kubernetes-specific and AWS resources needed to support the add-on functionality.

    +

    The benefit of leveraging the EKS Blueprints Add-on is that you extend your ability to leverage open-source projects and tools built by the Kubernetes community. These projects and tools address different areas of running your workload on Kubernetes such as security, observability, CI/CD, GitOps, and more.

Add-on | Description
AppMesh | Adds an AppMesh controller and CRDs.
ArgoCD | Provisions Argo CD into your cluster.
AWS Load Balancer Controller | Provisions the AWS Load Balancer Controller into your cluster.
Calico | Adds the Calico 1.7.1 CNI/Network policy engine.
Cluster Autoscaler | Adds the standard cluster autoscaler.
Container Insights | Adds Container Insights support integrating monitoring with CloudWatch.
CoreDNS | Adds CoreDNS (flexible, extensible DNS server) Amazon EKS add-on.
ExternalDNS | Adds External DNS support for AWS to the cluster, integrating with Amazon Route 53.
Kube Proxy | Adds kube-proxy Amazon EKS add-on (maintains network rules on each Amazon EC2 node).
Metrics Server | Adds metrics server (pre-req for HPA and other monitoring tools).
Nginx | Adds NGINX ingress controller.
Secrets Store | Adds AWS Secrets Manager and Config Provider for Secret Store CSI Driver to the EKS cluster.
SSM Agent | Adds Amazon SSM Agent to worker nodes.
VPC CNI | Adds the Amazon VPC CNI Amazon EKS add-on to support native VPC networking.
Weave GitOps | Weave GitOps Core AddOn.
X-Ray | Adds X-Ray Daemon to the EKS cluster.
OPA Gatekeeper | Adds policy management features to your cluster.
Velero | Adds Velero to the EKS cluster.
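In code, enabling a handful of these add-ons is just a matter of listing their constructs on the blueprint builder. The sketch below reuses the builder pattern from earlier chapters; ClusterAutoScalerAddOn and ArgoCDAddOn appear elsewhere in this workshop, while MetricsServerAddOn and AwsLoadBalancerControllerAddOn are assumed class names that follow the same convention and should be checked against the EKS Blueprints add-on reference:

import * as cdk from 'aws-cdk-lib';
import * as blueprints from '@aws-quickstart/eks-blueprints';

const app = new cdk.App();

// Illustrative selection only: enable the add-ons your workload actually needs.
blueprints.EksBlueprint.builder()
  .account(process.env.CDK_DEFAULT_ACCOUNT!)    // placeholder account
  .region(process.env.CDK_DEFAULT_REGION!)      // placeholder region
  .version("auto")
  .addOns(
    new blueprints.MetricsServerAddOn(),             // metrics server (pre-req for HPA)
    new blueprints.ClusterAutoScalerAddOn(),         // standard cluster autoscaler
    new blueprints.AwsLoadBalancerControllerAddOn(), // AWS Load Balancer Controller
    new blueprints.ArgoCDAddOn(),                    // GitOps delivery with Argo CD
  )
  .build(app, 'eks-blueprint-with-addons');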
    +

    Content

    +
      +
1. Introducing add-ons
2. Testing Cluster Autoscaler
3. Create add-ons
    + + + + + +
    + +
    + + +
    - "> - - 7.3 Create add-ons - - - - - - - - - - - - - - - - - - - - - - - - - - +
    -
  • - "> - - 8. Deploying Workload with ArgoCD - - - - - - - - -
  • - - - - - - - - - - - - -
  • - - 9. Clean up resources - - - - - - -
  • - - - - - - - - - -
    -

    More

    - -
    - - - -
    -
    - -
    - - -
    - - - - - -
    -
    -
    - -
    -
    - - - - -
    -
    - -
    -
    - - -
    -
    - -
    - -
    - -
    - -

    - - Add-ons -

    - - - - - - -

    Add-ons

    -

    Add-ons are third-party (and native AWS) solutions that provide the functionality needed to optimize the - efficient running of EKS Blueprints. Add-ons allow you to configure the tools and services you want to run to - support your EKS workload. When you configure add-ons for a blueprint, the add-ons are made available at - deployment time. Add-ons can deploy both Kubernetes-specific and AWS resources needed to support the add-on - functionality.

    -

    The benefit of leveraging the EKS Blueprints Add-on is that you extend your ability to leverage open-source - projects and tools built by the Kubernetes community. These projects and tools address different areas of - running your workload on Kubernetes such as security, observability, CI/CD, GitOps, and more.

    - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
    Add-onDescription
    AppMeshAdds an AppMesh controller and CRDs.
    ArgoCDProvisions Argo CD into your cluster.
    AWS Load Balancer ControlleProvisions the AWS Load Balancer Controller into your cluster
    CalicoAdds the Calico 1.7.1 CNI/Network policy engine.
    Cluster AutoscalerAdds the standard cluster autoscaler.
    Container InsightsAdds Container Insights support integrating monitoring with CloudWatch.
    CoreDNSAdds CoreDNS (flexible, extensible DNS server) Amazon EKS add-on.
    ExternalDNSAdds External DNS support for AWS to the cluster, integrating with Amazon Route 53
    Kube ProxyAdds kube-proxy Amazon EKS add-on (maintains network rules on each Amazon EC2 node).
    Metrics ServerAdds metrics server (pre-req for HPA and other monitoring tools).
    NginxAdds NGINX ingress controller.
    Secrets StoreAdds AWS Secrets Manager and Config Provider for Secret Store CSI Driver to the EKS Cluster.
    SSM AgentAdds Amazon SSM Agent to worker nodes.
    VPC CNIAdds the Amazon VPC CNI Amazon EKS addon to support native VPC networking.
    Weave GitOpsWeave GitOps Core AddOn.
    X-RayAdds XRay Daemon to the EKS Cluster.
    OPA GatekeeperAdds policy management features to your cluster
    VeleroAdds Velero to the EKS Cluster.
    -

    Content

    -
      -
    1. Introducing add-ons
    2. -
    3. Test Cluster Autotscaler
    4. -
    5. Create add-ons
    6. -
    - - - - - -
    - -
    - - -
    - - -
    - - +
    + +
    +
    - -
    - -
    -
    -
diff --git a/public/8-deploy/8.1-argocd/index.html b/public/8-deploy/8.1-argocd/index.html
index bcc6527..84361c7 100644
--- a/public/8-deploy/8.1-argocd/index.html
+++ b/public/8-deploy/8.1-argocd/index.html
@@ -1,1726 +1,1672 @@

Introducing ArgoCD :: AWS System Manager
Introducing ArgoCD

    About ArgoCD

    +

The next step is to use ArgoCD to onboard our team's workloads. There are two ways to leverage ArgoCD with EKS Blueprints:

    +

• Onboard workloads manually using the ArgoCD CLI, exposing the local ArgoCD server to gain access to the dashboard.
• Leverage automated bootstrapping to automate your workload integration.

    +

ArgoCD is a declarative, GitOps continuous delivery tool for Kubernetes. The ArgoCD add-on provisions Argo CD into an EKS cluster and can optionally launch your workloads from public and private Git repositories.

    +

The Argo CD add-on allows platform administrators to combine cluster provisioning and workload bootstrapping in a single step, and enables use cases such as cloning an existing running production cluster into another region in a matter of minutes. This is critical for business continuity and disaster recovery scenarios, as well as cross-region availability and geographic expansion.

    +

    ArgoCD for EKS Blueprints

    +

    ArgoCD aligns well with the principles that define the value proposition of using the EKS Blueprint, including:

    +
      +
• Application definitions, configurations, and environments must be expressed declaratively and version controlled.
• Application deployment and lifecycle management should be automated, testable, and easy to understand.
• Follow the GitOps model of using Git repositories as the source of truth for your desired application state.
• Flexibility in how Kubernetes manifests are defined and managed.
• Argo CD automates the deployment of desired application states in specified target environments. Application deployments can track updates to branches or tags, or be pinned to a specific manifest version at a Git commit.
    +

    Create Workspace

    +

Argo CD is implemented as a Kubernetes controller that continuously monitors running applications and compares the current live state with the desired target state (as specified in the Git repo). Deployed applications whose live state deviates from the target state are considered out of sync. Argo CD reports and visualizes the discrepancies, and provides the means to automatically or manually synchronize the live state back to the desired target state. Any modifications made to the desired target state in the Git repository can be automatically applied and reflected in the specified target environments.

    +

    ArgoCD Bootstrapping

    +

EKS Blueprints provides an approach for bootstrapping workloads and add-ons from a customer's GitOps repository.

    +

You can see more in the Cluster Bootstrapping documentation.

    +

To enable bootstrapping, the ArgoCD add-on allows passing an ApplicationRepository at build time. The following repository types are currently supported (see the sketch after this list for how the repository reference is declared):

    +
      +
• Public HTTP/HTTPS repository (e.g., GitHub)
• Private HTTPS Git repository, which requires username/password authentication
• Private Git repository with SSH access, which requires an SSH key for authentication
• Private HTTPS GitHub repository, accessible with a GitHub token
    +
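As a rough sketch of what that repository reference can look like: the public case below mirrors the example used in the next section, while the private case assumes the ApplicationRepository type also accepts credentialsSecretName and credentialsType fields (field names and the 'TOKEN' value should be confirmed against the EKS Blueprints API reference), with the credentials stored in AWS Secrets Manager:

import * as blueprints from '@aws-quickstart/eks-blueprints';

// Public repository: no credentials required.
const publicRepo: blueprints.ApplicationRepository = {
  repoUrl: 'https://github.com/aws-samples/eks-blueprints-workloads.git',
  targetRevision: 'workshop',
};

// Private HTTPS repository (assumed field names; the token is expected to live
// in an AWS Secrets Manager secret named 'github-token').
const privateRepo: blueprints.ApplicationRepository = {
  repoUrl: 'https://github.com/your-org/your-private-workloads.git',
  targetRevision: 'main',
  credentialsSecretName: 'github-token',
  credentialsType: 'TOKEN',
};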
    + + + + + + + + + + + + + + + + + diff --git a/public/8-deploy/8.2-deploy/index.html b/public/8-deploy/8.2-deploy/index.html index 50d499e..bd3dafe 100644 --- a/public/8-deploy/8.2-deploy/index.html +++ b/public/8-deploy/8.2-deploy/index.html @@ -1,1811 +1,1763 @@ + + + + + + + + + + Deploying Workload with ArgoCD :: AWS System Manager + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Deploying Workload with ArgoCD

    Deploy Workload with ArgoCD

    +

    Define workload repo

    +

ArgoCD bootstrapping starts with defining variables that hold the repo information, such as the URL of the workload repo and the path to the applications within it. We will use a smaller version of the full-scale application example, containing a workload for team-burnham:

    +
    const repoUrl = 'https://github.com/aws-samples/eks-blueprints-workloads.git'
     
    +const bootstrapRepo : blueprints.ApplicationRepository = {
    +    repoUrl,
    +    targetRevision: 'workshop',
    +}
    +

You can see more in the EKS Blueprints Workloads repository.

    +

    ArgoCD add-on definition

    +

    The variables can then be passed as a parameter in the ArgoCD add-on definitions for our stage. Optionally, you can set a secret for the Argo admin.

    +
const devBootstrapArgo = new blueprints.ArgoCDAddOn({
    +    bootstrapRepo: {
    +        ...bootstrapRepo,
    +        path: 'envs/dev'
    +    },
    +});
    +
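If you do want to set that admin secret, a hedged sketch looks like the following; it assumes the add-on exposes an adminPasswordSecretName option (check the ArgoCDAddOn documentation) and that a secret with that name already exists in AWS Secrets Manager:

// Assumption: a secret named 'argocd-admin-secret' already exists in AWS Secrets Manager.
const devBootstrapArgoWithSecret = new blueprints.ArgoCDAddOn({
  adminPasswordSecretName: 'argocd-admin-secret',
  bootstrapRepo: {
    ...bootstrapRepo,
    path: 'envs/dev'
  },
});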

You can set different paths within the repo based on the environment you are working in. Since our repository has dev, test, and prod deployment folders, we can set the path to 'envs/dev', 'envs/test', or 'envs/prod' and give each variable its own name, as sketched below.

    +
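Following that pattern, the test and prod variants would look like the sketch below; only the dev environment is actually deployed in this workshop, so treat these as illustrative:

// Illustrative: additional per-environment definitions reusing bootstrapRepo from above.
const testBootstrapArgo = new blueprints.ArgoCDAddOn({
  bootstrapRepo: {
    ...bootstrapRepo,
    path: 'envs/test'
  },
});

const prodBootstrapArgo = new blueprints.ArgoCDAddOn({
  bootstrapRepo: {
    ...bootstrapRepo,
    path: 'envs/prod'
  },
});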

    We can then pass this information to the pipeline using the addOns method as part of the stackBuilder property that drives the blueprints.

    +
    blueprints.CodePipelineStack.builder()
    +  .name("pipeline-name")
    +  .owner("owner-name")
    +  .repository({
    +      repoUrl: 'repo-name',
    +      credentialsSecretName: 'github-token',
    +      targetRevision: 'main'
    +  })
    +  .wave({
    +      id: 'envs',
    +      stages: [
    +          { id: "dev", stackBuilder: blueprint.clone('ap-southeast-1').addOns(devBootstrapArgo)}
    +      ]
    +  })
    +  .build(app, 'pipeline-stack');
    +
      +
1. Make changes to the file lib/pipeline-stack.ts:
    +
    // lib/pipeline.ts
    +import * as cdk from 'aws-cdk-lib';
    +import { Construct } from 'constructs';
    +import * as blueprints from '@aws-quickstart/eks-blueprints';
    +import { KubernetesVersion } from 'aws-cdk-lib/aws-eks';
    +import { TeamApplication, TeamPlatform } from '../teams';
     
    +export default class PipelineConstruct extends Construct {
    +  constructor(scope: Construct, id: string, props?: cdk.StackProps) {
    +    super(scope, id)
     
    +    const account = props?.env?.account!;
    +    const region = props?.env?.region!;
     
    +    const blueprint = blueprints.EksBlueprint.builder()
    +      .account(account)
    +      .region(region)
    +      .version("auto")
    +      .addOns(
    +        new blueprints.ClusterAutoScalerAddOn,
    +        new blueprints.KubeviousAddOn(),
    +      )
    +      .teams(new TeamPlatform(account), new TeamApplication('burnham', account));
     
    +    // HERE WE ADD THE ARGOCD APP OF APPS REPO INFORMATION
    +    const repoUrl = 'https://github.com/aws-samples/eks-blueprints-workloads.git';
     
    +    const bootstrapRepo: blueprints.ApplicationRepository = {
    +      repoUrl,
    +      targetRevision: 'workshop',
    +    }
     
    +    // HERE WE GENERATE THE ADDON CONFIGURATIONS
    +    const devBootstrapArgo = new blueprints.ArgoCDAddOn({
    +      bootstrapRepo: {
    +        ...bootstrapRepo,
    +        path: 'envs/dev'
    +      },
    +    });
     
    blueprints.CodePipelineStack.builder()
      .name("eks-blueprints-workshop-pipeline")
      .owner("your-github-username")
      .repository({
          repoUrl: 'your-repo-name',
          credentialsSecretName: 'github-token',
          targetRevision: 'main'
      })
      // WE ADD THE STAGES IN WAVE FROM THE PREVIOUS CODE
      .wave({
        id: "envs",
        stages: [
          // HERE WE ADD OUR NEW ADDON WITH THE CONFIGURED ARGO CONFIGURATIONS
          { id: "dev", stackBuilder: blueprint.clone('ap-southeast-1').addOns(devBootstrapArgo) }
        ]
      })
      .build(scope, id + '-stack', props);
  }
}

    Add-ons

    +
      +
1. Push the changes to your GitHub repository:

git add .
git commit -m "Bootstrapping ArgoCD"
git push https://<your-github-token>@github.com/FromSunNews/my-eks-blueprints.git
    +

    Add-ons

    +
      +
1. Wait about 15 minutes for the pipeline to complete.
    +

    Add-ons

    +
      +
1. Check for the argocd namespace with the following command:
    +
    kubectl get ns
    +

    Add-ons

diff --git a/public/8-deploy/8.3-manage/index.html b/public/8-deploy/8.3-manage/index.html
index 8e00912..cead40c 100644
--- a/public/8-deploy/8.3-manage/index.html
+++ b/public/8-deploy/8.3-manage/index.html
@@ -1,1721 +1,1680 @@

Manage workloads on ArgoCD :: AWS System Manager
Manage workloads on ArgoCD

    Workload management on ArgoCD

    +

    Now, let’s log into the user interface and see how the workloads are being managed through ArgoCD.

    +
      +
1. By default, the argocd-server service is not exposed publicly. For the purposes of this workshop, we will expose it through a Load Balancer:
    +
    kubectl patch svc blueprints-addon-argocd-server -n argocd -p '{"spec": {"type": "LoadBalancer"}}'
    +

    Add-ons

    +
      +
1. Wait about 5 minutes for the LoadBalancer to be created, then capture its hostname:
    +
    export ARGOCD_SERVER=`kubectl get svc blueprints-addon-argocd-server -n argocd -o json | jq --raw-output '.status.loadBalancer.ingress[0].hostname'`
    +

    Add-ons

    +
      +
1. The TYPE of the Argo CD server service changes to LoadBalancer and an EXTERNAL-IP is assigned. Copy the EXTERNAL-IP of the LoadBalancer:
    +
    kubectl get svc -n argocd
    +

    Add-ons

    +
      +
1. Open a browser and paste in the EXTERNAL-IP of the LoadBalancer.
    +

    Add-ons

    +
      +
1. Retrieve the auto-generated admin password (the username is admin):
    +
    kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath="{.data.password}" | base64 -d
    +

    Add-ons

    +
      +
1. After logging in, observe the workloads on the ArgoCD UI.
    +

    Add-ons

    +

    Add-ons

    +
    + + + + + + + + + + + + + + + + + diff --git a/public/8-deploy/index.html b/public/8-deploy/index.html index 525cac7..6d92ecb 100644 --- a/public/8-deploy/index.html +++ b/public/8-deploy/index.html @@ -1,1700 +1,1659 @@ + + + + + + + + + + Deploying Workload with ArgoCD :: AWS System Manager + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Deploying Workload with ArgoCD

    Deploy Workload with ArgoCD

    +

    You have now successfully provisioned an EKS cluster with your teams, roles, and pipelines to automate and manage the infrastructure. The next step is to leverage GitOps methodologies using ArgoCD to manage and automate our application workloads using technologies and tools you are probably already familiar with like Git.

    +

    In this section, we will demonstrate how to leverage ArgoCD to deploy and manage our application workloads.

    +

    What is ArgoCD?

    +
      +
• How to set up and deploy our workload with ArgoCD, and how to use the ArgoCD user interface to manage deployed workloads.
• It is important to understand the key principles of GitOps before diving into this section.
    +

    Key Principles of GitOps

    +
      +
• GitOps is declarative, which means that a system managed with GitOps must have its desired state expressed declaratively.
• The desired state is stored in a way that enforces immutability and versioning and retains a complete version history. Software agents pull the desired state declarations from the source.
• Software agents continuously observe the actual system state and attempt to apply the desired state.
    +

    Content

    +
      +
1. About ArgoCD
2. Deploy Workload with ArgoCD
3. Workload Management on ArgoCD
    + + + + + + + + + + + + + + + + + diff --git a/public/9-cleanup/index.html b/public/9-cleanup/index.html index 392fedd..1c08912 100644 --- a/public/9-cleanup/index.html +++ b/public/9-cleanup/index.html @@ -1,1691 +1,1655 @@ + + + + + + + + + + Clean up resources :: AWS System Manager + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Clean up resources

    Clean up resources

    +
      +
1. Delete the EKS Blueprints stacks:
    +
    cd ~/environment/my-eks-blueprints
    +cdk destroy --all
    +

    Create Workspace

    +
      +
1. Enter y to confirm.
    +

    Create Workspace

    +
      +
1. Go to the CloudFormation console.
    +

    Create Workspace

    +
      +
1. Select the stack to delete, then choose Delete.

Create Workspace
    +
diff --git a/public/categories/index.html b/public/categories/index.html
index 772fcf3..2bc15d9 100644
--- a/public/categories/index.html
+++ b/public/categories/index.html
@@ -1,1662 +1,1629 @@

Categories :: AWS System Manager
    + + + + + + + + + + + + + + + + + diff --git a/public/index.html b/public/index.html index 9847816..ca39fdb 100644 --- a/public/index.html +++ b/public/index.html @@ -1,1667 +1,1629 @@ + + + + + + + + + + Session Management :: AWS System Manager + + + + + + + + + + + + + + + + + + + + + + + + + + + +

    Introduction to EKS Blueprints

    +

    Architecture Diagram

    +

    ConnectPrivate

    +

    Core Concepts

    +

    ConnectPrivate

Concepts | Description
Cluster | An EKS Cluster deployed following best practices.
Resource Provider | Resource providers are abstractions that supply external AWS resources to the cluster (e.g., hosted zones, VPCs, etc.).
Add-on | Allows you to configure, deploy, and update the operational software, or add-ons, that provide key functionality to support your Kubernetes applications.
Teams | A logical grouping of IAM identities that have access to Kubernetes namespaces or cluster administrative access, depending upon the team type.
Pipelines | Continuous Delivery pipelines for deploying clusters and add-ons.
Application | An application that runs within an EKS Cluster.
    +

    Blueprint

    +

    ConnectPrivate

    +

    EKS Blueprints allow you to configure and deploy what is known as a blueprint cluster. A blueprint combines clusters, add-ons, and teams into a cohesive object that can be deployed as a whole. Once the blueprint is configured, it can be easily deployed across any number of AWS accounts and regions. Blueprints also leverage GitOps tools to facilitate cluster bootstrapping and workload integration.
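As a rough sketch of that cohesive object, the builder below combines a cluster, add-ons, and teams into a single blueprint and deploys it to one account and region. The add-on and team constructs mirror the ones used later in this workshop; the account/region lookups and stack name are placeholders:

import * as cdk from 'aws-cdk-lib';
import * as blueprints from '@aws-quickstart/eks-blueprints';
import { TeamPlatform, TeamApplication } from '../teams';

const app = new cdk.App();
const account = process.env.CDK_DEFAULT_ACCOUNT!; // placeholder
const region = process.env.CDK_DEFAULT_REGION!;   // placeholder

// A blueprint = cluster + add-ons + teams, deployable as a whole.
blueprints.EksBlueprint.builder()
  .account(account)
  .region(region)
  .version("auto")
  .addOns(new blueprints.ClusterAutoScalerAddOn(), new blueprints.ArgoCDAddOn())
  .teams(new TeamPlatform(account), new TeamApplication('burnham', account))
  .build(app, 'my-blueprint');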

    +

    Contents

    +
      +
1. Introduction
2. Preparation Steps
3. Creating EKS Blueprints
4. Creating CDK Project
5. Deploying Pipeline
6. Onboarding Teams
7. Add-ons
8. Deployment
9. Cleanup Resources
    + + + +
    - "> - - 7.3 Create add-ons - - - - - - - - - - - - - - - - - - - - - - - - - - +
    -
  • - "> - - 8. Deploying Workload with ArgoCD - - - - - - -
      - - - - - - - - - - - - - - - - - -
    • - - 8.1 Introducing ArgoCD - - - - - + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + -
    • +
  • + +
    +
    +
    + + + + + + + + + + + + + + + + + - - - - - - - - - - - - - -
  • - - 8.2 Deploying Workload with ArgoCD - - - - - - -
  • - - - - - - - - - - - - - - -
  • - - 8.3 Manage workloads on ArgoCD - - - - - - -
  • - - - - - - - - - - - - - - - - - - - - - -
  • - - 9. Clean up resources - - - - - - -
  • - - - - - - - - - -
    -

    More

    - -
    - - - -
    -
    - -
    - - - - - - - - -
    -
    -
    - -
    - -
    - -
    - - - - - - navigation - - - -

    Introduction to EKS Blueprints

    -

    Architecture Diagram

    -

    ConnectPrivate

    -

    Core Concepts

    -

    ConnectPrivate

    - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
    ConceptsDescription
    ClusterAn EKS Cluster deployed following best practices.
    Resource ProviderResource providers are abstractions that supply external AWS resources to the cluster (e.g., hosted - zones, VPCs, etc.).
    Add-onAllows you to configure, deploy, and update the operational software or add-ons that provide key - functionality to support your Kubernetes applications.
    TeamsA logical grouping of IAM identities that have access to Kubernetes namespaces or cluster - administrative access depending upon the team type.
    PipelinesContinuous Delivery pipelines for deploying clusters and add-ons
    ApplicationAn application that runs within an EKS Cluster.
    -

    Blueprint

    -

    ConnectPrivate

    -

    EKS Blueprints allow you to configure and deploy what is known as a blueprint cluster. A blueprint combines - clusters, add-ons, and teams into a cohesive object that can be deployed as a whole. Once the blueprint is - configured, it can be easily deployed across any number of AWS accounts and regions. Blueprints also leverage - GitOps tools to facilitate cluster bootstrapping and workload integration.

    -

    Contents

    -
      -
    1. Introduction
    2. -
    3. Preparation Steps
    4. -
    5. Creating EKS Blueprints
    6. -
    7. Creating CDK Project
    8. -
    9. Deploying Pipeline
    10. -
    11. Onboarding Teams
    12. -
    13. Add-ons
    14. -
    15. Deployment
    16. -
    17. Cleanup Resources
    18. -
    - - - -
    - - -
    - - - -
    - -
    -
    -
diff --git a/public/tags/index.html b/public/tags/index.html
index a785a42..970d513 100644
--- a/public/tags/index.html
+++ b/public/tags/index.html
@@ -1,1662 +1,1629 @@

Tags :: AWS System Manager
    - - - - - - - - - - - - - - - - - - \ No newline at end of file +
    + +
    +
    +
    + + + + + + + + + + + + + + + + + diff --git a/public/vi/1-introduce/index.html b/public/vi/1-introduce/index.html index 81d524d..22b7421 100644 --- a/public/vi/1-introduce/index.html +++ b/public/vi/1-introduce/index.html @@ -12,21 +12,21 @@ Giới thiệu :: AWS System Manager - - - - - - - - - + + + + + + + + + - + - + + + + + + + + + +
    +
    +
    + +
    +
    + + + + +
    +
    + +
    +
Create a VPC and EC2 Instance

    + + + + + + +

Create a VPC

1. Go to the AWS Management Console:
   • Search for VPC.
   • Select Create VPC.

Create Workspace

1. In the VPC settings screen:
   • For Resources to create, select VPC and more.
   • For Name tag auto-generation, enter EKS Blueprint VPC.
   • For IPv4 CIDR block, enter 10.0.0.0/16.

Create Workspace

1. Choose the AZs:
   • Select the AZs as shown in the image and click Create VPC.

Create Workspace

1. Once created, we will have a VPC like this:

Create Workspace
Create Workspace

Create an EC2 Instance

1. Go to the AWS Management Console:
   • Search for EC2.
   • Select Launch Instance.

Create Workspace

1. In the Launch an instance screen:
   • For Name and tags, enter EKS Blueprint Instance.

Create Workspace

1. In the Application and OS Images (Amazon Machine Image) section:
   • Select Amazon Linux 2023 AMI.

Create Workspace

1. In the Instance type and Key pair sections:
   • Select t3.small.
   • Create a key pair named kp-eks-blueprint.

   Create Workspace

2. In the Network settings section:
   • Select the VPC you just created.
   • Select public-subnet-1.
   • Enable Auto-assign public IP.
   • Create a Security Group.

Create Workspace

1. In the Configure storage section, change the storage size to 30 GB and click Launch Instance.

Create Workspace

1. We have now finished launching the EC2 Instance.

Create Workspace

diff --git a/public/vi/2-prerequiste/2.2-connectec2/index.html b/public/vi/2-prerequiste/2.2-connectec2/index.html
index 3b08453..80a14e9 100644
--- a/public/vi/2-prerequiste/2.2-connectec2/index.html
+++ b/public/vi/2-prerequiste/2.2-connectec2/index.html
@@ -1,1767 +1,1703 @@

Kết nối SSH từ Visual Studio Code đến EC2 Instance :: AWS System Manager
Connect via SSH from Visual Studio Code to the EC2 Instance

Connect via SSH from Visual Studio Code to the EC2 Instance

Connecting via SSH from Visual Studio Code to the EC2 Instance is a quick alternative to using Cloud9.

1. Download Visual Studio Code and the extension named Remote - SSH.

   You can download VS Code here: Download VSCode

   Once VS Code is installed, install the following extension:

   Create Workspace

2. After installing, click the icon in the bottom-left corner of the screen; a dialog will open.
   Create Workspace

3. Click Connect to Host.
   Create Workspace

4. Click Add New SSH Host.
   Create Workspace

5. In the input box, enter eks-blueprint-remote and press Enter.
   Create Workspace

6. Click the path in C:\Users\ADMIN\.ssh\config to configure it.
   Create Workspace

7. In the SSH host block you just configured, update the information with the correct IPv4 address of the EC2 Instance and the path to the Key Pair on your machine.
   Create Workspace

8. Click the SSH icon in the bottom-left corner and start the connection with Connect.
   Create Workspace

9. Select Continue.
   Create Workspace

10. Click Linux.
    Create Workspace

11. Select Open Folder and click OK.
    Create Workspace

12. This is the interface after connecting.
    Create Workspace
    - - - - - - - - - - - - - - - - - - \ No newline at end of file + + + + + + + + + + + + + + + + + diff --git a/public/vi/2-prerequiste/2.3-installtool/index.html b/public/vi/2-prerequiste/2.3-installtool/index.html index 147d862..b103b17 100644 --- a/public/vi/2-prerequiste/2.3-installtool/index.html +++ b/public/vi/2-prerequiste/2.3-installtool/index.html @@ -1,1729 +1,1682 @@ + + + + + + + + + + Cài đặt Tool :: AWS System Manager + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    +
    +
    + +
    +
    + + + + +
    +
    + +
    +
    + +
    +
    + +
    + +
    + +
    + +

    + + Cài đặt Tool +

    + + +

Install kubectl

Amazon EKS clusters require the kubectl and kubelet tools, plus aws-cli or aws-iam-authenticator, to enable IAM authentication for your Kubernetes cluster.

1. Use the following commands to install kubectl:

sudo curl --silent --location -o /usr/local/bin/kubectl \
   https://amazon-eks.s3.us-west-2.amazonaws.com/1.21.2/2021-07-05/bin/linux/amd64/kubectl

sudo chmod +x /usr/local/bin/kubectl

You can read more about installing kubectl on AWS.

2. Update the AWS CLI:

curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
unzip awscliv2.zip
sudo ./aws/install

3. Verify that the tools are on the PATH:

for command in kubectl jq envsubst aws
  do
    which $command &>/dev/null && echo "$command in path" || echo "$command NOT FOUND"
  done

4. Enable kubectl bash_completion:

kubectl completion bash >>  ~/.bash_completion
. /etc/profile.d/bash_completion.sh
. ~/.bash_completion
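As a quick optional check (not part of the original lab steps), you can also print the versions that were just installed; the exact version strings will differ depending on when you run the lab:

kubectl version --client
aws --version
jq --version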

diff --git a/public/vi/2-prerequiste/2.4-createrole/index.html b/public/vi/2-prerequiste/2.4-createrole/index.html
index 89142fe..f702511 100644
--- a/public/vi/2-prerequiste/2.4-createrole/index.html
+++ b/public/vi/2-prerequiste/2.4-createrole/index.html
@@ -1,1751 +1,1708 @@

Create IAM role

Create an IAM role for the Cloud9 instance

1. First, go to the AWS Management Console.

   • Search for and select IAM.

2. In the IAM console:

   • Choose Roles.
   • Choose Create role.

3. In the Select trusted entity step:

   • Choose AWS service.
   • Choose EC2.
   • Choose Next.

4. In the Add permissions step:

   • Search for AdministratorAccess.
   • Select AdministratorAccess.
   • Choose Next.

5. Complete the Name step:

   • For Name, enter eks-blueprints-cdk-workshop-admin.

6. Choose Create role.

7. The IAM role for the EC2 Instance has now been created.
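If you prefer the command line, here is a minimal sketch of the equivalent of the console steps above. The role name and managed policy follow the lab; the trust-policy file name is just an example. Note that the console flow also creates an instance profile with the same name as the role; with the CLI you would create and populate it yourself (aws iam create-instance-profile / aws iam add-role-to-instance-profile).

# Trust policy allowing EC2 to assume the role (example file name)
cat > ec2-trust-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "ec2.amazonaws.com" },
      "Action": "sts:AssumeRole"
    }
  ]
}
EOF

aws iam create-role \
  --role-name eks-blueprints-cdk-workshop-admin \
  --assume-role-policy-document file://ec2-trust-policy.json

aws iam attach-role-policy \
  --role-name eks-blueprints-cdk-workshop-admin \
  --policy-arn arn:aws:iam::aws:policy/AdministratorAccess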
diff --git a/public/vi/2-prerequiste/2.5-attachrole/index.html b/public/vi/2-prerequiste/2.5-attachrole/index.html
index 17b1b54..d568a80 100644
--- a/public/vi/2-prerequiste/2.5-attachrole/index.html
+++ b/public/vi/2-prerequiste/2.5-attachrole/index.html
@@ -1,1718 +1,1680 @@

Attach IAM role

Attach the IAM role to the Cloud9 instance

1. First, go to the AWS Management Console.

   • Search for ec2 and select Instances.

2. In the EC2 instances view:

   • Select the EKS Blueprint instance.
   • Choose Actions.
   • Choose Security.
   • Choose Modify IAM role.

3. In the Modify IAM role view:

   • Select the eks-blueprints-cdk-workshop-admin role.
   • Choose Update IAM role.
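If you would rather attach the role from the command line, a minimal sketch follows. The instance ID is a placeholder, the Name-tag filter value "EKS-Blueprint" is an assumption about how your instance is tagged, and the instance profile name assumes the console created one with the same name as the role (its default behaviour when a role is created for EC2).

# Look up the instance ID of the lab instance (tag value is an assumption)
aws ec2 describe-instances \
  --filters "Name=tag:Name,Values=EKS-Blueprint" \
  --query 'Reservations[].Instances[].InstanceId' --output text

# Associate the instance profile that backs the role (instance ID is a placeholder)
aws ec2 associate-iam-instance-profile \
  --instance-id i-0123456789abcdef0 \
  --iam-instance-profile Name=eks-blueprints-cdk-workshop-admin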

diff --git a/public/vi/2-prerequiste/2.6-updaterole/index.html b/public/vi/2-prerequiste/2.6-updaterole/index.html
index 92fc524..dab7c0a 100644
--- a/public/vi/2-prerequiste/2.6-updaterole/index.html
+++ b/public/vi/2-prerequiste/2.6-updaterole/index.html
@@ -1,1719 +1,1675 @@

Update IAM

Update the IAM role

1. Configure the environment variables (a sketch that derives these values automatically follows at the end of this section):

export ACCOUNT_ID=xxx
export AWS_REGION=xxx

2. Check AWS_REGION:

test -n "$AWS_REGION" && echo AWS_REGION is "$AWS_REGION" || echo AWS_REGION is not set

3. Save them to the bash_profile:

echo "export ACCOUNT_ID=${ACCOUNT_ID}" | tee -a ~/.bash_profile
echo "export AWS_REGION=${AWS_REGION}" | tee -a ~/.bash_profile
aws configure set default.region ${AWS_REGION}
aws configure get default.region

4. Validate the IAM role:

aws sts get-caller-identity --query Arn | grep eks-blueprints-cdk-workshop-admin -q && echo "IAM role valid" || echo "IAM role NOT valid"

If the result is IAM role NOT valid, go back through the previous steps and check that the IAM role you created and attached to the Cloud9 Workspace is correct.
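If you do not want to look up ACCOUNT_ID and AWS_REGION by hand in step 1, an optional sketch that derives them from the caller identity and the instance metadata service. The IMDSv2 calls assume you are running this on the EC2 instance itself; otherwise set AWS_REGION manually.

# Account ID from the current credentials
export ACCOUNT_ID=$(aws sts get-caller-identity --query Account --output text)

# Region from IMDSv2 (only works on the EC2 instance itself)
TOKEN=$(curl -s -X PUT "http://169.254.169.254/latest/api/token" \
  -H "X-aws-ec2-metadata-token-ttl-seconds: 60")
export AWS_REGION=$(curl -s -H "X-aws-ec2-metadata-token: $TOKEN" \
  http://169.254.169.254/latest/meta-data/placement/region)

echo $ACCOUNT_ID $AWS_REGION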

diff --git a/public/vi/2-prerequiste/index.html b/public/vi/2-prerequiste/index.html
index 863d106..04714a2 100644
--- a/public/vi/2-prerequiste/index.html
+++ b/public/vi/2-prerequiste/index.html
@@ -12,21 +12,21 @@

Create EKS Blueprints

Create EKS Blueprints

Refer to how to create a GitHub Repository.

1. Go to GitHub's New repository page.

   • In the Create a new repository form, enter my-eks-blueprints for Repository name.
   • Choose Public.
   • Choose Create repository.

2. After the repository has been created:

   • Copy and save the HTTPS URL of the Git repository.

3. In the GitHub interface, configure and create a token:

   • Click the Avatar of your GitHub account.
   • Choose Settings.
   • Then scroll down and choose Developer settings.

4. In the Developer settings view:

   • Choose Personal access tokens.
   • Choose Generate new token.

5. In the Generate new token form:

   • For Note, enter eks-workshop-token.
   • Select the following scopes: repo and admin:repo_hook.
   • Choose Generate token.

6. Choose Generate token.

7. After the token has been generated:

   • Copy and keep the token.

Refer to how to create a Personal Access Token.

8. Install git:

sudo dnf install git -y
git --version

9. Clone the repository:

git clone https://github.com/<your-alias>/my-eks-blueprints.git
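Before moving on, a quick optional check (not in the original steps) that the clone worked and that git knows who you are for later commits; the name and email values are placeholders:

cd my-eks-blueprints
git remote -v

# Set your commit identity if it is not configured yet (placeholders)
git config --global user.name "Your Name"
git config --global user.email "you@example.com"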

diff --git a/public/vi/4-createcdkproject/index.html b/public/vi/4-createcdkproject/index.html
index 2354351..7e18b97 100644
--- a/public/vi/4-createcdkproject/index.html
+++ b/public/vi/4-createcdkproject/index.html
@@ -1,1782 +1,1720 @@

Create CDK project

Create a CDK project

1. First, change into the main repo directory and install nvm:

cd my-eks-blueprints
curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.39.1/install.sh | bash
export NVM_DIR="$([ -z "${XDG_CONFIG_HOME-}" ] && printf %s "${HOME}/.nvm" || printf %s "${XDG_CONFIG_HOME}/nvm")"
[ -s "$NVM_DIR/nvm.sh" ] && \. "$NVM_DIR/nvm.sh"
source ~/.bashrc
nvm -v

2. Use Node.js version 18:

nvm install v18
nvm use v18
node -v
npm -v

You need a Node.js version above 14.15.0 to use the CDK. See more here.

3. Install TypeScript and CDK version 2.147.3:

npm -g install typescript
npm install -g aws-cdk@2.147.3
cdk --version

4. Create a new CDK project using TypeScript:

cdk init app --language typescript

5. In the VS Code interface:

   • Look at the sidebar.
   • Review the structure of the project.
   • lib/ : this is where the stacks or constructs of your CDK project are defined.
   • bin/my-eks-blueprints.ts : this is the entrypoint of the CDK project. It loads the constructs defined in lib/.

You can read more in the CDK documentation.

6. Set AWS_DEFAULT_REGION and ACCOUNT_ID:

export AWS_DEFAULT_REGION=ap-southeast-1
export ACCOUNT_ID=212454837823

Note: remember to change ACCOUNT_ID to your own account ID for the lab.

7. Bootstrap the account:

   • To bootstrap, run:

cdk bootstrap --trust=$ACCOUNT_ID \
  --cloudformation-execution-policies arn:aws:iam::aws:policy/AdministratorAccess \
  aws://$ACCOUNT_ID/$AWS_REGION

   • When the bootstrap succeeds, output like the following appears:

Environment aws://212454837823/ap-southeast-1 bootstrapped.

8. Next, install the eks-blueprints and dotenv modules for the project:

npm i @aws-quickstart/eks-blueprints dotenv
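As an optional check (not part of the original lab), you can confirm that the bootstrap step created the CDKToolkit CloudFormation stack; CDKToolkit is the default stack name used by cdk bootstrap:

aws cloudformation describe-stacks \
  --stack-name CDKToolkit \
  --query 'Stacks[0].StackStatus' --output text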

diff --git a/public/vi/404.html b/public/vi/404.html
index 5c505da..266f757 100644
--- a/public/vi/404.html
+++ b/public/vi/404.html
@@ -9,15 +9,15 @@

Create Cluster

Create the Cluster

In this section, we deploy our first EKS cluster using the eks-blueprints package. Blueprints is published as an npm module.

You can learn more about Amazon EKS Blueprints for CDK.

1. Edit the main file lib/my-eks-blueprints-stack.ts:

   • Open the file lib/my-eks-blueprints-stack.ts.
   • Review the sample code in the file.

2. Complete the file lib/my-eks-blueprints-stack.ts by pasting (replacing its contents with) the following code:

// lib/my-eks-blueprints-stack.ts
import * as cdk from 'aws-cdk-lib';
import { Construct } from 'constructs';
import * as blueprints from '@aws-quickstart/eks-blueprints';
import { KubernetesVersion } from 'aws-cdk-lib/aws-eks';

export default class ClusterConstruct extends Construct {
  constructor(scope: Construct, id: string, props?: cdk.StackProps) {
    super(scope, id);

    const account = props?.env?.account!;
    const region = props?.env?.region!;

    const blueprint = blueprints.EksBlueprint.builder()
      .account(account)
      .region(region)
      .clusterProvider(
        new blueprints.GenericClusterProvider({
          version: 'auto'
        })
      )
      .addOns()
      .teams()
      .build(scope, id + "-stack");
  }
}

3. Open the file bin/my-eks-blueprints.ts to review the sample code.

4. In this file, we create a CDK Construct, the building block of CDK that represents what is needed to build components of the AWS Cloud.

   • In our case, the component is an EKS cluster blueprint placed in the provided account and region, with add-ons and teams (which we have not assigned yet) and all the other resources required to create the blueprint (for example the VPC, subnets, and so on). The build() call at the end instantiates the cluster blueprint.

   • To actually make a construct usable in the CDK project, we need to add it to our entrypoint.

   • Replace the contents of bin/my-eks-blueprints.ts with the following code block:

// bin/my-eks-blueprints.ts
import * as cdk from 'aws-cdk-lib';
import ClusterConstruct from '../lib/my-eks-blueprints-stack';
import * as dotenv from 'dotenv';

const app = new cdk.App();
const account = process.env.CDK_DEFAULT_ACCOUNT!;
const region = process.env.CDK_DEFAULT_REGION;
const env = { account, region }

new ClusterConstruct(app, 'cluster', { env });

5. Create a new .env file.

6. Add the environment variables to it:

CDK_DEFAULT_ACCOUNT=XXXXX
CDK_DEFAULT_REGION=XXXX

Replace these with your own CDK_DEFAULT_ACCOUNT and CDK_DEFAULT_REGION values.

7. The file imports the Construct to make it available, then uses the CDK app to instantiate a new object of the CDK Construct we imported. Check the CDK app:

cdk list

   • If there are no problems, we get the following result:

cluster-stack

As you can see, we can leverage EksBlueprint to define our cluster easily using CDK.

Instead of deploying a single cluster, we will leverage the blueprint builder to add a deployment pipeline that can handle all updates to our infrastructure across different environments.
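If you want to see what this construct would actually create before wiring up the pipeline, a quick optional step (not part of the original lab) is to synthesize the stack and look at the generated CloudFormation template in cdk.out:

cdk synth cluster-stack > /dev/null
ls cdk.out/*.template.json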

diff --git a/public/vi/5-deploymentpipeline/5.2-accesscluster/index.html b/public/vi/5-deploymentpipeline/5.2-accesscluster/index.html
index 888d60e..6011155 100644
--- a/public/vi/5-deploymentpipeline/5.2-accesscluster/index.html
+++ b/public/vi/5-deploymentpipeline/5.2-accesscluster/index.html
@@ -1,1899 +1,1824 @@

Create Pipeline

Create the Pipeline

Set up AWS Secrets Manager

We need to add the GitHub Personal Access Token to AWS Secrets Manager so that AWS CodePipeline and GitHub can work together; our pipeline relies on a webhook to run successfully.

You can read more about how to create a GitHub Personal Access Token.

1. After creating the GitHub Personal Access Token:

   • Return to the VSCode Terminal.
   • Create a Secret in Secrets Manager named eks-workshop-token:

aws secretsmanager create-secret --name "eks-workshop-token" --description "github access token" --secret-string "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" 

Note: remember to replace the secret-string with the token you created.

2. We can create a new CodePipelineStack resource by creating a new CDK Construct in the lib/ directory, then importing the Construct into the main entrypoint file.

   • Create the new construct file:

touch lib/pipeline.ts

3. Once the file is created, open it and add the following code to create the pipeline construct:

// lib/pipeline.ts
import * as cdk from 'aws-cdk-lib';
import { Construct } from 'constructs';
import * as blueprints from '@aws-quickstart/eks-blueprints';
import { KubernetesVersion } from 'aws-cdk-lib/aws-eks';

export default class PipelineConstruct extends Construct {
  constructor(scope: Construct, id: string, props?: cdk.StackProps) {
    super(scope, id)

    const account = props?.env?.account!;
    const region = props?.env?.region!;

    const blueprint = blueprints.EksBlueprint.builder()
      .account(account)
      .region(region)
      .clusterProvider(
        new blueprints.GenericClusterProvider({
          version: 'auto'
        })
      )
      .addOns()
      .teams();

    blueprints.CodePipelineStack.builder()
      .name("eks-blueprints-workshop-pipeline")
      .owner("your-github-username")
      .repository({
          repoUrl: 'your-repo-name',
          credentialsSecretName: 'github-token',
          targetRevision: 'main'
      })
      .build(scope, id+'-stack', props);
  }
}

Configure the values:

   • name: enter eks-blueprints-workshop-pipeline, or any pipeline name you prefer.
   • owner: enter your GitHub user name (in the lab, AWS-First-Cloud-Journey).
   • repoUrl: enter the name of the repo (in the lab, my-eks-blueprints).
   • credentialsSecretName: enter your secret name (in the lab, eks-workshop-token).
   • targetRevision: enter the revision main.

4. To make sure we can access the Construct, we need to import it and instantiate a new construct.

   • Replace the contents of bin/my-eks-blueprints.ts:

// bin/my-eks-blueprints.ts
import * as cdk from 'aws-cdk-lib';
import ClusterConstruct from '../lib/my-eks-blueprints-stack';
import * as dotenv from 'dotenv';
import PipelineConstruct from '../lib/pipeline'; // IMPORT OUR PIPELINE CONSTRUCT

dotenv.config();

const app = new cdk.App();
const account = process.env.CDK_DEFAULT_ACCOUNT!;
const region = process.env.CDK_DEFAULT_REGION;
const env = { account, region }

new ClusterConstruct(app, 'cluster', { env });
new PipelineConstruct(app, 'pipeline', { env });

5. Check the pipeline list:

cdk list

6. Add stages. In this step we add the stages to the pipeline (the lab uses a dev stage; you can add further stages for test and production in other regions):

// lib/pipeline.ts
import * as cdk from 'aws-cdk-lib';
import { Construct } from 'constructs';
import * as blueprints from '@aws-quickstart/eks-blueprints';
import { KubernetesVersion } from 'aws-cdk-lib/aws-eks';

export default class PipelineConstruct extends Construct {
  constructor(scope: Construct, id: string, props?: cdk.StackProps) {
    super(scope, id)

    const account = props?.env?.account!;
    const region = props?.env?.region!;

    const blueprint = blueprints.EksBlueprint.builder()
      .account(account)
      .region(region)
      .clusterProvider(
        new blueprints.GenericClusterProvider({
          version: 'auto'
        })
      )
      .addOns()
      .teams();

    blueprints.CodePipelineStack.builder()
      .name("eks-blueprints-workshop-pipeline")
      .owner("your-github-username")
      .repository({
          repoUrl: 'your-repo-name',
          credentialsSecretName: 'github-token',
          targetRevision: 'main'
      })
      // WE ADD THE STAGES IN WAVE FROM THE PREVIOUS CODE
      .wave({
        id: "envs",
        stages: [
          { id: "dev", stackBuilder: blueprint.clone('ap-southeast-1') }
        ]
      })
      .build(scope, id + '-stack', props);
  }
}

   • The blueprints.StackStage helper class is used to define our stages via .stage.

   • The .wave helper supports deploying stages in parallel.

   • In this lab, we deploy one cluster.

   • If you deploy multiple clusters, we only need to add .wave with the list of stages describing how you want to structure the different deployment stages of your pipeline (for example different add-ons, region deployments, and so on).

   • Our stack will deploy the following cluster: EKS in the dev environment. CodePipeline deploys to the region ap-southeast-1.

7. Check the pipeline list again:

cdk list

The result looks like this:

cluster-stack
pipeline-stack
pipeline-stack/dev/dev-blueprint
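As an optional sanity check (not part of the original lab steps), you can confirm that the secret referenced by credentialsSecretName exists before deploying; the secret name below follows the lab:

aws secretsmanager describe-secret \
  --secret-id eks-workshop-token \
  --query '{Name:Name,ARN:ARN}'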

diff --git a/public/vi/5-deploymentpipeline/5.3-pipelineinaction/index.html b/public/vi/5-deploymentpipeline/5.3-pipelineinaction/index.html
index 897560f..3e68c89 100644
--- a/public/vi/5-deploymentpipeline/5.3-pipelineinaction/index.html
+++ b/public/vi/5-deploymentpipeline/5.3-pipelineinaction/index.html
@@ -1,1853 +1,1758 @@

Pipeline in Action

    Pipeline in Action

    +

    Các construct đã được sửa đổi, sau đó lưu lại.

    +
      +
    1. Chúng ta thực hiện add, commit và push các thay đổi của bạn vào remote repository
    2. +
    +
    git add .
    +git commit -m "Setting up EKS Blueprints deployment pipeline"
    +git branch -M main
    +git config credential.helper store
    +git push https://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@github.com/AWS-First-Cloud-Journey/my-eks-blueprints.git
    +
      +
    • +

      Vì đây là lần đầu tiên bạn push lên reomte repository của Github, Cloud 9 sẽ nhắc bạn nhập thông tin đăng nhập GitHub của bạn. Bạn sẽ cần sử dụng mật khẩu GitHub của mình (nếu 2FA chưa được bật) hoặc Github Token của bạn (nếu 2FA được bật). Trong bài lab, chúng ta sử dụng Github Token vì sử dụng user namepassword đã không còn hiệu lực.

      +
    • +
    • +

      Nếu quên Secret bạn có thể xem trong AWS Secret Manager

      +
    • +
    • +

      Lệnh gọi credential.helper dùng để lưu trữ thông tin đăng nhập của bạn để bạn không cần phải tiếp tục nhập chúng mỗi khi thực hiện thay đổi.

      +
    • +
    +

    Lưu ý: git push sử dùng kèm token theo https://[token]@github.com/[github_name]/[repo_name].git

    +

    Create Workspace

    +
      +
    1. Kiểm tra lại repository xem đã được push lên chưa?
    2. +
    +

    Create Workspace

    +
      +
    1. Sau khi push lên repository, chúng ta thực hiện deploy pipeline stack.
    2. +
    +
    cdk deploy pipeline-stack
    +

    Create Workspace

    +
      +
    1. +

      Bạn sẽ được nhắc xác nhận việc triển khai pipeline stack.

      +
        +
      • Nhập y và sau đó nhấn enter.
      • +
      • Sau khi triển khai thành công sẽ hiển thị Stack ARN
      • +
      +
    2. +
    +

    Create Workspace

    +
      +
    1. Quay lại giao diện AWS Management Console
    2. +
    +
      +
    • Tìm và chọn CodePipeline
    • +
    +

    Create Workspace

    +
      +
    1. Bạn sẽ quan sát thấy quá trình triển khai đang diễn ra.
    2. +
    +

    Create Workspace

    +
      +
    1. +

      Đợi khoảng 30 phút sau, Pipeline hiển thị Succeeced

      +
        +
      • +

        CodePipeline sẽ nhận các thay đổi được thực hiện trong remote repository và pipelne sẽ bắt đầu xây dựng. Bản cập nhật(thêm, xóa, sửa code) có thể được nhìn thấy trong CodePipeline Console để xác minh rằng các stage được xây dựng chính xác.

        +
      • +
      • +

        Chọn vào tên pipeline.

        +
      • +
      +
    2. +
    +

    Create Workspace

    +
      +
    1. +

      Xem các bước SourceBuild

      +
        +
      • Source: Source stage chạy một action để truy xuất các thay đổi code khi pipeline được chạy theo cách thủ công hoặc khi một event webhook được gửi từ source provider. Trong trường hợp của chúng ta, mỗi khi chúng ta thực hiện thay đổi code trong my-eks-blueprints repository của mình và reflect những thay đổi trong remote repo, event sẽ được gửi đến pipeline(kèm GitHub personal access token) để kích hoạt thực thi pipline mới.
      • +
      • Build : build stage cho phép bạn chạy các action test và build như một phần của pipeline.
      • +
      • Trong quá trình Build, pipeline sẽ chạy các script để đảm bảo mọi thứ hoạt động như dự định.
      • +
      • Điều này bao gồm npm package installations, version checkingCDK synth.
      • +
      • Bất kỳ lỗi nào trong cấu hình từ repo của bạn đều có thể không thực hiện được stage này.
      • +
      • Bạn có thể xem danh sách các lệnh được chạy trong hành động này bằng cách nhấp vào Details trong actions (bên dưới tên của nó và AWS Codebuild).
      • +
      +
    2. +
    +

8. Next come UpdatePipeline and Assets.
   • UpdatePipeline: an extra build stage that checks whether the pipeline itself needs to be updated. For example, if the code is changed to include additional stages (beyond production), UpdatePipeline runs a build and reconfigures the pipeline to include those extra stages. It is followed by Assets, which prepares the assets the other stages need.
   • Assets: a series of build actions that process the assets required to deploy the EKS cluster. An asset, in the context of CDK, is a local file, directory, or Docker image that can be bundled into CDK libraries and apps. These artifacts are required for our CDK application to work: they contain the parameters and configuration used to deploy the necessary resources, that is, the Cluster Provider, the Kubernetes resources inside the cluster, IAM, add-ons with Helm charts, and so on. Assets are stored in AWS as Lambda functions for executions and as files in the S3 artifacts bucket; a way to peek at that bucket is sketched below.
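If you are curious where those files end up, the CDK bootstrap assets bucket can be listed directly. This is a sketch that assumes the default bootstrap qualifier (hnb659fds); the account ID and region are placeholders:

# Assets are uploaded to the CDK bootstrap bucket, which by default is named like this
aws s3 ls s3://cdk-hnb659fds-assets-<account-id>-<region>/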

9. Finally comes dev (Prepare and Deploy).
   • Envs (our wave): a wave is a deployment option that lets a pipeline provision multiple stages (or environments) in parallel. Because CDK synthesizes the code into CloudFormation templates, you can follow the stack deployments as CloudFormation templates in the management console (a CLI equivalent is sketched below).
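Because each stage is deployed as a CloudFormation stack, the same information is available from the CLI. The stack name below is the dev-dev-blueprint stack referenced later in this lab; treat it as an example, since your naming may differ:

# Check the status of the stack the dev stage deployed
aws cloudformation describe-stacks --stack-name dev-dev-blueprint --query 'Stacks[0].StackStatus'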

If the pipeline fails during a run, click through to the failed action to view the details.

In this case the error is caused by the queue limit being reached (the number of queued builds is capped).

Run the pipeline again.

It then completes successfully.
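The retry can also be done from the terminal instead of the console button. A sketch with placeholder names and IDs; take the execution ID from the CodePipeline console or from get-pipeline-state:

# Start a fresh run of the whole pipeline
aws codepipeline start-pipeline-execution --name <your-pipeline-name>

# Or retry only the failed actions of one stage
aws codepipeline retry-stage-execution \
  --pipeline-name <your-pipeline-name> \
  --stage-name <failed-stage-name> \
  --pipeline-execution-id <execution-id> \
  --retry-mode FAILED_ACTIONS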
diff --git a/public/vi/5-deploymentpipeline/5.4-accessingthecluster/index.html b/public/vi/5-deploymentpipeline/5.4-accessingthecluster/index.html
index 5fa9fd2..8c27d6d 100644
--- a/public/vi/5-deploymentpipeline/5.4-accessingthecluster/index.html
+++ b/public/vi/5-deploymentpipeline/5.4-accessingthecluster/index.html
@@ -1,1704 +1,1661 @@

Access the Cluster
1. Set up access to the cluster:
    export KUBE_CONFIG=$(aws cloudformation describe-stacks --stack-name dev-dev-blueprint | jq -r '.Stacks[0].Outputs[] | select(.OutputKey|match("ConfigCommand"))| .OutputValue')
$KUBE_CONFIG
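The value stored in KUBE_CONFIG is normally an aws eks update-kubeconfig command emitted by the blueprint stack. If you prefer to run it by hand, it has roughly this shape; the cluster name, region, and role ARN are placeholders rather than the exact values from your stack output:

# Roughly what $KUBE_CONFIG expands to; substitute your own stack outputs
aws eks update-kubeconfig \
  --name <cluster-name> \
  --region <region> \
  --role-arn <cluster-admin-role-arn>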
2. Once the kubeconfig has been updated, you should be able to access the EKS cluster:
    kubectl get svc
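A couple of optional read-only checks can confirm that the kubeconfig now points at the right cluster; this is just a sketch, not a required lab step:

# Confirm API server connectivity and list the worker nodes the blueprint registered
kubectl cluster-info
kubectl get nodes -o wide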
diff --git a/public/vi/5-deploymentpipeline/index.html b/public/vi/5-deploymentpipeline/index.html
index 45dd54c..e6d1f1d 100644
--- a/public/vi/5-deploymentpipeline/index.html
+++ b/public/vi/5-deploymentpipeline/index.html
@@ -1,1691 +1,1650 @@

Create a Deployment Pipeline

In this section, we look at how to set up a deployment pipeline that automates updates to our cluster. Although it is convenient to use the CDK command-line tool to deploy your first stack, you should set up automated pipelines that are responsible for deploying and updating your EKS infrastructure. We will use the framework's CodePipelineStack to deploy environments in different regions.

CodePipelineStack is a construct for easy continuous delivery of AWS CDK applications. Whenever you check the source code of your AWS CDK app into GitHub, the stack can automatically build, test, and deploy the new version.

CodePipelineStack is self-updating: if you add application stages or stacks, the pipeline automatically reconfigures itself to deploy those new stages or stacks.
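Before pushing a change that adds stages, you can preview locally what the self-mutating pipeline will later apply. A small sketch using standard CDK CLI commands and the pipeline-stack name used in this lab:

# List every stack defined by the CDK app, including the pipeline stack
cdk list

# Show the CloudFormation changes an updated pipeline-stack would introduce
cdk diff pipeline-stack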

Contents

1. Create the Cluster
2. Create the Pipeline
3. Pipeline in Action
4. Access the Cluster
diff --git a/public/vi/6-onboardteams/6.1-definingteams/index.html b/public/vi/6-onboardteams/6.1-definingteams/index.html
index d97c78a..5ea5c51 100644
--- a/public/vi/6-onboardteams/6.1-definingteams/index.html
+++ b/public/vi/6-onboardteams/6.1-definingteams/index.html
@@ -12,21 +12,21 @@
taxonomy :: Categories
diff --git a/public/vi/index.html b/public/vi/index.html
index 09d8fd2..3078c58 100644
--- a/public/vi/index.html
+++ b/public/vi/index.html
@@ -1,1667 +1,1629 @@
Introduction to EKS Blueprints

Architecture diagram

Core concepts
Concepts          | Description
------------------|------------------------------------------------------------
Cluster           | An EKS Cluster deployed following best practices.
Resource Provider | Resource providers are abstractions that supply external AWS resources to the cluster (e.g. hosted zones, VPCs, etc.).
Add-on            | Allow you to configure, deploy, and update the operational software, or add-ons, that provide key functionality to support your Kubernetes applications.
Teams             | A logical grouping of IAM identities that have access to a Kubernetes namespace(s), or cluster administrative access depending upon the team type.
Pipelines         | Continuous Delivery pipelines for deploying clusters and add-ons.
Application       | An application that runs within an EKS Cluster.

Blueprint

EKS Blueprints lets you configure and deploy what are called blueprint clusters. A blueprint combines clusters, add-ons, and teams into one cohesive object that can be deployed as a whole. Once a blueprint is configured, it can easily be deployed to any number of AWS accounts and regions. Blueprints also leverage GitOps tooling to facilitate cluster bootstrapping and workload onboarding.
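Deploying the same blueprint into several accounts and regions requires each target environment to be CDK-bootstrapped first. A hedged sketch; the account IDs and regions below are placeholders, not values from this lab:

# Bootstrap each target environment once so the blueprint pipeline can deploy into it
cdk bootstrap aws://111111111111/us-east-1
cdk bootstrap aws://222222222222/ap-southeast-1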

Contents

1. Introduction
2. Preparation steps
3. Create EKS Blueprints
4. Create the CDK Project
5. Deploy the Pipeline
6. Onboard Teams
7. Add-ons
8. Deployment
9. Clean up resources
diff --git a/public/vi/index.json b/public/vi/index.json
index c759343..78eca0d 100644
--- a/public/vi/index.json
+++ b/public/vi/index.json
@@ -60,7 +60,7 @@
     "title": "Các bước chuẩn bị",
     "tags": [],
     "description": "",
-    "content": "Các bước chuẩn bị Để tiến hành bài lab, chúng ta phải chuẩn bị môi trường Cloud9 và tạo IAM role cho Cloud9 instance.\nĐồng thời cài đặt Kubernetes Tool\nNội dung Tạo môi trường Cloud9 Cài đặt công cụ Tạo IAM role Gán IAM role cho Cloud9 instance Cập nhật IAM role "
+    "content": "Các bước chuẩn bị Để tiến hành bài lab, chúng ta phải chuẩn bị môi trường Cloud9 và tạo IAM role cho Cloud9 instance.\nĐồng thời cài đặt Kubernetes Tool\nNội dung Tạo VPC và EC2 Instance Kết nối với EC2 Instance Cài đặt các cộng cụ Tạo IAM Role Gán IAM Role Cập nhật IAM Role "
   },
   {
     "uri": "//localhost:1313/vi/6-onboardteams/6.2-onboardingteams/",
@@ -151,7 +151,7 @@
     "title": "Truy cập Cluster",
     "tags": [],
     "description": "",
-    "content": "Truy cập Cluster Truy cập Cluster Cài đặt quyền truy cập vào cluster export KUBE_CONFIG=$(aws cloudformation describe-stacks --stack-name dev-dev-blueprint | jq -r \u0026#39;.Stacks[0].Outputs[] | select(.OutputKey|match(\u0026#34;ConfigCommand\u0026#34;))| .OutputValue\u0026#39;)\r$KUBE_CONFIG Khi kubeconfig đã được cập nhật, bạn sẽ có thể truy cập vào EKS cluster kubectl get svc "
+    "content": "Truy cập Cluster Cài đặt quyền truy cập vào cluster export KUBE_CONFIG=$(aws cloudformation describe-stacks --stack-name dev-dev-blueprint | jq -r \u0026#39;.Stacks[0].Outputs[] | select(.OutputKey|match(\u0026#34;ConfigCommand\u0026#34;))| .OutputValue\u0026#39;)\r$KUBE_CONFIG Khi kubeconfig đã được cập nhật, bạn sẽ có thể truy cập vào EKS cluster kubectl get svc "
   },
   {
     "uri": "//localhost:1313/vi/2-prerequiste/2.5-attachrole/",
diff --git a/public/vi/tags/index.html b/public/vi/tags/index.html
index b4d5603..4ceb31f 100644
--- a/public/vi/tags/index.html
+++ b/public/vi/tags/index.html
@@ -1,1662 +1,1629 @@
taxonomy :: Tags