Add unit and integration tests #12
It would also be nice to have an easy way to spin up a development environment to manually test changes. Currently, a lot of config values are set only in the deployed copy of the repo, and the prod config can't easily be copied and reused without editing a bunch of files to avoid conflicts with the running prod instance. For future reference, here are the changes I had to make to get a test environment up and running after copying over the repo:

```diff
diff --git a/config/spark-defaults.conf b/config/spark-defaults.conf
index 5f1e64f..2286b53 100644
--- a/config/spark-defaults.conf
+++ b/config/spark-defaults.conf
@@ -4,7 +4,7 @@ spark.driver.extraClassPath /jdbc/ojdbc8.jar
 spark.driver.extraClassPath /jdbc/ojdbc8.jar

 # Docker hostname address of the Spark master node
-spark.master spark://spark-node-master:7077
+spark.master spark://spark-node-master-test:7077
 # Driver memory can be low, as it's only used for the JDBC connection
 spark.driver.memory 2g
 # Total memory available to the worker, across all jobs
diff --git a/docker-compose.yaml b/docker-compose.yaml
index 4f72205..b0ce44c 100644
--- a/docker-compose.yaml
+++ b/docker-compose.yaml
@@ -2,11 +2,10 @@ x-default: &node
   image: ${CCAO_REGISTRY_URL}/service-spark-iasworld:latest
   # Corresponds to the user set in the Dockerfile and the shiny-server
   # user on the server for proper write perms to mounted directories
-  user: 1003:0
+  user: 1010:0
   build:
     context: .
     dockerfile: Dockerfile
-  restart: unless-stopped
   volumes:
     - ./drivers:/jdbc:ro
     - ./src:/tmp/src:ro
@@ -20,13 +19,13 @@ x-default: &node
     - IPTS_PRD_PASSWORD
     - IPTS_TST_PASSWORD
   networks:
-    - sparknet
+    - sparknet-test

 services:
-  spark-node-master:
+  spark-node-master-test:
     <<: *node
-    container_name: spark-node-master
-    hostname: spark-node-master
+    container_name: spark-node-master-test
+    hostname: spark-node-master-test
     environment:
       - SPARK_MODE=master
@@ -49,16 +48,16 @@ services:
       - IPTS_TST_PORT
       - GH_APP_ID
     ports:
-      - 4040:4040
-      - 8080:8080
+      - 4041:4040
+      - 8081:8080

-  spark-node-worker:
+  spark-node-worker-test:
     <<: *node
-    container_name: spark-node-worker
-    hostname: spark-node-worker
+    container_name: spark-node-worker-test
+    hostname: spark-node-worker-test
     environment:
       - SPARK_MODE=worker
-      - SPARK_MASTER_URL=spark://spark-node-master:7077
+      - SPARK_MASTER_URL=spark://spark-node-master-test:7077
       - SPARK_WORKER_MEMORY=96G
       - SPARK_WORKER_CORES=28
       - SPARK_WORKER_WEBUI_PORT=9090
@@ -82,16 +81,16 @@ services:
       - IPTS_TST_PORT
       - GH_APP_ID
     ports:
-      - 9090:9090
+      - 9091:9090

 # Using a dedicated subnet because the Docker default subnet conflicts
 # with some of the CCAO's internal routing
 networks:
-  sparknet:
+  sparknet-test:
     ipam:
       config:
-        - subnet: 211.55.0.0/16
-    name: sparknet
+        - subnet: 211.54.0.0/16
+    name: sparknet-test

 secrets:
   AWS_CREDENTIALS:
diff --git a/src/utils/github.py b/src/utils/github.py
index 600c0e9..5210e8d 100644
--- a/src/utils/github.py
+++ b/src/utils/github.py
@@ -84,7 +84,7 @@ class GitHubClient:
         response.raise_for_status()
         gh_token = response.json()["token"]

-        data: dict[str, str | dict] = {"ref": "master"}
+        data: dict[str, str | dict] = {"ref": "jeancochrane/gate-test-result-s3-upload-behind-workflow-variable"}
         if inputs is not None:
             data["inputs"] = inputs
```
Given how critical the output of this pipeline is to everything downstream of it, we should add data and integration tests to confirm that it's doing what we expect. This should include automated testing via CI/Actions.
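For the CI piece, something like the following Actions workflow could run the suite on every PR (a sketch only; it assumes a pytest suite under `tests/` and a `requirements.txt`, neither of which exists yet):

```yaml
# .github/workflows/test.yaml (sketch): run unit and integration tests on
# PRs and on pushes to master. Paths and the Python version are assumptions.
name: test

on:
  pull_request:
  push:
    branches: [master]

jobs:
  pytest:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - uses: actions/setup-python@v5
        with:
          python-version: "3.10"

      - name: Install dependencies
        run: pip install -r requirements.txt pytest

      - name: Run tests
        run: pytest tests/ -v
```

Integration tests that need a live iasWorld connection probably can't run on public runners, so those might need to be tagged with a pytest marker and skipped in CI, or run on a self-hosted runner with access to the database.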