Deprecated
These pages have been deprecated and the guidance contained within will not be updated going forward.
Prerequisites:
- Git
- docker, docker-compose
- maven
- access to the public registry on ECR
We're building GeoNetwork from a custom fork from the start so that, if we need to make any changes to the Java code, we can build our own war file and docker image rather than using the default GeoNetwork one.
Clone https://github.com/AstunTechnology/custom-geonetwork and check out the os-310x branch:
git clone https://github.com/AstunTechnology/custom-geonetwork.git && git -C custom-geonetwork checkout os-310x
Build war file with es profile (for elasticsearch):
cd custom-geonetwork
sudo mvn clean install -DskipTests -Pes
Copy the war file to os-custom-geonetwork/docker and change to that directory:
cd web/target && cp geonetwork.war ../../../os-custom-geonetwork/docker && cd ../../../os-custom-geonetwork/docker
Build image and check it's present:
docker build --no-cache . -t os-geonetwork:v0
docker images -a
Get login details from AWS ECR and temporarily authenticate your local docker:
aws ecr-public get-login-password --region us-east-1 | docker login --username AWS --password-stdin public.ecr.aws
Create repository:
aws ecr-public create-repository --region us-east-1 --repository-name os-custom-geonetwork
Tag the image and push it to ECR. You can see the registryid in the image reference in the Dockerfile.
docker tag os-geonetwork:v0 public.ecr.aws/[registryid]/os-custom-geonetwork
docker push public.ecr.aws/[registryid]/os-custom-geonetwork
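To confirm the push succeeded, you can list the images now in the public repository; this is an optional check rather than part of the original process, and assumes your AWS credentials have read access to ECR Public:
aws ecr-public describe-images --region us-east-1 --repository-name os-custom-geonetwork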
We're building a custom ElasticSearch container so we can pre-load it with the GeoNetwork indices (after checking whether they already exist).
In the current repository (eg os-custom-geonetwork) change to the docker/elasticsearch directory.
Build image and check it's present:
docker build --no-cache . -t os-elasticsearch:v0
docker images -a
Get login details from AWS ECR and temporarily authenticate your local docker:
aws ecr-public get-login-password --region us-east-1 | docker login --username AWS --password-stdin public.ecr.aws
Create repository:
aws ecr-public create-repository --region us-east-1 --repository-name os-elasticsearch
Tag image and push to ECR:
docker tag os-elasticsearch:v0 public.ecr.aws/[registryid]/os-elasticsearch
docker push public.ecr.aws/[registryid]/os-elasticsearch
Prerequisites:
- aws-cli installed and configured
- a user with the appropriate permissions
- a vpc with two subnets
- an application load balancer
- a security group
- this repository cloned or downloaded (eg as os-custom-geonetwork)
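If you don't have the placeholder values (VPC, subnets, security group, target group ARN) to hand, the following aws-cli queries can help you look them up; this is a sketch rather than part of the original process, and assumes default credentials with read access to EC2 and ELBv2:
# list VPCs and their IDs
aws ec2 describe-vpcs --region eu-west-2 --query 'Vpcs[*].VpcId' --output text
# list subnets in the chosen VPC
aws ec2 describe-subnets --region eu-west-2 --filters "Name=vpc-id,Values=[vpc-id]" --query 'Subnets[*].SubnetId' --output text
# list security groups in the chosen VPC
aws ec2 describe-security-groups --region eu-west-2 --filters "Name=vpc-id,Values=[vpc-id]" --query 'SecurityGroups[*].[GroupId,GroupName]' --output text
# list target groups (the ARN used for --target-group-arn below)
aws elbv2 describe-target-groups --region eu-west-2 --query 'TargetGroups[*].[TargetGroupName,TargetGroupArn]' --output text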
Create Cluster:
cd os-custom-geonetwork/docker
ecs-cli up --vpc [vpc-id] --capability-iam --region eu-west-2 --cluster docker-geonetwork-ec2 --subnets [subnet1,subnet2] \
--instance-type t3a.large --size 1 --force --keypair sandpit_shared \
--security-group [securitygroupname] --extra-user-data server_prep-os.sh
After successful creation (the message "cluster creation succeeded"), give it a few minutes and then create the docker containers:
ecs-cli compose --file docker-compose.yml --file docker-compose-3.10-ecs-os.yml service up --cluster-config default --launch-type EC2 \
--target-group-arn [loadbalancerarn] \
--container-name geonetwork --container-port 8080 --health-check-grace-period 1800
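Once the service is up you can check that the containers are running; a hedged example, using the cluster name created above:
ecs-cli ps --cluster docker-geonetwork-ec2 --region eu-west-2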
Updates either to files in this repository or to the required schema plugins should be committed to the correct branch of the appropriate repository. If required, add a volume mount for the updated file to the GeoNetwork service in https://github.com/AstunTechnology/os-custom-geonetwork/blob/main/docker/docker-compose-3.10-ecs-os-rds.yml#L36
Get the current IP address for the server using:
aws ec2 describe-instances --filters "Name=vpc-id,Values=[vpc-id]" --query 'Reservations[*].Instances[*].PublicIpAddress' --profile osvpc
Then SSH onto it using the usual method.
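For example, using the shared key pair named when the cluster was created (the same pattern appears in the troubleshooting section below):
ssh ec2-user@[instance-ip] -i ~/.ssh/sandpit_shared.euwest2.pem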
Update this repository and any schema plugins using git pull, using sudo as required. Then change directory to os-custom-geonetwork/docker and execute customisations-os.sh to copy the files to their required locations.
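For example, assuming the repository was cloned into the ec2-user home directory on the instance (adjust the path to wherever it actually lives):
cd ~/os-custom-geonetwork/docker && sudo ./customisations-os.sh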
Back on your local computer, un-deploy the containers using:
ecs-cli compose --file docker-compose-3.10-ecs-os-rds.yml service rm -c docker-geonetwork-ec2 --ecs-profile osvpc --region eu-west-2
Then, after a few minutes, re-deploy them using:
ecs-cli compose --file docker-compose.yml --file docker-compose-3.10-ecs-os.yml service up --cluster-config default --launch-type EC2 \
--target-group-arn [loadbalancerarn] \
--container-name geonetwork --container-port 8080 --health-check-grace-period 1800
Note that you may need to repeat the re-deploy task a couple of times if the old deployment is still draining.
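To see whether the old targets are still draining before you retry, you can query the target group health; this is an optional check using the same target group ARN passed to --target-group-arn:
aws elbv2 describe-target-health --target-group-arn [loadbalancerarn] --query 'TargetHealthDescriptions[*].TargetHealth.State' --output text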
Prerequisites:
- security key for ssh
- jq
Get the IP address of the EC2 instance and ssh onto it:
instanceIP=$( aws ec2 describe-instances --filters "Name=vpc-id,Values=[vpcid]" --query 'Reservations[*].Instances[*].PublicIpAddress' --output json | jq .[]| jq -r .[] ) && ssh ec2-user@$instanceIP -i ~/.ssh/sandpit_shared.euwest2.pem
Check the output of /var/log/cloud-init-output.log
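For example, to follow the log while the instance is still bootstrapping:
sudo tail -f /var/log/cloud-init-output.log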
If you run ecs-cli ps and see messages about hitting dockerhub limits:
- Sign up for a pro account for dockerhub
- Authenticate at the command prompt with docker login
- ssh onto the instance as above and edit /etc/ecs/ecs.config to include the credentials from your local .docker/config.json as per this link
- Restart the ecs service with sudo service ecs restart
- Run the ecs-cli compose command again to redeploy the containers
(Note that this is an interim approach: the change to ecs.config will need to be repeated if the EC2 instance is re-created, but it's not recommended to include the dockerhub credentials as environment variables)
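The relevant lines in /etc/ecs/ecs.config look roughly like the following; this is a sketch based on the ECS agent's documented private-registry settings, and the auth value is the base64 auth string copied from your local .docker/config.json rather than a literal to paste:
ECS_ENGINE_AUTH_TYPE=dockercfg
ECS_ENGINE_AUTH_DATA={"https://index.docker.io/v1/":{"auth":"[base64 username:password]","email":"[dockerhub email]"}}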
There is a full admin interface for Kibana, but by default, it has no authentication in place. Until this is fixed, we're going for security by obscurity.
The aim is to make this part of the container setup, but for now it must be done manually.
Get the container name for the kibana container by sshing onto the EC2 instance and running:
docker ps -a | grep kibana
Go onto the docker container and create an indices directory (you will need to do the following every time the kibana container restarts):
docker exec -i -t [kibana container] /bin/bash
mkdir indices
exit
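If you'd rather not open an interactive shell, the same step can be done in a single command from the host (an equivalent, not an additional step):
docker exec [kibana container] mkdir -p /usr/share/kibana/indices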
Change to the os-custom-geonetwork/elasticsearch directory and copy the json to the running kibana container:
for f in *.json; do docker cp $f [kibana container]:/usr/share/kibana/indices/$f; done
cd ../kibana
docker cp export.ndjson [kibana container]:/usr/share/kibana/indices/export.ndjson
Go back onto the kibana container and run the following commands to check the cluster health (yellow or green are OK, red is not) and then check whether the indices have already been loaded, and if not, load them:
docker exec -i -t [kibana container] /bin/bash
cd indices
curl -X GET 'http://localhost:9200/_cat/health?h=st'
for f in *.json ; do if [ $(curl -LI "http://localhost:9200/gn-${f%.*}" -o /dev/null -w '%{http_code}\n' -s) == "200" ] ; \
then echo "gn-${f##*/} exists" ; else echo "gn-${f%.*} missing" \
&& curl -X PUT "http://localhost:9200/gn-${f%.*}" \
-H "Content-Type:application/json" -d @$f ; fi ; done
curl -X POST "http://localhost:5601/api/saved_objects/_import?overwrite=true" -H "kbn-xsrf:true" --form file=@export.ndjson
Note that the final cURL command will report an error at present, but it can be ignored.
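To double-check that the indices and saved objects actually loaded, the following can be run from inside the kibana container; a hedged example using the standard Elasticsearch and Kibana APIs:
# list the GeoNetwork indices and their health
curl -s 'http://localhost:9200/_cat/indices/gn-*?v'
# confirm the imported Kibana saved objects are present
curl -s 'http://localhost:5601/api/saved_objects/_find?type=index-pattern' -H 'kbn-xsrf:true'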