setup tests
hamzaimran08 committed Nov 29, 2023
1 parent 3d09c80 commit 1870610
Showing 21 changed files with 63,065 additions and 0 deletions.
95 changes: 95 additions & 0 deletions .github/workflows/serve-python.yml
@@ -0,0 +1,95 @@
name: Serve-Python workflow

on:
  push:
    paths:
      - "serve-python/**"
  # Adds ability to run this workflow manually
  workflow_dispatch:
    inputs:
      logLevel:
        description: 'Log level'
        required: true
        default: 'warning'
        type: choice
        options:
          - info
          - warning
          - debug
      tags:
        description: 'Manual run'
        required: false
        type: boolean

jobs:
  build_and_test:
    runs-on: ubuntu-latest

    steps:
      - name: 'Checkout GitHub Action'
        uses: actions/checkout@main

      - name: 'Build test image'
        run: |
          docker build -t python-test-image -f ./serve-python/Dockerfile.test ./serve-python

      - name: Run Trivy vulnerability scanner
        uses: aquasecurity/trivy-action@0.7.1
        with:
          image-ref: 'python-test-image'
          format: 'table'
          severity: 'CRITICAL,HIGH'
          security-checks: 'vuln'
          timeout: '59m59s'
          exit-code: '0'

      - name: 'Run tests'
        env:
          IMAGE_NAME: python-test-image
        run: |
          pip install -r ./serve-python/tests/requirements.txt
          python3 -m pytest ./serve-python

  push:
    if: |
      github.ref == 'refs/heads/main' &&
      github.repository == 'scilifelabdatacentre/serve-images'
    needs: build_and_test
    runs-on: ubuntu-latest
    concurrency:
      group: '${{ github.workflow }} @ ${{ github.event.pull_request.head.label || github.head_ref || github.ref }}'
      cancel-in-progress: true
    permissions:
      contents: read
      packages: write

    steps:
      - name: 'Checkout github action'
        uses: actions/checkout@main

      - name: Docker meta
        id: meta
        uses: docker/metadata-action@v4
        with:
          images: ghcr.io/scilifelabdatacentre/serve-python
          tags: |
            type=raw,value={{date 'YYMMDD-HHmm' tz='Europe/Stockholm'}}

      - name: 'Login to GHCR'
        uses: docker/login-action@v1
        with:
          registry: ghcr.io
          username: ${{ github.actor }}
          password: ${{ secrets.GITHUB_TOKEN }}

      - name: Publish image to GHCR
        uses: docker/build-push-action@v3
        with:
          file: ./serve-python/Dockerfile
          context: ./serve-python
          push: true
          build-args: version=${{ github.ref_name }}
          tags: |
            ${{ steps.meta.outputs.tags }}
            ghcr.io/scilifelabdatacentre/serve-python:latest
          labels: ${{ steps.meta.outputs.labels }}
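
The `workflow_dispatch` trigger above also allows the workflow to be started by hand, either from the Actions tab or through the GitHub REST API. The sketch below is a minimal illustration of that API call, assuming a personal access token with workflow permissions stored in a `GH_TOKEN` environment variable; the token name, branch and input values are placeholders, not part of this commit.

    # Minimal sketch (not part of this commit): trigger the Serve-Python
    # workflow_dispatch event through the GitHub REST API.
    import json
    import os
    import urllib.request

    url = (
        "https://api.github.com/repos/scilifelabdatacentre/serve-images"
        "/actions/workflows/serve-python.yml/dispatches"
    )
    payload = {
        "ref": "main",  # branch to run the workflow on (placeholder)
        "inputs": {"logLevel": "debug", "tags": "true"},  # inputs defined above
    }
    request = urllib.request.Request(
        url,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Accept": "application/vnd.github+json",
            "Authorization": f"Bearer {os.environ['GH_TOKEN']}",
        },
        method="POST",
    )
    with urllib.request.urlopen(request) as response:
        # GitHub answers 204 No Content when the dispatch is accepted.
        print(response.status)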
11 changes: 11 additions & 0 deletions dev_scripts/run_python.sh
@@ -0,0 +1,11 @@
#!/bin/bash

set -o errexit

# Build the serve-python test image locally
docker build -t python-dev-img -f ./serve-python/Dockerfile.test ./serve-python

# Create and activate a virtual environment with the test dependencies
python3 -m venv venv
source ./venv/bin/activate
python3 -m pip install --upgrade pip
pip install -r ./serve-python/tests/requirements.txt

# The tests pick up the image to run via the IMAGE_NAME environment variable
export IMAGE_NAME=python-dev-img
python3 -m pytest ./serve-python/
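
Both the workflow and this script pass the image name to pytest through the `IMAGE_NAME` environment variable. The tests themselves are not shown in this diff, so the snippet below is only a hypothetical sketch of how such a test could read that variable and check that the image starts; the test name and assertions are assumptions, not part of this commit.

    # Hypothetical test sketch (not from this commit): check that the image
    # referenced by IMAGE_NAME can be started with its default command.
    import os
    import subprocess

    def test_image_starts():
        image = os.environ["IMAGE_NAME"]  # e.g. python-dev-img or python-test-image
        # Start the container detached and capture its id.
        container_id = subprocess.run(
            ["docker", "run", "-d", image],
            check=True, capture_output=True, text=True,
        ).stdout.strip()
        try:
            # The container should at least have been created by the daemon.
            state = subprocess.run(
                ["docker", "inspect", "-f", "{{.State.Status}}", container_id],
                check=True, capture_output=True, text=True,
            ).stdout.strip()
            assert state in {"created", "running", "exited"}
        finally:
            subprocess.run(["docker", "rm", "-f", container_id], check=True)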
40 changes: 40 additions & 0 deletions serve-python/Dockerfile.test
@@ -0,0 +1,40 @@
# FROM python:3
FROM ubuntu:18.04

# Create user name and home directory variables.
# The variables are later used as $USER and $HOME.
ENV USER=user
ENV HOME=/home/$USER

# Add user to system
RUN useradd -m -u 1000 $USER

# Set working directory (this is where the code should go)
WORKDIR $HOME

RUN /bin/bash -c "apt update"
RUN /bin/bash -c "apt install python3.7-dev -y"
RUN /bin/bash -c "apt install curl -y"
RUN /bin/bash -c "apt-get install python3-distutils -y"
RUN /bin/bash -c "curl https://bootstrap.pypa.io/get-pip.py -o get-pip.py"
RUN /bin/bash -c "python3.7 get-pip.py"
RUN /bin/bash -c "apt install gcc -y"
RUN /bin/bash -c "update-alternatives --install /usr/bin/python3 python3 /usr/bin/python3.7 1"
RUN /bin/bash -c "pip3 install --upgrade pip"
# RUN /bin/bash -c "apt update"
# RUN /bin/bash -c "curl https://dl.min.io/client/mc/release/linux-amd64/mc --output mc && chmod +x mc"
COPY requirements.txt $HOME/requirements.txt
RUN /bin/bash -c "pip3 install -r requirements.txt"

COPY serve.py $HOME/serve.py
COPY deploy.sh $HOME/deploy.sh
COPY deploy.sh $HOME/start-script.sh
COPY tests/model/ $HOME/models/
RUN chmod +x start-script.sh \
&& chmod +x deploy.sh \
&& chown -R $USER:$USER $HOME

ENV STACKN_MODEL_PATH=$HOME/models
ENV PYTHONPATH=$HOME/models

CMD ./deploy.sh
49 changes: 49 additions & 0 deletions serve-python/tests/model/README.md
@@ -0,0 +1,49 @@
# Transformer example project

This STACKn example project demonstrates how to deploy the Swedish BERT model developed by the Swedish Public Employment Service (https://github.com/af-ai-center/SweBERT.git) and publish live prediction endpoints to a STACKn model portal.

***

## BERT and the Swedish BERT models


The release of the BERT architecture is seen as one of the major breakthroughs in NLP (natural language processing) in recent years. BERT has delivered state-of-the-art results across a number of use cases, such as document classification, sentiment analysis, natural language inference, question answering, sentence similarity and more.

Arbetsförmedlingen (the Swedish Public Employment Service) has developed Swedish BERT models trained on Swedish Wikipedia, which comprises approximately 2 million articles and 300 million words.

## Getting started

Please follow [these detailed steps](https://github.com/scaleoutsystems/examples/tree/main/tutorials/studio/quickstart#transformers-example-project) in order to create a "STACKn Default" project and correctly set up a Jupyter instance.

Clone this repository:

$ git clone https://github.com/scaleoutsystems/transformers-example-project.git

and then install the pip requirements and enable the Jupyter notebook extension:

$ pip install -r requirements.txt
$ jupyter nbextension enable --py widgetsnbextension

Now you should be ready to open `getting_started_with_swebert.ipynb` in the _notebooks_ folder. Please follow the instructions in the notebook.

## Deploying the model

Once you have run all the cells in the above notebook, open a terminal again in your Jupyter Lab session and execute the following commands within the repository directory:

- `stackn create object afbert -r minor` (**Note:** add the flag `--insecure` in case you have deployed STACKn locally with a self-signed certificate)

- `stackn get objects` (**Note:** add the flag `--insecure` in case you have deployed STACKn locally with a self-signed certificate)

(Check that the model is listed; you should be able to see the newly created model object in your Studio UI, under the "_Objects_" tab)

Deploy the newly created model object with the "_Python Model Deployment_" component (under the "_Serve_" tab in Studio). _Name_ can be anything, _Model_ should match the name of the newly created model (e.g. "afbert:v0.1.0"); leave the rest as defaults.

**Note:** It can take some time for this model to initialize; keep checking the logs until it is up and running successfully.

## Run the prediction

Once the above serving app is up and running, copy the endpoint URL by right-clicking on the _Open_ link.

Go back to your Jupyter Lab session and open the `predict.ipynb` notebook in the _notebooks_ folder. Paste the copied URL at line 12 so that the notebook uses the correct endpoint for the prediction. It is time to test the prediction: run all the cells and check the results.

**Tip:** You can play around by changing the values of the `example` and `msk_ind` variables. The latter masks (or "hides") one of the words in the example sentence; the prediction will then show the possible candidates for the "missing" word.
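
If you want to call the deployed endpoint outside of `predict.ipynb`, the sketch below shows what a masked-word prediction request could look like. The endpoint path and payload keys are placeholders for illustration only; the actual request format is defined by `serve.py` and the notebook.

    # Hypothetical client sketch: URL, path and payload keys are placeholders;
    # see serve.py and predict.ipynb for the real request format.
    import json
    import urllib.request

    ENDPOINT = "https://<your-serve-endpoint>/predict"  # paste the copied URL here
    payload = {"pred": "Stockholm är Sveriges [MASK]."}  # sentence with one masked word

    request = urllib.request.Request(
        ENDPOINT,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(request) as response:
        # Expect a JSON body listing candidate words for the masked position.
        print(json.loads(response.read().decode("utf-8")))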