
Implement user interface tests #2876

Draft: deen13 wants to merge 1 commit into develop from feature/ui-tests

Conversation

@deen13 (Contributor) commented Jun 25, 2024

Short Description

This pull request introduces Playwright as a solution for user interface and end-to-end testing. I've included an example test to showcase its usage and feel. I suggest keeping this simple test as a baseline and incrementally adding more tests over time.

I've separated the Playwright tests into their own module because they don't integrate well with the unit test setup. While I'm open to discussing this technically, that would exceed the scope of this pull request description. 🙈
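
For readers who have not seen pytest-playwright before, a minimal sketch of what such a test can look like is shown below. The test name, URL handling, and the "Login" heading assertion are illustrative assumptions, not the exact example test contained in this PR.

# Minimal sketch of a pytest-playwright UI test (illustrative, not the exact test in this PR).
# `page` is injected by pytest-playwright, `live_server` by pytest-django.
from playwright.sync_api import Page, expect


def test_anonymous_visitor_sees_login(page: Page, live_server) -> None:
    # Open the CMS served by the pytest-django live server
    page.goto(live_server.url)
    # An anonymous visitor should land on the login form
    expect(page.get_by_role("heading", name="Login")).to_be_visible()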

Proposed Changes

  • Add an example test
  • Implement circleci step
  • Implement devtool script
  • Add playwright and pytest-playwright as dependencies

Resolved issues

Fixes: #2583


Pull Request Review Guidelines

codeclimate bot commented Jun 25, 2024

Code Climate has analyzed commit 3897eec and detected 0 issues on this pull request.

The test coverage on the diff in this pull request is 100.0% (50% is the threshold).

This pull request will bring the total coverage in the repository to 82.7% (-0.2% change).

View more on Code Climate.

@deen13 force-pushed the feature/ui-tests branch from 8910c1c to 0cb1eb3 on June 25, 2024 13:40
@deen13 marked this pull request as ready for review June 25, 2024 13:50
@deen13 marked this pull request as draft June 25, 2024 14:32
@deen13 force-pushed the feature/ui-tests branch from 12014b3 to 3897eec on July 1, 2024 10:54
@deen13 marked this pull request as ready for review July 1, 2024 11:05
@JoeyStk (Contributor) left a comment


Thank you very much! I haven't worked with either Selenium or Playwright yet, so I don't have an opinion on either of them. As they seem to be equally strong in general, I would be willing to try Playwright :)

@JoeyStk (Contributor) commented Jul 7, 2024

@PeterNerlich

@deen13 (Contributor, Author) commented Jul 8, 2024

Just a side note: Playwright has seen a significant rise in usage, growing from 15% to 30% in the past year, according to State of JS 2023. To me, it seems like it could become the next major tool for testing.

@PeterNerlich self-assigned this Jul 8, 2024
@PeterNerlich (Collaborator) left a comment


I'm very much looking forward to this PR, though sadly there is a major problem at the moment (see the end of this review), so don't start implementing the change requests I'm listing first too enthusiastically before you've read through to the end.

Playwright needs a browser backend and complains at first start: playwright._impl._errors.Error: BrowserType.launch: Executable doesn't exist at /home/$USER/.cache/ms-playwright/chromium-1117/chrome-linux/chrome

  • this should be documented (also detail the space requirements, as they are nontrivial for this optional feature, and note that a report about missing dependencies does not necessarily mean the tests can't be run)
  • optionally, a separate script to set up Playwright in the venv would be nice, mainly a wrapper around playwright install I guess; a rough sketch of such a helper follows below
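
A rough sketch of what such a setup helper could boil down to is shown here; the actual devtool would probably be a shell script, and the choice of installing only Chromium is an assumption. Only the playwright install CLI itself is a given.

# Hypothetical setup helper (sketch): download the browser backend Playwright needs.
# Illustration only; the repository's devtools are shell scripts in practice.
import subprocess
import sys


def install_playwright_browsers(with_deps: bool = False) -> None:
    """Install Chromium for Playwright into the current environment (disk usage is nontrivial)."""
    cmd = [sys.executable, "-m", "playwright", "install", "chromium"]
    if with_deps:
        # Also install missing system packages (usually requires root)
        cmd.append("--with-deps")
    subprocess.run(cmd, check=True)


if __name__ == "__main__":
    install_playwright_browsers()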

You should change "/tests/"="unused-argument,missing-function-docstring" to instead start with "/(ui_)?tests/" in the [tool.pylint-per-file-ignores] section of pyproject.toml, so that pylint also ignores for ui_tests the rules we already disabled for the normal tests.

A TODO for the future is definitely to write down a guide to writing meaningful UI tests. More so than with the normal tests, it is very difficult to know what to test and what not to test, and how to avoid being too implementation-dependent while still catching bad UI mishaps.
This documentation should link to and highlight the best practices, noting that (at the time of writing) this chapter is only available in the Node.js flavour of the Playwright documentation and absent from the Python and other flavours. It might then make sense to add notes showing how each point could be adapted to Python/pytest (e.g. fixtures instead of before/after hooks; see the sketch below), and finally to document or link to the documentation of any Playwright-centered fixtures already established at that point.
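
As one example of the kind of adaptation note meant here (a sketch assuming the pytest-django live_server setup; the fixture name dashboard_page is made up): a beforeEach/afterEach pair from the Node.js docs maps onto a single yield fixture in pytest.

# Sketch: pytest equivalent of Playwright's Node.js beforeEach/afterEach hooks.
# Everything before `yield` runs as setup, everything after as teardown.
from collections.abc import Iterator

import pytest
from playwright.sync_api import Page


@pytest.fixture
def dashboard_page(page: Page, live_server) -> Iterator[Page]:
    # Setup (beforeEach equivalent): start every test on the CMS start page
    page.goto(live_server.url)
    yield page
    # Teardown (afterEach equivalent): close the page explicitly
    page.close()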


Unfortunately, I have not been successful in making it work for more than one test at a time. After the first test has run, any subsequent tests will fail to use anything related to the database (like merely logging in), regardless of whether the first test was successful or not. This is a very complicated issue: it stems from the live_server fixture using Django's TransactionTestCase, which has a bug with restoring data (like migrations) after flushing the database, which in turn is necessary for technical reasons for a subset of tests (e.g. all tests requiring a full running server instance, as these UI tests do). The same issue haunts us with the normal tests occasionally and had its fair share of fun with me in #2596. If anyone is interested in the technical details of this rabbit hole, feel free to message me and see whether I can still recount them in a coherent manner.

Ultimately, this means the change would not be useful even in some preliminary capacity if merged. Once the search for some weird workaround is successful, I will approve this PR, as frustrating as this is.
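
One direction such a workaround search could take (an unverified assumption, not something this PR does): Django's TransactionTestCase has a serialized_rollback option meant to re-create serialized data after the flush, and pytest-django exposes it on the django_db marker. Whether it actually avoids the restore bug described above is exactly the open question.

# Sketch only: opt a UI test into serialized rollback so data is restored after the flush.
# Unverified whether this sidesteps the TransactionTestCase restore bug mentioned above.
import pytest


@pytest.mark.django_db(serialized_rollback=True)
def test_data_survives_previous_ui_test(page_as_root) -> None:
    # If the rollback worked, the fixtures loaded by load_test_data are still available here
    ...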

Comment on lines +40 to +63
# Parse given command line arguments
while [[ $# -gt 0 ]]; do
    case $1 in
        # Verbosity for pytest
        -v|-vv|-vvv|-vvvv) VERBOSITY="$1";shift;;
    esac
done

# The default pytests args we use
PYTEST_ARGS=("--disable-warnings" "--color=yes")

if [[ -n "${VERBOSITY}" ]]; then
    PYTEST_ARGS+=("$VERBOSITY")
else
    PYTEST_ARGS+=("--quiet" "--numprocesses=auto")
fi

PYTEST_ARGS+=("--testmon-noselect")

"$(dirname "${BASH_SOURCE[0]}")/prune_pdf_cache.sh"

echo -e "Running all user interface tests..." | print_info
deescalate_privileges pytest "${PYTEST_ARGS[@]}" ui_tests
echo "✔ Tests successfully completed " | print_success

You're missing a catch-all in this loop, meaning that if I ever specify any argument that is not -v, -vv, -vvv or -vvvv, it will not be shifted; the next iteration then checks the same argument again, and so on forever, while the process runs at 100% CPU usage indefinitely.

I think it's reasonable and nice to be able to specify individual tests as well as keyword expressions and markers; it should just default to everything except the tests under tests/ if none are specified.
So I propose including all that confusing stuff from tools/test.sh that takes care of applying these arguments and ensures sane defaults. With testmon, we just need to take care not to pollute the dependency database of the normal tests (I'm not sure this would happen, but better safe than sorry): export TESTMON_DATAFILE=".testmondata-ui". Limiting the run to only the ui tests seems most flexible to me using the same method you used to exclude them in tools/test.sh: PYTEST_ARGS+=("--ignore=tests").

Suggested change
# Parse given command line arguments
while [[ $# -gt 0 ]]; do
    case $1 in
        # Verbosity for pytest
        -v|-vv|-vvv|-vvvv) VERBOSITY="$1";shift;;
    esac
done
# The default pytests args we use
PYTEST_ARGS=("--disable-warnings" "--color=yes")
if [[ -n "${VERBOSITY}" ]]; then
    PYTEST_ARGS+=("$VERBOSITY")
else
    PYTEST_ARGS+=("--quiet" "--numprocesses=auto")
fi
PYTEST_ARGS+=("--testmon-noselect")
"$(dirname "${BASH_SOURCE[0]}")/prune_pdf_cache.sh"
echo -e "Running all user interface tests..." | print_info
deescalate_privileges pytest "${PYTEST_ARGS[@]}" ui_tests
echo "✔ Tests successfully completed " | print_success
TESTS=()

# Parse given command line arguments
while [[ $# -gt 0 ]]; do
    case $1 in
        # If only tests affected by recent changes should be run, --changed can be passed as a flag
        --changed) CHANGED=1;shift;;
        # Verbosity for pytest
        -v|-vv|-vvv|-vvvv) VERBOSITY="$1";shift;;
        # Select tests by keyword expression
        -k) shift;KW_EXPR="$1";shift;;
        # Select tests by marker
        -m) shift;MARKER="$1";shift;;
        # If only particular tests should be run, test paths can be passed as CLI arguments
        *) TESTS+=("$1");shift;;
    esac
done

# The default pytest args we use
PYTEST_ARGS=("--disable-warnings" "--color=yes")

# Ignore the normal tests because they run separately from the ui tests
PYTEST_ARGS+=("--ignore=tests")

if [[ -n "${VERBOSITY}" ]]; then
    PYTEST_ARGS+=("$VERBOSITY")
else
    PYTEST_ARGS+=("--quiet" "--numprocesses=auto")
fi

# Keep the testmon dependency database of the ui tests separate from the normal tests
export TESTMON_DATAFILE=".testmondata-ui"

# Check if --changed flag was passed
if [[ -n "${CHANGED}" ]]; then
    # Check if .testmondata-ui file exists
    if [[ -f ".testmondata-ui" ]]; then
        # Only run changed tests and don't update dependency database
        PYTEST_ARGS+=("--testmon-nocollect")
        CHANGED_MESSAGE=" affected by recent changes"
    else
        # Inform that all tests will be run
        echo -e "\nIt looks like you have not run pytest without the \"--changed\" flag before." | print_warning
        echo -e "Pytest has to build a dependency database by running all tests without the flag once.\n" | print_warning
        # Override test path argument
        unset TESTS
        # Tell testmon to run all tests and collect data
        PYTEST_ARGS+=("--testmon-noselect")
    fi
else
    # Run all tests, but update list of tests
    PYTEST_ARGS+=("--testmon-noselect")
fi

# Determine whether coverage data should be collected
if [[ -z "${CHANGED}" ]] && (( ${#TESTS[@]} == 0 )); then
    PYTEST_ARGS+=("--cov=integreat_cms" "--cov-report=html")
fi

if [[ -n "${KW_EXPR}" ]] || [[ -n "${MARKER}" ]] || (( ${#TESTS[@]} )); then
    MESSAGES=()
    if [[ -n "${KW_EXPR}" ]]; then
        MESSAGES+=("\"${KW_EXPR}\"")
        PYTEST_ARGS+=("-k" "${KW_EXPR}")
    fi
    if [[ -n "${MARKER}" ]]; then
        MESSAGES+=("with ${MARKER}")
        PYTEST_ARGS+=("-m" "${MARKER}")
    fi
    # Check whether test paths exist
    for t in "${TESTS[@]}"; do
        if [[ -e "${t%%::*}" ]]; then
            # Adapt message and append to pytest arguments
            MESSAGES+=("${t}")
            PYTEST_ARGS+=("${t}")
        elif [[ -n "${t}" ]]; then
            # If the test path does not exist but was non-empty, show an error
            echo -e "${t%%::*}: No such file or directory" | print_error
            exit 1
        fi
    done
    TEST_MESSAGE=$(join_by ", " "${MESSAGES[@]}")
    TEST_MESSAGE=" in ${TEST_MESSAGE}"
fi

"$(dirname "${BASH_SOURCE[0]}")/prune_pdf_cache.sh"

echo -e "Running all user interface tests${TEST_MESSAGE}${CHANGED_MESSAGE}..." | print_info
deescalate_privileges pytest "${PYTEST_ARGS[@]}"
echo "✔ Tests successfully completed " | print_success

Comment on lines +1 to +17
import pytest
from django.core.management import call_command
from pytest_django.plugin import DjangoDbBlocker

# pylint: disable=unused-argument


@pytest.fixture(scope="session")
def load_test_data(django_db_setup: None, django_db_blocker: DjangoDbBlocker) -> None:
"""
Load the test data initially for all test cases

:param django_db_setup: The fixture providing the database availability
:param django_db_blocker: The fixture providing the database blocker
"""
with django_db_blocker.unblock():
call_command("loaddata", "integreat_cms/cms/fixtures/test_data.json")

This suggestion is basically just me noodling around while playing with this PR, trying to find what kind of fixtures could make sense. I stopped after writing some that auto-login as any role and start the test on the dashboard (which also serves as a check that the login was successful, because if that assertion fails the test shows as ERROR instead of FAILED, which might save time debugging), plus some comfort fixtures like page_as_root, page_as_* etc. for all roles, so it's easy to test individual roles without having to know that the name of the relevant fixture to parametrize is login_role_user_cookies (since that name is not meant to be used directly by tests):

import pytest
from playwright.sync_api import expect, Page
from ui_tests.conftest import ROOT, AUTHOR

def test_something(page_as_root: Page, page_as_author: Page) -> None:
    page = page_as_root if page_as_root is not None else page_as_author
    # …

# …is more or less equivalent to…

@pytest.mark.parametrize("login_role_user_cookies", [ROOT, AUTHOR], indirect=True)
def test_something(page_as_role: Page) -> None:
    page, role = page_as_role
    # …

This also contains a few small general improvements, like adding a module docstring in the same style as tests/conftest.py, importing the roles, role groups and the load_test_data fixture from there.

There are two TODOs I left in here, but I tried to take care to otherwise clean up and document my work so it is neither lost nor causes way too much unnecessary additional work later, if you or someone else were to try to include it in a PR and had to understand it in depth in order to satisfy pylint and co.

Suggested change
import pytest
from django.core.management import call_command
from pytest_django.plugin import DjangoDbBlocker
# pylint: disable=unused-argument
@pytest.fixture(scope="session")
def load_test_data(django_db_setup: None, django_db_blocker: DjangoDbBlocker) -> None:
"""
Load the test data initially for all test cases
:param django_db_setup: The fixture providing the database availability
:param django_db_blocker: The fixture providing the database blocker
"""
with django_db_blocker.unblock():
call_command("loaddata", "integreat_cms/cms/fixtures/test_data.json")
"""
This module contains shared fixtures for pytest, specific to ui tests
"""
from __future__ import annotations
from typing import TYPE_CHECKING
import pytest
from django.contrib.auth import get_user_model
from django.test.client import Client
from playwright.sync_api import BrowserContext, BrowserType, expect, Page
from pytest_django.live_server_helper import LiveServer
# pylint: disable=unused-import
from tests.conftest import ALL_ROLES, ANONYMOUS, APP_TEAM, AUTHOR, CMS_TEAM, EDITOR, EVENT_MANAGER, HIGH_PRIV_STAFF_ROLES, load_test_data, MANAGEMENT, MARKETING_TEAM, OBSERVER, PRIV_STAFF_ROLES, REGION_ROLES, ROLES, ROOT, SERVICE_TEAM, STAFF_ROLES, WRITE_ROLES
if TYPE_CHECKING:
from typing import Generator, Sequence
from _pytest.fixtures import SubRequest
from playwright._impl._api_structures import SetCookieParam
from pytest_django.plugin import DjangoDbBlocker # type: ignore[attr-defined]
# pylint: disable=redefined-outer-name
@pytest.fixture(scope="session")
def login_role_user_cookies(
request: SubRequest, django_db_setup: None, load_test_data: None, django_db_blocker: DjangoDbBlocker, live_server: LiveServer # noqa: F811
) -> tuple[list[dict[str,str]], str]:
"""
Get the test user of the current role, force a login and extract the session cookies. Gets executed only once per user.
:param request: The request object providing the parametrized role variable through ``request.param``
:param django_db_setup: The fixture providing the database availability
:param load_test_data: The fixture providing the test data (see :meth:`~tests.conftest.load_test_data`)
:param django_db_blocker: The fixture providing the database blocker
:param live_server: The fixture providing a running full blown server instance to test against
:return: A tuple consisting of the cookies representing the session for the respective user and the name of the associated role
"""
client = Client()
cookies = []
# Only log in user if the role is not anonymous
if request.param != ANONYMOUS:
with django_db_blocker.unblock():
user = get_user_model().objects.get(username=request.param.lower())
client.force_login(user)
domain = f"{live_server.thread.host}:{live_server.thread.port}"
for name, properties in client.cookies.items():
cookie = {
"name": name,
"value": properties.value, # needs to be put in explicitly!
"domain": domain,
"path": properties["path"],
}
cookies.append(cookie)
return cookies, request.param
@pytest.fixture(scope="function")
def login_role_user(
    login_role_user_cookies: tuple[Sequence[SetCookieParam], str],
    browser_type: BrowserType,
    django_db_blocker: DjangoDbBlocker,
) -> Generator[tuple[BrowserContext, str], None, None]:
    """
    Launch a fresh browser context with a valid session for the current user.

    :param login_role_user_cookies: A tuple of the session cookies and the name of the associated user role
    :param browser_type: The object to launch a browser context
    :param django_db_blocker: The fixture providing the database blocker
    :return: The browser context and the current role
    """
    cookies, role = login_role_user_cookies
    # TODO: expose headless through parametrization
    # and set it to False
    # - whenever this parametrization tells us to
    # - OR tests were invoked with a global non-headless flag
    browser = browser_type.launch(headless=False)
    # TODO: enable changing TZ and locale through parametrization
    # context = browser.new_context(timezone_id="Europe/Berlin", locale="de-DE")
    context = browser.new_context(timezone_id="Europe/Berlin", locale="en-GB")
    context.add_cookies(cookies)
    yield context, role
    context.close(reason="Test finished")
@pytest.fixture(scope="function")
def page_as_role(
    login_role_user: tuple[BrowserContext, str], live_server: LiveServer
) -> Page:
    """
    Launch a fresh browser context with a valid session for the current user.

    :param login_role_user: The browser context and the current role
    :param live_server: The fixture providing a running full blown server instance to test against
    :return: The page on the default page (login page for anonymous, otherwise the respective dashboard)
    """
    context, role = login_role_user
    page = context.new_page()
    page.goto(live_server.url)
    if role == ANONYMOUS:
        expect(page.get_by_role("heading", name="Login")).to_be_visible()
    else:
        expect(page.get_by_role("heading", name="Login")).to_be_hidden()
        name = "Admin Dashboard" if role in STAFF_ROLES else "My Dashboard"
        expect(page.get_by_role("heading", name=name)).to_be_visible()
    return page
ROLE_FIXTURES = [f"page_as_{role.lower()}" for role in ALL_ROLES]
def pytest_generate_tests(metafunc: pytest.Metafunc) -> None:
"""
Hook called by pytest on collection to generate tests (e.g. dynamically adding parametrizations for fixtures).
We use this instead of defining default parametrizations on `login_role_user_cookies` directly
in order to implement the page_as_<ROLE> fixtures as syntactic sugar.
:param metafunc: The object describing the test and details like attached fixtures
"""
if "login_role_user_cookies" in metafunc.fixturenames:
already_parametrized = None
for mark in metafunc.definition.iter_markers(name="parametrize"):
if mark.args[0] == "login_role_user_cookies":
already_parametrized = mark
# Theoretically, we could choose to overwrite the given parametrization later like this:
# mark.args[1] = (mark.args[1][0], REPLACEMENT_PARAMETERS)
# because mark.args[1] is a tuple of the name of the fixture the parameters are for, and the parameters themselves.
# But in this case we probably don't want to overwrite it if it is already parametrized.
if not already_parametrized:
# The fixtures like page_as_root don't do anything by themselves
# except for depending on page_as_role and passing its result without modification.
# The actual logic happens here, where we parametrize the login_role_user_cookies fixture,
# which is the one actually doing something with the parameters in the end,
# by simply checking which page_as_* fixtures are attached to the test.
# If none are found, we just parametrize it with ALL_ROLES. This is necessary because
# we cannot decorate the login_role_user_cookies fixture as we would do usually with
# @pytest.fixture(scope="session", params=ALL_ROLES)
# because it seems then we cannot distinguish whether that parametrization stems from the fixture definition
# or whether it was parametrized on the test. (see how we determined already_parametrized above)
found_roles = [role for role in ALL_ROLES if f"page_as_{role.lower()}" in metafunc.fixturenames]
if not found_roles:
found_roles = ALL_ROLES
metafunc.parametrize("login_role_user_cookies", found_roles, indirect=True)
# Create the dummy page_as_* fixtures
for name in ROLE_FIXTURES:
def fixture_func(page_as_role: Page) -> Page:
"""
The page on the default page (login page for anonymous, otherwise the respective dashboard).
:param page_as_role: The page on the default page, logged in
:return: The page on the default page, logged in
"""
return page_as_role
globals()[name] = pytest.fixture(scope="function")(fixture_func)

@PeterNerlich removed their assignment Jul 14, 2024
@PeterNerlich added the blocked (Blocked by external dependency) label Jul 14, 2024
@charludo (Contributor) commented Sep 4, 2024

Unfortunately I haven't been able to test this; I get ERROR ui_tests/test_login.py::test_login[chromium] - AttributeError: 'PlaywrightContextManager' object has no attribute '_pywright' when running the tool.

@PeterNerlich marked this pull request as draft October 16, 2024 18:15
Labels: blocked (Blocked by external dependency)
Successfully merging this pull request may close these issues: Concept UI test
4 participants