This repository has been archived by the owner on Feb 1, 2022. It is now read-only.
This project documents its dependencies both explicitly (see README.md#requirements) and implicitly (see Dockerfile). Because this is done in several locations, sooner or later someone will forget to update one of them, which will result in inconsistencies.
At the same time, Python provides a straightforward way to specify dependencies: the requirements.txt file.
Advantages:
- Only one location
- Supported by the toolchain (e.g. `pip` can directly parse the content)
What do you think about switching to requirements.txt?
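For illustration, a minimal sketch of what such a file could look like. The package names and version pins below are assumptions for the sake of the example, not taken from this repository:

```
# requirements.txt — hypothetical example; the actual dependencies may differ
scrapy==2.1.0
requests>=2.23

# pip can then consume this file directly:
#   pip install -r requirements.txt
```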
dasmur added a commit to dasmur/corona_landkreis_fallzahlen_scraping that referenced this issue on Jun 18, 2020:
This commit introduces a requirements.txt that defines all dependencies required to build the crawler in one central place. The Dockerfile and the documentation are adapted accordingly.
(fixes: corona-zahlen-landkreis#63)
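A sketch of what the corresponding Dockerfile change could look like, assuming a typical Python base image; the image tag and paths are assumptions, not taken from the actual commit:

```dockerfile
FROM python:3.8-slim

WORKDIR /app

# Copy only the dependency list first so Docker can cache this layer
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy the rest of the crawler source
COPY . .
```

Copying requirements.txt before the rest of the source means the dependency layer is only rebuilt when the dependencies change, not on every code change.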