Cancel deprecation of get_logger and patch_loggers (and deprecate patch_logger instead). Extensions need loggers too, distinct ones, and they were forgotten... Sorry for the back and forth 🙇
Attributes setter and deleter on Function are deprecated. They were moved into the Attribute class since properties are instantiated as attributes, not functions.
Extension hooks must accept **kwargs in their signature, to allow forward-compatibility. Accepting **kwargs also makes it possible to remove unused arguments from the signature.
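The shape of such a hook can be sketched as follows (a standalone sketch; a real extension would subclass griffe.Extension, and the hook name and arguments below are illustrative):

```python
# Sketch of a forward-compatible hook: accepting **kwargs means the hook
# keeps working even if Griffe later passes new arguments, and lets you
# drop arguments you don't use from the signature.
class MyExtension:
    def on_instance(self, *, obj=None, **kwargs):
        # Arguments we don't care about (node, agent, ...) land in **kwargs.
        return obj

# The call still works with arguments the hook never declared explicitly:
MyExtension().on_instance(obj="pkg.module", node=None, agent=None)
```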
In version 1, Griffe will serialize object members as dictionaries instead of lists. Lists were initially used to preserve source order, but source order can be re-obtained thanks to the line number attributes (lineno, endlineno). Version 0.49 is able to load both lists and dictionaries from JSON dumps, and version 1 will maintain this ability. However external tools loading JSON dumps will need to be updated.
Add temporary_inspected_package helper (3c4ba16 by Timothée Mazzucotelli).
Accept alias resolution related parameters in temporary_visited_package (7d5408a by Timothée Mazzucotelli).
Accept inits parameter in temporary_visited_package (a4859b7 by Timothée Mazzucotelli).
Warn (DEBUG) when an exported object (listed in __all__) comes from a sibling, parent or external module instead of the current module or a submodule (f82317a by Timothée Mazzucotelli). Issue #249, related to PR #251
Pass down agent to extension hooks (71acb01 by Timothée Mazzucotelli). Issue #312
Add source property to docstrings, which returns the docstring lines as written in the source (3f6a71a by Timothée Mazzucotelli). Issue #90
Move setter and deleter to the Attribute class instead of Function, since that's how properties are instantiated (309c6e3 by Timothée Mazzucotelli). Issue #311
Reduce risk of recursion errors by excluding imported objects from has_docstrings, unless they're public (9296ca7 by Timothée Mazzucotelli). Issue #302
Fix retrieval of annotations from parent for Yields sections in properties (8a21f4d by Timothée Mazzucotelli). Issue #298
Fix parsing Yields section (Google-style) when yielded values are tuples, and the description has more lines than tuple values (9091776 by Timothée Mazzucotelli).
Fix condition on objects kinds when merging stubs (727f99b by Timothée Mazzucotelli).
All submodules are deprecated. All objects are now exposed in the top-level griffe module.
All logger names are deprecated, and will be replaced with "griffe" in v1. In v1 our single "griffe" logger will provide a method to temporarily disable logging, logger.disable(), since that's the most common third-party use.
The get_logger function is deprecated. Instead, we'll use a global logger internally, and users are welcome to use it too.
The patch_loggers function is renamed patch_logger.
Following the logging changes, the docstring_warning function can now directly log a warning message instead of returning a callable that does. Passing it a logger name (to get a callable) is deprecated in favor of passing it a docstring, message and offset directly.
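Once every message goes through the single "griffe" logger, third parties can already silence it temporarily with the standard library alone (a hedged sketch; silence_griffe is a hypothetical helper, and v1 will expose a convenience logger.disable() for this common case):

```python
import logging
from contextlib import contextmanager

# Temporarily disable the "griffe" logger, restoring its previous state on exit.
@contextmanager
def silence_griffe():
    logger = logging.getLogger("griffe")
    previous = logger.disabled
    logger.disabled = True
    try:
        yield
    finally:
        logger.disabled = previous

with silence_griffe():
    logging.getLogger("griffe").warning("swallowed while disabled")
```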
Don't take a shortcut to the end of an alias chain when getting/setting/deleting alias members (1930609 by Timothée Mazzucotelli).
Short-circuit __all__ convention when checking if a module is public (5abf4e3 by Timothée Mazzucotelli).
Reuse existing loggers, preventing overwriting issues (3c2825f by Timothée Mazzucotelli).
Ignore .pth files that are not UTF-8 encoded (ea299dc by Andrew Sansom). Issue #300, PR #301
Attributes without annotations cannot be dataclass parameters (c9b2e09 by Hassan Kibirige). PR #297
When deciding to alias an object or not during inspection, consider module paths to be equivalent even with arbitrary private components (8c9f6e6 by Timothée Mazzucotelli). Issue #296
Fix target path computation: use qualified names to maintain classes in the path (6e17def by Timothée Mazzucotelli). Issue #296
The has_private_name and has_special_name properties on objects and aliases have been renamed is_private and is_special. The is_private property now only returns true if the name is not special.
We are working on v1, and it will come soon, so we recommend that you consider adding an upper bound on Griffe. Version 1 will remove all legacy code! There will be a couple more v0 releases before it, so that you get all the deprecation warnings needed to upgrade your code using Griffe before upgrading to v1. See the breaking changes and deprecations for v0.46 below.
Calling objects' has_labels() method with a labels keyword argument is not supported anymore. The parameter became a variadic positional parameter, so it cannot be used as a keyword argument anymore. Passing a sequence instead of multiple positional arguments still works but will emit a deprecation warning.
Calling the load_extensions() function with an exts keyword argument is not supported anymore. The parameter became a variadic positional parameter, so it cannot be used as a keyword argument anymore. Passing a sequence instead of multiple positional arguments still works but will emit a deprecation warning.
As seen above in the breaking changes section, the only parameters of Object.has_labels() and load_extensions() both became variadic positional parameters. Passing a sequence as single argument is deprecated in favor of passing multiple arguments. This is an ergonomic change: I myself often forgot to wrap extensions in a list. Passing sequences of labels (lists, sets, tuples) is also difficult from Jinja templates.
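The shape of the change can be sketched like this (load_things is a hypothetical stand-in, not the real load_extensions):

```python
# A variadic positional parameter takes multiple arguments directly,
# no wrapping list needed.
def load_things(*things):
    return list(things)

# Before (deprecated): pass a sequence  -> load_extensions([ext1, ext2])
# After: pass the items directly        -> load_extensions(ext1, ext2)
loaded = load_things("ext1", "ext2")
```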
The following methods and properties on objects and aliases are deprecated: member_is_exported(), is_explicitely_exported, is_implicitely_exported. Use the is_exported property instead. See issue 281.
The is_exported() and is_public() methods became properties. They can still be called like methods, but will emit deprecation warnings when doing so. See issue 281.
The ignore_private parameter of the find_breaking_changes() function is now deprecated and unused. With the reworked "exported" and "public" API, this parameter became useless. See issue 281.
Using stats() instead of Stats will now emit a deprecation warning.
Add properties telling whether an expression name resolves to an enumeration class, instance or value (fdb21d9 by Timothée Mazzucotelli). Issue mkdocstrings/python#124
Remove get_call_keyword_arguments utility function, as it is implemented with a single line and creates a cyclic dependency with expressions (35cf170 by Timothée Mazzucotelli).
Further prevent cyclic dependency between node utils and expressions (9614c83 by Timothée Mazzucotelli).
Avoid cyclic dependency between node utils and expressions (aedf39c by Timothée Mazzucotelli).
Move arguments node-parsing logic into its own module (used by visitor and lambda expressions) (ad68e65 by Timothée Mazzucotelli).
Use canonical imports (3091660 by Timothée Mazzucotelli).
Use ast.unparse instead of our own unparser (6fe1316 by Timothée Mazzucotelli).
Only return 0 for the line number of removed objects when the location is reworked as relative (3a4d054 by Timothée Mazzucotelli).
De-duplicate search paths in finder as they could lead to the same modules being yielded twice or more when scanning namespace packages (80a158a by Timothée Mazzucotelli).
Fix logic for skipping already encountered modules when scanning namespace packages (21a48d0 by Timothée Mazzucotelli). Issue mkdocstrings#646
The loader's load_module method was renamed load, and its module parameter was renamed objspec and is now positional-only. This method always returned the specified object, not just modules, so it made more sense to rename it load and to rename the parameter specifying the object. Old usages (load_module and module=...) will continue to work for some time (a few months, a year, or more), and will emit deprecation warnings.
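The new signature shape can be sketched as follows (LoaderSketch is a hypothetical stand-in for the Griffe loader, shown only to illustrate the positional-only parameter):

```python
# objspec is positional-only (note the "/"), so callers write
# loader.load("pkg") rather than the old loader.load_module(module="pkg").
class LoaderSketch:
    def load(self, objspec, /, **options):
        return f"loaded {objspec}"

loader = LoaderSketch()
loader.load("pkg.module.Class")  # the keyword form objspec=... is rejected
```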
Force extension import path to be a string (coming from MkDocs' !relative tag) (34e21a9 by Timothée Mazzucotelli).
Fix crash when trying to get a decorator callable path (found thanks to pysource-codegen) (e57f08e by Timothée Mazzucotelli).
Fix crash when trying to get docstring after assignment (found thanks to pysource-codegen) (fb0a0c1 by Timothée Mazzucotelli).
Fix type errors in expressions and value extractor, don't pass duplicate arguments (found thanks to pysource-codegen) (7e53288 by Timothée Mazzucotelli).
Use all members (declared and inherited) when checking for breakages, avoiding false positives when a member of a class is moved into a parent class (1c4340b by Timothée Mazzucotelli). Issue #203
Skip early submodules with dots in their path (5e81b8a by Timothée Mazzucotelli). Issue #185
Classes InspectorExtension and VisitorExtension are deprecated in favor of Extension. As a side-effect, the hybrid extension is also deprecated. See how to use and write extensions.
Numpy parser: handle return section items with just type, or no name and no type (bdec37d by Michael Chow). Issue #173, PR #174, Co-authored-by: Timothée Mazzucotelli pawamoy@pm.me
Rework extension system (dea4c83 by Timothée Mazzucotelli).
Parse attribute values, parameter defaults and decorators as expressions (7b653b3 by Timothée Mazzucotelli).
Add loader option to avoid storing source code, reducing memory footprint (d592edf by Timothée Mazzucotelli).
Add extra attribute to objects (707a348 by Timothée Mazzucotelli).
AliasResolutionError instances don't have a target_path attribute anymore. It is instead replaced by an alias attribute which is a reference to an Alias instance.
Lots of positional-or-keyword parameters were changed to keyword-only parameters.
Support newer versions of editables (ab7a3be by Timothée Mazzucotelli): the names of editable modules have changed from __editables_* to _editable_impl_*.
Provide a JSON schema (7dfed39 by Timothée Mazzucotelli).
Allow hybrid extension to filter objects and run multiple inspectors (f8ff53a by Timothée Mazzucotelli).
Allow loading extension from file path (131454e by Timothée Mazzucotelli).
Add back relative_filepath which now really returns the filepath relative to the current working directory (40fe0c5 by Timothée Mazzucotelli).
Parameter only_known_modules was renamed external in the expand_wildcards() method of the loader.
Exception UnhandledEditablesModuleError was renamed UnhandledEditableModuleError since we now support editable installation from other packages than editables.
Properties are now fetched as attributes rather than functions, since that is how they are used. This was asked by users, and since Griffe generates signatures for Python APIs (emphasis on APIs), it makes sense to return data that matches the interface provided to users. Such property objects in Griffe's output will still have the associated property labels of course.
Lots of bug fixes. These bugs were discovered by running Griffe on many major packages as well as the standard library (again). Particularly, alias resolution should be more robust now, and should generate fewer issues like cyclic aliases, meaning indirect/wildcard imports should be better understood. We still highly discourage the use of wildcard imports.
The "Breaking Changes" and "Deprecations" sections are proudly written with the help of our new API breakage detection feature! Many thanks to Talley Lambert (@tlambert03) for the initial code allowing us to compare two Griffe trees.
All parameters of the load_git function, except module, are now keyword-only.
Parameter try_relative_path of the load_git function was removed.
Parameter commit was renamed ref in the load_git function.
Parameter commit was renamed ref in the tmp_worktree helper, which will probably become private later.
Parameters ref and repo switched positions in the tmp_worktree helper.
All parameters of the resolve_aliases method are now keyword-only.
Parameters only_exported and only_known_modules of the resolve_module_aliases method were removed. This method is most probably not used by anyone, and will probably be made private in the future.
Parameters only_exported and only_known_modules of the resolve_aliases method are deprecated in favor of their inverted counterparts, the implicit and external parameters.
Example before: loader.resolve_aliases(only_exported=True, only_known_modules=True)
Example after: loader.resolve_aliases(implicit=False, external=False)
Add CLI option to disallow inspection (8f71a07 by Timothée Mazzucotelli).
Support complex __all__ assignments (9a2128b by Timothée Mazzucotelli). Issue #40
Inherit class parameters from __init__ method (e195593 by François Rozet). Issue mkdocstrings/python#19, PR #65. It allows writing "Parameters" sections in the docstring of the class itself.
Contributions are welcome, and they are greatly appreciated! Every little bit helps, and credit will always be given.
There are multiple ways to contribute to this project:
with feature requests: we are always happy to receive feedback and new ideas! If you have any, you can create new feature requests on our issue tracker. Make sure to search issues first, to avoid creating duplicate requests.
with bug reports: only you (the users) can help us find and fix bugs! We greatly appreciate it if you can give us a bit of your time to create a proper bug report on our issue tracker. As with feature requests, make sure the bug has not already been reported by searching through issues first.
with user support: watch activity on the GitHub repository and our Gitter channel to answer issues and discussions created by users. Answering questions from users can take a lot of time away from maintenance and new features: helping us with user support means more time for us to work on the project.
with documentation: spotted a mistake in the documentation? Found a paragraph unclear or a section missing? Reporting those already helps a lot, and if you can, sending pull requests is even better.
with code: if you are interested in a feature request, or are experiencing a reported bug, you can contribute a feature or a fix. You can simply drop a comment in the relevant issue, and we will do our best to guide you.
For easy documentation fixes, you can edit a file and send a pull request directly from the GitHub web interface. For more complex fixes or improvements, please read our contributor guide. The guide will show you how to set up a development environment to run tests or serve the documentation locally.
griffe2md outputs API docs in Markdown. It uses Griffe to load the data, and then uses Jinja templates to render documentation, just like mkdocstrings-python, but in Markdown instead of HTML.
quartodoc lets you quickly generate Python package API reference documentation using Markdown and Quarto. quartodoc is designed as an alternative to Sphinx. It uses Griffe to load API data and parse docstrings in order to render HTML documentation, just like mkdocstrings-python, but for Quarto instead of MkDocs.
Pydanclick allows using Pydantic models as Click options. It uses Griffe to parse docstrings and find Attributes sections, helping it build Click options.
rafe is a tool for inspecting Python environments and building packages (irrespective of language) in a reproducible manner. It wraps Griffe to provide a CLI command to check for API breaking changes.
Yapper converts Python docstrings to Astro files for use by the Astro static site generator. It uses Griffe to parse Python modules and extract Numpydoc-style docstrings.
This extension sets the docstring parser to auto for all the docstrings of external packages. Packages are considered "external" when their sources are found in a virtual environment instead of a folder under the current working directory. Setting their docstring style to auto is useful if you plan on rendering the docstring of these objects in your own documentation.
This extension reads docstrings for parameters, return values and more from type annotations using Annotated and the Doc class suggested in PEP 727. Documenting parameters and return values this way makes it possible to completely avoid relying on a particular "docstring style" (Google, Numpydoc, Sphinx, etc.) and just use plain markup in module/class/function docstrings. Docstrings therefore do not have to be parsed at all.
from typing import Annotated as An
from typing_extensions import Doc
"""Just plain markup."""
...
PEP 727 is likely to be withdrawn or rejected, but the Doc class will remain in typing_extensions, as stated by Jelle Zijlstra:
We’ll probably keep it in typing_extensions indefinitely even if the PEP gets withdrawn or rejected, for backwards compatibility reasons.
You are free to use it in your own code using the typing-extensions version. If usage of typing_extensions.Doc becomes widespread, that will be a good argument for accepting the PEP and putting it in the standard library.
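As a sketch of what documenting a parameter this way looks like in practice (the function and parameter names below are illustrative, not from any real API):

```python
from typing import Annotated
from typing_extensions import Doc


def greet(name: Annotated[str, Doc("Name of the person to greet.")]) -> str:
    """Return a greeting message.

    Just plain markup, no docstring style to parse.
    """
    return f"Hello, {name}!"


print(greet("World"))  # → Hello, World!
```

Tools like the griffe-typingdoc extension can then pick up the Doc metadata directly from the annotations.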
Welcome to the Griffe contributor guide! If you are familiar with Python tooling, development, and contributions to open-source projects, see the TL;DR at the end, otherwise we recommend you walk through the following pages:
If you are unsure about what to contribute to, you can check out our issue tracker to see if some issues are interesting to you, or you could check out our coverage report to help us cover more of the codebase with tests.
pipx lets you install and run Python applications in isolated environments.
rye install uv
Rye is an all-in-one solution for Python project management, written in Rust.
Optionally, we recommend using direnv, which will add our scripts folder to your path when working on the project, allowing you to call our make Python script with the usual make command.
If you didn't install direnv, just run ./scripts/make setup.
The setup command will install all the Python dependencies required to work on the project. This command will create a virtual environment in the .venv folder, as well as one virtual environment per supported Python version in the .venvs/3.x folders. If you cloned the repository on the same file-system as uv's cache, everything will be hard linked from the cache, so don't worry about wasting disk space.
This document describes our workflow when developing features, fixing bugs and updating the documentation. It also includes guidelines for pull requests on GitHub.
If you are unsure about how to fix or ignore a warning, just let the continuous integration fail, and we will help you during review. Don't bother updating the changelog; we will take care of it.
Breaking changes should generally be avoided. If we decide to add a breaking change anyway, we should first allow a deprecation period. To deprecate parts of the API, check Griffe's hints on how to deprecate things.
Use make check-api to check if there are any breaking changes. All of them should allow deprecation periods. Run this command again until no breaking changes are detected.
Deprecated code should also be marked as legacy code. We use Yore to mark legacy code. Similarly, code branches made to support older versions of Python should be marked as legacy code using Yore too.
Subject and body must be valid Markdown. Subject must have proper casing (uppercase for first letter if it makes sense), but no dot at the end, and no punctuation in general. Example:
feat: Add CLI option to run in verbose mode
Issue-10: https://github.com/namespace/project/issues/10
Related-to-PR-namespace/other-project#15: https://github.com/namespace/other-project/pull/15
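Put together, a complete commit message following this convention might look like this (the body text is illustrative; the trailers reuse the examples above):

```
feat: Add CLI option to run in verbose mode

Add a `--verbose` option to the command-line interface,
printing debug information while running.

Issue-10: https://github.com/namespace/project/issues/10
Related-to-PR-namespace/other-project#15: https://github.com/namespace/other-project/pull/15
```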
These "trailers" must appear at the end of the body, without any blank lines between them. The trailer title can contain any character except a colon (:). We expect a full URI for each trailer, not just GitHub autolinks (for example, full GitHub URLs for commits and issues, not just the hash or the #issue-number).
We do not enforce a line length on commit message subjects and bodies.
Occasional contributors
If this convention seems unclear to you, just write the message of your choice, and we will rewrite it ourselves before merging.
Link to any related issue in the Pull Request message.
During the review, we recommend using fixups:
# SHA is the SHA of the commit you want to fix
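A minimal end-to-end sketch of the fixup workflow, run in a throwaway repository so it is self-contained (file names, commit messages and the committer identity are illustrative):

```shell
set -e
repo=$(mktemp -d) && cd "$repo"
git init -q
g() { git -c user.email=ci@example.com -c user.name=ci "$@"; }

echo a > file.txt && git add file.txt && g commit -qm "chore: Initial commit"
echo b >> file.txt && git add file.txt && g commit -qm "feat: Add CLI option"
SHA=$(git rev-parse HEAD)    # SHA of the commit you want to fix

# Address a review comment, then record it as a fixup of that commit:
echo c >> file.txt && git add file.txt
g commit -q --fixup "$SHA"   # subject becomes "fixup! feat: Add CLI option"

# Once the review is done, squash the fixups (non-interactive here,
# by accepting the generated todo list as-is):
GIT_SEQUENCE_EDITOR=: g rebase -qi --autosquash HEAD~2
git log --oneline            # the fixup is folded into its target commit
```

After the rebase, the history contains only the two real commits, with the review fix merged into "feat: Add CLI option".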
Occasional or even regular contributors don't have to read this, but can anyway if they are interested in our release process.
Once we are ready for a new release (a few bugfixes and/or features merged in the main branch), maintainers should update the changelog. If our commit message convention was properly followed, the changelog can be automatically updated from the messages in the Git history with make changelog. This task updates the changelog in place to add a new version entry.
Once the changelog is updated, maintainers should review the new version entry, to:
(optionally) add general notes for this new version, like highlights
insert Breaking changes and Deprecations sections if needed, before other sections
add links to the relevant parts of the documentation
fix typos or markup if needed
Once the changelog is ready, a new release can be made with make release. If the version wasn't passed on the command-line with make release version=x.x.x, the task will prompt you for it. Use the same version as the one that was just added to the changelog. For example if the new version added to the changelog is 7.8.9, use make release version=7.8.9.
The release task will stage the changelog, commit, tag, push, then build distributions and upload them to PyPI.org, and finally deploy the documentation. If any of these steps fail, you can manually run each step with Git commands, then make build, make publish and make docs-deploy.
Ultimately, these expressions are what allow downstream tools such as mkdocstrings' Python handler to render cross-references to every object it knows of, coming from the current code base or loaded from object inventories (objects.inv files).
During static analysis, these expressions also make it possible to analyze decorators, dataclass fields, and many more things in great detail, and in a robust manner, to build support for third-party libraries in the form of Griffe extensions.
To learn more about expressions, read their API reference.
The Python language keeps evolving, and often library developers must continue supporting a few minor versions of Python. Therefore they cannot use some features that were introduced in the latest versions.
Yet this doesn't mean they can't enjoy the latest features in their own docs: Griffe lets you "modernize" expressions, for example by replacing typing.Union with PEP 604 type unions |. Thanks to this, downstream tools like mkdocstrings can automatically transform type annotations into their modern equivalent. This improves consistency in your docs, and shows users how to use your code with the latest features of the language.
To modernize an expression, simply call its modernize() method. It returns a new, modernized expression. Some parts of the expression might be left unchanged, so be careful if you decide to mutate them.
An API (Application Programming Interface) is the interface through which developers interact with your software. In the Python world, the API of your Python library is the set of modules, classes, functions and other attributes made available to your users. For example, users can do from your_library import this_function: this_function is part of the API of your_library.
Oftentimes, when you develop a library, you create functions, classes, etc. that are only useful internally: they are not supposed to be used by your users. Python does not provide easy or standard ways to actually prevent users from using internal objects, so, to distinguish public objects from internal objects, we usually rely on conventions, such as prefixing internal objects' names with an underscore, for example def _internal_function(): ..., to mark them as "internal".
Prefixing an object's name with an underscore still does not prevent users from importing and using this object, but it informs them that they are not supposed to import and use it, and that this object might change or even disappear in the future, without notice.
On the other hand, public objects are supposed to stay compatible with previous versions of your library for at least a definite amount of time, to prevent downstream code from breaking. Any change that could break downstream code is supposed to be communicated before it is actually released. Maintainers of the library usually allow a period of time where the public object can still be used as before, but will emit deprecation warnings when doing so, hinting users that they should upgrade their use of the object (or use another object that will replace it). This period of time is usually called a deprecation period.
So, how do we mark an object as public? How do we inform our users which objects can safely be used, and which ones are subject to unnotified changes? Usually, we rely again on the underscore prefix convention: if an object isn't prefixed with an underscore, it means that it is public. But essentially, your public API is what you say it is. If you clearly document that a single function of your package is public, and that all others are subject to unnotified changes and unsupported, then your public API is composed of this single function, and nothing else. Public APIs are a matter of communication. Concretely, it's about deciding what parts of your code base are public, and communicating that clearly.
Some components are obviously considered for the public API of a Python package:
the module layout
functions and their signature
classes (their inheritance), their methods and signatures
the rest of the module or class attributes, their types and values
Other components should be considered for the public API but are often forgotten:
logger names: users might rely on them to filter logs (see Logger names)
exceptions raised: users definitely rely on them to catch errors
Other components could be considered for the public API, but usually require too much maintenance:
logging messages: users might rely on them to grep the logs
exception messages: users might rely on them for various things
Besides, logging and exception messages simply cannot allow deprecation periods where both old and new messages are emitted. Maintainers could however consider adding unique, short codes to messages for more robust consumption.
Our recommendation — Communicate your public API, verify what you can.
Take the time to learn about and use ways to declare, communicate and deprecate your public API. Your users will have an easier time using your library. On the maintenance side, you won't get bug reports for uses that are not supported, or you will be able to quickly close them by pointing at the documentation explaining what your public API is, or why something was deprecated, for how long, and how to upgrade.
Automate verifications around your public API with tools like Griffe. Currently Griffe doesn't support checking CLI configuration options, logger names or raised exceptions. If you have the capacity to, verify these manually before each release. Griffe checks and API rules enforcement are a very good starting point.
In the Python ecosystem we very often prefix objects with an underscore to mark them as internal, or private. Objects that are not prefixed are then implicitly considered public. For example:
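For instance (a sketch; the module and function names are illustrative):

```python
# my_library.py: the underscore prefix convention.
def greet(name):
    """Public: no leading underscore."""
    return _format(name)


def _format(name):
    """Internal: leading underscore, not part of the public API."""
    return f"Hello, {name}!"


print(greet("World"))  # → Hello, World!
```

Users can still import and call _format, but the underscore tells them they do so at their own risk.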
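A sketch of the redundant-aliases convention, with the internal module simulated in-memory so the example is self-contained (module and object names are illustrative):

```python
import sys
import types

# Simulate an internal module so the example runs on its own.
_mod = types.ModuleType("my_package_internal")
exec("class Thing: ...\ndef something(): return 42", _mod.__dict__)
sys.modules["my_package_internal"] = _mod

# In an __init__ module, a "redundant alias" (importing a name as itself)
# marks the imported object as public under this convention:
from my_package_internal import Thing as Thing          # public
from my_package_internal import something as something  # public

print(something())  # → 42
```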
Here Thing and something are considered public even though they were imported. If __all__ were defined, it would take precedence and redundant aliases wouldn't apply.
Same as for redundant aliases, this convention says that all objects imported thanks to wildcard imports are public. This can again be useful in __init__ modules where you expose lots of objects declared in submodules.
Note that the wildcard imports logic stays the same, and imports either all objects that do not start with an underscore (imported objects included!), or all objects listed in __all__ if it is defined. It doesn't care about other conventions such as redundant aliases, or the wildcard imports convention itself.
Our recommendation — Use the underscore prefix and __all__ conventions.
Use both the underscore prefix convention for consistent naming at module and class levels, and the __all__ convention for declaring your public API. We do not recommend using the redundant aliases convention, because it doesn't provide any information at runtime. We do not recommend the wildcard import convention either, for the same reason and for additional reasons mentioned here. We still provide the griffe-public-redundant-aliases and griffe-public-wildcard-imports extensions for those who would still like to rely on these conventions.
To better support introspection, modules should explicitly declare the names in their public API using the __all__ attribute. Setting __all__ to an empty list indicates that the module has no public API.
Even with __all__ set appropriately, internal interfaces (packages, modules, classes, functions, attributes or other names) should still be prefixed with a single leading underscore.
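The interaction between __all__ and wildcard imports can be demonstrated with a module built on the fly (the module and function names are illustrative):

```python
import sys
import types

# Build a module dynamically: __all__ declares its public API explicitly.
mod = types.ModuleType("my_module")
exec(
    "__all__ = ['do_thing']\n"
    "def do_thing(): return 42\n"
    "def _helper(): return 0\n"      # internal (underscore prefix)
    "def undeclared(): return 1\n",  # public-looking, but not in __all__
    mod.__dict__,
)
sys.modules["my_module"] = mod

# A wildcard import only picks up the names listed in __all__:
namespace = {}
exec("from my_module import *", namespace)
print(sorted(n for n in namespace if not n.startswith("__")))  # → ['do_thing']
```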
Concatenating __all__ for easier maintenance of __init__ modules.
If you worry about maintenance of your __init__ modules, know that you can very well concatenate __all__ lists from submodules into the current one:
from my_package.subpackage1.module1a import this1a, that1a

__all__ = ["this1a", "that1a"]
raise AttributeError(f"module 'old_module' has no attribute '{name}'")
Such changes sometimes go unnoticed before the breaking change is released, because users don't enable deprecation warnings. These changes can also be confusing to users when they do notice the warnings: maybe they don't use the deprecated import themselves, and are not sure where to report the deprecated use. These changes also require time to upgrade, and time to maintain.
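A module-level __getattr__ deprecation shim (PEP 562) can be sketched end-to-end, with the old and new modules simulated in-memory so the example is self-contained (module and function names are illustrative):

```python
import sys
import types
import warnings

# Simulate the new location of a moved function.
new_module = types.ModuleType("new_module")
new_module.my_function = lambda: "hello"
sys.modules["new_module"] = new_module

# The old module keeps working through a module-level __getattr__ (PEP 562).
old_module = types.ModuleType("old_module")


def _old_getattr(name):
    if name == "my_function":
        warnings.warn(
            "'old_module.my_function' is deprecated, "
            "use 'new_module.my_function' instead.",
            DeprecationWarning,
            stacklevel=2,
        )
        return new_module.my_function
    raise AttributeError(f"module 'old_module' has no attribute '{name}'")


old_module.__getattr__ = _old_getattr
sys.modules["old_module"] = old_module

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    from old_module import my_function  # still works, but emits a warning

print(my_function(), len(caught))  # → hello 1
```

The old import path keeps working during the deprecation period, while users who enable DeprecationWarning see exactly what to change.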
What if we could make this easier?
By hiding your module layout from your public API, you're removing all these pain points at once. Any object can freely move around without ever impacting users. Maintainers do not need to set deprecation periods where old and new uses are supported, or bump the major part of their semantic version when they stop supporting the old use. Hiding the module layout also removes the ambiguity of whether a submodule is considered public or not: PEP 8 doesn't mention anything about it, and it doesn't look like the __all__ convention expects developers to list their submodules too. In the end it looks like submodules are only subject to the underscore prefix convention.
So, how do we hide the module layout from the public API?
The most common way to hide the module layout is to make all your modules private, by prefixing their name with an underscore:
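For example, with a layout like this (the private module names are illustrative; the top-level __init__ imports and re-exports the public objects, such as cast_spell):

```
📁 my_package/
├── 📄 __init__.py   # imports and re-exports the public objects
├── 📄 _combat.py
└── 📄 _spells.py
```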
Now, if you want to move cast_spell into the _combat module, you can do so without impacting users. You can even rename your modules. All you have to do when doing so is update your top-level __init__ module to import the objects from the right locations.
If you have more than one layer of submodules, you don't have to make the next layer private: only the first one is enough, as it informs users that they shouldn't import from this layer anyway:
Whatever hidden layout you choose (private modules, internals, private package), it is not very important, as you will be able to switch from one to another easily. In Griffe we chose to experiment and go with the private package approach. This highlighted a few shortcomings that we were able to address in both Griffe and mkdocstrings-python, so we are happy with the result.
Top-level-only exposure doesn't play well with large packages.
The fully hidden layout plays well with small to medium projects. If you maintain a large project, it can become very impractical for both you and your users to expose every single object in the top-level __init__ module. For large projects, it therefore makes sense to keep at least one or two additional public layers in your module layout. Sometimes packages also implement many variations of the same abstract class, using the same name in many different modules: in these cases, the modules are effective namespaces that could be kept in the public API.
Our recommendation — Hide your module layout early.
Start hiding your module layout early! It is much easier to (partially) expose the layout later than to hide it after your users started relying on it. It will also make code reorganizations much easier.
Whether or not you are planning to hide your module layout, as recommended in the previous section, one thing that will help both you and your users is making sure your object names are unique across your code base. Having unique names ensures that you can expose everything at the top-level module of your package without having to alias objects (using from ... import x as y). It will also ensure that your users don't end up importing multiple different objects with the same name, again having to alias them. Finally, it forces you to use meaningful names for your objects, names that don't need the context of the above namespaces (generally modules) to understand what they mean. For example, in Griffe we previously exposed griffe.docstrings.utils.warning. Exposing warning at the top-level made it very vague: what does it do? So we renamed it docstring_warning, which is much clearer.
Ensuring unique names across a code base is sometimes not feasible, or not desirable; in this case, try to use namespacing while still hiding the module layout the best you can.
In accordance with our recommendation on module layouts, it is also useful to ensure that each public object is exposed in a single location. Ensuring a unique public location for each object removes any ambiguity on the user side as to where to import the object from. It also helps documentation generators that try to cross-reference objects: with several locations, they cannot know for sure which one is the best to reference (which path is best to use and display in the generated documentation). With a fully hidden layout, all objects are only exposed in the top-level module, so there is no ambiguity. With partially hidden layouts, or completely public layouts, make sure to declare your public API so that each object is only exposed in a single location. Example:
📁 my_package/
├── 📄 __init__.py
└── 📄 module.py
Here the Hello class is exposed in both my_package.module and my_package.
It feels weird to "unpublicize" the Hello class in my_package.module by declaring an empty __all__, so maybe the module should be made private instead: my_package/_module.py. See other hiding techniques in the Module layout section.
Our recommendation — Expose public objects in single locations, use meaningful names.
We recommend making sure that each public object is exposed in a single location. Ensuring unique names might be more tricky depending on the code base, so we recommend ensuring meaningful names at least, not requiring the context of modules above to understand what the objects are for.
The documentation of the standard logging library recommends using __name__ as the logger name when obtaining a logger with logging.getLogger(), unless there is a specific reason not to. Unfortunately, no examples of such specific reasons are given. So let us give one.
Using __name__ as the logger name means that your loggers have the same names as your module paths. For example, the module package/module.py, whose path and __name__ value are package.module, will have a logger with the same name, i.e. package.module. If your module layout is public, that's fine: renaming the module or moving it around is already a breaking change that you must document.
However, if your module layout is hidden, or if this particular module is private, then even though renaming it or moving it around is not a breaking change, the change of its logger name is. Indeed, by renaming your module (or moving it), you changed its __name__ value, and therefore its logger name.
Now, users that were relying on this name (for example to silence WARNING-level logs and below coming from this particular module) will see their logic break without any error and without any deprecation warning.
# For example, the following would have zero effect if `_module` was renamed `_other_module`.
+
It feels weird to "unpublicize" the Hello class in my_package.module by declaring an empty __all__, so maybe the module should be made private instead: my_package/_module.py. See other hiding techniques in the Module layout section.
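As a runnable sketch of the private-module approach (illustrative: the layout from the example above is built in a temporary directory so the snippet is self-contained):

```python
import sys
import tempfile
from pathlib import Path

# Build the hidden layout: the module is private (my_package/_module.py)
# and Hello is re-exported from the top-level package only, so it has a
# single public location.
root = Path(tempfile.mkdtemp())
pkg = root / "my_package"
pkg.mkdir()
(pkg / "_module.py").write_text("class Hello:\n    ...\n")
(pkg / "__init__.py").write_text(
    "from my_package._module import Hello\n\n__all__ = ['Hello']\n"
)

sys.path.insert(0, str(root))
import my_package

# Hello now has a single public location: my_package.Hello.
print(my_package.__all__)  # ['Hello']
```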
Our recommendation — Expose public objects in single locations, use meaningful names.
We recommend making sure that each public object is exposed in a single location. Ensuring unique names might be more tricky depending on the code base, so we recommend ensuring meaningful names at least, not requiring the context of modules above to understand what the objects are for.
The documentation of the standard logging library recommends using __name__ as the logger name when obtaining a logger with logging.getLogger(), unless there is a specific reason not to. Unfortunately, no examples of such specific reasons are given, so let us give one.
Using __name__ as logger names means that your loggers have the same name as your module paths. For example, the module package/module.py, whose path and __name__ value are package.module, will have a logger with the same name, i.e. package.module. If your module layout is public, that's fine: renaming the module or moving it around is already a breaking change that you must document.
However if your module layout is hidden, or if this particular module is private, then even though renaming it or moving it around is not a breaking change, the change of name of its logger is. Indeed, by renaming your module (or moving it), you changed its __name__ value, and therefore you changed its logger name.
Now, users that were relying on this name (for example to silence WARNING-level logs and below coming from this particular module) will see their logic break without any error and without any deprecation warning.
# For example, the following would have zero effect
# if `_module` was renamed `_other_module`.
package_module_logger = logging.getLogger("package._module")
package_module_logger.setLevel(logging.ERROR)
Could we emit a deprecation warning when users obtain the logger with the old name? Unfortunately, there is no standard way to do that. This would require patching logging.getLogger, which means it would only work when users actually use this method, in a Python interpreter, and not for all the other ways logging can be configured (configuration files, configuration dicts, etc.).
Since it is essentially impossible to deprecate a logger name, we recommend avoiding __name__ as a logger name, at the very least in private modules.
Our recommendation — Use a single logger.
Absolutely avoid using __name__ as logger name in private modules. If your module layout is hidden, or does not matter for logging purposes, just use the same logger everywhere by using your package name as logger name. Example: logger = logging.getLogger("griffe"). Show your users how to temporarily alter your global logger (typically with context managers) so that altering subloggers becomes unnecessary. Maybe even provide the utilities to do that.
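For instance, such a utility could look like this (a minimal sketch; the temporary_log_level name is hypothetical, and the "griffe" logger name follows the example above):

```python
import contextlib
import logging

logger = logging.getLogger("griffe")  # One package-level logger.

@contextlib.contextmanager
def temporary_log_level(level: int):
    """Temporarily change the level of the package logger."""
    old_level = logger.level
    logger.setLevel(level)
    try:
        yield
    finally:
        logger.setLevel(old_level)

# Users can silence the whole package for a given block:
with temporary_log_level(logging.ERROR):
    logger.warning("this warning is filtered out")
```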
Obviously, your public API should be documented. Each object should have a docstring that explains why the object is useful and how it is used. More on that in our docstrings recommendations. Docstrings work well for offline documentation; we recommend exposing your public API online too, for example with MkDocs and mkdocstrings' Python handler, or with other SSGs (Static Site Generators). Prefer a tool that is able to create a Sphinx-like inventory of objects (an objects.inv file) that will allow other projects to easily cross-reference your API from their own documentation. Make sure each and every object of your public API is documented in your web docs and therefore added to the objects inventory (and maybe that nothing else is added to this inventory as "public API").
Our recommendation — Document your public API extensively.
Write docstrings for each and every object of your public API. Deploy online documentation where each object is documented and added to an object inventory that can be consumed by third-party projects. If you find yourself reluctant to document a public object, it means that this object should maybe be internal instead.
Our documentation framework of choice is of course MkDocs combined with our mkdocstrings plugin.
If you already follow some of these recommendations, or if you decide to start following them, it might be a good idea to make sure that these recommendations keep being followed as your code base evolves. The intent of these recommendations, or "rules", can be captured in tests relatively easily thanks to Griffe.
We invite you to check out our own test file: test_internals.py. This test module asserts several things:
all public objects are exposed in the top-level griffe module
all public objects have unique names
all public objects have single locations
all public objects are added to the inventory (which means they are documented in our API docs)
no private object is added to the inventory
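Some of these checks do not even require Griffe; here is a minimal standard-library sketch of the kind of assertions such a test makes (the check_public_api helper is hypothetical, demonstrated against the stdlib json package, which declares an __all__):

```python
import importlib

def check_public_api(package_name: str) -> None:
    """Assert that every name declared public actually exists, exactly once."""
    pkg = importlib.import_module(package_name)
    public = list(getattr(pkg, "__all__", []))
    # Public names must be unique...
    assert len(public) == len(set(public)), "duplicate names in __all__"
    # ...and actually exposed on the package.
    for name in public:
        assert hasattr(pkg, name), f"{name} is declared public but not exposed"

check_public_api("json")  # Passes: json's __all__ is consistent.
```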
Our recommendation — Test your API declaration early.
The sooner you test your API declaration, the better your code base will evolve. It forces you to really think about how your API is exposed to your users, and prevents mistakes such as leaving a new object public when you don't want users to start relying on it, or forgetting to expose a public object in your top-level module or to document it in your API docs.
Depending on their configuration, many popular Python linters will warn you that you access or import private objects. This doesn't play well with hidden module layouts, where modules are private or moved under a private (sub-)package. Sometimes it doesn't even play well with private methods.
Our recommendation — Ignore "protected access" warnings for your own package, or make the warnings smarter.
To users of linters, we recommend adding # noqa comments on the relevant code lines, or globally disabling warnings related to "private object access" if per-line exclusion requires too much maintenance.
To authors of linters, we recommend (if possible) making these warnings smarter: they shouldn't be triggered when private objects are accessed from within the same package. Marking objects as private is meant to prevent downstream code from using them, not to prevent the developers of the current package from using them: they know what they are doing and should be allowed to use their own private objects without warnings. At the same time, they don't want to disable these warnings globally, so the warnings should be split into multiple variants, or made smarter.
This section deserves an entire article, but we will try to stay succinct here.
Generally, we distinguish the API (Application Programming Interface) from the CLI (Command Line Interface), TUI (Textual User Interface) or GUI (Graphical User Interface). Contrary to TUIs or GUIs which are not likely to be controlled programmatically (they typically work with keyboard and mouse inputs), the CLI can easily be called by various scripts or programs, including from Python programs.
Even if a project was not designed to be used programmatically (doesn't expose a public API), it is a certainty that with enough popularity, it will be used programmatically. And the CLI will even more so be used programmatically if there is no API. Even if there is an API, sometimes it makes more sense to hook into the CLI rather than the API (cross-language integrations, wrappers, etc.).
Therefore, we urge everyone to consider their CLI as API too. We urge everyone to always design their project as library-first APIs rather than CLI-first tools.
The first user of your CLI as API is... you. When you declare your project's CLI entrypoint in pyproject.toml:
[project.scripts]
griffe = "griffe:main"
...this entrypoint ends up as a Python script in the bin directory of your virtual environment:
Now instead of having to call main(["dump", "..."]) in your tests, you can directly call dump(...), with all the benefits of static typing and your IDE features, such as autocompletion, linting, etc.
The third and next users of your CLI as API are your users: just as you made your own life easier, you made their life easier for when they want to call some subcommands of your tool programmatically. No more messing with lists of strings without autocompletion or linting, no more patching of sys.argv, no more following the maze of transformations applied by this fancy CLI framework before finally reaching the crux of the subcommand you want to call, no more trying to replicate these transformations yourself with the CLI framework's API to avoid copy-pasting the dozens of lines you're only interested in.
Our recommendation — Decouple command-line parsing from your CLI entrypoints.
Do not tie the command parsing logic with your program's logic. Create functions early, make them accept arguments using basic types (int, str, list, etc.) so that your users can call your main command or subcommands with a single import and single statement. Do not encode all the logic in a single big main function. Decoupling the CLI-parsing logic from your entrypoints will make them much easier to test and use programmatically. Consider your entrypoints part of your API!
With time, the code base of your project evolves. You add features, you fix bugs, and you generally reorganize code. Some of these changes might make your project's public API incompatible with previous versions. In that case, you usually have to "deprecate" previous usage in favor of the new usage. That means you have to support both, and emit deprecation warnings when old usage is detected.
There are many different ways of deprecating previous usage of code, which depend on the change itself. We invite you to read our Checking APIs chapter, which describes all the API changes Griffe is able to detect, and provides hints on how to allow deprecation periods for each kind of change.
In addition to emitting deprecation warnings, you should also update the docstrings and documentation for the old usage to point at the new usage, add "deprecated" labels where possible, and mark objects as deprecated when possible.
Our recommendation — Allow deprecation periods, document deprecations.
Try allowing deprecation periods for every breaking change. Most changes can be made backward-compatible at the cost of writing legacy code. Use tools like Yore to manage legacy code, and standard utilities like warnings.deprecated to mark objects as deprecated. Griffe extensions such as griffe-warnings-deprecated can help you by dynamically augmenting docstrings for your API documentation.
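For example, a renamed function can keep a deprecated alias during the transition period (a generic sketch using the standard warnings module; on Python 3.13+, warnings.deprecated can mark the alias instead of a manual call):

```python
import warnings

def new_function(x: int) -> int:
    """The new, supported entry point."""
    return x * 2

def old_function(x: int) -> int:
    """Deprecated: use new_function instead (kept for backward compatibility)."""
    warnings.warn(
        "old_function is deprecated, use new_function instead",
        DeprecationWarning,
        stacklevel=2,
    )
    return new_function(x)
```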
A few third-party libraries directly or indirectly related to public APIs deserve to be mentioned here.
public lets you decorate objects with @public.add to dynamically add them to __all__, so that you don't have to build a list of strings yourself. The "public visibility" marker is closer to each object, and might help avoiding mistakes like forgetting to update __all__ when an object is removed or renamed.
modul, from Frost Ming, the author of PDM, goes one step further and actually hides attributes that are not marked "exported" from users: they won't be able to access un-exported attributes, leaving only the public API visible.
Deprecated, which was probably a source of inspiration for PEP 702, allows decorating objects with @deprecated to mark them as deprecated. Such decorated callables will emit deprecation warnings when called. PEP 702's warnings.deprecated could be seen as its successor, bringing the feature directly into the standard library so that type checkers and other static analysis tools can converge on this way to mark objects as deprecated.
slothy, which is less directly related to public APIs, but useful for the case where you are hiding your modules layout and exposing all your public API from the top-level __init__ module. Depending on the size of your public API, and the time it takes to import everything (memory initializations, etc.), it might be interesting to make all these imports lazy. With a lazily imported public API, users who are only interested in a few objects of your public API won't have to pay the price of importing everything.
Why? Because the package.subpackage.thing submodule can eventually shadow the package.subpackage.thing attribute. Try this:
# Replicate the file tree from above.
>>> from package import subpackage
>>> subpackage.thing
'thing from thing'
>>> # Still OK: the attribute is accessible.
>>> import package.subpackage.thing
>>> subpackage.thing  # The submodule now shadows the attribute.
<module 'package.subpackage.thing'>
From an API perspective, and given that both cases are very similar but differ in behavior, we recommend not doing that either.
If the goal is to isolate a single object into its own module, to then expose it in the parent module, then it would make sense that this object is the only object of the submodule to be exposed in the public API, and therefore the submodule could be marked as private by prefixing its name with an underscore:
With this, there is no ambiguity as to what subpackage.thing points to.
For the reasons mentioned above, Griffe does not support this kind of name shadowing. During static analysis, the submodule will take precedence over the attribute. During dynamic analysis, Griffe's behavior is undefined.
Wildcard imports allow importing, from a given module, all objects that do not start with an underscore _, or all objects listed in the module's __all__ attribute, if it is defined.
Here, this and that will also be imported when we do from package.module import *. To prevent that, we would have to alias these names as such:
package/module.py
from somewhere_else import this as _this, that as _that
...which is not ideal.
It gets even worse if module.py itself uses wildcard imports:
package/module.py
from somewhere_else import *
Now, from package.module import * will import all non-underscore-prefixed objects declared in the module, but also all non-underscore-prefixed objects it imports, and also all such objects imported by the modules those objects come from, and so on, recursively. Soon enough, we end up with dozens and dozens of objects exposed in package, while just a few of them are useful/meaningful to users.
Not only that, but it also increases the risk of creating cycles in imports. Python can handle some of these cycles, but static analysis tools such as Griffe can have a much harder time trying to resolve them.
In the explicit case, the situation improves, as only the objects listed in __all__ will be exported to the modules that wildcard-import from it. It effectively stops namespace pollution, but it does not remove the risk of cyclic imports, only decreases it.
We have seen code bases where parent modules wildcard-import from submodules, while these submodules also wildcard-import from the parent modules... Python somehow handles this, but it is hell to handle statically, and it is just too error-prone (cyclic imports, name shadowing, namespaces becoming dependent on the order of imports, etc.).
For these reasons, we recommend not using wildcard imports. Instead, we recommend declaring your public API explicitly with __all__, and combining __all__ lists together if needed:
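For example (illustrative names, built in a temporary directory so the sketch is runnable as-is):

```python
import sys
import tempfile
import textwrap
from pathlib import Path

# Two private modules each declare their own __all__; the top-level
# __init__.py imports the objects explicitly and combines the lists.
root = Path(tempfile.mkdtemp())
pkg = root / "package"
pkg.mkdir()
(pkg / "_module_a.py").write_text('__all__ = ["ThingA"]\nclass ThingA: ...\n')
(pkg / "_module_b.py").write_text('__all__ = ["ThingB"]\nclass ThingB: ...\n')
(pkg / "__init__.py").write_text(textwrap.dedent("""\
    from package import _module_a, _module_b
    from package._module_a import ThingA
    from package._module_b import ThingB

    __all__ = [*_module_a.__all__, *_module_b.__all__]
"""))

sys.path.insert(0, str(root))
import package

print(package.__all__)  # ['ThingA', 'ThingB']
```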
Within your own code base, we recommend using canonical imports. By canonical, we mean importing objects from the module they are declared in, and not from another module that also imports them.
from package.module_a import thing  # Indirect import, bad.
# Recommending users to do `from package import np`
# or `import package; package.np.etc`: bad.
Instead, let users import NumPy themselves, with import numpy as np. This will help other analysis tools, for example to detect that NumPy is used directly and should therefore be listed as a dependency. To quote PEP 8:
Imported names should always be considered an implementation detail. Other modules must not rely on indirect access to such imported names unless they are an explicitly documented part of the containing module’s API, such as os.path or a package’s __init__ module that exposes functionality from submodules.
Emphasis on exposes functionality from submodules: PEP 8 says nothing about exposing functionality from external packages.
Using canonical imports provides several benefits:
it can reduce the risk of cyclic imports
it can increase performance by reducing hops and importing fewer things (for example by not passing through a parent module that imports many objects from sibling modules)
it makes the code more readable and easier to refactor (less indirections)
it makes the life of static analysis tools easier (less indirections)
We recommend using the canonical-imports tool to automatically rewrite your imports as canonical.
Note however that we recommend using public imports (importing from the "public" locations rather than the canonical ones) when:
importing from other packages
importing from your own package within your tests suite
Apply these recommendations at your discretion: there may be other special cases where it might not make sense to use canonical imports.
Make your compiled objects tell their true location
Python modules can be written in other languages (C, C++, Rust) and compiled. To extract information from such compiled modules, we have to use dynamic analysis, since sources are not available.
A practice that seems common in projects shipping compiled modules in their distributions is to make the compiled modules private (prefix their names with an underscore), and to expose their objects from a public module higher up in the module layout, for example by wildcard-importing everything from them.
Since the objects are exposed in package.module instead of package._module, developers sometimes decide to make their compiled objects lie about their location, and make them say that they are defined in package.module instead of package._module. Example:
>>> from package._module import MyObject
>>> MyObject.__module__
'package.module'
Griffe Insiders is a private fork of Griffe, hosted as a private GitHub repository. Almost1 all new features are developed as part of this fork, which means that they are immediately available to all eligible sponsors, as they are made collaborators of this repository.
Every feature is tied to a funding goal in monthly subscriptions. When a funding goal is hit, the features that are tied to it are merged back into Griffe and released for general availability, making them available to all users. Bugfixes are always released in tandem.
Sponsorships make this project sustainable, as they buy the maintainers of this project time – a very scarce resource – which is spent on the development of new features, bug fixing, stability improvement, issue triage and general support. The biggest bottleneck in Open Source is time.3
If you're unsure if you should sponsor this project, check out the list of completed funding goals to learn whether you're already using features that were developed with the help of sponsorships. You're most likely using at least a handful of them, thanks to our awesome sponsors!
The moment you become a sponsor, you'll get immediate access to 13 additional features that you can start using right away, and which are currently exclusively available to sponsors:
Thanks for your interest in sponsoring! In order to become an eligible sponsor with your GitHub account, visit pawamoy's sponsor profile, and complete a sponsorship of $10 a month or more. You can use your individual or organization GitHub account for sponsoring.
Sponsorships lower than $10 a month are also very much appreciated, and useful. They won't grant you access to Insiders, but they will be counted towards reaching sponsorship goals. Every sponsorship helps us implement new features and release them to the public.
Important: If you're sponsoring @pawamoy through a GitHub organization, please send a short email to insiders@pawamoy.fr with the name of your organization and the GitHub account of the individual that should be added as a collaborator.4
If you sponsor publicly, you're automatically added here with a link to your profile and avatar to show your support for Griffe. Alternatively, if you wish to keep your sponsorship private, you'll be a silent +1. You can select visibility during checkout and change it afterwards.
The following section lists all funding goals. Each goal contains a list of features prefixed with a checkmark symbol, denoting whether a feature is already available or planned, but not yet implemented. When the funding goal is hit, the features are released for general availability.
This section lists all funding goals that were previously completed, which means that those features were part of Insiders, but are now generally available and can be used by all users.
We're building an open source project and want to allow outside collaborators to use Griffe locally without having access to Insiders. Is this still possible?
Yes. Insiders is compatible with Griffe. Almost all new features and configuration options are either backward-compatible or implemented behind feature flags. While Insiders features add value for the users of your project, they shouldn't be necessary for outside collaborators making changes to it.
We don't want to pay for sponsorship every month. Are there any other options?
Yes. You can sponsor on a yearly basis by switching your GitHub account to a yearly billing cycle. If for some reason you cannot do that, you could also create a dedicated GitHub account with a yearly billing cycle, which you only use for sponsoring (some sponsors already do that).
If you have any problems or further questions, please reach out to insiders@pawamoy.fr.
Are we allowed to use Insiders under the same terms and conditions as Griffe?
Yes. Whether you're an individual or a company, you may use Griffe Insiders precisely under the same terms as Griffe, which are given by the ISC License. However, we kindly ask you to respect our fair use policy:
Please don't distribute the source code of Insiders. You may freely use it for public, private or commercial projects, privately fork or mirror it, but please don't make the source code public, as it would counteract the sponsorware strategy.
If you cancel your subscription, you're automatically removed as a collaborator and will miss out on all future updates of Insiders. However, you may use the latest version that's available to you as long as you like. Just remember that GitHub deletes private forks.
In general, every new feature is first exclusively released to sponsors, but sometimes upstream dependencies enhance existing features that must be supported by Griffe. ↩
Note that $10 a month is the minimum amount to become eligible for Insiders. While GitHub Sponsors also allows sponsoring lower or one-time amounts, those can't be granted access to Insiders for technical reasons. Such contributions are still very much welcome as they help ensure the project's sustainability. ↩
Making an Open Source project sustainable is exceptionally hard: maintainers burn out, projects are abandoned. That's not great and very unpredictable. The sponsorware model ensures that if you decide to use Griffe, you can be sure that bugs are fixed quickly and new features are added regularly. ↩
It's currently not possible to grant access to each member of an organization, as GitHub only allows for adding users. Thus, after sponsoring, please send an email to insiders@pawamoy.fr, stating which account should become a collaborator of the Insiders repository. We're working on a solution which will make access to organizations much simpler. To ensure that access is not tied to a particular individual GitHub account, create a bot account (i.e. a GitHub account that is not tied to a specific individual), and use this account for the sponsoring. After being added to the list of collaborators, the bot account can create a private fork of the private Insiders GitHub repository, and grant access to all members of the organizations. ↩
If you cancel your sponsorship, GitHub schedules a cancellation request which will become effective at the end of the billing cycle. This means that even though you cancel your sponsorship, you will keep your access to Insiders as long as your cancellation isn't effective. All charges are processed by GitHub through Stripe. As we don't receive any information regarding your payment, and GitHub doesn't offer refunds, sponsorships are non-refundable. ↩
Griffe Insiders is a private fork of Griffe, hosted as a private GitHub repository. Almost1all new features are developed as part of this fork, which means that they are immediately available to all eligible sponsors, as they are made collaborators of this repository.
Every feature is tied to a funding goal in monthly subscriptions. When a funding goal is hit, the features that are tied to it are merged back into Griffe and released for general availability, making them available to all users. Bugfixes are always released in tandem.
Sponsorships make this project sustainable, as they buy the maintainers of this project time – a very scarce resource – which is spent on the development of new features, bug fixing, stability improvement, issue triage and general support. The biggest bottleneck in Open Source is time.3
If you're unsure if you should sponsor this project, check out the list of completed funding goals to learn whether you're already using features that were developed with the help of sponsorships. You're most likely using at least a handful of them, thanks to our awesome sponsors!
The moment you become a sponsor, you'll get immediate access to 13 additional features that you can start using right away, and which are currently exclusively available to sponsors:
Thanks for your interest in sponsoring! In order to become an eligible sponsor with your GitHub account, visit pawamoy's sponsor profile, and complete a sponsorship of $10 a month or more. You can use your individual or organization GitHub account for sponsoring.
Sponsorships lower than $10 a month are also very much appreciated, and useful. They won't grant you access to Insiders, but they will be counted towards reaching sponsorship goals. Every sponsorship helps us implement new features and release them to the public.
Important: If you're sponsoring @pawamoy through a GitHub organization, please send a short email to insiders@pawamoy.fr with the name of your organization and the GitHub account of the individual that should be added as a collaborator.4
If you sponsor publicly, you're automatically added here with a link to your profile and avatar to show your support for Griffe. Alternatively, if you wish to keep your sponsorship private, you'll be a silent +1. You can select visibility during checkout and change it afterwards.
The following section lists all funding goals. Each goal contains a list of features prefixed with a checkmark symbol, denoting whether a feature is already available or planned, but not yet implemented. When the funding goal is hit, the features are released for general availability.
This section lists all funding goals that were previously completed, which means that those features were part of Insiders, but are now generally available and can be used by all users.
We're building an open source project and want to allow outside collaborators to use Griffe locally without having access to Insiders. Is this still possible?
Yes. Insiders is compatible with Griffe. Almost all new features and configuration options are either backward-compatible or implemented behind feature flags. Most Insiders features enhance the overall experience; while these features add value for the users of your project, they shouldn't be necessary for previewing when making changes to content.
We don't want to pay for sponsorship every month. Are there any other options?
Yes. You can sponsor on a yearly basis by switching your GitHub account to a yearly billing cycle. If for some reason you cannot do that, you could also create a dedicated GitHub account with a yearly billing cycle, which you only use for sponsoring (some sponsors already do that).
If you have any problems or further questions, please reach out to insiders@pawamoy.fr.
Are we allowed to use Insiders under the same terms and conditions as Griffe?
Yes. Whether you're an individual or a company, you may use Griffe Insiders precisely under the same terms as Griffe, which are given by the ISC License. However, we kindly ask you to respect our fair use policy:
Please don't distribute the source code of Insiders. You may freely use it for public, private or commercial projects, privately fork or mirror it, but please don't make the source code public, as it would counteract the sponsorware strategy.
If you cancel your subscription, you're automatically removed as a collaborator and will miss out on all future updates of Insiders. However, you may use the latest version that's available to you as long as you like. Just remember that GitHub deletes private forks.
In general, every new feature is first exclusively released to sponsors, but sometimes upstream dependencies enhance existing features that must be supported by Griffe. ↩
Note that $10 a month is the minimum amount to become eligible for Insiders. While GitHub Sponsors also allows sponsoring lower or one-time amounts, those can't be granted access to Insiders due to technical reasons. Such contributions are still very much welcome as they help ensure the project's sustainability. ↩
Making an Open Source project sustainable is exceptionally hard: maintainers burn out, projects are abandoned. That's not great and very unpredictable. The sponsorware model ensures that if you decide to use Griffe, you can be sure that bugs are fixed quickly and new features are added regularly. ↩
It's currently not possible to grant access to each member of an organization, as GitHub only allows for adding users. Thus, after sponsoring, please send an email to insiders@pawamoy.fr, stating which account should become a collaborator of the Insiders repository. We're working on a solution which will make access to organizations much simpler. To ensure that access is not tied to a particular individual GitHub account, create a bot account (i.e. a GitHub account that is not tied to a specific individual), and use this account for the sponsoring. After being added to the list of collaborators, the bot account can create a private fork of the private Insiders GitHub repository, and grant access to all members of the organizations. ↩
If you cancel your sponsorship, GitHub schedules a cancellation request which will become effective at the end of the billing cycle. This means that even though you cancel your sponsorship, you will keep your access to Insiders as long as your cancellation isn't effective. All charges are processed by GitHub through Stripe. As we don't receive any information regarding your payment, and GitHub doesn't offer refunds, sponsorships are non-refundable. ↩
We provide this function for static analysis. It uses a NodeVisitor-like class, the Visitor, to compile and parse code (using compile) then visit the resulting AST (Abstract Syntax Tree).
Important
This function is generally not used directly. In most cases, users can rely on the GriffeLoader and its accompanying load shortcut and their respective options to load modules using static analysis.
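The compile-then-visit technique can be sketched with the standard library alone. The following is an illustration of the general approach, not Griffe's actual Visitor class: it parses a source string into an AST and walks it with an ast.NodeVisitor subclass, recording function names and line numbers.

```python
import ast

code = """
def greet(name):
    '''Return a greeting.'''
    return f"Hello, {name}!"
"""

class FunctionCollector(ast.NodeVisitor):
    """Record the names and line numbers of function definitions."""

    def __init__(self):
        self.functions = []

    def visit_FunctionDef(self, node):
        self.functions.append((node.name, node.lineno))
        self.generic_visit(node)

tree = ast.parse(code)  # compiles and parses the source into an AST
collector = FunctionCollector()
collector.visit(tree)
print(collector.functions)  # [('greet', 2)]
```

Griffe's own Visitor applies the same pattern, but builds full Module, Class, Function and Attribute objects instead of a flat list.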
Sometimes we cannot get the source code of a module or an object, typically built-in modules like itertools. The only way to know what they are made of is to actually import them and inspect their contents.
Sometimes, even if the source code is available, loading the object is desired because it was created or modified dynamically, and our static agent is not powerful enough to infer all these dynamic modifications. In this case, we load the module using introspection.
The inspection agent works similarly to the regular Visitor agent, in that it maintains a state with the current object being handled, and recursively handles its members.
Important
This function is generally not used directly. In most cases, users can rely on the GriffeLoader and its accompanying load shortcut and their respective options to load modules using dynamic analysis.
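The import-and-inspect approach can be sketched with the standard library. This is a simplified illustration of the technique, not Griffe's actual inspection agent: it imports itertools (a built-in module with no Python source) and walks its public members with the inspect module.

```python
import importlib
import inspect

# Built-in modules like itertools have no Python source file, so the
# only way to discover their contents is to import and introspect them.
module = importlib.import_module("itertools")

# Walk the module's public members, recording their kind and whether
# they carry a docstring, much like an inspection agent building a tree.
members = {}
for name, obj in inspect.getmembers(module):
    if name.startswith("_"):
        continue
    kind = "class" if inspect.isclass(obj) else "other"
    members[name] = (kind, inspect.getdoc(obj) is not None)

print("chain" in members)  # True: itertools.chain was discovered
```

A real inspection agent would additionally recurse into classes and handle descriptors, but the state-plus-recursion structure is the same.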
It can be a module, class, method, function, attribute, nested arbitrarily.
It works like this:
for a given object path a.b.x.y
it tries to import a.b.x.y as a module (with importlib.import_module)
if it fails, it tries again with a.b.x, storing y
then a.b, storing x.y
then a, storing b.x.y
if nothing worked, it raises an error
if one of the iterations worked, it moves on, and...
it tries to get the remaining (stored) parts with getattr
for example it gets b from a, then x from b, etc.
if a single attribute access fails, it raises an error
if everything worked, it returns the last obtained attribute
Since the function potentially tries multiple things before succeeding, all errors happening along the way are recorded, and re-emitted with an ImportError when it fails, to let users know what was tried.
Important
The paths given through the import_paths parameter are used to temporarily patch sys.path: this function is therefore not thread-safe.
Important
The paths given as import_paths must be correct. The contents of sys.path must be consistent with what a user of the imported code would expect. Given a set of paths, if the import fails for a user, it will fail here too, with potentially unintuitive errors. If we wanted to make this function more robust, we could add a loop to "roll the window" of given paths, shifting them to the left (for example: ("/a/a", "/a/b", "/a/c/"), then ("/a/b", "/a/c", "/a/a/"), then ("/a/c", "/a/a", "/a/b/")), to make sure each entry is given highest priority at least once, maintaining relative order, but we deem this unnecessary for now.
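The iterative import-then-getattr strategy described above can be sketched as follows. This is a simplified illustration, not Griffe's actual implementation: it omits the import_paths handling and sys.path patching, and the helper name import_object is hypothetical.

```python
import importlib

def import_object(path: str):
    """Import the deepest importable module in `path`, then getattr the rest."""
    parts = path.split(".")
    errors = []
    # Try a.b.x.y as a module, then a.b.x (storing y), then a.b, then a.
    for cut in range(len(parts), 0, -1):
        module_path = ".".join(parts[:cut])
        try:
            obj = importlib.import_module(module_path)
        except ImportError as error:
            errors.append(error)
            continue
        # Fetch the remaining (stored) parts with getattr:
        # for example, get b from a, then x from b, etc.
        for attr in parts[cut:]:
            try:
                obj = getattr(obj, attr)
            except AttributeError as error:
                raise ImportError(f"cannot get {attr!r} from {module_path!r}") from error
        return obj
    # Nothing worked: re-emit the recorded errors so users know what was tried.
    raise ImportError("; ".join(str(error) for error in errors))

print(import_object("os.path.join").__name__)  # join
```

Note how a single failing attribute access raises immediately, while failing module imports only shorten the path and try again, exactly as in the steps above.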
It's not really a tree but more a backward-linked list: each node has a reference to its parent, but not to its child (for simplicity purposes and to avoid bugs).
Each node stores an object, its name, and a reference to its parent node.
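This parent-only linking can be sketched as follows. The Node class below is a hypothetical illustration of the structure, not the actual ObjectNode class: each node keeps a reference to its parent but no list of children.

```python
from __future__ import annotations

from dataclasses import dataclass
from typing import Any, Optional

@dataclass
class Node:
    """A node storing an object, its name, and a reference to its parent only."""

    obj: Any
    name: str
    parent: Optional[Node] = None

    def path(self) -> str:
        """Walk parent references upward to rebuild the dotted path."""
        if self.parent is None:
            return self.name
        return f"{self.parent.path()}.{self.name}"

# Children point to their parent, but parents hold no references to children.
root = Node(obj=object(), name="package")
child = Node(obj=object(), name="module", parent=root)
leaf = Node(obj=object(), name="func", parent=child)
print(leaf.path())  # package.module.func
```

Because links only go upward, nodes can always locate their ancestors, while the absence of child references keeps the structure simple and avoids stale-reference bugs.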