We welcome all contributions! ✨
See the EBP Contributing Guide for general details, and below for guidance specific to markdown-it-py.
Before continuing, make sure you've read the details of the port, which can be found in the `markdown_it/port.yaml` file and in the `port.yaml` files within the extension folders.
Code style is tested using flake8, with the configuration set in `.flake8`, and code formatted with black.
Installing with `markdown-it-py[code_style]` makes the pre-commit package available, which will ensure this style is met before commits are submitted, by reformatting the code and testing for lint errors. It can be set up with:
```shell
>> cd markdown-it-py
>> pre-commit install
```
Editors like VS Code also have automatic code reformat utilities, which can adhere to this standard.
All functions and class methods should be annotated with types and include a docstring. The preferred docstring format is outlined in `markdown-it-py/docstring.fmt.mustache` and can be applied automatically with the autoDocstring VS Code extension.
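As a rough illustration only (the function below is made up, and the exact layout is defined by the mustache template; this sketch assumes a Sphinx-style format):

```python
from markdown_it.token import Token

def count_tokens(tokens: list[Token], token_type: str) -> int:
    """Count the tokens of a given type in a token stream.

    :param tokens: the token stream to search
    :param token_type: the token ``type`` to match
    :return: the number of matching tokens
    """
    return sum(1 for token in tokens if token.type == token_type)
```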
For code tests, markdown-it-py uses pytest:
```shell
>> cd markdown-it-py
>> pytest
```
You can also use tox to run the tests in multiple isolated environments (see the `tox.ini` file for available test environments):
```shell
>> cd markdown-it-py
>> tox -p
```
This can also be used to run benchmarking tests using pytest-benchmark:
```shell
>> cd markdown-it-py
>> tox -e py38-bench-packages -- --benchmark-min-rounds 50
```
For documentation build tests:
```shell
>> cd markdown-it-py/docs
>> make clean
>> make html-strict
```
The renderer can also be customised, for example to add `target="_blank"` for the links (see the sketch below).
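A minimal sketch using the `add_render_rule` API (the rule function name is illustrative; `renderToken` passes the token on to the default renderer):

```python
from markdown_it import MarkdownIt

md = MarkdownIt("commonmark")

def render_blank_link(self, tokens, idx, options, env):
    # add target="_blank" to every link_open token
    tokens[idx].attrSet("target", "_blank")
    # pass the token to the default renderer
    return self.renderToken(tokens, idx, options, env)

md.add_render_rule("link_open", render_blank_link)
print(md.render("[example](https://example.com)"))
# <p><a href="https://example.com" target="_blank">example</a></p>
```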
**I need an async rule, how to do it?**

Sorry, you can't do it directly. All complex parsers are sync by nature. But you can use workarounds:

1. On the parse phase, replace the content that needs async data with a random placeholder and store it in `env`.
2. Render as usual.
3. Replace the placeholder in the output with the data received from the async request.

Alternatively, you can render the HTML, then parse it to a DOM or an AST, and apply transformations in a more convenient way.
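As a rough illustration of the placeholder workaround (the `fetched` mapping and the helper itself are hypothetical, not part of the API):

```python
from markdown_it import MarkdownIt

md = MarkdownIt()

def render_with_async_data(src: str, fetched: dict[str, str]) -> str:
    """Hypothetical helper: ``fetched`` maps placeholder keys to data
    that was retrieved asynchronously before this call."""
    env: dict = {}
    # 1. parse: a plugin rule would swap async content for placeholder
    #    keys, storing them in `env` for later lookup
    tokens = md.parse(src, env)
    # 2. render as usual
    html = md.renderer.render(tokens, md.options, env)
    # 3. replace each placeholder with its async result
    for key, value in fetched.items():
        html = html.replace(key, value)
    return html
```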
**How to replace part of a text token with a link?**

The right sequence is to split the text into several tokens and add link tokens in between. The result will be: `text` + `link_open` + `text` + `link_close` + `text`.

See the implementations of linkify and emoji - those do text token splits.

Note: Don't try to replace text with HTML markup! That's not secure.
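A minimal sketch of such a split, assuming the slice to be linked and the target URL are already known (`split_text_token` is an illustrative helper, not a library function):

```python
from markdown_it.token import Token

def split_text_token(token: Token, start: int, stop: int, url: str) -> list[Token]:
    """Replace token.content[start:stop] with a link to `url`."""
    text, level = token.content, token.level

    before = Token("text", "", 0, content=text[:start], level=level)

    link_open = Token("link_open", "a", 1, level=level)
    link_open.attrSet("href", url)
    label = Token("text", "", 0, content=text[start:stop], level=level + 1)
    link_close = Token("link_close", "a", -1, level=level)

    after = Token("text", "", 0, content=text[stop:], level=level)

    # text + link_open + text + link_close + text
    return [before, link_open, label, link_close, after]
```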
**Why is my inline rule not executed?**

The inline parser skips pieces of text to optimize speed. It stops only on a small set of chars, which can start tokens. We did not make this list extensible, also for performance reasons.

If you are absolutely sure that something important is missing there, create a ticket and we will consider adding it as a new char code.