Compare commits

10 commits: `83cd4d7119...ab799fef5b`

| SHA1 |
|---|
| ab799fef5b |
| 93fb55b53e |
| 5abd2ae86b |
| a9b7fac7fe |
| dbd93e4956 |
| f3fabf2d8a |
| 382423fe5a |
| d497ce9c71 |
| 68603a93d6 |
| 468e219ca8 |

24 changed files with 685 additions and 206 deletions
**README.md** (95 changes)
```diff
@@ -1,4 +1,6 @@
-# upload-generator
+# upload-generator (ARCHIVE)
 
+> This project has been superseded by my current web gallery build system. It won't receive any more updates.
+
 Script to generate multi-gallery upload-ready files.
 
```
````diff
@@ -7,9 +9,32 @@ Script to generate multi-gallery upload-ready files.
 - A Python environment to install dependencies (`pip install -r requirements.txt`); if unsure, create a fresh one with `virtualenv venv`.
 - LibreOffice 6.0+, making sure that `libreoffice` is in your PATH.
 
+## Installation
+
+I recommend creating a virtualenv first. Linux/macOS/Unix example:
+
+```sh
+virtualenv venv
+source venv/bin/activate # Also run every time you use this tool
+pip install -r requirements.txt
+activate-global-python-argcomplete
+```
+
+Windows example (autocompletion is not available):
+
+```powershell
+virtualenv venv
+.\venv\Scripts\activate # Also run every time you use this tool
+pip install -r requirements.txt
+```
+
+## Testing
+
+Run `python test.py`.
+
 ## Usage
 
-Run with `python main.py -h` for options. Generated files are output to `./out` by default.
+Run with `python main.py -h` (or simply `./main.py -h`) for options. Generated files are output to `./out` by default.
 
 ### Story files
 
````
````diff
@@ -29,43 +54,69 @@ In order to parse descriptions, you need a configuration file (default path is `
 }
 ```
 
-Uppercase letters are optional. Only include your username for websites that you wish to generate descriptions for.
+Uppercase letters for usernames are optional. Only include your username for websites that you wish to generate descriptions/stories for.
 
+#### Basic formatting
+
 Input descriptions should be formatted as BBCode. The following tags are accepted:
 
 ```bbcode
 [b]Bold text[/b]
 [i]Italic text[/i]
-[url=https://github.com]URL link[/url]
+[u]Underline text[/u]
+[center]Center-aligned text[/center]
+[url=https://github.com/BadMannersXYZ]URL link[/url]
 ```
 
-There are also special tags to link to yourself or other users automatically. This may include websites not available in the configuration:
+#### Self-link formatting
 
+`[self][/self]` will create a link to yourself for each website, with the same formatting as the `[user]...[/user]` switch. The inside of this tag must always be empty.
+
+#### Conditional formatting
+
+Another special set of tags is `[if=...][/if]` or `[if=...][/if][else][/else]`. The `if` tag lets you conditionally show content. The `else` tag is optional but must appear immediately after an `if` tag (no whitespace in-between), and displays whenever the condition is false instead.
+
+The following parameters are available:
+
+- `site`: matched against the target website, e.g. `[if=site==fa]...[/if]` or `[if=site!=furaffinity]...[/if][else]...[/else]`
+- `define`: matched against option(s) passed to the script on the command line (i.e. with the `-D / --define-option` flag), e.g. `[if=define==prod]...[/if][else]...[/else]` or `[if=define in possible_flag_1,possible_flag_2]...[/if][else]...[/else]`
+
+The following conditions are available:
+
+- `==`: e.g. `[if=site==eka]Only show this on Eka's Portal.[/if][else]Show this everywhere except Eka's Portal![/else]`
+- `!=`: e.g. `[if=site!=eka]Show this everywhere except Eka's Portal![/if]`
+- ` in `: e.g. `[if=site in eka,fa]Only show this on Eka's Portal or Fur Affinity...[/if]`
+
+#### Switch formatting
+
+You can use special switch tags, which will generate different information per website automatically. There are two options available: creating different URLs per website, or linking to different users.
 
 ```bbcode
-[self][/self]
-[eka]EkasPortalUser[/eka]
-[fa]FurAffinityUser[/fa]
-[weasyl]WeasylUser[/weasyl]
-[ib]InkbunnyUser[/ib]
-[sf]SoFurryUser[/sf]
-[twitter]@TwitterUser[/twitter] - Leading '@' is optional
-[mastodon]@MastodonUser@mastodoninstance.com[/mastodon] - Leading '@' is optional
+Available for both [user]...[/user] and [siteurl]...[/siteurl] tags
+- [generic=https://example.com/GenericUser]Generic text to display[/generic]
+- [eka=EkasPortalUser][/eka] [aryion=EkasPortalUser][/aryion]
+- [fa=FurAffinityUser][/fa] [furaffinity=FurAffinityUser][/furaffinity]
+- [weasyl=WeasylUser][/weasyl]
+- [ib=InkbunnyUser][/ib] [inkbunny=InkbunnyUser][/inkbunny]
+- [sf=SoFurryUser][/sf] [sofurry=SoFurryUser][/sofurry]
+
+Available only for [user]...[/user]
+- [twitter=@TwitterUser][/twitter] - Leading '@' is optional
+- [mastodon=@MastodonUser@mastodoninstance.com][/mastodon] - Leading '@' is optional
 ```
 
-`[self][/self]` tags must always be empty. The other tags are nestable and flexible, allowing attributes to display information differently on each supported website. Some examples:
+These tags are nestable and flexible, requiring attributes to display information differently on each supported website. Some examples:
 
 ```bbcode
-[eka=Lorem][/eka] is equivalent to [eka]Lorem[/eka].
+[user][eka]Lorem[/eka][/user] is equivalent to [user][eka=Lorem][/eka][/user].
 
-[fa=Ipsum]Dolor[/fa] shows Ipsum's username on FurAffinity, and Dolor everywhere else as a link to Ipsum's FA userpage.
+[user][fa=Ipsum]Dolor[/fa][/user] shows Ipsum's username on Fur Affinity, and "Dolor" everywhere else with a link to Ipsum's userpage on FA.
 
-[weasyl=Sit][ib=Amet][/ib][/weasyl] will show the two user links on Weasyl and Inkbunny as expected. For other websites, the innermost tag is prioritized - Inkbunny, in this case.
-[ib=Amet][weasyl=Sit][/weasyl][/ib] is the same as above, but the Weasyl link is prioritized instead.
+[user][ib=Sit][weasyl=Amet][twitter=Consectetur][/twitter][/weasyl][/ib][/user] will show different usernames on Inkbunny and Weasyl. For other websites, the innermost username and link are prioritized - Twitter, in this case.
+[user][ib=Sit][twitter=Consectetur][weasyl=Amet][/weasyl][/twitter][/ib][/user] is similar, but the Weasyl user data is prioritized for websites other than Inkbunny. In this case, the Twitter tag is rendered useless, since descriptions can't be generated for that website.
 
-[ib=Amet][weasyl=Sit]Consectetur[/weasyl][/ib] is the same as above, but Consectetur is displayed as the username for websites other than Inkbunny and Weasyl, with a link to the Weasyl gallery.
+[siteurl][sf=https://a.com][eka=https://b.com]Adipiscing[/eka][/sf][/siteurl] displays links on SoFurry and Eka's Portal, with "Adipiscing" as the link's text. Other websites won't display any link.
+[siteurl][sf=https://a.com][eka=https://b.com][generic=https://c.com]Adipiscing[/generic][/eka][/sf][/siteurl] is the same as above, but with the innermost generic tag serving as a fallback, guaranteeing that a link will be generated for all websites.
 
-[generic=https://github.com/BadMannersXYZ]Bad Manners[/generic] can be used as the innermost tag with a mandatory URL attribute and default username, and is similar to the URL tag, but it can be nested within other profile links. Those other profile links get used only at their respective websites.
+[user][fa=Elit][generic=https://github.com/BadMannersXYZ]Bad Manners[/generic][/fa][/user] shows how a generic tag can be used for user links as well, displayed everywhere aside from Fur Affinity in this example. User tags don't need an explicit fallback - the innermost tag is always used as a fallback for user links.
 ```
 
-Another special set of tags is `[if][/if]` and `[else][/else]`. The if tag receives a parameter for the condition (i.e. `[if=parameter==value]...[/if]` or `[if=parameter!=value]...[/if]`) to check on the current transformer, and lets you show or omit generated content respectively. The else tag is optional but must appear immediately after an if tag (no whitespace in-between), and displays whenever the condition is false instead. For now, the if tag only accepts the `site` parameter (eg. `[if=site==fa]...[/if][else]...[/else]` or `[if=site!=furaffinity]...[/if]`).
 
````
**description.py** (442 changes)
```diff
@@ -3,12 +3,18 @@ import io
 import json
 import lark
 import os
+import psutil
 import re
 import subprocess
 import typing
 
+from sites import SUPPORTED_SITE_TAGS
+
-SUPPORTED_USER_TAGS = ['eka', 'fa', 'weasyl', 'ib', 'sf', 'twitter', 'mastodon']
+SUPPORTED_USER_TAGS: typing.Mapping[str, typing.Set[str]] = {
+    **SUPPORTED_SITE_TAGS,
+    'twitter': {'twitter'},
+    'mastodon': {'mastodon'},
+}
 
 DESCRIPTION_GRAMMAR = r"""
 ?start: document_list
```
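The list-to-mapping change above feeds the grammar generation later in this file: each canonical site key maps to a set of accepted tag aliases. A minimal sketch of that expansion follows; the real contents of `sites.SUPPORTED_SITE_TAGS` are not shown in this diff, so the mapping below is an assumption for illustration only.

```python
# Assumed stand-in for sites.SUPPORTED_SITE_TAGS (the real module is not in this diff).
SUPPORTED_SITE_TAGS = {
    'aryion': {'eka', 'aryion'},
    'furaffinity': {'fa', 'furaffinity'},
}

SUPPORTED_USER_TAGS = {
    **SUPPORTED_SITE_TAGS,
    'twitter': {'twitter'},
    'mastodon': {'mastodon'},
}

# Mirrors the rule-name expansion in the diff: one `user_tag_<tag>`
# alternative per canonical tag, regardless of how many aliases it has.
rule = 'user_tag: user_tag_generic | '
rule += ' | '.join(f'user_tag_{tag}' for tag in SUPPORTED_USER_TAGS)
print(rule)
```

Because Python dicts preserve insertion order, the canonical keys expand in a stable order, which keeps the generated grammar deterministic.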
```diff
@@ -18,39 +24,58 @@ DESCRIPTION_GRAMMAR = r"""
 document: b_tag
     | i_tag
     | u_tag
+    | center_tag
     | url_tag
     | self_tag
     | if_tag
     | user_tag_root
+    | siteurl_tag_root
     | TEXT
 
 b_tag: "[b]" [document_list] "[/b]"
 i_tag: "[i]" [document_list] "[/i]"
 u_tag: "[u]" [document_list] "[/u]"
+center_tag: "[center]" [document_list] "[/center]"
 url_tag: "[url" ["=" [URL]] "]" [document_list] "[/url]"
 
 self_tag: "[self][/self]"
-if_tag: "[if=" CONDITION "]" [document_list] "[/if]" [ "[else]" document_list "[/else]" ]
+if_tag: "[if=" CONDITION "]" [document_list] "[/if]" [ "[else]" [document_list] "[/else]" ]
 
-user_tag_root: user_tag
-user_tag: generic_tag | """
+user_tag_root: "[user]" user_tag "[/user]"
+user_tag: user_tag_generic | """
 
-DESCRIPTION_GRAMMAR += ' | '.join(f'{tag}_tag' for tag in SUPPORTED_USER_TAGS)
-DESCRIPTION_GRAMMAR += ''.join(f'\n {tag}_tag: "[{tag}" ["=" USERNAME] "]" USERNAME "[/{tag}]" | "[{tag}" "=" USERNAME "]" [user_tag] "[/{tag}]"' for tag in SUPPORTED_USER_TAGS)
+DESCRIPTION_GRAMMAR += ' | '.join(f'user_tag_{tag}' for tag in SUPPORTED_USER_TAGS)
+for tag, alts in SUPPORTED_USER_TAGS.items():
+    DESCRIPTION_GRAMMAR += f'\n user_tag_{tag}: '
+    DESCRIPTION_GRAMMAR += ' | '.join(f'"[{alt}" ["=" USERNAME] "]" USERNAME "[/{alt}]" | "[{alt}" "=" USERNAME "]" [user_tag] "[/{alt}]"' for alt in alts)
 
 DESCRIPTION_GRAMMAR += r"""
-generic_tag: "[generic=" URL "]" USERNAME "[/generic]"
+user_tag_generic: "[generic=" URL "]" USERNAME "[/generic]"
 
-USERNAME: /[a-zA-Z0-9][a-zA-Z0-9 _-]*/
-URL: /(https?:\/\/)?[^\]]+/
+siteurl_tag_root: "[siteurl]" siteurl_tag "[/siteurl]"
+siteurl_tag: siteurl_tag_generic | """
+
+DESCRIPTION_GRAMMAR += ' | '.join(f'siteurl_tag_{tag}' for tag in SUPPORTED_SITE_TAGS)
+for tag, alts in SUPPORTED_SITE_TAGS.items():
+    DESCRIPTION_GRAMMAR += f'\n siteurl_tag_{tag}: '
+    DESCRIPTION_GRAMMAR += ' | '.join(f'"[{alt}" "=" URL "]" ( siteurl_tag | TEXT ) "[/{alt}]"' for alt in alts)
+
+DESCRIPTION_GRAMMAR += r"""
+siteurl_tag_generic: "[generic=" URL "]" TEXT "[/generic]"
+
+USERNAME: / *@?[a-zA-Z0-9][a-zA-Z0-9 @._-]*/
+URL: / *(https?:\/\/)?[^\]]+ */
 TEXT: /([^\[]|[ \t\r\n])+/
-CONDITION: / *[a-z]+ *(==|!=) *[a-zA-Z0-9]+ */
+CONDITION: / *[a-z]+ *(==|!=) *[a-zA-Z0-9_-]+ *| *[a-z]+ +in +([a-zA-Z0-9_-]+ *, *)*[a-zA-Z0-9_-]+ */
 """
 
 DESCRIPTION_PARSER = lark.Lark(DESCRIPTION_GRAMMAR, parser='lalr')
 
 
-class UserTag:
+class DescriptionParsingError(ValueError):
+    pass
+
+
+class SiteSwitchTag:
     def __init__(self, default: typing.Optional[str]=None, **kwargs):
         self.default = default
         self._sites: typing.OrderedDict[str, typing.Optional[str]] = OrderedDict()
```
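The widened `CONDITION` terminal now accepts the ` in ` form alongside `==`/`!=`, and allows underscores and hyphens in values. A quick standalone check of the new pattern against the condition styles documented in the README:

```python
import re

# The new CONDITION terminal from the grammar above, as a plain regex.
CONDITION = re.compile(
    r' *[a-z]+ *(==|!=) *[a-zA-Z0-9_-]+ *'
    r'| *[a-z]+ +in +([a-zA-Z0-9_-]+ *, *)*[a-zA-Z0-9_-]+ *'
)

# All documented condition styles match the widened pattern...
for cond in ('site==fa', 'site != furaffinity', 'site in eka,fa', 'define==prod-build'):
    assert CONDITION.fullmatch(cond), cond

# ...while a bare `=` is still rejected.
assert CONDITION.fullmatch('site=fa') is None
```

Note that `define==prod-build` would have been rejected by the old value charset (`[a-zA-Z0-9]+`), which did not allow hyphens.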
```diff
@@ -70,30 +95,53 @@ class UserTag:
     def __getitem__(self, name: str) -> typing.Optional[str]:
         return self._sites.get(name)
 
+    def __contains__(self, name: str) -> bool:
+        return name in self._sites
+
     @property
     def sites(self):
         yield from self._sites
 
 class UploadTransformer(lark.Transformer):
-    def __init__(self, *args, **kwargs):
-        super(UploadTransformer, self).__init__(*args, **kwargs)
+    def __init__(self, define_options=set(), *args, **kwargs):
+        super().__init__(*args, **kwargs)
+        self.define_options = define_options
+        # Init user_tag_xxxx methods
         def _user_tag_factory(tag):
-            # Create a new UserTag if innermost node, or append to list in order
+            # Create a new user SiteSwitchTag if innermost node, or append to list in order
             def user_tag(data):
                 attribute, inner = data[0], data[1]
                 if attribute and attribute.strip():
-                    if isinstance(inner, UserTag):
+                    if isinstance(inner, SiteSwitchTag):
                         inner[tag] = attribute.strip()
                         return inner
-                    user = UserTag(default=inner and inner.strip())
+                    user = SiteSwitchTag(default=inner and inner.strip())
                     user[tag] = attribute.strip()
                     return user
-                user = UserTag()
+                user = SiteSwitchTag()
                 user[tag] = inner.strip()
                 return user
             return user_tag
         for tag in SUPPORTED_USER_TAGS:
-            setattr(self, f'{tag}_tag', _user_tag_factory(tag))
+            setattr(self, f'user_tag_{tag}', _user_tag_factory(tag))
+        # Init siteurl_tag_xxxx methods
+        def _siteurl_tag_factory(tag):
+            # Create a new siteurl SiteSwitchTag if innermost node, or append to list in order
+            def siteurl_tag(data):
+                attribute, inner = data[0], data[1]
+                if attribute and attribute.strip():
+                    if isinstance(inner, SiteSwitchTag):
+                        inner[tag] = attribute.strip()
+                        return inner
+                    siteurl = SiteSwitchTag(default=inner and inner.strip())
+                    siteurl[tag] = attribute.strip()
+                    return siteurl
+                siteurl = SiteSwitchTag()
+                siteurl[tag] = inner.strip()
+                return siteurl
+            return siteurl_tag
+        for tag in SUPPORTED_SITE_TAGS:
+            setattr(self, f'siteurl_tag_{tag}', _siteurl_tag_factory(tag))
 
     def document_list(self, data):
         return ''.join(data)
```
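The renamed `SiteSwitchTag` keeps site entries in an `OrderedDict`, and the tag factories above insert the innermost tag's entry first (lark transforms trees bottom-up, and outer tags write into the existing `inner` object). Iteration over `sites` therefore yields innermost-first, which is what gives nested switch tags their documented priority. The sketch below is a minimal stand-in, not the real class; its `__setitem__` and kwargs handling are assumed, since those lines fall outside the visible hunks.

```python
from collections import OrderedDict
import typing

class SiteSwitchTag:
    """Minimal sketch of the class in description.py (assumed details marked below)."""
    def __init__(self, default: typing.Optional[str] = None, **kwargs):
        self.default = default
        # Assumption: kwargs seed the ordered mapping, mirroring SiteSwitchTag(aryion=...).
        self._sites: typing.OrderedDict[str, typing.Optional[str]] = OrderedDict(**kwargs)

    def __setitem__(self, name: str, value: typing.Optional[str]) -> None:  # assumed
        self._sites[name] = value

    def __getitem__(self, name: str) -> typing.Optional[str]:
        return self._sites.get(name)

    def __contains__(self, name: str) -> bool:
        return name in self._sites

    @property
    def sites(self):
        yield from self._sites

# [user][ib=Sit][weasyl=Amet][twitter=Consectetur][/twitter][/weasyl][/ib][/user]:
# the innermost tag (twitter) is transformed first, so it is inserted first.
tag = SiteSwitchTag(default='Consectetur')
tag['twitter'] = 'Consectetur'
tag['weasyl'] = 'Amet'
tag['ib'] = 'Sit'
assert list(tag.sites) == ['twitter', 'weasyl', 'ib']  # innermost-first priority
```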
```diff
@@ -110,6 +158,9 @@ class UploadTransformer(lark.Transformer):
     def u_tag(self, _):
         raise NotImplementedError('UploadTransformer.u_tag is abstract')
 
+    def center_tag(self, _):
+        raise NotImplementedError('UploadTransformer.center_tag is abstract')
+
     def url_tag(self, _):
         raise NotImplementedError('UploadTransformer.url_tag is abstract')
 
```
```diff
@@ -119,57 +170,86 @@ class UploadTransformer(lark.Transformer):
     def transformer_matches_site(self, site: str) -> bool:
         raise NotImplementedError('UploadTransformer.transformer_matches_site is abstract')
 
+    def transformer_matches_define(self, option: str) -> bool:
+        return option in self.define_options
+
     def if_tag(self, data: typing.Tuple[str, str, str]):
-        condition, truthy_document, falsy_document = data
-        equality_condition = condition.split('==')
+        condition, truthy_document, falsy_document = data[0], data[1], data[2]
+        # Test equality condition, i.e. `site==foo`
+        equality_condition = condition.split('==', 1)
         if len(equality_condition) == 2 and equality_condition[1].strip():
             conditional_test = f'transformer_matches_{equality_condition[0].strip()}'
             if hasattr(self, conditional_test):
                 if getattr(self, conditional_test)(equality_condition[1].strip()):
                     return truthy_document or ''
                 return falsy_document or ''
-        inequality_condition = condition.split('!=')
+        # Test inequality condition, i.e. `site!=foo`
+        inequality_condition = condition.split('!=', 1)
         if len(inequality_condition) == 2 and inequality_condition[1].strip():
             conditional_test = f'transformer_matches_{inequality_condition[0].strip()}'
             if hasattr(self, conditional_test):
                 if not getattr(self, conditional_test)(inequality_condition[1].strip()):
                     return truthy_document or ''
                 return falsy_document or ''
+        # Test inclusion condition, i.e. `site in foo,bar`
+        inclusion_condition = condition.split(' in ', 1)
+        if len(inclusion_condition) == 2 and inclusion_condition[1].strip():
+            conditional_test = f'transformer_matches_{inclusion_condition[0].strip()}'
+            if hasattr(self, conditional_test):
+                matches = (parameter.strip() for parameter in inclusion_condition[1].split(','))
+                if any(getattr(self, conditional_test)(match) for match in matches):
+                    return truthy_document or ''
+                return falsy_document or ''
         raise ValueError(f'Invalid [if][/if] tag condition: {condition}')
 
     def user_tag_root(self, data):
-        user_data: UserTag = data[0]
+        user_data: SiteSwitchTag = data[0]
         for site in user_data.sites:
             if site == 'generic':
-                return self.url_tag((user_data['generic'].strip(), user_data.default))
-            elif site == 'eka':
-                return self.url_tag((f'https://aryion.com/g4/user/{user_data["eka"]}', user_data.default or user_data["eka"]))
-            elif site == 'fa':
-                return self.url_tag((f'https://furaffinity.net/user/{user_data["fa"].replace("_", "")}', user_data.default or user_data['fa']))
+                return self.url_tag((user_data['generic'], user_data.default))
+            elif site == 'aryion':
+                return self.url_tag((f'https://aryion.com/g4/user/{user_data["aryion"]}', user_data.default or user_data["aryion"]))
+            elif site == 'furaffinity':
+                return self.url_tag((f'https://furaffinity.net/user/{user_data["furaffinity"].replace("_", "")}', user_data.default or user_data['furaffinity']))
             elif site == 'weasyl':
                 return self.url_tag((f'https://www.weasyl.com/~{user_data["weasyl"].replace(" ", "").lower()}', user_data.default or user_data['weasyl']))
-            elif site == 'ib':
-                return self.url_tag((f'https://inkbunny.net/{user_data["ib"]}', user_data.default or user_data['ib']))
-            elif site == 'sf':
-                return self.url_tag((f'https://{user_data["sf"].replace(" ", "-").lower()}.sofurry.com', user_data.default or user_data['sf']))
+            elif site == 'inkbunny':
+                return self.url_tag((f'https://inkbunny.net/{user_data["inkbunny"]}', user_data.default or user_data['inkbunny']))
+            elif site == 'sofurry':
+                return self.url_tag((f'https://{user_data["sofurry"].replace(" ", "-").lower()}.sofurry.com', user_data.default or user_data['sofurry']))
             elif site == 'twitter':
                 return self.url_tag((f'https://twitter.com/{user_data["twitter"].rsplit("@", 1)[-1]}', user_data.default or user_data['twitter']))
             elif site == 'mastodon':
                 *_, mastodon_user, mastodon_instance = user_data["mastodon"].rsplit('@', 2)
-                return self.url_tag((f'https://{mastodon_instance}/@{mastodon_user}', user_data.default or user_data['mastodon']))
+                return self.url_tag((f'https://{mastodon_instance.strip()}/@{mastodon_user.strip()}', user_data.default or user_data['mastodon']))
             else:
                 print(f'Unknown site "{site}" found in user tag; ignoring...')
-        raise TypeError('Invalid UserTag data')
+        raise TypeError('Invalid user SiteSwitchTag data - no matches found')
 
     def user_tag(self, data):
         return data[0]
 
-    def generic_tag(self, data):
+    def user_tag_generic(self, data):
         attribute, inner = data[0], data[1]
-        user = UserTag(default=inner.strip())
+        user = SiteSwitchTag(default=inner.strip())
         user['generic'] = attribute.strip()
         return user
 
+    def siteurl_tag_root(self, data):
+        siteurl_data: SiteSwitchTag = data[0]
+        if 'generic' in siteurl_data:
+            return self.url_tag((siteurl_data['generic'], siteurl_data.default))
+        return ''
+
+    def siteurl_tag(self, data):
+        return data[0]
+
+    def siteurl_tag_generic(self, data):
+        attribute, inner = data[0], data[1]
+        siteurl = SiteSwitchTag(default=inner.strip())
+        siteurl['generic'] = attribute.strip()
+        return siteurl
+
 class BbcodeTransformer(UploadTransformer):
     def b_tag(self, data):
         if data[0] is None or not data[0].strip():
```
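The new inclusion branch in `if_tag` splits the condition once on ` in ` and then on commas, succeeding if any listed value matches. A condensed version of that parsing, outside the transformer:

```python
# Mirrors the `site in foo,bar` parsing added to if_tag above.
condition = 'site in eka,fa'
inclusion_condition = condition.split(' in ', 1)
parameter = inclusion_condition[0].strip()
matches = [value.strip() for value in inclusion_condition[1].split(',')]
assert (parameter, matches) == ('site', ['eka', 'fa'])
```

The `maxsplit=1` arguments (also added to the `==`/`!=` branches) ensure only the first delimiter separates parameter from value, so the value side is passed through intact.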
```diff
@@ -186,8 +266,15 @@ class BbcodeTransformer(UploadTransformer):
             return ''
         return f'[u]{data[0]}[/u]'
 
+    def center_tag(self, data):
+        if data[0] is None or not data[0].strip():
+            return ''
+        return f'[center]{data[0]}[/center]'
+
     def url_tag(self, data):
-        return f'[url={data[0] or ""}]{data[1] or ""}[/url]'
+        if data[0] is None or not data[0].strip():
+            return data[1].strip() if data[1] else ''
+        return f'[url={data[0].strip()}]{data[1] if data[1] and data[1].strip() else data[0].strip()}[/url]'
 
 class MarkdownTransformer(UploadTransformer):
     def b_tag(self, data):
```
```diff
@@ -206,7 +293,9 @@ class MarkdownTransformer(UploadTransformer):
         return f'<u>{data[0]}</u>'  # Markdown should support simple HTML tags
 
     def url_tag(self, data):
-        return f'[{data[1] or ""}]({data[0] or ""})'
+        if data[0] is None or not data[0].strip():
+            return data[1].strip() if data[1] else ''
+        return f'[{data[1] if data[1] and data[1].strip() else data[0].strip()}]({data[0].strip()})'
 
 class PlaintextTransformer(UploadTransformer):
     def b_tag(self, data):
```
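The reworked `url_tag` implementations treat both slots as optional: with no URL they degrade to plain text, and with no label they reuse the URL as the link text. A standalone replica of the Markdown variant's logic (a plain function here, not the transformer class itself):

```python
def markdown_url_tag(url, text):
    # Mirrors MarkdownTransformer.url_tag from the hunk above.
    if url is None or not url.strip():
        return text.strip() if text else ''
    return f'[{text if text and text.strip() else url.strip()}]({url.strip()})'

# No URL: degrade to bare text instead of emitting an empty link.
assert markdown_url_tag(None, 'just text') == 'just text'
# No label: the URL doubles as the link text.
assert markdown_url_tag('https://example.com', None) == '[https://example.com](https://example.com)'
# Both present: surrounding whitespace on the URL is stripped.
assert markdown_url_tag(' https://example.com ', 'Site') == '[Site](https://example.com)'
```

The old one-liner would have produced `[]()` fragments for missing parts; the new branching avoids that in all three transformers.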
```diff
@@ -218,141 +307,210 @@ class PlaintextTransformer(UploadTransformer):
     def u_tag(self, data):
         return str(data[0]) if data[0] else ''
 
+    def center_tag(self, data):
+        return str(data[0]) if data[0] else ''
+
     def url_tag(self, data):
+        if data[0] is None or not data[0].strip():
+            return data[1] if data[1] and data[1].strip() else ''
         if data[1] is None or not data[1].strip():
-            return str(data[0]) if data[0] else ''
-        return f'{data[1].strip()}: {data[0] or ""}'
+            return data[0].strip()
+        return f'{data[1]}: {data[0].strip()}'
 
     def user_tag_root(self, data):
         user_data = data[0]
         for site in user_data.sites:
             if site == 'generic':
                 break
-            elif site == 'eka':
-                return f'{user_data["eka"]} on Eka\'s Portal'
-            elif site == 'fa':
-                return f'{user_data["fa"]} on Fur Affinity'
+            elif site == 'aryion':
+                return f'{user_data["aryion"]} on Eka\'s Portal'
+            elif site == 'furaffinity':
+                return f'{user_data["furaffinity"]} on Fur Affinity'
             elif site == 'weasyl':
                 return f'{user_data["weasyl"]} on Weasyl'
-            elif site == 'ib':
-                return f'{user_data["ib"]} on Inkbunny'
-            elif site == 'sf':
-                return f'{user_data["sf"]} on SoFurry'
+            elif site == 'inkbunny':
+                return f'{user_data["inkbunny"]} on Inkbunny'
+            elif site == 'sofurry':
+                return f'{user_data["sofurry"]} on SoFurry'
             elif site == 'twitter':
                 return f'@{user_data["twitter"].rsplit("@", 1)[-1]} on Twitter'
             elif site == 'mastodon':
                 *_, mastodon_user, mastodon_instance = user_data["mastodon"].rsplit('@', 2)
-                return f'@{mastodon_user} on {mastodon_instance}'
+                return f'@{mastodon_user.strip()} on {mastodon_instance.strip()}'
             else:
                 print(f'Unknown site "{site}" found in user tag; ignoring...')
-        return super(PlaintextTransformer, self).user_tag_root(data)
+        return super().user_tag_root(data)
 
 class AryionTransformer(BbcodeTransformer):
-    def __init__(self, self_user, *args, **kwargs):
-        super(AryionTransformer, self).__init__(*args, **kwargs)
+    def __init__(self, self_user=None, *args, **kwargs):
+        super().__init__(*args, **kwargs)
         def self_tag(data):
-            return self.user_tag_root((UserTag(eka=self_user),))
+            if self_user:
+                return self.user_tag_root((SiteSwitchTag(aryion=self_user),))
+            raise ValueError('self_tag is unavailable for AryionTransformer - no user provided')
         self.self_tag = self_tag
 
-    def transformer_matches_site(self, site: str) -> bool:
-        return site in ('eka', 'aryion')
+    @staticmethod
+    def transformer_matches_site(site: str) -> bool:
+        return site in SUPPORTED_USER_TAGS['aryion']
 
     def user_tag_root(self, data):
-        user_data = data[0]
-        if user_data['eka']:
-            return f':icon{user_data["eka"]}:'
-        return super(AryionTransformer, self).user_tag_root(data)
+        user_data: SiteSwitchTag = data[0]
+        if user_data['aryion']:
+            return f':icon{user_data["aryion"]}:'
+        return super().user_tag_root(data)
 
+    def siteurl_tag_root(self, data):
+        siteurl_data: SiteSwitchTag = data[0]
+        if 'aryion' in siteurl_data:
+            return self.url_tag((siteurl_data['aryion'], siteurl_data.default))
+        return super().siteurl_tag_root(data)
+
 class FuraffinityTransformer(BbcodeTransformer):
-    def __init__(self, self_user, *args, **kwargs):
-        super(FuraffinityTransformer, self).__init__(*args, **kwargs)
+    def __init__(self, self_user=None, *args, **kwargs):
+        super().__init__(*args, **kwargs)
         def self_tag(data):
-            return self.user_tag_root((UserTag(fa=self_user),))
+            if self_user:
+                return self.user_tag_root((SiteSwitchTag(furaffinity=self_user),))
+            raise ValueError('self_tag is unavailable for FuraffinityTransformer - no user provided')
         self.self_tag = self_tag
 
-    def transformer_matches_site(self, site: str) -> bool:
-        return site in ('fa', 'furaffinity')
+    @staticmethod
+    def transformer_matches_site(site: str) -> bool:
+        return site in SUPPORTED_USER_TAGS['furaffinity']
 
     def user_tag_root(self, data):
-        user_data = data[0]
-        if user_data['fa']:
-            return f':icon{user_data["fa"]}:'
-        return super(FuraffinityTransformer, self).user_tag_root(data)
+        user_data: SiteSwitchTag = data[0]
+        if user_data['furaffinity']:
+            return f':icon{user_data["furaffinity"]}:'
+        return super().user_tag_root(data)
 
+    def siteurl_tag_root(self, data):
+        siteurl_data: SiteSwitchTag = data[0]
+        if 'furaffinity' in siteurl_data:
+            return self.url_tag((siteurl_data['furaffinity'], siteurl_data.default))
+        return super().siteurl_tag_root(data)
+
 class WeasylTransformer(MarkdownTransformer):
-    def __init__(self, self_user, *args, **kwargs):
-        super(WeasylTransformer, self).__init__(*args, **kwargs)
+    def __init__(self, self_user=None, *args, **kwargs):
+        super().__init__(*args, **kwargs)
         def self_tag(data):
-            return self.user_tag_root((UserTag(weasyl=self_user),))
+            if self_user:
+                return self.user_tag_root((SiteSwitchTag(weasyl=self_user),))
+            raise ValueError('self_tag is unavailable for WeasylTransformer - no user provided')
         self.self_tag = self_tag
 
-    def transformer_matches_site(self, site: str) -> bool:
+    @staticmethod
+    def transformer_matches_site(site: str) -> bool:
```
|
||||||
return site == 'weasyl'
|
return site == 'weasyl'
|
||||||
|
|
||||||
|
def center_tag(self, data):
|
||||||
|
if data[0] is None or not data[0].strip():
|
||||||
|
return ''
|
||||||
|
return f'<div class="align-center">{data[0]}</div>'
|
||||||
|
|
||||||
def user_tag_root(self, data):
|
def user_tag_root(self, data):
|
||||||
user_data = data[0]
|
user_data: SiteSwitchTag = data[0]
|
||||||
if user_data['weasyl']:
|
if user_data['weasyl']:
|
||||||
return f'<!~{user_data["weasyl"].replace(" ", "")}>'
|
return f'<!~{user_data["weasyl"].replace(" ", "")}>'
|
||||||
if user_data.default is None:
|
for site in user_data.sites:
|
||||||
for site in user_data.sites:
|
if site == 'furaffinity':
|
||||||
if site == 'fa':
|
return f'<fa:{user_data["furaffinity"]}>'
|
||||||
return f'<fa:{user_data["fa"]}>'
|
if site == 'inkbunny':
|
||||||
if site == 'ib':
|
return f'<ib:{user_data["inkbunny"]}>'
|
||||||
return f'<ib:{user_data["ib"]}>'
|
if site == 'sofurry':
|
||||||
if site == 'sf':
|
return f'<sf:{user_data["sofurry"]}>'
|
||||||
return f'<sf:{user_data["sf"]}>'
|
return super().user_tag_root(data)
|
||||||
return super(WeasylTransformer, self).user_tag_root(data)
|
|
||||||
|
def siteurl_tag_root(self, data):
|
||||||
|
siteurl_data: SiteSwitchTag = data[0]
|
||||||
|
if 'weasyl' in siteurl_data:
|
||||||
|
return self.url_tag((siteurl_data['weasyl'], siteurl_data.default))
|
||||||
|
return super().siteurl_tag_root(data)
|
||||||
|
|
||||||
class InkbunnyTransformer(BbcodeTransformer):
|
class InkbunnyTransformer(BbcodeTransformer):
|
||||||
def __init__(self, self_user, *args, **kwargs):
|
def __init__(self, self_user=None, *args, **kwargs):
|
||||||
super(InkbunnyTransformer, self).__init__(*args, **kwargs)
|
super().__init__(*args, **kwargs)
|
||||||
def self_tag(data):
|
def self_tag(data):
|
||||||
return self.user_tag_root((UserTag(ib=self_user),))
|
if self_user:
|
||||||
|
return self.user_tag_root((SiteSwitchTag(inkbunny=self_user),))
|
||||||
|
raise ValueError('self_tag is unavailable for InkbunnyTransformer - no user provided')
|
||||||
self.self_tag = self_tag
|
self.self_tag = self_tag
|
||||||
|
|
||||||
def transformer_matches_site(self, site: str) -> bool:
|
@staticmethod
|
||||||
return site in ('ib', 'inkbunny')
|
def transformer_matches_site(site: str) -> bool:
|
||||||
|
return site in SUPPORTED_USER_TAGS['inkbunny']
|
||||||
|
|
||||||
def user_tag_root(self, data):
|
def user_tag_root(self, data):
|
||||||
user_data = data[0]
|
user_data: SiteSwitchTag = data[0]
|
||||||
if user_data['ib']:
|
if user_data['inkbunny']:
|
||||||
return f'[iconname]{user_data["ib"]}[/iconname]'
|
return f'[iconname]{user_data["inkbunny"]}[/iconname]'
|
||||||
if user_data.default is None:
|
for site in user_data.sites:
|
||||||
for site in user_data.sites:
|
if site == 'furaffinity':
|
||||||
if site == 'fa':
|
return f'[fa]{user_data["furaffinity"]}[/fa]'
|
||||||
return f'[fa]{user_data["fa"]}[/fa]'
|
if site == 'sofurry':
|
||||||
if site == 'sf':
|
return f'[sf]{user_data["sofurry"]}[/sf]'
|
||||||
return f'[sf]{user_data["sf"]}[/sf]'
|
if site == 'weasyl':
|
||||||
if site == 'weasyl':
|
return f'[weasyl]{user_data["weasyl"].replace(" ", "").lower()}[/weasyl]'
|
||||||
return f'[weasyl]{user_data["weasyl"].replace(" ", "").lower()}[/weasyl]'
|
return super().user_tag_root(data)
|
||||||
return super(InkbunnyTransformer, self).user_tag_root(data)
|
|
||||||
|
def siteurl_tag_root(self, data):
|
||||||
|
siteurl_data: SiteSwitchTag = data[0]
|
||||||
|
if 'inkbunny' in siteurl_data:
|
||||||
|
return self.url_tag((siteurl_data['inkbunny'], siteurl_data.default))
|
||||||
|
return super().siteurl_tag_root(data)
|
||||||
|
|
||||||
class SoFurryTransformer(BbcodeTransformer):
|
class SoFurryTransformer(BbcodeTransformer):
|
||||||
def __init__(self, self_user, *args, **kwargs):
|
def __init__(self, self_user=None, *args, **kwargs):
|
||||||
super(SoFurryTransformer, self).__init__(*args, **kwargs)
|
super().__init__(*args, **kwargs)
|
||||||
def self_tag(data):
|
def self_tag(data):
|
||||||
return self.user_tag_root((UserTag(sf=self_user),))
|
if self_user:
|
||||||
|
return self.user_tag_root((SiteSwitchTag(sofurry=self_user),))
|
||||||
|
raise ValueError('self_tag is unavailable for SoFurryTransformer - no user provided')
|
||||||
self.self_tag = self_tag
|
self.self_tag = self_tag
|
||||||
|
|
||||||
def transformer_matches_site(self, site: str) -> bool:
|
@staticmethod
|
||||||
return site in ('sf', 'sofurry')
|
def transformer_matches_site(site: str) -> bool:
|
||||||
|
return site in SUPPORTED_USER_TAGS['sofurry']
|
||||||
|
|
||||||
def user_tag_root(self, data):
|
def user_tag_root(self, data):
|
||||||
user_data = data[0]
|
user_data: SiteSwitchTag = data[0]
|
||||||
if user_data['sf']:
|
if user_data['sofurry']:
|
||||||
return f':icon{user_data["sf"]}:'
|
return f':icon{user_data["sofurry"]}:'
|
||||||
if user_data.default is None:
|
for site in user_data.sites:
|
||||||
for site in user_data.sites:
|
if site == 'furaffinity':
|
||||||
if site == 'fa':
|
return f'fa!{user_data["furaffinity"]}'
|
||||||
return f'fa!{user_data["fa"]}'
|
if site == 'inkbunny':
|
||||||
if site == 'ib':
|
return f'ib!{user_data["inkbunny"]}'
|
||||||
return f'ib!{user_data["ib"]}'
|
return super().user_tag_root(data)
|
||||||
return super(SoFurryTransformer, self).user_tag_root(data)
|
|
||||||
|
def siteurl_tag_root(self, data):
|
||||||
|
siteurl_data: SiteSwitchTag = data[0]
|
||||||
|
if 'sofurry' in siteurl_data:
|
||||||
|
return self.url_tag((siteurl_data['sofurry'], siteurl_data.default))
|
||||||
|
return super().siteurl_tag_root(data)
|
||||||
|
|
||||||
|
|
||||||
```diff
+def validate_parsed_tree(parsed_tree):
+    for node in parsed_tree.iter_subtrees_topdown():
+        if node.data in {'b_tag', 'i_tag', 'u_tag', 'url_tag'}:
+            node_type = str(node.data)
+            for node2 in node.find_data(node_type):
+                if node != node2:
+                    raise DescriptionParsingError(f'Invalid nested {node_type} on line {node2.data.line} column {node2.data.column}')
+
+
-def parse_description(description_path, config_path, out_dir, ignore_empty_files=False):
-    ps = subprocess.Popen(('libreoffice', '--cat', description_path), stdout=subprocess.PIPE)
-    description = '\n'.join(line.strip() for line in io.TextIOWrapper(ps.stdout, encoding='utf-8-sig'))
+def parse_description(description_path, config, out_dir, ignore_empty_files=False, define_options=set()):
+    for proc in psutil.process_iter(['cmdline']):
+        if proc.info['cmdline'] and 'libreoffice' in proc.info['cmdline'][0] and '--writer' in proc.info['cmdline'][1:]:
+            if ignore_empty_files:
+                print('WARN: LibreOffice Writer appears to be running. This command may output empty files until it is closed.')
+                break
+            print('WARN: LibreOffice Writer appears to be running. This command may raise an error until it is closed.')
+            break
+
+    description = ''
+    with subprocess.Popen(('libreoffice', '--cat', description_path), stdout=subprocess.PIPE) as ps:
+        description = '\n'.join(line.strip() for line in io.TextIOWrapper(ps.stdout, encoding='utf-8-sig'))
     if not description or re.match(r'^\s+$', description):
         error = f'Description processing returned empty file: libreoffice --cat {description_path}'
         if ignore_empty_files:
@@ -360,7 +518,21 @@ def parse_description(description_path, config_path, out_dir, ignore_empty_files
         else:
             raise RuntimeError(error)

-    parsed_description = DESCRIPTION_PARSER.parse(description)
+    try:
+        parsed_description = DESCRIPTION_PARSER.parse(description)
+    except lark.UnexpectedInput as e:
+        input_error = e.match_examples(DESCRIPTION_PARSER.parse, {
+            'Unclosed tag': ['[b]text', '[i]text', '[u]text', '[url]text'],
+            'Unopened tag': ['text[/b]', 'text[/i]', 'text[/u]', 'text[/url]'],
+            'Unknown tag': ['[invalid]text[/invalid]'],
+            'Missing tag brackets': ['b]text[/b]', '[btext[/b]', '[b]text/b]', '[b]text[/b', 'i]text[/i]', '[itext[/i]', '[i]text/i]', '[i]text[/i', 'u]text[/u]', '[utext[/u]', '[u]text/u]', '[u]text[/u'],
+            'Missing tag slash': ['[b]text[b]', '[i]text[i]', '[u]text[u]'],
+            'Empty switch tag': ['[user][/user]', '[siteurl][/siteurl]'],
+            'Empty user tag': ['[user][aryion][/aryion][/user]', '[user][furaffinity][/furaffinity][/user]', '[user][inkbunny][/inkbunny][/user]', '[user][sofurry][/sofurry][/user]', '[user][weasyl][/weasyl][/user]', '[user][twitter][/twitter][/user]', '[user][mastodon][/mastodon][/user]', '[user][aryion=][/aryion][/user]', '[user][furaffinity=][/furaffinity][/user]', '[user][inkbunny=][/inkbunny][/user]', '[user][sofurry=][/sofurry][/user]', '[user][weasyl=][/weasyl][/user]', '[user][twitter=][/twitter][/user]', '[user][mastodon=][/mastodon][/user]'],
+            'Empty siteurl tag': ['[siteurl][aryion][/aryion][/siteurl]', '[siteurl][furaffinity][/furaffinity][/siteurl]', '[siteurl][inkbunny][/inkbunny][/siteurl]', '[siteurl][sofurry][/sofurry][/siteurl]', '[siteurl][weasyl][/weasyl][/siteurl]', '[siteurl][aryion=][/aryion][/siteurl]', '[siteurl][furaffinity=][/furaffinity][/siteurl]', '[siteurl][inkbunny=][/inkbunny][/siteurl]', '[siteurl][sofurry=][/sofurry][/siteurl]', '[siteurl][weasyl=][/weasyl][/siteurl]'],
+        })
+        raise DescriptionParsingError(f'Unable to parse description. {input_error or "Unknown grammar error"} in line {e.line} column {e.column}:\n{e.get_context(description)}') from e
+    validate_parsed_tree(parsed_description)
     transformations = {
         'aryion': ('desc_aryion.txt', AryionTransformer),
         'furaffinity': ('desc_furaffinity.txt', FuraffinityTransformer),
@@ -368,31 +540,29 @@ def parse_description(description_path, config_path, out_dir, ignore_empty_files
         'sofurry': ('desc_sofurry.txt', SoFurryTransformer),
         'weasyl': ('desc_weasyl.md', WeasylTransformer),
     }
-    with open(config_path, 'r') as f:
-        config = json.load(f)
+    # assert all(k in SUPPORTED_SITE_TAGS for k in transformations)
+
     # Validate JSON
     errors = []
-    if type(config) is not dict:
-        errors.append(ValueError('Configuration must be a JSON object'))
-    else:
-        for (website, username) in config.items():
-            if website not in transformations:
-                errors.append(ValueError(f'Website \'{website}\' is unsupported'))
-            elif type(username) is not str:
-                errors.append(ValueError(f'Website \'{website}\' has invalid username \'{json.dumps(username)}\''))
-            elif username.strip() == '':
-                errors.append(ValueError(f'Website \'{website}\' has empty username'))
-    if not any(ws in config for ws in ('aryion', 'furaffinity', 'weasyl', 'inkbunny', 'sofurry')):
+    for (website, username) in config.items():
+        if website not in transformations:
+            errors.append(ValueError(f'Website \'{website}\' is unsupported'))
+        elif type(username) is not str:
+            errors.append(ValueError(f'Website \'{website}\' has invalid username \'{json.dumps(username)}\''))
+        elif username.strip() == '':
+            errors.append(ValueError(f'Website \'{website}\' has empty username'))
+    if not any(ws in config for ws in transformations):
         errors.append(ValueError('No valid websites found'))
     if errors:
         raise ExceptionGroup('Invalid configuration for description parsing', errors)

     # Create descriptions
-    re_multiple_empty_lines = re.compile(r'\n\n+')
+    RE_MULTIPLE_EMPTY_LINES = re.compile(r'\n\n+')
     for (website, username) in config.items():
         (filepath, transformer) = transformations[website]
         with open(os.path.join(out_dir, filepath), 'w') as f:
             if description.strip():
-                transformed_description = transformer(username).transform(parsed_description)
-                f.write(re_multiple_empty_lines.sub('\n\n', transformed_description))
-            else:
-                f.write('')
+                transformed_description = transformer(self_user=username, define_options=define_options).transform(parsed_description)
+                cleaned_description = RE_MULTIPLE_EMPTY_LINES.sub('\n\n', transformed_description).strip()
+                if cleaned_description:
+                    f.write(cleaned_description)
+                    f.write('\n')
+            f.write('')
```
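The new output step above collapses runs of blank lines and strips outer whitespace before writing each per-site description. That cleanup can be sketched in isolation with the standard library; the function name here is mine, not from the codebase:

```python
import re

# Same pattern as RE_MULTIPLE_EMPTY_LINES in parse_description
RE_MULTIPLE_EMPTY_LINES = re.compile(r'\n\n+')

def clean_description(transformed: str) -> str:
    # Collapse any run of blank lines down to a single blank line,
    # then trim leading/trailing whitespace so empty output stays empty
    return RE_MULTIPLE_EMPTY_LINES.sub('\n\n', transformed).strip()

print(repr(clean_description('a\n\n\n\nb\n')))  # 'a\n\nb'
print(repr(clean_description('\n\n\n')))        # ''
```

Because the result is stripped, the caller can test it for truthiness and only append the final newline when there is real content to write.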
main.py (79 changed lines; Normal file → Executable file)

```diff
@@ -1,18 +1,47 @@
+#!/usr/bin/env python
+# PYTHON_ARGCOMPLETE_OK
+import argcomplete
+from argcomplete.completers import FilesCompleter, DirectoriesCompleter
 import argparse
+import json
 import os
+import re
 from subprocess import CalledProcessError
 import shutil
 import tempfile

 from description import parse_description
 from story import parse_story
+from sites import INVERSE_SUPPORTED_SITE_TAGS


-def main(out_dir_path=None, story_path=None, description_path=None, file_path=None, config_path=None, keep_out_dir=False, ignore_empty_files=False):
+def main(out_dir_path=None, story_path=None, description_path=None, file_paths=[], config_path=None, keep_out_dir=False, ignore_empty_files=False, define_options=[]):
     if not out_dir_path:
         raise ValueError('Missing out_dir_path')
     if not config_path:
         raise ValueError('Missing config_path')
+    if not file_paths:
+        file_paths = []
+    if not define_options:
+        define_options = []
+
+    config = None
+    if story_path or description_path:
+        with open(config_path, 'r') as f:
+            config_json = json.load(f)
+        if type(config_json) is not dict:
+            raise ValueError('The configuration file must contain a valid JSON object')
+        config = {}
+        for k, v in config_json.items():
+            if type(v) is not str:
+                raise ValueError(f'Invalid configuration value for entry "{k}": expected string, got {type(v)}')
+            new_k = INVERSE_SUPPORTED_SITE_TAGS.get(k)
+            if not new_k:
+                print(f'Ignoring unknown configuration key "{k}"...')
+                continue
+            if new_k in config:
+                raise ValueError(f'Duplicate configuration entry for website "{new_k}": found collision with key "{k}"')
+            config[new_k] = v
+        if len(config) == 0:
+            raise ValueError(f'Invalid configuration file "{config_path}": no valid sites defined')
+
     remove_out_dir = not keep_out_dir and os.path.isdir(out_dir_path)
     with tempfile.TemporaryDirectory() as tdir:
         # Clear output dir if it exists and shouldn't be kept
@@ -24,14 +53,17 @@ def main(out_dir_path=None, story_path=None, description_path=None, file_path=No
         try:
             # Convert original file to .rtf (Aryion) and .txt (all others)
             if story_path:
-                parse_story(story_path, config_path, out_dir_path, tdir, ignore_empty_files)
+                parse_story(story_path, config, out_dir_path, tdir, ignore_empty_files)

             # Parse FA description and convert for each website
             if description_path:
-                parse_description(description_path, config_path, out_dir_path, ignore_empty_files)
+                define_options_set = set(define_options)
+                if len(define_options_set) < len(define_options):
+                    print('WARNING: duplicated entries defined with -D / --define-option')
+                parse_description(description_path, config, out_dir_path, ignore_empty_files, define_options)

-            # Copy generic file over to output
-            if file_path:
+            # Copy generic files over to output
+            for file_path in file_paths:
                 shutil.copy(file_path, out_dir_path)

         except CalledProcessError as e:
@@ -52,32 +84,41 @@ def main(out_dir_path=None, story_path=None, description_path=None, file_path=No
 if __name__ == '__main__':
     parser = argparse.ArgumentParser(description='generate multi-gallery upload-ready files')
     parser.add_argument('-o', '--output-dir', dest='out_dir_path', default='./out',
-                        help='path of output directory')
+                        help='path of output directory').completer = DirectoriesCompleter
     parser.add_argument('-c', '--config', dest='config_path', default='./config.json',
-                        help='path of JSON configuration file')
+                        help='path of JSON configuration file').completer = FilesCompleter
+    parser.add_argument('-D', '--define-option', dest='define_options', action='append',
+                        help='options to define as a truthy value when parsing descriptions')
     parser.add_argument('-s', '--story', dest='story_path',
-                        help='path of LibreOffice-readable story file')
+                        help='path of LibreOffice-readable story file').completer = FilesCompleter
     parser.add_argument('-d', '--description', dest='description_path',
-                        help='path of BBCode-formatted description file')
+                        help='path of BBCode-formatted description file').completer = FilesCompleter
-    parser.add_argument('-f', '--file', dest='file_path',
-                        help='path of generic file to include in output (i.e. an image or thumbnail)')
+    parser.add_argument('-f', '--file', dest='file_paths', action='append',
+                        help='path(s) of generic file(s) to include in output (i.e. an image or thumbnail)').completer = FilesCompleter
     parser.add_argument('-k', '--keep-out-dir', dest='keep_out_dir', action='store_true',
                         help='whether output directory contents should be kept.\nif set, a script error may leave partial files behind')
     parser.add_argument('-I', '--ignore-empty-files', dest='ignore_empty_files', action='store_true',
                         help='do not raise an error if any input file is empty or whitespace-only')
+    argcomplete.autocomplete(parser)
     args = parser.parse_args()

-    if not any([args.story_path, args.description_path]):
-        parser.error('at least one of ( --story | --description ) must be set')
+    file_paths = args.file_paths or []
+    if not (args.story_path or args.description_path or any(file_paths)):
+        parser.error('at least one of ( --story | --description | --file ) must be set')
     if args.out_dir_path and os.path.exists(args.out_dir_path) and not os.path.isdir(args.out_dir_path):
-        parser.error('--output-dir must be an existing directory or inexistent')
+        parser.error(f'--output-dir {args.out_dir_path} must be an existing directory or nonexistent; found a file instead')
     if args.story_path and not os.path.isfile(args.story_path):
-        parser.error('--story must be a valid file')
+        parser.error(f'--story {args.story_path} is not a valid file')
     if args.description_path and not os.path.isfile(args.description_path):
-        parser.error('--description must be a valid file')
+        parser.error(f'--description {args.description_path} is not a valid file')
-    if args.file_path and not os.path.isfile(args.file_path):
-        parser.error('--file must be a valid file')
+    for file_path in file_paths:
+        if not os.path.isfile(file_path):
+            parser.error(f'--file {file_path} is not a valid file')
-    if args.config_path and not os.path.isfile(args.config_path):
+    if (args.story_path or args.description_path) and args.config_path and not os.path.isfile(args.config_path):
         parser.error('--config must be a valid file')
+    if args.define_options:
+        for option in args.define_options:
+            if not re.match(r'^[a-zA-Z0-9_-]+$', option):
+                parser.error(f'--define-option {option} is not a valid option; it must only contain alphanumeric characters, dashes, or underscores')

     main(**vars(args))
```
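The `--define-option` validation added above reduces to a single anchored regex. A minimal sketch of just that check (the helper name is mine, not from the codebase):

```python
import re

# Same pattern the new --define-option validation uses
RE_DEFINE_OPTION = re.compile(r'^[a-zA-Z0-9_-]+$')

def is_valid_define_option(option: str) -> bool:
    # Accept only non-empty strings made of letters, digits, dashes, or underscores
    return bool(RE_DEFINE_OPTION.match(option))

print(is_valid_define_option('no_weasyl'))   # True
print(is_valid_define_option('bad option'))  # False
```

Anchoring with `^...$` and using `+` rather than `*` is what rejects both embedded whitespace and the empty string.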
```diff
@@ -1 +1,4 @@
-lark==1.1.5
+argcomplete==3.2.1
+lark==1.1.8
+parameterized==0.9.0
+psutil==5.9.6
```
sites.py (13 lines; new file)

```diff
@@ -0,0 +1,13 @@
+import itertools
+import typing
+
+SUPPORTED_SITE_TAGS: typing.Mapping[str, typing.Set[str]] = {
+    'aryion': {'aryion', 'eka', 'eka_portal'},
+    'furaffinity': {'furaffinity', 'fa'},
+    'weasyl': {'weasyl'},
+    'inkbunny': {'inkbunny', 'ib'},
+    'sofurry': {'sofurry', 'sf'},
+}
+
+INVERSE_SUPPORTED_SITE_TAGS: typing.Mapping[str, str] = \
+    dict(itertools.chain.from_iterable(zip(v, itertools.repeat(k)) for (k, v) in SUPPORTED_SITE_TAGS.items()))
```
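The one-liner that builds `INVERSE_SUPPORTED_SITE_TAGS` is dense: for each canonical site name, `zip(v, itertools.repeat(k))` pairs every alias with that name, and `chain.from_iterable` flattens the pairs into one stream for `dict()`. The same construction, checked in isolation with the alias table copied from the file above:

```python
import itertools

# Alias table copied from sites.py
SUPPORTED_SITE_TAGS = {
    'aryion': {'aryion', 'eka', 'eka_portal'},
    'furaffinity': {'furaffinity', 'fa'},
    'weasyl': {'weasyl'},
    'inkbunny': {'inkbunny', 'ib'},
    'sofurry': {'sofurry', 'sf'},
}

# Invert one-to-many aliases into a flat alias -> canonical-name map:
# zip(v, repeat(k)) yields e.g. ('fa', 'furaffinity'), ('furaffinity', 'furaffinity')
INVERSE_SUPPORTED_SITE_TAGS = dict(
    itertools.chain.from_iterable(
        zip(v, itertools.repeat(k)) for (k, v) in SUPPORTED_SITE_TAGS.items()
    )
)

print(INVERSE_SUPPORTED_SITE_TAGS['fa'])   # furaffinity
print(INVERSE_SUPPORTED_SITE_TAGS['eka'])  # aryion
```

This inverse map is what lets main.py normalize any accepted alias in the user's config file to the canonical site key.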
story.py (71 changed lines)

```diff
@@ -1,6 +1,7 @@
 import io
 import json
 import os
+import psutil
 import re
 import subprocess

@@ -16,43 +17,57 @@ def get_rtf_styles(rtf_source: str):
         rtf_styles[style_name] = rtf_style
     return rtf_styles

-def parse_story(story_path, config_path, out_dir, temp_dir, ignore_empty_files=False):
-    with open(config_path, 'r') as f:
-        config = json.load(f)
-    if type(config) is not dict:
-        raise ValueError('Invalid configuration for story parsing: Configuration must be a JSON object')
-    should_create_txt_story = any(ws in config for ws in ('furaffinity', 'weasyl', 'inkbunny', 'sofurry'))
+def parse_story(story_path, config, out_dir, temp_dir, ignore_empty_files=False):
+    should_create_txt_story = any(ws in config for ws in ('furaffinity', 'inkbunny', 'sofurry'))
+    should_create_md_story = any(ws in config for ws in ('weasyl',))
     should_create_rtf_story = any(ws in config for ws in ('aryion',))
-    if not should_create_txt_story and not should_create_rtf_story:
+    if not (should_create_txt_story or should_create_md_story or should_create_rtf_story):
         raise ValueError('Invalid configuration for story parsing: No valid websites found')

+    for proc in psutil.process_iter(['cmdline']):
+        if proc.info['cmdline'] and 'libreoffice' in proc.info['cmdline'][0] and '--writer' in proc.info['cmdline'][1:]:
+            if ignore_empty_files:
+                print('WARN: LibreOffice Writer appears to be running. This command may output empty files until it is closed.')
+                break
+            print('WARN: LibreOffice Writer appears to be running. This command may raise an error until it is closed.')
+            break
+
     story_filename = os.path.split(story_path)[1].rsplit('.')[0]
     txt_out_path = os.path.join(out_dir, f'{story_filename}.txt') if should_create_txt_story else os.devnull
+    md_out_path = os.path.join(out_dir, f'{story_filename}.md') if should_create_md_story else os.devnull
     txt_tmp_path = os.path.join(temp_dir, f'{story_filename}.txt') if should_create_rtf_story else os.devnull
-    RE_EMPTY_LINE = re.compile('^$')
+    RE_EMPTY_LINE = re.compile(r'^$')
+    RE_SEQUENTIAL_EQUAL_SIGNS = re.compile(r'=(?==)')
     is_only_empty_lines = True
-    ps = subprocess.Popen(('libreoffice', '--cat', story_path), stdout=subprocess.PIPE)
-    # Mangle output files so that .RTF will always have a single LF between lines, and .TXT can have one or two CRLF
-    with open(txt_out_path, 'w', newline='\r\n') as txt_out, open(txt_tmp_path, 'w') as txt_tmp:
+    with subprocess.Popen(('libreoffice', '--cat', story_path), stdout=subprocess.PIPE) as ps:
+        # Mangle output files so that .RTF will always have a single LF between lines, and .TXT/.MD can have one or two CRLF
+        with open(txt_out_path, 'w', newline='\r\n') as txt_out, open(md_out_path, 'w', newline='\r\n') as md_out, open(txt_tmp_path, 'w') as txt_tmp:
             needs_empty_line = False
             for line in io.TextIOWrapper(ps.stdout, encoding='utf-8-sig'):
                 # Remove empty lines
                 line = line.strip()
+                md_line = line
                 if RE_EMPTY_LINE.search(line) and not is_only_empty_lines:
                     needs_empty_line = True
                 else:
+                    if should_create_md_story:
+                        md_line = RE_SEQUENTIAL_EQUAL_SIGNS.sub('= ', line.replace(r'*', r'\*'))
                     if is_only_empty_lines:
                         txt_out.writelines((line,))
+                        md_out.writelines((md_line,))
                         txt_tmp.writelines((line,))
                         is_only_empty_lines = False
                     else:
                         if needs_empty_line:
                             txt_out.writelines(('\n\n', line))
+                            md_out.writelines(('\n\n', md_line))
                             needs_empty_line = False
                         else:
                             txt_out.writelines(('\n', line))
+                            md_out.writelines(('\n', md_line))
                         txt_tmp.writelines(('\n', line))
             txt_out.writelines(('\n'))
+            md_out.writelines(('\n'))
     if is_only_empty_lines:
         error = f'Story processing returned empty file: libreoffice --cat {story_path}'
         if ignore_empty_files:
```
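The new Markdown branch escapes two things per story line: literal asterisks, and runs of equals signs, which Markdown could otherwise interpret as emphasis or a setext heading underline. `=(?==)` matches every `=` that is directly followed by another `=`, so substituting `'= '` breaks up the run while leaving a lone `=` alone. A standalone sketch of that escaping step (the function name is mine, not from the codebase):

```python
import re

# Same pattern as RE_SEQUENTIAL_EQUAL_SIGNS: every '=' directly followed by another '='
RE_SEQUENTIAL_EQUAL_SIGNS = re.compile(r'=(?==)')

def escape_md_line(line: str) -> str:
    # Backslash-escape '*' first, then space out '==' runs
    return RE_SEQUENTIAL_EQUAL_SIGNS.sub('= ', line.replace('*', r'\*'))

print(escape_md_line('*wave*'))  # \*wave\*
print(escape_md_line('====='))   # = = = = =
```

Because the lookahead consumes nothing, overlapping pairs in a long run are each matched, which is what turns `=====` into `= = = = =`.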
test.py (new file, 55 lines)

```python
#!/usr/bin/env python
import glob
import os.path
from parameterized import parameterized
import re
import tempfile
import unittest
import warnings

from description import parse_description, DescriptionParsingError


class TestParseDescription(unittest.TestCase):
    config = {
        'aryion': 'UserAryion',
        'furaffinity': 'UserFuraffinity',
        'inkbunny': 'UserInkbunny',
        'sofurry': 'UserSoFurry',
        'weasyl': 'UserWeasyl',
    }
    define_options = {'test_parse_description'}

    def setUp(self):
        self.tmpdir = tempfile.TemporaryDirectory(ignore_cleanup_errors=True)
        warnings.simplefilter('ignore', ResourceWarning)

    def tearDown(self):
        self.tmpdir.cleanup()
        warnings.simplefilter('default', ResourceWarning)

    @parameterized.expand([
        (re.match(r'.*(input_\d+)\.txt', v)[1], v) for v in sorted(glob.iglob('./test/description/input_*.txt'))
    ])
    def test_parse_success(self, name, test_description):
        with tempfile.TemporaryDirectory(ignore_cleanup_errors=True) as tmpdir:
            parse_description(test_description, self.config, tmpdir, define_options=self.define_options)
            for expected_output_file in glob.iglob(f'./test/description/output_{name[6:]}/*'):
                received_output_file = os.path.join(tmpdir, os.path.split(expected_output_file)[1])
                self.assertTrue(os.path.exists(received_output_file))
                self.assertTrue(os.path.isfile(received_output_file))
                with open(received_output_file, 'r') as f:
                    received_description = f.read()
                with open(expected_output_file, 'r') as f:
                    expected_description = f.read()
                self.assertEqual(received_description, expected_description)

    @parameterized.expand([
        (re.match(r'.*(error_.+)\.txt', v)[1], v) for v in sorted(glob.iglob('./test/description/error_*.txt'))
    ])
    def test_parse_errors(self, _, test_description):
        self.assertRaises(DescriptionParsingError, lambda: parse_description(test_description, self.config, self.tmpdir.name, define_options=self.define_options))
        self.assertListEqual(glob.glob(os.path.join(self.tmpdir.name, '*')), [])


if __name__ == '__main__':
    unittest.main()
```
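Each parameterized case in test.py is named after its fixture file's stem. The same regex used in the `expand` call extracts it like so:

```python
import re

# Pattern copied from test.py: capture the "input_N" stem of a fixture path.
name = re.match(r'.*(input_\d+)\.txt', './test/description/input_1.txt')[1]
print(name)  # input_1
```

`name[6:]` in `test_parse_success` then strips the `input_` prefix to find the matching `output_N` directory.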
test/description/error_1_nested_url_tag.txt (new file, 1 line)

```
[url=https://example.com]Nested [url=https://example.net]URLs[/url][/url]
```

test/description/error_2_deeply_nested_b_tag.txt (new file, 1 line)

```
ZERO[b]ONE[i]TWO[u]THREE[b]FOUR[url=https://example.com]FIVE[/url]FOUR[/b]THREE[/u]TWO[/i]ONE[/b]ZERO
```

test/description/error_3_unclosed_i_tag.txt (new file, 1 line)

```
[i]Hello world!
```

test/description/error_4_unopened_u_tag.txt (new file, 1 line)

```
Hello world![/u]
```

test/description/error_5_unknown_user_tag.txt (new file, 1 line)

```
[user][unknown=Foo]Bar[/unknown][/user]
```
test/description/input_1.txt (new file, 9 lines)

```
[b]Hello world![/b]

This is just a [u]simple[/u] test to show that basic functionality of [url=https://github.com/BadMannersXYZ/upload-generator]upload-generator[/url] [i]works[/i]. [if=define==test_parse_description]And this is running in a unit test.[/if][else]Why did you parse this outside of a unit test?![/else]

[center]Reminder that I am [self][/self]![/center]

My friend: [user][sofurry=FriendSoFurry][fa=FriendFa][mastodon=@FriendMastodon@example.org]Friend123[/mastodon][/fa][/sofurry][/user][if=site in ib,aryion,weasyl] (I dunno his account here...)[/if]

[siteurl][eka=https://example.com/eka][inkbunny=https://example.com/ib][generic=https://example.com/generic]Check this page![/generic][/inkbunny][/eka][/siteurl]
```

test/description/input_2.txt (new file, 12 lines)

```
[self][/self]

[if=site==eka] -> [/if][user][eka=EkaPerson]EkaName[/eka][/user] [user][eka]EkaPerson[/eka][/user]
[if=site==fa] -> [/if][user][fa=FaPerson]FaName[/fa][/user] [user][fa]FaPerson[/fa][/user]
[if=site==ib] -> [/if][user][ib=IbPerson]IbName[/ib][/user] [user][ib]IbPerson[/ib][/user]
[if=site==sofurry] -> [/if][user][sf=SfPerson]SfName[/sf][/user] [user][sf]SfPerson[/sf][/user]
[if=site==weasyl] -> [/if][user][weasyl=WeasylPerson]WeasylName[/weasyl][/user] [user][weasyl]WeasylPerson[/weasyl][/user]
[user][twitter=XPerson]XName[/twitter][/user] [user][twitter]XPerson[/twitter][/user]
[user][mastodon=MastodonPerson@example.com]MastodonName[/mastodon][/user] [user][mastodon]MastodonPerson@example.com[/mastodon][/user]
[user][twitter=Ignored][generic=https://example.net/GenericPerson]GenericName[/generic][/twitter][/user]

[siteurl][aryion=https://example.com/aryion][furaffinity=https://example.com/furaffinity][inkbunny=https://example.com/inkbunny][sofurry=https://example.com/sofurry][generic=https://example.com/generic]Link[/generic][/sofurry][/inkbunny][/furaffinity][/aryion][/siteurl]
```
test/description/output_1/desc_aryion.txt (new file, 9 lines)

```
[b]Hello world![/b]

This is just a [u]simple[/u] test to show that basic functionality of [url=https://github.com/BadMannersXYZ/upload-generator]upload-generator[/url] [i]works[/i]. And this is running in a unit test.

[center]Reminder that I am :iconUserAryion:![/center]

My friend: [url=https://example.org/@FriendMastodon]Friend123[/url] (I dunno his account here...)

[url=https://example.com/eka]Check this page![/url]
```

test/description/output_1/desc_furaffinity.txt (new file, 9 lines)

```
[b]Hello world![/b]

This is just a [u]simple[/u] test to show that basic functionality of [url=https://github.com/BadMannersXYZ/upload-generator]upload-generator[/url] [i]works[/i]. And this is running in a unit test.

[center]Reminder that I am :iconUserFuraffinity:![/center]

My friend: :iconFriendFa:

[url=https://example.com/generic]Check this page![/url]
```

test/description/output_1/desc_inkbunny.txt (new file, 9 lines)

```
[b]Hello world![/b]

This is just a [u]simple[/u] test to show that basic functionality of [url=https://github.com/BadMannersXYZ/upload-generator]upload-generator[/url] [i]works[/i]. And this is running in a unit test.

[center]Reminder that I am [iconname]UserInkbunny[/iconname]![/center]

My friend: [fa]FriendFa[/fa] (I dunno his account here...)

[url=https://example.com/ib]Check this page![/url]
```

test/description/output_1/desc_sofurry.txt (new file, 9 lines)

```
[b]Hello world![/b]

This is just a [u]simple[/u] test to show that basic functionality of [url=https://github.com/BadMannersXYZ/upload-generator]upload-generator[/url] [i]works[/i]. And this is running in a unit test.

[center]Reminder that I am :iconUserSoFurry:![/center]

My friend: :iconFriendSoFurry:

[url=https://example.com/generic]Check this page![/url]
```

test/description/output_1/desc_weasyl.md (new file, 9 lines)

```markdown
**Hello world!**

This is just a <u>simple</u> test to show that basic functionality of [upload-generator](https://github.com/BadMannersXYZ/upload-generator) *works*. And this is running in a unit test.

<div class="align-center">Reminder that I am <!~UserWeasyl>!</div>

My friend: <fa:FriendFa> (I dunno his account here...)

[Check this page!](https://example.com/generic)
```
test/description/output_2/desc_aryion.txt (new file, 12 lines)

```
:iconUserAryion:

-> :iconEkaPerson: :iconEkaPerson:
[url=https://furaffinity.net/user/FaPerson]FaName[/url] [url=https://furaffinity.net/user/FaPerson]FaPerson[/url]
[url=https://inkbunny.net/IbPerson]IbName[/url] [url=https://inkbunny.net/IbPerson]IbPerson[/url]
[url=https://sfperson.sofurry.com]SfName[/url] [url=https://sfperson.sofurry.com]SfPerson[/url]
[url=https://www.weasyl.com/~weasylperson]WeasylName[/url] [url=https://www.weasyl.com/~weasylperson]WeasylPerson[/url]
[url=https://twitter.com/XPerson]XName[/url] [url=https://twitter.com/XPerson]XPerson[/url]
[url=https://example.com/@MastodonPerson]MastodonName[/url] [url=https://example.com/@MastodonPerson]MastodonPerson@example.com[/url]
[url=https://example.net/GenericPerson]GenericName[/url]

[url=https://example.com/aryion]Link[/url]
```

test/description/output_2/desc_furaffinity.txt (new file, 12 lines)

```
:iconUserFuraffinity:

[url=https://aryion.com/g4/user/EkaPerson]EkaName[/url] [url=https://aryion.com/g4/user/EkaPerson]EkaPerson[/url]
-> :iconFaPerson: :iconFaPerson:
[url=https://inkbunny.net/IbPerson]IbName[/url] [url=https://inkbunny.net/IbPerson]IbPerson[/url]
[url=https://sfperson.sofurry.com]SfName[/url] [url=https://sfperson.sofurry.com]SfPerson[/url]
[url=https://www.weasyl.com/~weasylperson]WeasylName[/url] [url=https://www.weasyl.com/~weasylperson]WeasylPerson[/url]
[url=https://twitter.com/XPerson]XName[/url] [url=https://twitter.com/XPerson]XPerson[/url]
[url=https://example.com/@MastodonPerson]MastodonName[/url] [url=https://example.com/@MastodonPerson]MastodonPerson@example.com[/url]
[url=https://example.net/GenericPerson]GenericName[/url]

[url=https://example.com/furaffinity]Link[/url]
```

test/description/output_2/desc_inkbunny.txt (new file, 12 lines)

```
[iconname]UserInkbunny[/iconname]

[url=https://aryion.com/g4/user/EkaPerson]EkaName[/url] [url=https://aryion.com/g4/user/EkaPerson]EkaPerson[/url]
[fa]FaPerson[/fa] [fa]FaPerson[/fa]
-> [iconname]IbPerson[/iconname] [iconname]IbPerson[/iconname]
[sf]SfPerson[/sf] [sf]SfPerson[/sf]
[weasyl]weasylperson[/weasyl] [weasyl]weasylperson[/weasyl]
[url=https://twitter.com/XPerson]XName[/url] [url=https://twitter.com/XPerson]XPerson[/url]
[url=https://example.com/@MastodonPerson]MastodonName[/url] [url=https://example.com/@MastodonPerson]MastodonPerson@example.com[/url]
[url=https://example.net/GenericPerson]GenericName[/url]

[url=https://example.com/inkbunny]Link[/url]
```

test/description/output_2/desc_sofurry.txt (new file, 12 lines)

```
:iconUserSoFurry:

[url=https://aryion.com/g4/user/EkaPerson]EkaName[/url] [url=https://aryion.com/g4/user/EkaPerson]EkaPerson[/url]
fa!FaPerson fa!FaPerson
ib!IbPerson ib!IbPerson
-> :iconSfPerson: :iconSfPerson:
[url=https://www.weasyl.com/~weasylperson]WeasylName[/url] [url=https://www.weasyl.com/~weasylperson]WeasylPerson[/url]
[url=https://twitter.com/XPerson]XName[/url] [url=https://twitter.com/XPerson]XPerson[/url]
[url=https://example.com/@MastodonPerson]MastodonName[/url] [url=https://example.com/@MastodonPerson]MastodonPerson@example.com[/url]
[url=https://example.net/GenericPerson]GenericName[/url]

[url=https://example.com/sofurry]Link[/url]
```

test/description/output_2/desc_weasyl.md (new file, 12 lines)

```markdown
<!~UserWeasyl>

[EkaName](https://aryion.com/g4/user/EkaPerson) [EkaPerson](https://aryion.com/g4/user/EkaPerson)
<fa:FaPerson> <fa:FaPerson>
<ib:IbPerson> <ib:IbPerson>
<sf:SfPerson> <sf:SfPerson>
-> <!~WeasylPerson> <!~WeasylPerson>
[XName](https://twitter.com/XPerson) [XPerson](https://twitter.com/XPerson)
[MastodonName](https://example.com/@MastodonPerson) [MastodonPerson@example.com](https://example.com/@MastodonPerson)
[GenericName](https://example.net/GenericPerson)

[Link](https://example.com/generic)
```
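The output_2 fixtures above pin down how a single `[fa=user]Name[/fa]` user tag is rendered per target site: a native FA mention where the site supports one, and a plain profile link elsewhere. A minimal sketch of that mapping, with hypothetical helper names read off the fixtures (the real logic lives in description.py):

```python
# Native FA-mention syntax per site, as seen in the output_2 fixtures.
FA_USER_FORMATS = {
    'furaffinity': lambda user, name: f':icon{user}:',
    'inkbunny': lambda user, name: f'[fa]{user}[/fa]',
    'sofurry': lambda user, name: f'fa!{user}',
    'weasyl': lambda user, name: f'<fa:{user}>',
}

def render_fa_user(site, user, name):
    """Render a [fa=user]name[/fa] user tag for the given target site."""
    fmt = FA_USER_FORMATS.get(site)
    if fmt is not None:
        return fmt(user, name)
    # Sites without a native FA mention fall back to a BBCode profile link.
    return f'[url=https://furaffinity.net/user/{user}]{name}[/url]'
```

Note that the native forms drop the display name and show the FA username instead, which matches the fixtures.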