Further improvements to descriptions/config

- Allow alternative keys to be used in the config (eg. `eka` or `eka_portal` => `aryion`) and refactor out this logic
- Refactor duplicated config-parsing logic
- Add `-D / --define-option` args for script invocation conditions
- Allow the `-f / --file-path` arg to be used several times
- Allow `-f / --file-path` to be used without setting up an input story or description

parent 382423fe5a
commit f3fabf2d8a

5 changed files with 133 additions and 79 deletions
README.md (23 changes)
````diff
@@ -9,20 +9,20 @@ Script to generate multi-gallery upload-ready files.
 
 ## Installation
 
-I recommend creating a virtualenv first. Linux/Mac/Unix example:
+I recommend creating a virtualenv first. Linux/macOS/Unix example:
 
 ```sh
 virtualenv venv
-source venv/bin/activate # Also run every time you'll use this tool
+source venv/bin/activate # Also run every time you use this tool
 pip install -r requirements.txt
 activate-global-python-argcomplete
 ```
 
-Windows example (no autocompletion):
+Windows example (autocompletion is not available):
 
 ```powershell
 virtualenv venv
-./venv/Scripts/activate # Also run every time you'll use this tool
+.\venv\Scripts\activate # Also run every time you use this tool
 pip install -r requirements.txt
 ```
 
````
````diff
@@ -48,7 +48,7 @@ In order to parse descriptions, you need a configuration file (default path is `
 }
 ```
 
-Uppercase letters are optional. Only include your username for websites that you wish to generate descriptions for.
+Uppercase letters for usernames are optional. Only include your username for websites that you wish to generate descriptions/stories for.
 
 #### Basic formatting
 
````
````diff
@@ -66,17 +66,18 @@ Input descriptions should be formatted as BBCode. The following tags are accepte
 
 #### Conditional formatting
 
-Another special set of tags is `[if=...][/if]` or `[if=...][/if][else][/else]`. The `if` tag lets you conditionally show content for each website. The `else` tag is optional but must appear immediately after an `if` tag (no whitespace in-between), and displays whenever the condition is false instead.
+Another special set of tags is `[if=...][/if]` or `[if=...][/if][else][/else]`. The `if` tag lets you conditionally show content. The `else` tag is optional but must appear immediately after an `if` tag (no whitespace in-between), and displays whenever the condition is false instead.
 
-The following parameter is available:
+The following parameters are available:
 
-- `site`: eg. `[if=site==fa]...[/if]` or `[if=site!=furaffinity]...[/if][else]...[/else]`
+- `site`: generated according to the target website, eg. `[if=site==fa]...[/if]` or `[if=site!=furaffinity]...[/if][else]...[/else]`
+- `define`: generated according to the argument(s) passed to the script on the command line (i.e. with the `-D / --define-option` flag), eg. `[if=define==prod]...[/if][else]...[/else]` or `[if=define in possible_flag_1,possible_flag_2]...[/if][else]...[/else]`
 
 The following conditions are available:
 
-- `==`: eg. `[if=site==eka]Only show this on Eka's Portal![/if][else]Show this everywhere except Eka's Portal![/else]`
+- `==`: eg. `[if=site==eka]Only show this on Eka's Portal.[/if][else]Show this everywhere except Eka's Portal![/else]`
 - `!=`: eg. `[if=site!=eka]Show this everywhere except Eka's Portal![/if]`
-- ` in `: eg. `[if=site in eka,fa]Only show this on Eka's Portal and Fur Affinity![/if]`
+- ` in `: eg. `[if=site in eka,fa]Only show this on Eka's Portal or Fur Affinity...[/if]`
 
 #### Switch formatting
 
````
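The `if`/`else` semantics above can be sanity-checked with a small standalone evaluator. This is only an illustrative sketch: the script's real parser is a lark grammar in description.py, and the helper name `evaluate_condition`, its regexes, and the literal comparison of `site` (ignoring alias resolution such as `fa` vs `furaffinity`) are all assumptions of this example.

```python
import re

def evaluate_condition(condition, site, define_options):
    # Hypothetical evaluator mirroring the README's `==`, `!=` and ` in ` conditions
    m = re.fullmatch(r'\s*(site|define)\s*(==|!=)\s*([\w-]+)\s*', condition)
    if m:
        name, op, value = m.groups()
        matched = value in define_options if name == 'define' else value == site
        return matched if op == '==' else not matched
    m = re.fullmatch(r'\s*(site|define)\s+in\s+([\w-]+(?:\s*,\s*[\w-]+)*)\s*', condition)
    if m:
        name = m.group(1)
        values = {v.strip() for v in m.group(2).split(',')}
        if name == 'define':
            return bool(values & set(define_options))
        return site in values
    raise ValueError(f'Unsupported condition: {condition!r}')
```

For instance, `evaluate_condition('define==prod', 'eka', {'prod'})` is true only when `prod` was defined, matching the `-D prod` invocation described above.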
````diff
@@ -101,7 +102,7 @@ These tags are nestable and flexible, requiring attributes to display informatio
 ```bbcode
 [user][eka]Lorem[/eka][/user] is equivalent to [user][eka=Lorem][/eka][/user].
 
-[user][fa=Ipsum]Dolor[/fa][/user] shows Ipsum's username on Fur Affinity, and "Dolor" everywhere else with a link to Ipsum's FA userpage.
+[user][fa=Ipsum]Dolor[/fa][/user] shows Ipsum's username on Fur Affinity, and "Dolor" everywhere else with a link to Ipsum's userpage on FA.
 
 [user][ib=Sit][weasyl=Amet][twitter=Consectetur][/twitter][/weasyl][/ib][/user] will show different usernames on Inkbunny and Weasyl. For other websites, the innermost user name and link are prioritized - Twitter, in this case.
 [user][ib=Sit][twitter=Consectetur][weasyl=Amet][/weasyl][/twitter][/ib][/user] is similar, but the Weasyl user data is prioritized for websites other than Inkbunny. In this case, the Twitter tag is rendered useless, since descriptions can't be generated for the website.
````
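The innermost-first fallback described for nested `[user]` tags can be modeled with a small standalone class. `SiteSwitch` is a simplified, hypothetical stand-in for `SiteSwitchTag` in description.py, assuming nested tags are recorded innermost-first (as the transformer's bottom-up traversal implies).

```python
class SiteSwitch:
    def __init__(self, default=None):
        self.default = default
        self._values = {}  # tag -> value; insertion order == innermost-first

    def __setitem__(self, tag, value):
        # An inner assignment for the same tag wins over an outer one
        self._values.setdefault(tag, value)

    def resolve(self, tag):
        if tag in self._values:
            return self._values[tag]
        # No exact match: fall back to the innermost recorded value, then the default
        return next(iter(self._values.values()), self.default)
```

With `[user][ib=Sit][weasyl=Amet][twitter=Consectetur]...`, the tags arrive as twitter, weasyl, ib; `resolve('furaffinity')` then yields the innermost value (`Consectetur`), matching the behaviour described above.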
description.py (102 changes)
```diff
@@ -8,13 +8,7 @@ import re
 import subprocess
 import typing
 
-SUPPORTED_SITE_TAGS: typing.Mapping[str, typing.Set[str]] = {
-    'aryion': {'eka', 'aryion'},
-    'furaffinity': {'fa', 'furaffinity'},
-    'weasyl': {'weasyl'},
-    'inkbunny': {'ib', 'inkbunny'},
-    'sofurry': {'sf', 'sofurry'},
-}
+from sites import SUPPORTED_SITE_TAGS
 
 SUPPORTED_USER_TAGS: typing.Mapping[str, typing.Set[str]] = {
     **SUPPORTED_SITE_TAGS,
```
```diff
@@ -70,11 +64,9 @@ DESCRIPTION_GRAMMAR += r"""
 USERNAME: / *[a-zA-Z0-9][a-zA-Z0-9 _-]*/
 URL: / *(https?:\/\/)?[^\]]+ */
 TEXT: /([^\[]|[ \t\r\n])+/
-CONDITION: / *[a-z]+ *(==|!=) *[a-zA-Z0-9]+ *| *[a-z]+ +in +([a-zA-Z0-9]+ *, *)*[a-zA-Z0-9]+ */
+CONDITION: / *[a-z]+ *(==|!=) *[a-zA-Z0-9_-]+ *| *[a-z]+ +in +([a-zA-Z0-9_-]+ *, *)*[a-zA-Z0-9_-]+ */
 """
 
-DESCRIPTION_PARSER = lark.Lark(DESCRIPTION_GRAMMAR, parser='lalr')
-
 
 class SiteSwitchTag:
     def __init__(self, default: typing.Optional[str]=None, **kwargs):
```
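The CONDITION terminal change above widens the accepted token alphabet with `_` and `-`, which is what makes flags like `possible_flag_1` usable in `[if=define in ...]`. A quick check of the two regexes side by side (copied from the hunk; `re.fullmatch` stands in for lark's terminal matching):

```python
import re

# Old and new CONDITION patterns, without the grammar's /.../ delimiters
OLD_CONDITION = r' *[a-z]+ *(==|!=) *[a-zA-Z0-9]+ *| *[a-z]+ +in +([a-zA-Z0-9]+ *, *)*[a-zA-Z0-9]+ *'
NEW_CONDITION = r' *[a-z]+ *(==|!=) *[a-zA-Z0-9_-]+ *| *[a-z]+ +in +([a-zA-Z0-9_-]+ *, *)*[a-zA-Z0-9_-]+ *'

def accepts(pattern, text):
    # True when the whole condition string matches the terminal pattern
    return re.fullmatch(pattern, text) is not None
```

The old pattern rejects `define==some_flag` because the value class lacked `_`; the new one accepts it.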
```diff
@@ -104,18 +96,23 @@ class SiteSwitchTag:
         yield from self._sites
 
 class UploadTransformer(lark.Transformer):
-    def __init__(self, *args, **kwargs):
+    def __init__(self, define_options=set(), *args, **kwargs):
         super().__init__(*args, **kwargs)
+        self.define_options = define_options
         # Init user_tag_xxxx methods
         def _user_tag_factory(tag):
             # Create a new user SiteSwitchTag if innermost node, or append to list in order
             def user_tag(data):
                 attribute, inner = data[0], data[1]
-                if isinstance(inner, SiteSwitchTag):
-                    inner[tag] = attribute.strip()
-                    return inner
-                user = SiteSwitchTag(default=inner and inner.strip())
-                user[tag] = attribute.strip()
+                if attribute and attribute.strip():
+                    if isinstance(inner, SiteSwitchTag):
+                        inner[tag] = attribute.strip()
+                        return inner
+                    user = SiteSwitchTag(default=inner and inner.strip())
+                    user[tag] = attribute.strip()
+                    return user
+                user = SiteSwitchTag()
+                user[tag] = inner.strip()
                 return user
             return user_tag
         for tag in SUPPORTED_USER_TAGS:
```
```diff
@@ -129,7 +126,7 @@ class UploadTransformer(lark.Transformer):
             if isinstance(inner, SiteSwitchTag):
                 inner[tag] = attribute.strip()
                 return inner
-            siteurl = SiteSwitchTag(default=inner.strip())
+            siteurl = SiteSwitchTag(default=inner and inner.strip())
             siteurl[tag] = attribute.strip()
             return siteurl
         siteurl = SiteSwitchTag()
```
```diff
@@ -163,6 +160,9 @@ class UploadTransformer(lark.Transformer):
     def transformer_matches_site(self, site: str) -> bool:
         raise NotImplementedError('UploadTransformer.transformer_matches_site is abstract')
 
+    def transformer_matches_define(self, option: str) -> bool:
+        return option in self.define_options
+
     def if_tag(self, data: typing.Tuple[str, str, str]):
         condition, truthy_document, falsy_document = data[0], data[1], data[2]
         # Test equality condition, i.e. `site==foo`
```
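The new hook is trivial but worth illustrating, along with one subtlety: the `define_options=set()` default in `__init__` above is a shared mutable default. It is harmless here because the set is never mutated, but a defensive sketch would copy it (`Transformer` below is an illustrative stand-in, not the real lark transformer):

```python
class Transformer:
    def __init__(self, define_options=None):
        # Copy into a fresh set so callers' lists/sets are not shared or mutated
        self.define_options = set(define_options or ())

    def transformer_matches_define(self, option):
        return option in self.define_options
```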
```diff
@@ -324,10 +324,12 @@ class PlaintextTransformer(UploadTransformer):
         return super().user_tag_root(data)
 
 class AryionTransformer(BbcodeTransformer):
-    def __init__(self, self_user, *args, **kwargs):
+    def __init__(self, self_user=None, *args, **kwargs):
         super().__init__(*args, **kwargs)
         def self_tag(data):
-            return self.user_tag_root((SiteSwitchTag(aryion=self_user),))
+            if self_user:
+                return self.user_tag_root((SiteSwitchTag(aryion=self_user),))
+            raise ValueError('self_tag is unavailable for AryionTransformer - no user provided')
         self.self_tag = self_tag
 
     @staticmethod
```
```diff
@@ -347,10 +349,12 @@ class AryionTransformer(BbcodeTransformer):
         return super().siteurl_tag_root(data)
 
 class FuraffinityTransformer(BbcodeTransformer):
-    def __init__(self, self_user, *args, **kwargs):
+    def __init__(self, self_user=None, *args, **kwargs):
         super().__init__(*args, **kwargs)
         def self_tag(data):
-            return self.user_tag_root((SiteSwitchTag(furaffinity=self_user),))
+            if self_user:
+                return self.user_tag_root((SiteSwitchTag(furaffinity=self_user),))
+            raise ValueError('self_tag is unavailable for FuraffinityTransformer - no user provided')
         self.self_tag = self_tag
 
     @staticmethod
```
```diff
@@ -370,10 +374,12 @@ class FuraffinityTransformer(BbcodeTransformer):
         return super().siteurl_tag_root(data)
 
 class WeasylTransformer(MarkdownTransformer):
-    def __init__(self, self_user, *args, **kwargs):
+    def __init__(self, self_user=None, *args, **kwargs):
         super().__init__(*args, **kwargs)
         def self_tag(data):
-            return self.user_tag_root((SiteSwitchTag(weasyl=self_user),))
+            if self_user:
+                return self.user_tag_root((SiteSwitchTag(weasyl=self_user),))
+            raise ValueError('self_tag is unavailable for WeasylTransformer - no user provided')
         self.self_tag = self_tag
 
     @staticmethod
```
```diff
@@ -401,10 +407,12 @@ class WeasylTransformer(MarkdownTransformer):
         return super().siteurl_tag_root(data)
 
 class InkbunnyTransformer(BbcodeTransformer):
-    def __init__(self, self_user, *args, **kwargs):
+    def __init__(self, self_user=None, *args, **kwargs):
         super().__init__(*args, **kwargs)
         def self_tag(data):
-            return self.user_tag_root((SiteSwitchTag(inkbunny=self_user),))
+            if self_user:
+                return self.user_tag_root((SiteSwitchTag(inkbunny=self_user),))
+            raise ValueError('self_tag is unavailable for InkbunnyTransformer - no user provided')
         self.self_tag = self_tag
 
     @staticmethod
```
```diff
@@ -432,10 +440,12 @@ class InkbunnyTransformer(BbcodeTransformer):
         return super().siteurl_tag_root(data)
 
 class SoFurryTransformer(BbcodeTransformer):
-    def __init__(self, self_user, *args, **kwargs):
+    def __init__(self, self_user=None, *args, **kwargs):
         super().__init__(*args, **kwargs)
         def self_tag(data):
-            return self.user_tag_root((SiteSwitchTag(sofurry=self_user),))
+            if self_user:
+                return self.user_tag_root((SiteSwitchTag(sofurry=self_user),))
+            raise ValueError('self_tag is unavailable for SoFurryTransformer - no user provided')
         self.self_tag = self_tag
 
     @staticmethod
```
```diff
@@ -461,7 +471,7 @@ class SoFurryTransformer(BbcodeTransformer):
         return super().siteurl_tag_root(data)
 
 
-def parse_description(description_path, config_path, out_dir, ignore_empty_files=False):
+def parse_description(description_path, config, out_dir, ignore_empty_files=False, define_options=set()):
     for proc in psutil.process_iter(['cmdline']):
         if proc.info['cmdline'] and 'libreoffice' in proc.info['cmdline'][0] and '--writer' in proc.info['cmdline'][1:]:
             if ignore_empty_files:
```
```diff
@@ -479,7 +489,7 @@ def parse_description(description_path, config_path, out_dir, ignore_empty_files
         else:
             raise RuntimeError(error)
 
-    parsed_description = DESCRIPTION_PARSER.parse(description)
+    parsed_description = lark.Lark(DESCRIPTION_GRAMMAR, parser='lalr').parse(description)
     transformations = {
         'aryion': ('desc_aryion.txt', AryionTransformer),
         'furaffinity': ('desc_furaffinity.txt', FuraffinityTransformer),
```
|
@ -487,22 +497,18 @@ def parse_description(description_path, config_path, out_dir, ignore_empty_files
|
||||||
'sofurry': ('desc_sofurry.txt', SoFurryTransformer),
|
'sofurry': ('desc_sofurry.txt', SoFurryTransformer),
|
||||||
'weasyl': ('desc_weasyl.md', WeasylTransformer),
|
'weasyl': ('desc_weasyl.md', WeasylTransformer),
|
||||||
}
|
}
|
||||||
with open(config_path, 'r') as f:
|
# assert all(k in SUPPORTED_SITE_TAGS for k in transformations)
|
||||||
config = json.load(f)
|
|
||||||
# Validate JSON
|
# Validate JSON
|
||||||
errors = []
|
errors = []
|
||||||
if type(config) is not dict:
|
for (website, username) in config.items():
|
||||||
errors.append(ValueError('Configuration must be a JSON object'))
|
if website not in transformations:
|
||||||
else:
|
errors.append(ValueError(f'Website \'{website}\' is unsupported'))
|
||||||
for (website, username) in config.items():
|
elif type(username) is not str:
|
||||||
if website not in transformations:
|
errors.append(ValueError(f'Website \'{website}\' has invalid username \'{json.dumps(username)}\''))
|
||||||
errors.append(ValueError(f'Website \'{website}\' is unsupported'))
|
elif username.strip() == '':
|
||||||
elif type(username) is not str:
|
errors.append(ValueError(f'Website \'{website}\' has empty username'))
|
||||||
errors.append(ValueError(f'Website \'{website}\' has invalid username \'{json.dumps(username)}\''))
|
if not any(ws in config for ws in transformations):
|
||||||
elif username.strip() == '':
|
errors.append(ValueError('No valid websites found'))
|
||||||
errors.append(ValueError(f'Website \'{website}\' has empty username'))
|
|
||||||
if not any(ws in config for ws in transformations):
|
|
||||||
errors.append(ValueError('No valid websites found'))
|
|
||||||
if errors:
|
if errors:
|
||||||
raise ExceptionGroup('Invalid configuration for description parsing', errors)
|
raise ExceptionGroup('Invalid configuration for description parsing', errors)
|
||||||
# Create descriptions
|
# Create descriptions
|
||||||
|
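The rewritten validation collects every problem before failing, rather than stopping at the first one. A standalone sketch of the same pattern, returning the messages instead of raising (the real code wraps `ValueError`s in an `ExceptionGroup`, which requires Python 3.11+):

```python
def collect_config_errors(config, supported_websites):
    # Gather every validation problem instead of failing fast
    errors = []
    for website, username in config.items():
        if website not in supported_websites:
            errors.append(f"Website '{website}' is unsupported")
        elif not isinstance(username, str):
            errors.append(f"Website '{website}' has invalid username")
        elif username.strip() == '':
            errors.append(f"Website '{website}' has empty username")
    if not any(ws in config for ws in supported_websites):
        errors.append('No valid websites found')
    return errors
```

Reporting all errors at once saves the user several fix-and-rerun cycles when a config has more than one mistake.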
```diff
@@ -511,7 +517,9 @@ def parse_description(description_path, config_path, out_dir, ignore_empty_files
         (filepath, transformer) = transformations[website]
         with open(os.path.join(out_dir, filepath), 'w') as f:
             if description.strip():
-                transformed_description = transformer(username).transform(parsed_description)
-                f.write(RE_MULTIPLE_EMPTY_LINES.sub('\n\n', transformed_description).strip() + '\n')
-            else:
-                f.write('')
+                transformed_description = transformer(self_user=username, define_options=define_options).transform(parsed_description)
+                cleaned_description = RE_MULTIPLE_EMPTY_LINES.sub('\n\n', transformed_description).strip()
+                if cleaned_description:
+                    f.write(cleaned_description)
+                    f.write('\n')
+            f.write('')
```
main.py (66 changes)
```diff
@@ -3,20 +3,45 @@
 import argcomplete
 from argcomplete.completers import FilesCompleter, DirectoriesCompleter
 import argparse
+import json
 import os
+import re
 from subprocess import CalledProcessError
 import shutil
 import tempfile
 
 from description import parse_description
 from story import parse_story
+from sites import INVERSE_SUPPORTED_SITE_TAGS
 
 
-def main(out_dir_path=None, story_path=None, description_path=None, file_path=None, config_path=None, keep_out_dir=False, ignore_empty_files=False):
+def main(out_dir_path=None, story_path=None, description_path=None, file_paths=[], config_path=None, keep_out_dir=False, ignore_empty_files=False, define_options=[]):
     if not out_dir_path:
         raise ValueError('Missing out_dir_path')
     if not config_path:
         raise ValueError('Missing config_path')
+    if not file_paths:
+        file_paths = []
+    if not define_options:
+        define_options = []
+    config = None
+    if story_path or description_path:
+        with open(config_path, 'r') as f:
+            config_json = json.load(f)
+        if type(config_json) is not dict:
+            raise ValueError('The configuration file must contain a valid JSON object')
+        config = {}
+        for k, v in config_json.items():
+            if type(v) is not str:
+                raise ValueError(f'Invalid configuration value for entry "{k}": expected string, got {type(v)}')
+            new_k = INVERSE_SUPPORTED_SITE_TAGS.get(k)
+            if not new_k:
+                print(f'Ignoring unknown configuration key "{k}"...')
+            if new_k in config:
+                raise ValueError(f'Duplicate configuration entry for website "{new_k}": found collision with key "{k}"')
+            config[new_k] = v
+        if len(config) == 0:
+            raise ValueError(f'Invalid configuration file "{config_path}": no valid sites defined')
     remove_out_dir = not keep_out_dir and os.path.isdir(out_dir_path)
     with tempfile.TemporaryDirectory() as tdir:
         # Clear output dir if it exists and shouldn't be kept
```
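The normalization loop added to `main` resolves alternative keys through `INVERSE_SUPPORTED_SITE_TAGS`. Here is a self-contained sketch with a hypothetical two-site inverse map inlined; note that it `continue`s after warning about an unknown key, whereas the hunk above falls through after the print, so an unrecognized key would end up stored under a `None` entry.

```python
# Hypothetical excerpt of sites.py's INVERSE_SUPPORTED_SITE_TAGS
INVERSE_SITE_TAGS = {
    'aryion': 'aryion', 'eka': 'aryion', 'eka_portal': 'aryion',
    'furaffinity': 'furaffinity', 'fa': 'furaffinity',
}

def normalize_config(config_json):
    # Resolve alias keys (e.g. `eka` -> `aryion`) and reject collisions
    config = {}
    for k, v in config_json.items():
        canonical = INVERSE_SITE_TAGS.get(k)
        if canonical is None:
            print(f'Ignoring unknown configuration key "{k}"...')
            continue  # skip unknown keys instead of storing them under None
        if canonical in config:
            raise ValueError(f'Duplicate configuration entry for website "{canonical}": found collision with key "{k}"')
        config[canonical] = v
    return config
```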
```diff
@@ -28,14 +53,17 @@ def main(out_dir_path=None, story_path=None, description_path=None, file_path=No
         try:
             # Convert original file to .rtf (Aryion) and .txt (all others)
             if story_path:
-                parse_story(story_path, config_path, out_dir_path, tdir, ignore_empty_files)
+                parse_story(story_path, config, out_dir_path, tdir, ignore_empty_files)
 
             # Parse FA description and convert for each website
             if description_path:
-                parse_description(description_path, config_path, out_dir_path, ignore_empty_files)
+                define_options_set = set(define_options)
+                if len(define_options_set) < len(define_options):
+                    print('WARNING: duplicated entries defined with -D / --define-option')
+                parse_description(description_path, config, out_dir_path, ignore_empty_files, define_options)
 
-            # Copy generic file over to output
-            if file_path:
+            # Copy generic files over to output
+            for file_path in file_paths:
                 shutil.copy(file_path, out_dir_path)
 
         except CalledProcessError as e:
```
```diff
@@ -59,12 +87,14 @@ if __name__ == '__main__':
                         help='path of output directory').completer = DirectoriesCompleter
     parser.add_argument('-c', '--config', dest='config_path', default='./config.json',
                         help='path of JSON configuration file').completer = FilesCompleter
+    parser.add_argument('-D', '--define-option', dest='define_options', action='append',
+                        help='options to define as a truthy value when parsing descriptions')
     parser.add_argument('-s', '--story', dest='story_path',
                         help='path of LibreOffice-readable story file').completer = FilesCompleter
     parser.add_argument('-d', '--description', dest='description_path',
                         help='path of BBCode-formatted description file').completer = FilesCompleter
-    parser.add_argument('-f', '--file', dest='file_path',
-                        help='path of generic file to include in output (i.e. an image or thumbnail)').completer = FilesCompleter
+    parser.add_argument('-f', '--file', dest='file_paths', action='append',
+                        help='path(s) of generic file(s) to include in output (i.e. an image or thumbnail)').completer = FilesCompleter
     parser.add_argument('-k', '--keep-out-dir', dest='keep_out_dir', action='store_true',
                         help='whether output directory contents should be kept.\nif set, a script error may leave partial files behind')
     parser.add_argument('-I', '--ignore-empty-files', dest='ignore_empty_files', action='store_true',
```
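Both new repeatable flags rely on argparse's `action='append'` behaviour: each occurrence of the flag appends to a list, and a flag that never appears defaults to `None` rather than `[]`, which is why `main` normalizes falsy values. A minimal demonstration:

```python
import argparse

# Same flag shapes as the new -D and -f arguments above
parser = argparse.ArgumentParser()
parser.add_argument('-D', '--define-option', dest='define_options', action='append')
parser.add_argument('-f', '--file', dest='file_paths', action='append')

# Repeated flags accumulate in order of appearance
args = parser.parse_args(['-D', 'prod', '-f', 'cover.png', '-f', 'thumb.png'])
```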
```diff
@@ -72,17 +102,23 @@ if __name__ == '__main__':
     argcomplete.autocomplete(parser)
     args = parser.parse_args()
 
-    if not any([args.story_path, args.description_path]):
-        parser.error('at least one of ( --story | --description ) must be set')
+    file_paths = args.file_paths or []
+    if not (args.story_path or args.description_path or any(file_paths)):
+        parser.error('at least one of ( --story | --description | --file ) must be set')
     if args.out_dir_path and os.path.exists(args.out_dir_path) and not os.path.isdir(args.out_dir_path):
-        parser.error('--output-dir must be an existing directory or inexistent')
+        parser.error(f'--output-dir {args.out_dir_path} must be an existing directory or nonexistent; found a file instead')
     if args.story_path and not os.path.isfile(args.story_path):
-        parser.error('--story must be a valid file')
+        parser.error(f'--story {args.story_path} is not a valid file')
     if args.description_path and not os.path.isfile(args.description_path):
-        parser.error('--description must be a valid file')
-    if args.file_path and not os.path.isfile(args.file_path):
-        parser.error('--file must be a valid file')
-    if args.config_path and not os.path.isfile(args.config_path):
+        parser.error(f'--description {args.description_path} is not a valid file')
+    for file_path in file_paths:
+        if not os.path.isfile(file_path):
+            parser.error(f'--file {file_path} is not a valid file')
+    if (args.story_path or args.description_path) and args.config_path and not os.path.isfile(args.config_path):
         parser.error('--config must be a valid file')
+    if args.define_options:
+        for option in args.define_options:
+            if not re.match(r'^[a-zA-Z0-9_-]+$', option):
+                parser.error(f'--define-option {option} is not a valid option; it must only contain alphanumeric characters, dashes, or underscores')
 
     main(**vars(args))
```
sites.py (13 changes, new file)
```diff
@@ -0,0 +1,13 @@
+import itertools
+import typing
+
+SUPPORTED_SITE_TAGS: typing.Mapping[str, typing.Set[str]] = {
+    'aryion': {'aryion', 'eka', 'eka_portal'},
+    'furaffinity': {'furaffinity', 'fa'},
+    'weasyl': {'weasyl'},
+    'inkbunny': {'inkbunny', 'ib'},
+    'sofurry': {'sofurry', 'sf'},
+}
+
+INVERSE_SUPPORTED_SITE_TAGS: typing.Mapping[str, str] = \
+    dict(itertools.chain.from_iterable(zip(v, itertools.repeat(k)) for (k, v) in SUPPORTED_SITE_TAGS.items()))
```
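The inverse map flattens each `(canonical, aliases)` pair into `alias -> canonical` entries. A two-site excerpt showing the same `itertools` construction next to an equivalent dict comprehension (names here are local to the example):

```python
import itertools

# A two-entry excerpt of the table above, to show how the inversion works
SITE_TAGS = {
    'aryion': {'aryion', 'eka', 'eka_portal'},
    'furaffinity': {'furaffinity', 'fa'},
}

# sites.py's construction: pair every alias with its canonical name, then flatten
inverse = dict(itertools.chain.from_iterable(
    zip(aliases, itertools.repeat(site)) for (site, aliases) in SITE_TAGS.items()
))

# An equivalent, arguably more readable dict comprehension
inverse_alt = {alias: site for site, aliases in SITE_TAGS.items() for alias in aliases}
```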
story.py (8 changes)
```diff
@@ -17,15 +17,11 @@ def get_rtf_styles(rtf_source: str):
         rtf_styles[style_name] = rtf_style
     return rtf_styles
 
-def parse_story(story_path, config_path, out_dir, temp_dir, ignore_empty_files=False):
-    with open(config_path, 'r') as f:
-        config = json.load(f)
-    if type(config) is not dict:
-        raise ValueError('Invalid configuration for story parsing: Configuration must be a JSON object')
+def parse_story(story_path, config, out_dir, temp_dir, ignore_empty_files=False):
     should_create_txt_story = any(ws in config for ws in ('furaffinity', 'inkbunny', 'sofurry'))
     should_create_md_story = any(ws in config for ws in ('weasyl',))
     should_create_rtf_story = any(ws in config for ws in ('aryion',))
-    if not any((should_create_txt_story, should_create_md_story, should_create_rtf_story)):
+    if not (should_create_txt_story or should_create_md_story or should_create_rtf_story):
         raise ValueError('Invalid configuration for story parsing: No valid websites found')
 
     for proc in psutil.process_iter(['cmdline']):
```