mirror of
https://github.com/ManiMatter/decluttarr.git
synced 2026-04-17 19:53:54 +02:00
Added sigterm handling to exit cleanly when running in Docker.
Fixed typos and various linting issues such as PEP violations. Added ruff and fixed the common issues it flagged.
3
.gitignore
vendored
@@ -9,4 +9,5 @@ venv
temp
.notebooks
**/old/
logs/*
logs/*
.idea/*
2
.vscode/settings.json
vendored
@@ -6,5 +6,5 @@
"tests"
],
"python.testing.unittestEnabled": false,
"python.testing.pytestEnabled": true,
"python.testing.pytestEnabled": true
}
@@ -2,9 +2,9 @@

## Table of contents
- [Overview](#overview)
- [Feature Requests](#feature--requests)
- [Bug Reports](#bug--reports)
- [Code Contributions](#code--contributions)
- [Feature Requests](#feature-requests)
- [Bug Reports](#bug-reports)
- [Code Contributions](#code-contributions)

## Overview
Thank you for wanting to contribute to this project.
@@ -27,7 +27,7 @@ Please go follow these steps to submit a bug:
- Create meaningful logs by:
1) Switch decluttarr to debug mode (setting LOG_LEVEL: DEBUG)
2) Turn off all remove functions but one where you expect a removal (example: REMOVE_STALLED: True and the rest on False)
3) Let it run until the supposed remove should be trigged
3) Let it run until the supposed remove should be triggered
4) Paste the full logs to a pastebin
5) Share your settings (docker-compose or config.conf)
6) Optional: If helpful, share screenshots showing the problem (from your arr-app or qbit)
@@ -40,23 +40,23 @@ Code contributions are very welcome - thanks for helping improve this app!
3) Only commit code that you have written yourself and is not owned by anybody else
4) Create a PR against the "dev" branch
5) Be responsive to code review
5) Once the code is reviewed and OK, it will be merged to dev branch, which will create the "dev"-docker image
6) Help testing that the dev image works
7= Finally, we will then commit the change to the main branch, which will create the "latest"-docker image
6) Once the code is reviewed and OK, it will be merged to dev branch, which will create the "dev"-docker image
7) Help testing that the dev image works
8) Finally, we will then commit the change to the main branch, which will create the "latest"-docker image

You do not need to know about how to create docker images to contribute here.
To get started:
1) Create a fork of decluttarr
2) Clone the git repository from the dev branch to your local machine `git clone -b dev https://github.com/yourName/decluttarr`
2) Create a virtual python environment (`python3 -m venv venv`)
3) Activate the virtual environment (`source venv/bin/activate`)
4) Install python libraries (`pip install -r docker/requirements.txt`)
5) Adjust the config/config.conf to your needs
6) Adjust the code in the files as needed
7) Run the script (`python3 main.py`)
8) Push your changes to your own git repo and use a descriptive name for the branch name (e.g. add-feature-to-xyz; bugfix-xyz)
9) Test the dev-image it creates automatically
10) Create the PR from your repo to ManiMatter/decluttarr (dev branch)
11) Make sure all checks pass
12) Squash your commits
13) Test that the docker image works that was created when you pushed to your fork
3) Create a virtual python environment (`python3 -m venv venv`)
4) Activate the virtual environment (`source venv/bin/activate`)
5) Install python libraries (`pip install -r docker/requirements.txt`)
6) Adjust the config/config.conf to your needs
7) Adjust the code in the files as needed
8) Run the script (`python3 main.py`)
9) Push your changes to your own git repo and use a descriptive name for the branch name (e.g. add-feature-to-xyz; bugfix-xyz)
10) Test the dev-image it creates automatically
11) Create the PR from your repo to ManiMatter/decluttarr (dev branch)
12) Make sure all checks pass
13) Squash your commits
14) Test that the docker image works that was created when you pushed to your fork
34
README.md
@@ -11,7 +11,7 @@ _Like this app? Thanks for giving it a_ ⭐️
- [Docker-compose with config file (recommended)](#docker-docker-compose-together-with-configyaml)
- [Docker-compose only](#docker-specifying-all-settings-in-docker-compose)
- [Explanation of the settings](#explanation-of-the-settings)
- [General](#general)
- [General](#general-settings)
- [LOG_LEVEL](#log_level)
- [TEST_RUN](#test_run)
- [TIMER](#timer)
@@ -36,7 +36,7 @@ _Like this app? Thanks for giving it a_ ⭐️
- [REMOVE_UNMONITORED](#remove_unmonitored)
- [SEARCH_CUTOFF_UNMET_CONTENT](#search_unmet_cutoff_content)
- [SEARCH_MISSING_CONTENT](#search_missing_content)
- [Instances](#instances)
- [Instances](#arr-instances)
- [SONARR](#sonarr)
- [RADARR](#radarr)
- [READARR](#readarr)
@@ -64,7 +64,7 @@ Feature overview:
- Removing downloads that are stalled
- Removing downloads belonging to movies/series/albums etc. that have been marked as "unmonitored"
- Periodically searching for better content on movies/series/albums etc. where cutoff has not been reached yet
- Periodcially searching for missing content that has not yet been found
- Periodically searching for missing content that has not yet been found

Key behaviors:
@@ -91,7 +91,7 @@ How to run this:
- The feature that distinguishes private and private trackers (private_tracker_handling, public_tracker_handling) does not work
- Removal of bad files and <100% availability (remove_bad_files) does not work
- If you see strange errors such as "found 10 / 3 times", consider turning on the setting "Reject Blocklisted Torrent Hashes While Grabbing". On nightly Radarr/Sonarr/Readarr/Lidarr/Whisparr, the option is located under settings/indexers in the advanced options of each indexer, on Prowlarr it is under settings/apps and then the advanced settings of the respective app
- If you use qBittorrent and none of your torrents get removed and the verbose logs tell that all torrents are protected by the protected_tag even if they are not, you may be using a qBittorrent version that has problems with API calls and you may want to consider switching to a different qBit image (see https://github.com/ManiMatter/decluttarr/issues/56)
- If you use qBittorrent and none of your torrents get removed and the verbose logs tell that all torrents are protected by the protected_tag even if they are not, you may be using a qBittorrent version that has problems with API calls, and you may want to consider switching to a different qBit image (see https://github.com/ManiMatter/decluttarr/issues/56)
- Currently, “\*Arr” apps are only supported in English. Refer to issue https://github.com/ManiMatter/decluttarr/issues/132 for more details
- If you experience yaml issues, please check the closed issues. There are different notations, and it may very well be that the issue you found has already been solved in one of the issues. Once you figured your problem, feel free to post your yaml to help others here: https://github.com/ManiMatter/decluttarr/issues/173
@@ -125,13 +125,13 @@ jobs:
### Running in docker

In docker, there are two ways how you can run decluttarr.
The [recommendeded approach](#docker-docker-compose-together-with-configyaml) is to use a config.yaml file (similar to running the script [locally](#running-locally)).
The [recommended approach](#docker-docker-compose-together-with-configyaml) is to use a config.yaml file (similar to running the script [locally](#running-locally)).
Alternatively, you can put all settings [directly in your docker-compose](#docker-specifying-all-settings-in-docker-compose), which may bloat it a bit.

#### Docker: Docker-compose together with Config.yaml
1. Use the following input for your `docker-compose.yml`
2. Download the config_example.yaml from the config folder (on github) and put it into your mounted folder
2. Download the config_example.yaml from the config folder (on GitHub) and put it into your mounted folder
3. Rename it to config.yaml and adjust the settings to your needs
4. Run `docker-compose up -d` in the directory where the file is located to create the docker container
@@ -162,7 +162,7 @@ If you want to have everything in docker compose:
1. Use the following input for your `docker-compose.yml`
2. Tweak the settings to your needs
3. Remove the things that are commented out (if you don't need them), or uncomment them
4. If you face problems with yaml formats etc, please first check the open and closed issues on github, before opening new ones
4. If you face problems with yaml formats etc., please first check the open and closed issues on GitHub, before opening new ones
5. Run `docker-compose up -d` in the directory where the file is located to create the docker container

Note: Always pull the "**latest**" version. The "dev" version is for testing only, and should only be pulled when contributing code or supporting with bug fixes
@@ -207,7 +207,7 @@ services:
# SEARCH_MISSING_CONTENT: True

# # --- OR: Jobs (with job-specific settings) ---
# Alternatively, you can use the below notation, which for certain jobs allows you to set additioanl parameters
# Alternatively, you can use the below notation, which for certain jobs allows you to set additional parameters
# As written above, these can also be set as Job Defaults so you don't have to specify them as granular as below.
# REMOVE_BAD_FILES: True
# REMOVE_FAILED_DOWNLOADS: True
@@ -313,14 +313,14 @@ Configures the general behavior of the application (across all features)
- Allows you to configure download client names that will be skipped by decluttarr
Note: The names provided here have to 100% match with how you have named your download clients in your *arr application(s)
- Type: List of strings
- Is Mandatory: No (Defaults to [], ie. nothing ignored])
- Is Mandatory: No (Defaults to [], i.e. nothing ignored])

#### PRIVATE_TRACKER_HANDLING / PUBLIC_TRACKER_HANDLING

- Defines what happens with private/public tracker torrents if they are flagged by a removal job
- Note that this only works for qbittorrent currently (if you set up qbittorrent in your config)
- "remove" means that torrents are removed (default behavior)
- "skip" means they are disregarded (which some users might find handy to protect their private trackers prematurely, ie., before their seed targets are met)
- "skip" means they are disregarded (which some users might find handy to protect their private trackers prematurely, i.e., before their seed targets are met)
- "obsolete_tag" means that rather than being removed, the torrents are tagged. This allows other applications (such as [qbit_manage](https://github.com/StuffAnThings/qbit_manage) to monitor them and remove them once seed targets are fulfilled)
- Type: String
- Permissible Values: remove, skip, obsolete_tag
@@ -371,10 +371,10 @@ If a job has the same settings configured on job-level, the job-level settings w
#### MAX_CONCURRENT_SEARCHES

- Only relevant together with search_unmet_cutoff_content and search_missing_content
- Specified how many ites concurrently on a single arr should be search for in a given iteration
- Specified how many ites concurrently on a single arr should be searched for in a given iteration
- Each arr counts separately
- Example: If your wanted-list has 100 entries, and you define "3" as your number, after roughly 30 searches you'll have all items on your list searched for.
- Since the timer-setting steer how often the jobs run, if you put 10minutes there, after one hour you'll have run 6x, and thus already processed 18 searches. Long story short: No need to put a very high number here (else you'll just create unecessary traffic on your end..).
- Since the timer-setting steer how often the jobs run, if you put 10minutes there, after one hour you'll have run 6x, and thus already processed 18 searches. Long story short: No need to put a very high number here (else you'll just create unnecessary traffic on your end.).
- Type: Integer
- Permissible Values: Any number
- Is Mandatory: No (Defaults to 3)
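The arithmetic in the example above works out as follows (example numbers from the text; only the default of 3 for MAX_CONCURRENT_SEARCHES is an actual setting):

```python
# Example values from the README text above (not real deployment defaults,
# except MAX_CONCURRENT_SEARCHES, which does default to 3).
timer_minutes = 10            # TIMER: jobs run every 10 minutes
max_concurrent_searches = 3   # searches per *arr per run
wanted_items = 100            # entries on the wanted-list

runs_per_hour = 60 // timer_minutes                          # 6 runs per hour
searches_per_hour = runs_per_hour * max_concurrent_searches  # 18 searches per hour
hours_to_cover = wanted_items / searches_per_hour            # about 5.6 hours

print(runs_per_hour, searches_per_hour, round(hours_to_cover, 1))
```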
@@ -388,12 +388,12 @@ This is the interesting section. It defines which job you want decluttarr to run
- Steers whether files within torrents are marked as 'not download' if they match one of these conditions
1) They are less than 100% available
2) They are not one of the desired file types supported by the *arr apps:
3) They contain one of these words (case insensitive) and are smaller than 500 MB:
3) They contain one of these words (case-insensitive) and are smaller than 500 MB:
- Trailer
- Sample

- If all files of a torrent are marked as 'not download' then the torrent will be removed and blacklisted
- Note that this is only supported when qBittorrent is configured in decluttarr and it will turn on the setting 'Keep unselected files in ".unwanted" folder' in qBittorrent
- Note that this is only supported when qBittorrent is configured in decluttarr, and it will turn on the setting 'Keep unselected files in ".unwanted" folder' in qBittorrent
- Type: Boolean
- Permissible Values: True, False
- Is Mandatory: No (Defaults to False)
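Condensed into code, the three conditions read roughly like this. The helper name and the abbreviated extension set are illustrative only; decluttarr's actual implementation lives in remove_bad_files.py (shown further down in this diff):

```python
from pathlib import Path

GOOD_EXTENSIONS = {".mkv", ".mp4", ".avi", ".srt", ".mp3", ".flac", ".epub"}  # abbreviated
BAD_KEYWORDS = ("trailer", "sample")
BAD_KEYWORD_LIMIT_MB = 500


def should_not_download(name: str, size_mb: float, availability: float) -> bool:
    """Return True if a file inside a torrent should be marked 'not download'."""
    if availability < 1:                                    # 1) less than 100% available
        return True
    if Path(name).suffix.lower() not in GOOD_EXTENSIONS:    # 2) undesired file type
        return True
    lowered = name.lower()
    # 3) bad keyword (case-insensitive) and smaller than 500 MB
    return any(k in lowered for k in BAD_KEYWORDS) and size_mb < BAD_KEYWORD_LIMIT_MB
```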
@@ -426,8 +426,8 @@ This is the interesting section. It defines which job you want decluttarr to run
- Permissible Values: True, False or max_strikes (int)
- Is Mandatory: No (Defaults to False)
- Note:
- With max_strikes you can define how many time this torrent can be caught before being removed
- Instead of configuring it here, you may also configure it as a default across all jobs or use the built-in defaults (see futher above under "Max_Strikes")
- With max_strikes you can define how many times this torrent can be caught before being removed
- Instead of configuring it here, you may also configure it as a default across all jobs or use the built-in defaults (see further above under "Max_Strikes")

#### REMOVE_MISSING_FILES
@@ -450,7 +450,7 @@ This is the interesting section. It defines which job you want decluttarr to run

- Steers whether slow downloads are removed from the queue
- Blocklisted: Yes
- Note: Does not apply to usenet downloads (since there users pay for certain speed, slowness should not occurr)
- Note: Does not apply to usenet downloads (since there users pay for certain speed, slowness should not occur)
- Type: Boolean or Dict
- Permissible Values:
If bool: True, False
@@ -57,7 +57,7 @@ instances:

download_clients:
qbittorrent:
- base_url: "http://qbittorrent:8080" # You can use decluttar without qbit (not all features available, see readme).
- base_url: "http://qbittorrent:8080" # You can use decluttarr without qbit (not all features available, see readme).
# username: xxxx # (optional -> if not provided, assuming not needed)
# password: xxxx # (optional -> if not provided, assuming not needed)
# name: "qBittorrent" # (optional -> if not provided, assuming "qBittorrent". Must correspond with what is specified in your *arr as download client name)
@@ -37,7 +37,7 @@ COPY main.py main.py
COPY src src

# Install health check
# Install health check
RUN apt-get update && apt-get install -y --no-install-recommends procps && \
apt-get clean && rm -rf /var/lib/apt/lists/*
HEALTHCHECK --interval=30s --timeout=5s --start-period=10s --retries=3 \
@@ -11,3 +11,4 @@ autoflake==2.3.1
isort==5.13.2
envyaml==1.10.211231
demjson3==3.0.6
ruff==0.11.11
24
main.py
@@ -1,13 +1,17 @@
import asyncio
from src.settings.settings import Settings
import signal
import sys
import types

from src.utils.startup import launch_steps
from src.utils.log_setup import logger
from src.job_manager import JobManager
from src.settings.settings import Settings
from src.utils.log_setup import logger
from src.utils.startup import launch_steps

settings = Settings()
job_manager = JobManager(settings)

# # Main function
async def main():
await launch_steps(settings)
@@ -29,8 +33,20 @@ async def main():
# Wait for the next run
await asyncio.sleep(settings.general.timer * 60)
return

if __name__ == "__main__":
def terminate(sigterm: signal.SIGTERM, frame: types.FrameType) -> None: # noqa: ARG001
"""
Terminate cleanly. Needed for respecting 'docker stop'.

Args:
----
sigterm (signal.Signal): The termination signal.
frame: The execution frame.

"""
logger.info("Termination signal received.")
sys.exit(0)
signal.signal(signal.SIGTERM, terminate)
asyncio.run(main())
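For reference, asyncio also offers a loop-native way to catch SIGTERM. This is a hedged sketch of that alternative, not what the commit above does (the commit uses `signal.signal` plus `sys.exit`):

```python
import asyncio
import signal


async def main() -> str:
    stop = asyncio.Event()
    loop = asyncio.get_running_loop()
    # Wake the event loop on SIGTERM instead of raising SystemExit
    # from inside a plain signal handler (Unix-only API).
    loop.add_signal_handler(signal.SIGTERM, stop.set)
    # Simulate "docker stop" shortly after startup (illustration only).
    loop.call_later(0.05, signal.raise_signal, signal.SIGTERM)
    await stop.wait()
    return "clean shutdown"


print(asyncio.run(main()))
```

The advantage of `loop.add_signal_handler` is that the callback runs inside the event loop, so pending tasks can be cancelled and awaited before exiting.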
54
ruff.toml
Normal file
@@ -0,0 +1,54 @@
# Exclude a variety of commonly ignored directories.
exclude = [
".bzr",
".direnv",
".eggs",
".git",
".git-rewrite",
".hg",
".ipynb_checkpoints",
".mypy_cache",
".nox",
".pants.d",
".pyenv",
".pytest_cache",
".pytype",
".ruff_cache",
".svn",
".tox",
".venv",
".vscode",
"__pypackages__",
"_build",
"buck-out",
"build",
"dist",
"node_modules",
"site-packages",
"venv",
]

# Assume Python 3.10
target-version = "py310"

[lint]
# Enable Pyflakes (`F`) and a subset of the pycodestyle (`E`) codes by default.
# Unlike Flake8, Ruff doesn't enable pycodestyle warnings (`W`) or
# McCabe complexity (`C901`) by default.
select = ["ALL"]
ignore = ["D203", "D212", "E501"]

[lint.per-file-ignores]
# "src/jobs/remove_bad_files.py" = ["ERA001"]
"tests/settings/test__user_config_from_env.py" = ["S101"]
"tests/jobs/test_strikes_handler.py" = ["S101", "SLF001"]
"tests/jobs/test_remove_unmonitored.py" = ["S101", "SLF001"]
"tests/jobs/test_remove_stalled.py" = ["S101", "SLF001"]
"tests/jobs/test_remove_slow.py" = ["S101", "SLF001"]
"tests/jobs/test_remove_orphans.py" = ["S101", "SLF001"]
"tests/jobs/test_remove_missing_files.py" = ["S101", "SLF001"]
"tests/jobs/test_remove_metadata_missing.py" = ["S101", "SLF001"]
"tests/jobs/test_remove_failed_imports.py" = ["S101", "SLF001"]
"tests/jobs/test_remove_failed_downloads.py" = ["S101", "SLF001"]
"tests/jobs/test_remove_bad_files.py" = ["S101", "SLF001"]
"tests/jobs/test_removal_handler.py" = ["S101", "SLF001"]
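For context on the per-file ignores above: ruff's S101 rule (inherited from flake8-bandit) flags every bare `assert`, but pytest tests are built on them, so the test files opt out. A typical test that would otherwise be flagged:

```python
# S101 would flag the assert below; the ruff.toml above disables the rule
# for test files only, where assert is the idiomatic pytest check.
def test_addition() -> None:
    result = 2 + 2
    assert result == 4
```

SLF001 (private-member access) is likewise ignored for tests, which routinely poke at `_private` attributes of the classes under test.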
@@ -1,18 +1,16 @@
# Cleans the download queue
from src.utils.log_setup import logger
from src.utils.queue_manager import QueueManager

from src.jobs.remove_bad_files import RemoveBadFiles
from src.jobs.remove_failed_imports import RemoveFailedImports
from src.jobs.remove_failed_downloads import RemoveFailedDownloads
from src.jobs.remove_failed_imports import RemoveFailedImports
from src.jobs.remove_metadata_missing import RemoveMetadataMissing
from src.jobs.remove_missing_files import RemoveMissingFiles
from src.jobs.remove_orphans import RemoveOrphans
from src.jobs.remove_slow import RemoveSlow
from src.jobs.remove_stalled import RemoveStalled
from src.jobs.remove_unmonitored import RemoveUnmonitored

from src.jobs.search_handler import SearchHandler
from src.utils.log_setup import logger
from src.utils.queue_manager import QueueManager


class JobManager:
@@ -27,7 +25,7 @@ class JobManager:
await self.search_jobs()

async def removal_jobs(self):
logger.verbose(f"")
logger.verbose("")
logger.verbose(f"Cleaning queue on {self.arr.name}:")
if not await self._queue_has_items():
return
@@ -62,27 +60,26 @@ class JobManager:
full_queue = await queue_manager.get_queue_items("full")
if full_queue:
logger.debug(
f"job_runner/full_queue at start: %s",
"job_runner/full_queue at start: %s",
queue_manager.format_queue(full_queue),
)
return True
else:
logger.verbose(">>> Queue is empty.")
return False
logger.verbose(">>> Queue is empty.")
return False

async def _qbit_connected(self):
for qbit in self.settings.download_clients.qbittorrent:
# Check if any client is disconnected
if not await qbit.check_qbit_connected():
logger.warning(
f">>> qBittorrent is disconnected. Skipping queue cleaning on {self.arr.name}."
f">>> qBittorrent is disconnected. Skipping queue cleaning on {self.arr.name}.",
)
return False
return True

def _get_removal_jobs(self):
"""
Returns a list of enabled removal job instances based on the provided settings.
Return a list of enabled removal job instances based on the provided settings.

Each job is included if the corresponding attribute exists and is truthy in settings.jobs.
"""
@@ -102,6 +99,6 @@ class JobManager:
for removal_job_name, removal_job_class in removal_job_classes.items():
if getattr(self.settings.jobs, removal_job_name, False):
jobs.append(
removal_job_class(self.arr, self.settings, removal_job_name)
removal_job_class(self.arr, self.settings, removal_job_name),
)
return jobs

0
src/jobs/__init__.py
Normal file
@@ -1,5 +1,6 @@
from src.utils.log_setup import logger


class RemovalHandler:
def __init__(self, arr, settings, job_name):
self.arr = arr
@@ -37,7 +38,6 @@ class RemovalHandler:
str(self.arr.tracker.deleted),
)

async def _remove_download(self, queue_item, blocklist):
queue_id = queue_item["id"]
logger.info(f">>> Job '{self.job_name}' triggered removal: {queue_item['title']}")
@@ -50,19 +50,18 @@ class RemovalHandler:
for qbit in self.settings.download_clients.qbittorrent:
await qbit.set_tag(tags=[self.settings.general.obsolete_tag], hashes=[download_id])

async def _get_handling_method(self, download_id, queue_item):
if queue_item['protocol'] != 'torrent':
return "remove" # handling is only implemented for torrent
if queue_item["protocol"] != "torrent":
return "remove" # handling is only implemented for torrent

client_implemenation = await self.arr.get_download_client_implementation(queue_item['downloadClient'])
if client_implemenation != "QBittorrent":
return "remove" # handling is only implemented for qbit
client_implementation = await self.arr.get_download_client_implementation(queue_item["downloadClient"])
if client_implementation != "QBittorrent":
return "remove" # handling is only implemented for qbit

if len(self.settings.download_clients.qbittorrent) == 0:
return "remove" # qbit not configured, thus can't tag
return "remove" # qbit not configured, thus can't tag

if download_id in self.arr.tracker.private:
return self.settings.general.private_tracker_handling

return self.settings.general.public_tracker_handling

@@ -1,8 +1,9 @@
from abc import ABC, abstractmethod
from src.utils.queue_manager import QueueManager
from src.utils.log_setup import logger
from src.jobs.strikes_handler import StrikesHandler

from src.jobs.removal_handler import RemovalHandler
from src.jobs.strikes_handler import StrikesHandler
from src.utils.log_setup import logger
from src.utils.queue_manager import QueueManager


class RemovalJob(ABC):
@@ -15,23 +16,22 @@ class RemovalJob(ABC):
max_strikes = None

# Default class attributes (can be overridden in subclasses)
def __init__(self, arr, settings, job_name):
def __init__(self, arr, settings, job_name) -> None:
self.arr = arr
self.settings = settings
self.job_name = job_name
self.job = getattr(self.settings.jobs, self.job_name)
self.queue_manager = QueueManager(self.arr, self.settings)
self.strikes_handler = StrikesHandler( job_name=self.job_name, arr=self.arr, max_strikes=self.max_strikes, )
self.strikes_handler = StrikesHandler(job_name=self.job_name, arr=self.arr, max_strikes=self.max_strikes)

async def run(self):
async def run(self) -> int:
if not self.job.enabled:
return 0
if await self.is_queue_empty(self.job_name, self.queue_scope):
if self.max_strikes:
self.strikes_handler.all_recovered()
return 0

logger.debug(f"removal_job.py: Running job '{self.job_name}'")
self.affected_items = await self._find_affected_items()
self.affected_downloads = self.queue_manager.group_by_download_id(self.affected_items)
@@ -52,8 +52,6 @@ class RemovalJob(ABC):

return len(self.affected_downloads)

async def is_queue_empty(self, job_name, queue_scope="normal"):
# Check if queue empty
queue_items = await self.queue_manager.get_queue_items(queue_scope)
@@ -62,14 +60,12 @@ class RemovalJob(ABC):
self.queue_manager.format_queue(queue_items),
)
# Early exit if no queue
if not queue_items:
return True
return False
return bool(not queue_items)
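A small aside on the `return bool(not queue_items)` refactor above: in Python, `not` already yields a `bool`, so the `bool(...)` wrapper is a harmless no-op. A quick check:

```python
queue_items: list = []

# "not" always returns True or False, so bool() adds nothing here.
assert (not queue_items) is True
assert bool(not queue_items) is (not queue_items)

queue_items = [{"id": 1}]
assert (not queue_items) is False
```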

def _ignore_protected(self):
def _ignore_protected(self) -> None:
"""
Filters out downloads that are in the protected tracker.
Filter out downloads that are in the protected tracker.

Directly updates self.affected_downloads.
"""
self.affected_downloads = {
@@ -78,6 +74,6 @@ class RemovalJob(ABC):
if download_id not in self.arr.tracker.protected
}

@abstractmethod # Imlemented on level of each removal job
async def _find_affected_items(self):
pass
@abstractmethod # Implemented on level of each removal job
async def _find_affected_items(self) -> None:
pass

@@ -1,25 +1,30 @@
import os
from pathlib import Path

from src.jobs.removal_job import RemovalJob
from src.utils.log_setup import logger

good_extensions = [
# Movies, TV Shows - Radarr, Sonarr, Whisparr
".webm", ".m4v", ".3gp", ".nsv", ".ty", ".strm", ".rm", ".rmvb", ".m3u", ".ifo", ".mov", ".qt", ".divx", ".xvid", ".bivx", ".nrg", ".pva", ".wmv", ".asf", ".asx", ".ogm", ".ogv", ".m2v", ".avi", ".bin", ".dat", ".dvr-ms", ".mpg", ".mpeg", ".mp4", ".avc", ".vp3", ".svq3", ".nuv", ".viv", ".dv", ".fli", ".flv", ".wpl", ".img", ".iso", ".vob", ".mkv", ".mk3d", ".ts", ".wtv", ".m2ts",
# Subs - Radarr, Sonarr, Whisparr
".sub", ".srt", ".idx",
# Audio - Lidarr, Readarr
".aac", ".aif", ".aiff", ".aifc", ".ape", ".flac", ".mp2", ".mp3", ".m4a", ".m4b", ".m4p", ".mp4a", ".oga", ".ogg", ".opus", ".vorbis", ".wma", ".wav", ".wv", "wavepack",
# Text - Readarr
".epub", ".kepub", ".mobi", ".azw3", ".pdf",
]

bad_keywords = ["Sample", "Trailer"]


class RemoveBadFiles(RemovalJob):
queue_scope = "normal"
blocklist = True

# fmt: off
good_extensions = [
# Movies, TV Shows (Radarr, Sonarr, Whisparr)
".webm", ".m4v", ".3gp", ".nsv", ".ty", ".strm", ".rm", ".rmvb", ".m3u", ".ifo", ".mov", ".qt", ".divx", ".xvid", ".bivx", ".nrg", ".pva", ".wmv", ".asf", ".asx", ".ogm", ".ogv", ".m2v", ".avi", ".bin", ".dat", ".dvr-ms", ".mpg", ".mpeg", ".mp4", ".avc", ".vp3", ".svq3", ".nuv", ".viv", ".dv", ".fli", ".flv", ".wpl", ".img", ".iso", ".vob", ".mkv", ".mk3d", ".ts", ".wtv", ".m2ts",
# Subs (Radarr, Sonarr, Whisparr)
".sub", ".srt", ".idx",
# Audio (Lidarr, Readarr)
".aac", ".aif", ".aiff", ".aifc", ".ape", ".flac", ".mp2", ".mp3", ".m4a", ".m4b", ".m4p", ".mp4a", ".oga", ".ogg", ".opus", ".vorbis", ".wma", ".wav", ".wv", "wavepack",
# Text (Readarr)
".epub", ".kepub", ".mobi", ".azw3", ".pdf",
]

bad_keywords = ["Sample", "Trailer"]
bad_keyword_limit = 500 # Megabyte; do not remove items larger than that
bad_keyword_limit = 500 # Megabyte; do not remove items larger than that

# fmt: on

async def _find_affected_items(self):
@@ -38,10 +43,12 @@ class RemoveBadFiles(RemovalJob):
affected_items.extend(client_items)
return affected_items

def _group_download_ids_by_client(self, queue):
"""Group all relevant download IDs by download client.
Limited to qbittorrent currently, as no other download clients implemented"""
"""
Group all relevant download IDs by download client.

Limited to qbittorrent currently, as no other download clients implemented
"""
result = {}

for item in queue:
@@ -59,12 +66,11 @@ class RemoveBadFiles(RemovalJob):

result.setdefault(download_client, {
"download_client_type": download_client_type,
"download_ids": set()
"download_ids": set(),
})["download_ids"].add(item["downloadId"])

return result

async def _handle_qbit(self, qbit_client, hashes, queue):
"""Handle qBittorrent-specific logic for marking files as 'Do Not Download'."""
affected_items = []
@@ -88,55 +94,48 @@ class RemoveBadFiles(RemovalJob):

return affected_items

# -- Helper functions for qbit handling --
# -- Helper functions for qbit handling --
def _get_items_to_process(self, qbit_items):
"""Return only downloads that have metadata, are supposedly downloading.
Additionally, each dowload should be checked at least once (for bad extensions), and thereafter only if availabiliy drops to less than 100%"""
"""
Return only downloads that have metadata, are supposedly downloading.

This is to prevent the case where a download has metadata but is not actually downloading.
Additionally, each download should be checked at least once (for bad extensions), and thereafter only if availability drops to less than 100%
"""
return [
item for item in qbit_items
if (
item.get("has_metadata")
and item["state"] in {"downloading", "forcedDL", "stalledDL"}
and (
item["hash"] not in self.arr.tracker.extension_checked
or item["availability"] < 1
)
item.get("has_metadata")
and item["state"] in {"downloading", "forcedDL", "stalledDL"}
and (
item["hash"] not in self.arr.tracker.extension_checked
or item["availability"] < 1
)
)
]

async def _get_active_files(self, qbit_client, torrent_hash):
@staticmethod
async def _get_active_files(qbit_client, torrent_hash) -> list[dict]:
"""Return only files from the torrent that are still set to download, with file extension and name."""
files = await qbit_client.get_torrent_files(torrent_hash) # Await the async method
return [
{
**f, # Include all original file properties
"file_name": os.path.basename(f["name"]), # Add proper filename (without folder)
"file_extension": os.path.splitext(f["name"])[1], # Add file_extension (e.g., .mp3)
"file_name": Path(f["name"]).name, # Add proper filename (without folder)
"file_extension": Path(f["name"]).suffix, # Add file_extension (e.g., .mp3)
}
for f in files if f["priority"] > 0
]

def _log_stopped_files(self, stopped_files, torrent_name):
|
||||
@staticmethod
|
||||
def _log_stopped_files(stopped_files, torrent_name) -> None:
|
||||
logger.verbose(
|
||||
f">>> Stopped downloading {len(stopped_files)} file{'s' if len(stopped_files) != 1 else ''} in: {torrent_name}"
|
||||
f">>> Stopped downloading {len(stopped_files)} file{'s' if len(stopped_files) != 1 else ''} in: {torrent_name}",
|
||||
)
|
||||
|
||||
for file, reasons in stopped_files:
|
||||
logger.verbose(f">>> - {file['file_name']} ({' & '.join(reasons)})")
|
||||
|
||||
def _all_files_stopped(self, torrent_files):
|
||||
"""Check if no files remain with download priority."""
|
||||
return all(f["priority"] == 0 for f in torrent_files)
|
||||
|
||||
def _match_queue_items(self, queue, download_hash):
|
||||
"""Find matching queue item(s) by downloadId (uppercase)."""
|
||||
return [
|
||||
item for item in queue
|
||||
if item["downloadId"] == download_hash.upper()
|
||||
]
|
||||
|
||||
|
||||
def _get_stoppable_files(self, torrent_files):
|
||||
"""Return files that can be marked as 'Do not Download' based on specific conditions."""
|
||||
stoppable_files = []
|
||||
@@ -153,7 +152,7 @@ class RemoveBadFiles(RemovalJob):
|
||||
# Check for bad keywords
|
||||
if self._contains_bad_keyword(file):
|
||||
reasons.append("Contains bad keyword in path")
|
||||
|
||||
|
||||
# Check if the file has low availability
|
||||
if self._is_complete_partial(file):
|
||||
reasons.append(f"Low availability: {file['availability'] * 100:.1f}%")
|
||||
@@ -164,10 +163,10 @@ class RemoveBadFiles(RemovalJob):
|
||||
|
||||
return stoppable_files
|
||||
|
||||
|
||||
def _is_bad_extension(self, file):
|
||||
@staticmethod
|
||||
def _is_bad_extension(file) -> bool:
|
||||
"""Check if the file has a bad extension."""
|
||||
return file['file_extension'].lower() not in self.good_extensions
|
||||
return file["file_extension"].lower() not in good_extensions
|
||||
|
||||
def _contains_bad_keyword(self, file):
|
||||
"""Check if the file path contains a bad keyword and is smaller than the limit."""
|
||||
@@ -175,34 +174,31 @@ class RemoveBadFiles(RemovalJob):
|
||||
file_size_mb = file.get("size", 0) / 1024 / 1024
|
||||
|
||||
return (
|
||||
any(keyword.lower() in file_path for keyword in self.bad_keywords)
|
||||
and file_size_mb <= self.bad_keyword_limit
|
||||
any(keyword.lower() in file_path for keyword in bad_keywords)
|
||||
and file_size_mb <= self.bad_keyword_limit
|
||||
)
|
||||
|
||||
@staticmethod
|
||||
def _is_complete_partial(file) -> bool:
|
||||
"""Check if the availability is less than 100% and the file is not fully downloaded."""
|
||||
return bool(file["availability"] < 1 and file["progress"] != 1)
|
||||
|
||||
def _is_complete_partial(self, file):
|
||||
"""Check if the availability is less than 100% and the file is not fully downloaded"""
|
||||
if file["availability"] < 1 and not file["progress"] == 1:
|
||||
return True
|
||||
return False
|
||||
|
||||
|
||||
async def _mark_files_as_stopped(self, qbit_client, torrent_hash, stoppable_files):
|
||||
"""Mark specific files as 'Do Not Download' in qBittorrent."""
|
||||
for file, _ in stoppable_files:
|
||||
for file, _ in stoppable_files:
|
||||
if not self.settings.general.test_run:
|
||||
await qbit_client.set_torrent_file_priority(torrent_hash, file['index'], 0)
|
||||
await qbit_client.set_torrent_file_priority(torrent_hash, file["index"], 0)
|
||||
|
||||
def _all_files_stopped(self, torrent_files, stoppable_files):
|
||||
@staticmethod
|
||||
def _all_files_stopped(torrent_files, stoppable_files) -> bool:
|
||||
"""Check if all files are either stopped (priority 0) or in the stoppable files list."""
|
||||
stoppable_file_indexes= {file[0]["index"] for file in stoppable_files}
|
||||
stoppable_file_indexes = {file[0]["index"] for file in stoppable_files}
|
||||
return all(f["priority"] == 0 or f["index"] in stoppable_file_indexes for f in torrent_files)
|
||||
|
||||
def _match_queue_items(self, queue, download_hash):
|
||||
@staticmethod
|
||||
def _match_queue_items(queue, download_hash) -> list:
|
||||
"""Find matching queue item(s) by downloadId (uppercase)."""
|
||||
return [
|
||||
item for item in queue
|
||||
if item["downloadId"].upper() == download_hash.upper()
|
||||
]
|
||||
|
||||
|
||||
|
||||
@@ -1,17 +1,13 @@
from src.jobs.removal_job import RemovalJob


class RemoveFailedDownloads(RemovalJob):
queue_scope = "normal"
blocklist = False

async def _find_affected_items(self):
queue = await self.queue_manager.get_queue_items(queue_scope="normal")
affected_items = []

for item in queue:
if "status" in item:
if item["status"] == "failed":
affected_items.append(item)
return affected_items
return [item for item in queue if "status" in item and item["status"] == "failed"]

@@ -1,6 +1,8 @@
import fnmatch

from src.jobs.removal_job import RemovalJob


class RemoveFailedImports(RemovalJob):
queue_scope = "normal"
blocklist = True
@@ -21,11 +23,12 @@ class RemoveFailedImports(RemovalJob):

return affected_items

def _is_valid_item(self, item):
@staticmethod
def _is_valid_item(item) -> bool:
"""Check if item has the necessary fields and is in a valid state."""
# Required fields that must be present in the item
required_fields = {"status", "trackedDownloadStatus", "trackedDownloadState", "statusMessages"}


# Check if all required fields are present
if not all(field in item for field in required_fields):
return False
@@ -35,35 +38,33 @@ class RemoveFailedImports(RemovalJob):
return False

# Check if the tracked download state is one of the allowed states
if item["trackedDownloadState"] not in {"importPending", "importFailed", "importBlocked"}:
return False

# If all checks pass, the item is valid
return True
return not (item["trackedDownloadState"] not in {"importPending", "importFailed", "importBlocked"})


def _prepare_removal_messages(self, item, patterns):
def _prepare_removal_messages(self, item, patterns) -> list[str]:
"""Prepare removal messages, adding the tracked download state and matching messages."""
messages = self._get_matching_messages(item["statusMessages"], patterns)
if not messages:
return []

removal_messages = [f">>>>> Tracked Download State: {item['trackedDownloadState']}"] + messages
return removal_messages
return [f">>>>> Tracked Download State: {item['trackedDownloadState']}", *messages]

def _get_matching_messages(self, status_messages, patterns):
@staticmethod
def _get_matching_messages(status_messages, patterns) -> list:
"""Extract messages matching the provided patterns (or all messages if no pattern)."""
matched_messages = []


if not patterns:
# No patterns provided, include all messages
for status_message in status_messages:
matched_messages.extend(f">>>>> - {msg}" for msg in status_message.get("messages", []))
else:
# Patterns provided, match only those messages that fit the patterns
for status_message in status_messages:
for msg in status_message.get("messages", []):
if any(fnmatch.fnmatch(msg, pattern) for pattern in patterns):
matched_messages.append(f">>>>> - {msg}")

return matched_messages
matched_messages.extend(
f">>>>> - {msg}"
for status_message in status_messages
for msg in status_message.get("messages", [])
if any(fnmatch.fnmatch(msg, pattern) for pattern in patterns)
)

return matched_messages

@@ -7,13 +7,4 @@ class RemoveMetadataMissing(RemovalJob):

async def _find_affected_items(self):
queue = await self.queue_manager.get_queue_items(queue_scope="normal")
affected_items = []

for item in queue:
if "errorMessage" in item and "status" in item:
if (
item["status"] == "queued"
and item["errorMessage"] == "qBittorrent is downloading metadata"
):
affected_items.append(item)
return affected_items
return [item for item in queue if "errorMessage" in item and "status" in item and (item["status"] == "queued" and item["errorMessage"] == "qBittorrent is downloading metadata")]

@@ -1,20 +1,17 @@
from src.jobs.removal_job import RemovalJob


class RemoveMissingFiles(RemovalJob):
queue_scope = "normal"
blocklist = False

async def _find_affected_items(self):
queue = await self.queue_manager.get_queue_items(queue_scope="normal")
affected_items = []

for item in queue:
if self._is_failed_torrent(item) or self._is_bad_nzb(item):
affected_items.append(item)
return [item for item in queue if self._is_failed_torrent(item) or self._is_bad_nzb(item)]

return affected_items

def _is_failed_torrent(self, item):
@staticmethod
def _is_failed_torrent(item) -> bool:
return (
"status" in item
and item["status"] == "warning"
@@ -26,7 +23,8 @@ class RemoveMissingFiles(RemovalJob):
]
)

def _is_bad_nzb(self, item):
@staticmethod
def _is_bad_nzb(item) -> bool:
if "status" in item and item["status"] == "completed" and "statusMessages" in item:
for status_message in item["statusMessages"]:
if "messages" in status_message:

@@ -1,11 +1,9 @@
from src.jobs.removal_job import RemovalJob


class RemoveOrphans(RemovalJob):
queue_scope = "full"
blocklist = False

async def _find_affected_items(self):
affected_items = await self.queue_manager.get_queue_items(queue_scope="orphans")
return affected_items


return await self.queue_manager.get_queue_items(queue_scope="orphans")

@@ -26,31 +26,34 @@ class RemoveSlow(RemovalJob):

if self._is_completed_but_stuck(item):
logger.info(
f">>> '{self.job_name}' detected download marked as slow as well as completed. Files most likely in process of being moved. Not removing: {item['title']}"
f">>> '{self.job_name}' detected download marked as slow as well as completed. Files most likely in process of being moved. Not removing: {item['title']}",
)
continue

downloaded, previous, increment, speed = await self._get_progress_stats(
item
item,
)
if self._is_slow(speed):
affected_items.append(item)
logger.debug(
f'remove_slow/slow speed detected: {item["title"]} '
f"(Speed: {speed} KB/s, KB now: {downloaded}, KB previous: {previous}, "
f"Diff: {increment}, In Minutes: {self.settings.general.timer})"
f"Diff: {increment}, In Minutes: {self.settings.general.timer})",
)

return affected_items

def _is_valid_item(self, item):
required_keys = {"downloadId", "size", "sizeleft", "status", "protocol"}
@staticmethod
def _is_valid_item(item) -> bool:
required_keys = {"downloadId", "size", "sizeleft", "status", "protocol"}
return required_keys.issubset(item)

def _is_usenet(self, item):
@staticmethod
def _is_usenet(item) -> bool:
return item.get("protocol") == "usenet"

def _is_completed_but_stuck(self, item):
@staticmethod
def _is_completed_but_stuck(item) -> bool:
return (
item["status"] == "downloading"
and item["size"] > 0
@@ -68,7 +71,7 @@ class RemoveSlow(RemovalJob):

download_progress = self._get_download_progress(item, download_id)
previous_progress, increment, speed = self._compute_increment_and_speed(
download_id, download_progress
download_id, download_progress,
)

self.arr.tracker.download_progress[download_id] = download_progress
@@ -84,15 +87,18 @@ class RemoveSlow(RemovalJob):
return progress
return self._fallback_progress(item)

def _try_get_qbit_progress(self, qbit, download_id):
@staticmethod
def _try_get_qbit_progress(qbit, download_id):
# noinspection PyBroadException
try:
return qbit.get_download_progress(download_id)
except Exception:
except Exception: # noqa: BLE001
return None

def _fallback_progress(self, item):
@staticmethod
def _fallback_progress(item):
logger.debug(
"get_progress_stats: Using imprecise method to determine download increments because either a different download client than qBitorrent is used, or the download client name in the config does not match with what is configured in your *arr download client settings"
"get_progress_stats: Using imprecise method to determine download increments because either a different download client than qBitorrent is used, or the download client name in the config does not match with what is configured in your *arr download client settings",
)
return item["size"] - item["sizeleft"]

@@ -1,3 +1,5 @@
"""Removes stalled downloads."""

from src.jobs.removal_job import RemovalJob


@@ -7,15 +9,7 @@ class RemoveStalled(RemovalJob):

async def _find_affected_items(self):
queue = await self.queue_manager.get_queue_items(queue_scope="normal")
affected_items = []
for item in queue:
if "errorMessage" in item and "status" in item:
if (
item["status"] == "warning"
and item["errorMessage"]
== "The download is stalled with no connections"
):
affected_items.append(item)
return affected_items
return [item for item in queue if "errorMessage" in item and "status" in item and (item["status"] == "warning" and item["errorMessage"] == "The download is stalled with no connections")]

@@ -1,5 +1,6 @@
from src.jobs.removal_job import RemovalJob


class RemoveUnmonitored(RemovalJob):
queue_scope = "normal"
blocklist = False
@@ -10,15 +11,8 @@ class RemoveUnmonitored(RemovalJob):
# First pass: Check if items are monitored
monitored_download_ids = []
for item in queue:
detail_item_id = item["detail_item_id"]
if await self.arr.is_monitored(detail_item_id):
monitored_download_ids.append(item["downloadId"])
if await self.arr.is_monitored(item["detail_item_id"]):
monitored_download_ids.append(item["downloadId"]) # noqa: PERF401 - Can't make this a list comprehension due to the 'await'

# Second pass: Append queue items none that depends on download id is monitored
affected_items = []
for queue_item in queue:
if queue_item["downloadId"] not in monitored_download_ids:
affected_items.append(
queue_item
) # One downloadID may be shared by multiple queue_items. Only removes it if ALL queueitems are unmonitored
return affected_items
return [queue_item for queue_item in queue if queue_item["downloadId"] not in monitored_download_ids] # One downloadID may be shared by multiple queue_items. Only removes it if ALL queue_items are unmonitored

@@ -1,9 +1,10 @@
from datetime import datetime, timedelta, timezone

import dateutil.parser

from src.utils.log_setup import logger
from src.utils.wanted_manager import WantedManager
from src.utils.queue_manager import QueueManager
from src.utils.wanted_manager import WantedManager


class SearchHandler:
@@ -22,7 +23,7 @@ class SearchHandler:
return

queue = await QueueManager(self.arr, self.settings).get_queue_items(
queue_scope="normal"
queue_scope="normal",
)
wanted_items = self._filter_wanted_items(wanted_items, queue)
if not wanted_items:
@@ -40,7 +41,8 @@ class SearchHandler:
logger.verbose(f"Searching for unmet cutoff content on {self.arr.name}:")
self.job = self.settings.jobs.search_unmet_cutoff_content
else:
raise ValueError(f"Unknown search type: {search_type}")
error = f"Unknown search type: {search_type}"
raise ValueError(error)

def _get_initial_wanted_items(self, search_type):
wanted = self.wanted_manager.get_wanted_items(search_type)
@@ -51,13 +53,13 @@ class SearchHandler:
def _filter_wanted_items(self, items, queue):
items = self._filter_already_downloading(items, queue)
if not items:
logger.verbose(f">>> All items already downloading, nothing to search for.")
logger.verbose(">>> All items already downloading, nothing to search for.")
return []

items = self._filter_recent_searches(items)
if not items:
logger.verbose(
f">>> All items recently searched for, thus not triggering another search."
">>> All items recently searched for, thus not triggering another search.",
)
return []

@@ -8,12 +8,10 @@ class StrikesHandler:
self.max_strikes = max_strikes
self.tracker.defective.setdefault(job_name, {})


def check_permitted_strikes(self, affected_downloads):
self._recover_downloads(affected_downloads)
return self._apply_strikes_and_filter(affected_downloads)


def all_recovered(self):
if self.tracker.defective.get(self.job_name):
self.tracker.defective[self.job_name] = {}
@@ -35,7 +33,6 @@ class StrikesHandler:
)
del self.tracker.defective[self.job_name][d_id]


def _apply_strikes_and_filter(self, affected_downloads):
for d_id, queue_items in list(affected_downloads.items()):
title = queue_items[0]["title"]
@@ -47,10 +44,9 @@ class StrikesHandler:

return affected_downloads


def _increment_strike(self, d_id, title):
entry = self.tracker.defective[self.job_name].setdefault(
d_id, {"title": title, "strikes": 0}
d_id, {"title": title, "strikes": 0},
)
entry["strikes"] += 1
return entry["strikes"]
@@ -66,7 +62,7 @@ class StrikesHandler:
">>> Job '%s' detected download (%s/%s strikes): %s",
self.job_name, strikes, self.max_strikes, title,
)
elif strikes_left <= -2:
elif strikes_left <= -2: # noqa: PLR2004
logger.info(
">>> Job '%s' detected download (%s/%s strikes): %s",
self.job_name, strikes, self.max_strikes, title,
@@ -74,4 +70,4 @@ class StrikesHandler:
logger.info(
'>>> [Tip!] Since this download should already have been removed in a previous iteration but keeps coming back, this indicates the blocking of the torrent does not work correctly. Consider turning on the option "Reject Blocklisted Torrent Hashes While Grabbing" on the indexer in the *arr app: %s',
title,
)
)

0
src/settings/__init__.py
Normal file
@@ -1,5 +1,6 @@
import yaml


def mask_sensitive_value(value, key, sensitive_attributes):
"""Mask the value if it's in the sensitive attributes."""
return "*****" if key in sensitive_attributes else value
@@ -40,19 +41,19 @@ def clean_object(obj, sensitive_attributes, internal_attributes, hide_internal_a
"""Clean an object (either a dict, class instance, or other types)."""
if isinstance(obj, dict):
return clean_dict(obj, sensitive_attributes, internal_attributes, hide_internal_attr)
elif hasattr(obj, "__dict__"):
if hasattr(obj, "__dict__"):
return clean_dict(vars(obj), sensitive_attributes, internal_attributes, hide_internal_attr)
else:
return mask_sensitive_value(obj, "", sensitive_attributes)
return mask_sensitive_value(obj, "", sensitive_attributes)


def get_config_as_yaml(
data,
sensitive_attributes=None,
internal_attributes=None,
*,
hide_internal_attr=True,
):
"""Main function to process the configuration into YAML format."""
"""Process the configuration into YAML format."""
if sensitive_attributes is None:
sensitive_attributes = set()
if internal_attributes is None:
@@ -67,7 +68,7 @@ def get_config_as_yaml(
# Process list-based config
if isinstance(obj, list):
cleaned_list = clean_list(
obj, sensitive_attributes, internal_attributes, hide_internal_attr
obj, sensitive_attributes, internal_attributes, hide_internal_attr,
)
if cleaned_list:
config_output[key] = cleaned_list
@@ -75,7 +76,7 @@ def get_config_as_yaml(
# Process dict or class-like object config
else:
cleaned_obj = clean_object(
obj, sensitive_attributes, internal_attributes, hide_internal_attr
obj, sensitive_attributes, internal_attributes, hide_internal_attr,
)
if cleaned_obj:
config_output[key] = cleaned_obj

@@ -1,4 +1,5 @@
import os

from src.settings._config_as_yaml import get_config_as_yaml

@@ -1,12 +1,14 @@
from src.settings._config_as_yaml import get_config_as_yaml
from src.settings._download_clients_qBit import QbitClients
from src.settings._download_clients_qbit import QbitClients

download_client_types = ["qbittorrent"]


class DownloadClients:
"""Represents all download clients."""

qbittorrent = None
download_client_types = [
"qbittorrent",
]

def __init__(self, config, settings):
self._set_qbit_clients(config, settings)
self.check_unique_download_client_types()
@@ -15,7 +17,7 @@ class DownloadClients:
download_clients = config.get("download_clients", {})
if isinstance(download_clients, dict):
self.qbittorrent = QbitClients(config, settings)
if not self.qbittorrent: # Unsets settings in general section needed for qbit (if no qbit is defined)
if not self.qbittorrent: # Unsets settings in general section needed for qbit (if no qbit is defined)
for key in [
"private_tracker_handling",
"public_tracker_handling",
@@ -25,40 +27,42 @@ class DownloadClients:
setattr(settings.general, key, None)

def config_as_yaml(self):
"""Logs all download clients."""
"""Log all download clients."""
return get_config_as_yaml(
{"qbittorrent": self.qbittorrent},
sensitive_attributes={"username", "password", "cookie"},
internal_attributes={ "api_url", "cookie", "settings", "min_version"},
hide_internal_attr=True
internal_attributes={"api_url", "cookie", "settings", "min_version"},
hide_internal_attr=True,
)


def check_unique_download_client_types(self):
"""Ensures that all download client names are unique.
This is important since downloadClient in arr goes by name, and
this is needed to link it to the right IP set up in the yaml config
(which may be different to the one donfigured in arr)"""
"""
Ensure that all download client names are unique.

This is important since downloadClient in arr goes by name, and
this is needed to link it to the right IP set up in the yaml config
(which may be different to the one configured in arr)
"""
seen = set()
for download_client_type in self.download_client_types:
for download_client_type in download_client_types:
download_clients = getattr(self, download_client_type, [])

# Check each client in the list
for client in download_clients:
name = getattr(client, "name", None)
if name is None:
raise ValueError(f'{download_client_type} client does not have a name ({client.base_url}).\nMake sure that the name corresponds with the name set in your *arr app for that download client.')
error = f"{download_client_type} client does not have a name ({client.base_url}).\nMake sure that the name corresponds with the name set in your *arr app for that download client."
raise ValueError(error)

if name.lower() in seen:
raise ValueError(f"Download client names must be unique. Duplicate name found: '{name}'\nMake sure that the name corresponds with the name set in your *arr app for that download client.")
else:
seen.add(name.lower())
error = f"Download client names must be unique. Duplicate name found: '{name}'\nMake sure that the name corresponds with the name set in your *arr app for that download client."
raise ValueError(error)
seen.add(name.lower())

def get_download_client_by_name(self, name: str):
"""Retrieve the download client and its type by its name."""
name_lower = name.lower()
for download_client_type in self.download_client_types:
for download_client_type in download_client_types:
download_clients = getattr(self, download_client_type, [])

# Check each client in the list

@@ -1,14 +1,16 @@
|
||||
from packaging import version
|
||||
from src.utils.common import make_request, wait_and_exit
|
||||
|
||||
from src.settings._constants import ApiEndpoints, MinVersions
|
||||
from src.utils.common import make_request, wait_and_exit
|
||||
from src.utils.log_setup import logger
|
||||
|
||||
|
||||
class QbitError(Exception):
|
||||
pass
|
||||
|
||||
|
||||
class QbitClients(list):
|
||||
"""Represents all qBittorrent clients"""
|
||||
"""Represents all qBittorrent clients."""
|
||||
|
||||
def __init__(self, config, settings):
|
||||
super().__init__()
|
||||
@@ -19,7 +21,7 @@ class QbitClients(list):
|
||||
|
||||
if not isinstance(qbit_config, list):
|
||||
logger.error(
|
||||
"Invalid config format for qbittorrent clients. Expected a list."
|
||||
"Invalid config format for qbittorrent clients. Expected a list.",
|
||||
)
|
||||
return
|
||||
|
||||
@@ -30,29 +32,29 @@ class QbitClients(list):
|
||||
logger.error(f"Error parsing qbittorrent client config: {e}")
|
||||
|
||||
|
||||
|
||||
class QbitClient:
|
||||
"""Represents a single qBittorrent client."""
|
||||
|
||||
cookie: str = None
|
||||
cookie: dict[str, str] = None
|
||||
version: str = None
|
||||
|
||||
def __init__(
|
||||
self,
|
||||
settings,
|
||||
base_url: str = None,
|
||||
username: str = None,
|
||||
password: str = None,
|
||||
name: str = None
|
||||
self,
|
||||
settings,
|
||||
base_url: str | None = None,
|
||||
username: str | None = None,
|
||||
password: str | None = None,
|
||||
name: str | None = None,
|
||||
):
|
||||
self.settings = settings
|
||||
if not base_url:
|
||||
logger.error("Skipping qBittorrent client entry: 'base_url' is required.")
|
||||
raise ValueError("qBittorrent client must have a 'base_url'.")
|
||||
error = "qBittorrent client must have a 'base_url'."
|
||||
raise ValueError(error)
|
||||
|
||||
self.base_url = base_url.rstrip("/")
|
||||
self.api_url = self.base_url + getattr(ApiEndpoints, "qbittorrent")
|
||||
self.min_version = getattr(MinVersions, "qbittorrent")
|
||||
self.api_url = self.base_url + ApiEndpoints.qbittorrent
|
||||
self.min_version = MinVersions.qbittorrent
|
||||
self.username = username
|
||||
self.password = password
|
||||
self.name = name
|
||||
@@ -63,24 +65,28 @@ class QbitClient:
|
||||
self._remove_none_attributes()
|
||||
|
||||
def _remove_none_attributes(self):
|
||||
"""Removes attributes that are None to keep the object clean."""
|
||||
"""Remove attributes that are None to keep the object clean."""
|
||||
for attr in list(vars(self)):
|
||||
if getattr(self, attr) is None:
|
||||
delattr(self, attr)
|
||||
|
||||
|
||||
async def refresh_cookie(self):
|
||||
"""Refresh the qBittorrent session cookie."""
|
||||
|
||||
def _connection_error():
|
||||
error = "Login failed."
|
||||
raise ConnectionError(error)
|
||||
|
||||
try:
|
||||
endpoint = f"{self.api_url}/auth/login"
|
||||
data = {"username": getattr(self, 'username', ''), "password": getattr(self, 'password', '')}
|
||||
data = {"username": getattr(self, "username", ""), "password": getattr(self, "password", "")}
|
||||
headers = {"content-type": "application/x-www-form-urlencoded"}
|
||||
response = await make_request(
|
||||
"post", endpoint, self.settings, data=data, headers=headers
|
||||
"post", endpoint, self.settings, data=data, headers=headers,
|
||||
)
|
||||
|
||||
if response.text == "Fails.":
|
||||
raise ConnectionError("Login failed.")
|
||||
_connection_error()
|
||||
|
||||
self.cookie = {"SID": response.cookies["SID"]}
|
||||
logger.debug("qBit cookie refreshed!")
|
@@ -89,8 +95,6 @@ class QbitClient:
            self.cookie = {}
            raise QbitError(e) from e

    async def fetch_version(self):
        """Fetch the current qBittorrent version."""
        endpoint = f"{self.api_url}/app/version"
@@ -98,24 +102,21 @@ class QbitClient:
        self.version = response.text[1:]  # Remove the '_v' prefix
        logger.debug(f"qBit version for client qBittorrent: {self.version}")

    async def validate_version(self):
        """Check if the qBittorrent version meets minimum and recommended requirements."""
        min_version = self.settings.min_versions.qbittorrent

        if version.parse(self.version) < version.parse(min_version):
            logger.error(
-               f"Please update qBittorrent to at least version {min_version}. Current version: {self.version}"
+               f"Please update qBittorrent to at least version {min_version}. Current version: {self.version}",
            )
-           raise QbitError(
-               f"qBittorrent version {self.version} is too old. Please update."
-           )
+           error = f"qBittorrent version {self.version} is too old. Please update."
+           raise QbitError(error)
        if version.parse(self.version) < version.parse("5.0.0"):
            logger.info(
-               f"[Tip!] Consider upgrading to qBittorrent v5.0.0 or newer to reduce network overhead."
+               "[Tip!] Consider upgrading to qBittorrent v5.0.0 or newer to reduce network overhead.",
            )
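`validate_version` above applies two gates: a hard minimum (raise) and a soft v5.0.0 recommendation (log a tip). The real code uses `packaging.version.parse`; a rough stdlib-only stand-in for the comparison logic (simplified to dotted numeric versions, no pre-release handling):

```python
def parse_simple(v: str) -> tuple[int, ...]:
    # Naive dotted-numeric parser; production code should use packaging.version.parse
    return tuple(int(part) for part in v.split("."))


def version_gate(current: str, minimum: str, recommended: str) -> str:
    if parse_simple(current) < parse_simple(minimum):
        return "error"  # too old: refuse to run
    if parse_simple(current) < parse_simple(recommended):
        return "tip"    # works, but suggest upgrading
    return "ok"
```

Tuple comparison is element-wise, so `(4, 6, 5) < (5, 0, 0)` behaves like a real version comparison for plain releases.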
    async def create_tag(self):
        """Create the protection tag in qBittorrent if it doesn't exist."""
        url = f"{self.api_url}/torrents/tags"
@@ -134,34 +135,32 @@ class QbitClient:
            cookies=self.cookie,
        )

-       if (
-           self.settings.general.public_tracker_handling == "tag_as_obsolete"
-           or self.settings.general.private_tracker_handling == "tag_as_obsolete"
-       ):
-           if self.settings.general.obsolete_tag not in current_tags:
-               logger.verbose(f"Creating obsolete tag: {self.settings.general.obsolete_tag}")
-               if not self.settings.general.test_run:
-                   data = {"tags": self.settings.general.obsolete_tag}
-                   await make_request(
-                       "post",
-                       self.api_url + "/torrents/createTags",
-                       self.settings,
-                       data=data,
-                       cookies=self.cookie,
-                   )
+       if ((self.settings.general.public_tracker_handling == "tag_as_obsolete"
+            or self.settings.general.private_tracker_handling == "tag_as_obsolete")
+           and self.settings.general.obsolete_tag not in current_tags):
+           logger.verbose(f"Creating obsolete tag: {self.settings.general.obsolete_tag}")
+           if not self.settings.general.test_run:
+               data = {"tags": self.settings.general.obsolete_tag}
+               await make_request(
+                   "post",
+                   self.api_url + "/torrents/createTags",
+                   self.settings,
+                   data=data,
+                   cookies=self.cookie,
+               )

    async def set_unwanted_folder(self):
        """Set the 'unwanted folder' setting in qBittorrent if needed."""
        if self.settings.jobs.remove_bad_files:
            endpoint = f"{self.api_url}/app/preferences"
            response = await make_request(
-               "get", endpoint, self.settings, cookies=self.cookie
+               "get", endpoint, self.settings, cookies=self.cookie,
            )
            qbit_settings = response.json()

            if not qbit_settings.get("use_unwanted_folder"):
                logger.info(
-                   "Enabling 'Keep unselected files in .unwanted folder' in qBittorrent."
+                   "Enabling 'Keep unselected files in .unwanted folder' in qBittorrent.",
                )
                if not self.settings.general.test_run:
                    data = {"json": '{"use_unwanted_folder": true}'}
@@ -173,39 +172,32 @@ class QbitClient:
                        cookies=self.cookie,
                    )
    async def check_qbit_reachability(self):
        """Check if the qBittorrent URL is reachable."""
        try:
            endpoint = f"{self.api_url}/auth/login"
-           data = {"username": getattr(self, 'username', ''), "password": getattr(self, 'password', '')}
+           data = {"username": getattr(self, "username", ""), "password": getattr(self, "password", "")}
            headers = {"content-type": "application/x-www-form-urlencoded"}
            await make_request(
-               "post", endpoint, self.settings, data=data, headers=headers, log_error=False
+               "post", endpoint, self.settings, data=data, headers=headers, log_error=False,
            )

-       except Exception as e:
+       except Exception as e:  # noqa: BLE001
            tip = "💡 Tip: Did you specify the URL (and username/password if required) correctly?"
            logger.error(f"-- | qBittorrent\n❗️ {e}\n{tip}\n")
            wait_and_exit()

    async def check_qbit_connected(self):
        """Check if the qBittorrent is connected to internet."""
        qbit_connection_status = ((
            await make_request(
                "get",
                self.api_url + "/sync/maindata",
                self.settings,
                cookies=self.cookie,
            )
        ).json())["server_state"]["connection_status"]
-       if qbit_connection_status == "disconnected":
-           return False
-       else:
-           return True
+       return qbit_connection_status != "disconnected"
    async def setup(self):
        """Perform the qBittorrent setup by calling relevant managers."""
@@ -228,9 +220,8 @@ class QbitClient:
        await self.create_tag()
        await self.set_unwanted_folder()

    async def get_protected_and_private(self):
-       """Fetches torrents from qBittorrent and checks for protected and private status."""
+       """Fetch torrents from qBittorrent and check for protected and private status."""
        protected_downloads = []
        private_downloads = []

@@ -270,11 +261,12 @@ class QbitClient:

    async def set_tag(self, tags, hashes):
        """
-       Sets tags to one or more torrents in qBittorrent.
+       Set tags to one or more torrents in qBittorrent.

        Args:
            tags (list): A list of tag names to be added.
            hashes (list): A list of torrent hashes to which the tags should be applied.

        """
        # Ensure hashes are provided as a string separated by '|'
        hashes_str = "|".join(hashes)
@@ -285,7 +277,7 @@ class QbitClient:
        # Prepare the data for the request
        data = {
            "hashes": hashes_str,
-           "tags": tags_str
+           "tags": tags_str,
        }

        # Perform the request to add the tag(s) to the torrents
@@ -294,15 +286,13 @@ class QbitClient:
            self.api_url + "/torrents/addTags",
            self.settings,
            data=data,
            cookies=self.cookie,
        )
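`set_tag` above sends one request for many torrents: the qBittorrent `/torrents/addTags` endpoint takes hashes joined by `|` (and, per the WebUI API convention, tags joined by `,` — the `tags_str` join itself is elided by the hunk, so that separator is an assumption here). A sketch of the payload building with made-up hash values:

```python
def build_add_tags_payload(tags: list[str], hashes: list[str]) -> dict[str, str]:
    # qBittorrent WebUI API: multiple hashes are '|'-separated, tags ','-separated
    return {"hashes": "|".join(hashes), "tags": ",".join(tags)}
```

Batching into one request is what lets the caller tag a whole queue sweep without N round-trips.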
    async def get_download_progress(self, download_id):
        items = await self.get_qbit_items(download_id)
        return items[0]["completed"]

    async def get_qbit_items(self, hashes=None):
        params = None
        if hashes:
@@ -319,7 +309,6 @@ class QbitClient:
        )
        return response.json()

    async def get_torrent_files(self, download_id):
        # This may not work if the wrong qBittorrent instance is configured
        response = await make_request(
@@ -331,8 +320,8 @@ class QbitClient:
        )
        return response.json()

-   async def set_torrent_file_priority(self, download_id, file_id, priority = 0):
-       data={
+   async def set_torrent_file_priority(self, download_id, file_id, priority=0):
+       data = {
            "hash": download_id.lower(),
            "id": file_id,
            "priority": priority,
@@ -344,4 +333,3 @@ class QbitClient:
            data=data,
            cookies=self.cookie,
        )
@@ -1,11 +1,12 @@
import yaml
-from src.utils.log_setup import logger
-from src.settings._validate_data_types import validate_data_types
from src.settings._config_as_yaml import get_config_as_yaml
+from src.settings._validate_data_types import validate_data_types
+from src.utils.log_setup import logger

+VALID_TRACKER_HANDLING = {"remove", "skip", "obsolete_tag"}


class General:
    """Represents general settings for the application."""
-   VALID_TRACKER_HANDLING = {"remove", "skip", "obsolete_tag"}

    log_level: str = "INFO"
    test_run: bool = False
@@ -17,7 +18,6 @@ class General:
    obsolete_tag: str = None
    protected_tag: str = "Keep"

    def __init__(self, config):
        general_config = config.get("general", {})
        self.log_level = general_config.get("log_level", self.log_level.upper())
@@ -32,31 +32,31 @@ class General:
        self.protected_tag = general_config.get("protected_tag", self.protected_tag)

        # Validate tracker handling settings
-       self.private_tracker_handling = self._validate_tracker_handling( self.private_tracker_handling, "private_tracker_handling" )
-       self.public_tracker_handling = self._validate_tracker_handling( self.public_tracker_handling, "public_tracker_handling" )
+       self.private_tracker_handling = self._validate_tracker_handling(self.private_tracker_handling, "private_tracker_handling")
+       self.public_tracker_handling = self._validate_tracker_handling(self.public_tracker_handling, "public_tracker_handling")
        self.obsolete_tag = self._determine_obsolete_tag(self.obsolete_tag)

        validate_data_types(self)
        self._remove_none_attributes()

    def _remove_none_attributes(self):
-       """Removes attributes that are None to keep the object clean."""
+       """Remove attributes that are None to keep the object clean."""
        for attr in list(vars(self)):
            if getattr(self, attr) is None:
                delattr(self, attr)

-   def _validate_tracker_handling(self, value, field_name):
-       """Validates tracker handling options. Defaults to 'remove' if invalid."""
-       if value not in self.VALID_TRACKER_HANDLING:
+   @staticmethod
+   def _validate_tracker_handling(value, field_name) -> str:
+       """Validate tracker handling options. Defaults to 'remove' if invalid."""
+       if value not in VALID_TRACKER_HANDLING:
            logger.error(
-               f"Invalid value '{value}' for {field_name}. Defaulting to 'remove'."
+               f"Invalid value '{value}' for {field_name}. Defaulting to 'remove'.",
            )
            return "remove"
        return value

    def _determine_obsolete_tag(self, obsolete_tag):
-       """Defaults obsolete tag to "obsolete", only if none is provided and the tag is needed for handling """
+       """Set obsolete tag to "obsolete", only if none is provided and the tag is needed for handling."""
        if obsolete_tag is None and (
            self.private_tracker_handling == "obsolete_tag"
            or self.public_tracker_handling == "obsolete_tag"
@@ -65,10 +65,7 @@ class General:
        return obsolete_tag

    def config_as_yaml(self):
-       """Logs all general settings."""
-       # yaml_output = yaml.dump(vars(self), indent=2, default_flow_style=False, sort_keys=False)
-       # logger.info(f"General Settings:\n{yaml_output}")
+       """Log all general settings."""
        return get_config_as_yaml(
            vars(self),
        )
@@ -1,16 +1,16 @@
import requests
from packaging import version

-from src.utils.log_setup import logger
+from src.settings._config_as_yaml import get_config_as_yaml
from src.settings._constants import (
    ApiEndpoints,
-   MinVersions,
-   FullQueueParameter,
    DetailItemKey,
    DetailItemSearchCommand,
+   FullQueueParameter,
+   MinVersions,
)
-from src.settings._config_as_yaml import get_config_as_yaml
from src.utils.common import make_request, wait_and_exit
+from src.utils.log_setup import logger


class Tracker:
@@ -52,9 +52,9 @@ class Instances:
        """Return a list of arr instances matching the given arr_type."""
        return [arr for arr in self.arrs if arr.arr_type == arr_type]

-   def config_as_yaml(self, hide_internal_attr=True):
-       """Logs all configured Arr instances while masking sensitive attributes."""
-       internal_attributes={
+   def config_as_yaml(self, *, hide_internal_attr=True):
+       """Log all configured Arr instances while masking sensitive attributes."""
+       internal_attributes = {
            "settings",
            "api_url",
            "min_version",
@@ -65,7 +65,7 @@ class Instances:
            "detail_item_id_key",
            "detail_item_ids_key",
            "detail_item_search_command",
        }

        outputs = []
        for arr_type in ["sonarr", "radarr", "readarr", "lidarr", "whisparr"]:
@@ -81,8 +81,6 @@ class Instances:

        return "\n".join(outputs)

    def check_any_arrs(self):
        """Check if there are any ARR instances."""
        if not self.arrs:
@@ -117,12 +115,11 @@ class ArrInstances(list):
                    arr_type=arr_type,
                    base_url=client_config["base_url"],
                    api_key=client_config["api_key"],
-               )
+               ),
            )
        except KeyError as e:
-           logger.error(
-               f"Missing required key {e} in {arr_type} client config."
-           )
+           error = f"Missing required key {e} in {arr_type} client config."
+           logger.error(error)


class ArrInstance:
@@ -135,11 +132,13 @@ class ArrInstance:
    def __init__(self, settings, arr_type: str, base_url: str, api_key: str):
        if not base_url:
            logger.error(f"Skipping {arr_type} client entry: 'base_url' is required.")
-           raise ValueError(f"{arr_type} client must have a 'base_url'.")
+           error = f"{arr_type} client must have a 'base_url'."
+           raise ValueError(error)

        if not api_key:
            logger.error(f"Skipping {arr_type} client entry: 'api_key' is required.")
-           raise ValueError(f"{arr_type} client must have an 'api_key'.")
+           error = f"{arr_type} client must have an 'api_key'."
+           raise ValueError(error)

        self.settings = settings
        self.arr_type = arr_type
@@ -151,8 +150,8 @@ class ArrInstance:
        self.detail_item_key = getattr(DetailItemKey, arr_type)
        self.detail_item_id_key = self.detail_item_key + "Id"
        self.detail_item_ids_key = self.detail_item_key + "Ids"
        self.detail_item_search_command = getattr(DetailItemSearchCommand, arr_type)

    async def _check_ui_language(self):
        """Check if the UI language is set to English."""
        endpoint = self.api_url + "/config/ui"
@@ -162,25 +161,26 @@ class ArrInstance:
        if ui_language > 1:  # Not English
            logger.error("!! %s Error: !!", self.name)
            logger.error(
-               f"> Decluttarr only works correctly if UI language is set to English (under Settings/UI in {self.name})"
+               f"> Decluttarr only works correctly if UI language is set to English (under Settings/UI in {self.name})",
            )
            logger.error(
-               "> Details: https://github.com/ManiMatter/decluttarr/issues/132)"
+               "> Details: https://github.com/ManiMatter/decluttarr/issues/132)",
            )
-           raise ArrError("Not English")
+           error = "Not English"
+           raise ArrError(error)
    def _check_min_version(self, status):
        """Check if ARR instance meets minimum version requirements."""
        self.version = status["version"]
        min_version = getattr(self.settings.min_versions, self.arr_type)

-       if min_version:
-           if version.parse(self.version) < version.parse(min_version):
-               logger.error("!! %s Error: !!", self.name)
-               logger.error(
-                   f"> Please update {self.name} ({self.base_url}) to at least version {min_version}. Current version: {self.version}"
-               )
-               raise ArrError("Not meeting minimum version requirements")
+       if min_version and version.parse(self.version) < version.parse(min_version):
+           logger.error("!! %s Error: !!", self.name)
+           logger.error(
+               f"> Please update {self.name} ({self.base_url}) to at least version {min_version}. Current version: {self.version}",
+           )
+           error = f"Not meeting minimum version requirements: {min_version}"
+           logger.error(error)

    def _check_arr_type(self, status):
        """Check if the ARR instance is of the correct type."""
@@ -188,9 +188,10 @@ class ArrInstance:
        if actual_arr_type.lower() != self.arr_type:
            logger.error("!! %s Error: !!", self.name)
            logger.error(
-               f"> Your {self.name} ({self.base_url}) points to a {actual_arr_type} instance, rather than {self.arr_type}. Did you specify the wrong IP?"
+               f"> Your {self.name} ({self.base_url}) points to a {actual_arr_type} instance, rather than {self.arr_type}. Did you specify the wrong IP?",
            )
-           raise ArrError("Wrong Arr Type")
+           error = "Wrong Arr Type"
+           logger.error(error)

    async def _check_reachability(self):
        """Check if ARR instance is reachable."""
@@ -198,14 +199,13 @@ class ArrInstance:
            endpoint = self.api_url + "/system/status"
            headers = {"X-Api-Key": self.api_key}
            response = await make_request(
-               "get", endpoint, self.settings, headers=headers, log_error=False
+               "get", endpoint, self.settings, headers=headers, log_error=False,
            )
-           status = response.json()
-           return status
+           return response.json()
        except Exception as e:
            if isinstance(e, requests.exceptions.HTTPError):
                response = getattr(e, "response", None)
-               if response is not None and response.status_code == 401:
+               if response is not None and response.status_code == 401:  # noqa: PLR2004
                    tip = "💡 Tip: Have you configured the API_KEY correctly?"
                else:
                    tip = f"💡 Tip: HTTP error occurred. Status: {getattr(response, 'status_code', 'unknown')}"
@@ -218,7 +218,7 @@ class ArrInstance:
            raise ArrError(e) from e

    async def setup(self):
-       """Checks on specific ARR instance"""
+       """Check on specific ARR instance."""
        try:
            status = await self._check_reachability()
            self.name = status.get("instanceName", self.arr_type)
@@ -230,7 +230,7 @@ class ArrInstance:
            logger.info(f"OK | {self.name} ({self.base_url})")
            logger.debug(f"Current version of {self.name}: {self.version}")

-       except Exception as e:
+       except Exception as e:  # noqa: BLE001
            if not isinstance(e, ArrError):
                logger.error(f"Unhandled error: {e}", exc_info=True)
            wait_and_exit()
@@ -253,17 +253,19 @@ class ArrInstance:
            return client.get("implementation", None)
        return None

-   async def remove_queue_item(self, queue_id, blocklist=False):
+   async def remove_queue_item(self, queue_id, *, blocklist=False):
        """
-       Remove a specific queue item from the queue by its qeue id.
+       Remove a specific queue item from the queue by its queue id.

        Sends a delete request to the API to remove the item.

        Args:
-           queue_id (str): The quueue ID of the queue item to be removed.
+           queue_id (str): The queue ID of the queue item to be removed.
            blocklist (bool): Whether to add the item to the blocklist. Default is False.

        Returns:
            bool: Returns True if the removal was successful, False otherwise.

        """
        endpoint = f"{self.api_url}/queue/{queue_id}"
        headers = {"X-Api-Key": self.api_key}
@@ -271,17 +273,14 @@ class ArrInstance:

        # Send the request to remove the download from the queue
        response = await make_request(
-           "delete", endpoint, self.settings, headers=headers, json=json_payload
+           "delete", endpoint, self.settings, headers=headers, json=json_payload,
        )

-       # If the response is successful, return True, else return False
-       if response.status_code == 200:
-           return True
-       else:
-           return False
+       return response.status_code == 200  # noqa: PLR2004

    async def is_monitored(self, detail_id):
-       """Check if detail item (like a book, series, etc) is monitored."""
+       """Check if detail item (like a book, series, etc.) is monitored."""
        endpoint = f"{self.api_url}/{self.detail_item_key}/{detail_id}"
        headers = {"X-Api-Key": self.api_key}

@@ -1,6 +1,6 @@
-from src.utils.log_setup import logger
-from src.settings._validate_data_types import validate_data_types
from src.settings._config_as_yaml import get_config_as_yaml
+from src.settings._validate_data_types import validate_data_types
+from src.utils.log_setup import logger


class JobParams:
@@ -33,7 +33,7 @@ class JobParams:
        self._remove_none_attributes()

    def _remove_none_attributes(self):
-       """Removes attributes that are None to keep the object clean."""
+       """Remove attributes that are None to keep the object clean."""
        for attr in list(vars(self)):
            if getattr(self, attr) is None:
                delattr(self, attr)
@@ -52,16 +52,16 @@ class JobDefaults:
        job_defaults_config = config.get("job_defaults", {})
        self.max_strikes = job_defaults_config.get("max_strikes", self.max_strikes)
        self.max_concurrent_searches = job_defaults_config.get(
-           "max_concurrent_searches", self.max_concurrent_searches
+           "max_concurrent_searches", self.max_concurrent_searches,
        )
        self.min_days_between_searches = job_defaults_config.get(
-           "min_days_between_searches", self.min_days_between_searches
+           "min_days_between_searches", self.min_days_between_searches,
        )
        validate_data_types(self)


class Jobs:
-   """Represents all jobs explicitly"""
+   """Represent all jobs explicitly."""

    def __init__(self, config):
        self.job_defaults = JobDefaults(config)
@@ -73,10 +73,10 @@ class Jobs:
        self.remove_bad_files = JobParams()
        self.remove_failed_downloads = JobParams()
        self.remove_failed_imports = JobParams(
-           message_patterns=self.job_defaults.message_patterns
+           message_patterns=self.job_defaults.message_patterns,
        )
        self.remove_metadata_missing = JobParams(
-           max_strikes=self.job_defaults.max_strikes
+           max_strikes=self.job_defaults.max_strikes,
        )
        self.remove_missing_files = JobParams()
        self.remove_orphans = JobParams()
@@ -102,8 +102,7 @@ class Jobs:
            self._set_job_settings(job_name, config["jobs"][job_name])

    def _set_job_settings(self, job_name, job_config):
-       """Sets per-job config settings"""
+       """Set per-job config settings."""
        job = getattr(self, job_name, None)
        if (
            job_config is None
@@ -128,8 +127,8 @@ class Jobs:

        setattr(self, job_name, job)
        validate_data_types(
-           job, self.job_defaults
-       )  # Validates and applies defauls from job_defaults
+           job, self.job_defaults,
+       )  # Validates and applies defaults from job_defaults

    def log_status(self):
        job_strings = []
@@ -152,7 +151,7 @@ class Jobs:
        )

    def list_job_status(self):
-       """Returns a string showing each job and whether it's enabled or not using emojis."""
+       """Return a string showing each job and whether it's enabled or not using emojis."""
        lines = []
        for name, obj in vars(self).items():
            if hasattr(obj, "enabled"):
@@ -1,5 +1,8 @@
import os
+from pathlib import Path
+
import yaml

from src.utils.log_setup import logger

CONFIG_MAPPING = {
@@ -34,7 +37,8 @@ CONFIG_MAPPING = {


def get_user_config(settings):
-   """Checks if data is read from enviornment variables, or from yaml file.
+   """
+   Check if data is read from environment variables, or from yaml file.

    Reads from environment variables if in docker, unless in docker-compose "USE_CONFIG_YAML" is set to true.
    Then the config file is read.
@@ -53,7 +57,7 @@ def get_user_config(settings):


def _parse_env_var(key: str) -> dict | list | str | int | None:
-   """Helper function to parse one setting input key"""
+   """Parse one setting input key."""
    raw_value = os.getenv(key)
    if raw_value is None:
        return None
@@ -67,7 +71,7 @@ def _parse_env_var(key: str) -> dict | list | str | int | None:


def _load_section(keys: list[str]) -> dict:
-   """Helper function to parse one section of expected config"""
+   """Parse one section of expected config."""
    section_config = {}
    for key in keys:
        parsed = _parse_env_var(key)
@@ -76,14 +80,6 @@ def _load_section(keys: list[str]) -> dict:
        return section_config


-def _load_from_env() -> dict:
-   """Main function to load settings from env"""
-   config = {}
-   for section, keys in CONFIG_MAPPING.items():
-       config[section] = _load_section(keys)
-   return config
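The removed `_load_from_env` above shows the shape of environment-based configuration: walk `CONFIG_MAPPING` section by section and collect the variables that are actually set. A standalone sketch of that idea (the mapping here is a trimmed example, and the real code additionally YAML-parses each raw value):

```python
# Trimmed example of the section -> env-var mapping (the real CONFIG_MAPPING is larger)
CONFIG_MAPPING = {"general": ["LOG_LEVEL", "TEST_RUN"]}


def load_from_env(environ) -> dict:
    """Build {section: {key_lower: value}}; unset variables are simply skipped."""
    config = {}
    for section, keys in CONFIG_MAPPING.items():
        section_config = {}
        for key in keys:
            raw = environ.get(key)
            if raw is not None:
                # The real implementation passes raw through yaml.safe_load here
                section_config[key.lower()] = raw
        config[section] = section_config
    return config
```

Passing the environment in as a plain dict keeps the function testable without touching `os.environ`.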

def _load_from_env() -> dict:
    config = {}

@@ -100,7 +96,7 @@ def _load_from_env() -> dict:
            parsed_value = _lowercase(parsed_value)
        except yaml.YAMLError as e:
            logger.error(
-               f"Failed to parse environment variable {key} as YAML:\n{e}"
+               f"Failed to parse environment variable {key} as YAML:\n{e}",
            )
            parsed_value = {}
        section_config[key.lower()] = parsed_value
@@ -111,28 +107,26 @@ def _load_from_env() -> dict:


def _lowercase(data):
-   """Translates recevied keys (for instance setting-keys of jobs) to lower case"""
+   """Translate received keys (for instance setting-keys of jobs) to lower case."""
    if isinstance(data, dict):
        return {str(k).lower(): _lowercase(v) for k, v in data.items()}
-   elif isinstance(data, list):
+   if isinstance(data, list):
        return [_lowercase(item) for item in data]
-   else:
-       # Leave strings and other types unchanged
-       return data
+   # Leave strings and other types unchanged
+   return data
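The `_lowercase` helper above recurses through nested dicts and lists, lower-casing only the keys and leaving leaf values untouched. The same idea, runnable standalone:

```python
def lowercase_keys(data):
    # Recursively lower-case dict keys; lists are walked, leaf values unchanged
    if isinstance(data, dict):
        return {str(k).lower(): lowercase_keys(v) for k, v in data.items()}
    if isinstance(data, list):
        return [lowercase_keys(item) for item in data]
    return data
```

This is what makes `REMOVE_STALLED`-style environment keys and lower-case YAML keys land on the same config attributes.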

def _config_file_exists(settings):
    config_path = settings.paths.config_file
-   return os.path.exists(config_path)
+   return Path(config_path).exists()


def _load_from_yaml_file(settings):
-   """Reads config from YAML file and returns a dict."""
+   """Read config from YAML file and return a dict."""
    config_path = settings.paths.config_file
    try:
-       with open(config_path, "r", encoding="utf-8") as file:
-           config = yaml.safe_load(file) or {}
-           return config
+       with Path(config_path).open(encoding="utf-8") as file:
+           return yaml.safe_load(file) or {}
    except yaml.YAMLError as e:
        logger.error("Error reading YAML file: %s", e)
        return {}
@@ -1,14 +1,21 @@
import inspect

from src.utils.log_setup import logger


def validate_data_types(cls, default_cls=None):
-   """Ensures all attributes match expected types dynamically.
+   """
+   Ensure all attributes match expected types dynamically.

    If default_cls is provided, the default key is taken from this class rather than the own class.
    If the attribute doesn't exist in `default_cls`, fall back to `cls.__class__`.

    """

+   def _unhandled_conversion():
+       error = f"Unhandled type conversion for '{attr}': {expected_type}"
+       raise TypeError(error)

    annotations = inspect.get_annotations(cls.__class__)  # Extract type hints

    for attr, expected_type in annotations.items():
@@ -17,7 +24,7 @@ def validate_data_types(cls, default_cls=None):

        value = getattr(cls, attr)
        default_source = default_cls if default_cls and hasattr(default_cls, attr) else cls.__class__
        default_value = getattr(default_source, attr, None)

        if value == default_value:
            continue
@@ -37,22 +44,20 @@ def validate_data_types(cls, default_cls=None):
            elif expected_type is dict:
                value = convert_to_dict(value)
            else:
-               raise TypeError(f"Unhandled type conversion for '{attr}': {expected_type}")
-       except Exception as e:
+               _unhandled_conversion()
+       except Exception as e:  # noqa: BLE001
            logger.error(
                f"❗️ Invalid type for '{attr}': Expected {expected_type.__name__}, but got {type(value).__name__}. "
-               f"Error: {e}. Using default value: {default_value}"
+               f"Error: {e}. Using default value: {default_value}",
            )
            value = default_value

        setattr(cls, attr, value)

# --- Helper Functions ---
def convert_to_bool(raw_value):
-   """Converts strings like 'yes', 'no', 'true', 'false' into boolean values."""
+   """Convert strings like 'yes', 'no', 'true', 'false' into boolean values."""
    if isinstance(raw_value, bool):
        return raw_value

@@ -64,28 +69,29 @@ def convert_to_bool(raw_value):

    if raw_value in true_values:
        return True
-   elif raw_value in false_values:
+   if raw_value in false_values:
        return False
-   else:
-       raise ValueError(f"Invalid boolean value: '{raw_value}'")
+   error = f"Invalid boolean value: '{raw_value}'"
+   raise ValueError(error)
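A standalone sketch of the boolean coercion above — the accepted value sets are assumptions here, since the hunk elides the `true_values`/`false_values` definitions:

```python
def to_bool(raw):
    """Coerce common truthy/falsy strings to bool; real booleans pass through."""
    if isinstance(raw, bool):
        return raw
    value = str(raw).strip().lower()
    true_values = {"true", "yes", "1", "on"}     # assumed set
    false_values = {"false", "no", "0", "off"}   # assumed set
    if value in true_values:
        return True
    if value in false_values:
        return False
    raise ValueError(f"Invalid boolean value: {raw!r}")
```

Raising on unrecognized input (instead of defaulting) is what lets the caller fall back to the job's documented default with a logged warning.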

def convert_to_str(raw_value):
-   """Ensures a string and trims whitespace."""
+   """Ensure a string and trim whitespace."""
    if isinstance(raw_value, str):
        return raw_value.strip()
    return str(raw_value).strip()


def convert_to_list(raw_value):
-   """Ensures a value is a list."""
+   """Ensure a value is a list."""
    if isinstance(raw_value, list):
        return [convert_to_str(item) for item in raw_value]
    return [convert_to_str(raw_value)]  # Wrap single values in a list


def convert_to_dict(raw_value):
-   """Ensures a value is a dictionary."""
+   """Ensure a value is a dictionary."""
    if isinstance(raw_value, dict):
        return {convert_to_str(k): v for k, v in raw_value.items()}
-   raise TypeError(f"Expected dict but got {type(raw_value).__name__}")
+   error = f"Expected dict but got {type(raw_value).__name__}"
+   raise TypeError(error)
@@ -1,14 +1,14 @@
-from src.utils.log_setup import configure_logging
from src.settings._constants import Envs, MinVersions, Paths
-# from src.settings._migrate_legacy import migrate_legacy
-from src.settings._general import General
-from src.settings._jobs import Jobs
from src.settings._download_clients import DownloadClients
+from src.settings._general import General
from src.settings._instances import Instances
+from src.settings._jobs import Jobs
from src.settings._user_config import get_user_config
+from src.utils.log_setup import configure_logging


class Settings:

    min_versions = MinVersions()
    paths = Paths()

@@ -21,7 +21,6 @@ class Settings:
        self.instances = Instances(config, self)
        configure_logging(self)

    def __repr__(self):
        sections = [
            ("ENVIRONMENT SETTINGS", "envs"),
@@ -30,11 +29,8 @@ class Settings:
            ("JOB SETTINGS", "jobs"),
            ("INSTANCE SETTINGS", "instances"),
            ("DOWNLOAD CLIENT SETTINGS", "download_clients"),
        ]
-       messages = []
-       messages.append("🛠️ Decluttarr - Settings 🛠️")
-       messages.append("-"*80)
-       # messages.append("")
+       messages = ["🛠️ Decluttarr - Settings 🛠️", "-" * 80]
        for title, attr_name in sections:
            section = getattr(self, attr_name, None)
            section_content = section.config_as_yaml()
@@ -44,11 +40,11 @@ class Settings:
            elif section_content != "{}":
                messages.append(self._format_section_title(title))
                messages.append(section_content)
                messages.append("")  # Extra linebreak after section
        return "\n".join(messages)

-   def _format_section_title(self, name, border_length=50, symbol="="):
+   @staticmethod
+   def _format_section_title(name, border_length=50, symbol="=") -> str:
        """Format section title with centered name and hash borders."""
        padding = max(border_length - len(name) - 2, 0)  # 2 for the spaces around the name
        left_hashes = right_hashes = padding // 2
||||
|
||||
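The hunk above converts `_format_section_title` to a `@staticmethod`; the diff is cut off before the border is assembled, so here is a minimal runnable sketch of the centering logic. The trailing-border handling (giving the odd leftover character to the right side) is an assumption, not taken from the source:

```python
def format_section_title(name: str, border_length: int = 50, symbol: str = "=") -> str:
    # Center the name between runs of the border symbol, e.g. "====== JOBS ======"
    padding = max(border_length - len(name) - 2, 0)  # 2 for the spaces around the name
    left = right = padding // 2
    right += padding % 2  # assumed: odd leftover character goes to the right border
    return f"{symbol * left} {name} {symbol * right}"

title = format_section_title("JOBS")
```

For even and odd name lengths alike, the result is exactly `border_length` characters wide.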
src/utils/__init__.py (new, empty file)
@@ -1,19 +1,19 @@
import asyncio
import sys
import time
import asyncio

import requests

from src.utils.log_setup import logger


async def make_request(
    method: str, endpoint: str, settings, timeout: int = 15, log_error = True, **kwargs
    method: str, endpoint: str, settings, timeout: int = 15, *, log_error=True, **kwargs,
) -> requests.Response:
    """
    A utility function to make HTTP requests (GET, POST, DELETE, PUT).
    """
    """Make HTTP requests (GET, POST, DELETE, PUT)."""
    try:
        logger.debug(
            f"common.py/make_request: Making {method.upper()} request to {endpoint} with params={kwargs.get('params')} and headers={kwargs.get('headers')}"
            f"common.py/make_request: Making {method.upper()} request to {endpoint} with params={kwargs.get('params')} and headers={kwargs.get('headers')}",
        )
        # Make the request using the method passed (get, post, etc.)
        response = await asyncio.to_thread(
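The hunk above makes `log_error` keyword-only (the bare `*` in the signature, per ruff's boolean-trap rules) and runs the blocking `requests` call via `asyncio.to_thread`. A minimal sketch of the same pattern, with a stand-in helper in place of the real HTTP call (the names here are illustrative, not decluttarr's):

```python
import asyncio

def blocking_get(url: str) -> str:
    # Stand-in for a blocking HTTP call such as requests.request(...)
    return f"GET {url}"

async def make_request(method: str, endpoint: str, timeout: int = 15, *, log_error: bool = True) -> str:
    # The bare * forces log_error to be passed by keyword;
    # asyncio.to_thread runs the blocking call off the event loop.
    return await asyncio.to_thread(blocking_get, endpoint)

result = asyncio.run(make_request("GET", "https://example.org"))
```

Callers must now write `make_request(..., log_error=False)`; passing the flag positionally raises `TypeError`.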
@@ -1,6 +1,6 @@
import logging
import os
from logging.handlers import RotatingFileHandler
from pathlib import Path

# Track added logging levels
_added_levels = {}
@@ -9,7 +9,8 @@ _added_levels = {}
def add_logging_level(level_name, level_num):
    """Dynamically add a custom logging level."""
    if level_name in _added_levels or level_num in _added_levels.values():
        raise ValueError(f"Logging level '{level_name}' or number '{level_num}' already exists.")
        error = f"Logging level '{level_name}' or number '{level_num}' already exists."
        raise ValueError(error)

    logging.addLevelName(level_num, level_name.upper())

@@ -26,41 +27,41 @@ def add_logging_level(level_name, level_num):
add_logging_level("TRACE", 5)
add_logging_level("VERBOSE", 15)


# Configure the default logger
logger = logging.getLogger(__name__)

def set_handler_format(log_handler, long_format = True):

def set_handler_format(log_handler, *, long_format=True):
    if long_format:
        target_format = logging.Formatter("%(asctime)s | %(levelname)-7s | %(message)s", "%Y-%m-%d %H:%M:%S")
    else:
        target_format = logging.Formatter("%(levelname)-7s | %(message)s")
    log_handler.setFormatter(target_format)


# Default console handler
console_handler = logging.StreamHandler()
set_handler_format(console_handler, long_format = True)
set_handler_format(console_handler, long_format=True)
logger.addHandler(console_handler)
logger.setLevel(logging.INFO)


def configure_logging(settings):
    """Add a file handler and adjust log levels for all handlers."""
    if settings.envs.in_docker:
        set_handler_format(console_handler, long_format = False)
        set_handler_format(console_handler, long_format=False)

    log_file = settings.paths.logs
    log_dir = os.path.dirname(log_file)
    os.makedirs(log_dir, exist_ok=True)
    log_dir = Path(log_file).parent
    Path(log_dir).mkdir(exist_ok=True, parents=True)

    # File handler
    file_handler = RotatingFileHandler(log_file, maxBytes=50 * 1024 * 1024, backupCount=2)
    set_handler_format(file_handler, long_format = True)
    set_handler_format(file_handler, long_format=True)
    logger.addHandler(file_handler)

    # Update log level for all handlers
    log_level = getattr(logging, settings.general.log_level.upper(), logging.INFO)
    for handler in logger.handlers:
        handler.setLevel(log_level)
        logger.setLevel(log_level)
    logger.setLevel(log_level)
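The first hunk above replaces the f-string inside `raise` with a pre-built message (ruff's EM rules) and registers custom TRACE and VERBOSE levels. A runnable sketch of just the registration helper as it appears in the hunk (the full decluttarr helper presumably also attaches `logger.trace(...)`-style methods, which are not shown in the diff):

```python
import logging

_added_levels = {}

def add_logging_level(level_name, level_num):
    """Register a custom logging level, refusing duplicate names or numbers."""
    if level_name in _added_levels or level_num in _added_levels.values():
        error = f"Logging level '{level_name}' or number '{level_num}' already exists."
        raise ValueError(error)
    logging.addLevelName(level_num, level_name.upper())
    _added_levels[level_name] = level_num

add_logging_level("TRACE", 5)
add_logging_level("VERBOSE", 15)
```

After registration, `logging.getLevelName(5)` resolves to `"TRACE"`, so the custom levels show up in formatted log lines.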
@@ -1,5 +1,5 @@
from src.utils.log_setup import logger
from src.utils.common import make_request
from src.utils.log_setup import logger


class QueueManager:
@@ -9,7 +9,8 @@ class QueueManager:

    async def get_queue_items(self, queue_scope):
        """
        Retrieves queue items based on the scope.
        Retrieve queue items based on the scope.

        queue_scope:
        "normal" = normal queue
        "orphans" = orphaned queue items (in full queue but not in normal queue)
@@ -24,10 +25,11 @@ class QueueManager:
        elif queue_scope == "full":
            queue_items = await self._get_queue(full_queue=True)
        else:
            raise ValueError(f"Invalid queue_scope: {queue_scope}")
            error = f"Invalid queue_scope: {queue_scope}"
            raise ValueError(error)
        return queue_items

    async def _get_queue(self, full_queue=False):
    async def _get_queue(self, *, full_queue=False):
        # Step 1: Refresh the queue (now internal)
        await self._refresh_queue()

@@ -40,15 +42,14 @@ class QueueManager:
        # Step 4: Filter the queue based on delayed items and ignored download clients
        queue = self._ignore_delayed_queue_items(queue)
        queue = self._filter_out_ignored_download_clients(queue)
        queue = self._add_detail_item_key(queue)
        return queue
        return self._add_detail_item_key(queue)

    def _add_detail_item_key(self, queue):
        """Normalizes episodeID, bookID, etc so it can just be called by 'detail_item_id'"""
        """Normalize episodeID, bookID, etc. so it can just be called by 'detail_item_id'."""
        for items in queue:
            items["detail_item_id"] = items.get(self.arr.detail_item_id_key)
            items["detail_item_id"] = items.get(self.arr.detail_item_id_key)
        return queue


    async def _refresh_queue(self):
        # Refresh the queue by making the POST request using an external make_request function
        await make_request(
@@ -85,7 +86,7 @@ class QueueManager:
        records = (
            await make_request(
                method="GET",
                endpoint=f"{self.arr.api_url}/queue",
                endpoint=f"{self.arr.api_url}/queue",
                settings=self.settings,
                params=params,
                headers={"X-Api-Key": self.arr.api_key},
@@ -93,10 +94,11 @@ class QueueManager:
        ).json()
        return records["records"]

    def _ignore_delayed_queue_items(self, queue):
    @staticmethod
    def _ignore_delayed_queue_items(queue) -> list | None:
        # Ignores delayed queue items
        if queue is None:
            return queue
            return None
        seen_combinations = set()
        filtered_queue = []
        for queue_item in queue:
@@ -135,7 +137,8 @@ class QueueManager:

        return filtered_queue

    def format_queue(self, queue_items):
    @staticmethod
    def format_queue(queue_items) -> list | str:
        if not queue_items:
            return "empty"

@@ -158,7 +161,8 @@ class QueueManager:

        return list(formatted_dict.values())

    def group_by_download_id(self, queue_items):
    @staticmethod
    def group_by_download_id(queue_items) -> dict:
        # Groups queue items by download ID and returns a dict where download ID is the key, and value is the list of queue items belonging to that downloadID
        # Queue item is limited to certain keys
        retain_keys = {
@@ -184,7 +188,7 @@ class QueueManager:

        # Filter and add default values if keys are missing
        filtered_item = {
            key: queue_item.get(key, retain_keys.get(key, None))
            key: queue_item.get(key, retain_keys.get(key))
            for key in retain_keys
        }
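The `group_by_download_id` hunk above turns the grouper into a `@staticmethod` that returns a dict keyed by download ID. A minimal sketch of that grouping shape, without decluttarr's `retain_keys` filtering (the sample queue items here are made up):

```python
def group_by_download_id(queue_items):
    # Map each downloadId to the list of queue items that share it
    grouped = {}
    for item in queue_items:
        grouped.setdefault(item["downloadId"], []).append(item)
    return grouped

queue = [
    {"downloadId": "A", "id": 1},
    {"downloadId": "A", "id": 2},
    {"downloadId": "B", "id": 3},
]
groups = group_by_download_id(queue)
```

Items 1 and 2 end up under key `"A"`, item 3 under `"B"`, which is what lets later jobs treat a multi-episode torrent as one download.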
@@ -1,26 +1,24 @@
import warnings

from src.utils.log_setup import logger


def show_welcome(settings):
    messages = []
    messages = ["🎉🎉🎉 Decluttarr - Application Started! 🎉🎉🎉",
        "-" * 80,
        "⭐️ Like this app?",
        "Thanks for giving it a ⭐️ on GitHub!",
        "https://github.com/ManiMatter/decluttarr/"]

    # Show welcome message
    messages.append("🎉🎉🎉 Decluttarr - Application Started! 🎉🎉🎉")
    messages.append("-"*80)
    # messages.append("")
    messages.append("⭐️ Like this app?")
    messages.append("Thanks for giving it a ⭐️ on GitHub!")
    messages.append("https://github.com/ManiMatter/decluttarr/")

    # Show info level tip
    if settings.general.log_level == "INFO":
        # messages.append("")
        messages.append("")
        messages.append("💡 Tip: More logs?")
        messages.append("If you want to know more about what's going on, switch log level to 'VERBOSE'")

    # Show bug report tip
    # messages.append("")
    messages.append("")
    messages.append("🐛 Found a bug?")
    messages.append("Before reporting bugs on GitHub, please:")
@@ -30,11 +28,9 @@ def show_welcome(settings):
    messages.append("4) Turn off any features other than the one(s) causing it")
    messages.append("5) Provide the full logs via pastebin on your GitHub issue")
    messages.append("Once submitted, thanks for being responsive and helping debug / re-test")


    # Show test mode tip
    if settings.general.test_run:
        # messages.append("")
        messages.append("")
        messages.append("=================== IMPORTANT ====================")
        messages.append(" ⚠️ ⚠️ ⚠️ TEST MODE IS ACTIVE ⚠️ ⚠️ ⚠️")
@@ -43,7 +39,6 @@ def show_welcome(settings):
    messages.append("==================================================")

    messages.append("")
    # messages.append("-"*80)
    # Log all messages at once
    logger.info("\n".join(messages))
@@ -8,12 +8,12 @@ class WantedManager:

    async def get_wanted_items(self, missing_or_cutoff):
        """
        Retrieves wanted items :
        missing_or_cutoff: Drives whether missing or cutoff items are retrieved
        Retrieve wanted items.

        missing_or_cutoff: Drives whether missing or cutoff items are retrieved
        """
        record_count = await self._get_total_records(missing_or_cutoff)
        missing_or_cutoff = await self._get_arr_records(missing_or_cutoff, record_count)
        return missing_or_cutoff
        return await self._get_arr_records(missing_or_cutoff, record_count)

    async def _get_total_records(self, missing_or_cutoff):
        # Get the total number of records from wanted
@@ -46,9 +46,8 @@ class WantedManager:
        ).json()
        return records["records"]


    async def search_items(self, detail_ids):
        """Search items by detail IDs"""
        """Search items by detail IDs."""
        if isinstance(detail_ids, str):
            detail_ids = [detail_ids]

@@ -62,4 +61,4 @@ class WantedManager:
            settings=self.settings,
            json=json,
            headers={"X-Api-Key": self.arr.api_key},
        )
        )
tests/jobs/__init__.py (new, empty file)
@@ -1,10 +1,12 @@
from unittest.mock import AsyncMock, patch

import pytest

from src.jobs.removal_handler import RemovalHandler


# ---------- Fixtures ----------


@pytest.fixture(name="mock_logger")
def fixture_mock_logger():
    with patch("src.jobs.removal_handler.logger") as mock:
@@ -46,8 +48,8 @@ def fixture_affected_downloads():
                    "status": "paused",
                    "trackedDownloadState": "downloading",
                    "statusMessages": [],
                }
            ]
                },
            ],
    }

@@ -55,7 +57,7 @@ def fixture_affected_downloads():

@pytest.mark.asyncio
@pytest.mark.parametrize(
    "protocol, qb_config, client_impl, is_private, pub_handling, priv_handling, expected",
    ("protocol", "qb_config", "client_impl", "is_private", "pub_handling", "priv_handling", "expected"),
    [
        ("emule", [AsyncMock()], "MyDonkey", None, "remove", "remove", "remove"),
        ("torrent", [], "QBittorrent", None, "remove", "remove", "remove"),
@@ -108,7 +110,7 @@ async def test_remove_downloads(

    if expected == "remove":
        arr.remove_queue_item.assert_awaited_once_with(
            queue_id=item["id"], blocklist=True
            queue_id=item["id"], blocklist=True,
        )
        assert download_id in arr.tracker.deleted
@@ -1,8 +1,11 @@
from unittest.mock import MagicMock, AsyncMock
from pathlib import Path
from unittest.mock import AsyncMock

import pytest

from src.jobs.remove_bad_files import RemoveBadFiles
from tests.jobs.test_utils import removal_job_fix
import os


# Fixture for arr mock
@pytest.fixture(name="arr")
@@ -18,8 +21,7 @@ def fixture_arr():

@pytest.fixture(name="qbit_client")
def fixture_qbit_client():
    qbit_client = AsyncMock()
    return qbit_client
    return AsyncMock()


@pytest.fixture(name="removal_job")
@@ -30,7 +32,7 @@ def fixture_removal_job(arr):


@pytest.mark.parametrize(
    "file_name, expected_result",
    ("file_name", "expected_result"),
    [
        ("file.mp4", False),  # Good extension
        ("file.mkv", False),  # Good extension
@@ -40,17 +42,18 @@ def fixture_removal_job(arr):
    ],
)
def test_is_bad_extension(removal_job, file_name, expected_result):
    """This test will verify that files with bad extensions are properly identified."""
    """Verify that files with bad extensions are properly identified."""
    # Act
    file = {"name": file_name}  # Simulating a file object
    file["file_extension"] = os.path.splitext(file["name"])[1].lower()
    file["file_extension"] = Path(file["name"]).suffix.lower()
    result = removal_job._is_bad_extension(file)  # pylint: disable=W0212

    # Assert
    assert result == expected_result


@pytest.mark.parametrize(
    "name, size_bytes, expected_result",
    ("name", "size_bytes", "expected_result"),
    [
        ("My.Movie.2024.2160/Subfolder/sample.mkv", 100 * 1024, True),  # 100 KB, 'sample' keyword in filename
        ("My.Movie.2024.2160/Subfolder/Sample.mkv", 100 * 1024, True),  # 100 KB, case-insensitive match
@@ -75,16 +78,16 @@ def test_contains_bad_keyword(removal_job, name, size_bytes, expected_result):


@pytest.mark.parametrize(
    "file, is_incomplete_partial",
    ("file", "is_incomplete_partial"),
    [
        ({"availability": 1, "progress": 1}, False),  # Fully available
        ({"availability": 0.5, "progress": 0.5}, True),  # Low availability
        ( {"availability": 0.5, "progress": 1}, False,),  # Downloaded, low availability
        ({"availability": 0.5, "progress": 1}, False),  # Downloaded, low availability
        ({"availability": 0.9, "progress": 0.8}, True),  # Low availability
    ],
)
def test_is_complete_partial(removal_job, file, is_incomplete_partial):
    """This test checks if the availability logic works correctly."""
    """Check if the availability logic works correctly."""
    # Act
    result = removal_job._is_complete_partial(file)  # pylint: disable=W0212

@@ -93,7 +96,7 @@ def test_is_complete_partial(removal_job, file, is_incomplete_partial):


@pytest.mark.parametrize(
    "qbit_item, expected_processed",
    ("qbit_item", "expected_processed"),
    [
        # Case 1: Torrent without metadata
        (
@@ -185,7 +188,7 @@ async def test_get_items_to_process(qbit_item, expected_processed, removal_job,

    # Act
    processed_items = removal_job._get_items_to_process(
        [qbit_item]
        [qbit_item],
    )  # pylint: disable=W0212

    # Extract the hash from the processed items
@@ -199,7 +202,7 @@ async def test_get_items_to_process(qbit_item, expected_processed, removal_job,


@pytest.mark.parametrize(
    "file, should_be_stoppable",
    ("file", "should_be_stoppable"),
    [
        # Stopped files - No need to stop again
        (
@@ -222,7 +225,7 @@ async def test_get_items_to_process(qbit_item, expected_processed, removal_job,
            },
            False,
        ),
        # Bad file extension – Always stop (if not alredy stopped)
        # Bad file extension - Always stop (if not alredy stopped)
        (
            {
                "index": 0,
@@ -253,7 +256,7 @@ async def test_get_items_to_process(qbit_item, expected_processed, removal_job,
            },
            True,
        ),
        # Good file extension – Stop only if availability < 1 **and** progress < 1
        # Good file extension - Stop only if availability < 1 **and** progress < 1
        (
            {
                "index": 0,
@@ -297,8 +300,8 @@ async def test_get_items_to_process(qbit_item, expected_processed, removal_job,
    ],
)
def test_get_stoppable_file_single(removal_job, file, should_be_stoppable):
    # Add file_extension based on the file name
    file["file_extension"] = os.path.splitext(file["name"])[1].lower()
    # Add file_extension based on the file name
    file["file_extension"] = Path(file["name"]).suffix.lower()
    stoppable = removal_job._get_stoppable_files([file])  # pylint: disable=W0212
    is_stoppable = bool(stoppable)
    assert is_stoppable == should_be_stoppable
@@ -316,7 +319,7 @@ def fixture_torrent_files():


@pytest.mark.parametrize(
    "stoppable_indexes, all_files_stopped",
    ("stoppable_indexes", "all_files_stopped"),
    [
        ([0], False),  # Case 1: Nothing changes (stopping an already stopped file)
        ([2], False),  # Case 2: One additional file stopped
@@ -325,7 +328,7 @@ def fixture_torrent_files():
    ],
)
def test_all_files_stopped(
    removal_job, torrent_files, stoppable_indexes, all_files_stopped
    removal_job, torrent_files, stoppable_indexes, all_files_stopped,
):
    # Create stoppable_files using only the index for each file and a dummy reason
    stoppable_files = [({"index": idx}, "some reason") for idx in stoppable_indexes]
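Several hunks above swap `os.path.splitext(...)[1]` for `Path(...).suffix` (ruff's pathlib rules). For plain file-extension extraction the two are equivalent, which is why the tests keep passing:

```python
import os
from pathlib import Path

name = "My.Movie.2024/Sample.MKV"
old_ext = os.path.splitext(name)[1].lower()  # os.path style, as in the old tests
new_ext = Path(name).suffix.lower()          # pathlib style, as in the new tests
```

Both yield `".mkv"` here; `Path.suffix` simply reads as the more idiomatic spelling.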
@@ -1,36 +1,38 @@
import pytest

from src.jobs.remove_failed_downloads import RemoveFailedDownloads
from tests.jobs.test_utils import removal_job_fix


# Test to check if items with "failed" status are included in affected items with parameterized data
@pytest.mark.asyncio
@pytest.mark.parametrize(
    "queue_data, expected_download_ids",
    ("queue_data", "expected_download_ids"),
    [
        (
            [
                {"downloadId": "1", "status": "failed"},  # Item with failed status
                {"downloadId": "2", "status": "completed"},  # Item with completed status
                {"downloadId": "3"}  # No status field
                {"downloadId": "3"},  # No status field
            ],
            ["1"]  # Only the failed item should be affected
            ["1"],  # Only the failed item should be affected
        ),
        (
            [
                {"downloadId": "1", "status": "completed"},  # Item with completed status
                {"downloadId": "2", "status": "completed"},
                {"downloadId": "3", "status": "completed"}
                {"downloadId": "3", "status": "completed"},
            ],
            []  # No failed items, so no affected items
            [],  # No failed items, so no affected items
        ),
        (
            [
                {"downloadId": "1", "status": "failed"},  # Item with failed status
                {"downloadId": "2", "status": "failed"}
                {"downloadId": "2", "status": "failed"},
            ],
            ["1", "2"]  # Both failed items should be affected
        )
    ]
            ["1", "2"],  # Both failed items should be affected
        ),
    ],
)
async def test_find_affected_items(queue_data, expected_download_ids):
    # Arrange
@@ -1,11 +1,14 @@
from unittest.mock import MagicMock

import pytest

from src.jobs.remove_failed_imports import RemoveFailedImports
from tests.jobs.test_utils import removal_job_fix


@pytest.mark.asyncio
@pytest.mark.parametrize(
    "item, expected_result",
    ("item", "expected_result"),
    [
        # Valid item scenario
        (
@@ -15,7 +18,7 @@ from tests.jobs.test_utils import removal_job_fix
                "trackedDownloadState": "importPending",
                "statusMessages": [{"messages": ["Import failed"]}],
            },
            True
            True,
        ),
        # Invalid item with wrong status
        (
@@ -25,7 +28,7 @@ from tests.jobs.test_utils import removal_job_fix
                "trackedDownloadState": "importPending",
                "statusMessages": [{"messages": ["Import failed"]}],
            },
            False
            False,
        ),
        # Invalid item with missing required fields
        (
@@ -34,7 +37,7 @@ from tests.jobs.test_utils import removal_job_fix
                "trackedDownloadState": "importPending",
                "statusMessages": [{"messages": ["Import failed"]}],
            },
            False
            False,
        ),
        # Invalid item with wrong trackedDownloadStatus
        (
@@ -44,7 +47,7 @@ from tests.jobs.test_utils import removal_job_fix
                "trackedDownloadState": "importPending",
                "statusMessages": [{"messages": ["Import failed"]}],
            },
            False
            False,
        ),
        # Invalid item with wrong trackedDownloadState
        (
@@ -54,23 +57,21 @@ from tests.jobs.test_utils import removal_job_fix
                "trackedDownloadState": "downloaded",
                "statusMessages": [{"messages": ["Import failed"]}],
            },
            False
            False,
        ),
    ]
    ],
)
async def test_is_valid_item(item, expected_result):
    #Fix
    # Fix
    removal_job = removal_job_fix(RemoveFailedImports)

    # Act
    result = removal_job._is_valid_item(item) # pylint: disable=W0212
    result = removal_job._is_valid_item(item)  # pylint: disable=W0212

    # Assert
    assert result == expected_result


# Fixture with 3 valid items with different messages and downloadId
@pytest.fixture(name="queue_data")
def fixture_queue_data():
@@ -95,14 +96,14 @@ def fixture_queue_data():
            "trackedDownloadStatus": "warning",
            "trackedDownloadState": "importBlocked",
            "statusMessages": [{"messages": ["Import blocked due to issue C"]}],
        }
        },
    ]


# Test the different patterns and check if the right downloads are selected
@pytest.mark.asyncio
@pytest.mark.parametrize(
    "patterns, expected_download_ids, removal_messages_expected",
    ("patterns", "expected_download_ids", "removal_messages_expected"),
    [
        (["*"], ["1", "2", "3"], True),  # Match everything, expect removal messages
        (["Import failed*"], ["1", "2"], True),  # Match "Import failed", expect removal messages
@@ -124,18 +125,17 @@ async def test_find_affected_items_with_patterns(queue_data, patterns, expected_

    # Assert
    assert isinstance(affected_items, list)

    # Check if the correct downloadIds are in the affected items
    affected_download_ids = [item["downloadId"] for item in affected_items]

    # Assert the affected download IDs are as expected
    assert sorted(affected_download_ids) == sorted(expected_download_ids)

    # Check if removal messages are expected and present
    for item in affected_items:
        if removal_messages_expected:
            assert "removal_messages" in item, f"Expected removal messages for item {item['downloadId']}"
            assert len(item["removal_messages"]) > 0, f"Expected non-empty removal messages for item {item['downloadId']}"
        else:
            assert "removal_messages" not in item, f"Did not expect removal messages for item {item['downloadId']}"
            assert "removal_messages" not in item, f"Did not expect removal messages for item {item['downloadId']}"
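The pattern tests above match removal messages against globs such as `"Import failed*"` and `"*"`. Python's standard `fnmatch` provides exactly this glob semantics; a sketch of how such matching can work (illustrative only, decluttarr's actual matcher may differ):

```python
from fnmatch import fnmatch

messages = [
    "Import failed due to issue A",
    "Import blocked due to issue C",
]
patterns = ["Import failed*"]

# A message is affected if any configured pattern glob-matches it
matched = [m for m in messages if any(fnmatch(m, p) for p in patterns)]
```

With the pattern `["*"]` every message would match, mirroring the first parametrized case.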
@@ -1,43 +1,45 @@
import pytest

from src.jobs.remove_metadata_missing import RemoveMetadataMissing
from tests.jobs.test_utils import removal_job_fix


# Test to check if items with the specific error message are included in affected items with parameterized data
@pytest.mark.asyncio
@pytest.mark.parametrize(
    "queue_data, expected_download_ids",
    ("queue_data", "expected_download_ids"),
    [
        (
            [
                {"downloadId": "1", "status": "queued", "errorMessage": "qBittorrent is downloading metadata"},  # Valid item
                {"downloadId": "2", "status": "completed", "errorMessage": "qBittorrent is downloading metadata"},  # Wrong status
                {"downloadId": "3", "status": "queued", "errorMessage": "Some other error"}  # Incorrect errorMessage
                {"downloadId": "3", "status": "queued", "errorMessage": "Some other error"},  # Incorrect errorMessage
            ],
            ["1"]  # Only the item with "queued" status and the correct errorMessage should be affected
            ["1"],  # Only the item with "queued" status and the correct errorMessage should be affected
        ),
        (
            [
                {"downloadId": "1", "status": "queued", "errorMessage": "Some other error"},  # Incorrect errorMessage
                {"downloadId": "2", "status": "completed", "errorMessage": "qBittorrent is downloading metadata"},  # Wrong status
                {"downloadId": "3", "status": "queued", "errorMessage": "qBittorrent is downloading metadata"}  # Correct item
                {"downloadId": "3", "status": "queued", "errorMessage": "qBittorrent is downloading metadata"},  # Correct item
            ],
            ["3"]  # Only the item with "queued" status and the correct errorMessage should be affected
            ["3"],  # Only the item with "queued" status and the correct errorMessage should be affected
        ),
        (
            [
                {"downloadId": "1", "status": "queued", "errorMessage": "qBittorrent is downloading metadata"},  # Valid item
                {"downloadId": "2", "status": "queued", "errorMessage": "qBittorrent is downloading metadata"}  # Another valid item
                {"downloadId": "2", "status": "queued", "errorMessage": "qBittorrent is downloading metadata"},  # Another valid item
            ],
            ["1", "2"]  # Both items match the condition
            ["1", "2"],  # Both items match the condition
        ),
        (
            [
                {"downloadId": "1", "status": "completed", "errorMessage": "qBittorrent is downloading metadata"},  # Wrong status
                {"downloadId": "2", "status": "queued", "errorMessage": "Some other error"}  # Incorrect errorMessage
                {"downloadId": "2", "status": "queued", "errorMessage": "Some other error"},  # Incorrect errorMessage
            ],
            []  # No items match the condition
        )
    ]
            [],  # No items match the condition
        ),
    ],
)
async def test_find_affected_items(queue_data, expected_download_ids):
    # Arrange
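All parametrized cases above reduce to one filter: keep items whose status is `"queued"` and whose error message flags missing metadata. A sketch of that predicate (illustrative, not decluttarr's actual implementation):

```python
METADATA_ERROR = "qBittorrent is downloading metadata"

def find_affected_items(queue):
    # Keep only queued items whose error message flags missing metadata
    return [
        item for item in queue
        if item.get("status") == "queued" and item.get("errorMessage") == METADATA_ERROR
    ]

queue = [
    {"downloadId": "1", "status": "queued", "errorMessage": METADATA_ERROR},
    {"downloadId": "2", "status": "completed", "errorMessage": METADATA_ERROR},
    {"downloadId": "3", "status": "queued", "errorMessage": "Some other error"},
]
affected = find_affected_items(queue)
```

Using `.get()` keeps the filter safe for items that lack a `status` or `errorMessage` key entirely.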
@@ -1,10 +1,12 @@
import pytest

from src.jobs.remove_missing_files import RemoveMissingFiles
from tests.jobs.test_utils import removal_job_fix


@pytest.mark.asyncio
@pytest.mark.parametrize(
    "queue_data, expected_download_ids",
    ("queue_data", "expected_download_ids"),
    [
        (
            [  # valid failed torrent (warning + matching errorMessage)
@@ -12,13 +14,13 @@ from tests.jobs.test_utils import removal_job_fix
                {"downloadId": "2", "status": "warning", "errorMessage": "The download is missing files"},
                {"downloadId": "3", "status": "warning", "errorMessage": "qBittorrent is reporting missing files"},
            ],
            ["1", "2", "3"]
            ["1", "2", "3"],
        ),
        (
            [  # wrong status for errorMessage, should be ignored
                {"downloadId": "1", "status": "failed", "errorMessage": "The download is missing files"},
            ],
            []
            [],
        ),
        (
            [  # valid "completed" with matching statusMessage
@@ -26,18 +28,18 @@ from tests.jobs.test_utils import removal_job_fix
                    "downloadId": "1",
                    "status": "completed",
                    "statusMessages": [
                        {"messages": ["No files found are eligible for import in /some/path"]}
                        {"messages": ["No files found are eligible for import in /some/path"]},
                    ],
                },
                {
                    "downloadId": "2",
                    "status": "completed",
                    "statusMessages": [
                        {"messages": ["Everything looks good!"]}
                        {"messages": ["Everything looks good!"]},
                    ],
                },
            ],
            ["1"]
            ["1"],
        ),
        (
            [  # No statusMessages key or irrelevant messages
@@ -45,10 +47,10 @@ from tests.jobs.test_utils import removal_job_fix
                {
                    "downloadId": "2",
                    "status": "completed",
                    "statusMessages": [{"messages": ["Other message"]}]
                    "statusMessages": [{"messages": ["Other message"]}],
                },
            ],
            []
            [],
        ),
        (
            [  # Mixed: one matching warning + one matching statusMessage
@@ -56,13 +58,13 @@ from tests.jobs.test_utils import removal_job_fix
                {
                    "downloadId": "2",
                    "status": "completed",
                    "statusMessages": [{"messages": ["No files found are eligible for import in foo"]}]
                    "statusMessages": [{"messages": ["No files found are eligible for import in foo"]}],
                },
                {"downloadId": "3", "status": "completed"},
            ],
            ["1", "2"]
            ["1", "2"],
        ),
    ]
    ],
)
async def test_find_affected_items(queue_data, expected_download_ids):
    # Arrange
@@ -1,7 +1,9 @@
import pytest

from src.jobs.remove_orphans import RemoveOrphans
from tests.jobs.test_utils import removal_job_fix


@pytest.fixture(name="queue_data")
def fixture_queue_data():
    return [
@@ -28,9 +30,10 @@ def fixture_queue_data():
            "status": "paused",
            "trackedDownloadState": "downloading",
            "statusMessages": [],
        }
        },
    ]


@pytest.mark.asyncio
async def test_find_affected_items_returns_queue(queue_data):
    # Fix
@@ -41,6 +44,6 @@ async def test_find_affected_items_returns_queue(queue_data):

    # Assert
    assert isinstance(affected_items, list)
    assert len(affected_items) == 2
    assert len(affected_items) == 2  # noqa: PLR2004
    assert affected_items[0]["downloadId"] == "AABBCC"
    assert affected_items[1]["downloadId"] == "112233"
@@ -1,12 +1,14 @@
from unittest.mock import AsyncMock, MagicMock

import pytest

from src.jobs.remove_slow import RemoveSlow
from tests.jobs.test_utils import removal_job_fix


@pytest.mark.asyncio
@pytest.mark.parametrize(
    "item, expected_result",
    ("item", "expected_result"),
    [
        (
            # Valid: has downloadId, size, sizeleft, and status = "downloading"
@@ -119,17 +121,17 @@ def fixture_arr():

@pytest.mark.asyncio
@pytest.mark.parametrize(
    "min_speed, expected_ids",
    ("min_speed", "expected_ids"),
    [
        (0, []),  # No min download speed; all torrents pass
        (500, ["stuck"]),  # Only stuck and slow are included
        (1000, ["stuck", "slow"]),  # Same as above
        (10000, ["stuck", "slow", "medium"]),  # Only stuck and slow are below 5.0
        (1000000, ["stuck", "slow", "medium", "fast"]),  # Fast torrent included (but not importing)
        (1000000, ["stuck", "slow", "medium", "fast"]),  # Fast torrent included (but not importing)
    ],
)
async def test_find_affected_items_with_varied_speeds(
    slow_queue_data, min_speed, expected_ids, arr
    slow_queue_data, min_speed, expected_ids, arr,
):
    removal_job = removal_job_fix(RemoveSlow, queue_data=slow_queue_data)

@@ -139,12 +141,12 @@ async def test_find_affected_items_with_varied_speeds(
    removal_job.settings = MagicMock()
    removal_job.settings.general.timer = 1  # 1 minute for speed calculation
    removal_job.arr = arr  # Inject the mocked arr object
    removal_job._is_valid_item = MagicMock( return_value=True )  # Mock the _is_valid_item method to always return True  # pylint: disable=W0212
    removal_job._is_valid_item = MagicMock(return_value=True)  # Mock the _is_valid_item method to always return True  # pylint: disable=W0212

    # Inject size and sizeleft into each item in the queue
    for item in slow_queue_data:
        item["size"] = item["total_size"] * 1000000  # Inject total size as 'size'
        item["sizeleft"] = ( item["size"] - item["progress_now"] * 1000000 )  # Calculate sizeleft
        item["sizeleft"] = (item["size"] - item["progress_now"] * 1000000)  # Calculate sizeleft
        item["status"] = "downloading"
        item["title"] = item["downloadId"]

@@ -165,4 +167,4 @@ async def test_find_affected_items_with_varied_speeds(

    # Ensure 'importing' and 'usenet' are never flagged for removal
    assert "importing" not in affected_ids
    assert "usenet" not in affected_ids
    assert "usenet" not in affected_ids
@@ -1,43 +1,45 @@
 import pytest
+
 from src.jobs.remove_stalled import RemoveStalled
 from tests.jobs.test_utils import removal_job_fix
 
 
 # Test to check if items with the specific error message are included in affected items with parameterized data
 @pytest.mark.asyncio
 @pytest.mark.parametrize(
-    "queue_data, expected_download_ids",
+    ("queue_data", "expected_download_ids"),
     [
         (
             [
                 {"downloadId": "1", "status": "warning", "errorMessage": "The download is stalled with no connections"},  # Valid item
                 {"downloadId": "2", "status": "completed", "errorMessage": "The download is stalled with no connections"},  # Wrong status
-                {"downloadId": "3", "status": "warning", "errorMessage": "Some other error"}  # Incorrect errorMessage
+                {"downloadId": "3", "status": "warning", "errorMessage": "Some other error"},  # Incorrect errorMessage
             ],
-            ["1"]  # Only the item with "warning" status and the correct errorMessage should be affected
+            ["1"],  # Only the item with "warning" status and the correct errorMessage should be affected
         ),
         (
             [
                 {"downloadId": "1", "status": "warning", "errorMessage": "Some other error"},  # Incorrect errorMessage
                 {"downloadId": "2", "status": "completed", "errorMessage": "The download is stalled with no connections"},  # Wrong status
-                {"downloadId": "3", "status": "warning", "errorMessage": "The download is stalled with no connections"}  # Correct item
+                {"downloadId": "3", "status": "warning", "errorMessage": "The download is stalled with no connections"},  # Correct item
             ],
-            ["3"]  # Only the item with "warning" status and the correct errorMessage should be affected
+            ["3"],  # Only the item with "warning" status and the correct errorMessage should be affected
         ),
         (
             [
                 {"downloadId": "1", "status": "warning", "errorMessage": "The download is stalled with no connections"},  # Valid item
-                {"downloadId": "2", "status": "warning", "errorMessage": "The download is stalled with no connections"}  # Another valid item
+                {"downloadId": "2", "status": "warning", "errorMessage": "The download is stalled with no connections"},  # Another valid item
             ],
-            ["1", "2"]  # Both items match the condition
+            ["1", "2"],  # Both items match the condition
         ),
         (
             [
                 {"downloadId": "1", "status": "completed", "errorMessage": "The download is stalled with no connections"},  # Wrong status
-                {"downloadId": "2", "status": "warning", "errorMessage": "Some other error"}  # Incorrect errorMessage
+                {"downloadId": "2", "status": "warning", "errorMessage": "Some other error"},  # Incorrect errorMessage
             ],
-            []  # No items match the condition
-        )
-    ]
+            [],  # No items match the condition
+        ),
+    ],
 )
 async def test_find_affected_items(queue_data, expected_download_ids):
     # Arrange
||||
@@ -1,64 +1,68 @@
 from unittest.mock import AsyncMock, MagicMock
+
 import pytest
+
 from src.jobs.remove_unmonitored import RemoveUnmonitored
 from tests.jobs.test_utils import removal_job_fix
 
 
 @pytest.fixture(name="arr")
 def fixture_arr():
     mock = MagicMock()
     mock.is_monitored = AsyncMock()
     return mock
 
 
 @pytest.mark.asyncio
 @pytest.mark.parametrize(
-    "queue_data, monitored_ids, expected_download_ids",
+    ("queue_data", "monitored_ids", "expected_download_ids"),
    [
        # All items monitored -> no affected items
        (
            [
                {"downloadId": "1", "detail_item_id": 101},
-                {"downloadId": "2", "detail_item_id": 102}
+                {"downloadId": "2", "detail_item_id": 102},
            ],
            {101: True, 102: True},
-            []
+            [],
        ),
        # All items unmonitored -> all affected
        (
            [
                {"downloadId": "1", "detail_item_id": 101},
-                {"downloadId": "2", "detail_item_id": 102}
+                {"downloadId": "2", "detail_item_id": 102},
            ],
            {101: False, 102: False},
-            ["1", "2"]
+            ["1", "2"],
        ),
        # One monitored, one not
        (
            [
                {"downloadId": "1", "detail_item_id": 101},
-                {"downloadId": "2", "detail_item_id": 102}
+                {"downloadId": "2", "detail_item_id": 102},
            ],
            {101: True, 102: False},
-            ["2"]
+            ["2"],
        ),
        # Shared downloadId, only one monitored -> not affected
        (
            [
                {"downloadId": "1", "detail_item_id": 101},
-                {"downloadId": "1", "detail_item_id": 102}
+                {"downloadId": "1", "detail_item_id": 102},
            ],
            {101: False, 102: True},
-            []
+            [],
        ),
        # Shared downloadId, none monitored -> affected
        (
            [
                {"downloadId": "1", "detail_item_id": 101},
-                {"downloadId": "1", "detail_item_id": 102}
+                {"downloadId": "1", "detail_item_id": 102},
            ],
            {101: False, 102: False},
-            ["1", "1"]
+            ["1", "1"],
        ),
-    ]
+    ],
 )
 async def test_find_affected_items(queue_data, monitored_ids, expected_download_ids, arr):
     # Patch arr mock with side_effect
@@ -76,4 +80,4 @@ async def test_find_affected_items(queue_data, monitored_ids, expected_download_
     # Assert
     affected_download_ids = [item["downloadId"] for item in affected_items]
     assert affected_download_ids == expected_download_ids, \
-        f"Expected downloadIds {expected_download_ids}, got {affected_download_ids}"
+        f"Expected downloadIds {expected_download_ids}, got {affected_download_ids}"
||||
@@ -1,9 +1,12 @@
-import pytest
 from unittest.mock import MagicMock
 
+import pytest
+
 from src.jobs.strikes_handler import StrikesHandler
 
 
 @pytest.mark.parametrize(
-    "current_hashes, expected_remaining_in_tracker",
+    ("current_hashes", "expected_remaining_in_tracker"),
     [
         ([], []),  # nothing active → all removed
         (["HASH1", "HASH2"], ["HASH1", "HASH2"]),  # both active → none removed
@@ -11,14 +14,14 @@ from src.jobs.strikes_handler import StrikesHandler
     ],
 )
 def test_recover_downloads(current_hashes, expected_remaining_in_tracker):
-    """Tests if tracker correctly removes items (if recovered) and adds new ones"""
+    """Test if tracker correctly removes items (if recovered) and adds new ones."""
     # Fix
     tracker = MagicMock()
     tracker.defective = {
         "remove_stalled": {
             "HASH1": {"title": "Movie-with-one-strike", "strikes": 1},
             "HASH2": {"title": "Movie-with-three-strikes", "strikes": 3},
-        }
+        },
     }
     arr = MagicMock()
     arr.tracker = tracker
@@ -35,12 +38,12 @@ def test_recover_downloads(current_hashes, expected_remaining_in_tracker):
     # ---------- Test ----------
 
 @pytest.mark.parametrize(
-    "strikes_before_increment, max_strikes, expected_in_affected_downloads",
+    ("strikes_before_increment", "max_strikes", "expected_in_affected_downloads"),
     [
         (1, 3, False),  # Below limit → should not be affected
         (2, 3, False),  # Below limit → should not be affected
         (3, 3, True),  # At limit, will be pushed over limit → should be affected
-        (4, 3, True),  # Over limit → should be affected
+        (4, 3, True),  # Over limit → should be affected
     ],
 )
 def test_apply_strikes_and_filter(strikes_before_increment, max_strikes, expected_in_affected_downloads):
@@ -54,7 +57,7 @@ def test_apply_strikes_and_filter(strikes_before_increment, max_strikes, expecte
     handler = StrikesHandler(job_name=job_name, arr=arr, max_strikes=max_strikes)
 
     affected_downloads = {
-        "HASH1": [{"title": "dummy"}]
+        "HASH1": [{"title": "dummy"}],
     }
 
     result = handler._apply_strikes_and_filter(affected_downloads)  # pylint: disable=W0212
||||
@@ -1,33 +1,31 @@
 # test_utils.py
-from unittest.mock import AsyncMock
-from unittest.mock import patch
-from asyncio import Future
+from unittest.mock import AsyncMock, patch
 
 
 def mock_class_init(cls, *args, **kwargs):
-    """
-    Mocks the __init__ method of a class to bypass constructor logic.
-    """
-    with patch.object(cls, '__init__', lambda x, *args, **kwargs: None):
-        instance = cls(*args, **kwargs)
-    return instance
+    """Mock the __init__ method of a class to bypass constructor logic."""
+    with patch.object(cls, "__init__", lambda x, *args, **kwargs: None):
+        return cls(*args, **kwargs)
 
 
 def removal_job_fix(cls, queue_data=None, settings=None):
-    """
-    Mocks the initialization of Jobs and the queue_manager attribute.
+    """Mock the initialization of Jobs and the queue_manager attribute.
 
     Args:
         cls: The class to instantiate (e.g., RemoveOrphans).
         queue_data: The mock data for the get_queue_items method (default: None).
         settings: The mock data for the settings (default: None).
 
     Returns:
-        An instance of the class with a mocked queue_manager.
+        instance: An instance of the class with a mocked queue_manager.
+
     """
     # Mock the initialization of the class (no need to pass arr, settings, job_name)
     instance = mock_class_init(cls, arr=None, settings=settings, job_name="Test Job")
 
     # Mock the queue_manager and its get_queue_items method
     instance.queue_manager = AsyncMock()
     instance.queue_manager.get_queue_items.return_value = queue_data
 
-    return instance
+    return instance
0
tests/settings/__init__.py
Normal file
@@ -1,8 +1,11 @@
 """Test loading the user configuration from environment variables."""
 import os
 import textwrap
+from unittest.mock import patch
 
 import pytest
 import yaml
-from unittest.mock import patch
 
 from src.settings._user_config import _load_from_env
 
 # ---- Pytest Fixtures ----
@@ -13,7 +16,7 @@ timer_value = "10"
 ssl_verification_value = "true"
 
 # List
-ignored_download_clients_yaml = textwrap.dedent("""
+ignored_download_clients_yaml = textwrap.dedent("""
     - emulerr
    - napster
 """).strip()
@@ -21,12 +24,12 @@ ignored_download_clients_yaml = textwrap.dedent("""
 # Job: No settings
 remove_bad_files_yaml = ""  # empty string represents flag enabled with no config
 
-# Job: One Setting
+# Job: One Setting
 remove_slow_yaml = textwrap.dedent("""
    - max_strikes: 3
 """).strip()
 
-# Job: Multiple Setting
+# Job: Multiple Setting
 remove_stalled_yaml = textwrap.dedent("""
    - min_speed: 100
    - max_strikes: 3
@@ -55,6 +58,7 @@ qbit_yaml = textwrap.dedent("""
    password: "qbit_password1"
 """).strip()
 
+
 @pytest.fixture(name="env_vars")
 def fixture_env_vars():
     env = {
@@ -82,7 +86,8 @@ radarr_expected = yaml.safe_load(radarr_yaml)
 sonarr_expected = yaml.safe_load(sonarr_yaml)
 qbit_expected = yaml.safe_load(qbit_yaml)
 
-@pytest.mark.parametrize("section,key,expected", [
+
+@pytest.mark.parametrize(("section", "key", "expected"), [
     ("general", "log_level", log_level_value),
     ("general", "timer", int(timer_value)),
     ("general", "ssl_verification", True),
@@ -94,16 +99,14 @@ qbit_expected = yaml.safe_load(qbit_yaml)
     ("instances", "sonarr", sonarr_expected),
     ("download_clients", "qbittorrent", qbit_expected),
 ])
-def test_env_loading_parametrized(env_vars, section, key, expected):  # pylint: disable=unused-argument
+def test_env_loading_parametrized(env_vars, section, key, expected):  # pylint: disable=unused-argument  # noqa: ARG001
     config = _load_from_env()
     assert section in config
     assert key in config[section]
     value = config[section][key]
 
     if isinstance(expected, list):
         # Compare as lists
         assert value == expected
     else:
         assert value == expected