Code Rewrite to support multi instances

This commit is contained in:
Benjamin Harder
2024-08-20 23:28:20 +02:00
parent 2041337914
commit 1663703186
80 changed files with 4560 additions and 2954 deletions


@@ -1,7 +1,6 @@
__pycache__/
.pytest_cache/
config/config.conf
test*.py
ToDo
.vscode
snip*.py
.notebooks


@@ -121,7 +121,7 @@ jobs:
fi
docker buildx build \
--platform linux/amd64,linux/arm64 -f docker/Dockerfile . \
--platform linux/amd64,linux/arm64 -f docker/dockerfile . \
--progress plain \
-t $IMAGE_NAME:$IMAGE_TAG \
$TAG_LATEST \

.gitignore vendored (6 changes)

@@ -1,7 +1,11 @@
__pycache__/
.pytest_cache/
config/config.conf
config/config.yaml
ToDo
snip*.py
venv
testMagnets.txt
.venv
temp
.notebooks
**/old/


@@ -1,10 +1,34 @@
repos:
- repo: local
hooks:
- id: black
name: black
entry: venv/bin/black
language: system
types: [python]
- repo: local
hooks:
- id: black
name: black
entry: |
bash -c 'BIN=".venv/bin/black";
[ ! -f "$BIN" ] && BIN=".venv/Scripts/black";
$BIN .'
language: system
- id: autoflake
name: autoflake
entry: |
bash -c 'BIN=".venv/bin/autoflake";
[ ! -f "$BIN" ] && BIN=".venv/Scripts/autoflake";
$BIN --in-place --remove-all-unused-imports --remove-unused-variables --recursive --exclude .venv .'
language: system
- id: isort
name: isort
entry: |
bash -c 'BIN=".venv/bin/isort";
[ ! -f "$BIN" ] && BIN=".venv/Scripts/isort";
$BIN -rc .'
language: system
- id: pylint
name: pylint
entry: |
bash -c 'BIN=".venv/bin/pylint";
[ ! -f "$BIN" ] && BIN=".venv/Scripts/pylint";
$BIN .'
language: system


@@ -1,8 +0,0 @@
[pytest]
# log_cli = true
addopts = -q --tb=short -s
log_cli_level = INFO
log_cli_format = %(asctime)s - %(levelname)s - %(name)s - %(message)s
log_cli_date_format = %Y-%m-%d %H:%M:%S
testpaths =
tests

.vscode/extensions.json vendored (new file, 9 lines)

@@ -0,0 +1,9 @@
{
"recommendations": [
"ms-python.python",
"ms-python.pylint",
"ms-python.black-formatter",
"ms-python.isort",
"ms-toolsai.jupyter"
]
}

.vscode/launch.json vendored (new file, 15 lines)

@@ -0,0 +1,15 @@
{
"version": "0.2.0",
"configurations": [
{
"name": "Python: Main from Root",
"type": "python",
"request": "launch",
"program": "${workspaceFolder}/main.py",
"cwd": "${workspaceFolder}",
"env": {
"PYTHONPATH": "${workspaceFolder}"
}
}
]
}


@@ -1,4 +1,10 @@
{
"editor.formatOnSave": true,
"editor.defaultFormatter": "ms-python.black-formatter"
"editor.defaultFormatter": "ms-python.black-formatter",
"python.terminal.activateEnvironment": true,
"python.testing.pytestArgs": [
"tests"
],
"python.testing.unittestEnabled": false,
"python.testing.pytestEnabled": true,
}

README.md (438 changes)

@@ -8,54 +8,74 @@ _Like this app? Thanks for giving it a_ ⭐️
- [Dependencies & Hints & FAQ](#dependencies--hints--faq)
- [Getting started](#getting-started)
- [Explanation of the settings](#explanation-of-the-settings)
- [Credits](#credits)
- [Disclaimer](#disclaimer)
## Overview
Decluttarr keeps the radarr & sonarr & lidarr & readarr & whisparr queue free of stalled / redundant downloads
Decluttarr is a helper tool that works with the *arr-application suite, and automates the clean-up for their download queues, keeping them free of stalled / redundant downloads.
It supports [Radarr](https://github.com/Radarr/Radarr/), [Sonarr](https://github.com/Sonarr/Sonarr/), [Readarr](https://github.com/Readarr/Readarr/), [Lidarr](https://github.com/Lidarr/Lidarr/), and [Whisparr](https://github.com/Whisparr/Whisparr/).
Feature overview:
- Automatically delete downloads that are stuck downloading metadata (& trigger download from another source)
- Automatically delete failed downloads (& trigger download from another source)
- Automatically delete downloads belonging to radarr/sonarr/etc. items that have been deleted in the meantime ('Orphan downloads')
- Automatically delete stalled downloads, after they have been found to be stalled multiple times in a row (& trigger download from another source)
- Automatically delete slow downloads, after they have been found to be slow multiple times in a row (& trigger download from another source)
- Automatically delete downloads belonging to radarr/sonarr/etc. items that are unmonitored
- Automatically delete downloads that failed importing since they are not a format upgrade (i.e. a better version is already present)
- Preventing download of bad files and removing torrents with less than 100% availability (remove_bad_files)
- Removing downloads that failed to download (remove_failed_downloads)
- Removing downloads that failed to import (remove_failed_imports)
- Removing downloads that are stuck downloading metadata (remove_metadata_missing)
- Removing downloads that are missing files (remove_missing_files)
- Removing downloads belonging to movies/series/albums/etc that have been deleted since the download was started (remove_orphans)
- Removing downloads that have repeatedly been found to be slow (remove_slow)
- Removing downloads that are stalled
- Removing downloads belonging to movies/series/albums etc. that have been marked as "unmonitored"
- Periodically searching for better content on movies/series/albums etc. where cutoff has not been reached yet
- Periodically searching for missing content that has not yet been found
Key behaviors:
- Torrents of private trackers and public trackers are handled in different ways (they can be removed, skipped entirely, or tagged as 'obsolete', so that other programs can remove them once the seed targets have been reached)
- If a job removes a download, it will automatically trigger a search for a new download, and remove the (partial) files downloaded thus far
- Certain jobs add removed downloads automatically to the blocklists of the arr-applications, to prevent the same download from being grabbed again
- If certain downloads should not be touched by decluttarr, they can be tagged with a protection-tag in Qbit
- You can test decluttarr, which shows you what decluttarr would do, without it actually doing it (test_run)
- Decluttarr supports multiple instances (for instance, multiple Sonarr instances) as well as multiple qBittorrent instances
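As an illustrative sketch of the multi-instance support (hostnames and keys are placeholders), additional instances of the same *arr and additional qBittorrent instances are simply listed as further entries in the config.yaml:

```yaml
instances:
  sonarr:
    - base_url: "http://sonarr:8989"        # first Sonarr instance
      api_key: "xxxx"
    - base_url: "http://sonarr-anime:8989"  # second Sonarr instance (placeholder hostname)
      api_key: "xxxx"
download_clients:
  qbittorrent:
    - base_url: "http://qbittorrent:8080"
    - base_url: "http://qbittorrent-2:8080" # placeholder hostname
```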
How to run this:
- There are two ways to run decluttarr.
- Either decluttarr is run as a local script (run main.py) and settings are maintained in a config.yaml
- Alternatively, decluttarr is run as a docker image. Here, all settings can either be configured via docker-compose, or the config.yaml is used as well
- Check out [Getting started](#getting-started)
You may run this locally by launching main.py, or by pulling the docker image.
You can find a sample docker-compose.yml [here](#method-1-docker).
## Dependencies & Hints & FAQ
- Use Sonarr v4 & Radarr v5, else certain features may not work correctly
- qBittorrent is recommended but not required. If you don't use qBittorrent, you will experience the following limitations:
- When detecting slow downloads, the speeds provided by the \*arr apps will be used, which is less accurate than what qBittorrent returns when queried directly
- The feature that allows to protect downloads from removal (NO_STALLED_REMOVAL_QBIT_TAG) does not work
- The feature that ignores private trackers does not work
- The feature that allows to protect downloads from removal (protected_tag) does not work
- The feature that distinguishes private and public trackers (private_tracker_handling, public_tracker_handling) does not work
- Removal of bad files and <100% availability (remove_bad_files) does not work
- If you see strange errors such as "found 10 / 3 times", consider turning on the setting "Reject Blocklisted Torrent Hashes While Grabbing". On nightly Radarr/Sonarr/Readarr/Lidarr/Whisparr, the option is located under settings/indexers in the advanced options of each indexer, on Prowlarr it is under settings/apps and then the advanced settings of the respective app
- When broken torrents are removed the files belonging to them are deleted
- Across all removal types: A new download from another source is automatically added by radarr/sonarr/lidarr/readarr/whisparr (if available)
- If you use qBittorrent and none of your torrents get removed and the verbose logs tell that all torrents are protected by the NO_STALLED_REMOVAL_QBIT_TAG even if they are not, you may be using a qBittorrent version that has problems with API calls and you may want to consider switching to a different qBit image (see https://github.com/ManiMatter/decluttarr/issues/56)
- If you use qBittorrent and none of your torrents get removed and the verbose logs tell that all torrents are protected by the protected_tag even if they are not, you may be using a qBittorrent version that has problems with API calls and you may want to consider switching to a different qBit image (see https://github.com/ManiMatter/decluttarr/issues/56)
- Currently, “\*Arr” apps are only supported in English. Refer to issue https://github.com/ManiMatter/decluttarr/issues/132 for more details
- If you experience yaml issues, please check the closed issues. There are different notations, and it may very well be that the issue you found has already been solved in one of the issues. Once you figured your problem, feel free to post your yaml to help others here: https://github.com/ManiMatter/decluttarr/issues/173
- decluttarr only supports single radarr / sonarr instances. If you have multiple instances of those \*arrs, the solution is to run multiple decluttarrs as well
## Getting started
There are two ways to run this:
There are two (and a half) ways to run this:
- As a docker container with docker-compose, whilst leaving the detailed configuration in a separate yaml file (see [Method 1](#method-1-docker-with-config-file)). This is the __recommended setup__ when running in docker
- As a docker container with docker-compose, with all configuration in your docker-compose (can be lengthy) (see [Method 2](#method-2-docker-without-config-file))
- By cloning the repository and running the script locally (see [Method 3](#method-3-running-locally))
- As a docker container with docker-compose
- By cloning the repository and running the script manually
The ways are explained below and there's an explanation for the different settings below that
The methods are explained below, followed by an explanation of the different settings
### Method 1: Docker (with config file) __[recommended setup]__
1. Use the following input for your `docker-compose.yml`
2. Download the config_example.yaml from the config folder (on github) and put it into your mounted folder
3. Rename it to config.yaml and adjust the settings to your needs
4. Run `docker-compose up -d` in the directory where the file is located to create the docker container
### Method 1: Docker
1. Make a `docker-compose.yml` file
2. Use the following as a base for that and tweak the settings to your needs
Note: Always pull the "**latest**" version. The "dev" version is for testing only, and should only be pulled when contributing code or supporting with bug fixes
```yaml
version: "3.3"
@@ -68,78 +88,196 @@ services:
TZ: Europe/Zurich
PUID: 1000
PGID: 1000
## General
# TEST_RUN: True
# SSL_VERIFICATION: False
LOG_LEVEL: INFO
## Features
REMOVE_TIMER: 10
REMOVE_FAILED: True
REMOVE_FAILED_IMPORTS: True
REMOVE_METADATA_MISSING: True
REMOVE_MISSING_FILES: True
REMOVE_ORPHANS: True
REMOVE_SLOW: True
REMOVE_STALLED: True
REMOVE_UNMONITORED: True
RUN_PERIODIC_RESCANS: '
{
"SONARR": {"MISSING": true, "CUTOFF_UNMET": true, "MAX_CONCURRENT_SCANS": 3, "MIN_DAYS_BEFORE_RESCAN": 7},
"RADARR": {"MISSING": true, "CUTOFF_UNMET": true, "MAX_CONCURRENT_SCANS": 3, "MIN_DAYS_BEFORE_RESCAN": 7}
}'
# Feature Settings
PERMITTED_ATTEMPTS: 3
NO_STALLED_REMOVAL_QBIT_TAG: Don't Kill
MIN_DOWNLOAD_SPEED: 100
FAILED_IMPORT_MESSAGE_PATTERNS: '
[
"Not a Custom Format upgrade for existing",
"Not an upgrade for existing"
]'
IGNORED_DOWNLOAD_CLIENTS: ["emulerr"]
## Radarr
RADARR_URL: http://radarr:7878
RADARR_KEY: $RADARR_API_KEY
## Sonarr
SONARR_URL: http://sonarr:8989
SONARR_KEY: $SONARR_API_KEY
## Lidarr
LIDARR_URL: http://lidarr:8686
LIDARR_KEY: $LIDARR_API_KEY
## Readarr
READARR_URL: http://readarr:8787
READARR_KEY: $READARR_API_KEY
## Whisparr
WHISPARR_URL: http://whisparr:6969
WHISPARR_KEY: $WHISPARR_API_KEY
## qBitorrent
QBITTORRENT_URL: http://qbittorrent:8080
# QBITTORRENT_USERNAME: Your name
# QBITTORRENT_PASSWORD: Your password
volumes:
- $DOCKERDIR/appdata/decluttarr/config.yaml:/config/config.yaml
```
3. Run `docker-compose up -d` in the directory where the file is located to create the docker container
Note: Always pull the "**latest**" version. The "dev" version is for testing only, and should only be pulled when contributing code or supporting with bug fixes
### Method 2: Running manually
### Method 2: Docker (without config file)
1. Use the following input for your `docker-compose.yml`
2. Tweak the settings to your needs
3. Remove the things that are commented out (if you don't need them), or uncomment them
4. If you face problems with yaml formats etc, please first check the open and closed issues on github, before opening new ones
5. Run `docker-compose up -d` in the directory where the file is located to create the docker container
Note: Always pull the "**latest**" version. The "dev" version is for testing only, and should only be pulled when contributing code or supporting with bug fixes
```yaml
version: "3.3"
services:
decluttarr:
image: ghcr.io/manimatter/decluttarr:latest
container_name: decluttarr
restart: always
environment:
TZ: Europe/Zurich
PUID: 1000
PGID: 1000
# general settings
GENERAL: >
{
"log_level": "VERBOSE",
"test_run": true,
"timer": 10,
"ignored_download_clients": [],
"ssl_verification": true
// "private_tracker_handling": "obsolete_tag", // remove, skip, obsolete_tag. Optional. Default: remove
// "public_tracker_handling": "remove", // remove, skip, obsolete_tag. Optional. Default: remove
// "obsolete_tag": "Obsolete", // optional. Default: "Obsolete"
// "protected_tag": "Keep" // optional. Default: "Keep"
}
# job defaults
JOB_DEFAULTS: >
{
"max_strikes": 3,
"min_days_between_searches": 7,
"max_concurrent_searches": 3
}
# jobs
JOBS: >
{
"remove_bad_files": {},
"remove_failed_downloads": {},
"remove_failed_imports": {
// "message_patterns": ["*"]
},
"remove_metadata_missing": {
// "max_strikes": 3
},
"remove_missing_files": {},
"remove_orphans": {},
"remove_slow": {
// "min_speed": 100,
// "max_strikes": 3
},
"remove_stalled": {
// "max_strikes": 3
},
"remove_unmonitored": {},
"search_unmet_cutoff_content": {
// "min_days_between_searches": 7,
// "max_concurrent_searches": 3
},
"search_missing_content": {
// "min_days_between_searches": 7,
// "max_concurrent_searches": 3
}
}
# instances
INSTANCES: >
{
"sonarr": [
{ "base_url": "http://sonarr:8989", "api_key": "xxxx" }
],
"radarr": [
{ "base_url": "http://radarr:7878", "api_key": "xxxx" }
],
"readarr": [
{ "base_url": "http://readarr:8787", "api_key": "xxxx" }
],
"lidarr": [
{ "base_url": "http://lidarr:8686", "api_key": "xxxx" }
],
"whisparr": [
{ "base_url": "http://whisparr:6969", "api_key": "xxxx" }
]
}
# download clients
DOWNLOAD_CLIENTS: >
{
"qbittorrent": [
{
"base_url": "http://qbittorrent:8080"
// "username": "xxxx", // optional
// "password": "xxxx", // optional
// "name": "qBittorrent" // optional; must match client name in *arr
}
]
}
```
environment:
<<: *default-tz-puid-pgid
LOG_LEVEL: DEBUG
TEST_RUN: True
TIMER: 10
# IGNORED_DOWNLOAD_CLIENTS: |
# - emulerr
# SSL_VERIFICATION: true
# # --- Optional: Job Defaults ---
# MAX_STRIKES: 3
# MIN_DAYS_BETWEEN_SEARCHES: 7
# MAX_CONCURRENT_SEARCHES: 3
# # --- Jobs (short notation) ---
# REMOVE_BAD_FILES: True
# REMOVE_FAILED_DOWNLOADS: True
# REMOVE_FAILED_IMPORTS: True
# REMOVE_METADATA_MISSING: True
# REMOVE_MISSING_FILES: True
# REMOVE_ORPHANS: True
# REMOVE_SLOW: True
# REMOVE_STALLED: True
# REMOVE_UNMONITORED: True
# SEARCH_BETTER_CONTENT: True
# SEARCH_MISSING_CONTENT: True
# # --- OR: Jobs (with job-specific settings) ---
# REMOVE_BAD_FILES: True
# REMOVE_FAILED_DOWNLOADS: True
# REMOVE_FAILED_IMPORTS:
# REMOVE_METADATA_MISSING: |
# max_strikes: 3
# REMOVE_MISSING_FILES: True
# REMOVE_ORPHANS: True
# REMOVE_SLOW: |
# min_speed: 100
# max_strikes: 3
# REMOVE_STALLED: |
# max_strikes: 3
# REMOVE_UNMONITORED: True
# SEARCH_BETTER_CONTENT: |
# min_days_between_searches: 7
# max_concurrent_searches: 3
# SEARCH_MISSING_CONTENT: |
# min_days_between_searches: 7
# max_concurrent_searches: 3
# --- Instances ---
SONARR: |
- base_url: "http://sonarr:8989"
api_key: "bdc9d74fdb2b4627aec1cf6c93ed2b2d"
RADARR: |
- base_url: "http://radarr:7878"
api_key: "9412e07e582d4f9587fb56e8777ede10"
# READARR: |
# - base_url: "http://readarr:8787"
# api_key: "e65e8ad6cdb6434289df002b20a27dc3"
# --- Download Clients ---
QBITTORRENT: |
- base_url: "http://qbittorrent:8080"
### Method 3: Running locally
1. Clone the repository with `git clone -b latest https://github.com/ManiMatter/decluttarr.git`
Note: Do provide the `-b latest` in the clone command, else you will be pulling the dev branch which is not what you are after.
2. Rename the `config.conf-Example` inside the config folder to `config.conf`
3. Tweak `config.conf` to your needs
2. Rename the `config_example.yaml` inside the config folder to `config.yaml`
3. Tweak `config.yaml` to your needs
4. Install the libraries listed in the docker/requirements.txt (pip install -r requirements.txt)
5. Run the script with `python3 main.py`
Note: The `config.conf` is disregarded when running via docker-compose.yml
## Explanation of the settings
@@ -164,6 +302,13 @@ Configures the general behavior of the application (across all features)
- Permissible Values: True, False
- Is Mandatory: No (Defaults to False)
**TIMER**
- Sets the frequency of how often the queue is checked for orphan and stalled downloads
- Type: Integer
- Unit: Minutes
- Is Mandatory: No (Defaults to 10)
**SSL_VERIFICATION**
- Turns SSL certificate verification on or off for all API calls
@@ -173,38 +318,83 @@ Configures the general behavior of the application (across all features)
- Permissible Values: True, False
- Is Mandatory: No (Defaults to True)
**IGNORED_DOWNLOAD_CLIENTS**
- Allows you to configure download client names that will be skipped by decluttarr
Note: The names provided here must exactly match the download client names in your *arr application(s)
- Type: List of strings
- Is Mandatory: No (Defaults to [], i.e. nothing is ignored)
**PRIVATE_TRACKER_HANDLING / PUBLIC_TRACKER_HANDLING**
- Defines what happens with private/public tracker torrents if they are flagged by a removal job
- Note that this only works for qbittorrent currently (if you set up qbittorrent in your config)
- "remove" means that torrents are removed (default behavior)
- "skip" means they are disregarded (which some users might find handy to protect their private tracker torrents from premature removal, i.e., before their seed targets are met)
- "obsolete_tag" means that rather than being removed, the torrents are tagged. This allows other applications (such as [qbit_manage](https://github.com/StuffAnThings/qbit_manage)) to monitor them and remove them once seed targets are fulfilled
- Type: String
- Permissible Values: remove, skip, obsolete_tag
- Is Mandatory: No (Defaults to remove)
**OBSOLETE_TAG**
- Only relevant in conjunction with PRIVATE_TRACKER_HANDLING / PUBLIC_TRACKER_HANDLING
- If either of these two settings are set to "obsolete_tag", then this setting can be used to define the tag that has to be applied
- Type: String
- Permissible Values: Any
- Is Mandatory: No (Defaults to "Obsolete")
**PROTECTED_TAG**
- If you do not want a given torrent being removed by decluttarr in any circumstance, you can use this feature to protect it from being removed
- Go to qBittorrent and mark the torrent with the tag you define here - it won't be touched
- Note that this only works for qbittorrent currently (if you set up qbittorrent in your config)
- Type: String
- Permissible Values: Any
- Is Mandatory: No (Defaults to "Keep")
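Taken together, the tracker-handling and tag settings above could be combined in the general section of the config.yaml like this (values shown are the documented defaults, except private_tracker_handling):

```yaml
general:
  private_tracker_handling: "obsolete_tag"  # tag private-tracker torrents instead of removing them
  public_tracker_handling: "remove"         # remove public-tracker torrents outright
  obsolete_tag: "Obsolete"                  # tag applied when handling is "obsolete_tag"
  protected_tag: "Keep"                     # torrents carrying this qBittorrent tag are never touched
```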
---
### **Features settings**
---
Steers which type of cleaning is applied to the downloads queue
### **Job Defaults**
**REMOVE_TIMER**
Certain jobs take in additional configuration settings. If you want to define these settings globally (for all jobs to which they apply), you can do this here.
- Sets the frequency of how often the queue is checked for orphan and stalled downloads
If a job has the same settings configured on job-level, the job-level settings will take precedence.
**MAX_STRIKES**
- Certain jobs wait before removing a download, until the jobs have caught the same download a given number of times. This is defined by max_strikes
- max_strikes defines how many times a job may catch a download; once it is caught one more time, the download is removed.
- Type: Integer
- Unit: Minutes
- Is Mandatory: No (Defaults to 10)
- Unit: Number of times the job catches a download
- Is Mandatory: No (Defaults to 3)
**REMOVE_FAILED**
**MIN_DAYS_BETWEEN_SEARCHES**
- Steers whether failed downloads with no connections are removed from the queue
- These downloads are not added to the blocklist
- A new download from another source is automatically added by radarr/sonarr/lidarr/readarr/whisparr (if available)
- Type: Boolean
- Permissible Values: True, False
- Is Mandatory: No (Defaults to False)
- Only relevant together with search_unmet_cutoff_content and search_missing_content
- Specifies how many days should elapse before decluttarr tries to search for a given wanted item again
- Type: Integer
- Permissible Values: Any number
- Is Mandatory: No (Defaults to 7)
**REMOVE_FAILED_IMPORTS**
**MAX_CONCURRENT_SEARCHES**
- Only relevant together with search_unmet_cutoff_content and search_missing_content
- Specifies how many items should be searched for concurrently on a single *arr in a given iteration
- Each arr counts separately
- Example: If your wanted-list has 100 entries, and you define "3" as your number, after roughly 30 searches you'll have all items on your list searched for.
- Since the timer setting steers how often the jobs run, if you put 10 minutes there, decluttarr will have run 6 times after one hour and thus already processed 18 searches. Long story short: no need to put a very high number here (else you'll just create unnecessary traffic on your end).
- Type: Integer
- Permissible Values: Any number
- Is Mandatory: No (Defaults to 3)
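To illustrate the precedence rule mentioned above, the job defaults can be set once and overridden per job; a sketch based on the config_example.yaml:

```yaml
job_defaults:
  max_strikes: 3
  min_days_between_searches: 7
  max_concurrent_searches: 3
jobs:
  remove_slow:
    max_strikes: 5   # job-level setting takes precedence over the default of 3
  remove_stalled: {} # falls back to the default max_strikes of 3
```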
### **Jobs**
This is the interesting section. It defines which job you want decluttarr to run for you.
CONTINUE HEREEEEEEEE
- Steers whether downloads that failed importing are removed from the queue
- This can happen, for example, when a better version is already present
- Note: Only considers an import failed if the import message contains a warning that is listed on FAILED_IMPORT_MESSAGE_PATTERNS (see below)
- These downloads are added to the blocklist
- If the setting IGNORE_PRIVATE_TRACKERS is true, and the affected torrent is a private tracker, the queue item will be removed, but the torrent files will be kept
- Type: Boolean
- Permissible Values: True, False
- Is Mandatory: No (Defaults to False)
**REMOVE_METADATA_MISSING**
@@ -259,6 +449,20 @@ Steers which type of cleaning is applied to the downloads queue
- Permissible Values: True, False
- Is Mandatory: No (Defaults to False)
**SKIP_FILES**
- Steers whether files within torrents are marked as 'do not download' if they match one of these conditions:
1) They are less than 100% available
2) They are not one of the desired file types supported by the *arr apps
3) They contain one of these words (case insensitive) and are smaller than 500 MB:
- Trailer
- Sample
- If all files of a torrent are marked as 'do not download', the torrent will be removed and blocklisted
- Note that this is only supported when qBittorrent is configured in decluttarr and it will turn on the setting 'Keep unselected files in ".unwanted" folder' in qBittorrent
- Type: Boolean
- Permissible Values: True, False
- Is Mandatory: No (Defaults to False)
**RUN_PERIODIC_RESCANS**
- Steers whether searches are automatically triggered for items that are missing or have not yet met the cutoff
@@ -294,12 +498,12 @@ If you face issues, please first check the closed issues before opening a new
**MIN_DOWNLOAD_SPEED**
- Sets the minimum download speed for active downloads
- If the increase in the downloaded file size of a download is less than this value between two consecutive checks, the download is considered slow and is removed if this happens more often than the permitted attempts
- If the increase in the downloaded file size of a download is less than this value between two consecutive checks, the download is considered slow and is removed if this happens more often than the permitted strikes
- Type: Integer
- Unit: KBytes per second
- Is Mandatory: No (Defaults to 100, but is only enforced when "REMOVE_SLOW" is true)
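As an illustrative sketch (not decluttarr's actual implementation), the slow-download check can be thought of as comparing the average speed between two consecutive checks against the configured minimum:

```python
def is_slow(bytes_before: int, bytes_after: int, minutes_between_checks: float,
            min_speed_kbps: float = 100) -> bool:
    """True if the average speed between two checks fell below min_speed_kbps (KB/s)."""
    downloaded_kb = (bytes_after - bytes_before) / 1024
    avg_speed_kbps = downloaded_kb / (minutes_between_checks * 60)
    return avg_speed_kbps < min_speed_kbps

# 30 MB gained over the default 10-minute timer interval is ~51 KB/s, below the 100 KB/s default
print(is_slow(0, 30 * 1024 * 1024, 10))  # True
```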
**PERMITTED_ATTEMPTS**
**PERMITTED_STRIKES**
- Defines how many times a download has to be caught as stalled, slow or stuck downloading metadata before it is removed
- Type: Integer
@@ -441,14 +645,6 @@ If a different torrent manager is used, comment out this section (see above the
- Password used to log in to qBittorrent
- Optional; not needed if authentication bypassing on qBittorrent is enabled (for instance for local connections)
## Credits
- Script for detecting stalled downloads expanded on code by MattDGTL/sonarr-radarr-queue-cleaner
- Script to read out config expanded on code by syncarr/syncarr
- SONARR/RADARR team & contributors for their great product, API documentation, and guidance in their Discord channel
- Particular thanks to them for adding an additional flag to their API that allowed this script to detect downloads stuck finding metadata
- craggles17 for arm compatibility
- Fxsch for improved documentation / ReadMe
## Disclaimer


@@ -1,48 +0,0 @@
[general]
LOG_LEVEL = VERBOSE
TEST_RUN = True
[features]
REMOVE_TIMER = 10
REMOVE_FAILED = True
REMOVE_FAILED_IMPORTS = True
REMOVE_METADATA_MISSING = True
REMOVE_MISSING_FILES = True
REMOVE_ORPHANS = True
REMOVE_SLOW = True
REMOVE_STALLED = True
REMOVE_UNMONITORED = True
RUN_PERIODIC_RESCANS = {"SONARR": {"MISSING": true, "CUTOFF_UNMET": true, "MAX_CONCURRENT_SCANS": 3, "MIN_DAYS_BEFORE_RESCAN": 7}, "RADARR": {"MISSING": true, "CUTOFF_UNMET": true, "MAX_CONCURRENT_SCANS": 3, "MIN_DAYS_BEFORE_RESCAN": 7}}
[feature_settings]
MIN_DOWNLOAD_SPEED = 100
PERMITTED_ATTEMPTS = 3
NO_STALLED_REMOVAL_QBIT_TAG = Don't Kill
IGNORE_PRIVATE_TRACKERS = FALSE
FAILED_IMPORT_MESSAGE_PATTERNS = ["Not a Custom Format upgrade for existing", "Not an upgrade for existing"]
IGNORED_DOWNLOAD_CLIENTS = ["emulerr"]
[radarr]
RADARR_URL = http://radarr:7878
RADARR_KEY = $RADARR_API_KEY
[sonarr]
SONARR_URL = http://sonarr:8989
SONARR_KEY = $SONARR_API_KEY
[lidarr]
LIDARR_URL = http://lidarr:8686
LIDARR_KEY = $LIDARR_API_KEY
[readarr]
READARR_URL = http://readarr:8787
READARR_KEY = $READARR_API_KEY
[whisparr]
WHISPARR_URL = http://whisparr:6969
WHISPARR_KEY = $WHISPARR_API_KEY
[qbittorrent]
QBITTORRENT_URL = http://qbittorrent:8080
QBITTORRENT_USERNAME = Your name (or empty)
QBITTORRENT_PASSWORD = Your password (or empty)


@@ -0,0 +1,63 @@
general:
log_level: INFO
test_run: true
timer: 10
# ignored_download_clients: ["emulerr"]
# ssl_verification: false # Optional: Defaults to true
# private_tracker_handling: "obsolete_tag" # remove, skip, obsolete_tag. Optional. Default: remove
# public_tracker_handling: "remove" # remove, skip, obsolete_tag. Optional. Default: remove
# obsolete_tag: "Obsolete" # optional. Default: "Obsolete"
# protected_tag: "Keep" # optional. Default: "Keep"
job_defaults:
max_strikes: 3
min_days_between_searches: 7
max_concurrent_searches: 3
jobs:
remove_bad_files:
remove_failed_downloads:
remove_failed_imports:
message_patterns:
- Not a Custom Format upgrade for existing*
- Not an upgrade for existing*
remove_metadata_missing:
# max_strikes: 3
remove_missing_files:
remove_orphans:
remove_slow:
# min_speed: 100
# max_strikes: 3
remove_stalled:
# max_strikes: 3
remove_unmonitored:
search_unmet_cutoff_content:
# min_days_between_searches: 7
# max_concurrent_searches: 3
search_missing_content:
# min_days_between_searches: 7
# max_concurrent_searches: 3
instances:
sonarr:
- base_url: "http://sonarr:8989"
api_key: "xxxx"
radarr:
- base_url: "http://radarr:7878"
api_key: "xxxx"
readarr:
- base_url: "http://readarr:8787"
api_key: "xxxx"
lidarr:
- base_url: "http://lidarr:8686"
api_key: "xxxx"
whisparr:
- base_url: "http://whisparr:6969"
api_key: "xxxx"
download_clients:
qbittorrent:
- base_url: "http://qbittorrent:8080" # You can use decluttarr without qbit (not all features available, see readme).
# username: xxxx # (optional -> if not provided, assuming not needed)
# password: xxxx # (optional -> if not provided, assuming not needed)
# name: "qBittorrent" # (optional -> if not provided, assuming "qBittorrent". Must correspond with what is specified in your *arr as download client name)


@@ -1,130 +0,0 @@
#### Turning off black formatting
# fmt: off
from config.parser import get_config_value
from config.env_vars import *
# Define data types and default values for settingsDict variables
# General
LOG_LEVEL = get_config_value('LOG_LEVEL', 'general', False, str, 'INFO')
TEST_RUN = get_config_value('TEST_RUN', 'general', False, bool, False)
SSL_VERIFICATION = get_config_value('SSL_VERIFICATION', 'general', False, bool, True)
# Features
REMOVE_TIMER = get_config_value('REMOVE_TIMER', 'features', False, float, 10)
REMOVE_FAILED = get_config_value('REMOVE_FAILED', 'features', False, bool, False)
REMOVE_FAILED_IMPORTS = get_config_value('REMOVE_FAILED_IMPORTS' , 'features', False, bool, False)
REMOVE_METADATA_MISSING = get_config_value('REMOVE_METADATA_MISSING', 'features', False, bool, False)
REMOVE_MISSING_FILES = get_config_value('REMOVE_MISSING_FILES', 'features', False, bool, False)
REMOVE_NO_FORMAT_UPGRADE = get_config_value('REMOVE_NO_FORMAT_UPGRADE', 'features', False, bool, False) # OUTDATED - WILL RETURN WARNING
REMOVE_ORPHANS = get_config_value('REMOVE_ORPHANS', 'features', False, bool, False)
REMOVE_SLOW = get_config_value('REMOVE_SLOW', 'features', False, bool, False)
REMOVE_STALLED = get_config_value('REMOVE_STALLED', 'features', False, bool, False)
REMOVE_UNMONITORED = get_config_value('REMOVE_UNMONITORED', 'features', False, bool, False)
RUN_PERIODIC_RESCANS = get_config_value('RUN_PERIODIC_RESCANS', 'features', False, dict, {})
# Feature Settings
MIN_DOWNLOAD_SPEED = get_config_value('MIN_DOWNLOAD_SPEED', 'feature_settings', False, int, 0)
PERMITTED_ATTEMPTS = get_config_value('PERMITTED_ATTEMPTS', 'feature_settings', False, int, 3)
NO_STALLED_REMOVAL_QBIT_TAG = get_config_value('NO_STALLED_REMOVAL_QBIT_TAG', 'feature_settings', False, str, 'Don\'t Kill')
IGNORE_PRIVATE_TRACKERS = get_config_value('IGNORE_PRIVATE_TRACKERS', 'feature_settings', False, bool, True)
FAILED_IMPORT_MESSAGE_PATTERNS = get_config_value('FAILED_IMPORT_MESSAGE_PATTERNS','feature_settings', False, list, [])
IGNORED_DOWNLOAD_CLIENTS = get_config_value('IGNORED_DOWNLOAD_CLIENTS', 'feature_settings', False, list, [])
# Radarr
RADARR_URL = get_config_value('RADARR_URL', 'radarr', False, str)
RADARR_KEY = None if RADARR_URL is None else \
    get_config_value('RADARR_KEY', 'radarr', True, str)
# Sonarr
SONARR_URL = get_config_value('SONARR_URL', 'sonarr', False, str)
SONARR_KEY = None if SONARR_URL is None else \
    get_config_value('SONARR_KEY', 'sonarr', True, str)
# Lidarr
LIDARR_URL = get_config_value('LIDARR_URL', 'lidarr', False, str)
LIDARR_KEY = None if LIDARR_URL is None else \
    get_config_value('LIDARR_KEY', 'lidarr', True, str)
# Readarr
READARR_URL = get_config_value('READARR_URL', 'readarr', False, str)
READARR_KEY = None if READARR_URL is None else \
    get_config_value('READARR_KEY', 'readarr', True, str)
# Whisparr
WHISPARR_URL = get_config_value('WHISPARR_URL', 'whisparr', False, str)
WHISPARR_KEY = None if WHISPARR_URL is None else \
    get_config_value('WHISPARR_KEY', 'whisparr', True, str)
# qBittorrent
QBITTORRENT_URL = get_config_value('QBITTORRENT_URL', 'qbittorrent', False, str, '')
QBITTORRENT_USERNAME = get_config_value('QBITTORRENT_USERNAME', 'qbittorrent', False, str, '')
QBITTORRENT_PASSWORD = get_config_value('QBITTORRENT_PASSWORD', 'qbittorrent', False, str, '')
########################################################################################################################
########### Validate settings
if not (IS_IN_PYTEST or RADARR_URL or SONARR_URL or LIDARR_URL or READARR_URL or WHISPARR_URL):
    print('[ ERROR ]: No Radarr/Sonarr/Lidarr/Readarr/Whisparr URLs specified (nothing to monitor)')
    exit()
#### Validate rescan settings
PERIODIC_RESCANS = get_config_value("PERIODIC_RESCANS", "features", False, dict, {})
rescan_supported_apps = ["SONARR", "RADARR"]
rescan_default_values = {
    "MISSING": (True, bool),
    "CUTOFF_UNMET": (True, bool),
    "MAX_CONCURRENT_SCANS": (3, int),
    "MIN_DAYS_BEFORE_RESCAN": (7, int),
}
# Remove rescan apps that are not supported
for key in list(RUN_PERIODIC_RESCANS.keys()):
    if key not in rescan_supported_apps:
        print(f"[ WARNING ]: Removed '{key}' from RUN_PERIODIC_RESCANS since only {rescan_supported_apps} are supported.")
        RUN_PERIODIC_RESCANS.pop(key)
# Ensure SONARR and RADARR have the required parameters with default values if they are present
for app in rescan_supported_apps:
    if app in RUN_PERIODIC_RESCANS:
        for param, (default, expected_type) in rescan_default_values.items():
            if param not in RUN_PERIODIC_RESCANS[app]:
                print(f"[ INFO ]: Adding missing parameter '{param}' to '{app}' with default value '{default}'.")
                RUN_PERIODIC_RESCANS[app][param] = default
            else:
                # Check the type and correct it if necessary
                current_value = RUN_PERIODIC_RESCANS[app][param]
                if not isinstance(current_value, expected_type):
                    print(
                        f"[ INFO ]: Parameter '{param}' for '{app}' must be of type {expected_type.__name__} but found value '{current_value}' (type '{type(current_value).__name__}'). Defaulting to '{default}'."
                    )
                    RUN_PERIODIC_RESCANS[app][param] = default
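The default-filling and type-checking pattern used above can be sketched in isolation. This is an illustrative standalone version, not the project's actual helper; the sample dicts are invented for the example (note that, as in the real code, `isinstance(True, int)` holds in Python, so a bool would pass an `int` check):

```python
def apply_defaults(config: dict, defaults: dict) -> dict:
    """Fill in missing keys and replace wrongly-typed values with their defaults."""
    for param, (default, expected_type) in defaults.items():
        if param not in config or not isinstance(config[param], expected_type):
            config[param] = default
    return config

# "MISSING" has the wrong type (str instead of bool) and gets reset;
# "CUTOFF_UNMET" is absent and gets added; the valid int is kept.
rescans = {"MISSING": "yes", "MAX_CONCURRENT_SCANS": 5}
defaults = {
    "MISSING": (True, bool),
    "CUTOFF_UNMET": (True, bool),
    "MAX_CONCURRENT_SCANS": (3, int),
}
result = apply_defaults(rescans, defaults)
```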
########### Enrich setting variables
if RADARR_URL: RADARR_URL = RADARR_URL.rstrip('/') + '/api/v3'
if SONARR_URL: SONARR_URL = SONARR_URL.rstrip('/') + '/api/v3'
if LIDARR_URL: LIDARR_URL = LIDARR_URL.rstrip('/') + '/api/v1'
if READARR_URL: READARR_URL = READARR_URL.rstrip('/') + '/api/v1'
if WHISPARR_URL: WHISPARR_URL = WHISPARR_URL.rstrip('/') + '/api/v3'
if QBITTORRENT_URL: QBITTORRENT_URL = QBITTORRENT_URL.rstrip('/') + '/api/v2'
RADARR_MIN_VERSION = "5.3.6.8608"
if "RADARR" in PERIODIC_RESCANS:
    RADARR_MIN_VERSION = "5.10.3.9171"
SONARR_MIN_VERSION = "4.0.1.1131"
if "SONARR" in PERIODIC_RESCANS:
    SONARR_MIN_VERSION = "4.0.9.2332"
LIDARR_MIN_VERSION = None
READARR_MIN_VERSION = None
WHISPARR_MIN_VERSION = '2.0.0.548'
QBITTORRENT_MIN_VERSION = '4.3.0'
SUPPORTED_ARR_APPS = ['RADARR', 'SONARR', 'LIDARR', 'READARR', 'WHISPARR']
########### Add Variables to Dictionary
settingsDict = {}
for var_name in dir():
    if var_name.isupper():
        settingsDict[var_name] = locals()[var_name]


@@ -1,6 +0,0 @@
import os
IS_IN_DOCKER = os.environ.get("IS_IN_DOCKER")
IMAGE_TAG = os.environ.get("IMAGE_TAG", "Local")
SHORT_COMMIT_ID = os.environ.get("SHORT_COMMIT_ID", "n/a")
IS_IN_PYTEST = os.environ.get("IS_IN_PYTEST")


@@ -1,82 +0,0 @@
#!/usr/bin/env python
import sys
import os
import configparser
import json
from config.env_vars import *
# Configures how to parse configuration file
config_file_name = "config.conf"
config_file_full_path = os.path.join(
os.path.abspath(os.path.dirname(__file__)), config_file_name
)
sys.tracebacklimit = 0 # dont show stack traces in prod mode
config = configparser.ConfigParser()
config.optionxform = str # maintain capitalization of config keys
config.read(config_file_full_path)
def config_section_map(section):
    "Load the config file into a dictionary"
    dict1 = {}
    options = config.options(section)
    for option in options:
        try:
            value = config.get(section, option)
            # Attempt to parse JSON for dictionary-like values
            try:
                dict1[option] = json.loads(value)
            except json.JSONDecodeError:
                dict1[option] = value
        except Exception as e:
            print(f"Exception on {option}: {e}")
            dict1[option] = None
    return dict1
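The JSON-or-plain-string fallback used in `config_section_map` can be shown standalone. This is a simplified sketch of the idea, not the project's function; structured values parse as JSON, and anything that fails to parse stays a raw string:

```python
import json

def parse_option(value: str):
    """Parse a config value as JSON if possible, otherwise keep the raw string."""
    try:
        return json.loads(value)
    except json.JSONDecodeError:
        return value

parsed_list = parse_option('["a", "b"]')   # valid JSON -> Python list
parsed_text = parse_option("Don't Kill")   # not JSON -> kept as string
```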
def cast(value, type_):
    return type_(value)
def get_config_value(key, config_section, is_mandatory, datatype, default_value=None):
    "Return for each key the corresponding value from the Docker environment or the config file"
    if IS_IN_DOCKER:
        config_value = os.environ.get(key)
        if config_value is None:
            if is_mandatory:
                print(f"[ ERROR ]: Variable not specified in Docker environment: {key}")
                sys.exit(0)
            config_value = default_value
    else:
        try:
            config_value = config_section_map(config_section).get(key)
        except configparser.NoSectionError:
            config_value = None
        if config_value is None:
            if is_mandatory:
                print(
                    f"[ ERROR ]: Mandatory variable not specified in config file, section [{config_section}]: {key} (data type: {datatype.__name__})"
                )
                sys.exit(0)
            config_value = default_value
    # Apply the data type
    try:
        if datatype == bool:
            # Explicit parse instead of eval(), which would execute arbitrary config input
            if not isinstance(config_value, bool) and config_value is not None:
                text = str(config_value).strip().capitalize()
                if text not in ("True", "False"):
                    raise ValueError(f"not a boolean: {config_value}")
                config_value = text == "True"
        elif datatype in (list, dict):
            if not isinstance(config_value, datatype):
                config_value = json.loads(config_value)
        elif config_value is not None:
            config_value = cast(config_value, datatype)
    except Exception as e:
        print(
            f'[ ERROR ]: The value retrieved for [{config_section}]: {key} is "{config_value}" and cannot be converted to data type {datatype}'
        )
        print(e)
        sys.exit(0)
    return config_value
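The lookup precedence implemented above (environment first, then config file, then default, with a hard failure for missing mandatory keys) can be sketched as a minimal standalone helper. This is an illustration of the precedence only; the real function additionally casts data types, and the sample dicts here are invented:

```python
def lookup(key, env, file_values, default=None, mandatory=False):
    """Resolve a setting: environment first, then config file, then default."""
    value = env.get(key, file_values.get(key))
    if value is None:
        if mandatory:
            raise KeyError(f"mandatory setting missing: {key}")
        return default
    return value

env = {"RADARR_URL": "http://radarr:7878"}          # would come from os.environ
conf = {"REMOVE_TIMER": "10"}                        # would come from the config file
url = lookup("RADARR_URL", env, conf)                # env wins
timer = int(lookup("REMOVE_TIMER", env, conf, default="5"))  # falls back to file value
```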


@@ -1,25 +0,0 @@
#FROM python:3.9-slim-buster
# For debugging:
# sudo docker run --rm -it --entrypoint sh ghcr.io/manimatter/decluttarr:dev
FROM python:3.10.13-slim
# Define a build-time argument for IMAGE_TAG
ARG IMAGE_TAG
ARG SHORT_COMMIT_ID
# Set an environment variable using the build-time argument
ENV IMAGE_TAG=$IMAGE_TAG
ENV SHORT_COMMIT_ID=$SHORT_COMMIT_ID
LABEL org.opencontainers.image.source="https://github.com/ManiMatter/decluttarr"
ENV IS_IN_DOCKER 1
WORKDIR /app
COPY ./docker/requirements.txt ./docker/requirements.txt
RUN pip install --no-cache-dir -r docker/requirements.txt
COPY . .
CMD ["python", "main.py"]

docker/dockerfile Normal file

@@ -0,0 +1,42 @@
#FROM python:3.9-slim-buster
# For debugging:
# First build:
# sudo docker build --no-cache --progress=plain -f ./docker/dockerfile -t decluttarr .
# Entering image (and printing env variables):
# sudo docker run --rm -it -w /app --entrypoint sh decluttarr -c "printenv; exec sh"
# Then run from host (using docker-compose and as image: decluttarr:latest)
# sudo docker run --rm -v "/config:/app/config" --name decluttarr decluttarr
# Entering running container:
# sudo docker exec -it -w /app decluttarr sh -c "printenv; exec sh"
# Alternatively: Inspect env vars via portainer
FROM python:3.10.13-slim
# Define a build-time argument for IMAGE_TAG
ARG IMAGE_TAG
ARG SHORT_COMMIT_ID
# Set an environment variable using the build-time argument
ENV IMAGE_TAG=$IMAGE_TAG
ENV SHORT_COMMIT_ID=$SHORT_COMMIT_ID
LABEL org.opencontainers.image.source="https://github.com/ManiMatter/decluttarr"
ENV IN_DOCKER=true
WORKDIR /app
# Copy files
COPY ./docker/requirements.txt requirements.txt
RUN pip install --no-cache-dir -r requirements.txt
COPY main.py main.py
COPY src src
CMD ["python", "main.py"]
# For debugging:
# CMD ["sh", "-c", "while true; do sleep 1000; done"]


@@ -2,8 +2,12 @@
requests==2.32.3
asyncio==3.4.3
python-dateutil==2.8.2
verboselogs==1.7
pytest==8.0.1
pytest-asyncio==0.23.5
pre-commit==3.8.0
black==24.8.0
pylint==3.3.3
autoflake==2.3.1
isort==5.13.2
envyaml==1.10.211231
demjson3==3.0.6

main.py

@@ -1,97 +1,36 @@
# Import Libraries
import asyncio
import logging, verboselogs
from src.settings.settings import Settings
logger = verboselogs.VerboseLogger(__name__)
import json
from src.utils.startup import launch_steps
from src.utils.log_setup import logger
from src.job_manager import JobManager
# Import Functions
from config.definitions import settingsDict
from src.utils.loadScripts import *
from src.decluttarr import queueCleaner
from src.utils.rest import rest_get, rest_post
from src.utils.trackers import Defective_Tracker, Download_Sizes_Tracker
settings = Settings()
job_manager = JobManager(settings)
# Hide SSL Verification Warnings
if settingsDict["SSL_VERIFICATION"] == False:
    import warnings
    warnings.filterwarnings("ignore", message="Unverified HTTPS request")
# Set up logging
setLoggingFormat(settingsDict)
# Main function
async def main(settingsDict):
# Adds to settings Dict the instances that are actually configures
settingsDict["INSTANCES"] = []
for arrApplication in settingsDict["SUPPORTED_ARR_APPS"]:
if settingsDict[arrApplication + "_URL"]:
settingsDict["INSTANCES"].append(arrApplication)
# Pre-populates the dictionaries (in classes) that track the items that were already caught as having problems or removed
defectiveTrackingInstances = {}
for instance in settingsDict["INSTANCES"]:
defectiveTrackingInstances[instance] = {}
defective_tracker = Defective_Tracker(defectiveTrackingInstances)
download_sizes_tracker = Download_Sizes_Tracker({})
# Get name of arr-instances
for instance in settingsDict["INSTANCES"]:
settingsDict = await getArrInstanceName(settingsDict, instance)
# Check outdated
upgradeChecks(settingsDict)
# Welcome Message
showWelcome()
# Current Settings
showSettings(settingsDict)
# Check Minimum Version and if instances are reachable and retrieve qbit cookie
settingsDict = await instanceChecks(settingsDict)
# Create qBit protection tag if not existing
await createQbitProtectionTag(settingsDict)
# Show Logger Level
showLoggerLevel(settingsDict)
# Main function
async def main():
await launch_steps(settings)
# Start Cleaning
while True:
logger.verbose("-" * 50)
# Refresh qBit Cookie
if settingsDict["QBITTORRENT_URL"]:
await qBitRefreshCookie(settingsDict)
if not settingsDict["QBIT_COOKIE"]:
logger.error("Cookie Refresh failed - exiting decluttarr")
exit()
# Cache protected (via Tag) and private torrents
protectedDownloadIDs, privateDowloadIDs = await getProtectedAndPrivateFromQbit(
settingsDict
)
# Refresh qBit Cookies
for qbit in settings.download_clients.qbittorrent:
await qbit.refresh_cookie()
# Run script for each instance
for instance in settingsDict["INSTANCES"]:
await queueCleaner(
settingsDict,
instance,
defective_tracker,
download_sizes_tracker,
protectedDownloadIDs,
privateDowloadIDs,
)
for arr in settings.instances.arrs:
await job_manager.run_jobs(arr)
logger.verbose("")
logger.verbose("Queue clean-up complete!")
# Wait for the next run
await asyncio.sleep(settingsDict["REMOVE_TIMER"] * 60)
await asyncio.sleep(settings.general.timer * 60)
return
if __name__ == "__main__":
asyncio.run(main(settingsDict))
asyncio.run(main())

pyproject.toml Normal file

@@ -0,0 +1,23 @@
[tool.pylint]
ignore = ".venv"
ignore-patterns = ["__pycache__", ".pytest_cache"]
disable = [
"logging-fstring-interpolation", # W1203
"f-string-without-interpolation", # W1309
"broad-exception-caught", # W0718
"missing-module-docstring", # C0114
"missing-class-docstring", # C0115
"missing-function-docstring", # C0116
"line-too-long", # C0301
]
[tool.pytest.ini_options]
# log_cli = true # Uncomment this if you need it
addopts = "-q --tb=short -s"
log_cli_level = "INFO"
log_cli_format = "%(asctime)s - %(levelname)s - %(name)s - %(message)s"
log_cli_date_format = "%Y-%m-%d %H:%M:%S"
testpaths = [
"tests"
]


@@ -1,183 +0,0 @@
# Cleans the download queue
import sys
import logging, verboselogs

logger = verboselogs.VerboseLogger(__name__)

from src.utils.shared import errorDetails, get_queue
from src.jobs.remove_failed import remove_failed
from src.jobs.remove_failed_imports import remove_failed_imports
from src.jobs.remove_metadata_missing import remove_metadata_missing
from src.jobs.remove_missing_files import remove_missing_files
from src.jobs.remove_orphans import remove_orphans
from src.jobs.remove_slow import remove_slow
from src.jobs.remove_stalled import remove_stalled
from src.jobs.remove_unmonitored import remove_unmonitored
from src.jobs.run_periodic_rescans import run_periodic_rescans
from src.utils.trackers import Deleted_Downloads
async def queueCleaner(
    settingsDict,
    arr_type,
    defective_tracker,
    download_sizes_tracker,
    protectedDownloadIDs,
    privateDowloadIDs,
):
    # Read out correct instance depending on radarr/sonarr flag
    run_dict = {}
    if arr_type == "RADARR":
        BASE_URL = settingsDict["RADARR_URL"]
        API_KEY = settingsDict["RADARR_KEY"]
        NAME = settingsDict["RADARR_NAME"]
        full_queue_param = "includeUnknownMovieItems"
    elif arr_type == "SONARR":
        BASE_URL = settingsDict["SONARR_URL"]
        API_KEY = settingsDict["SONARR_KEY"]
        NAME = settingsDict["SONARR_NAME"]
        full_queue_param = "includeUnknownSeriesItems"
    elif arr_type == "LIDARR":
        BASE_URL = settingsDict["LIDARR_URL"]
        API_KEY = settingsDict["LIDARR_KEY"]
        NAME = settingsDict["LIDARR_NAME"]
        full_queue_param = "includeUnknownArtistItems"
    elif arr_type == "READARR":
        BASE_URL = settingsDict["READARR_URL"]
        API_KEY = settingsDict["READARR_KEY"]
        NAME = settingsDict["READARR_NAME"]
        full_queue_param = "includeUnknownAuthorItems"
    elif arr_type == "WHISPARR":
        BASE_URL = settingsDict["WHISPARR_URL"]
        API_KEY = settingsDict["WHISPARR_KEY"]
        NAME = settingsDict["WHISPARR_NAME"]
        full_queue_param = "includeUnknownSeriesItems"
    else:
        logger.error("Unknown arr_type specified, exiting: %s", str(arr_type))
        sys.exit()

    # Cleans up the downloads queue
    logger.verbose("Cleaning queue on %s:", NAME)
    # Refresh queue:
    try:
        full_queue = await get_queue(BASE_URL, API_KEY, settingsDict, params={full_queue_param: True})
        if full_queue:
            logger.debug("queueCleaner/full_queue at start:")
            logger.debug(full_queue)
            deleted_downloads = Deleted_Downloads([])
            items_detected = 0
            if settingsDict["REMOVE_FAILED"]:
                items_detected += await remove_failed(
                    settingsDict, BASE_URL, API_KEY, NAME, deleted_downloads,
                    defective_tracker, protectedDownloadIDs, privateDowloadIDs,
                )
            if settingsDict["REMOVE_FAILED_IMPORTS"]:
                items_detected += await remove_failed_imports(
                    settingsDict, BASE_URL, API_KEY, NAME, deleted_downloads,
                    defective_tracker, protectedDownloadIDs, privateDowloadIDs,
                )
            if settingsDict["REMOVE_METADATA_MISSING"]:
                items_detected += await remove_metadata_missing(
                    settingsDict, BASE_URL, API_KEY, NAME, deleted_downloads,
                    defective_tracker, protectedDownloadIDs, privateDowloadIDs,
                )
            if settingsDict["REMOVE_MISSING_FILES"]:
                items_detected += await remove_missing_files(
                    settingsDict, BASE_URL, API_KEY, NAME, deleted_downloads,
                    defective_tracker, protectedDownloadIDs, privateDowloadIDs,
                )
            if settingsDict["REMOVE_ORPHANS"]:
                items_detected += await remove_orphans(
                    settingsDict, BASE_URL, API_KEY, NAME, deleted_downloads,
                    defective_tracker, protectedDownloadIDs, privateDowloadIDs,
                    full_queue_param,
                )
            if settingsDict["REMOVE_SLOW"]:
                items_detected += await remove_slow(
                    settingsDict, BASE_URL, API_KEY, NAME, deleted_downloads,
                    defective_tracker, protectedDownloadIDs, privateDowloadIDs,
                    download_sizes_tracker,
                )
            if settingsDict["REMOVE_STALLED"]:
                items_detected += await remove_stalled(
                    settingsDict, BASE_URL, API_KEY, NAME, deleted_downloads,
                    defective_tracker, protectedDownloadIDs, privateDowloadIDs,
                )
            if settingsDict["REMOVE_UNMONITORED"]:
                items_detected += await remove_unmonitored(
                    settingsDict, BASE_URL, API_KEY, NAME, deleted_downloads,
                    defective_tracker, protectedDownloadIDs, privateDowloadIDs,
                    arr_type,
                )
            if items_detected == 0:
                logger.verbose(">>> Queue is clean.")
        else:
            logger.verbose(">>> Queue is empty.")
        if settingsDict["RUN_PERIODIC_RESCANS"]:
            await run_periodic_rescans(settingsDict, BASE_URL, API_KEY, NAME, arr_type)
    except Exception as error:
        errorDetails(NAME, error)
    return

src/job_manager.py Normal file

@@ -0,0 +1,107 @@
# Cleans the download queue
from src.utils.log_setup import logger
from src.utils.queue_manager import QueueManager
from src.jobs.remove_bad_files import RemoveBadFiles
from src.jobs.remove_failed_imports import RemoveFailedImports
from src.jobs.remove_failed_downloads import RemoveFailedDownloads
from src.jobs.remove_metadata_missing import RemoveMetadataMissing
from src.jobs.remove_missing_files import RemoveMissingFiles
from src.jobs.remove_orphans import RemoveOrphans
from src.jobs.remove_slow import RemoveSlow
from src.jobs.remove_stalled import RemoveStalled
from src.jobs.remove_unmonitored import RemoveUnmonitored
from src.jobs.search_handler import SearchHandler
class JobManager:
    arr = None

    def __init__(self, settings):
        self.settings = settings

    async def run_jobs(self, arr):
        self.arr = arr
        await self.removal_jobs()
        await self.search_jobs()

    async def removal_jobs(self):
        logger.verbose("")
        logger.verbose(f"Cleaning queue on {self.arr.name}:")
        if not await self._queue_has_items():
            return
        if not await self._qbit_connected():
            return
        # Refresh trackers
        await self.arr.tracker.refresh_private_and_protected(self.settings)
        # Execute cleaning
        removal_jobs = self._get_removal_jobs()
        items_detected = 0
        for removal_job in removal_jobs:
            items_detected += await removal_job.run()
        if items_detected == 0:
            logger.verbose(">>> Queue is clean.")

    async def search_jobs(self):
        if self.arr.arr_type == "whisparr":  # Whisparr does not support this endpoint (yet?)
            return
        if self.settings.jobs.search_missing_content.enabled:
            await SearchHandler(self.arr, self.settings).handle_search("missing")
        if self.settings.jobs.search_unmet_cutoff_content.enabled:
            await SearchHandler(self.arr, self.settings).handle_search("cutoff")

    async def _queue_has_items(self):
        queue_manager = QueueManager(self.arr, self.settings)
        full_queue = await queue_manager.get_queue_items("full")
        if full_queue:
            logger.debug(
                "job_runner/full_queue at start: %s",
                queue_manager.format_queue(full_queue),
            )
            return True
        logger.verbose(">>> Queue is empty.")
        return False

    async def _qbit_connected(self):
        # Check if any client is disconnected
        for qbit in self.settings.download_clients.qbittorrent:
            if not await qbit.check_qbit_connected():
                logger.warning(
                    f">>> qBittorrent is disconnected. Skipping queue cleaning on {self.arr.name}."
                )
                return False
        return True

    def _get_removal_jobs(self):
        """
        Returns a list of enabled removal job instances based on the provided settings.
        Each job is included if the corresponding attribute exists and is truthy in settings.jobs.
        """
        removal_job_classes = {
            "remove_bad_files": RemoveBadFiles,
            "remove_failed_imports": RemoveFailedImports,
            "remove_failed_downloads": RemoveFailedDownloads,
            "remove_metadata_missing": RemoveMetadataMissing,
            "remove_missing_files": RemoveMissingFiles,
            "remove_orphans": RemoveOrphans,
            "remove_slow": RemoveSlow,
            "remove_stalled": RemoveStalled,
            "remove_unmonitored": RemoveUnmonitored,
        }
        jobs = []
        for removal_job_name, removal_job_class in removal_job_classes.items():
            if getattr(self.settings.jobs, removal_job_name, False):
                jobs.append(removal_job_class(self.arr, self.settings, removal_job_name))
        return jobs
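The registry-plus-`getattr` selection in `_get_removal_jobs` can be demonstrated standalone. The class names and config attributes below are invented stand-ins for the real jobs and settings object:

```python
from types import SimpleNamespace

def select_enabled(jobs_config, registry):
    """Return the job classes whose matching attribute on the config is truthy."""
    return [cls for name, cls in registry.items() if getattr(jobs_config, name, False)]

class RemoveSlow: ...
class RemoveStalled: ...

registry = {"remove_slow": RemoveSlow, "remove_stalled": RemoveStalled}
jobs_config = SimpleNamespace(remove_slow=True, remove_stalled=False)
enabled = select_enabled(jobs_config, registry)
```

Keeping the mapping in one dict means adding a new removal job is a one-line registry change rather than another if-branch.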


@@ -0,0 +1,68 @@
from src.utils.log_setup import logger
class RemovalHandler:
    def __init__(self, arr, settings, job_name):
        self.arr = arr
        self.settings = settings
        self.job_name = job_name

    async def remove_downloads(self, affected_downloads, blocklist):
        for download_id in list(affected_downloads.keys()):
            logger.debug(
                "remove_download/deleted_downloads.dict IN: %s",
                str(self.arr.tracker.deleted),
            )
            queue_item = affected_downloads[download_id][0]
            handling_method = await self._get_handling_method(download_id, queue_item)
            if download_id in self.arr.tracker.deleted or handling_method == "skip":
                del affected_downloads[download_id]
                continue
            if handling_method == "remove":
                await self._remove_download(queue_item, blocklist)
            elif handling_method == "tag_as_obsolete":
                await self._tag_as_obsolete(queue_item, download_id)
            # Print out detailed removal messages (if any)
            for msg in queue_item.get("removal_messages", []):
                logger.info(msg)
            self.arr.tracker.deleted.append(download_id)
            logger.debug(
                "remove_download/arr_instance.tracker.deleted OUT: %s",
                str(self.arr.tracker.deleted),
            )

    async def _remove_download(self, queue_item, blocklist):
        queue_id = queue_item["id"]
        logger.info(f">>> Job '{self.job_name}' triggered removal: {queue_item['title']}")
        if not self.settings.general.test_run:
            await self.arr.remove_queue_item(queue_id=queue_id, blocklist=blocklist)

    async def _tag_as_obsolete(self, queue_item, download_id):
        logger.info(f">>> Job '{self.job_name}' triggered obsolete-tagging: {queue_item['title']}")
        if not self.settings.general.test_run:
            for qbit in self.settings.download_clients.qbittorrent:
                await qbit.set_tag(tags=[self.settings.general.obsolete_tag], hashes=[download_id])

    async def _get_handling_method(self, download_id, queue_item):
        if queue_item['protocol'] != 'torrent':
            return "remove"  # handling is only implemented for torrents
        client_implementation = await self.arr.get_download_client_implementation(queue_item['downloadClient'])
        if client_implementation != "QBittorrent":
            return "remove"  # handling is only implemented for qBittorrent
        if len(self.settings.download_clients.qbittorrent) == 0:
            return "remove"  # qBittorrent not configured, thus can't tag
        if download_id in self.arr.tracker.private:
            return self.settings.general.private_tracker_handling
        return self.settings.general.public_tracker_handling
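The decision ladder in `_get_handling_method` (fall back to plain removal unless a qBittorrent torrent is involved, then pick the private- or public-tracker handling) can be sketched as a pure function. The arguments and sample values below are illustrative, not the real settings object:

```python
def handling_method(queue_item, client_type, qbit_clients, private_ids,
                    download_id, private_handling, public_handling):
    """Decide whether to remove a download outright or apply tracker-specific handling."""
    if queue_item["protocol"] != "torrent":
        return "remove"          # only implemented for torrents
    if client_type != "QBittorrent":
        return "remove"          # only implemented for qBittorrent
    if not qbit_clients:
        return "remove"          # no qBittorrent configured, can't tag
    return private_handling if download_id in private_ids else public_handling

item = {"protocol": "torrent"}
method = handling_method(item, "QBittorrent", ["qbit-1"], {"abc"}, "abc",
                         "tag_as_obsolete", "remove")
```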

src/jobs/removal_job.py Normal file

@@ -0,0 +1,82 @@
from abc import ABC, abstractmethod
from src.utils.queue_manager import QueueManager
from src.utils.log_setup import logger
from src.jobs.strikes_handler import StrikesHandler
from src.jobs.removal_handler import RemovalHandler
class RemovalJob(ABC):
    # Default class attributes (can be overridden in subclasses)
    job_name = None
    blocklist = True
    queue_scope = None
    affected_items = None
    affected_downloads = None
    job = None
    max_strikes = None

    def __init__(self, arr, settings, job_name):
        self.arr = arr
        self.settings = settings
        self.job_name = job_name
        self.job = getattr(self.settings.jobs, self.job_name)
        self.queue_manager = QueueManager(self.arr, self.settings)

    async def run(self):
        if not self.job.enabled:
            return 0
        if await self.is_queue_empty(self.job_name, self.queue_scope):
            return 0
        self.affected_items = await self._find_affected_items()
        self.affected_downloads = self.queue_manager.group_by_download_id(self.affected_items)
        # -- Checks --
        self._ignore_protected()
        self.max_strikes = getattr(self.job, "max_strikes", None)
        if self.max_strikes:
            self.affected_downloads = StrikesHandler(
                job_name=self.job_name,
                arr=self.arr,
                max_strikes=self.max_strikes,
            ).check_permitted_strikes(self.affected_downloads)
        # -- Removal --
        await RemovalHandler(
            arr=self.arr,
            settings=self.settings,
            job_name=self.job_name,
        ).remove_downloads(self.affected_downloads, self.blocklist)
        return len(self.affected_downloads)

    async def is_queue_empty(self, job_name, queue_scope="normal"):
        # Check if the queue is empty
        queue_items = await self.queue_manager.get_queue_items(queue_scope)
        logger.debug(
            f"{job_name}/queue IN: %s",
            self.queue_manager.format_queue(queue_items),
        )
        # Early exit if there is no queue
        if not queue_items:
            return True
        return False

    def _ignore_protected(self):
        """
        Filters out downloads that are in the protected tracker.
        Directly updates self.affected_downloads.
        """
        self.affected_downloads = {
            download_id: queue_items
            for download_id, queue_items in self.affected_downloads.items()
            if download_id not in self.arr.tracker.protected
        }

    @abstractmethod  # Implemented at the level of each removal job
    async def _find_affected_items(self):
        pass
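The protected-download filter in `_ignore_protected` is a dict comprehension over download IDs; a standalone sketch (with invented sample data) looks like this:

```python
def drop_protected(affected: dict, protected_ids: set) -> dict:
    """Remove downloads whose ID carries the protection tag."""
    return {
        download_id: queue_items
        for download_id, queue_items in affected.items()
        if download_id not in protected_ids
    }

affected = {"A1": [{"title": "x"}], "B2": [{"title": "y"}]}
kept = drop_protected(affected, protected_ids={"B2"})
```

Building a fresh dict (rather than deleting keys while iterating) avoids mutating the mapping mid-iteration.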


@@ -0,0 +1,195 @@
import os
from src.jobs.removal_job import RemovalJob
from src.utils.log_setup import logger
class RemoveBadFiles(RemovalJob):
    queue_scope = "normal"
    blocklist = True
    # fmt: off
    good_extensions = [
        # Movies, TV Shows (Radarr, Sonarr, Whisparr)
        ".webm", ".m4v", ".3gp", ".nsv", ".ty", ".strm", ".rm", ".rmvb", ".m3u", ".ifo", ".mov", ".qt", ".divx", ".xvid", ".bivx", ".nrg", ".pva", ".wmv", ".asf", ".asx", ".ogm", ".ogv", ".m2v", ".avi", ".bin", ".dat", ".dvr-ms", ".mpg", ".mpeg", ".mp4", ".avc", ".vp3", ".svq3", ".nuv", ".viv", ".dv", ".fli", ".flv", ".wpl", ".img", ".iso", ".vob", ".mkv", ".mk3d", ".ts", ".wtv", ".m2ts",
        # Subs (Radarr, Sonarr, Whisparr)
        ".sub", ".srt", ".idx",
        # Audio (Lidarr, Readarr)
        ".aac", ".aif", ".aiff", ".aifc", ".ape", ".flac", ".mp2", ".mp3", ".m4a", ".m4b", ".m4p", ".mp4a", ".oga", ".ogg", ".opus", ".vorbis", ".wma", ".wav", ".wv", "wavepack",
        # Text (Readarr)
        ".epub", ".kepub", ".mobi", ".azw3", ".pdf",
    ]
    bad_keywords = ["Sample", "Trailer"]
    bad_keyword_limit = 500  # Megabyte; do not remove items larger than that
    # fmt: on

    async def _find_affected_items(self):
        queue = await self.queue_manager.get_queue_items(queue_scope="normal")
        # Get in-scope download IDs
        result = self._group_download_ids_by_client(queue)
        affected_items = []
        for download_client, info in result.items():
            download_client_type = info["download_client_type"]
            download_ids = info["download_ids"]
            if download_client_type == "qbittorrent":
                client_items = await self._handle_qbit(download_client, download_ids, queue)
                affected_items.extend(client_items)
        return affected_items
    def _group_download_ids_by_client(self, queue):
        """Group all relevant download IDs by download client.
        Limited to qBittorrent currently, as no other download clients are implemented."""
        result = {}
        for item in queue:
            download_client_name = item.get("downloadClient")
            if not download_client_name:
                continue
            download_client, download_client_type = self.settings.download_clients.get_download_client_by_name(download_client_name)
            if not download_client or not download_client_type:
                continue
            # Skip non-qBittorrent clients for now
            if download_client_type != "qbittorrent":
                continue
            result.setdefault(download_client, {
                "download_client_type": download_client_type,
                "download_ids": set()
            })["download_ids"].add(item["downloadId"])
        return result
    async def _handle_qbit(self, qbit_client, hashes, queue):
        """Handle qBittorrent-specific logic for marking files as 'Do Not Download'."""
        affected_items = []
        qbit_items = await qbit_client.get_qbit_items(hashes=hashes)
        for qbit_item in self._get_items_to_process(qbit_items):
            self.arr.tracker.extension_checked.append(qbit_item["hash"])
            torrent_files = await self._get_active_files(qbit_client, qbit_item["hash"])
            stoppable_files = self._get_stoppable_files(torrent_files)
            if not stoppable_files:
                continue
            await self._mark_files_as_stopped(qbit_client, qbit_item["hash"], stoppable_files)
            self._log_stopped_files(stoppable_files, qbit_item["name"])
            if self._all_files_stopped(torrent_files, stoppable_files):
                logger.verbose(">>> All files in this torrent have been marked as 'Do not Download'. Removing torrent.")
                affected_items.extend(self._match_queue_items(queue, qbit_item["hash"]))
        return affected_items
    # -- Helper functions for qbit handling --
    def _get_items_to_process(self, qbit_items):
        """Return only downloads that have metadata and are supposedly downloading.
        Additionally, each download should be checked at least once (for bad extensions),
        and thereafter only if availability drops below 100%."""
        return [
            item for item in qbit_items
            if (
                item.get("has_metadata")
                and item["state"] in {"downloading", "forcedDL", "stalledDL"}
                and (
                    item["hash"] not in self.arr.tracker.extension_checked
                    or item["availability"] < 1
                )
            )
        ]
    async def _get_active_files(self, qbit_client, torrent_hash):
        """Return only files from the torrent that are still set to download, with file extension and name."""
        files = await qbit_client.get_torrent_files(torrent_hash)  # Await the async method
        return [
            {
                **f,  # Include all original file properties
                "file_name": os.path.basename(f["name"]),  # Add proper filename (without folder)
                "file_extension": os.path.splitext(f["name"])[1],  # Add file extension (e.g., .mp3)
            }
            for f in files if f["priority"] > 0
        ]
    def _log_stopped_files(self, stopped_files, torrent_name):
        logger.verbose(
            f">>> Stopped downloading {len(stopped_files)} file{'s' if len(stopped_files) != 1 else ''} in: {torrent_name}"
        )
        for file, reasons in stopped_files:
            logger.verbose(f">>> - {file['file_name']} ({' & '.join(reasons)})")
    def _get_stoppable_files(self, torrent_files):
        """Return files that can be marked as 'Do not Download' based on specific conditions."""
        stoppable_files = []
        for file in torrent_files:
            # If the file is still set to download (priority > 0), check it
            if file["priority"] > 0:
                reasons = []
                # Check for a bad extension
                if self._is_bad_extension(file):
                    reasons.append(f"Bad extension: {file['file_extension']}")
                # Check if the file has low availability
                if self._is_complete_partial(file):
                    reasons.append(f"Low availability: {file['availability'] * 100:.1f}%")
                # Only add to stoppable_files if there are reasons to stop the file
                if reasons:
                    stoppable_files.append((file, reasons))
        return stoppable_files

    def _is_bad_extension(self, file):
        """Check if the file has a bad extension."""
        return file['file_extension'].lower() not in self.good_extensions

    def _is_complete_partial(self, file):
        """Check if availability is below 100% and the file is not fully downloaded."""
        return file["availability"] < 1 and file["progress"] != 1
    async def _mark_files_as_stopped(self, qbit_client, torrent_hash, stoppable_files):
        """Mark specific files as 'Do Not Download' in qBittorrent."""
        for file, _ in stoppable_files:
            if not self.settings.general.test_run:
                await qbit_client.set_torrent_file_priority(torrent_hash, file['index'], 0)

    def _all_files_stopped(self, torrent_files, stoppable_files):
        """Check if all files are either stopped (priority 0) or in the stoppable files list."""
        stoppable_file_indexes = {file[0]["index"] for file in stoppable_files}
        return all(f["priority"] == 0 or f["index"] in stoppable_file_indexes for f in torrent_files)

    def _match_queue_items(self, queue, download_hash):
        """Find matching queue item(s) by downloadId (uppercase)."""
        return [
            item for item in queue
            if item["downloadId"].upper() == download_hash.upper()
        ]
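The "is everything stopped or about to be stopped" check above reduces to a set-membership test over file indexes; a standalone sketch with invented sample files:

```python
def all_files_stopped(torrent_files, stoppable_files):
    """True when every file is already stopped (priority 0) or queued to be stopped."""
    stoppable = {f["index"] for f, _reasons in stoppable_files}
    return all(f["priority"] == 0 or f["index"] in stoppable for f in torrent_files)

files = [
    {"index": 0, "priority": 0},  # already stopped
    {"index": 1, "priority": 1},  # still downloading, but flagged below
]
result = all_files_stopped(files, stoppable_files=[({"index": 1}, ["Bad extension"])])
```

Precomputing the set keeps the check O(n) instead of scanning the stoppable list once per file.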


@@ -1,64 +0,0 @@
from src.utils.shared import (
    errorDetails,
    formattedQueueInfo,
    get_queue,
    privateTrackerCheck,
    protectedDownloadCheck,
    execute_checks,
    permittedAttemptsCheck,
    remove_download,
    qBitOffline,
)
import sys, os, traceback
import logging, verboselogs
logger = verboselogs.VerboseLogger(__name__)
async def remove_failed(
    settingsDict,
    BASE_URL,
    API_KEY,
    NAME,
    deleted_downloads,
    defective_tracker,
    protectedDownloadIDs,
    privateDowloadIDs,
):
    # Detects failed and triggers delete. Does not add to blocklist
    try:
        failType = "failed"
        queue = await get_queue(BASE_URL, API_KEY, settingsDict)
        logger.debug("remove_failed/queue IN: %s", formattedQueueInfo(queue))
        if not queue:
            return 0
        if await qBitOffline(settingsDict, failType, NAME):
            return 0
        # Find items affected
        affectedItems = []
        for queueItem in queue:
            if "errorMessage" in queueItem and "status" in queueItem:
                if queueItem["status"] == "failed":
                    affectedItems.append(queueItem)
        affectedItems = await execute_checks(
            settingsDict, affectedItems, failType, BASE_URL, API_KEY, NAME,
            deleted_downloads, defective_tracker, privateDowloadIDs, protectedDownloadIDs,
            addToBlocklist=False,
            doPrivateTrackerCheck=True,
            doProtectedDownloadCheck=True,
            doPermittedAttemptsCheck=False,
        )
        return len(affectedItems)
    except Exception as error:
        errorDetails(NAME, error)
        return 0


@@ -0,0 +1,17 @@
from src.jobs.removal_job import RemovalJob
class RemoveFailedDownloads(RemovalJob):
    queue_scope = "normal"
    blocklist = False

    async def _find_affected_items(self):
        queue = await self.queue_manager.get_queue_items(queue_scope="normal")
        affected_items = []
        for item in queue:
            if item.get("status") == "failed":
                affected_items.append(item)
        return affected_items

View File

@@ -1,105 +1,69 @@
from src.utils.shared import errorDetails, formattedQueueInfo, get_queue, execute_checks
import sys, os, traceback
import logging, verboselogs
import fnmatch
from src.jobs.removal_job import RemovalJob
logger = verboselogs.VerboseLogger(__name__)
class RemoveFailedImports(RemovalJob):
queue_scope = "normal"
blocklist = True
async def _find_affected_items(self):
queue = await self.queue_manager.get_queue_items(queue_scope="normal")
affected_items = []
patterns = self.job.message_patterns
for item in queue:
if not self._is_valid_item(item):
continue
removal_messages = self._prepare_removal_messages(item, patterns)
if removal_messages:
item["removal_messages"] = removal_messages
affected_items.append(item)
return affected_items
def _is_valid_item(self, item):
"""Check if item has the necessary fields and is in a valid state."""
# Required fields that must be present in the item
required_fields = {"status", "trackedDownloadStatus", "trackedDownloadState", "statusMessages"}
# Check if all required fields are present
if not all(field in item for field in required_fields):
return False
# Check if the item's status is completed and the tracked status is warning
if item["status"] != "completed" or item["trackedDownloadStatus"] != "warning":
return False
# Check if the tracked download state is one of the allowed states
if item["trackedDownloadState"] not in {"importPending", "importFailed", "importBlocked"}:
return False
# If all checks pass, the item is valid
return True
async def remove_failed_imports(
settingsDict,
BASE_URL,
API_KEY,
NAME,
deleted_downloads,
defective_tracker,
protectedDownloadIDs,
privateDowloadIDs,
):
# Detects downloads stuck downloading meta data and triggers repeat check and subsequent delete. Adds to blocklist
try:
failType = "failed import"
queue = await get_queue(BASE_URL, API_KEY, settingsDict)
logger.debug("remove_failed_imports/queue IN: %s", formattedQueueInfo(queue))
if not queue:
return 0
def _prepare_removal_messages(self, item, patterns):
"""Prepare removal messages, adding the tracked download state and matching messages."""
messages = self._get_matching_messages(item["statusMessages"], patterns)
if not messages:
return []
# Find items affected
affectedItems = []
removal_messages = [f">>>>> Tracked Download State: {item['trackedDownloadState']}"] + messages
return removal_messages
# Check if any patterns have been specified
patterns = settingsDict.get("FAILED_IMPORT_MESSAGE_PATTERNS", [])
if not patterns: # If patterns is empty or not present
patterns = None
for queueItem in queue:
if (
"status" in queueItem
and "trackedDownloadStatus" in queueItem
and "trackedDownloadState" in queueItem
and "statusMessages" in queueItem
):
removal_messages = []
if (
queueItem["status"] == "completed"
and queueItem["trackedDownloadStatus"] == "warning"
and queueItem["trackedDownloadState"]
in {"importPending", "importFailed", "importBlocked"}
):
# Find messages that find specified pattern and put them into a "removal_message" that will be displayed in the logger when removing the affected item
if not patterns:
# No patterns defined - including all status messages in the removal_messages
removal_messages.append(">>>>> Status Messages (All):")
for statusMessage in queueItem["statusMessages"]:
removal_messages.extend(
f">>>>> - {message}"
for message in statusMessage.get("messages", [])
)
else:
# Specific patterns defined - only removing if any of these are matched
for statusMessage in queueItem["statusMessages"]:
messages = statusMessage.get("messages", [])
for message in messages:
if any(pattern in message for pattern in patterns):
removal_messages.append(f">>>>> - {message}")
if removal_messages:
removal_messages.insert(
0,
">>>>> Status Messages (matching specified patterns):",
)
if removal_messages:
removal_messages = list(
dict.fromkeys(removal_messages)
) # deduplication
removal_messages.insert(
0,
">>>>> Tracked Download State: "
+ queueItem["trackedDownloadState"],
)
queueItem["removal_messages"] = removal_messages
affectedItems.append(queueItem)
check_kwargs = {
"settingsDict": settingsDict,
"affectedItems": affectedItems,
"failType": failType,
"BASE_URL": BASE_URL,
"API_KEY": API_KEY,
"NAME": NAME,
"deleted_downloads": deleted_downloads,
"defective_tracker": defective_tracker,
"privateDowloadIDs": privateDowloadIDs,
"protectedDownloadIDs": protectedDownloadIDs,
"addToBlocklist": True,
"doPrivateTrackerCheck": False,
"doProtectedDownloadCheck": True,
"doPermittedAttemptsCheck": False,
"extraParameters": {"keepTorrentForPrivateTrackers": True},
}
affectedItems = await execute_checks(**check_kwargs)
return len(affectedItems)
except Exception as error:
errorDetails(NAME, error)
return 0
def _get_matching_messages(self, status_messages, patterns):
"""Extract messages matching the provided patterns (or all messages if no pattern)."""
matched_messages = []
if not patterns:
# No patterns provided, include all messages
for status_message in status_messages:
matched_messages.extend(f">>>>> - {msg}" for msg in status_message.get("messages", []))
else:
# Patterns provided, match only those messages that fit the patterns
for status_message in status_messages:
for msg in status_message.get("messages", []):
if any(fnmatch.fnmatch(msg, pattern) for pattern in patterns):
matched_messages.append(f">>>>> - {msg}")
return matched_messages
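The pattern matching now goes through `fnmatch`, i.e. shell-style wildcards rather than the old plain-substring check, so a configured pattern like `*Sample*` is needed to match anywhere inside a message. A minimal sketch of that behavior (the status messages are made up for illustration):

```python
import fnmatch

def matching_messages(status_messages, patterns):
    # No patterns configured: every message qualifies. Otherwise a message
    # must match at least one shell-style wildcard pattern (fnmatch).
    matched = []
    for status_message in status_messages:
        for msg in status_message.get("messages", []):
            if not patterns or any(fnmatch.fnmatch(msg, p) for p in patterns):
                matched.append(msg)
    return matched

status_messages = [{"messages": ["Not a Custom Format upgrade", "Sample file detected"]}]
```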

View File

@@ -1,66 +1,19 @@
from src.utils.shared import (
errorDetails,
formattedQueueInfo,
get_queue,
privateTrackerCheck,
protectedDownloadCheck,
execute_checks,
permittedAttemptsCheck,
remove_download,
qBitOffline,
)
import sys, os, traceback
import logging, verboselogs
logger = verboselogs.VerboseLogger(__name__)
from src.jobs.removal_job import RemovalJob
async def remove_metadata_missing(
settingsDict,
BASE_URL,
API_KEY,
NAME,
deleted_downloads,
defective_tracker,
protectedDownloadIDs,
privateDowloadIDs,
):
# Detects downloads stuck downloading meta data and triggers repeat check and subsequent delete. Adds to blocklist
try:
failType = "missing metadata"
queue = await get_queue(BASE_URL, API_KEY, settingsDict)
logger.debug("remove_metadata_missing/queue IN: %s", formattedQueueInfo(queue))
if not queue:
return 0
if await qBitOffline(settingsDict, failType, NAME):
return 0
# Find items affected
affectedItems = []
for queueItem in queue:
if "errorMessage" in queueItem and "status" in queueItem:
class RemoveMetadataMissing(RemovalJob):
queue_scope = "normal"
blocklist = True
async def _find_affected_items(self):
queue = await self.queue_manager.get_queue_items(queue_scope="normal")
affected_items = []
for item in queue:
if "errorMessage" in item and "status" in item:
if (
queueItem["status"] == "queued"
and queueItem["errorMessage"]
== "qBittorrent is downloading metadata"
item["status"] == "queued"
and item["errorMessage"] == "qBittorrent is downloading metadata"
):
affectedItems.append(queueItem)
affectedItems = await execute_checks(
settingsDict,
affectedItems,
failType,
BASE_URL,
API_KEY,
NAME,
deleted_downloads,
defective_tracker,
privateDowloadIDs,
protectedDownloadIDs,
addToBlocklist=True,
doPrivateTrackerCheck=True,
doProtectedDownloadCheck=True,
doPermittedAttemptsCheck=True,
)
return len(affectedItems)
except Exception as error:
errorDetails(NAME, error)
return 0
affected_items.append(item)
return affected_items

View File

@@ -1,81 +1,36 @@
from src.utils.shared import (
errorDetails,
formattedQueueInfo,
get_queue,
privateTrackerCheck,
protectedDownloadCheck,
execute_checks,
permittedAttemptsCheck,
remove_download,
qBitOffline,
)
import sys, os, traceback
import logging, verboselogs
from src.jobs.removal_job import RemovalJob
logger = verboselogs.VerboseLogger(__name__)
class RemoveMissingFiles(RemovalJob):
queue_scope = "normal"
blocklist = False
async def _find_affected_items(self):
queue = await self.queue_manager.get_queue_items(queue_scope="normal")
affected_items = []
async def remove_missing_files(
settingsDict,
BASE_URL,
API_KEY,
NAME,
deleted_downloads,
defective_tracker,
protectedDownloadIDs,
privateDowloadIDs,
):
# Detects downloads broken because of missing files. Does not add to blocklist
try:
failType = "missing files"
queue = await get_queue(BASE_URL, API_KEY, settingsDict)
logger.debug("remove_missing_files/queue IN: %s", formattedQueueInfo(queue))
if not queue:
return 0
if await qBitOffline(settingsDict, failType, NAME):
return 0
# Find items affected
affectedItems = []
for queueItem in queue:
if "status" in queueItem:
# case to check for failed torrents
if (
queueItem["status"] == "warning"
and "errorMessage" in queueItem
and (
queueItem["errorMessage"]
== "DownloadClientQbittorrentTorrentStateMissingFiles"
or queueItem["errorMessage"] == "The download is missing files"
or queueItem["errorMessage"] == "qBittorrent is reporting missing files"
)
):
affectedItems.append(queueItem)
# case to check for failed nzb's/bad files/empty directory
if queueItem["status"] == "completed" and "statusMessages" in queueItem:
for statusMessage in queueItem["statusMessages"]:
if "messages" in statusMessage:
for message in statusMessage["messages"]:
if message.startswith(
"No files found are eligible for import in"
):
affectedItems.append(queueItem)
affectedItems = await execute_checks(
settingsDict,
affectedItems,
failType,
BASE_URL,
API_KEY,
NAME,
deleted_downloads,
defective_tracker,
privateDowloadIDs,
protectedDownloadIDs,
addToBlocklist=False,
doPrivateTrackerCheck=True,
doProtectedDownloadCheck=True,
doPermittedAttemptsCheck=False,
for item in queue:
if self._is_failed_torrent(item) or self._is_bad_nzb(item):
affected_items.append(item)
return affected_items
def _is_failed_torrent(self, item):
return (
"status" in item
and item["status"] == "warning"
and "errorMessage" in item
and item["errorMessage"] in [
"DownloadClientQbittorrentTorrentStateMissingFiles",
"The download is missing files",
"qBittorrent is reporting missing files",
]
)
return len(affectedItems)
except Exception as error:
errorDetails(NAME, error)
return 0
def _is_bad_nzb(self, item):
if "status" in item and item["status"] == "completed" and "statusMessages" in item:
for status_message in item["statusMessages"]:
if "messages" in status_message:
for message in status_message["messages"]:
if message.startswith("No files found are eligible for import in"):
return True
return False
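The two detection cases can be sketched as pure functions over plain dicts; the error messages are the ones checked in the code above, and the dict shapes assume the *arr queue API:

```python
def is_failed_torrent(item):
    # Torrent case: warning status with one of the known missing-file errors.
    return item.get("status") == "warning" and item.get("errorMessage") in {
        "DownloadClientQbittorrentTorrentStateMissingFiles",
        "The download is missing files",
        "qBittorrent is reporting missing files",
    }

def is_bad_nzb(item):
    # Usenet case: download completed, but no file was eligible for import.
    if item.get("status") != "completed":
        return False
    return any(
        message.startswith("No files found are eligible for import in")
        for status_message in item.get("statusMessages", [])
        for message in status_message.get("messages", [])
    )
```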

View File

@@ -1,76 +1,11 @@
from src.utils.shared import (
errorDetails,
formattedQueueInfo,
get_queue,
privateTrackerCheck,
protectedDownloadCheck,
execute_checks,
permittedAttemptsCheck,
remove_download,
)
import sys, os, traceback
import logging, verboselogs
from src.jobs.removal_job import RemovalJob
logger = verboselogs.VerboseLogger(__name__)
class RemoveOrphans(RemovalJob):
queue_scope = "full"
blocklist = False
async def _find_affected_items(self):
affected_items = await self.queue_manager.get_queue_items(queue_scope="orphans")
return affected_items
async def remove_orphans(
settingsDict,
BASE_URL,
API_KEY,
NAME,
deleted_downloads,
defective_tracker,
protectedDownloadIDs,
privateDowloadIDs,
full_queue_param,
):
# Removes downloads belonging to movies/tv shows that have been deleted in the meantime. Does not add to blocklist
try:
failType = "orphan"
full_queue = await get_queue(
BASE_URL, API_KEY, settingsDict, params={full_queue_param: True}
)
queue = await get_queue(BASE_URL, API_KEY, settingsDict)
logger.debug("remove_orphans/full queue IN: %s", formattedQueueInfo(full_queue))
if not full_queue:
return 0 # By now the queue may be empty
logger.debug("remove_orphans/queue IN: %s", formattedQueueInfo(queue))
# Find items affected
# 1. create a list of the "known" queue items
queueIDs = [queueItem["id"] for queueItem in queue] if queue else []
affectedItems = []
# 2. compare all queue items against the known ones, and those that are not found are the "unknown" or "orphan" ones
for queueItem in full_queue:
if queueItem["id"] not in queueIDs:
affectedItems.append(queueItem)
affectedItems = await execute_checks(
settingsDict,
affectedItems,
failType,
BASE_URL,
API_KEY,
NAME,
deleted_downloads,
defective_tracker,
privateDowloadIDs,
protectedDownloadIDs,
addToBlocklist=False,
doPrivateTrackerCheck=True,
doProtectedDownloadCheck=True,
doPermittedAttemptsCheck=False,
)
logger.debug(
"remove_orphans/full queue OUT: %s",
formattedQueueInfo(
await get_queue(
BASE_URL, API_KEY, settingsDict, params={full_queue_param: True}
)
),
)
return len(affectedItems)
except Exception as error:
errorDetails(NAME, error)
return 0
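The orphan detection boils down to a set difference on queue item ids, which the new `queue_scope="orphans"` call now encapsulates. A minimal sketch of that comparison, assuming items carry an `id` field as in the *arr queue API:

```python
def find_orphans(full_queue, queue):
    # Orphans appear in the full queue but not in the normal queue:
    # their movie/show/album was deleted from the *arr app in the meantime.
    known_ids = {item["id"] for item in queue}
    return [item for item in full_queue if item["id"] not in known_ids]

full_queue = [{"id": 1}, {"id": 2}, {"id": 3}]
queue = [{"id": 1}, {"id": 3}]
orphans = find_orphans(full_queue, queue)
```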

View File

@@ -1,143 +1,106 @@
from src.utils.shared import (
errorDetails,
formattedQueueInfo,
get_queue,
privateTrackerCheck,
protectedDownloadCheck,
execute_checks,
permittedAttemptsCheck,
remove_download,
qBitOffline,
)
import sys, os, traceback
import logging, verboselogs
from src.utils.rest import rest_get
logger = verboselogs.VerboseLogger(__name__)
from src.jobs.removal_job import RemovalJob
from src.utils.log_setup import logger
async def remove_slow(
settingsDict,
BASE_URL,
API_KEY,
NAME,
deleted_downloads,
defective_tracker,
protectedDownloadIDs,
privateDowloadIDs,
download_sizes_tracker,
):
# Detects slow downloads and triggers delete. Adds to blocklist
try:
failType = "slow"
queue = await get_queue(BASE_URL, API_KEY, settingsDict)
logger.debug("remove_slow/queue IN: %s", formattedQueueInfo(queue))
if not queue:
return 0
if await qBitOffline(settingsDict, failType, NAME):
return 0
# Find items affected
affectedItems = []
alreadyCheckedDownloadIDs = []
for queueItem in queue:
if (
"downloadId" in queueItem
and "size" in queueItem
and "sizeleft" in queueItem
and "status" in queueItem
):
if queueItem["downloadId"] not in alreadyCheckedDownloadIDs:
alreadyCheckedDownloadIDs.append(
queueItem["downloadId"]
) # One downloadId may occur in multiple queueItems - only check once for all of them per iteration
if (
queueItem["protocol"] == "usenet"
): # No need to check for speed for usenet, since there users pay for speed
continue
if queueItem["status"] == "downloading":
if (
queueItem["size"] > 0 and queueItem["sizeleft"] == 0
): # Skip items that are finished downloading but are still marked as downloading. May be the case when files are moving
logger.info(
">>> Detected %s download that has completed downloading - skipping check (torrent files likely in process of being moved): %s",
failType,
queueItem["title"],
)
continue
# determine if the downloaded bit on average between this and the last iteration is greater than the min threshold
downloadedSize, previousSize, increment, speed = (
await getDownloadedSize(
settingsDict, queueItem, download_sizes_tracker, NAME
)
)
if (
queueItem["downloadId"] in download_sizes_tracker.dict
and speed is not None
):
if speed < settingsDict["MIN_DOWNLOAD_SPEED"]:
affectedItems.append(queueItem)
logger.debug(
"remove_slow/slow speed detected: %s (Speed: %d KB/s, KB now: %s, KB previous: %s, Diff: %s, In Minutes: %s",
queueItem["title"],
speed,
downloadedSize,
previousSize,
increment,
settingsDict["REMOVE_TIMER"],
)
class RemoveSlow(RemovalJob):
queue_scope = "normal"
blocklist = True
affectedItems = await execute_checks(
settingsDict,
affectedItems,
failType,
BASE_URL,
API_KEY,
NAME,
deleted_downloads,
defective_tracker,
privateDowloadIDs,
protectedDownloadIDs,
addToBlocklist=True,
doPrivateTrackerCheck=True,
doProtectedDownloadCheck=True,
doPermittedAttemptsCheck=True,
async def _find_affected_items(self):
queue = await self.queue_manager.get_queue_items(queue_scope=self.queue_scope)
affected_items = []
checked_ids = set()
for item in queue:
if not self._is_valid_item(item):
continue
download_id = item["downloadId"]
if download_id in checked_ids:
continue # One downloadId may occur in multiple items - only check once for all of them per iteration
checked_ids.add(download_id)
if self._is_usenet(item):
continue # No need to check for speed for usenet, since there users pay for speed
if self._is_completed_but_stuck(item):
logger.info(
f">>> '{self.job_name}' detected download marked as slow as well as completed. Files most likely in process of being moved. Not removing: {item['title']}"
)
continue
downloaded, previous, increment, speed = await self._get_progress_stats(
item
)
if self._is_slow(speed):
affected_items.append(item)
logger.debug(
f'remove_slow/slow speed detected: {item["title"]} '
f"(Speed: {speed} KB/s, KB now: {downloaded}, KB previous: {previous}, "
f"Diff: {increment}, In Minutes: {self.settings.general.timer})"
)
return affected_items
def _is_valid_item(self, item):
required_keys = {"downloadId", "size", "sizeleft", "status", "protocol"}
return required_keys.issubset(item)
def _is_usenet(self, item):
return item.get("protocol") == "usenet"
def _is_completed_but_stuck(self, item):
return (
item["status"] == "downloading"
and item["size"] > 0
and item["sizeleft"] == 0
)
return len(affectedItems)
except Exception as error:
errorDetails(NAME, error)
return 0
def _is_slow(self, speed):
return (
speed is not None
and speed < self.job.min_speed
)
async def getDownloadedSize(settingsDict, queueItem, download_sizes_tracker, NAME):
try:
# Determines the speed of download
# Since Sonarr/Radarr do not update the downlodedSize on realtime, if possible, fetch it directly from qBit
if (
settingsDict["QBITTORRENT_URL"]
and queueItem["downloadClient"] == "qBittorrent"
):
qbitInfo = await rest_get(
settingsDict["QBITTORRENT_URL"] + "/torrents/info",
params={"hashes": queueItem["downloadId"]},
cookies=settingsDict["QBIT_COOKIE"],
)
downloadedSize = qbitInfo[0]["completed"]
async def _get_progress_stats(self, item):
download_id = item["downloadId"]
download_progress = self._get_download_progress(item, download_id)
previous_progress, increment, speed = self._compute_increment_and_speed(
download_id, download_progress
)
self.arr.tracker.download_progress[download_id] = download_progress
return download_progress, previous_progress, increment, speed
def _get_download_progress(self, item, download_id):
download_client_name = item.get("downloadClient")
if download_client_name:
download_client, download_client_type = self.settings.download_clients.get_download_client_by_name(download_client_name)
if download_client_type == "qbittorrent":
progress = self._try_get_qbit_progress(download_client, download_id)
if progress is not None:
return progress
return self._fallback_progress(item)
def _try_get_qbit_progress(self, qbit, download_id):
try:
return qbit.get_download_progress(download_id)
except Exception:
return None
def _fallback_progress(self, item):
logger.debug(
"get_progress_stats: using the imprecise size-based method to determine download increments, because either a download client other than qBittorrent is used, or the download client name in the config does not match the name configured in the *arr app's download client settings"
)
return item["size"] - item["sizeleft"]
def _compute_increment_and_speed(self, download_id, current_progress):
previous_progress = self.arr.tracker.download_progress.get(download_id)
if previous_progress is not None:
increment = current_progress - previous_progress
speed = round(increment / 1000 / (self.settings.general.timer * 60), 1)
else:
logger.debug(
"getDownloadedSize/WARN: Using imprecise method to determine download increments because no direct qBIT query is possible"
)
downloadedSize = queueItem["size"] - queueItem["sizeleft"]
if queueItem["downloadId"] in download_sizes_tracker.dict:
previousSize = download_sizes_tracker.dict.get(queueItem["downloadId"])
increment = downloadedSize - previousSize
speed = round(increment / 1000 / (settingsDict["REMOVE_TIMER"] * 60), 1)
else:
previousSize = None
increment = None
speed = None
download_sizes_tracker.dict[queueItem["downloadId"]] = downloadedSize
return downloadedSize, previousSize, increment, speed
except Exception as error:
errorDetails(NAME, error)
return
increment = speed = None
return previous_progress, increment, speed
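The increment/speed bookkeeping can be sketched with a plain dict standing in for `tracker.download_progress` (the KB/s rounding mirrors the formula above; the 10-minute timer is illustrative):

```python
def compute_increment_and_speed(progress_by_id, download_id, current, timer_minutes):
    # First sighting only records a baseline; from the second iteration on,
    # the per-interval increment yields an average speed in KB/s.
    previous = progress_by_id.get(download_id)
    if previous is not None:
        increment = current - previous
        speed = round(increment / 1000 / (timer_minutes * 60), 1)
    else:
        increment = speed = None
    progress_by_id[download_id] = current
    return previous, increment, speed

tracker = {}
first = compute_increment_and_speed(tracker, "hash1", 1_000_000, 10)
second = compute_increment_and_speed(tracker, "hash1", 4_000_000, 10)
```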

View File

@@ -1,66 +1,21 @@
from src.utils.shared import (
errorDetails,
formattedQueueInfo,
get_queue,
privateTrackerCheck,
protectedDownloadCheck,
execute_checks,
permittedAttemptsCheck,
remove_download,
qBitOffline,
)
import sys, os, traceback
import logging, verboselogs
logger = verboselogs.VerboseLogger(__name__)
from src.jobs.removal_job import RemovalJob
async def remove_stalled(
settingsDict,
BASE_URL,
API_KEY,
NAME,
deleted_downloads,
defective_tracker,
protectedDownloadIDs,
privateDowloadIDs,
):
# Detects stalled and triggers repeat check and subsequent delete. Adds to blocklist
try:
failType = "stalled"
queue = await get_queue(BASE_URL, API_KEY, settingsDict)
logger.debug("remove_stalled/queue IN: %s", formattedQueueInfo(queue))
if not queue:
return 0
if await qBitOffline(settingsDict, failType, NAME):
return 0
# Find items affected
affectedItems = []
for queueItem in queue:
if "errorMessage" in queueItem and "status" in queueItem:
class RemoveStalled(RemovalJob):
queue_scope = "normal"
blocklist = True
async def _find_affected_items(self):
queue = await self.queue_manager.get_queue_items(queue_scope="normal")
affected_items = []
for item in queue:
if "errorMessage" in item and "status" in item:
if (
queueItem["status"] == "warning"
and queueItem["errorMessage"]
item["status"] == "warning"
and item["errorMessage"]
== "The download is stalled with no connections"
):
affectedItems.append(queueItem)
affectedItems = await execute_checks(
settingsDict,
affectedItems,
failType,
BASE_URL,
API_KEY,
NAME,
deleted_downloads,
defective_tracker,
privateDowloadIDs,
protectedDownloadIDs,
addToBlocklist=True,
doPrivateTrackerCheck=True,
doProtectedDownloadCheck=True,
doPermittedAttemptsCheck=True,
)
return len(affectedItems)
except Exception as error:
errorDetails(NAME, error)
return 0
affected_items.append(item)
return affected_items

View File

@@ -1,98 +1,24 @@
from src.utils.shared import (
errorDetails,
formattedQueueInfo,
get_queue,
privateTrackerCheck,
protectedDownloadCheck,
execute_checks,
permittedAttemptsCheck,
remove_download,
)
import sys, os, traceback
import logging, verboselogs
from src.jobs.removal_job import RemovalJob
logger = verboselogs.VerboseLogger(__name__)
from src.utils.rest import rest_get
class RemoveUnmonitored(RemovalJob):
queue_scope = "normal"
blocklist = False
async def _find_affected_items(self):
queue = await self.queue_manager.get_queue_items(queue_scope="normal")
async def remove_unmonitored(
settingsDict,
BASE_URL,
API_KEY,
NAME,
deleted_downloads,
defective_tracker,
protectedDownloadIDs,
privateDowloadIDs,
arr_type,
):
# Removes downloads belonging to movies/tv shows that are not monitored. Does not add to blocklist
try:
failType = "unmonitored"
queue = await get_queue(BASE_URL, API_KEY, settingsDict)
logger.debug("remove_unmonitored/queue IN: %s", formattedQueueInfo(queue))
if not queue:
return 0
# Find items affected
monitoredDownloadIDs = []
for queueItem in queue:
if arr_type == "SONARR":
isMonitored = (
await rest_get(
f'{BASE_URL}/episode/{str(queueItem["episodeId"])}', API_KEY
)
)["monitored"]
elif arr_type == "RADARR":
isMonitored = (
await rest_get(
f'{BASE_URL}/movie/{str(queueItem["movieId"])}', API_KEY
)
)["monitored"]
elif arr_type == "LIDARR":
isMonitored = (
await rest_get(
f'{BASE_URL}/album/{str(queueItem["albumId"])}', API_KEY
)
)["monitored"]
elif arr_type == "READARR":
isMonitored = (
await rest_get(
f'{BASE_URL}/book/{str(queueItem["bookId"])}', API_KEY
)
)["monitored"]
elif arr_type == "WHISPARR":
isMonitored = (
await rest_get(
f'{BASE_URL}/episode/{str(queueItem["episodeId"])}', API_KEY
)
)["monitored"]
if isMonitored:
monitoredDownloadIDs.append(queueItem["downloadId"])
# First pass: Check if items are monitored
monitored_download_ids = []
for item in queue:
detail_item_id = item["detail_item_id"]
if await self.arr.is_monitored(detail_item_id):
monitored_download_ids.append(item["downloadId"])
affectedItems = []
for queueItem in queue:
if queueItem["downloadId"] not in monitoredDownloadIDs:
affectedItems.append(
queueItem
) # One downloadID may be shared by multiple queueItems. Only removes it if ALL queueitems are unmonitored
affectedItems = await execute_checks(
settingsDict,
affectedItems,
failType,
BASE_URL,
API_KEY,
NAME,
deleted_downloads,
defective_tracker,
privateDowloadIDs,
protectedDownloadIDs,
addToBlocklist=False,
doPrivateTrackerCheck=True,
doProtectedDownloadCheck=True,
doPermittedAttemptsCheck=False,
)
return len(affectedItems)
except Exception as error:
errorDetails(NAME, error)
return 0
# Second pass: flag queue items whose downloadId is not monitored by any queue item
affected_items = []
for queue_item in queue:
if queue_item["downloadId"] not in monitored_download_ids:
affected_items.append(
queue_item
) # One downloadID may be shared by multiple queue_items. Only removes it if ALL queueitems are unmonitored
return affected_items
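The two-pass grouping can be sketched standalone; a set of monitored detail ids stands in for the `is_monitored` lookups, and the field names assume the queue item shape used above:

```python
def find_unmonitored_downloads(queue, monitored_detail_ids):
    # A hash is kept as soon as ANY queue item using it is monitored; it is
    # only flagged when every item sharing the downloadId is unmonitored.
    monitored_hashes = {
        item["downloadId"]
        for item in queue
        if item["detail_item_id"] in monitored_detail_ids
    }
    return [item for item in queue if item["downloadId"] not in monitored_hashes]

queue = [
    {"downloadId": "H1", "detail_item_id": 1},
    {"downloadId": "H1", "detail_item_id": 2},
    {"downloadId": "H2", "detail_item_id": 3},
]
affected = find_unmonitored_downloads(queue, monitored_detail_ids={2})
```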

View File

@@ -1,128 +0,0 @@
from src.utils.shared import (
errorDetails,
rest_get,
rest_post,
get_queue,
get_arr_records,
)
import logging, verboselogs
from datetime import datetime, timedelta, timezone
import dateutil.parser
logger = verboselogs.VerboseLogger(__name__)
async def run_periodic_rescans(
settingsDict,
BASE_URL,
API_KEY,
NAME,
arr_type,
):
# Checks the wanted items and runs scans
if not arr_type in settingsDict["RUN_PERIODIC_RESCANS"]:
return
try:
queue = await get_queue(BASE_URL, API_KEY, settingsDict)
check_on_endpoint = []
RESCAN_SETTINGS = settingsDict["RUN_PERIODIC_RESCANS"][arr_type]
if RESCAN_SETTINGS["MISSING"]:
check_on_endpoint.append("missing")
if RESCAN_SETTINGS["CUTOFF_UNMET"]:
check_on_endpoint.append("cutoff")
params = {"sortDirection": "ascending"}
if arr_type == "SONARR":
params["sortKey"] = "episodes.lastSearchTime"
queue_ids = [r["seriesId"] for r in queue if "seriesId" in r]
series = await rest_get(f"{BASE_URL}/series", API_KEY)
series_dict = {s["id"]: s for s in series}
elif arr_type == "RADARR":
params["sortKey"] = "movies.lastSearchTime"
queue_ids = [r["movieId"] for r in queue if "movieId" in r]
for end_point in check_on_endpoint:
records = await get_arr_records(
BASE_URL, API_KEY, params=params, end_point=f"wanted/{end_point}"
)
if records is None:
logger.verbose(
f">>> Rescan: No {end_point} items, thus nothing to rescan."
)
continue
# Filter out items that are already being downloaded (are in queue)
records = [r for r in records if r["id"] not in queue_ids]
if records is None:
logger.verbose(
f">>> Rescan: All {end_point} items are already being downloaded, thus nothing to rescan."
)
continue
# Remove records that have recently been searched already
for record in reversed(records):
if not (
("lastSearchTime" not in record)
or (
(
dateutil.parser.isoparse(record["lastSearchTime"])
+ timedelta(days=RESCAN_SETTINGS["MIN_DAYS_BEFORE_RESCAN"])
)
< datetime.now(timezone.utc)
)
):
records.remove(record)
# Select oldest records
records = records[: RESCAN_SETTINGS["MAX_CONCURRENT_SCANS"]]
if not records:
logger.verbose(
f">>> Rescan: All {end_point} items have recently been scanned for, thus nothing to rescan."
)
continue
if arr_type == "SONARR":
for record in records:
series_id = record.get("seriesId")
if series_id and series_id in series_dict:
record["series"] = series_dict[series_id]
else:
record["series"] = (
None # Or handle missing series info as needed
)
logger.verbose(
f">>> Running a scan for {len(records)} {end_point} items:\n"
+ "\n".join(
[
f"{episode['series']['title']} (Season {episode['seasonNumber']} / Episode {episode['episodeNumber']} / Aired: {episode.get('airDate', 'Unknown')}): {episode['title']}"
for episode in records
]
)
)
json = {
"name": "EpisodeSearch",
"episodeIds": [r["id"] for r in records],
}
elif arr_type == "RADARR":
logger.verbose(
f">>> Running a scan for {len(records)} {end_point} items:\n"
+ "\n".join(
[f"{movie['title']} ({movie['year']})" for movie in records]
)
)
json = {"name": "MoviesSearch", "movieIds": [r["id"] for r in records]}
if not settingsDict["TEST_RUN"]:
await rest_post(
url=BASE_URL + "/command",
json=json,
headers={"X-Api-Key": API_KEY},
)
except Exception as error:
errorDetails(NAME, error)
return 0

116
src/jobs/search_handler.py Normal file
View File

@@ -0,0 +1,116 @@
from datetime import datetime, timedelta, timezone
import dateutil.parser
from src.utils.log_setup import logger
from src.utils.wanted_manager import WantedManager
from src.utils.queue_manager import QueueManager
class SearchHandler:
def __init__(self, arr, settings):
self.arr = arr
self.settings = settings
self.job = None
self.wanted_manager = WantedManager(self.arr, self.settings)
async def handle_search(self, search_type):
self._initialize_job(search_type)
wanted_items = await self._get_initial_wanted_items(search_type)
if not wanted_items:
return
queue = await QueueManager(self.arr, self.settings).get_queue_items(
queue_scope="normal"
)
wanted_items = self._filter_wanted_items(wanted_items, queue)
if not wanted_items:
return
await self._log_items(wanted_items, search_type)
await self._trigger_search(wanted_items)
def _initialize_job(self, search_type):
logger.verbose("")
if search_type == "missing":
logger.verbose(f"Searching for missing content on {self.arr.name}:")
self.job = self.settings.jobs.search_missing_content
elif search_type == "cutoff":
logger.verbose(f"Searching for unmet cutoff content on {self.arr.name}:")
self.job = self.settings.jobs.search_unmet_cutoff_content
else:
raise ValueError(f"Unknown search type: {search_type}")
async def _get_initial_wanted_items(self, search_type):
wanted = await self.wanted_manager.get_wanted_items(search_type)
if not wanted:
logger.verbose(f">>> No {search_type} items, thus not triggering a search.")
return wanted
def _filter_wanted_items(self, items, queue):
items = self._filter_already_downloading(items, queue)
if not items:
logger.verbose(f">>> All items already downloading, nothing to search for.")
return []
items = self._filter_recent_searches(items)
if not items:
logger.verbose(
f">>> All items recently searched for, thus not triggering another search."
)
return []
return items[: self.job.max_concurrent_searches]
def _filter_already_downloading(self, wanted_items, queue):
queue_ids = {q[self.arr.detail_item_id_key] for q in queue}
return [item for item in wanted_items if item["id"] not in queue_ids]
async def _trigger_search(self, items):
if not self.settings.general.test_run:
ids = [item["id"] for item in items]
await self.wanted_manager.search_items(ids)
def _filter_recent_searches(self, items):
now = datetime.now(timezone.utc)
result = []
for item in items:
last = item.get("lastSearchTime")
if not last:
item["lastSearchDateFormatted"] = "Never"
item["daysSinceLastSearch"] = None
result.append(item)
continue
last_time = dateutil.parser.isoparse(last)
days_ago = (now - last_time).days
if last_time + timedelta(days=self.job.min_days_between_searches) < now:
item["lastSearchDateFormatted"] = last_time.strftime("%Y-%m-%d")
item["daysSinceLastSearch"] = days_ago
result.append(item)
return result
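The minimum-days filter reduces to a single timestamp comparison. A sketch using the stdlib `datetime.fromisoformat` instead of `dateutil.parser.isoparse` (the dates and the 7-day minimum are illustrative):

```python
from datetime import datetime, timedelta, timezone

def due_for_search(last_search_iso, min_days, now):
    # Never-searched items are always due; otherwise the last search must
    # be older than the configured minimum number of days.
    if not last_search_iso:
        return True
    last = datetime.fromisoformat(last_search_iso)
    return last + timedelta(days=min_days) < now

now = datetime(2024, 8, 20, tzinfo=timezone.utc)
```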
async def _log_items(self, items, search_type):
logger.verbose(f">>> Running a scan for {len(items)} {search_type} items:")
for item in items:
if self.arr.arr_type in ["radarr", "readarr", "lidarr"]:
title = item.get("title", "Unknown")
logger.verbose(f">>> - {title}")
elif self.arr.arr_type == "sonarr":
series = await self.arr.get_series()
series_title = next(
(s["title"] for s in series if s["id"] == item.get("seriesId")),
"Unknown",
)
episode = item.get("episodeNumber", "00")
season = item.get("seasonNumber", "00")
season_numbering = f"S{int(season):02}/E{int(episode):02}"
logger.verbose(f">>> - {series_title} ({season_numbering})")
async def _get_series_dict(self):
series = await self.arr.rest_get("series")
return {s["id"]: s for s in series}
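The Sonarr season/episode label built in `_log_items` can be isolated as a small helper; this is a sketch of the same zero-padded formatting:

```python
def format_episode_numbering(season, episode):
    # Zero-padded "Sxx/Exx" label as used in the verbose search log.
    return f"S{int(season):02}/E{int(episode):02}"
```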

View File

@@ -0,0 +1,69 @@
from src.utils.log_setup import logger
class StrikesHandler:
def __init__(self, job_name, arr, max_strikes):
self.job_name = job_name
self.tracker = arr.tracker
self.max_strikes = max_strikes
self.tracker.defective.setdefault(job_name, {})
def check_permitted_strikes(self, affected_downloads):
self._recover_downloads(affected_downloads)
return self._apply_strikes_and_filter(affected_downloads)
def _recover_downloads(self, affected_downloads):
recovered = [
d_id for d_id in self.tracker.defective[self.job_name]
if d_id not in affected_downloads
]
for d_id in recovered:
logger.info(
">>> Download no longer marked as %s: %s",
self.job_name,
self.tracker.defective[self.job_name][d_id]["title"],
)
del self.tracker.defective[self.job_name][d_id]
def _apply_strikes_and_filter(self, affected_downloads):
for d_id, queue_items in list(affected_downloads.items()):
title = queue_items[0]["title"]
strikes = self._increment_strike(d_id, title)
strikes_left = self.max_strikes - strikes
self._log_strike_status(title, strikes, strikes_left)
if strikes_left >= 0:
del affected_downloads[d_id]
return affected_downloads
def _increment_strike(self, d_id, title):
entry = self.tracker.defective[self.job_name].setdefault(
d_id, {"title": title, "strikes": 0}
)
entry["strikes"] += 1
return entry["strikes"]
def _log_strike_status(self, title, strikes, strikes_left):
if strikes_left >= 0:
logger.info(
">>> Job '%s' detected download (%s/%s strikes): %s",
self.job_name, strikes, self.max_strikes, title,
)
elif strikes_left == -1:
logger.verbose(
">>> Job '%s' detected download (%s/%s strikes): %s",
self.job_name, strikes, self.max_strikes, title,
)
elif strikes_left <= -2:
logger.info(
">>> Job '%s' detected download (%s/%s strikes): %s",
self.job_name, strikes, self.max_strikes, title,
)
logger.info(
'>>> [Tip!] Since this download should already have been removed in a previous iteration but keeps coming back, this indicates the blocking of the torrent does not work correctly. Consider turning on the option "Reject Blocklisted Torrent Hashes While Grabbing" on the indexer in the *arr app: %s',
title,
)
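The strike bookkeeping above can be sketched standalone; here `tracker` is a plain dict standing in for `arr.tracker.defective[job_name]`, and `apply_strike` is an illustrative helper, not a real method:

```python
def apply_strike(tracker, d_id, title, max_strikes):
    # Increment the strike count and report whether the download
    # has now exceeded max_strikes (i.e. should be removed)
    entry = tracker.setdefault(d_id, {"title": title, "strikes": 0})
    entry["strikes"] += 1
    return entry["strikes"] > max_strikes

tracker = {}
verdicts = [apply_strike(tracker, "hash1", "Some Movie", 3) for _ in range(4)]
# The first three strikes are tolerated; the fourth exceeds max_strikes
```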


@@ -0,0 +1,83 @@
import yaml
def mask_sensitive_value(value, key, sensitive_attributes):
"""Mask the value if it's in the sensitive attributes."""
return "*****" if key in sensitive_attributes else value
def filter_internal_attributes(data, internal_attributes, hide_internal_attr):
"""Filter out internal attributes based on the hide_internal_attr flag."""
return {
k: v
for k, v in data.items()
if not (hide_internal_attr and k in internal_attributes)
}
def clean_dict(data, sensitive_attributes, internal_attributes, hide_internal_attr):
"""Clean a dictionary by masking sensitive attributes and filtering internal ones."""
cleaned = {
k: mask_sensitive_value(v, k, sensitive_attributes)
for k, v in data.items()
}
return filter_internal_attributes(cleaned, internal_attributes, hide_internal_attr)
def clean_list(obj, sensitive_attributes, internal_attributes, hide_internal_attr):
"""Clean a list of dicts or class instances."""
cleaned_list = []
for entry in obj:
if isinstance(entry, dict):
cleaned_list.append(clean_dict(entry, sensitive_attributes, internal_attributes, hide_internal_attr))
elif hasattr(entry, "__dict__"):
cleaned_list.append(clean_dict(vars(entry), sensitive_attributes, internal_attributes, hide_internal_attr))
else:
cleaned_list.append(entry)
return cleaned_list
def clean_object(obj, sensitive_attributes, internal_attributes, hide_internal_attr):
"""Clean an object (either a dict, class instance, or other types)."""
if isinstance(obj, dict):
return clean_dict(obj, sensitive_attributes, internal_attributes, hide_internal_attr)
elif hasattr(obj, "__dict__"):
return clean_dict(vars(obj), sensitive_attributes, internal_attributes, hide_internal_attr)
else:
return mask_sensitive_value(obj, "", sensitive_attributes)
def get_config_as_yaml(
data,
sensitive_attributes=None,
internal_attributes=None,
hide_internal_attr=True,
):
"""Main function to process the configuration into YAML format."""
if sensitive_attributes is None:
sensitive_attributes = set()
if internal_attributes is None:
internal_attributes = set()
config_output = {}
for key, obj in data.items():
if key.startswith("_"):
continue
# Process list-based config
if isinstance(obj, list):
cleaned_list = clean_list(
obj, sensitive_attributes, internal_attributes, hide_internal_attr
)
if cleaned_list:
config_output[key] = cleaned_list
# Process dict or class-like object config
else:
cleaned_obj = clean_object(
obj, sensitive_attributes, internal_attributes, hide_internal_attr
)
if cleaned_obj:
config_output[key] = cleaned_obj
return yaml.dump(config_output, indent=2, default_flow_style=False, sort_keys=False)
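The mask-then-filter pattern used by `clean_dict` can be exercised on its own; the constant sets below are illustrative stand-ins for the real `sensitive_attributes` and `internal_attributes` arguments:

```python
SENSITIVE = {"username", "password", "cookie"}
INTERNAL = {"api_url", "settings"}

def clean(data):
    # Mask sensitive values first, then drop internal attributes entirely
    masked = {k: ("*****" if k in SENSITIVE else v) for k, v in data.items()}
    return {k: v for k, v in masked.items() if k not in INTERNAL}

cleaned = clean({
    "username": "admin",
    "base_url": "http://qbit:8080",
    "api_url": "http://qbit:8080/api/v2",
})
```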


@@ -0,0 +1,61 @@
import os
from src.settings._config_as_yaml import get_config_as_yaml
class Envs:
def __init__(self):
self.in_docker = os.environ.get("IN_DOCKER", "").lower() == "true"
self.image_tag = os.environ.get("IMAGE_TAG") or "Local"
self.short_commit_id = os.environ.get("SHORT_COMMIT_ID") or "n/a"
self.use_config_yaml = False # Overwritten later if config file exists
def config_as_yaml(self):
return get_config_as_yaml(self.__dict__)
class Paths:
logs = "./temp/log.txt"
tracker = "./temp/tracker.txt"
config_file = "./config/config.yaml"
class ApiEndpoints:
radarr = "/api/v3"
sonarr = "/api/v3"
lidarr = "/api/v1"
readarr = "/api/v1"
whisparr = "/api/v3"
qbittorrent = "/api/v2"
class MinVersions:
radarr = "5.10.3.9171"
sonarr = "4.0.9.2332"
lidarr = "2.11.1.4621"
readarr = "0.4.15.2787"
whisparr = "2.0.0.548"
qbittorrent = "4.3.0"
class FullQueueParameter:
radarr = "includeUnknownMovieItems"
sonarr = "includeUnknownSeriesItems"
lidarr = "includeUnknownArtistItems"
readarr = "includeUnknownAuthorItems"
whisparr = "includeUnknownSeriesItems"
class DetailItemKey:
radarr = "movie"
sonarr = "episode"
lidarr = "album"
readarr = "book"
whisparr = "episode"
class DetailItemSearchCommand:
radarr = "MoviesSearch"
sonarr = "EpisodeSearch"
lidarr = "BookSearch"
readarr = "BookSearch"
whisparr = None


@@ -0,0 +1,69 @@
from src.settings._config_as_yaml import get_config_as_yaml
from src.settings._download_clients_qBit import QbitClients
class DownloadClients:
"""Represents all download clients."""
qbittorrent = None
download_client_types = [
"qbittorrent",
]
def __init__(self, config, settings):
self._set_qbit_clients(config, settings)
self.check_unique_download_client_types()
def _set_qbit_clients(self, config, settings):
download_clients = config.get("download_clients", {})
if isinstance(download_clients, dict):
self.qbittorrent = QbitClients(config, settings)
if not self.qbittorrent:
# Unsets settings in the general section that are only needed for qbit (no qbit client is defined)
for key in [
"private_tracker_handling",
"public_tracker_handling",
"obsolete_tag",
"protected_tag",
]:
setattr(settings.general, key, None)
def config_as_yaml(self):
"""Logs all download clients."""
return get_config_as_yaml(
{"qbittorrent": self.qbittorrent},
sensitive_attributes={"username", "password", "cookie"},
internal_attributes={"api_url", "cookie", "settings", "min_version"},
hide_internal_attr=True
)
def check_unique_download_client_types(self):
"""Ensures that all download client names are unique.
This is important since downloadClient in arr goes by name, and
this is needed to link it to the right IP set up in the yaml config
(which may be different to the one donfigured in arr)"""
seen = set()
for download_client_type in self.download_client_types:
download_clients = getattr(self, download_client_type, [])
# Check each client in the list
for client in download_clients:
name = getattr(client, "name", None)
if name is None:
raise ValueError(f'{download_client_type} client does not have a name ({client.base_url}).\nMake sure that the name corresponds with the name set in your *arr app for that download client.')
if name.lower() in seen:
raise ValueError(f"Download client names must be unique. Duplicate name found: '{name}'\nMake sure that the name corresponds with the name set in your *arr app for that download client.")
else:
seen.add(name.lower())
def get_download_client_by_name(self, name: str):
"""Retrieve the download client and its type by its name."""
name_lower = name.lower()
for download_client_type in self.download_client_types:
download_clients = getattr(self, download_client_type, [])
# Check each client in the list
for client in download_clients:
if client.name.lower() == name_lower:
return client, download_client_type
return None, None
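The case-insensitive uniqueness check can be isolated; `check_unique_names` is a hypothetical helper mirroring the logic of `check_unique_download_client_types` on plain dicts:

```python
def check_unique_names(clients):
    # Names are compared case-insensitively, since *arr matches
    # download clients by name
    seen = set()
    for client in clients:
        name = client.get("name")
        if name is None:
            raise ValueError("Download client has no name.")
        if name.lower() in seen:
            raise ValueError(f"Duplicate download client name: '{name}'")
        seen.add(name.lower())

try:
    check_unique_names([{"name": "qBittorrent"}, {"name": "QBITTORRENT"}])
    duplicate_detected = False
except ValueError:
    duplicate_detected = True
```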


@@ -0,0 +1,347 @@
from packaging import version
from src.utils.common import make_request, wait_and_exit
from src.settings._constants import ApiEndpoints, MinVersions
from src.utils.log_setup import logger
class QbitError(Exception):
pass
class QbitClients(list):
"""Represents all qBittorrent clients"""
def __init__(self, config, settings):
super().__init__()
self._set_qbit_clients(config, settings)
def _set_qbit_clients(self, config, settings):
qbit_config = config.get("download_clients", {}).get("qbittorrent", [])
if not isinstance(qbit_config, list):
logger.error(
"Invalid config format for qbittorrent clients. Expected a list."
)
return
for client_config in qbit_config:
try:
self.append(QbitClient(settings, **client_config))
except TypeError as e:
logger.error(f"Error parsing qbittorrent client config: {e}")
class QbitClient:
"""Represents a single qBittorrent client."""
cookie: dict = None
version: str = None
def __init__(
self,
settings,
base_url: str = None,
username: str = None,
password: str = None,
name: str = None
):
self.settings = settings
if not base_url:
logger.error("Skipping qBittorrent client entry: 'base_url' is required.")
raise ValueError("qBittorrent client must have a 'base_url'.")
self.base_url = base_url.rstrip("/")
self.api_url = self.base_url + getattr(ApiEndpoints, "qbittorrent")
self.min_version = getattr(MinVersions, "qbittorrent")
self.username = username
self.password = password
self.name = name
if not self.name:
logger.verbose("No name provided for qbittorrent client, assuming 'qBitorrent'. If the name used in your *arr is different, please correct either the name in your *arr, or set the name in your config")
self.name = "qBittorrent"
self._remove_none_attributes()
def _remove_none_attributes(self):
"""Removes attributes that are None to keep the object clean."""
for attr in list(vars(self)):
if getattr(self, attr) is None:
delattr(self, attr)
async def refresh_cookie(self):
"""Refresh the qBittorrent session cookie."""
try:
endpoint = f"{self.api_url}/auth/login"
data = {"username": getattr(self, 'username', ''), "password": getattr(self, 'password', '')}
headers = {"content-type": "application/x-www-form-urlencoded"}
response = await make_request(
"post", endpoint, self.settings, data=data, headers=headers
)
if response.text == "Fails.":
raise ConnectionError("Login failed.")
self.cookie = {"SID": response.cookies["SID"]}
logger.debug("qBit cookie refreshed!")
except Exception as e:
logger.error(f"Error refreshing qBit cookie: {e}")
self.cookie = {}
raise QbitError(e) from e
async def fetch_version(self):
"""Fetch the current qBittorrent version."""
endpoint = f"{self.api_url}/app/version"
response = await make_request("get", endpoint, self.settings, cookies=self.cookie)
self.version = response.text[1:] # Remove the 'v' prefix
logger.debug(f"qBit version for client {self.name}: {self.version}")
async def validate_version(self):
"""Check if the qBittorrent version meets minimum and recommended requirements."""
min_version = self.settings.min_versions.qbittorrent
if version.parse(self.version) < version.parse(min_version):
logger.error(
f"Please update qBittorrent to at least version {min_version}. Current version: {self.version}"
)
raise QbitError(
f"qBittorrent version {self.version} is too old. Please update."
)
if version.parse(self.version) < version.parse("5.0.0"):
logger.info(
f"[Tip!] Consider upgrading to qBittorrent v5.0.0 or newer to reduce network overhead."
)
async def create_tag(self):
"""Create the protection tag in qBittorrent if it doesn't exist."""
url = f"{self.api_url}/torrents/tags"
response = await make_request("get", url, self.settings, cookies=self.cookie)
current_tags = response.json()
if self.settings.general.protected_tag not in current_tags:
logger.verbose(f"Creating protection tag: {self.settings.general.protected_tag}")
if not self.settings.general.test_run:
data = {"tags": self.settings.general.protected_tag}
await make_request(
"post",
self.api_url + "/torrents/createTags",
self.settings,
data=data,
cookies=self.cookie,
)
if (
self.settings.general.public_tracker_handling == "tag_as_obsolete"
or self.settings.general.private_tracker_handling == "tag_as_obsolete"
):
if self.settings.general.obsolete_tag not in current_tags:
logger.verbose(f"Creating obsolete tag: {self.settings.general.obsolete_tag}")
if not self.settings.general.test_run:
data = {"tags": self.settings.general.obsolete_tag}
await make_request(
"post",
self.api_url + "/torrents/createTags",
self.settings,
data=data,
cookies=self.cookie,
)
async def set_unwanted_folder(self):
"""Set the 'unwanted folder' setting in qBittorrent if needed."""
if self.settings.jobs.remove_bad_files:
endpoint = f"{self.api_url}/app/preferences"
response = await make_request(
"get", endpoint, self.settings, cookies=self.cookie
)
qbit_settings = response.json()
if not qbit_settings.get("use_unwanted_folder"):
logger.info(
"Enabling 'Keep unselected files in .unwanted folder' in qBittorrent."
)
if not self.settings.general.test_run:
data = {"json": '{"use_unwanted_folder": true}'}
await make_request(
"post",
self.api_url + "/app/setPreferences",
self.settings,
data=data,
cookies=self.cookie,
)
async def check_qbit_reachability(self):
"""Check if the qBittorrent URL is reachable."""
try:
endpoint = f"{self.api_url}/auth/login"
data = {"username": getattr(self, 'username', ''), "password": getattr(self, 'password', '')}
headers = {"content-type": "application/x-www-form-urlencoded"}
await make_request(
"post", endpoint, self.settings, data=data, headers=headers, log_error=False
)
except Exception as e:
tip = "💡 Tip: Did you specify the URL (and username/password if required) correctly?"
logger.error(f"-- | qBittorrent\n❗️ {e}\n{tip}\n")
wait_and_exit()
async def check_qbit_connected(self):
"""Check if qBittorrent is connected to the internet."""
response = await make_request(
"get",
self.api_url + "/sync/maindata",
self.settings,
cookies=self.cookie,
)
connection_status = response.json()["server_state"]["connection_status"]
return connection_status != "disconnected"
async def setup(self):
"""Perform the qBittorrent setup by calling relevant managers."""
# Check reachability
await self.check_qbit_reachability()
# Refresh the qBittorrent cookie first
await self.refresh_cookie()
try:
# Fetch version and validate it
await self.fetch_version()
await self.validate_version()
logger.info(f"OK | qBittorrent ({self.base_url})")
except QbitError as e:
logger.error(f"qBittorrent version check failed: {e}")
wait_and_exit() # Exit if version check fails
# Continue with other setup tasks regardless of version check result
await self.create_tag()
await self.set_unwanted_folder()
async def get_protected_and_private(self):
"""Fetches torrents from qBittorrent and checks for protected and private status."""
protected_downloads = []
private_downloads = []
# Fetch all torrents
qbit_items = await self.get_qbit_items()
for qbit_item in qbit_items:
# Fetch protected torrents (by tag)
if self.settings.general.protected_tag in qbit_item.get("tags", []):
protected_downloads.append(qbit_item["hash"].upper())
# Fetch private torrents
if not (self.settings.general.private_tracker_handling == "remove" or self.settings.general.public_tracker_handling == "remove"):
if version.parse(self.version) >= version.parse("5.0.0"):
if qbit_item.get("private"):
private_downloads.append(qbit_item["hash"].upper())
else:
qbit_item_props = await make_request(
"get",
self.api_url + "/torrents/properties",
self.settings,
params={"hash": qbit_item["hash"]},
cookies=self.cookie,
)
if not qbit_item_props:
logger.error(
"Torrent %s not found on qBittorrent - potentially removed while checking if private. "
"Consider upgrading qBit to v5.0.4 or newer to avoid this problem.",
qbit_item["hash"],
)
continue
if qbit_item_props.get("is_private", False):
private_downloads.append(qbit_item["hash"].upper())
qbit_item["private"] = qbit_item_props.get("is_private", None)
return protected_downloads, private_downloads
async def set_tag(self, tags, hashes):
"""
Sets tags to one or more torrents in qBittorrent.
Args:
tags (list): A list of tag names to be added.
hashes (list): A list of torrent hashes to which the tags should be applied.
"""
# Ensure hashes are provided as a string separated by '|'
hashes_str = "|".join(hashes)
# Ensure tags are provided as a string separated by ',' (comma)
tags_str = ",".join(tags)
# Prepare the data for the request
data = {
"hashes": hashes_str,
"tags": tags_str
}
# Perform the request to add the tag(s) to the torrents
await make_request(
"post",
self.api_url + "/torrents/addTags",
self.settings,
data=data,
cookies=self.cookie,
)
async def get_download_progress(self, download_id):
items = await self.get_qbit_items(download_id)
return items[0]["completed"]
async def get_qbit_items(self, hashes=None):
params = None
if hashes:
if isinstance(hashes, str):
hashes = [hashes]
params = {"hashes": "|".join(hashes).lower()} # Join and make lowercase
response = await make_request(
method="get",
endpoint=self.api_url + "/torrents/info",
settings=self.settings,
params=params,
cookies=self.cookie,
)
return response.json()
async def get_torrent_files(self, download_id):
# Note: may fail if this download lives on a different qBittorrent client than this one
response = await make_request(
method="get",
endpoint=self.api_url + "/torrents/files",
settings=self.settings,
params={"hash": download_id.lower()},
cookies=self.cookie,
)
return response.json()
async def set_torrent_file_priority(self, download_id, file_id, priority=0):
data = {
"hash": download_id.lower(),
"id": file_id,
"priority": priority,
}
await make_request(
"post",
self.api_url + "/torrents/filePrio",
self.settings,
data=data,
cookies=self.cookie,
)
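qBittorrent's torrents endpoints expect multiple hashes pipe-separated (and lowercase for lookups) and multiple tags comma-separated, as used in `set_tag` and `get_qbit_items`. A standalone sketch of that parameter formatting (the helper name is illustrative):

```python
def format_qbit_params(hashes, tags=None):
    # qBittorrent joins multiple hashes with '|' (lowercase for lookups)
    # and multiple tags with ','
    params = {"hashes": "|".join(h.lower() for h in hashes)}
    if tags:
        params["tags"] = ",".join(tags)
    return params

params = format_qbit_params(["ABC123", "DEF456"], tags=["Keep", "Obsolete"])
```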

74 src/settings/_general.py Normal file

@@ -0,0 +1,74 @@
import yaml
from src.utils.log_setup import logger
from src.settings._validate_data_types import validate_data_types
from src.settings._config_as_yaml import get_config_as_yaml
class General:
"""Represents general settings for the application."""
VALID_TRACKER_HANDLING = {"remove", "skip", "obsolete_tag"}
log_level: str = "INFO"
test_run: bool = False
ssl_verification: bool = True
timer: float = 10.0
ignored_download_clients: list = []
private_tracker_handling: str = "remove"
public_tracker_handling: str = "remove"
obsolete_tag: str = None
protected_tag: str = "Keep"
def __init__(self, config):
general_config = config.get("general", {})
self.log_level = general_config.get("log_level", self.log_level).upper()
self.test_run = general_config.get("test_run", self.test_run)
self.timer = general_config.get("timer", self.timer)
self.ssl_verification = general_config.get("ssl_verification", self.ssl_verification)
self.ignored_download_clients = general_config.get("ignored_download_clients", self.ignored_download_clients)
self.private_tracker_handling = general_config.get("private_tracker_handling", self.private_tracker_handling)
self.public_tracker_handling = general_config.get("public_tracker_handling", self.public_tracker_handling)
self.obsolete_tag = general_config.get("obsolete_tag", self.obsolete_tag)
self.protected_tag = general_config.get("protected_tag", self.protected_tag)
# Validate tracker handling settings
self.private_tracker_handling = self._validate_tracker_handling(self.private_tracker_handling, "private_tracker_handling")
self.public_tracker_handling = self._validate_tracker_handling(self.public_tracker_handling, "public_tracker_handling")
self.obsolete_tag = self._determine_obsolete_tag(self.obsolete_tag)
validate_data_types(self)
self._remove_none_attributes()
def _remove_none_attributes(self):
"""Removes attributes that are None to keep the object clean."""
for attr in list(vars(self)):
if getattr(self, attr) is None:
delattr(self, attr)
def _validate_tracker_handling(self, value, field_name):
"""Validates tracker handling options. Defaults to 'remove' if invalid."""
if value not in self.VALID_TRACKER_HANDLING:
logger.error(
f"Invalid value '{value}' for {field_name}. Defaulting to 'remove'."
)
return "remove"
return value
def _determine_obsolete_tag(self, obsolete_tag):
"""Defaults obsolete tag to "obsolete", only if none is provided and the tag is needed for handling """
if obsolete_tag is None and (
self.private_tracker_handling == "obsolete_tag"
or self.public_tracker_handling == "obsolete_tag"
):
return "Obsolete"
return obsolete_tag
def config_as_yaml(self):
"""Returns all general settings as YAML."""
return get_config_as_yaml(vars(self))

296 src/settings/_instances.py Normal file

@@ -0,0 +1,296 @@
import requests
from packaging import version
from src.utils.log_setup import logger
from src.settings._constants import (
ApiEndpoints,
MinVersions,
FullQueueParameter,
DetailItemKey,
DetailItemSearchCommand,
)
from src.settings._config_as_yaml import get_config_as_yaml
from src.utils.common import make_request, wait_and_exit
class Tracker:
def __init__(self):
self.protected = []
self.private = []
self.defective = {}
self.download_progress = {}
self.deleted = []
self.extension_checked = []
async def refresh_private_and_protected(self, settings):
protected_downloads = []
private_downloads = []
for qbit in settings.download_clients.qbittorrent:
protected, private = await qbit.get_protected_and_private()
protected_downloads.extend(protected)
private_downloads.extend(private)
self.protected = protected_downloads
self.private = private_downloads
class ArrError(Exception):
pass
class Instances:
"""Represents all Arr instances."""
def __init__(self, config, settings):
self.arrs = ArrInstances(config, settings)
if not self.arrs:
logger.error("No valid Arr instances found in the config.")
wait_and_exit()
def get_by_arr_type(self, arr_type):
"""Return a list of arr instances matching the given arr_type."""
return [arr for arr in self.arrs if arr.arr_type == arr_type]
def config_as_yaml(self, hide_internal_attr=True):
"""Logs all configured Arr instances while masking sensitive attributes."""
internal_attributes={
"settings",
"api_url",
"min_version",
"arr_type",
"full_queue_parameter",
"monitored_item",
"detail_item_key",
"detail_item_id_key",
"detail_item_ids_key",
"detail_item_search_command",
}
outputs = []
for arr_type in ["sonarr", "radarr", "readarr", "lidarr", "whisparr"]:
arrs = self.get_by_arr_type(arr_type)
if arrs:
output = get_config_as_yaml(
{arr_type.capitalize(): arrs},
sensitive_attributes={"api_key"},
internal_attributes=internal_attributes,
hide_internal_attr=hide_internal_attr,
)
outputs.append(output)
return "\n".join(outputs)
def check_any_arrs(self):
"""Check if there are any ARR instances."""
if not self.arrs:
logger.warning("No ARR instances found.")
wait_and_exit()
class ArrInstances(list):
"""Represents all Arr clients (Sonarr, Radarr, etc.)."""
def __init__(self, config, settings):
super().__init__()
self._load_clients(config, settings)
def _load_clients(self, config, settings):
instances_config = config.get("instances", {})
if not isinstance(instances_config, dict):
logger.error("Invalid format for 'instances'. Expected a dictionary.")
return
for arr_type, clients in instances_config.items():
if not isinstance(clients, list):
logger.error(f"Invalid config format for {arr_type}. Expected a list.")
continue
for client_config in clients:
try:
self.append(
ArrInstance(
settings,
arr_type=arr_type,
base_url=client_config["base_url"],
api_key=client_config["api_key"],
)
)
except KeyError as e:
logger.error(
f"Missing required key {e} in {arr_type} client config."
)
class ArrInstance:
"""Represents an individual Arr instance (Sonarr, Radarr, etc.)."""
version: str = None
name: str = None
def __init__(self, settings, arr_type: str, base_url: str, api_key: str):
if not base_url:
logger.error(f"Skipping {arr_type} client entry: 'base_url' is required.")
raise ValueError(f"{arr_type} client must have a 'base_url'.")
if not api_key:
logger.error(f"Skipping {arr_type} client entry: 'api_key' is required.")
raise ValueError(f"{arr_type} client must have an 'api_key'.")
self.settings = settings
self.arr_type = arr_type
self.base_url = base_url.rstrip("/")
self.api_key = api_key
self.tracker = Tracker() # Per-instance tracker; a class-level Tracker() would be shared across all Arr instances
self.api_url = self.base_url + getattr(ApiEndpoints, arr_type)
self.min_version = getattr(MinVersions, arr_type)
self.full_queue_parameter = getattr(FullQueueParameter, arr_type)
self.detail_item_key = getattr(DetailItemKey, arr_type)
self.detail_item_id_key = self.detail_item_key + "Id"
self.detail_item_ids_key = self.detail_item_key + "Ids"
self.detail_item_search_command = getattr(DetailItemSearchCommand, arr_type)
async def _check_ui_language(self):
"""Check if the UI language is set to English."""
endpoint = self.api_url + "/config/ui"
headers = {"X-Api-Key": self.api_key}
response = await make_request("get", endpoint, self.settings, headers=headers)
ui_language = (response.json())["uiLanguage"]
if ui_language > 1: # Not English
logger.error("!! %s Error: !!", self.name)
logger.error(
f"> Decluttarr only works correctly if UI language is set to English (under Settings/UI in {self.name})"
)
logger.error(
"> Details: https://github.com/ManiMatter/decluttarr/issues/132)"
)
raise ArrError("Not English")
def _check_min_version(self, status):
"""Check if ARR instance meets minimum version requirements."""
self.version = status["version"]
min_version = getattr(self.settings.min_versions, self.arr_type)
if min_version:
if version.parse(self.version) < version.parse(min_version):
logger.error("!! %s Error: !!", self.name)
logger.error(
f"> Please update {self.name} ({self.base_url}) to at least version {min_version}. Current version: {self.version}"
)
raise ArrError("Not meeting minimum version requirements")
def _check_arr_type(self, status):
"""Check if the ARR instance is of the correct type."""
actual_arr_type = status["appName"]
if actual_arr_type.lower() != self.arr_type:
logger.error("!! %s Error: !!", self.name)
logger.error(
f"> Your {self.name} ({self.base_url}) points to a {actual_arr_type} instance, rather than {self.arr_type}. Did you specify the wrong IP?"
)
raise ArrError("Wrong Arr Type")
async def _check_reachability(self):
"""Check if ARR instance is reachable."""
try:
endpoint = self.api_url + "/system/status"
headers = {"X-Api-Key": self.api_key}
response = await make_request(
"get", endpoint, self.settings, headers=headers, log_error=False
)
status = response.json()
return status
except Exception as e:
if isinstance(e, requests.exceptions.HTTPError):
response = getattr(e, "response", None)
if response is not None and response.status_code == 401:
tip = "💡 Tip: Have you configured the API_KEY correctly?"
else:
tip = f"💡 Tip: HTTP error occurred. Status: {getattr(response, 'status_code', 'unknown')}"
elif isinstance(e, requests.exceptions.RequestException):
tip = "💡 Tip: Have you configured the URL correctly?"
else:
tip = ""
logger.error(f"-- | {self.arr_type} ({self.base_url})\n❗️ {e}\n{tip}\n")
raise ArrError(e) from e
async def setup(self):
"""Checks on specific ARR instance"""
try:
status = await self._check_reachability()
self.name = status.get("instanceName", self.arr_type)
self._check_arr_type(status)
self._check_min_version(status)
await self._check_ui_language()
# Display result
logger.info(f"OK | {self.name} ({self.base_url})")
logger.debug(f"Current version of {self.name}: {self.version}")
except Exception as e:
if not isinstance(e, ArrError):
logger.error(f"Unhandled error: {e}", exc_info=True)
wait_and_exit()
async def get_download_client_implementation(self, download_client_name):
"""Fetch download client information and return the implementation value."""
endpoint = self.api_url + "/downloadclient"
headers = {"X-Api-Key": self.api_key}
# Fetch the download client list from the API
response = await make_request("get", endpoint, self.settings, headers=headers)
# Parse the download client list
download_clients = response.json()
# Find the client where the name matches client_name
for client in download_clients:
if client.get("name") == download_client_name:
# Return the implementation value if found
return client.get("implementation", None)
return None
async def remove_queue_item(self, queue_id, blocklist=False):
"""
Remove a specific item from the queue by its queue ID.
Sends a delete request to the API to remove the item.
Args:
queue_id (str): The queue ID of the item to be removed.
blocklist (bool): Whether to add the item to the blocklist. Default is False.
Returns:
bool: True if the removal was successful, False otherwise.
"""
endpoint = f"{self.api_url}/queue/{queue_id}"
headers = {"X-Api-Key": self.api_key}
json_payload = {"removeFromClient": True, "blocklist": blocklist}
# Send the request to remove the download from the queue
response = await make_request(
"delete", endpoint, self.settings, headers=headers, json=json_payload
)
# Return True if the removal succeeded
return response.status_code == 200
async def is_monitored(self, detail_id):
"""Check if detail item (like a book, series, etc) is monitored."""
endpoint = f"{self.api_url}/{self.detail_item_key}/{detail_id}"
headers = {"X-Api-Key": self.api_key}
response = await make_request("get", endpoint, self.settings, headers=headers)
return response.json()["monitored"]
async def get_series(self):
"""Fetch download client information and return the implementation value."""
endpoint = self.api_url + "/series"
headers = {"X-Api-Key": self.api_key}
response = await make_request("get", endpoint, self.settings, headers=headers)
return response.json()
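The minimum-version gate in `_check_min_version` compares parsed versions. A dependency-free sketch of the idea (the real code uses `packaging.version`, which handles far more version formats than this naive tuple split):

```python
def parse_version(v):
    # Naive stand-in for packaging.version.parse; sufficient for the
    # purely numeric four-part Arr versions like "5.10.3.9171"
    return tuple(int(part) for part in v.split("."))

def meets_min_version(current, minimum):
    # Tuples compare element by element, so "4.0.8.x" < "4.0.9.x"
    return parse_version(current) >= parse_version(minimum)

ok = meets_min_version("5.10.3.9171", "5.10.3.9171")
too_old = meets_min_version("4.0.8.2000", "4.0.9.2332")
```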

161 src/settings/_jobs.py Normal file

@@ -0,0 +1,161 @@
from src.utils.log_setup import logger
from src.settings._validate_data_types import validate_data_types
from src.settings._config_as_yaml import get_config_as_yaml
class JobParams:
"""Represents individual job settings, with an 'enabled' flag and optional parameters."""
enabled: bool = False
message_patterns: list
max_strikes: int
min_speed: int
max_concurrent_searches: int
min_days_between_searches: int
def __init__(
self,
enabled=None,
message_patterns=None,
max_strikes=None,
min_speed=None,
max_concurrent_searches=None,
min_days_between_searches=None,
):
self.enabled = enabled
self.message_patterns = message_patterns
self.max_strikes = max_strikes
self.min_speed = min_speed
self.max_concurrent_searches = max_concurrent_searches
self.min_days_between_searches = min_days_between_searches
# Remove attributes that are None to keep the object clean
self._remove_none_attributes()
def _remove_none_attributes(self):
"""Removes attributes that are None to keep the object clean."""
for attr in list(vars(self)):
if getattr(self, attr) is None:
delattr(self, attr)
class JobDefaults:
"""Represents default job settings."""
max_strikes: int = 3
max_concurrent_searches: int = 3
min_days_between_searches: int = 7
min_speed: int = 100
message_patterns = ["*"]
def __init__(self, config):
job_defaults_config = config.get("job_defaults", {})
self.max_strikes = job_defaults_config.get("max_strikes", self.max_strikes)
self.max_concurrent_searches = job_defaults_config.get(
"max_concurrent_searches", self.max_concurrent_searches
)
self.min_days_between_searches = job_defaults_config.get(
"min_days_between_searches", self.min_days_between_searches
)
self.min_speed = job_defaults_config.get("min_speed", self.min_speed)
self.message_patterns = job_defaults_config.get("message_patterns", self.message_patterns)
validate_data_types(self)
class Jobs:
"""Represents all jobs explicitly"""
def __init__(self, config):
self.job_defaults = JobDefaults(config)
self._set_job_defaults()
self._set_job_configs(config)
del self.job_defaults
def _set_job_defaults(self):
self.remove_bad_files = JobParams()
self.remove_failed_downloads = JobParams()
self.remove_failed_imports = JobParams(
message_patterns=self.job_defaults.message_patterns
)
self.remove_metadata_missing = JobParams(
max_strikes=self.job_defaults.max_strikes
)
self.remove_missing_files = JobParams()
self.remove_orphans = JobParams()
self.remove_slow = JobParams(
max_strikes=self.job_defaults.max_strikes,
min_speed=self.job_defaults.min_speed,
)
self.remove_stalled = JobParams(max_strikes=self.job_defaults.max_strikes)
self.remove_unmonitored = JobParams()
self.search_unmet_cutoff_content = JobParams(
max_concurrent_searches=self.job_defaults.max_concurrent_searches,
min_days_between_searches=self.job_defaults.min_days_between_searches,
)
self.search_missing_content = JobParams(
max_concurrent_searches=self.job_defaults.max_concurrent_searches,
min_days_between_searches=self.job_defaults.min_days_between_searches,
)
def _set_job_configs(self, config):
# Populate jobs from YAML config
for job_name in self.__dict__:
if job_name != "job_defaults" and job_name in config.get("jobs", {}):
self._set_job_settings(job_name, config["jobs"][job_name])
def _set_job_settings(self, job_name, job_config):
"""Sets per-job config settings"""
job = getattr(self, job_name, None)
if job_config is None:
# Only triggered when reading from a YAML file; with docker-compose,
# empty configs are not passed at all, so the job would not be parsed.
job.enabled = True
elif isinstance(job_config, bool):
if job:
job.enabled = job_config
else:
job = JobParams(enabled=job_config)
elif isinstance(job_config, dict):
job_config.setdefault("enabled", True)
if job:
for key, value in job_config.items():
setattr(job, key, value)
else:
job = JobParams(**job_config)
else:
job = JobParams(enabled=False)
setattr(self, job_name, job)
validate_data_types(
job, self.job_defaults
) # Validates and applies defaults from job_defaults
def log_status(self):
job_strings = []
for job_name, job_obj in self.__dict__.items():
if isinstance(job_obj, JobParams):
job_strings.append(f"{job_name}: {job_obj.enabled}")
status = "\n".join(job_strings)
logger.info(status)
def config_as_yaml(self):
filtered = {
k: v
for k, v in vars(self).items()
if not hasattr(v, "enabled") or v.enabled
}
return get_config_as_yaml(
filtered,
internal_attributes={"enabled"},
hide_internal_attr=True,
)
def list_job_status(self):
"""Returns a string showing each job and whether it's enabled or not using emojis."""
lines = []
for name, obj in vars(self).items():
if hasattr(obj, "enabled"):
status = "🟢" if obj.enabled else "⚪️"
lines.append(f"{status} {name}")
return "\n".join(lines)

View File

@@ -0,0 +1,138 @@
import os
import yaml
from src.utils.log_setup import logger
CONFIG_MAPPING = {
"general": [
"LOG_LEVEL",
"TEST_RUN",
"TIMER",
"SSL_VERIFICATION",
"IGNORED_DOWNLOAD_CLIENTS",
],
"job_defaults": [
"MAX_STRIKES",
"MIN_DAYS_BETWEEN_SEARCHES",
"MAX_CONCURRENT_SEARCHES",
],
"jobs": [
"REMOVE_BAD_FILES",
"REMOVE_FAILED_DOWNLOADS",
"REMOVE_FAILED_IMPORTS",
"REMOVE_METADATA_MISSING",
"REMOVE_MISSING_FILES",
"REMOVE_ORPHANS",
"REMOVE_SLOW",
"REMOVE_STALLED",
"REMOVE_UNMONITORED",
"SEARCH_UNMET_CUTOFF_CONTENT",
"SEARCH_MISSING_CONTENT",
],
"instances": ["SONARR", "RADARR", "READARR", "LIDARR", "WHISPARR"],
"download_clients": ["QBITTORRENT"],
}
def get_user_config(settings):
"""Checks if data is read from enviornment variables, or from yaml file.
Reads from environment variables if in docker, unless in docker-compose "USE_CONFIG_YAML" is set to true.
Then the config file is read.
"""
config = {}
if _config_file_exists(settings):
config = _load_from_yaml_file(settings)
settings.envs.use_config_yaml = True
elif settings.envs.in_docker:
config = _load_from_env()
# Ensure all top-level keys exist, even if empty
for section in CONFIG_MAPPING:
if config.get(section) is None:
config[section] = {}
return config
def _parse_env_var(key: str) -> dict | list | str | int | None:
"""Helper function to parse one setting input key"""
raw_value = os.getenv(key)
if raw_value is None:
return None
try:
parsed = yaml.safe_load(raw_value)
return _lowercase(parsed)
except yaml.YAMLError as e:
logger.error(f"Failed to parse environment variable {key} as YAML:\n{e}")
return {}
def _load_section(keys: list[str]) -> dict:
"""Helper function to parse one section of expected config"""
section_config = {}
for key in keys:
parsed = _parse_env_var(key)
if parsed is not None:
section_config[key.lower()] = parsed
return section_config
def _load_from_env() -> dict:
"""Main function to load settings from env"""
config = {}
for section, keys in CONFIG_MAPPING.items():
config[section] = _load_section(keys)
return config
def _lowercase(data):
"""Translates recevied keys (for instance setting-keys of jobs) to lower case"""
if isinstance(data, dict):
return {str(k).lower(): _lowercase(v) for k, v in data.items()}
elif isinstance(data, list):
return [_lowercase(item) for item in data]
else:
# Leave strings and other types unchanged
return data
def _config_file_exists(settings):
config_path = settings.paths.config_file
return os.path.exists(config_path)
def _load_from_yaml_file(settings):
"""Reads config from YAML file and returns a dict."""
config_path = settings.paths.config_file
try:
with open(config_path, "r", encoding="utf-8") as file:
config = yaml.safe_load(file) or {}
return config
except yaml.YAMLError as e:
logger.error("Error reading YAML file: %s", e)
return {}
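The recursive key-lowercasing performed by `_lowercase` can be sketched in isolation; the sample data below is made up for illustration:

```python
def lowercase_keys(data):
    """Recursively lower-case dict keys; list items recurse, other values pass through."""
    if isinstance(data, dict):
        return {str(k).lower(): lowercase_keys(v) for k, v in data.items()}
    if isinstance(data, list):
        return [lowercase_keys(item) for item in data]
    return data


result = lowercase_keys({"REMOVE_SLOW": {"MAX_STRIKES": 3}, "TAGS": ["Keep"]})
print(result)  # {'remove_slow': {'max_strikes': 3}, 'tags': ['Keep']}
```

Note that only keys are normalized; string values such as `"Keep"` are left unchanged.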

View File

@@ -0,0 +1,91 @@
import inspect
from src.utils.log_setup import logger
def validate_data_types(cls, default_cls=None):
"""Ensures all attributes match expected types dynamically.
If `default_cls` is provided, default values are taken from that class rather than from the object's own class.
If the attribute doesn't exist in `default_cls`, fall back to `cls.__class__`.
"""
annotations = inspect.get_annotations(cls.__class__) # Extract type hints
for attr, expected_type in annotations.items():
if not hasattr(cls, attr): # Skip if attribute is missing
continue
value = getattr(cls, attr)
default_source = default_cls if default_cls and hasattr(default_cls, attr) else cls.__class__
default_value = getattr(default_source, attr, None)
if value == default_value:
continue
if not isinstance(value, expected_type):
try:
if expected_type is bool:
value = convert_to_bool(value)
elif expected_type is int:
value = int(value)
elif expected_type is float:
value = float(value)
elif expected_type is str:
value = convert_to_str(value)
elif expected_type is list:
value = convert_to_list(value)
elif expected_type is dict:
value = convert_to_dict(value)
else:
raise TypeError(f"Unhandled type conversion for '{attr}': {expected_type}")
except Exception as e:
logger.error(
f"❗️ Invalid type for '{attr}': Expected {expected_type.__name__}, but got {type(value).__name__}. "
f"Error: {e}. Using default value: {default_value}"
)
value = default_value
setattr(cls, attr, value)
# --- Helper Functions ---
def convert_to_bool(raw_value):
"""Converts strings like 'yes', 'no', 'true', 'false' into boolean values."""
if isinstance(raw_value, bool):
return raw_value
true_values = {"1", "yes", "true", "on"}
false_values = {"0", "no", "false", "off"}
if isinstance(raw_value, str):
raw_value = raw_value.strip().lower()
if raw_value in true_values:
return True
elif raw_value in false_values:
return False
else:
raise ValueError(f"Invalid boolean value: '{raw_value}'")
def convert_to_str(raw_value):
"""Ensures a string and trims whitespace."""
if isinstance(raw_value, str):
return raw_value.strip()
return str(raw_value).strip()
def convert_to_list(raw_value):
"""Ensures a value is a list."""
if isinstance(raw_value, list):
return [convert_to_str(item) for item in raw_value]
return [convert_to_str(raw_value)] # Wrap single values in a list
def convert_to_dict(raw_value):
"""Ensures a value is a dictionary."""
if isinstance(raw_value, dict):
return {convert_to_str(k): v for k, v in raw_value.items()}
raise TypeError(f"Expected dict but got {type(raw_value).__name__}")

60
src/settings/settings.py Normal file
View File

@@ -0,0 +1,60 @@
from src.utils.log_setup import configure_logging
from src.settings._constants import Envs, MinVersions, Paths
# from src.settings._migrate_legacy import migrate_legacy
from src.settings._general import General
from src.settings._jobs import Jobs
from src.settings._download_clients import DownloadClients
from src.settings._instances import Instances
from src.settings._user_config import get_user_config
class Settings:
min_versions = MinVersions()
paths = Paths()
def __init__(self):
self.envs = Envs()
config = get_user_config(self)
self.general = General(config)
self.jobs = Jobs(config)
self.download_clients = DownloadClients(config, self)
self.instances = Instances(config, self)
configure_logging(self)
def __repr__(self):
sections = [
("ENVIRONMENT SETTINGS", "envs"),
("GENERAL SETTINGS", "general"),
("ACTIVE JOBS", "jobs"),
("JOB SETTINGS", "jobs"),
("INSTANCE SETTINGS", "instances"),
("DOWNLOAD CLIENT SETTINGS", "download_clients"),
]
messages = []
messages.append("🛠️ Decluttarr - Settings 🛠️")
messages.append("-"*80)
messages.append("")
for title, attr_name in sections:
section = getattr(self, attr_name, None)
section_content = section.config_as_yaml()
if title == "ACTIVE JOBS":
messages.append(self._format_section_title(title))
messages.append(self.jobs.list_job_status() + "\n")
elif section_content != "{}\n":
messages.append(self._format_section_title(title))
messages.append(section_content + "\n")
return "\n".join(messages)
def _format_section_title(self, name, border_length=50, symbol="="):
"""Format section title with centered name and hash borders."""
padding = max(border_length - len(name) - 2, 0)  # 2 for the spaces around the name
left_hashes = right_hashes = padding // 2
if padding % 2 != 0:
right_hashes += 1
return f"{symbol * left_hashes} {name} {symbol * right_hashes}\n"

39
src/utils/common.py Normal file
View File

@@ -0,0 +1,39 @@
import sys
import time
import asyncio
import requests
from src.utils.log_setup import logger
async def make_request(
method: str, endpoint: str, settings, timeout: int = 5, log_error: bool = True, **kwargs
) -> requests.Response:
"""
A utility function to make HTTP requests (GET, POST, DELETE, PUT).
"""
try:
# Make the request using the method passed (get, post, etc.)
response = await asyncio.to_thread(
getattr(requests, method.lower()),
endpoint,
**kwargs,
verify=settings.general.ssl_verification,
timeout=timeout,
)
response.raise_for_status()
return response
except requests.exceptions.HTTPError as http_err:
if log_error:
logger.error(f"HTTP error occurred: {http_err}", exc_info=True)
raise
except Exception as err:
if log_error:
logger.error(f"Other error occurred: {err}", exc_info=True)
raise
def wait_and_exit(seconds=30):
logger.info(f"Decluttarr will wait for {seconds} seconds and then exit.")
time.sleep(seconds)
sys.exit()
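`make_request` keeps the event loop responsive by offloading the blocking `requests` call with `asyncio.to_thread`. The same pattern with a stand-in blocking function (the endpoint and return value here are made up):

```python
import asyncio
import time


def blocking_fetch(endpoint, timeout=5):
    """Stand-in for a blocking HTTP call such as requests.get."""
    time.sleep(0.01)  # simulate network latency
    return {"endpoint": endpoint, "status": 200}


async def fetch(endpoint):
    # Run the blocking call in a worker thread so the event loop stays free.
    return await asyncio.to_thread(blocking_fetch, endpoint, timeout=5)


result = asyncio.run(fetch("/api/v3/queue"))
print(result)  # {'endpoint': '/api/v3/queue', 'status': 200}
```

`asyncio.to_thread(func, *args, **kwargs)` forwards both positional and keyword arguments, which is why `make_request` can pass `verify=` and `timeout=` straight through.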

View File

@@ -1,246 +0,0 @@
#### Turning off black formatting
# fmt: off
########### Import Libraries
import logging, verboselogs
logger = verboselogs.VerboseLogger(__name__)
from dateutil.relativedelta import relativedelta as rd
import requests
from src.utils.rest import rest_get, rest_post #
from src.utils.shared import qBitRefreshCookie
import asyncio
from packaging import version
def setLoggingFormat(settingsDict):
# Sets logger output to specific format
log_level_num=logging.getLevelName(settingsDict['LOG_LEVEL'])
logging.basicConfig(
format=('' if settingsDict['IS_IN_DOCKER'] else '%(asctime)s ') + ('[%(levelname)-7s]' if settingsDict['LOG_LEVEL']=='VERBOSE' else '[%(levelname)s]') + ': %(message)s',
level=log_level_num
)
return
async def getArrInstanceName(settingsDict, arrApp):
# Retrieves the names of the arr instances, and if not defined, sets a default (should in theory not be requried, since UI already enforces a value)
try:
if settingsDict[arrApp + '_URL']:
settingsDict[arrApp + '_NAME'] = (await rest_get(settingsDict[arrApp + '_URL']+'/system/status', settingsDict[arrApp + '_KEY']))['instanceName']
except:
settingsDict[arrApp + '_NAME'] = arrApp.title()
return settingsDict
async def getProtectedAndPrivateFromQbit(settingsDict):
# Returns two lists containing the hashes of Qbit that are either protected by tag, or are private trackers (if IGNORE_PRIVATE_TRACKERS is true)
protectedDownloadIDs = []
privateDowloadIDs = []
if settingsDict['QBITTORRENT_URL']:
# Fetch all torrents
qbitItems = await rest_get(settingsDict['QBITTORRENT_URL']+'/torrents/info',params={}, cookies=settingsDict['QBIT_COOKIE'])
for qbitItem in qbitItems:
# Fetch protected torrents (by tag)
if settingsDict['NO_STALLED_REMOVAL_QBIT_TAG'] in qbitItem.get('tags'):
protectedDownloadIDs.append(str.upper(qbitItem['hash']))
# Fetch private torrents
if settingsDict['IGNORE_PRIVATE_TRACKERS']:
if version.parse(settingsDict['QBIT_VERSION']) >= version.parse('5.1.0'):
if qbitItem['private']:
privateDowloadIDs.append(str.upper(qbitItem['hash']))
else:
qbitItemProperties = await rest_get(settingsDict['QBITTORRENT_URL']+'/torrents/properties',params={'hash': qbitItem['hash']}, cookies=settingsDict['QBIT_COOKIE'])
if not qbitItemProperties:
logger.error("Torrent %s not found on qBittorrent - potentially already removed whilst checking if torrent is private. Consider upgrading qBit to v5.1.0 or newer to avoid this problem.", qbitItem['hash'])
continue
if qbitItemProperties.get('is_private', False):
privateDowloadIDs.append(str.upper(qbitItem['hash']))
qbitItem['private'] = qbitItemProperties.get('is_private', None) # Adds the is_private flag to qbitItem info for simplified logging
logger.debug('main/getProtectedAndPrivateFromQbit/qbitItems: %s', str([{"hash": str.upper(item["hash"]), "name": item["name"], "category": item["category"], "tags": item["tags"], "private": item.get("private", None)} for item in qbitItems]))
logger.debug('main/getProtectedAndPrivateFromQbit/protectedDownloadIDs: %s', str(protectedDownloadIDs))
logger.debug('main/getProtectedAndPrivateFromQbit/privateDowloadIDs: %s', str(privateDowloadIDs))
return protectedDownloadIDs, privateDowloadIDs
def showWelcome():
# Welcome Message
logger.info('#' * 50)
logger.info('Decluttarr - Application Started!')
logger.info('')
logger.info('Like this app? Thanks for giving it a ⭐️ on GitHub!')
logger.info('https://github.com/ManiMatter/decluttarr/')
logger.info('')
return
def showSettings(settingsDict):
# Settings Message
fmt = '{0.days} days {0.hours} hours {0.minutes} minutes'
logger.info('*** Current Settings ***')
logger.info('Version: %s', settingsDict['IMAGE_TAG'])
logger.info('Commit: %s', settingsDict['SHORT_COMMIT_ID'])
logger.info('')
logger.info('%s | Removing failed downloads (%s)', str(settingsDict['REMOVE_FAILED']), 'REMOVE_FAILED')
logger.info('%s | Removing failed imports (%s)', str(settingsDict['REMOVE_FAILED_IMPORTS']), 'REMOVE_FAILED_IMPORTS')
if settingsDict['REMOVE_FAILED_IMPORTS'] and not settingsDict['FAILED_IMPORT_MESSAGE_PATTERNS']:
logger.verbose ('> Any imports with a warning flag are considered failed, as no patterns specified (%s).', 'FAILED_IMPORT_MESSAGE_PATTERNS')
elif settingsDict['REMOVE_FAILED_IMPORTS'] and settingsDict['FAILED_IMPORT_MESSAGE_PATTERNS']:
logger.verbose ('> Imports with a warning flag are considered failed if the status message contains any of the following patterns:')
for pattern in settingsDict['FAILED_IMPORT_MESSAGE_PATTERNS']:
logger.verbose(' - "%s"', pattern)
logger.info('%s | Removing downloads missing metadata (%s)', str(settingsDict['REMOVE_METADATA_MISSING']), 'REMOVE_METADATA_MISSING')
logger.info('%s | Removing downloads missing files (%s)', str(settingsDict['REMOVE_MISSING_FILES']), 'REMOVE_MISSING_FILES')
logger.info('%s | Removing orphan downloads (%s)', str(settingsDict['REMOVE_ORPHANS']), 'REMOVE_ORPHANS')
logger.info('%s | Removing slow downloads (%s)', str(settingsDict['REMOVE_SLOW']), 'REMOVE_SLOW')
logger.info('%s | Removing stalled downloads (%s)', str(settingsDict['REMOVE_STALLED']), 'REMOVE_STALLED')
logger.info('%s | Removing downloads belonging to unmonitored items (%s)', str(settingsDict['REMOVE_UNMONITORED']), 'REMOVE_UNMONITORED')
for arr_type, RESCAN_SETTINGS in settingsDict['RUN_PERIODIC_RESCANS'].items():
logger.info('%s/%s (%s) | Search missing/cutoff-unmet items. Max queries/list: %s. Min. days to re-search: %s (%s)', RESCAN_SETTINGS['MISSING'], RESCAN_SETTINGS['CUTOFF_UNMET'], arr_type, RESCAN_SETTINGS['MAX_CONCURRENT_SCANS'], RESCAN_SETTINGS['MIN_DAYS_BEFORE_RESCAN'], 'RUN_PERIODIC_RESCANS')
logger.info('')
logger.info('Running every: %s', fmt.format(rd(minutes=settingsDict['REMOVE_TIMER'])))
if settingsDict['REMOVE_SLOW']:
logger.info('Minimum speed enforced: %s KB/s', str(settingsDict['MIN_DOWNLOAD_SPEED']))
logger.info('Permitted number of times before stalled/missing metadata/slow downloads are removed: %s', str(settingsDict['PERMITTED_ATTEMPTS']))
if settingsDict['QBITTORRENT_URL']:
logger.info('Downloads with this tag will be skipped: \"%s\"', settingsDict['NO_STALLED_REMOVAL_QBIT_TAG'])
logger.info('Private Trackers will be skipped: %s', settingsDict['IGNORE_PRIVATE_TRACKERS'])
if settingsDict['IGNORED_DOWNLOAD_CLIENTS']:
logger.info('Download clients skipped: %s',", ".join(settingsDict['IGNORED_DOWNLOAD_CLIENTS']))
logger.info('')
logger.info('*** Configured Instances ***')
for instance in settingsDict['INSTANCES']:
if settingsDict[instance + '_URL']:
logger.info(
'%s%s: %s',
instance.title(),
f" ({settingsDict.get(instance + '_NAME')})" if settingsDict.get(instance + '_NAME') != instance.title() else "",
(settingsDict[instance + '_URL']).split('/api')[0]
)
if settingsDict['QBITTORRENT_URL']:
logger.info(
'qBittorrent: %s',
(settingsDict['QBITTORRENT_URL']).split('/api')[0]
)
logger.info('')
return
def upgradeChecks(settingsDict):
if settingsDict['REMOVE_NO_FORMAT_UPGRADE']:
logger.warn('❗️' * 10 + ' OUTDATED SETTINGS ' + '❗️' * 10 )
logger.warn('')
logger.warn("❗️ %s was replaced with %s.", 'REMOVE_NO_FORMAT_UPGRADE', 'REMOVE_FAILED_IMPORTS')
logger.warn("❗️ Please check the ReadMe and update your settings.")
logger.warn("❗️ Specifically read the section on %s.", 'FAILED_IMPORT_MESSAGE_PATTERNS')
logger.warn('')
logger.warn('❗️' * 29)
logger.warn('')
return
async def instanceChecks(settingsDict):
# Checks if the arr and qbit instances are reachable, and returns the settings dictionary with the qbit cookie
logger.info('*** Check Instances ***')
error_occured = False
# Check ARR-apps
for instance in settingsDict['INSTANCES']:
if settingsDict[instance + '_URL']:
# Check instance is reachable
try:
response = await asyncio.get_event_loop().run_in_executor(None, lambda: requests.get(settingsDict[instance + '_URL']+'/system/status', params=None, headers={'X-Api-Key': settingsDict[instance + '_KEY']}, verify=settingsDict['SSL_VERIFICATION']))
response.raise_for_status()
except Exception as error:
error_occured = True
logger.error('!! %s Error: !!', instance.title())
logger.error('> %s', error)
if isinstance(error, requests.exceptions.HTTPError) and error.response.status_code == 401:
logger.error ('> Have you configured %s correctly?', instance + '_KEY')
arr_status = response.json()
if not error_occured:
# Check if network settings are pointing to the right Arr-apps
current_app = arr_status['appName']
if current_app.upper() != instance:
error_occured = True
logger.error('!! %s Error: !!', instance.title())
logger.error('> Your %s points to a %s instance, rather than %s. Did you specify the wrong IP?', instance + '_URL', current_app, instance.title())
if not error_occured:
# Check minimum version requirements are met
current_version = arr_status['version']
if settingsDict[instance + '_MIN_VERSION']:
if version.parse(current_version) < version.parse(settingsDict[instance + '_MIN_VERSION']):
error_occured = True
logger.error('!! %s Error: !!', instance.title())
logger.error('> Please update %s to at least version %s. Current version: %s', instance.title(), settingsDict[instance + '_MIN_VERSION'], current_version)
if not error_occured:
# Check if language is english
uiLanguage = (await rest_get(settingsDict[instance + '_URL']+'/config/ui', settingsDict[instance + '_KEY']))['uiLanguage']
if uiLanguage > 1: # Not English
error_occured = True
logger.error('!! %s Error: !!', instance.title())
logger.error('> Decluttarr only works correctly if UI language is set to English (under Settings/UI in %s)', instance.title())
logger.error('> Details: https://github.com/ManiMatter/decluttarr/issues/132)')
if not error_occured:
logger.info('OK | %s', instance.title())
logger.debug('Current version of %s: %s', instance, current_version)
# Check Bittorrent
if settingsDict['QBITTORRENT_URL']:
# Checking if qbit can be reached, and checking if version is OK
await qBitRefreshCookie(settingsDict)
if not settingsDict['QBIT_COOKIE']:
error_occured = True
if not error_occured:
qbit_version = await rest_get(settingsDict['QBITTORRENT_URL']+'/app/version',cookies=settingsDict['QBIT_COOKIE'])
qbit_version = qbit_version[1:] # version without _v
settingsDict['QBIT_VERSION'] = qbit_version
if version.parse(qbit_version) < version.parse(settingsDict['QBITTORRENT_MIN_VERSION']):
error_occured = True
logger.error('-- | %s *** Error: %s ***', 'qBittorrent', 'Please update qBittorrent to at least version %s Current version: %s',settingsDict['QBITTORRENT_MIN_VERSION'], qbit_version)
if not error_occured:
logger.info('OK | %s', 'qBittorrent')
if version.parse(settingsDict['QBIT_VERSION']) < version.parse('5.1.0'):
logger.info('>>> [Tip!] qBittorrent (Consider upgrading to v5.1.0 or newer to reduce network overhead. You are on %s)', qbit_version) # Particularly if people have many torrents and use private trackers
logger.debug('Current version of %s: %s', 'qBittorrent', qbit_version)
if error_occured:
logger.warning('At least one instance had a problem. Waiting for 60 seconds, then exiting Decluttarr.')
await asyncio.sleep(60)
exit()
logger.info('')
return settingsDict
async def createQbitProtectionTag(settingsDict):
# Creates the qBit Protection tag if not already present
if settingsDict['QBITTORRENT_URL']:
current_tags = await rest_get(settingsDict['QBITTORRENT_URL']+'/torrents/tags',cookies=settingsDict['QBIT_COOKIE'])
if not settingsDict['NO_STALLED_REMOVAL_QBIT_TAG'] in current_tags:
if settingsDict['QBITTORRENT_URL']:
logger.info('Creating tag in qBittorrent: %s', settingsDict['NO_STALLED_REMOVAL_QBIT_TAG'])
if not settingsDict['TEST_RUN']:
await rest_post(url=settingsDict['QBITTORRENT_URL']+'/torrents/createTags', data={'tags': settingsDict['NO_STALLED_REMOVAL_QBIT_TAG']}, headers={'content-type': 'application/x-www-form-urlencoded'}, cookies=settingsDict['QBIT_COOKIE'])
def showLoggerLevel(settingsDict):
logger.info('#' * 50)
if settingsDict['LOG_LEVEL'] == 'INFO':
logger.info('LOG_LEVEL = INFO: Only logging changes (switch to VERBOSE for more info)')
else:
logger.info(f'')
if settingsDict['TEST_RUN']:
logger.info(f'*'* 50)
logger.info(f'*'* 50)
logger.info(f'')
logger.info(f'!! TEST_RUN FLAG IS SET !!')
logger.info(f'NO UPDATES/DELETES WILL BE PERFORMED')
logger.info(f'')
logger.info(f'*'* 50)
logger.info(f'*'* 50)

57
src/utils/log_setup.py Normal file
View File

@@ -0,0 +1,57 @@
import logging
import os
from logging.handlers import RotatingFileHandler
# Track added logging levels
_added_levels = {}
def add_logging_level(level_name, level_num):
"""Dynamically add a custom logging level."""
if level_name in _added_levels or level_num in _added_levels.values():
raise ValueError(f"Logging level '{level_name}' or number '{level_num}' already exists.")
logging.addLevelName(level_num, level_name.upper())
def log_method(self, message, *args, **kwargs):
if self.isEnabledFor(level_num):
self.log(level_num, message, *args, **kwargs)
setattr(logging.Logger, level_name.lower(), log_method)
setattr(logging, level_name.upper(), level_num)
_added_levels[level_name] = level_num
# Add custom logging levels
add_logging_level("TRACE", 5)
add_logging_level("VERBOSE", 15)
# Configure the default logger
logger = logging.getLogger(__name__)
# Default console handler
console_handler = logging.StreamHandler()
console_format = logging.Formatter("%(asctime)s | %(levelname)-7s | %(message)s", "%Y-%m-%d %H:%M:%S")
console_handler.setFormatter(console_format)
logger.addHandler(console_handler)
logger.setLevel(logging.INFO)
def configure_logging(settings):
"""Add a file handler and adjust log levels for all handlers."""
log_file = settings.paths.logs
log_dir = os.path.dirname(log_file)
os.makedirs(log_dir, exist_ok=True)
# File handler
file_handler = RotatingFileHandler(log_file, maxBytes=50 * 1024 * 1024, backupCount=2)
file_format = logging.Formatter("%(asctime)s | %(levelname)-7s | %(message)s", "%Y-%m-%d %H:%M:%S")
file_handler.setFormatter(file_format)
logger.addHandler(file_handler)
# Update log level for all handlers
log_level = getattr(logging, settings.general.log_level.upper(), logging.INFO)
for handler in logger.handlers:
handler.setLevel(log_level)
logger.setLevel(log_level)
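The `add_logging_level` helper above mirrors a widely used recipe for registering custom levels; a self-contained sketch:

```python
import logging


def add_level(name, num):
    """Register a custom level and a matching Logger method."""
    logging.addLevelName(num, name.upper())

    def log_method(self, message, *args, **kwargs):
        if self.isEnabledFor(num):
            self.log(num, message, *args, **kwargs)

    setattr(logging.Logger, name.lower(), log_method)
    setattr(logging, name.upper(), num)


add_level("verbose", 15)  # between DEBUG (10) and INFO (20)
log = logging.getLogger("demo")
log.setLevel(logging.VERBOSE)
log.verbose("now routable like any built-in level")
print(logging.getLevelName(15))  # VERBOSE
```

Patching the method onto `logging.Logger` makes the new level available on every logger in the process, which is why the module guards against double registration.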

View File

@@ -1,47 +0,0 @@
def nested_set(dic, keys, value, matchConditions=None):
# Sets the value of a key in a dictionary to a certain value.
# If multiple items are present, it can filter for a matching item
for key in keys[:-1]:
dic = dic.setdefault(key, {})
if matchConditions:
i = 0
match = False
for item in dic:
for matchCondition in matchConditions:
if item[matchCondition] != matchConditions[matchCondition]:
match = False
break
else:
match = True
if match:
dic = dic[i]
break
i += 1
dic[keys[-1]] = value
def add_keys_nested_dict(d, keys, defaultValue=None):
# Creates a nested value if key does not exist
for key in keys[:-1]:
if key not in d:
d[key] = {}
d = d[key]
d.setdefault(keys[-1], defaultValue)
def nested_get(dic, return_attribute, matchConditions):
# Retrieves a list contained in return_attribute, found within dic based on matchConditions
i = 0
match = False
hits = []
for item in dic:
for matchCondition in matchConditions:
if item[matchCondition] != matchConditions[matchCondition]:
match = False
break
else:
match = True
if match:
hits.append(dic[i][return_attribute])
i += 1
return hits

193
src/utils/queue_manager.py Normal file
View File

@@ -0,0 +1,193 @@
from src.utils.log_setup import logger
from src.utils.common import make_request
class QueueManager:
def __init__(self, arr, settings):
self.arr = arr
self.settings = settings
async def get_queue_items(self, queue_scope):
"""
Retrieves queue items based on the scope.
queue_scope:
"normal" = normal queue
"orphans" = orphaned queue items (in full queue but not in normal queue)
"full" = full queue
"""
if queue_scope == "normal":
queue_items = await self._get_queue(full_queue=False)
elif queue_scope == "orphans":
full_queue = await self._get_queue(full_queue=True)
queue = await self._get_queue(full_queue=False)
queue_items = [fq for fq in full_queue if fq not in queue]
elif queue_scope == "full":
queue_items = await self._get_queue(full_queue=True)
else:
raise ValueError(f"Invalid queue_scope: {queue_scope}")
return queue_items
async def _get_queue(self, full_queue=False):
# Step 1: Refresh the queue (now internal)
await self._refresh_queue()
# Step 2: Get the total number of records
record_count = await self._get_total_records(full_queue)
# Step 3: Get all records using `arr.full_queue_parameter`
queue = await self._get_arr_records(full_queue, record_count)
# Step 4: Filter the queue based on delayed items and ignored download clients
queue = self._ignore_delayed_queue_items(queue)
queue = self._filter_out_ignored_download_clients(queue)
queue = self._add_detail_item_key(queue)
return queue
def _add_detail_item_key(self, queue):
"""Normalizes episodeID, bookID, etc so it can just be called by 'detail_item_id'"""
for items in queue:
items["detail_item_id"] = items.get(self.arr.detail_item_id_key)
return queue
async def _refresh_queue(self):
# Refresh the queue by making the POST request using an external make_request function
await make_request(
method="POST",
endpoint=f"{self.arr.api_url}/command",
settings=self.settings,
json={"name": "RefreshMonitoredDownloads"},
headers={"X-Api-Key": self.arr.api_key},
)
async def _get_total_records(self, full_queue):
# Get the total number of records from the queue using `arr.full_queue_parameter`
params = {self.arr.full_queue_parameter: full_queue}
response = (
await make_request(
method="GET",
endpoint=f"{self.arr.api_url}/queue",
settings=self.settings,
params=params,
headers={"X-Api-Key": self.arr.api_key},
)
).json()
return response["totalRecords"]
async def _get_arr_records(self, full_queue, record_count):
# Get all records based on the count (with pagination) using `arr.full_queue_parameter`
if record_count == 0:
return []
params = {"page": "1", "pageSize": record_count}
if full_queue:
params |= {self.arr.full_queue_parameter: full_queue}
records = (
await make_request(
method="GET",
endpoint=f"{self.arr.api_url}/queue",
settings=self.settings,
params=params,
headers={"X-Api-Key": self.arr.api_key},
)
).json()
return records["records"]
def _ignore_delayed_queue_items(self, queue):
# Ignores delayed queue items
if queue is None:
return queue
seen_combinations = set()
filtered_queue = []
for queue_item in queue:
indexer = queue_item.get("indexer", "No indexer")
protocol = queue_item.get("protocol", "No protocol")
combination = (queue_item["title"], protocol, indexer)
if queue_item["status"] == "delay":
if combination not in seen_combinations:
seen_combinations.add(combination)
logger.debug(
">>> Delayed queue item ignored: %s (Protocol: %s, Indexer: %s)",
queue_item["title"],
protocol,
indexer,
)
else:
filtered_queue.append(queue_item)
return filtered_queue
def _filter_out_ignored_download_clients(self, queue):
# Filters out ignored download clients
if queue is None:
return queue
filtered_queue = []
for queue_item in queue:
download_client = queue_item.get("downloadClient", "Unknown client")
if download_client in self.settings.general.ignored_download_clients:
logger.debug(
">>> Queue item ignored due to ignored download client: %s (Download Client: %s)",
queue_item["title"],
download_client,
)
else:
filtered_queue.append(queue_item)
return filtered_queue
def format_queue(self, queue_items):
if not queue_items:
return "empty"
formatted_dict = {}
for queue_item in queue_items:
download_id = queue_item.get("downloadId")
item_id = queue_item.get("id")
if download_id in formatted_dict:
formatted_dict[download_id]["IDs"].append(item_id)
else:
formatted_dict[download_id] = {
"downloadId": download_id,
"downloadTitle": queue_item.get("title"),
"IDs": [item_id],
"protocol": [queue_item.get("protocol")],
"status": [queue_item.get("status")],
}
return list(formatted_dict.values())
def group_by_download_id(self, queue_items):
# Groups queue items by download ID and returns a dict where download ID is the key, and value is the list of queue items belonging to that downloadID
# Queue item is limited to certain keys
retain_keys = {
"id": None,
"detail_item_id": None,
"title": "Unknown",
"size": 0,
"sizeleft": 0,
"downloadClient": "Unknown",
"protocol": "Unknown",
"status": "Unknown",
"trackedDownloadState": "Unknown",
"statusMessages": [],
"removal_messages": [],
}
grouped_dict = {}
for queue_item in queue_items:
download_id = queue_item["downloadId"]
if download_id not in grouped_dict:
grouped_dict[download_id] = []
# Filter and add default values if keys are missing
filtered_item = {
key: queue_item.get(key, retain_keys.get(key, None))
for key in retain_keys
}
grouped_dict[download_id].append(filtered_item)
return grouped_dict
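The grouping in `group_by_download_id` boils down to bucketing by a key while keeping only a whitelist of fields with defaults. A reduced sketch with made-up queue items and a shortened key whitelist:

```python
from collections import defaultdict

# Shortened stand-in for the retain_keys whitelist above
RETAIN = {"id": None, "title": "Unknown", "status": "Unknown"}


def group_by_download_id(queue_items):
    """Bucket queue items by downloadId, keeping a fixed subset of keys with defaults."""
    grouped = defaultdict(list)
    for item in queue_items:
        filtered = {key: item.get(key, default) for key, default in RETAIN.items()}
        grouped[item["downloadId"]].append(filtered)
    return dict(grouped)


queue = [
    {"downloadId": "ABC", "id": 1, "title": "S01E01", "status": "stalled"},
    {"downloadId": "ABC", "id": 2, "title": "S01E02"},  # missing status -> default
    {"downloadId": "XYZ", "id": 3, "title": "Movie", "status": "downloading"},
]
grouped = group_by_download_id(queue)
print(sorted(grouped))              # ['ABC', 'XYZ']
print(grouped["ABC"][1]["status"])  # Unknown
```

Grouping by `downloadId` matters because a single torrent can map to several queue records (one per episode), and removal decisions are made per download, not per record.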

View File

@@ -1,109 +0,0 @@
########### Functions to call radarr/sonarr APIs
import logging
import asyncio
import requests
from requests.exceptions import RequestException
import json
from config.definitions import settingsDict
# GET
async def rest_get(url, api_key=None, params=None, cookies=None):
try:
headers = {"X-Api-Key": api_key} if api_key else None
response = await asyncio.get_event_loop().run_in_executor(
None,
lambda: requests.get(
url,
params=params,
headers=headers,
cookies=cookies,
verify=settingsDict["SSL_VERIFICATION"],
),
)
response.raise_for_status()
return response.json()
    except requests.exceptions.HTTPError as e:
        logging.error(f"HTTP error from {url}: {e}")
        return None
    except RequestException as e:
        logging.error(f"Error making API request to {url}: {e}")
        return None
    except ValueError as e:
        logging.error(f"Error parsing JSON response from {url}: {e}")
        return None
# DELETE
async def rest_delete(url, api_key, params=None):
if settingsDict["TEST_RUN"]:
return
try:
headers = {"X-Api-Key": api_key}
response = await asyncio.get_event_loop().run_in_executor(
None,
lambda: requests.delete(
url,
params=params,
headers=headers,
verify=settingsDict["SSL_VERIFICATION"],
),
)
response.raise_for_status()
if response.status_code in [200, 204]:
return None
return response.json()
except RequestException as e:
logging.error(f"Error making API request to {url}: {e}")
return None
except ValueError as e:
logging.error(f"Error parsing JSON response from {url}: {e}")
return None
# POST
async def rest_post(url, data=None, json=None, headers=None, cookies=None):
if settingsDict["TEST_RUN"]:
return
try:
response = await asyncio.get_event_loop().run_in_executor(
None,
lambda: requests.post(
url,
data=data,
json=json,
headers=headers,
cookies=cookies,
verify=settingsDict["SSL_VERIFICATION"],
),
)
response.raise_for_status()
if response.status_code in (200, 201):
return None
return response.json()
except RequestException as e:
logging.error(f"Error making API request to {url}: {e}")
return None
except ValueError as e:
logging.error(f"Error parsing JSON response from {url}: {e}")
return None
# PUT
async def rest_put(url, api_key, data):
if settingsDict["TEST_RUN"]:
return
try:
headers = {"X-Api-Key": api_key} | {"content-type": "application/json"}
response = await asyncio.get_event_loop().run_in_executor(
None,
lambda: requests.put(
url, data=data, headers=headers, verify=settingsDict["SSL_VERIFICATION"]
),
)
response.raise_for_status()
return response.json()
except RequestException as e:
logging.error(f"Error making API request to {url}: {e}")
return None
except ValueError as e:
logging.error(f"Error parsing JSON response from {url}: {e}")
return None
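All four wrappers above share one pattern: push the blocking `requests` call onto the default thread-pool executor so it does not block the event loop. A minimal, self-contained sketch of that pattern (no real HTTP; `blocking_call` is a stand-in for `requests.get(...)`):

```python
# Sketch of the run_in_executor pattern used by the REST wrappers above.
import asyncio
import time


def blocking_call(x):
    time.sleep(0.01)  # stands in for a blocking requests.get(...) call
    return x * 2


async def rest_like_wrapper(x):
    loop = asyncio.get_running_loop()
    # None -> use the default ThreadPoolExecutor
    return await loop.run_in_executor(None, lambda: blocking_call(x))


result = asyncio.run(rest_like_wrapper(21))
print(result)  # 42
```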

View File

@@ -1,411 +0,0 @@
# Shared Functions
import logging, verboselogs
import asyncio
import requests
logger = verboselogs.VerboseLogger(__name__)
from src.utils.rest import rest_get, rest_delete, rest_post
from src.utils.nest_functions import add_keys_nested_dict, nested_get
import sys, os, traceback
async def get_arr_records(BASE_URL, API_KEY, params={}, end_point=""):
# All records from a given endpoint
record_count = (await rest_get(f"{BASE_URL}/{end_point}", API_KEY, params))[
"totalRecords"
]
if record_count == 0:
return []
records = await rest_get(
f"{BASE_URL}/{end_point}",
API_KEY,
{"page": "1", "pageSize": record_count} | params,
)
return records["records"]
async def get_queue(BASE_URL, API_KEY, settingsDict, params={}):
# Refreshes and retrieves the current queue
await rest_post(
url=BASE_URL + "/command",
json={"name": "RefreshMonitoredDownloads"},
headers={"X-Api-Key": API_KEY},
)
queue = await get_arr_records(BASE_URL, API_KEY, params=params, end_point="queue")
queue = filterOutDelayedQueueItems(queue)
queue = filterOutIgnoredDownloadClients(queue, settingsDict)
return queue
def filterOutDelayedQueueItems(queue):
# Ignores delayed queue items
if queue is None:
return queue
seen_combinations = set()
filtered_queue = []
for queue_item in queue:
# Use get() method with default value "No indexer" if 'indexer' key does not exist
indexer = queue_item.get("indexer", "No indexer")
protocol = queue_item.get("protocol", "No protocol")
combination = (queue_item["title"], protocol, indexer)
if queue_item["status"] == "delay":
if combination not in seen_combinations:
seen_combinations.add(combination)
logger.debug(
">>> Delayed queue item ignored: %s (Protocol: %s, Indexer: %s)",
queue_item["title"],
protocol,
indexer,
)
else:
filtered_queue.append(queue_item)
return filtered_queue
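An illustrative sketch (invented toy data) of the filtering above: drop every `"delay"`-status item, but log each unique `(title, protocol, indexer)` combination only once via a `seen` set.

```python
# Sketch of filterOutDelayedQueueItems: drop delayed items, log each combo once.
def filter_delayed(queue):
    seen, kept, logged = set(), [], []
    for item in queue:
        combo = (
            item["title"],
            item.get("protocol", "No protocol"),
            item.get("indexer", "No indexer"),
        )
        if item["status"] == "delay":
            if combo not in seen:
                seen.add(combo)
                logged.append(combo)  # stands in for logger.debug(...)
        else:
            kept.append(item)
    return kept, logged


queue = [
    {"title": "A", "status": "delay", "protocol": "torrent", "indexer": "X"},
    {"title": "A", "status": "delay", "protocol": "torrent", "indexer": "X"},  # duplicate
    {"title": "B", "status": "downloading"},
]
kept, logged = filter_delayed(queue)
print(len(kept), len(logged))  # 1 1
```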
def filterOutIgnoredDownloadClients(queue, settingsDict):
"""
Filters out queue items whose download client is listed in IGNORED_DOWNLOAD_CLIENTS.
"""
if queue is None:
return queue
filtered_queue = []
for queue_item in queue:
download_client = queue_item.get("downloadClient", "Unknown client")
if download_client in settingsDict["IGNORED_DOWNLOAD_CLIENTS"]:
logger.debug(
">>> Queue item ignored due to ignored download client: %s (Download Client: %s)",
queue_item["title"],
download_client,
)
else:
filtered_queue.append(queue_item)
return filtered_queue
def privateTrackerCheck(settingsDict, affectedItems, failType, privateDowloadIDs):
# Ignores private tracker items (if setting is turned on)
for affectedItem in reversed(affectedItems):
if (
settingsDict["IGNORE_PRIVATE_TRACKERS"]
and affectedItem["downloadId"] in privateDowloadIDs
):
affectedItems.remove(affectedItem)
return affectedItems
def protectedDownloadCheck(settingsDict, affectedItems, failType, protectedDownloadIDs):
# Checks if torrent is protected and skips
for affectedItem in reversed(affectedItems):
if affectedItem["downloadId"] in protectedDownloadIDs:
logger.verbose(
">>> Detected %s download, tagged not to be killed: %s",
failType,
affectedItem["title"],
)
logger.debug(
">>> DownloadID of above %s download (%s): %s",
failType,
affectedItem["title"],
affectedItem["downloadId"],
)
affectedItems.remove(affectedItem)
return affectedItems
async def execute_checks(
settingsDict,
affectedItems,
failType,
BASE_URL,
API_KEY,
NAME,
deleted_downloads,
defective_tracker,
privateDowloadIDs,
protectedDownloadIDs,
addToBlocklist,
doPrivateTrackerCheck,
doProtectedDownloadCheck,
doPermittedAttemptsCheck,
extraParameters={},
):
# Goes over the affected items and performs the checks that are parametrized
try:
# De-duplicates the affected items (one downloadid may be shared by multiple affected items)
downloadIDs = []
for affectedItem in reversed(affectedItems):
if affectedItem["downloadId"] not in downloadIDs:
downloadIDs.append(affectedItem["downloadId"])
else:
affectedItems.remove(affectedItem)
# Skips protected items
if doPrivateTrackerCheck:
affectedItems = privateTrackerCheck(
settingsDict, affectedItems, failType, privateDowloadIDs
)
if doProtectedDownloadCheck:
affectedItems = protectedDownloadCheck(
settingsDict, affectedItems, failType, protectedDownloadIDs
)
# Checks if failing more often than permitted
if doPermittedAttemptsCheck:
affectedItems = permittedAttemptsCheck(
settingsDict, affectedItems, failType, BASE_URL, defective_tracker
)
# Deletes all downloads that have not survived the checks
for affectedItem in affectedItems:
# Checks whether when removing the queue item from the *arr app the torrent should be kept
removeFromClient = True
if extraParameters.get("keepTorrentForPrivateTrackers", False):
if (
settingsDict["IGNORE_PRIVATE_TRACKERS"]
and affectedItem["downloadId"] in privateDowloadIDs
):
removeFromClient = False
# Removes the queue item
await remove_download(
settingsDict,
BASE_URL,
API_KEY,
affectedItem,
failType,
addToBlocklist,
deleted_downloads,
removeFromClient,
)
# Exit Logs
if settingsDict["LOG_LEVEL"] == "DEBUG":
queue = await get_queue(BASE_URL, API_KEY, settingsDict)
logger.debug(
"execute_checks/queue OUT (failType: %s): %s",
failType,
formattedQueueInfo(queue),
)
# Return removed items
return affectedItems
except Exception as error:
errorDetails(NAME, error)
return []
def permittedAttemptsCheck(
settingsDict, affectedItems, failType, BASE_URL, defective_tracker
):
    # Checks if downloads are repeatedly found as stalled / stuck in metadata. Removes the items that are not exceeding the permitted attempts
# Shows all affected items (for debugging)
logger.debug(
"permittedAttemptsCheck/affectedItems: %s",
", ".join(
f"{affectedItem['id']}:{affectedItem['title']}:{affectedItem['downloadId']}"
for affectedItem in affectedItems
),
)
# 2. Check if those that were previously defective are no longer defective -> those are recovered
affectedDownloadIDs = [affectedItem["downloadId"] for affectedItem in affectedItems]
try:
recoveredDownloadIDs = [
trackedDownloadIDs
for trackedDownloadIDs in defective_tracker.dict[BASE_URL][failType]
if trackedDownloadIDs not in affectedDownloadIDs
]
except KeyError:
recoveredDownloadIDs = []
logger.debug(
"permittedAttemptsCheck/recoveredDownloadIDs: %s", str(recoveredDownloadIDs)
)
for recoveredDownloadID in recoveredDownloadIDs:
logger.info(
">>> Download no longer marked as %s: %s",
failType,
defective_tracker.dict[BASE_URL][failType][recoveredDownloadID]["title"],
)
del defective_tracker.dict[BASE_URL][failType][recoveredDownloadID]
logger.debug(
"permittedAttemptsCheck/defective_tracker.dict IN: %s",
str(defective_tracker.dict),
)
# 3. For those that are defective, add attempt + 1 if present before, or make attempt = 1.
for affectedItem in reversed(affectedItems):
try:
defective_tracker.dict[BASE_URL][failType][affectedItem["downloadId"]][
"Attempts"
] += 1
except KeyError:
add_keys_nested_dict(
defective_tracker.dict,
[BASE_URL, failType, affectedItem["downloadId"]],
{"title": affectedItem["title"], "Attempts": 1},
)
attempts_left = (
settingsDict["PERMITTED_ATTEMPTS"]
- defective_tracker.dict[BASE_URL][failType][affectedItem["downloadId"]][
"Attempts"
]
)
# If not exceeding the number of permitted times, remove from being affected
if attempts_left >= 0: # Still got attempts left
logger.info(
">>> Detected %s download (%s out of %s permitted times): %s",
failType,
str(
defective_tracker.dict[BASE_URL][failType][
affectedItem["downloadId"]
]["Attempts"]
),
str(settingsDict["PERMITTED_ATTEMPTS"]),
affectedItem["title"],
)
affectedItems.remove(affectedItem)
if attempts_left <= -1: # Too many attempts
logger.info(
">>> Detected %s download too many times (%s out of %s permitted times): %s",
failType,
str(
defective_tracker.dict[BASE_URL][failType][
affectedItem["downloadId"]
]["Attempts"]
),
str(settingsDict["PERMITTED_ATTEMPTS"]),
affectedItem["title"],
)
if (
attempts_left <= -2
): # Too many attempts and should already have been removed
# If supposedly deleted item keeps coming back, print out guidance for "Reject Blocklisted Torrent Hashes While Grabbing"
logger.verbose(
'>>> [Tip!] Since this download should already have been removed in a previous iteration but keeps coming back, this indicates the blocking of the torrent does not work correctly. Consider turning on the option "Reject Blocklisted Torrent Hashes While Grabbing" on the indexer in the *arr app: %s',
affectedItem["title"],
)
logger.debug(
"permittedAttemptsCheck/defective_tracker.dict OUT: %s",
str(defective_tracker.dict),
)
return affectedItems
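The attempt-counting logic above can be reduced to a small sketch (illustrative only, mirroring the behavior of the code): each detection increments a per-download counter, and the download is only flagged for removal once the counter exceeds the permitted number of attempts.

```python
# Sketch of permittedAttemptsCheck's strike counter.
PERMITTED_ATTEMPTS = 3


def register_strike(tracker, download_id):
    tracker[download_id] = tracker.get(download_id, 0) + 1
    # attempts_left >= 0 means the download still gets another chance
    attempts_left = PERMITTED_ATTEMPTS - tracker[download_id]
    return attempts_left < 0  # True -> remove now


tracker = {}
results = [register_strike(tracker, "A1") for _ in range(4)]
print(results)  # [False, False, False, True]
```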
async def remove_download(
settingsDict,
BASE_URL,
API_KEY,
affectedItem,
failType,
addToBlocklist,
deleted_downloads,
removeFromClient,
):
# Removes downloads and creates log entry
logger.debug(
"remove_download/deleted_downloads.dict IN: %s", str(deleted_downloads.dict)
)
if affectedItem["downloadId"] not in deleted_downloads.dict:
# "schizophrenic" removal:
# Yes, the failed imports are removed from the -arr apps (so the removal kicks still in)
# But in the torrent client they are kept
if removeFromClient:
logger.info(">>> Removing %s download: %s", failType, affectedItem["title"])
else:
logger.info(
">>> Removing %s download (without removing from torrent client): %s",
failType,
affectedItem["title"],
)
# Print out detailed removal messages (if any were added in the jobs)
if "removal_messages" in affectedItem:
for removal_message in affectedItem["removal_messages"]:
logger.info(removal_message)
if not settingsDict["TEST_RUN"]:
await rest_delete(
f'{BASE_URL}/queue/{affectedItem["id"]}',
API_KEY,
{"removeFromClient": removeFromClient, "blocklist": addToBlocklist},
)
deleted_downloads.dict.append(affectedItem["downloadId"])
logger.debug(
"remove_download/deleted_downloads.dict OUT: %s", str(deleted_downloads.dict)
)
return
def errorDetails(NAME, error):
exc_type, exc_obj, exc_tb = sys.exc_info()
fname = os.path.split(exc_tb.tb_frame.f_code.co_filename)[1]
logger.warning(
">>> Queue cleaning failed on %s. (File: %s / Line: %s / %s)",
NAME,
fname,
exc_tb.tb_lineno,
traceback.format_exc(),
)
return
def formattedQueueInfo(queue):
try:
# Returns queueID, title, and downloadID
if not queue:
return "empty"
formatted_list = []
for queue_item in queue:
download_id = queue_item.get("downloadId", None)
item_id = queue_item.get("id", None)
# Check if there is an entry with the same download_id and title
existing_entry = next(
(item for item in formatted_list if item["downloadId"] == download_id),
None,
)
if existing_entry:
existing_entry["IDs"].append(item_id)
else:
formatted_list.append({
"downloadId": download_id,
"downloadTitle": queue_item.get("title"),
"IDs": [item_id],
"protocol": [queue_item.get("protocol")],
"status": [queue_item.get("status")],
})
return formatted_list
except Exception as error:
errorDetails("formattedQueueInfo", error)
logger.debug("formattedQueueInfo/queue for debug: %s", str(queue))
return "error"
async def qBitOffline(settingsDict, failType, NAME):
if settingsDict["QBITTORRENT_URL"]:
qBitConnectionStatus = (
await rest_get(
settingsDict["QBITTORRENT_URL"] + "/sync/maindata",
cookies=settingsDict["QBIT_COOKIE"],
)
)["server_state"]["connection_status"]
if qBitConnectionStatus == "disconnected":
logger.warning(
">>> qBittorrent is disconnected. Skipping %s queue cleaning failed on %s.",
failType,
NAME,
)
return True
return False
async def qBitRefreshCookie(settingsDict):
    response = None
    try:
        response = await asyncio.get_event_loop().run_in_executor(
            None,
            lambda: requests.post(
                settingsDict["QBITTORRENT_URL"] + "/auth/login",
                data={
                    "username": settingsDict["QBITTORRENT_USERNAME"],
                    "password": settingsDict["QBITTORRENT_PASSWORD"],
                },
                headers={"content-type": "application/x-www-form-urlencoded"},
                verify=settingsDict["SSL_VERIFICATION"],
            ),
        )
        if response.text == "Fails.":
            raise ConnectionError("Login failed.")
        response.raise_for_status()
        settingsDict["QBIT_COOKIE"] = {"SID": response.cookies["SID"]}
        logger.debug("qBit cookie refreshed!")
    except Exception as error:
        logger.error("!! %s Error: !!", "qBittorrent")
        logger.error("> %s", error)
        if response is not None:
            logger.error("> Details:")
            logger.error(response.text)
        settingsDict["QBIT_COOKIE"] = {}

69
src/utils/startup.py Normal file
View File

@@ -0,0 +1,69 @@
import warnings
from src.utils.log_setup import logger
def show_welcome(settings):
messages = []
# Show welcome message
messages.append("🎉🎉🎉 Decluttarr - Application Started! 🎉🎉🎉")
messages.append("-"*80)
messages.append("")
messages.append("Like this app? Thanks for giving it a ⭐️ on GitHub!")
messages.append("https://github.com/ManiMatter/decluttarr/")
# Show info level tip
if settings.general.log_level == "INFO":
messages.append("")
messages.append("")
messages.append("💡 Tip: More logs?")
messages.append("If you want to know more about what's going on, switch log level to 'VERBOSE'")
# Show bug report tip
messages.append("")
messages.append("")
messages.append("🐛 Found a bug?")
messages.append("Before reporting bugs on GitHub, please:")
messages.append("1) Check the readme on github")
messages.append("2) Check open and closed issues on github")
messages.append("3) Switch your logs to 'DEBUG' level")
messages.append("4) Turn off any features other than the one(s) causing it")
messages.append("5) Provide the full logs via pastebin on your GitHub issue")
messages.append("Once submitted, thanks for being responsive and helping debug / re-test")
# Show test mode tip
    if settings.general.test_run:
        messages.append("")
        messages.append("")
        messages.append("=================== IMPORTANT ====================")
        messages.append("")
        messages.append("⚠️ ⚠️ ⚠️ TEST MODE IS ACTIVE ⚠️ ⚠️ ⚠️")
        messages.append("Decluttarr won't actually do anything for you...")
        messages.append("You can change this via the setting 'test_run'")
    messages.append("")
    messages.append("")
    messages.append("-" * 80)
# Log all messages at once
logger.info("\n".join(messages))
async def launch_steps(settings):
# Hide SSL Verification Warnings
if not settings.general.ssl_verification:
warnings.filterwarnings("ignore", message="Unverified HTTPS request")
logger.info(settings)
show_welcome(settings)
logger.info("*** Checking Instances ***")
# Check qbit, fetch initial cookie, and set tag (if needed)
for qbit in settings.download_clients.qbittorrent:
await qbit.setup()
# Setup arrs (apply checks, and store information)
settings.instances.check_any_arrs()
for arr in settings.instances.arrs:
await arr.setup()

View File

@@ -1,17 +0,0 @@
# Set up classes that allow tracking of items from one loop to the next
class Defective_Tracker:
# Keeps track of which downloads were already caught as stalled previously
def __init__(self, dict):
self.dict = dict
class Download_Sizes_Tracker:
# Keeps track of the file sizes of the downloads
def __init__(self, dict):
self.dict = dict
class Deleted_Downloads:
# Keeps track of which downloads have already been deleted (to not double-delete)
def __init__(self, dict):
self.dict = dict

View File

@@ -0,0 +1,65 @@
from src.utils.common import make_request
class WantedManager:
def __init__(self, arr, settings):
self.arr = arr
self.settings = settings
async def get_wanted_items(self, missing_or_cutoff):
"""
Retrieves wanted items :
missing_or_cutoff: Drives whether missing or cutoff items are retrieved
"""
        record_count = await self._get_total_records(missing_or_cutoff)
        records = await self._get_arr_records(missing_or_cutoff, record_count)
        return records
async def _get_total_records(self, missing_or_cutoff):
# Get the total number of records from wanted
response = (
await make_request(
method="GET",
endpoint=f"{self.arr.api_url}/wanted/{missing_or_cutoff}",
settings=self.settings,
headers={"X-Api-Key": self.arr.api_key},
)
).json()
return response["totalRecords"]
async def _get_arr_records(self, missing_or_cutoff, record_count):
# Get all records based on the count (with pagination)
if record_count == 0:
return []
sort_key = f"{self.arr.detail_item_key}s.lastSearchTime"
params = {"page": "1", "pageSize": record_count, "sortKey": sort_key}
records = (
await make_request(
method="GET",
endpoint=f"{self.arr.api_url}/wanted/{missing_or_cutoff}",
settings=self.settings,
params=params,
headers={"X-Api-Key": self.arr.api_key},
)
).json()
return records["records"]
async def search_items(self, detail_ids):
"""Search items by detail IDs"""
if isinstance(detail_ids, str):
detail_ids = [detail_ids]
json = {
"name": self.arr.detail_item_search_command,
self.arr.detail_item_ids_key: detail_ids,
}
await make_request(
method="POST",
endpoint=f"{self.arr.api_url}/command",
settings=self.settings,
json=json,
headers={"X-Api-Key": self.arr.api_key},
)
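The two-step pagination used by `get_wanted_items` above (read `totalRecords` first, then request every record in a single page of exactly that size) can be sketched against a fake in-memory "API" (illustrative only, no real endpoint):

```python
# Sketch of the totalRecords-then-fetch-all pagination pattern.
DATA = [{"id": i} for i in range(7)]


def fake_api(page, page_size):
    # Stands in for GET /wanted/{missing_or_cutoff} with page/pageSize params
    start = (page - 1) * page_size
    return {"totalRecords": len(DATA), "records": DATA[start:start + page_size]}


def get_all_records():
    total = fake_api(page=1, page_size=1)["totalRecords"]
    if total == 0:
        return []
    # One request sized to the full record count returns everything at once
    return fake_api(page=1, page_size=total)["records"]


records = get_all_records()
print(len(records))  # 7
```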

View File

@@ -1,33 +0,0 @@
{
"records": [
{
"id": 1,
"downloadId": "A123",
"title": "Sonarr Title 1",
"status": "completed",
"trackedDownloadStatus": "ok",
"trackedDownloadState": "importing",
"statusMessages": []
},
{
"id": 2,
"downloadId": "B123",
"title": "Sonarr Title 2",
"status": "completed",
"trackedDownloadStatus": "warning",
"trackedDownloadState": "importBlocked",
"statusMessages": [
{
"title": "One or more episodes expected in this release were not imported or missing from the release",
"messages": []
},
{
"title": "Sonarr Title 2.mkv",
"messages": [
"Episode XYZ was not found in the grabbed release: Sonarr Title 2.mkv"
]
}
]
}
]
}

View File

@@ -1,32 +0,0 @@
{
"records": [
{
"id": 1,
"downloadId": "A123",
"title": "Sonarr Title 1",
"status": "completed",
"trackedDownloadStatus": "warning",
"trackedDownloadState": "importBlocked",
"statusMessages": [
{
"title": "First Message",
"messages": [
"Message 1 - hello world"
]
},
{
"title": "Duplicate of First Message",
"messages": [
"Message 1 - hello world"
]
},
{
"title": "Second of Message",
"messages": [
"Message 2 - goodbye all"
]
}
]
}
]
}

View File

@@ -1,60 +0,0 @@
{
"records": [
{
"id": 1,
"downloadId": "A123",
"title": "Sonarr Title 1",
"status": "completed",
"trackedDownloadStatus": "warning",
"trackedDownloadState": "importPending",
"statusMessages": [
{
"title": "First Message",
"messages": [
"Message 1 - hello world"
]
},
{
"title": "Duplicate of First Message",
"messages": [
"Message 1 - hello world"
]
},
{
"title": "Second of Message",
"messages": [
"Message 2 - goodbye all"
]
}
]
},
{
"id": 2,
"downloadId": "B123",
"title": "Sonarr Title 2",
"status": "completed",
"trackedDownloadStatus": "warning",
"trackedDownloadState": "importFailed",
"statusMessages": [
{
"title": "First Message",
"messages": [
"Message 1 - hello world"
]
},
{
"title": "Duplicate of First Message",
"messages": [
"Message 1 - hello world"
]
},
{
"title": "Second of Message",
"messages": [
"Message 2 - goodbye all"
]
}
]
}
]
}

View File

@@ -1,81 +0,0 @@
import os
os.environ["IS_IN_PYTEST"] = "true"
import logging
import json
import pytest
from typing import Dict, Set, Any
from unittest.mock import AsyncMock
from src.jobs.remove_failed_imports import remove_failed_imports
# Utility function to load mock data
def load_mock_data(file_name):
with open(file_name, "r") as file:
return json.load(file)
async def mock_get_queue(mock_data):
logging.debug("Mock get_queue called")
return mock_data
async def run_test(
settingsDict: Dict[str, Any],
expected_removal_messages: Dict[int, Set[str]],
mock_data_file: str,
monkeypatch: pytest.MonkeyPatch,
) -> None:
# Load mock data
mock_data = load_mock_data(mock_data_file)
# Create an AsyncMock for execute_checks with side effect
execute_checks_mock = AsyncMock()
# Define a side effect function
def side_effect(*args, **kwargs):
logging.debug("Mock execute_checks called with kwargs: %s", kwargs)
# Return the affectedItems from kwargs
return kwargs.get("affectedItems", [])
# Attach side effect to the mock
execute_checks_mock.side_effect = side_effect
# Create an async mock for get_queue that returns mock_data
mock_get_queue = AsyncMock(return_value=mock_data["records"])
# Patch the methods
monkeypatch.setattr("src.jobs.remove_failed_imports.get_queue", mock_get_queue)
monkeypatch.setattr(
"src.jobs.remove_failed_imports.execute_checks", execute_checks_mock
)
# Call the function
await remove_failed_imports(
settingsDict=settingsDict,
BASE_URL="",
API_KEY="",
NAME="",
deleted_downloads=set(),
defective_tracker=set(),
protectedDownloadIDs=set(),
privateDowloadIDs=set(),
)
# Assertions
assert execute_checks_mock.called # Ensure the mock was called
# Assert expected items are there
args, kwargs = execute_checks_mock.call_args
affectedItems = kwargs.get("affectedItems", [])
affectedItems_ids = {item["id"] for item in affectedItems}
expectedItems_ids = set(expected_removal_messages.keys())
assert len(affectedItems) == len(expected_removal_messages)
assert affectedItems_ids == expectedItems_ids
# Assert all expected messages are there
for affectedItem in affectedItems:
assert "removal_messages" in affectedItem
assert expected_removal_messages[affectedItem["id"]] == set(
affectedItem.get("removal_messages", [])
)

View File

@@ -1,45 +0,0 @@
import pytest
from remove_failed_imports_utils import run_test
mock_data_file = "tests/jobs/remove_failed_imports/mock_data/mock_data_1.json"
@pytest.mark.asyncio
async def test_with_pattern_one_message(monkeypatch):
settingsDict = {
"FAILED_IMPORT_MESSAGE_PATTERNS": ["not found in the grabbed release"]
}
expected_removal_messages = {
2: {
">>>>> Tracked Download State: importBlocked",
">>>>> Status Messages (matching specified patterns):",
">>>>> - Episode XYZ was not found in the grabbed release: Sonarr Title 2.mkv",
}
}
await run_test(settingsDict, expected_removal_messages, mock_data_file, monkeypatch)
@pytest.mark.asyncio
async def test_with_empty_pattern_one_message(monkeypatch):
settingsDict = {"FAILED_IMPORT_MESSAGE_PATTERNS": []}
expected_removal_messages = {
2: {
">>>>> Tracked Download State: importBlocked",
">>>>> Status Messages (All):",
">>>>> - Episode XYZ was not found in the grabbed release: Sonarr Title 2.mkv",
}
}
await run_test(settingsDict, expected_removal_messages, mock_data_file, monkeypatch)
@pytest.mark.asyncio
async def test_without_pattern_one_message(monkeypatch):
settingsDict = {}
expected_removal_messages = {
2: {
">>>>> Tracked Download State: importBlocked",
">>>>> Status Messages (All):",
">>>>> - Episode XYZ was not found in the grabbed release: Sonarr Title 2.mkv",
}
}
await run_test(settingsDict, expected_removal_messages, mock_data_file, monkeypatch)

View File

@@ -1,45 +0,0 @@
import pytest
from remove_failed_imports_utils import run_test
mock_data_file = "tests/jobs/remove_failed_imports/mock_data/mock_data_2.json"
@pytest.mark.asyncio
async def test_multiple_status_messages_multiple_pattern(monkeypatch):
settingsDict = {"FAILED_IMPORT_MESSAGE_PATTERNS": ["world", "all"]}
expected_removal_messages = {
1: {
">>>>> Tracked Download State: importBlocked",
">>>>> Status Messages (matching specified patterns):",
">>>>> - Message 1 - hello world",
">>>>> - Message 2 - goodbye all",
}
}
await run_test(settingsDict, expected_removal_messages, mock_data_file, monkeypatch)
@pytest.mark.asyncio
async def test_multiple_status_messages_single_pattern(monkeypatch):
settingsDict = {"FAILED_IMPORT_MESSAGE_PATTERNS": ["world"]}
expected_removal_messages = {
1: {
">>>>> Tracked Download State: importBlocked",
">>>>> Status Messages (matching specified patterns):",
">>>>> - Message 1 - hello world",
}
}
await run_test(settingsDict, expected_removal_messages, mock_data_file, monkeypatch)
@pytest.mark.asyncio
async def test_multiple_status_messages_no_pattern(monkeypatch):
settingsDict = {}
expected_removal_messages = {
1: {
">>>>> Tracked Download State: importBlocked",
">>>>> Status Messages (All):",
">>>>> - Message 1 - hello world",
">>>>> - Message 2 - goodbye all",
}
}
await run_test(settingsDict, expected_removal_messages, mock_data_file, monkeypatch)

View File

@@ -1,62 +0,0 @@
import pytest
from remove_failed_imports_utils import run_test
mock_data_file = "tests/jobs/remove_failed_imports/mock_data/mock_data_3.json"
@pytest.mark.asyncio
async def test_multiple_statuses_multiple_pattern(monkeypatch):
settingsDict = {"FAILED_IMPORT_MESSAGE_PATTERNS": ["world", "all"]}
expected_removal_messages = {
1: {
">>>>> Tracked Download State: importPending",
">>>>> Status Messages (matching specified patterns):",
">>>>> - Message 1 - hello world",
">>>>> - Message 2 - goodbye all",
},
2: {
">>>>> Tracked Download State: importFailed",
">>>>> Status Messages (matching specified patterns):",
">>>>> - Message 1 - hello world",
">>>>> - Message 2 - goodbye all",
},
}
await run_test(settingsDict, expected_removal_messages, mock_data_file, monkeypatch)
@pytest.mark.asyncio
async def test_multiple_statuses_single_pattern(monkeypatch):
settingsDict = {"FAILED_IMPORT_MESSAGE_PATTERNS": ["world"]}
expected_removal_messages = {
1: {
">>>>> Tracked Download State: importPending",
">>>>> Status Messages (matching specified patterns):",
">>>>> - Message 1 - hello world",
},
2: {
">>>>> Tracked Download State: importFailed",
">>>>> Status Messages (matching specified patterns):",
">>>>> - Message 1 - hello world",
},
}
await run_test(settingsDict, expected_removal_messages, mock_data_file, monkeypatch)
@pytest.mark.asyncio
async def test_multiple_statuses_no_pattern(monkeypatch):
settingsDict = {}
expected_removal_messages = {
1: {
">>>>> Tracked Download State: importPending",
">>>>> Status Messages (All):",
">>>>> - Message 1 - hello world",
">>>>> - Message 2 - goodbye all",
},
2: {
">>>>> Tracked Download State: importFailed",
">>>>> Status Messages (All):",
">>>>> - Message 1 - hello world",
">>>>> - Message 2 - goodbye all",
},
}
await run_test(settingsDict, expected_removal_messages, mock_data_file, monkeypatch)

View File

@@ -0,0 +1,128 @@
from unittest.mock import AsyncMock, patch
import pytest
from src.jobs.removal_handler import RemovalHandler
# ---------- Fixtures ----------
@pytest.fixture(name="mock_logger")
def fixture_mock_logger():
with patch("src.jobs.removal_handler.logger") as mock:
yield mock
@pytest.fixture(name="settings")
def fixture_settings():
settings = AsyncMock()
settings.general.test_run = False
settings.general.obsolete_tag = "obsolete_tag"
settings.download_clients.qbittorrent = [AsyncMock()]
return settings
@pytest.fixture(name="arr")
def fixture_arr():
arr = AsyncMock()
arr.api_url = "https://mock-api-url"
arr.api_key = "mock_api_key"
arr.tracker = AsyncMock()
arr.tracker.deleted = []
arr.get_download_client_implementation.return_value = "QBittorrent"
return arr
@pytest.fixture(name="affected_downloads")
def fixture_affected_downloads():
return {
"AABBCC": [
{
"id": 1,
"downloadId": "AABBCC",
"title": "My Series A - Season 1",
"size": 1000,
"sizeleft": 500,
"downloadClient": "qBittorrent",
"protocol": "torrent",
"status": "paused",
"trackedDownloadState": "downloading",
"statusMessages": [],
}
]
}
# ---------- Parametrized Test ----------
@pytest.mark.asyncio
@pytest.mark.parametrize(
"protocol, qb_config, client_impl, is_private, pub_handling, priv_handling, expected",
[
("emule", [AsyncMock()], "MyDonkey", None, "remove", "remove", "remove"),
("torrent", [], "QBittorrent", None, "remove", "remove", "remove"),
("torrent", [AsyncMock()], "OtherClient", None, "remove", "remove", "remove"),
("torrent", [AsyncMock()], "QBittorrent", True, "remove", "remove", "remove"),
("torrent", [AsyncMock()], "QBittorrent", True, "remove", "tag_as_obsolete", "tag_as_obsolete"),
("torrent", [AsyncMock()], "QBittorrent", True, "remove", "skip", "skip"),
("torrent", [AsyncMock()], "QBittorrent", False, "remove", "remove", "remove"),
("torrent", [AsyncMock()], "QBittorrent", False, "tag_as_obsolete", "remove", "tag_as_obsolete"),
("torrent", [AsyncMock()], "QBittorrent", False, "skip", "remove", "skip"),
],
)
async def test_remove_downloads(
protocol,
qb_config,
client_impl,
is_private,
pub_handling,
priv_handling,
expected,
arr,
settings,
affected_downloads,
):
# ---------- Arrange ----------
download_id = "AABBCC"
item = affected_downloads[download_id][0]
item["protocol"] = protocol
item["downloadClient"] = "qBittorrent"
settings.download_clients.qbittorrent = qb_config
settings.general.public_tracker_handling = pub_handling
settings.general.private_tracker_handling = priv_handling
arr.get_download_client_implementation.return_value = client_impl
arr.tracker.private = [download_id] if is_private else []
arr.tracker.deleted = []
handler = RemovalHandler(arr=arr, settings=settings, job_name="Test Job")
# ---------- Act ----------
await handler.remove_downloads(affected_downloads, blocklist=True)
    observed = await handler._get_handling_method(download_id, item)  # pylint: disable=W0212
# ---------- Assert ----------
assert observed == expected
if expected == "remove":
arr.remove_queue_item.assert_awaited_once_with(
queue_id=item["id"], blocklist=True
)
assert download_id in arr.tracker.deleted
elif expected == "tag_as_obsolete":
if qb_config:
qb_config[0].set_tag.assert_awaited_once_with(
tags=[settings.general.obsolete_tag],
hashes=[download_id],
)
assert download_id in arr.tracker.deleted
elif expected == "skip":
assert download_id not in affected_downloads
assert download_id not in arr.tracker.deleted
if expected != "tag_as_obsolete" and qb_config:
qb_config[0].set_tag.assert_not_awaited()

View File

@@ -0,0 +1,309 @@
from unittest.mock import MagicMock, AsyncMock
import pytest
from src.jobs.remove_bad_files import RemoveBadFiles
from tests.jobs.test_utils import removal_job_fix
import os
# Fixture for arr mock
@pytest.fixture(name="arr")
def fixture_arr():
arr = AsyncMock()
arr.api_url = "https://mock-api-url"
arr.api_key = "mock_api_key"
arr.tracker = AsyncMock()
arr.tracker.extension_checked = []
arr.get_download_client_implementation.return_value = "QBittorrent"
return arr
@pytest.fixture(name="qbit_client")
def fixture_qbit_client():
qbit_client = AsyncMock()
return qbit_client
@pytest.fixture(name="removal_job")
def fixture_removal_job(arr):
removal_job = removal_job_fix(RemoveBadFiles)
removal_job.arr = arr
return removal_job
@pytest.mark.parametrize(
"file_name, expected_result",
[
("file.mp4", False), # Good extension
("file.mkv", False), # Good extension
("file.avi", False), # Good extension
("file.exe", True), # Bad extension
("file.sample", True), # Bad extension
],
)
def test_is_bad_extension(removal_job, file_name, expected_result):
"""This test will verify that files with bad extensions are properly identified."""
# Act
file = {"name": file_name} # Simulating a file object
file["file_extension"] = os.path.splitext(file["name"])[1].lower()
result = removal_job._is_bad_extension(file) # pylint: disable=W0212
# Assert
assert result == expected_result
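The extension check exercised above can be sketched in isolation. Note that `BAD_EXTENSIONS` is an assumed blocklist for illustration; the real set presumably comes from configuration:

```python
import os

# Assumed blocklist for illustration; the real list is configuration-driven.
BAD_EXTENSIONS = {".exe", ".sample"}

def is_bad_extension(file: dict) -> bool:
    # Same derivation as the test setup: lowercase extension from the name.
    ext = os.path.splitext(file["name"])[1].lower()
    return ext in BAD_EXTENSIONS

print(is_bad_extension({"name": "file.exe"}))  # True
print(is_bad_extension({"name": "file.mkv"}))  # False
```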
@pytest.mark.parametrize(
"file, is_incomplete_partial",
[
({"availability": 1, "progress": 1}, False), # Fully available
({"availability": 0.5, "progress": 0.5}, True), # Low availability
( {"availability": 0.5, "progress": 1}, False,), # Downloaded, low availability
({"availability": 0.9, "progress": 0.8}, True), # Low availability
],
)
def test_is_complete_partial(removal_job, file, is_incomplete_partial):
"""This test checks if the availability logic works correctly."""
# Act
result = removal_job._is_complete_partial(file) # pylint: disable=W0212
# Assert
assert result == is_incomplete_partial
@pytest.mark.parametrize(
"qbit_item, expected_processed",
[
# Case 1: Torrent without metadata
(
{
"hash": "hash",
"has_metadata": False,
"state": "downloading",
"availability": 0.5,
},
False,
),
# Case 2: Torrent with different status
(
{
"hash": "hash",
"has_metadata": True,
"state": "uploading",
"availability": 0.5,
},
False,
),
# Case 3: Torrent checked before and full availability
(
{
"hash": "checked-hash",
"has_metadata": True,
"state": "downloading",
"availability": 1.0,
},
False,
),
# Case 4: Torrent not checked before and full availability
(
{
"hash": "not-checked-hash",
"has_metadata": True,
"state": "downloading",
"availability": 1.0,
},
True,
),
# Case 5: Torrent checked before and partial availability
(
{
"hash": "checked-hash",
"has_metadata": True,
"state": "downloading",
"availability": 0.8,
},
True,
),
# Case 6: Torrent with partial availability (downloading)
(
{
"hash": "hash",
"has_metadata": True,
"state": "downloading",
"availability": 0.8,
},
True,
),
# Case 7: Torrent with partial availability (forcedDL)
(
{
"hash": "hash",
"has_metadata": True,
"state": "forcedDL",
"availability": 0.8,
},
True,
),
# Case 8: Torrent with partial availability (stalledDL)
(
{
"hash": "hash",
"has_metadata": True,
"state": "forcedDL",
"availability": 0.8,
},
True,
),
],
)
@pytest.mark.asyncio
async def test_get_items_to_process(qbit_item, expected_processed, removal_job, arr):
"""Test the _get_items_to_process method of RemoveBadFiles class."""
# Mocking the tracker extension_checked to simulate which torrents have been checked
arr.tracker.extension_checked = {"checked-hash"}
# Act
processed_items = removal_job._get_items_to_process(
[qbit_item]
) # pylint: disable=W0212
# Extract the hash from the processed items
processed_hashes = [item["hash"] for item in processed_items]
# Assert
if expected_processed:
assert qbit_item["hash"] in processed_hashes
else:
assert qbit_item["hash"] not in processed_hashes
@pytest.mark.parametrize(
"file, should_be_stoppable",
[
# Stopped files - No need to stop again
(
{
"index": 0,
"name": "file.exe",
"priority": 0,
"availability": 1.0,
"progress": 1.0,
},
False,
),
(
{
"index": 0,
"name": "file.mp3",
"priority": 0,
"availability": 1.0,
"progress": 1.0,
},
False,
),
# Bad file extension: always stop (if not already stopped)
(
{
"index": 0,
"name": "file.exe",
"priority": 1,
"availability": 1.0,
"progress": 1.0,
},
True,
),
(
{
"index": 0,
"name": "file.exe",
"priority": 1,
"availability": 0.5,
"progress": 1.0,
},
True,
),
(
{
"index": 0,
"name": "file.exe",
"priority": 1,
"availability": 0.0,
"progress": 1.0,
},
True,
),
# Good file extension: stop only if availability < 1 **and** progress < 1
(
{
"index": 0,
"name": "file.mp3",
"priority": 1,
"availability": 1.0,
"progress": 1.0,
},
False,
), # Fully done and fully available
(
{
"index": 0,
"name": "file.mp3",
"priority": 1,
"availability": 0.3,
"progress": 1.0,
},
False,
), # Fully done and partially available
(
{
"index": 0,
"name": "file.mp3",
"priority": 1,
"availability": 1.0,
"progress": 0.5,
},
False,
), # Fully available
(
{
"index": 0,
"name": "file.mp3",
"priority": 1,
"availability": 0.3,
"progress": 0.9,
},
True,
), # Partially done and not available
],
)
def test_get_stoppable_file_single(removal_job, file, should_be_stoppable):
# Add file_extension based on the file name
file["file_extension"] = os.path.splitext(file["name"])[1].lower()
stoppable = removal_job._get_stoppable_files([file]) # pylint: disable=W0212
is_stoppable = bool(stoppable)
assert is_stoppable == should_be_stoppable
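The stop decision the cases above encode can be condensed into a few lines; a hedged sketch (`should_stop` and `BAD_EXTENSIONS` are illustrative, not the project's API — priority 0 is assumed to mean the file is already skipped in qBittorrent):

```python
import os

BAD_EXTENSIONS = {".exe", ".sample"}  # assumed blocklist for illustration

def should_stop(file: dict) -> bool:
    # priority 0 means the file is already skipped; nothing to do.
    if file["priority"] == 0:
        return False
    ext = os.path.splitext(file["name"])[1].lower()
    if ext in BAD_EXTENSIONS:
        return True  # bad extension: always stop
    # Good extension: stop only incomplete files that cannot complete.
    return file["availability"] < 1 and file["progress"] < 1

print(should_stop({"name": "file.exe", "priority": 1,
                   "availability": 1.0, "progress": 1.0}))  # True
print(should_stop({"name": "file.mp3", "priority": 1,
                   "availability": 0.3, "progress": 0.9}))  # True
```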
@pytest.fixture(name="torrent_files")
def fixture_torrent_files():
return [
{"index": 0, "name": "file1.mp3", "priority": 0}, # Already stopped
{"index": 1, "name": "file2.mp3", "priority": 0}, # Already stopped
{"index": 2, "name": "file3.exe", "priority": 1},
{"index": 3, "name": "file4.exe", "priority": 1},
{"index": 4, "name": "file5.mp3", "priority": 1},
]
@pytest.mark.parametrize(
"stoppable_indexes, all_files_stopped",
[
([0], False), # Case 1: Nothing changes (stopping an already stopped file)
([2], False), # Case 2: One additional file stopped
([2, 3, 4], True), # Case 3: All remaining files stopped
([0, 1, 2, 3, 4], True), # Case 4: Mix of both
],
)
def test_all_files_stopped(
removal_job, torrent_files, stoppable_indexes, all_files_stopped
):
# Create stoppable_files using only the index for each file and a dummy reason
stoppable_files = [({"index": idx}, "some reason") for idx in stoppable_indexes]
result = removal_job._all_files_stopped(torrent_files, stoppable_files) # pylint: disable=W0212
assert result == all_files_stopped

View File

@@ -0,0 +1,48 @@
import pytest
from src.jobs.remove_failed_downloads import RemoveFailedDownloads
from tests.jobs.test_utils import removal_job_fix
# Test to check if items with "failed" status are included in affected items with parameterized data
@pytest.mark.asyncio
@pytest.mark.parametrize(
"queue_data, expected_download_ids",
[
(
[
{"downloadId": "1", "status": "failed"}, # Item with failed status
{"downloadId": "2", "status": "completed"}, # Item with completed status
{"downloadId": "3"} # No status field
],
["1"] # Only the failed item should be affected
),
(
[
{"downloadId": "1", "status": "completed"}, # Item with completed status
{"downloadId": "2", "status": "completed"},
{"downloadId": "3", "status": "completed"}
],
[] # No failed items, so no affected items
),
(
[
{"downloadId": "1", "status": "failed"}, # Item with failed status
{"downloadId": "2", "status": "failed"}
],
["1", "2"] # Both failed items should be affected
)
]
)
async def test_find_affected_items(queue_data, expected_download_ids):
# Arrange
removal_job = removal_job_fix(RemoveFailedDownloads, queue_data=queue_data)
# Act
affected_items = await removal_job._find_affected_items() # pylint: disable=W0212
# Assert
assert isinstance(affected_items, list)
# Assert that the affected items match the expected download IDs
affected_download_ids = [item["downloadId"] for item in affected_items]
assert sorted(affected_download_ids) == sorted(expected_download_ids), \
f"Expected affected items with downloadIds {expected_download_ids}, got {affected_download_ids}"

View File

@@ -0,0 +1,141 @@
from unittest.mock import MagicMock
import pytest
from src.jobs.remove_failed_imports import RemoveFailedImports
from tests.jobs.test_utils import removal_job_fix
@pytest.mark.asyncio
@pytest.mark.parametrize(
"item, expected_result",
[
# Valid item scenario
(
{
"status": "completed",
"trackedDownloadStatus": "warning",
"trackedDownloadState": "importPending",
"statusMessages": [{"messages": ["Import failed"]}],
},
True
),
# Invalid item with wrong status
(
{
"status": "downloading",
"trackedDownloadStatus": "warning",
"trackedDownloadState": "importPending",
"statusMessages": [{"messages": ["Import failed"]}],
},
False
),
# Invalid item with missing required fields
(
{
"trackedDownloadStatus": "warning",
"trackedDownloadState": "importPending",
"statusMessages": [{"messages": ["Import failed"]}],
},
False
),
# Invalid item with wrong trackedDownloadStatus
(
{
"status": "completed",
"trackedDownloadStatus": "downloading",
"trackedDownloadState": "importPending",
"statusMessages": [{"messages": ["Import failed"]}],
},
False
),
# Invalid item with wrong trackedDownloadState
(
{
"status": "completed",
"trackedDownloadStatus": "warning",
"trackedDownloadState": "downloaded",
"statusMessages": [{"messages": ["Import failed"]}],
},
False
),
]
)
async def test_is_valid_item(item, expected_result):
# Arrange
removal_job = removal_job_fix(RemoveFailedImports)
# Act
result = removal_job._is_valid_item(item) # pylint: disable=W0212
# Assert
assert result == expected_result
# Fixture with 3 valid items with different messages and downloadId
@pytest.fixture(name="queue_data")
def fixture_queue_data():
return [
{
"downloadId": "1",
"status": "completed",
"trackedDownloadStatus": "warning",
"trackedDownloadState": "importPending",
"statusMessages": [{"messages": ["Import failed due to issue A"]}],
},
{
"downloadId": "2",
"status": "completed",
"trackedDownloadStatus": "warning",
"trackedDownloadState": "importFailed",
"statusMessages": [{"messages": ["Import failed due to issue B"]}],
},
{
"downloadId": "3",
"status": "completed",
"trackedDownloadStatus": "warning",
"trackedDownloadState": "importBlocked",
"statusMessages": [{"messages": ["Import blocked due to issue C"]}],
}
]
# Test the different patterns and check if the right downloads are selected
@pytest.mark.asyncio
@pytest.mark.parametrize(
"patterns, expected_download_ids, removal_messages_expected",
[
(["*"], ["1", "2", "3"], True), # Match everything, expect removal messages
(["Import failed*"], ["1", "2"], True), # Match "Import failed", expect removal messages
(["Import blocked*"], ["3"], True), # Match "Import blocked", expect removal messages
(["*due to issue A"], ["1"], True), # Match "due to issue A", expect removal messages
(["Import failed due to issue C"], [], False), # No match for "Import failed due to issue C", expect no removal messages
],
)
async def test_find_affected_items_with_patterns(queue_data, patterns, expected_download_ids, removal_messages_expected):
# Arrange
removal_job = removal_job_fix(RemoveFailedImports, queue_data=queue_data)
# Mock the job settings for message patterns
removal_job.job = MagicMock()
removal_job.job.message_patterns = patterns
# Act
affected_items = await removal_job._find_affected_items() # pylint: disable=W0212
# Assert
assert isinstance(affected_items, list)
# Check if the correct downloadIds are in the affected items
affected_download_ids = [item["downloadId"] for item in affected_items]
# Assert the affected download IDs are as expected
assert sorted(affected_download_ids) == sorted(expected_download_ids)
# Check if removal messages are expected and present
for item in affected_items:
if removal_messages_expected:
assert "removal_messages" in item, f"Expected removal messages for item {item['downloadId']}"
assert len(item["removal_messages"]) > 0, f"Expected non-empty removal messages for item {item['downloadId']}"
else:
assert "removal_messages" not in item, f"Did not expect removal messages for item {item['downloadId']}"

View File

@@ -0,0 +1,55 @@
import pytest
from src.jobs.remove_metadata_missing import RemoveMetadataMissing
from tests.jobs.test_utils import removal_job_fix
# Test to check if items with the specific error message are included in affected items with parameterized data
@pytest.mark.asyncio
@pytest.mark.parametrize(
"queue_data, expected_download_ids",
[
(
[
{"downloadId": "1", "status": "queued", "errorMessage": "qBittorrent is downloading metadata"}, # Valid item
{"downloadId": "2", "status": "completed", "errorMessage": "qBittorrent is downloading metadata"}, # Wrong status
{"downloadId": "3", "status": "queued", "errorMessage": "Some other error"} # Incorrect errorMessage
],
["1"] # Only the item with "queued" status and the correct errorMessage should be affected
),
(
[
{"downloadId": "1", "status": "queued", "errorMessage": "Some other error"}, # Incorrect errorMessage
{"downloadId": "2", "status": "completed", "errorMessage": "qBittorrent is downloading metadata"}, # Wrong status
{"downloadId": "3", "status": "queued", "errorMessage": "qBittorrent is downloading metadata"} # Correct item
],
["3"] # Only the item with "queued" status and the correct errorMessage should be affected
),
(
[
{"downloadId": "1", "status": "queued", "errorMessage": "qBittorrent is downloading metadata"}, # Valid item
{"downloadId": "2", "status": "queued", "errorMessage": "qBittorrent is downloading metadata"} # Another valid item
],
["1", "2"] # Both items match the condition
),
(
[
{"downloadId": "1", "status": "completed", "errorMessage": "qBittorrent is downloading metadata"}, # Wrong status
{"downloadId": "2", "status": "queued", "errorMessage": "Some other error"} # Incorrect errorMessage
],
[] # No items match the condition
)
]
)
async def test_find_affected_items(queue_data, expected_download_ids):
# Arrange
removal_job = removal_job_fix(RemoveMetadataMissing, queue_data=queue_data)
# Act
affected_items = await removal_job._find_affected_items() # pylint: disable=W0212
# Assert
assert isinstance(affected_items, list)
# Assert that the affected items match the expected download IDs
affected_download_ids = [item["downloadId"] for item in affected_items]
assert sorted(affected_download_ids) == sorted(expected_download_ids), \
f"Expected affected items with downloadIds {expected_download_ids}, got {affected_download_ids}"

View File

@@ -0,0 +1,78 @@
import pytest
from src.jobs.remove_missing_files import RemoveMissingFiles
from tests.jobs.test_utils import removal_job_fix
@pytest.mark.asyncio
@pytest.mark.parametrize(
"queue_data, expected_download_ids",
[
(
[ # valid failed torrent (warning + matching errorMessage)
{"downloadId": "1", "status": "warning", "errorMessage": "DownloadClientQbittorrentTorrentStateMissingFiles"},
{"downloadId": "2", "status": "warning", "errorMessage": "The download is missing files"},
{"downloadId": "3", "status": "warning", "errorMessage": "qBittorrent is reporting missing files"},
],
["1", "2", "3"]
),
(
[ # wrong status for errorMessage, should be ignored
{"downloadId": "1", "status": "failed", "errorMessage": "The download is missing files"},
],
[]
),
(
[ # valid "completed" with matching statusMessage
{
"downloadId": "1",
"status": "completed",
"statusMessages": [
{"messages": ["No files found are eligible for import in /some/path"]}
],
},
{
"downloadId": "2",
"status": "completed",
"statusMessages": [
{"messages": ["Everything looks good!"]}
],
},
],
["1"]
),
(
[ # No statusMessages key or irrelevant messages
{"downloadId": "1", "status": "completed"},
{
"downloadId": "2",
"status": "completed",
"statusMessages": [{"messages": ["Other message"]}]
},
],
[]
),
(
[ # Mixed: one matching warning + one matching statusMessage
{"downloadId": "1", "status": "warning", "errorMessage": "The download is missing files"},
{
"downloadId": "2",
"status": "completed",
"statusMessages": [{"messages": ["No files found are eligible for import in foo"]}]
},
{"downloadId": "3", "status": "completed"},
],
["1", "2"]
),
]
)
async def test_find_affected_items(queue_data, expected_download_ids):
# Arrange
removal_job = removal_job_fix(RemoveMissingFiles, queue_data=queue_data)
# Act
affected_items = await removal_job._find_affected_items() # pylint: disable=W0212
# Assert
assert isinstance(affected_items, list)
affected_download_ids = [item["downloadId"] for item in affected_items]
assert sorted(affected_download_ids) == sorted(expected_download_ids), \
f"Expected affected items with downloadIds {expected_download_ids}, got {affected_download_ids}"

View File

@@ -0,0 +1,46 @@
import pytest
from src.jobs.remove_orphans import RemoveOrphans
from tests.jobs.test_utils import removal_job_fix
@pytest.fixture(name="queue_data")
def fixture_queue_data():
return [
{
"downloadId": "AABBCC",
"id": 1,
"title": "My Series A - Season 1",
"size": 1000,
"sizeleft": 500,
"downloadClient": "qBittorrent",
"protocol": "torrent",
"status": "paused",
"trackedDownloadState": "downloading",
"statusMessages": [],
},
{
"downloadId": "112233",
"id": 2,
"title": "My Series B - Season 1",
"size": 1000,
"sizeleft": 500,
"downloadClient": "qBittorrent",
"protocol": "torrent",
"status": "paused",
"trackedDownloadState": "downloading",
"statusMessages": [],
}
]
@pytest.mark.asyncio
async def test_find_affected_items_returns_queue(queue_data):
# Arrange
removal_job = removal_job_fix(RemoveOrphans, queue_data=queue_data)
# Act
affected_items = await removal_job._find_affected_items() # pylint: disable=W0212
# Assert
assert isinstance(affected_items, list)
assert len(affected_items) == 2
assert affected_items[0]["downloadId"] == "AABBCC"
assert affected_items[1]["downloadId"] == "112233"

View File

@@ -0,0 +1,168 @@
from unittest.mock import AsyncMock, MagicMock
import pytest
from src.jobs.remove_slow import RemoveSlow
from tests.jobs.test_utils import removal_job_fix
@pytest.mark.asyncio
@pytest.mark.parametrize(
"item, expected_result",
[
(
# Valid: has downloadId, size, sizeleft, and status = "downloading"
{
"downloadId": "abc",
"size": 1000,
"sizeleft": 500,
"status": "downloading",
"protocol": "torrent",
},
True,
),
(
# Invalid: missing sizeleft
{
"downloadId": "abc",
"size": 1000,
"status": "downloading",
"protocol": "torrent",
},
False,
),
(
# Invalid: missing size
{
"downloadId": "abc",
"sizeleft": 500,
"status": "downloading",
"protocol": "torrent",
},
False,
),
(
# Invalid: missing status
{"downloadId": "abc", "size": 1000, "sizeleft": 500, "protocol": "torrent"},
False,
),
(
# Invalid: missing protocol
{
"downloadId": "abc",
"size": 1000,
"sizeleft": 500,
"status": "downloading",
},
False,
),
],
)
async def test_is_valid_item(item, expected_result):
removal_job = removal_job_fix(RemoveSlow)
result = removal_job._is_valid_item(item) # pylint: disable=W0212
assert result == expected_result
@pytest.fixture(name="slow_queue_data")
def fixture_slow_queue_data():
return [
{
"downloadId": "usenet",
"progress_previous": 800, # previous progress
"progress_now": 800, # current progress
"total_size": 1000,
"protocol": "usenet", # should be ignored
},
{
"downloadId": "importing",
"progress_previous": 0,
"progress_now": 1000,
"total_size": 1000,
"protocol": "torrent",
},
{
"downloadId": "stuck",
"progress_previous": 200,
"progress_now": 200,
"total_size": 1000,
"protocol": "torrent",
},
{
"downloadId": "slow",
"progress_previous": 100,
"progress_now": 150,
"total_size": 1000,
"protocol": "torrent",
},
{
"downloadId": "medium",
"progress_previous": 500,
"progress_now": 900,
"total_size": 1000,
"protocol": "torrent",
},
{
"downloadId": "fast",
"progress_previous": 100,
"progress_now": 900,
"total_size": 1000,
"protocol": "torrent",
},
]
@pytest.fixture(name="arr")
def fixture_arr():
mock = MagicMock()
mock.tracker.download_progress = AsyncMock()
return mock
@pytest.mark.asyncio
@pytest.mark.parametrize(
"min_speed, expected_ids",
[
(0, []), # No minimum speed; nothing is flagged
(500, ["stuck"]), # Only the stalled torrent is below the threshold
(1000, ["stuck", "slow"]), # The slow torrent now falls below it as well
(10000, ["stuck", "slow", "medium"]), # The medium torrent is also too slow
(1000000, ["stuck", "slow", "medium", "fast"]), # All torrents flagged (but never "importing" or "usenet")
],
)
async def test_find_affected_items_with_varied_speeds(
slow_queue_data, min_speed, expected_ids, arr
):
removal_job = removal_job_fix(RemoveSlow, queue_data=slow_queue_data)
# Set up job and timer
removal_job.job = MagicMock()
removal_job.job.min_speed = min_speed
removal_job.settings = MagicMock()
removal_job.settings.general.timer = 1 # 1 minute for speed calculation
removal_job.arr = arr # Inject the mocked arr object
removal_job._is_valid_item = MagicMock(return_value=True)  # Mock _is_valid_item to always return True  # pylint: disable=W0212
# Inject size and sizeleft into each item in the queue
for item in slow_queue_data:
item["size"] = item["total_size"] * 1000000 # Inject total size as 'size'
item["sizeleft"] = ( item["size"] - item["progress_now"] * 1000000 ) # Calculate sizeleft
item["status"] = "downloading"
item["title"] = item["downloadId"]
# Mock the download progress in `arr.tracker.download_progress`
removal_job.arr.tracker.download_progress = {
item["downloadId"]: item["progress_previous"] * 1000000
for item in slow_queue_data
}
# Call the method we're testing
affected_items = await removal_job._find_affected_items() # pylint: disable=W0212
# Extract case identifiers of affected items
affected_ids = [item["downloadId"] for item in affected_items]
# Assert that the affected cases match the expected ones
assert sorted(affected_ids) == sorted(expected_ids)
# Ensure 'importing' and 'usenet' are never flagged for removal
assert "importing" not in affected_ids
assert "usenet" not in affected_ids

View File

@@ -0,0 +1,55 @@
import pytest
from src.jobs.remove_stalled import RemoveStalled
from tests.jobs.test_utils import removal_job_fix
# Test to check if items with the specific error message are included in affected items with parameterized data
@pytest.mark.asyncio
@pytest.mark.parametrize(
"queue_data, expected_download_ids",
[
(
[
{"downloadId": "1", "status": "warning", "errorMessage": "The download is stalled with no connections"}, # Valid item
{"downloadId": "2", "status": "completed", "errorMessage": "The download is stalled with no connections"}, # Wrong status
{"downloadId": "3", "status": "warning", "errorMessage": "Some other error"} # Incorrect errorMessage
],
["1"] # Only the item with "warning" status and the correct errorMessage should be affected
),
(
[
{"downloadId": "1", "status": "warning", "errorMessage": "Some other error"}, # Incorrect errorMessage
{"downloadId": "2", "status": "completed", "errorMessage": "The download is stalled with no connections"}, # Wrong status
{"downloadId": "3", "status": "warning", "errorMessage": "The download is stalled with no connections"} # Correct item
],
["3"] # Only the item with "warning" status and the correct errorMessage should be affected
),
(
[
{"downloadId": "1", "status": "warning", "errorMessage": "The download is stalled with no connections"}, # Valid item
{"downloadId": "2", "status": "warning", "errorMessage": "The download is stalled with no connections"} # Another valid item
],
["1", "2"] # Both items match the condition
),
(
[
{"downloadId": "1", "status": "completed", "errorMessage": "The download is stalled with no connections"}, # Wrong status
{"downloadId": "2", "status": "warning", "errorMessage": "Some other error"} # Incorrect errorMessage
],
[] # No items match the condition
)
]
)
async def test_find_affected_items(queue_data, expected_download_ids):
# Arrange
removal_job = removal_job_fix(RemoveStalled, queue_data=queue_data)
# Act
affected_items = await removal_job._find_affected_items() # pylint: disable=W0212
# Assert
assert isinstance(affected_items, list)
# Assert that the affected items match the expected download IDs
affected_download_ids = [item["downloadId"] for item in affected_items]
assert sorted(affected_download_ids) == sorted(expected_download_ids), \
f"Expected affected items with downloadIds {expected_download_ids}, got {affected_download_ids}"

View File

@@ -0,0 +1,79 @@
from unittest.mock import AsyncMock, MagicMock
import pytest
from src.jobs.remove_unmonitored import RemoveUnmonitored
from tests.jobs.test_utils import removal_job_fix
@pytest.fixture(name="arr")
def fixture_arr():
mock = MagicMock()
mock.is_monitored = AsyncMock()
return mock
@pytest.mark.asyncio
@pytest.mark.parametrize(
"queue_data, monitored_ids, expected_download_ids",
[
# All items monitored -> no affected items
(
[
{"downloadId": "1", "detail_item_id": 101},
{"downloadId": "2", "detail_item_id": 102}
],
{101: True, 102: True},
[]
),
# All items unmonitored -> all affected
(
[
{"downloadId": "1", "detail_item_id": 101},
{"downloadId": "2", "detail_item_id": 102}
],
{101: False, 102: False},
["1", "2"]
),
# One monitored, one not
(
[
{"downloadId": "1", "detail_item_id": 101},
{"downloadId": "2", "detail_item_id": 102}
],
{101: True, 102: False},
["2"]
),
# Shared downloadId, only one monitored -> not affected
(
[
{"downloadId": "1", "detail_item_id": 101},
{"downloadId": "1", "detail_item_id": 102}
],
{101: False, 102: True},
[]
),
# Shared downloadId, none monitored -> affected
(
[
{"downloadId": "1", "detail_item_id": 101},
{"downloadId": "1", "detail_item_id": 102}
],
{101: False, 102: False},
["1", "1"]
),
]
)
async def test_find_affected_items(queue_data, monitored_ids, expected_download_ids, arr):
# Patch arr mock with side_effect
async def mock_is_monitored(detail_item_id):
return monitored_ids[detail_item_id]
arr.is_monitored = AsyncMock(side_effect=mock_is_monitored)
# Arrange
removal_job = removal_job_fix(RemoveUnmonitored, queue_data=queue_data)
removal_job.arr = arr # Inject the mocked arr object
# Act
affected_items = await removal_job._find_affected_items() # pylint: disable=W0212
# Assert
affected_download_ids = [item["downloadId"] for item in affected_items]
assert affected_download_ids == expected_download_ids, \
f"Expected downloadIds {expected_download_ids}, got {affected_download_ids}"

View File

@@ -0,0 +1,64 @@
import pytest
from unittest.mock import MagicMock
from src.jobs.strikes_handler import StrikesHandler
@pytest.mark.parametrize(
"current_hashes, expected_remaining_in_tracker",
[
([], []), # nothing active → all removed
(["HASH1", "HASH2"], ["HASH1", "HASH2"]), # both active → none removed
(["HASH2"], ["HASH2"]), # only HASH2 active → HASH1 removed
],
)
def test_recover_downloads(current_hashes, expected_remaining_in_tracker):
"""Tests if tracker correctly removes items (if recovered) and adds new ones"""
# Arrange
tracker = MagicMock()
tracker.defective = {
"remove_stalled": {
"HASH1": {"title": "Movie-with-one-strike", "strikes": 1},
"HASH2": {"title": "Movie-with-three-strikes", "strikes": 3},
}
}
arr = MagicMock()
arr.tracker = tracker
handler = StrikesHandler(job_name="remove_stalled", arr=arr, max_strikes=3)
affected_downloads = [(hash_id, {"title": "dummy"}) for hash_id in current_hashes]
# Act
handler._recover_downloads(affected_downloads) # pylint: disable=W0212
# Assert
assert sorted(tracker.defective["remove_stalled"].keys()) == sorted(expected_remaining_in_tracker)
# ---------- Test ----------
@pytest.mark.parametrize(
"strikes_before_increment, max_strikes, expected_in_affected_downloads",
[
(1, 3, False), # Below limit → should not be affected
(2, 3, False), # Below limit → should not be affected
(3, 3, True), # At limit; the increment pushes it over → should be affected
(4, 3, True), # Over limit → should be affected
],
)
def test_apply_strikes_and_filter(strikes_before_increment, max_strikes, expected_in_affected_downloads):
job_name = "remove_stalled"
tracker = MagicMock()
tracker.defective = {job_name: {"HASH1": {"title": "dummy", "strikes": strikes_before_increment}}}
arr = MagicMock()
arr.tracker = tracker
handler = StrikesHandler(job_name=job_name, arr=arr, max_strikes=max_strikes)
affected_downloads = {
"HASH1": [{"title": "dummy"}]
}
result = handler._apply_strikes_and_filter(affected_downloads) # pylint: disable=W0212
if expected_in_affected_downloads:
assert "HASH1" in result
else:
assert "HASH1" not in result

33
tests/jobs/test_utils.py Normal file
View File

@@ -0,0 +1,33 @@
# test_utils.py
from unittest.mock import AsyncMock, patch
def mock_class_init(cls, *args, **kwargs):
"""
Mocks the __init__ method of a class to bypass constructor logic.
"""
with patch.object(cls, '__init__', lambda x, *args, **kwargs: None):
instance = cls(*args, **kwargs)
return instance
def removal_job_fix(cls, queue_data=None, settings=None):
"""
Mocks the initialization of Jobs and the queue_manager attribute.
Args:
cls: The class to instantiate (e.g., RemoveOrphans).
queue_data: The mock data for the get_queue_items method (default: None).
Returns:
An instance of the class with a mocked queue_manager.
"""
# Mock the initialization of the class (no need to pass arr, settings, job_name)
instance = mock_class_init(cls, arr=None, settings=settings, job_name="Test Job")
# Mock the queue_manager and its get_queue_items method
instance.queue_manager = AsyncMock()
instance.queue_manager.get_queue_items.return_value = queue_data
return instance
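The `__init__` bypass used by `mock_class_init` can be demonstrated standalone; `Expensive` here is a made-up class, not part of the project:

```python
from unittest.mock import AsyncMock, patch

class Expensive:
    def __init__(self, url):
        raise RuntimeError("constructor does network I/O")  # what we want to skip

def mock_class_init(cls, *args, **kwargs):
    # Temporarily swap __init__ for a no-op so instantiation is side-effect free.
    with patch.object(cls, "__init__", lambda self, *a, **k: None):
        return cls(*args, **kwargs)

obj = mock_class_init(Expensive, url="http://example")
obj.queue_manager = AsyncMock()
obj.queue_manager.get_queue_items.return_value = [{"downloadId": "1"}]
print(type(obj).__name__)  # Expensive
```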

View File

@@ -0,0 +1,109 @@
import os
import textwrap
import pytest
import yaml
from unittest.mock import patch
from src.settings._user_config import _load_from_env
# ---- Pytest Fixtures ----
# Pre-define multiline YAML snippets with dedent and strip for clarity
# Single values as plain strings (not YAML block strings)
log_level_value = "VERBOSE"
timer_value = "10"
ssl_verification_value = "true"
# List
ignored_download_clients_yaml = textwrap.dedent("""
- emulerr
- napster
""").strip()
# Job: No settings
remove_bad_files_yaml = "" # empty string represents flag enabled with no config
# Job: One Setting
remove_slow_yaml = textwrap.dedent("""
- max_strikes: 3
""").strip()
# Job: Multiple Setting
remove_stalled_yaml = textwrap.dedent("""
- min_speed: 100
- max_strikes: 3
- some_bool_upper: TRUE
- some_bool_lower: false
- some_bool_sentence: False
""").strip()
# Arr Instances
radarr_yaml = textwrap.dedent("""
- base_url: "http://radarr:7878"
api_key: "radarr1_key"
""").strip()
sonarr_yaml = textwrap.dedent("""
- base_url: "sonarr_1_api_key"
api_key: "sonarr1_api_url"
- base_url: "sonarr_2_api_key"
api_key: "sonarr2_api_url"
""").strip()
# Qbit Instances
qbit_yaml = textwrap.dedent("""
- base_url: "http://qbittorrent:8080"
username: "qbit_username1"
password: "qbit_password1"
""").strip()
@pytest.fixture(name="env_vars")
def fixture_env_vars():
env = {
"LOG_LEVEL": log_level_value,
"TIMER": timer_value,
"SSL_VERIFICATION": ssl_verification_value,
"IGNORED_DOWNLOAD_CLIENTS": ignored_download_clients_yaml,
"REMOVE_BAD_FILES": remove_bad_files_yaml,
"REMOVE_SLOW": remove_slow_yaml,
"REMOVE_STALLED": remove_stalled_yaml,
"RADARR": radarr_yaml,
"SONARR": sonarr_yaml,
"QBITTORRENT": qbit_yaml,
}
with patch.dict(os.environ, env, clear=True):
yield env
# ---- Parametrized Tests ----
remove_ignored_download_clients_expected = yaml.safe_load(ignored_download_clients_yaml)
remove_bad_files_expected = yaml.safe_load(remove_bad_files_yaml)
remove_slow_expected = yaml.safe_load(remove_slow_yaml)
remove_stalled_expected = yaml.safe_load(remove_stalled_yaml)
radarr_expected = yaml.safe_load(radarr_yaml)
sonarr_expected = yaml.safe_load(sonarr_yaml)
qbit_expected = yaml.safe_load(qbit_yaml)
@pytest.mark.parametrize("section,key,expected", [
("general", "log_level", log_level_value),
("general", "timer", int(timer_value)),
("general", "ssl_verification", True),
("general", "ignored_download_clients", remove_ignored_download_clients_expected),
("jobs", "remove_bad_files", remove_bad_files_expected),
("jobs", "remove_slow", remove_slow_expected),
("jobs", "remove_stalled", remove_stalled_expected),
("instances", "radarr", radarr_expected),
("instances", "sonarr", sonarr_expected),
("downloaders", "qbittorrent", qbit_expected),
])
def test_env_loading_parametrized(env_vars, section, key, expected):  # pylint: disable=unused-argument
    config = _load_from_env()
    assert section in config
    assert key in config[section]
    # Lists and scalars compare the same way, so one assert covers both cases
    assert config[section][key] == expected
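The assertions above pin down the shape `_load_from_env` must return, but the loader itself is not part of this diff. A minimal sketch of an env-to-config mapper consistent with those assertions (the function name, section layout, and defaults here are assumptions inferred from the test, not the project's actual implementation):

```python
import os

import yaml  # PyYAML, assumed available since the tests use yaml.safe_load


def load_from_env_sketch():
    """Build a nested config dict from environment variables.

    Scalars land in 'general'; YAML-encoded variables are parsed with
    yaml.safe_load. The section layout mirrors the parametrized asserts.
    """
    config = {"general": {}, "jobs": {}, "instances": {}, "downloaders": {}}
    config["general"]["log_level"] = os.environ.get("LOG_LEVEL", "INFO")
    config["general"]["timer"] = int(os.environ.get("TIMER", "10"))
    config["general"]["ssl_verification"] = (
        os.environ.get("SSL_VERIFICATION", "true").lower() == "true"
    )
    # YAML-encoded lists/mappings become Python structures
    for section, key, var in [
        ("general", "ignored_download_clients", "IGNORED_DOWNLOAD_CLIENTS"),
        ("jobs", "remove_bad_files", "REMOVE_BAD_FILES"),
        ("jobs", "remove_slow", "REMOVE_SLOW"),
        ("jobs", "remove_stalled", "REMOVE_STALLED"),
        ("instances", "radarr", "RADARR"),
        ("instances", "sonarr", "SONARR"),
        ("downloaders", "qbittorrent", "QBITTORRENT"),
    ]:
        raw = os.environ.get(var)
        if raw is not None:
            config[section][key] = yaml.safe_load(raw)
    return config
```

With the fixture's environment applied, this sketch would produce exactly the sections the parametrized test checks.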

View File

@@ -1,152 +0,0 @@
# python3 -m pytest
import pytest
from src.utils.nest_functions import nested_set, add_keys_nested_dict, nested_get
# import asyncio
# Dictionary that is modified / queried as part of tests
input_dict = {
1: {
"name": "Breaking Bad 1",
"data": {"episodes": 3, "year": 1991, "actors": ["Peter", "Paul", "Ppacey"]},
},
2: {
"name": "Breaking Bad 2",
"data": {"episodes": 6, "year": 1992, "actors": ["Weter", "Waul", "Wpacey"]},
},
3: {
"name": "Breaking Bad 3",
"data": {"episodes": 9, "year": 1993, "actors": ["Zeter", "Zaul", "Zpacey"]},
},
}
# @pytest.mark.asyncio
# async def test_nested_set():
def test_nested_set():
expected_output = {
1: {
"name": "Breaking Bad 1",
"data": {
"episodes": 3,
"year": 1991,
"actors": ["Peter", "Paul", "Ppacey"],
},
},
2: {
"name": "Breaking Bad 2",
"data": {
"episodes": 6,
"year": 1994,
"actors": ["Weter", "Waul", "Wpacey"],
},
},
3: {
"name": "Breaking Bad 3",
"data": {
"episodes": 9,
"year": 1993,
"actors": ["Zeter", "Zaul", "Zpacey"],
},
},
}
output = input_dict
# await nested_set(output, [2, 'data' ,'year'], 1994)
nested_set(output, [2, "data", "year"], 1994)
assert expected_output == output
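`nested_set` itself is not shown in this diff. Based on the behavior these tests exercise, a minimal sketch might look like the following (the conditional list-matching variant is inferred from the condition tests; the project's real helper may differ):

```python
def nested_set_sketch(data, keys, value, conditions=None):
    """Walk `keys` into `data` and set the final key to `value`.

    If `conditions` is given, the node reached before the final key is
    expected to be a list of dicts, and only entries matching every
    condition key/value pair are updated.
    """
    node = data
    for key in keys[:-1]:
        node = node[key]
    last = keys[-1]
    if conditions is None:
        node[last] = value
    else:
        for entry in node:
            if all(entry.get(k) == v for k, v in conditions.items()):
                entry[last] = value
```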
def test_nested_set_conditions():
    input_data = {
        1: [
            {"year": 2001, "rating": "high"},
            {"year": 2002, "rating": "high"},
            {"year": 2003, "rating": "high"},
        ],
        2: [
            {"year": 2001, "rating": "high"},
            {"year": 2002, "rating": "high"},
            {"year": 2003, "rating": "high"},
        ],
    }
    expected_output = {
        1: [
            {"year": 2001, "rating": "high"},
            {"year": 2002, "rating": "high"},
            {"year": 2003, "rating": "high"},
        ],
        2: [
            {"year": 2001, "rating": "high"},
            {"year": 2002, "rating": "high"},
            {"year": 2003, "rating": "LOW"},
        ],
    }
    output = input_data
    nested_set(output, [2, "rating"], "LOW", {"year": 2003})
    assert expected_output == output
def test_nested_set_conditions_multiple():
    input_data = {
        1: [
            {"rating": "high", "color": 1, "stack": 1},
            {"rating": "high", "color": 2, "stack": 2},
            {"rating": "high", "color": 2, "stack": 1},
        ]
    }
    expected_output = {
        1: [
            {"rating": "high", "color": 1, "stack": 1},
            {"rating": "high", "color": 2, "stack": 2},
            {"rating": "LOW", "color": 2, "stack": 1},
        ]
    }
    output = input_data
    nested_set(output, [1, "rating"], "LOW", {"color": 2, "stack": 1})
    assert expected_output == output
def test_add_keys_nested_dict():
expected_output = {
1: {
"name": "Breaking Bad 1",
"data": {
"episodes": 3,
"year": 1991,
"actors": ["Peter", "Paul", "Ppacey"],
},
},
2: {
"name": "Breaking Bad 2",
"data": {
"episodes": 6,
"year": 1994,
"actors": ["Weter", "Waul", "Wpacey"],
"spaceship": True,
},
},
3: {
"name": "Breaking Bad 3",
"data": {
"episodes": 9,
"year": 1993,
"actors": ["Zeter", "Zaul", "Zpacey"],
},
},
}
output = input_dict
add_keys_nested_dict(output, [2, "data", "spaceship"], True)
assert expected_output == output
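The test above expects `add_keys_nested_dict` to create the missing `"spaceship"` key while leaving existing values untouched. A sketch of a helper with that behavior (an assumption based on the test, not the project's actual code):

```python
def add_keys_nested_dict_sketch(data, keys, value):
    """Create any missing intermediate dicts along `keys`, then set the
    final key to `value`. Existing values along the path are preserved.
    """
    node = data
    for key in keys[:-1]:
        # setdefault only creates a dict when the key is absent
        node = node.setdefault(key, {})
    node[keys[-1]] = value
```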
def test_nested_get():
    input_data = {
        1: [
            {"name": "A", "color": 1, "stack": 1},
            {"name": "B", "color": 2, "stack": 2},
            {"name": "C", "color": 2, "stack": 1},
        ]
    }
    expected_output = ["C"]
    output = nested_get(input_data[1], "name", {"color": 2, "stack": 1})
    assert expected_output == output
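Mirroring the list result the test expects, `nested_get` could be sketched as a filter-then-project over a list of dicts (again an inference from the test, not the project's implementation):

```python
def nested_get_sketch(entries, key, conditions):
    """Return the `key` value of every dict in `entries` that matches
    all condition key/value pairs."""
    return [
        entry[key]
        for entry in entries
        if all(entry.get(k) == v for k, v in conditions.items())
    ]
```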

View File

@@ -1,11 +0,0 @@
{
"id": 1,
"downloadId": "A",
"title": "Sonarr Title 1",
"removal_messages": [
">>>>> Tracked Download State: importBlocked",
">>>>> Status Messages (matching specified patterns):",
">>>>> - Episode XYZ was not found in the grabbed release: Sonarr Title 2.mkv",
">>>>> - And yet another message"
]
}

View File

@@ -1,64 +0,0 @@
import os
os.environ["IS_IN_PYTEST"] = "true"
import logging
import json
import pytest
from typing import Dict, Set, Any
from src.utils.shared import remove_download
from src.utils.trackers import Deleted_Downloads
# Utility function to load mock data
def load_mock_data(file_name):
    with open(file_name, "r", encoding="utf-8") as file:
        return json.load(file)


logger = logging.getLogger(__name__)


# Mocked rest_delete; accepts whatever arguments the real call passes
async def mock_rest_delete(*args, **kwargs) -> None:
    logger.debug("Mock rest_delete called (args=%s, kwargs=%s)", args, kwargs)
async def run_test(
settingsDict: Dict[str, Any],
expected_removal_messages: Set[str],
failType: str,
removeFromClient: bool,
mock_data_file: str,
monkeypatch: pytest.MonkeyPatch,
caplog: pytest.LogCaptureFixture,
) -> None:
# Load mock data
affectedItem = load_mock_data(mock_data_file)
# Mock the `rest_delete` function
monkeypatch.setattr("src.utils.shared.rest_delete", mock_rest_delete)
    # Call remove_download inside the captured-log context and assert it does not raise
    with caplog.at_level(logging.INFO):
        try:
deleted_downloads = Deleted_Downloads([])
await remove_download(
settingsDict=settingsDict,
BASE_URL="",
API_KEY="",
affectedItem=affectedItem,
failType=failType,
addToBlocklist=True,
deleted_downloads=deleted_downloads,
removeFromClient=removeFromClient,
)
except Exception as e:
pytest.fail(f"remove_download raised an exception: {e}")
# Assertions:
# Check that expected log messages are in the captured log
log_messages = {
record.message for record in caplog.records if record.levelname == "INFO"
}
assert expected_removal_messages == log_messages
# Check that the affectedItem's downloadId was added to deleted_downloads
assert affectedItem["downloadId"] in deleted_downloads.dict
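`run_test` constructs `Deleted_Downloads([])` and later checks membership on its `.dict` attribute. A minimal stand-in with that interface (the attribute name is taken from the assertion above; everything else is an assumption, not the project's actual tracker class):

```python
class DeletedDownloadsSketch:
    """List-backed record of download IDs, mirroring the interface
    run_test relies on: constructed from an iterable, queried via
    membership on the `.dict` attribute."""

    def __init__(self, ids):
        # Despite the name, `.dict` holds a list of download IDs
        self.dict = list(ids)

    def add(self, download_id):
        self.dict.append(download_id)
```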

View File

@@ -1,50 +0,0 @@
import pytest
from remove_download_utils import run_test
# Parameters identical across all tests
mock_data_file = "tests/utils/remove_download/mock_data/mock_data_1.json"
failType = "failed import"
@pytest.mark.asyncio
async def test_removal_with_removal_messages(monkeypatch, caplog):
settingsDict = {"TEST_RUN": True}
removeFromClient = True
expected_removal_messages = {
">>> Removing failed import download: Sonarr Title 1",
">>>>> Tracked Download State: importBlocked",
">>>>> Status Messages (matching specified patterns):",
">>>>> - Episode XYZ was not found in the grabbed release: Sonarr Title 2.mkv",
">>>>> - And yet another message",
}
await run_test(
settingsDict=settingsDict,
expected_removal_messages=expected_removal_messages,
failType=failType,
removeFromClient=removeFromClient,
mock_data_file=mock_data_file,
monkeypatch=monkeypatch,
caplog=caplog,
)
@pytest.mark.asyncio
async def test_removal_without_removing_from_client(monkeypatch, caplog):
settingsDict = {"TEST_RUN": True}
removeFromClient = False
expected_removal_messages = {
">>> Removing failed import download (without removing from torrent client): Sonarr Title 1",
">>>>> Tracked Download State: importBlocked",
">>>>> Status Messages (matching specified patterns):",
">>>>> - Episode XYZ was not found in the grabbed release: Sonarr Title 2.mkv",
">>>>> - And yet another message",
}
await run_test(
settingsDict=settingsDict,
expected_removal_messages=expected_removal_messages,
failType=failType,
removeFromClient=removeFromClient,
mock_data_file=mock_data_file,
monkeypatch=monkeypatch,
caplog=caplog,
)